id | title | abstract | classification_labels | numerical_classification_labels |
---|---|---|---|---|
http://arxiv.org/abs/2010.00190v1
|
A Compare Aggregate Transformer for Understanding Document-grounded Dialogue
|
Unstructured documents serving as external knowledge for dialogues help to generate more informative responses. Previous research focused on knowledge selection (KS) from the document given the dialogue. However, dialogue history that is unrelated to the current dialogue may introduce noise into the KS process. In this paper, we propose a Compare Aggregate Transformer (CAT) to jointly denoise the dialogue context and aggregate the document information for response generation. We design two different comparison mechanisms to reduce noise (before and during decoding). In addition, we propose two metrics, based on word overlap, for evaluating document utilization efficiency. Experimental results on the CMUDoG dataset show that the proposed CAT model outperforms the state-of-the-art approach and strong baselines.
|
[
"Language Models",
"Natural Language Interfaces",
"Semantic Text Processing",
"Dialogue Systems & Conversational Agents"
] |
[
52,
11,
72,
38
] |
SCOPUS_ID:85063643300
|
A Compare-Aggregate Model with Embedding Selector for Answer Selection
|
Answer selection is a challenging task in natural language processing that requires both natural language understanding and world knowledge. At present, most recent methods draw on insights from attention mechanisms to learn the complex semantic relations between questions and answers. Previous remarkable approaches mainly apply the general Compare-Aggregate framework. In this paper, we propose a novel Compare-Aggregate framework with an embedding selector to solve the answer selection task. Unlike previous Compare-Aggregate methods, which use only one type of attention mechanism and make little use of word vectors at different levels, we employ two types of attention mechanism in one model and add a selector layer to choose the best input for the aggregation layer. We evaluate the model on two answer selection tasks: WikiQA and TrecQA. On the two datasets, our approach outperforms several strong baselines and achieves state-of-the-art performance.
|
[
"Natural Language Interfaces",
"Semantic Text Processing",
"Question Answering",
"Representation Learning"
] |
[
11,
72,
27,
12
] |
SCOPUS_ID:85096623923
|
A Compare-Aggregate Model with External Knowledge for Query-Focused Summarization
|
Query-focused extractive summarization aims to create a summary by selecting sentences from the original document according to query relevance and redundancy. With recent advances of neural network models in natural language processing, attention mechanisms are widely used to address the text summarization task. However, existing methods are usually based on coarse-grained sentence-level attention, which is likely to miss the intent of the query and cause relatedness misalignment. To address this problem, we introduce a fine-grained, interactive word-by-word attention to the query-focused extractive summarization system, capturing the real intent of the query. We utilize a Compare-Aggregate model to implement the idea and simulate the interactively attentive reading and thinking of human behavior. We also leverage external conceptual knowledge to enrich the model and fill the expression gap between query and document. To evaluate our method, we conduct experiments on the DUC 2005–2007 query-focused summarization benchmark datasets. Experimental results demonstrate that our proposed approach achieves better performance than the state-of-the-art.
|
[
"Semantic Text Processing",
"Summarization",
"Knowledge Representation",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
72,
30,
18,
47,
3
] |
http://arxiv.org/abs/1905.12897v2
|
A Compare-Aggregate Model with Latent Clustering for Answer Selection
|
In this paper, we propose a novel method for the sentence-level answer-selection task, a fundamental problem in natural language processing. First, we explore the effect of additional information by adopting a pretrained language model to compute the vector representation of the input text and by applying transfer learning from a large-scale corpus. Second, we enhance the compare-aggregate model by proposing a novel latent clustering method to compute additional information within the target corpus and by changing the objective function from listwise to pointwise. To evaluate the performance of the proposed approaches, experiments are performed on the WikiQA and TREC-QA datasets. The empirical results demonstrate the superiority of our proposed approach, which achieves state-of-the-art performance on both datasets.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
https://aclanthology.org//2011.mtsummit-papers.51/
|
A Comparison Study of Parsers for Patent Machine Translation
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:85134875697
|
A Comparison Study of Pre-trained Language Models for Chinese Legal Document Classification
|
Legal artificial intelligence (LegalAI), which aims to benefit the legal domain using artificial intelligence technologies, is currently a hot topic. As the basis for various LegalAI tasks such as judgment prediction and similar case matching, the classification of legal documents is an issue that has to be addressed. The majority of current approaches focus on the legal systems of native English-speaking countries. However, both the Chinese language and legal system differ significantly from those of English. Given the success of pre-trained Language Models (PLMs) in NLP and their outperformance of feature-engineering-based machine learning models as well as traditional deep neural network models such as CNNs and RNNs, their effectiveness in specific domains, especially the legal domain, needs to be further investigated. Moreover, few studies have compared these PLMs on specific legal tasks. Therefore, in this paper we train several strong PLMs, which differ in their pre-training corpora, on three datasets of Chinese legal documents. Experimental results show that the model pre-trained on the legal corpus demonstrates high effectiveness on all datasets.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
SCOPUS_ID:85078312157
|
A Comparison Study on Legal Document Classification Using Deep Neural Networks
|
Despite the rapid development of artificial intelligence technology in legal services around the world, little research work is being performed in the area of legal document classification in Korean language. In this paper, we propose and compare three different legal document classification approaches based on two deep neural network models, i.e., Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN), and two word embedding schemes. Based on nearly 60,000 precedent case data, we obtained the highest classification accuracy (up to 86 percent) with the RNN model with Word2Vec embedding.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
https://aclanthology.org//W09-3948/
|
A Comparison between Dialog Corpora Acquired with Real and Simulated Users
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
SCOPUS_ID:85112181240
|
A Comparison between Machine Learning Researches that use Arabic Text: A Case Study of Social Media Datasets
|
As the world moves toward exploiting huge volumes of data, researchers have begun to consider how to classify data that come in many shapes, speeds, and sizes; it is especially important to study the data found on social media and to analyse and benefit from them through classification. We reviewed more than 20 papers to compare how researchers use different algorithms to classify the huge volumes of data that come through social networking sites; the most common language of the classified text was Arabic. The reviewed studies conclude that deep learning and convolutional neural networks speed up the classification process.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
http://arxiv.org/abs/cs/0009022v1
|
A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation
|
This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNoW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.
|
[
"Semantic Text Processing",
"Word Sense Disambiguation"
] |
[
72,
65
] |
SCOPUS_ID:84939539666
|
A Comparison between multi-layer perceptrons and convolutional neural networks for text image super-resolution
|
We compare the performances of several Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (ConvNets) for single text image Super-Resolution. We propose an example-based framework for both MLPs and ConvNets, in which a non-linear mapping between pairs of patches and high-frequency pixel values is learned. We then demonstrate that, for equivalent complexity, ConvNets are better than MLPs at predicting missing details in upsampled text images. To evaluate the performances, we make use of a recent database (ULR-textSISR-2013a) along with different quality measures. We show that the proposed methods outperform sparse coding-based methods for this database.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
http://arxiv.org/abs/2205.01600v1
|
A Comparison of Approaches for Imbalanced Classification Problems in the Context of Retrieving Relevant Documents for an Analysis
|
One of the first steps in many text-based social science studies is to retrieve documents that are relevant for the analysis from large corpora of otherwise irrelevant documents. The conventional approach in social science to address this retrieval task is to apply a set of keywords and to consider those documents to be relevant that contain at least one of the keywords. But the application of incomplete keyword lists risks drawing biased inferences. More complex and costly methods such as query expansion techniques, topic model-based classification rules, and active as well as passive supervised learning could have the potential to more accurately separate relevant from irrelevant documents and thereby reduce the potential size of this bias. Yet, whether applying these more expensive approaches increases retrieval performance compared to keyword lists at all, and if so, by how much, is unclear, as a comparison of these approaches is lacking. This study closes this gap by comparing these methods across three retrieval tasks associated with a data set of German tweets (Linder, 2017), the Social Bias Inference Corpus (SBIC) (Sap et al., 2020), and the Reuters-21578 corpus (Lewis, 1997). Results show that query expansion techniques and topic model-based classification rules in most studied settings tend to decrease rather than increase retrieval performance. Active supervised learning, however, if applied to a not-too-small set of labeled training instances (e.g. 1,000 documents), reaches a substantially higher retrieval performance than keyword lists.
|
[
"Topic Modeling",
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
9,
24,
36,
3
] |
http://arxiv.org/abs/2101.11040v1
|
A Comparison of Approaches to Document-level Machine Translation
|
Document-level machine translation conditions on surrounding sentences to produce coherent translations. There has been much recent work in this area with the introduction of custom model architectures and decoding algorithms. This paper presents a systematic comparison of selected approaches from the literature on two benchmarks for which document-level phenomena evaluation suites exist. We find that a simple method based purely on back-translating monolingual document-level data performs as well as much more elaborate alternatives, both in terms of document-level metrics as well as human evaluation.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
http://arxiv.org/abs/1912.10169v1
|
A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings
|
The lack of annotated data in many languages is a well-known challenge within the field of multilingual natural language processing (NLP). Therefore, many recent studies focus on zero-shot transfer learning and joint training across languages to overcome data scarcity for low-resource languages. In this work we (i) perform a comprehensive comparison of state-of-the-art multilingual word and sentence encoders on the tasks of named entity recognition (NER) and part-of-speech (POS) tagging; and (ii) propose a new method for creating multilingual contextualized word embeddings, compare it to multiple baselines and show that it performs at or above state-of-the-art level in zero-shot transfer settings. Finally, we show that our method allows for better knowledge sharing across languages in a joint training setting.
|
[
"Language Models",
"Low-Resource NLP",
"Semantic Text Processing",
"Representation Learning",
"Responsible & Trustworthy NLP",
"Multilinguality"
] |
[
52,
80,
72,
12,
4,
0
] |
http://arxiv.org/abs/2211.02976v1
|
A Comparison of Automatic Labelling Approaches for Sentiment Analysis
|
Labelling a large quantity of social media data for the task of supervised machine learning is not only time-consuming but also difficult and expensive. On the other hand, the accuracy of supervised machine learning models is strongly related to the quality of the labelled data on which they train, and automatic sentiment labelling techniques could reduce the time and cost of human labelling. We have compared three automatic sentiment labelling techniques, TextBlob, VADER, and AFINN, to assign sentiments to tweets without any human assistance. We compare three scenarios: the first uses training and testing datasets with existing ground truth labels; the second uses automatic labels as training and testing datasets; and the third uses the three automatic labelling techniques to label the training dataset and the ground truth labels for testing. The experiments were evaluated on two Twitter datasets: SemEval-2013 (DS-1) and SemEval-2016 (DS-2). Results show that the AFINN labelling technique obtains the highest accuracy of 80.17% (DS-1) and 80.05% (DS-2) using a BiLSTM deep learning model. These findings imply that automatic text labelling could provide significant benefits, and suggest a feasible alternative to the time and cost of human labelling efforts.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85125172678
|
A Comparison of Concept Embeddings for German Clinical Corpora
|
Clinical concept embeddings enable unsupervised learning of relationships among medical concepts. A range of benchmarks quantifies the degree to which learned representations capture medical semantics. However, training and evaluation of embeddings require a large amount of data. In addition, embeddings' benchmark scores vary across languages because they depend on the size of the available corpora. Multi-modal data increases the corpus size, but data protection regulations limit access to clinical multi-modal data. We present an extendable pipeline for training clinical concept embeddings on various text corpora and evaluating the quality of trained embeddings on selected benchmark tasks. Our work provides different ways to identify clinical concepts in textual corpora. We train embeddings on selected German clinical text corpora and evaluate them on various benchmark scores. Our work can be extended to train embeddings in other languages for which a large multi-modal dataset is not available.
|
[
"Representation Learning",
"Semantic Text Processing",
"Multimodality"
] |
[
12,
72,
74
] |
https://aclanthology.org//2022.amta-upg.22/
|
A Comparison of Data Filtering Methods for Neural Machine Translation
|
With the increasing availability of large-scale parallel corpora derived from web crawling and bilingual text mining, data filtering is becoming an increasingly important step in neural machine translation (NMT) pipelines. This paper applies several available tools to the task of data filtration, and compares their performance in filtering out different types of noisy data. We also study the effect of filtration with each tool on model performance in the downstream task of NMT by creating a dataset containing a combination of clean and noisy data, filtering the data with each tool, and training NMT engines using the resulting filtered corpora. We evaluate the performance of each engine with a combination of direct assessment (DA) and automated metrics. Our best results are obtained by training for a short time on all available data then filtering the corpus with cross-entropy filtering and training until convergence.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85042363902
|
A Comparison of Dictionary Building Methods for Sentiment Analysis in Software Engineering Text
|
Sentiment Analysis (SA) in Software Engineering (SE) texts suffers from low accuracies primarily due to the lack of an effective dictionary. The use of a domain-specific dictionary can improve the accuracy of SA in a particular domain. Building a domain dictionary is not a trivial task. The performance of lexical SA also varies based on the method applied to develop the dictionary. This paper includes a quantitative comparison of four dictionaries representing distinct dictionary building methods to identify which methods have higher/lower potential to perform well in constructing a domain dictionary for SA in SE texts.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85129396982
|
A Comparison of Different NMT Approaches to Low-Resource Dutch-Albanian Machine Translation
|
Low-resource languages can be understood as languages that are more scarce, less studied, less privileged, less commonly taught and for which there are fewer resources available (Singh, 2008; Cieri et al., 2016; Magueresse et al., 2020). Natural Language Processing (NLP) research and technology mainly focus on those languages for which large data sets are available. To illustrate differences in data availability: there are 6 million Wikipedia articles available for English, 2 million for Dutch, and merely 82 thousand for Albanian. The scarce-data issue becomes increasingly apparent when large parallel data sets are required for applications such as Neural Machine Translation (NMT). In this work, we investigate to what extent translation between Albanian (SQ) and Dutch (NL) is possible by comparing a one-to-one (SQ↔NL) model, a low-resource pivot-based approach (with English (EN) as pivot) and a zero-shot translation (ZST) (Johnson et al., 2016; Mattoni et al., 2017) system. Our experiments show that the EN-pivot model outperforms both the direct one-to-one and the ZST model. Since often only small amounts of parallel data are available for low-resource languages or settings, experiments were conducted using small sets of parallel NL↔SQ data. The ZST appeared to be the worst performing model. Even when the available parallel data (NL↔SQ) was added, i.e. in a few-shot setting (FST), it remained the worst performing system according to the automatic (BLEU and TER) and human evaluation.
|
[
"Low-Resource NLP",
"Machine Translation",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multilinguality"
] |
[
80,
51,
47,
4,
0
] |
SCOPUS_ID:85078808005
|
A Comparison of Distractor Selection Among Proficiency Levels in Reading Tests: A Focus on Summarization Processes in Japanese EFL Learners
|
This study aimed to compare selection patterns of distractors (incorrect options) according to test taker proficiency regarding Japanese students’ summarization skills of an English paragraph. Participants included 414 undergraduate students, and the test comprised three summarization process types—deletion, generalization, and integration. Within the questions, which represented summary candidates for a final version of a test, distractors were created reflecting typical student errors related to each summarization process. Six distractor types were tested. Results showed that distractors that were missing important information for the summary functioned well for determining low-, middle-, and high-proficiency students regarding deletion items. For generalization items, both distractor types, those containing examples and those with inappropriate superordinates, were attractive for low- and middle-proficiency students. Regarding integration items, it was found that distractors missing the author’s viewpoint in the summary were more attractive only for less-proficient students. Several tips to guide future item writing are provided.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85099716572
|
A Comparison of Genetic Swarm Intelligence-Based Feature Selection Algorithms for Author Identification
|
Researchers are moving beyond stylometric features to improve author identification systems. They are exploring non-traditional and hybrid feature sets that include areas like sentiment analysis and topic models. This feature set exploration leads to the concern of determining which features are best suited for which systems and datasets. In this paper, we compare Genetic Search and a number of Swarm Intelligence (SI) methods for feature selection. In addition to Genetic Search methods, we compare SI methods including Artificial Bee Colony, Ant System optimization, Glowworm Swarm optimization and Particle Swarm optimization for feature selection.
|
[
"Topic Modeling",
"Information Extraction & Text Mining",
"Sentiment Analysis"
] |
[
9,
3,
78
] |
https://aclanthology.org//W07-2325/
|
A Comparison of Hedged and Non-hedged NLG Texts
|
[
"Text Generation"
] |
[
47
] |
|
SCOPUS_ID:0346507059
|
A Comparison of Human and Statistical Language Model Performance using Missing-Word Tests
|
This paper presents results from a series of missing-word tests, in which a small fragment of text is presented to human subjects who are then asked to suggest a ranked list of completions. The same experiment is repeated with the WA model, an n-gram statistical language model. From the completion data two measures are obtained: (i) verbatim predictability, which indicates the extent to which subjects nominated exactly the missing word, and (ii) grammatical class predictability, which indicates the extent to which subjects nominated words of the same grammatical class as the missing word. The differences in language model performance and human performance are encouragingly small, especially for verbatim predictability. This is especially significant given that the WA model was able, on average, to use at most half the available context. The results highlight human superiority in handling missing content words. Most importantly, the experiments illustrate the detailed information one can obtain about the performance of a language model through using missing-word tests.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85122877707
|
A Comparison of Hybrid and End-to-End ASR Systems for the IberSpeech-RTVE 2020 Speech-to-Text Transcription Challenge
|
This paper describes a comparison between hybrid and end-to-end Automatic Speech Recognition (ASR) systems, which were evaluated on the IberSpeech-RTVE 2020 Speech-to-Text Transcription Challenge. Deep Neural Networks (DNNs) are currently the most promising technology for ASR. In the last few years, traditional hybrid models have been evaluated and compared to other end-to-end ASR systems in terms of accuracy and efficiency. We contribute two different approaches: a hybrid ASR system based on a DNN-HMM and two state-of-the-art end-to-end ASR systems based on Lattice-Free Maximum Mutual Information (LF-MMI). To address the high difficulty of speech-to-text transcription of recordings with different speaking styles and acoustic conditions, ranging from TV studios to live recordings, data augmentation and Domain Adversarial Training (DAT) techniques were studied. Multi-condition data augmentation applied to our hybrid DNN-HMM demonstrated WER improvements in noisy scenarios (about 10% relative). In contrast, the results obtained using an end-to-end PyChain-based ASR system were far from our expectations. Nevertheless, we found that when including DAT techniques, a relative WER improvement of 2.87% was obtained compared to the PyChain-based system.
|
[
"Low-Resource NLP",
"Speech & Audio in NLP",
"Robustness in NLP",
"Text Generation",
"Responsible & Trustworthy NLP",
"Speech Recognition",
"Multimodality"
] |
[
80,
70,
58,
47,
4,
10,
74
] |
SCOPUS_ID:85072853000
|
A Comparison of Hybrid and End-to-End Models for Syllable Recognition
|
This paper presents a comparison of a traditional hybrid speech recognition system (kaldi using WFST and TDNN with lattice-free MMI) and a lexicon-free end-to-end model (TensorFlow implementation of a multi-layer LSTM with CTC training) for German syllable recognition on the Verbmobil corpus. The results show that explicitly modeling prior knowledge is still valuable in building recognition systems. With a strong language model (LM) based on syllables, the structured approach significantly outperforms the end-to-end model. The best word error rate (WER) regarding syllables was achieved using kaldi with a 4-gram LM modeling all syllables observed in the training set: 10.0% WER w.r.t. the syllables, compared to the end-to-end approach, whose best WER was 27.53%. The work presented here has implications for building future recognition systems that operate independently of a large vocabulary, as typically required in tasks such as recognition of syllabic or agglutinative languages, out-of-vocabulary techniques, keyword search indexing and medical speech processing.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
SCOPUS_ID:0037878134
|
A Comparison of ID3 and Backpropagation for English Text-To-Speech Mapping
|
The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently outperforms ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be closely matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially in this domain.
|
[
"Speech & Audio in NLP",
"Multimodality"
] |
[
70,
74
] |
http://arxiv.org/abs/2009.05451v1
|
A Comparison of LSTM and BERT for Small Corpus
|
Recent advancements in the NLP field showed that transfer learning helps with achieving state-of-the-art results for new tasks by tuning pre-trained models instead of starting from scratch. Transformers have made a significant improvement in creating new state-of-the-art results for many NLP tasks including but not limited to text classification, text generation, and sequence labeling. Most of these success stories were based on large datasets. In this paper we focus on a real-life scenario that scientists in academia and industry face frequently: given a small dataset, can we use a large pre-trained model like BERT and get better results than simple models? To answer this question, we use a small dataset for intent classification collected for building chatbots and compare the performance of a simple bidirectional LSTM model with a pre-trained BERT model. Our experimental results show that bidirectional LSTM models can achieve significantly higher results than a BERT model for a small dataset and these simple models get trained in much less time than tuning the pre-trained counterparts. We conclude that the performance of a model is dependent on the task and the data, and therefore before making a model choice, these factors should be taken into consideration instead of directly choosing the most popular model.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
http://arxiv.org/abs/2005.10113v2
|
A Comparison of Label-Synchronous and Frame-Synchronous End-to-End Models for Speech Recognition
|
End-to-end models are gaining wider attention in the field of automatic speech recognition (ASR). One of their advantages is simplicity of construction: they directly map the speech frame sequence to the text label sequence with neural networks. According to the driving end in the recognition process, end-to-end ASR models can be categorized into two types, label-synchronous and frame-synchronous, each of which has unique model behaviour and characteristics. In this work, we make a detailed comparison of a representative label-synchronous model (transformer) and a soft frame-synchronous model (continuous integrate-and-fire (CIF) based model). The results on three public datasets and a large-scale dataset with 12,000 hours of training data show that the two types of models have respective advantages that are consistent with their synchronous modes.
|
[
"Text Generation",
"Speech & Audio in NLP",
"Speech Recognition",
"Multimodality"
] |
[
47,
70,
10,
74
] |
SCOPUS_ID:85053770279
|
A Comparison of Language Model Training Techniques in a Continuous Speech Recognition System for Serbian
|
In this paper, a number of language model training techniques will be examined and utilized in a large vocabulary continuous speech recognition system for the Serbian language (more than 120000 words), namely Mikolov and Yandex RNNLM, TensorFlow based GPU approaches and CUED-RNNLM approach. The baseline acoustic model is a chain sub-sampled time delayed neural network, trained using cross-entropy training and a sequence-level objective function on a database of about 200 h of speech. The baseline language model is a 3-gram model trained on the training part of the database transcriptions and the Serbian journalistic corpus (about 600000 utterances), using the SRILM toolkit and the Kneser-Ney smoothing method, with a pruning value of 10⁻⁷ (previous best). The results are analyzed in terms of word and character error rates and the perplexity of a given language model on training and validation sets. Relative improvement of 22.4% (best word error rate of 7.25%) is obtained in comparison to the baseline language model.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
https://aclanthology.org//W11-2005/
|
A Comparison of Latent Variable Models For Conversation Analysis
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
SCOPUS_ID:85130711140
|
A Comparison of Lexicon-based and Transformer-based Sentiment Analysis on Code-mixed of Low-Resource Languages
|
Sentiment analysis of code-mixed texts has been gaining wide attention over the past decade from researchers and practitioners in various communities, motivated, among other factors, by the increasing popularity of social media, which has resulted in a huge volume of code-mixed texts. Sentiment analysis is an interesting problem in Natural Language Processing with wide potential applications, among others, to understand public concerns or aspirations toward certain issues. This paper presents experimental results comparing the performance of lexicon-based and Sentence-BERT sentiment analysis models on code-mixed, low-resource texts as input. In this study, code-mixed texts in Bahasa Indonesia and the Javanese language are used as a sample of low-resource code-mixed languages. The input dataset is first translated to English using Google Machine Translation. SentiWordNet and VADER are the two English lexicon label datasets used in this study as the basis for predicting sentiment categories with the lexicon-based sentiment analysis method. In addition, a pretrained Sentence-BERT model is used as a classification model on the input text translated to English. In this study, the dataset is categorized into positive and negative categories. Model performance was measured using accuracy, precision, recall, and F1 score. The experiments found that the combined Google machine translator and Sentence-BERT model achieved 83% average accuracy, 90% average precision, 76% average recall, and 83% average F1 score.
|
[
"Multilinguality",
"Language Models",
"Low-Resource NLP",
"Machine Translation",
"Semantic Text Processing",
"Text Generation",
"Sentiment Analysis",
"Responsible & Trustworthy NLP"
] |
[
0,
52,
80,
51,
72,
47,
78,
4
] |
SCOPUS_ID:85136143185
|
A Comparison of Machine Learning Classification Algorithms and Methods for English Author's Works and their Translations into Bulgarian
|
The aim of the publication is to compare the accuracy, precision, sensitivity and F-measure of machine learning algorithms trained to classify the authors of works by English authors and the authors of the same works translated into Bulgarian. The algorithms examined are the Multinomial Naive Bayes classifier, Bernoulli Naive Bayes classifier, Support Vector Machines, Random Forest, AdaBoost, Decision Tree and K-Nearest Neighbors. The research results show that in the classification of English authors with an equal number of works in English, Support Vector Machines and the Multinomial Naive Bayes classifier achieve the highest values of the studied indicators. For Bulgarian texts, the best results depend on the specific authors.
|
[
"Machine Translation",
"Information Extraction & Text Mining",
"Text Classification",
"Text Generation",
"Information Retrieval",
"Multilinguality"
] |
[
51,
3,
36,
47,
24,
0
] |
SCOPUS_ID:85099597445
|
A Comparison of Machine Learning and Deep Learning Methods with Rule Based Features for Mixed Emotion Analysis
|
Multi-class classification of sentiments from text data remains a challenging task, as detecting the sentiments hidden behind sentences is complicated by the probable existence of multiple meanings for some of the texts in the dataset. To overcome this, the proposed rule-based modified Convolutional Neural Network - Global Vectors (RCNN-GloVe) and rule-based modified Support Vector Machine - Global Vectors (RSVM-GloVe) models were developed for classifying complex twitter sentences at twelve different levels, focusing on mixed emotions by targeting abstract nouns and adjective emotion words. To execute this, three algorithms were developed: the optimized abstract noun algorithm (OABNA) to identify abstract noun emotion words, the optimized complex sentences algorithm (OCSA) to extract all the complex sentences in a tweet precisely, and the adjective searching algorithm (ADJSA) to retrieve all the sentences with adjectives. The results of this study indicate that the proposed RCNN-GloVe method was able to classify mixed emotions accurately from the twitter dataset, with the highest accuracy of 92.02% on abstract nouns and 88.93% on adjectives. It is distinctly evident from the research that the proposed deep learning model (RCNN-GloVe) had an edge over the machine learning model (RSVM-GloVe).
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Sentiment Analysis",
"Emotion Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
78,
61,
24,
3
] |
https://aclanthology.org//W18-2108/
|
A Comparison of Machine Translation Paradigms for Use in Black-Box Fuzzy-Match Repair
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
http://arxiv.org/abs/1805.06239v2
|
A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese
|
The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or context-dependent phonemes (CD-phonemes) as their modeling units. However, it has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes by sequence-to-sequence attention-based model. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on HKUST datasets demonstrate that the lexicon free modeling units can outperform lexicon related modeling units in terms of character error rate (CER). Among five modeling units, character based model performs best and establishes a new state-of-the-art CER of 26.64% on HKUST datasets without a hand-designed lexicon and an extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% by the joint CTC-attention based encoder-decoder network.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
SCOPUS_ID:85142806421
|
A Comparison of Multi-Label Text Classification Models in Research Articles Labeled with Sustainable Development Goals
|
The classification of scientific articles aligned to Sustainable Development Goals is crucial for research institutions and universities when assessing their influence in these areas. Machine learning enables the implementation of massive text data classification tasks. The objective of this study is to apply Natural Language Processing techniques to articles from peer-reviewed journals to facilitate their classification according to the 17 Sustainable Development Goals of the 2030 Agenda. This article compares the performance of multi-label text classification models based on a proposed framework with datasets of different characteristics. The results show that the combination of Label Powerset (a transformation method) with Support Vector Machine (a classification algorithm) can achieve an accuracy of up to 87% for an imbalanced dataset, 83% for a dataset with the same number of instances per label, and even 91% for a multiclass dataset.
|
[
"Information Extraction & Text Mining",
"Green & Sustainable NLP",
"Text Classification",
"Information Retrieval",
"Responsible & Trustworthy NLP"
] |
[
3,
68,
36,
24,
4
] |
http://arxiv.org/abs/1308.0661v1
|
A Comparison of Named Entity Recognition Tools Applied to Biographical Texts
|
Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating Wikipedia articles. We then select publicly available, well-known and free-for-research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performance, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illinois and OpenCalais. However, a more detailed evaluation performed relative to entity types and article categories highlights the fact that their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85120490143
|
A Comparison of Natural Language Processing Methods for the Classification of Lumbar Spine Imaging Findings Related to Lower Back Pain
|
Rationale and Objectives: The use of natural language processing (NLP) in radiology provides an opportunity to assist clinicians with phenotyping patients. However, the performance and generalizability of NLP across healthcare systems is uncertain. We assessed the performance within and generalizability across four healthcare systems of different NLP representational methods, coupled with elastic-net logistic regression to classify lower back pain-related findings from lumbar spine imaging reports. Materials and Methods: We used a dataset of 871 X-ray and magnetic resonance imaging reports sampled from a prospective study across four healthcare systems between October 2013 and September 2016. We annotated each report for 26 findings potentially related to lower back pain. Our framework applied four different NLP methods to convert text into feature sets (representations). For each representation, our framework used an elastic-net logistic regression model for each finding (i.e., 26 binary or “one-vs.-rest” classification models). For performance evaluation, we split data into training (80%, 697/871) and testing (20%, 174/871). In the training set, we used cross validation to identify the optimal hyperparameter value and then retrained on the full training set. We then assessed performance based on area under the curve (AUC) for the test set. We repeated this process 25 times with each repeat using a different random train/test split of the data, so that we could estimate 95% confidence intervals, and assess significant difference in performance between representations. For generalizability evaluation, we trained models on data from three healthcare systems with cross validation and then tested on the fourth. We repeated this process for each system, then calculated mean and standard deviation (SD) of AUC across the systems. Results: For individual representations, n-grams had the best average performance across all 26 findings (AUC: 0.960). 
For generalizability, document embeddings had the most consistent average performance across systems (SD: 0.010). Out of these 26 findings, we considered eight as potentially clinically important (any stenosis, central stenosis, lateral stenosis, foraminal stenosis, disc extrusion, nerve root displacement compression, endplate edema, and listhesis grade 2) since they have a relatively greater association with a history of lower back pain compared to the remaining 18 classes. We found a similar pattern for these eight in which n-grams and document embeddings had the best average performance (AUC: 0.954) and generalizability (SD: 0.007), respectively. Conclusion: Based on performance assessment, we found that n-grams is the preferred method if classifier development and deployment occur at the same system. However, for deployment at multiple systems outside of the development system, or potentially if physician behavior changes within a system, one should consider document embeddings since embeddings appear to have the most consistent performance across systems.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
SCOPUS_ID:85113210652
|
A Comparison of Natural Language Processing and Machine Learning Methods for Phishing Email Detection
|
Phishing is the most-used malicious attempt in which attackers, commonly via emails, impersonate trusted persons or entities to obtain private information from a victim. Even though phishing email attacks have been a known cybercriminal strategy for decades, their usage has expanded over the last couple of years due to the COVID-19 pandemic, where attackers exploit people's consternation to lure victims. Therefore, further research is needed in the phishing email detection field. Recent phishing email detection solutions that extract representational text-based features from the email's body have proved to be an appropriate strategy to tackle these threats. This paper proposes a comparison approach for the combined usage of Natural Language Processing (TF-IDF, Word2Vec, and BERT) and Machine Learning (Random Forest, Decision Tree, Logistic Regression, Gradient Boosting Trees, and Naive Bayes) methods for phishing email detection. The evaluation was performed on two datasets, one balanced and one imbalanced, both of which were comprised of emails from the well-known Enron corpus and the most recent emails from the Nazario phishing corpus. The best combination in the balanced dataset proved to be Word2Vec with the Random Forest algorithm, while in the imbalanced dataset it was Word2Vec with the Logistic Regression algorithm.
|
[
"Language Models",
"Semantic Text Processing",
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
58,
4
] |
http://arxiv.org/abs/2012.02640v2
|
A Comparison of Natural Language Understanding Platforms for Chatbots in Software Engineering
|
Chatbots are envisioned to dramatically change the future of Software Engineering, allowing practitioners to chat and inquire about their software projects and interact with different services using natural language. At the heart of every chatbot is a Natural Language Understanding (NLU) component that enables the chatbot to understand natural language input. Recently, many NLU platforms were provided to serve as an off-the-shelf NLU component for chatbots, however, selecting the best NLU for Software Engineering chatbots remains an open challenge. Therefore, in this paper, we evaluate four of the most commonly used NLUs, namely IBM Watson, Google Dialogflow, Rasa, and Microsoft LUIS to shed light on which NLU should be used in Software Engineering based chatbots. Specifically, we examine the NLUs' performance in classifying intents, confidence scores stability, and extracting entities. To evaluate the NLUs, we use two datasets that reflect two common tasks performed by Software Engineering practitioners, 1) the task of chatting with the chatbot to ask questions about software repositories 2) the task of asking development questions on Q&A forums (e.g., Stack Overflow). According to our findings, IBM Watson is the best performing NLU when considering the three aspects (intents classification, confidence scores, and entity extraction). However, the results from each individual aspect show that, in intents classification, IBM Watson performs the best with an F1-measure > 84%, but in confidence scores, Rasa comes on top with a median confidence score higher than 0.91. Our results also show that all NLUs, except for Dialogflow, generally provide trustable confidence scores. For entity extraction, Microsoft LUIS and IBM Watson outperform other NLUs in the two SE tasks. Our results provide guidance to software engineering practitioners when deciding which NLU to use in their chatbots.
|
[
"Text Classification",
"Named Entity Recognition",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
34,
11,
38,
24,
3
] |
https://aclanthology.org//W17-3531/
|
A Comparison of Neural Models for Word Ordering
|
We compare several language models for the word-ordering task and propose a new bag-to-sequence neural model based on attention-based sequence-to-sequence models. We evaluate the model on a large German WMT data set where it significantly outperforms existing models. We also describe a novel search strategy for LM-based word ordering and report results on the English Penn Treebank. Our best model setup outperforms prior work both in terms of speed and quality.
|
[
"Text Generation"
] |
[
47
] |
http://arxiv.org/abs/1910.12674v1
|
A Comparison of Neural Network Training Methods for Text Classification
|
We study the impact of neural networks in text classification. Our focus is on training deep neural networks with proper weight initialization and greedy layer-wise pretraining. Results are compared with 1-layer neural networks and Support Vector Machines. We work with a dataset of labeled messages from the Twitter microblogging service and aim to predict weather conditions. A feature extraction procedure specific for the task is proposed, which applies dimensionality reduction using Latent Semantic Analysis. Our results show that neural networks outperform Support Vector Machines with Gaussian kernels, noticing performance gains from introducing additional hidden layers with nonlinearities. The impact of using Nesterov's Accelerated Gradient in backpropagation is also studied. We conclude that deep neural networks are a reasonable approach for text classification and propose further ideas to improve performance.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:0028923269
|
A Comparison of Phonological Skills in Children with Reading Comprehension Difficulties and Children with Decoding Difficulties
|
This paper examines phonological skills in children with two distinct forms of reading difficulty: comprehension problems and decoding problems. In the first study, a group of children with normal decoding skills but poor reading comprehension skills was studied. These children were found to have age-appropriate phonological skills. It is argued that normal phonological skills have enabled them to develop proficient decoding skills. A second study assessed the phonological skills of a group of children with decoding difficulties. These children showed marked deficits on tests of phonological skills. It appears that weak phonological skills underlie these children's decoding difficulties. Copyright © 1995, Wiley Blackwell. All rights reserved.
|
[
"Reasoning",
"Phonology",
"Syntactic Text Processing",
"Machine Reading Comprehension"
] |
[
8,
6,
15,
37
] |
SCOPUS_ID:85107663575
|
A Comparison of Pre-Trained Language Models for Multi-Class Text Classification in the Financial Domain
|
Neural networks for language modeling have been proven effective on several sub-tasks of natural language processing. Training deep language models, however, is time-consuming and computationally intensive. Pre-trained language models such as BERT are thus appealing since (1) they yielded state-of-the-art performance, and (2) they offload practitioners from the burden of preparing the adequate resources (time, hardware, and data) to train models. Nevertheless, because pre-trained models are generic, they may underperform on specific domains. In this study, we investigate the case of multi-class text classification, a task that is relatively less studied in the literature evaluating pre-trained language models. Our work is further placed under the industrial settings of the financial domain. We thus leverage generic benchmark datasets from the literature and two proprietary datasets from our partners in the financial technological industry. After highlighting a challenge for generic pre-trained models (BERT, DistilBERT, RoBERTa, XLNet, XLM) to classify a portion of the financial document dataset, we investigate the intuition that a specialized pre-trained model for financial documents, such as FinBERT, should be leveraged. Nevertheless, our experiments show that the FinBERT model, even with an adapted vocabulary, does not lead to improvements compared to the generic BERT models.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
SCOPUS_ID:85100355922
|
A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports
|
Joint image-text embedding extracted from medical images and associated contextual reports is the bedrock for most biomedical vision-and-language (V+L) tasks, including medical visual question answering, clinical image-text retrieval, and clinical report auto-generation. In this study, we adopt four pre-trained V+L models: LXMERT, VisualBERT, UNITER and PixelBERT to learn multimodal representations from MIMIC-CXR images and associated reports. External evaluation using the OpenI dataset shows that the joint embedding learned by pre-trained V+L models demonstrates a performance improvement of 1.4% in thoracic finding classification tasks compared to a pioneering CNN + RNN model. Ablation studies are conducted to further analyze the contribution of certain model components and validate the advantage of joint embedding over text-only embedding. Attention maps are also visualized to illustrate the attention mechanism of V+L models.
|
[
"Language Models",
"Visual Data in NLP",
"Semantic Text Processing",
"Representation Learning",
"Reasoning",
"Numerical Reasoning",
"Multimodality"
] |
[
52,
20,
72,
12,
8,
5,
74
] |
SCOPUS_ID:85089717758
|
A Comparison of Pre-trained Word Embeddings for Sentiment Analysis Using Deep Learning
|
Public opinion expressed on review and blogging sites and social networking platforms can be a source of critical information about the feelings and emotions of the masses toward subjects in the fields of commerce and governance. Natural Language Processing (NLP) and Artificial Intelligence can be used for sentiment analysis of this textual information. For text processing, NLP applications nowadays rely on pre-trained embeddings derived from large corpora such as news collections and web crawls. Many pre-trained word embeddings are available; however, no study was found that compares the accuracy achieved using these embeddings. In this paper, we worked with different kinds of word embeddings (pre-trained and untrained) and derived a comparison of accuracy for sentiment analysis applications using Deep Learning (DL) models. We found that the deep learning models perform better with pre-trained embeddings than with the Keras default (untrained) embedding.
|
[
"Representation Learning",
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
12,
52,
72,
78
] |
SCOPUS_ID:85107388851
|
A Comparison of Question Rewriting Methods for Conversational Passage Retrieval
|
Conversational passage retrieval relies on question rewriting to modify the original question so that it no longer depends on the conversation history. Several methods for question rewriting have recently been proposed, but they were compared under different retrieval pipelines. We bridge this gap by thoroughly evaluating those question rewriting methods on the TREC CAsT 2019 and 2020 datasets under the same retrieval pipeline. We analyze the effect of different types of question rewriting methods on retrieval performance and show that by combining question rewriting methods of different types we can achieve state-of-the-art performance on both datasets. (Resources can be found at https://github.com/svakulenk0/cast_evaluation.)
|
[
"Paraphrasing",
"Natural Language Interfaces",
"Text Generation",
"Dialogue Systems & Conversational Agents",
"Passage Retrieval",
"Information Retrieval"
] |
[
32,
11,
47,
38,
66,
24
] |
https://aclanthology.org//W00-0408/
|
A Comparison of Rankings Produced by Summarization Evaluation Measures
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
|
http://arxiv.org/abs/2211.02563v1
|
A Comparison of SVM against Pre-trained Language Models (PLMs) for Text Classification Tasks
|
The emergence of pre-trained language models (PLMs) has shown great success in many Natural Language Processing (NLP) tasks including text classification. Due to the minimal to no feature engineering required when using these models, PLMs are becoming the de facto choice for any NLP task. However, for domain-specific corpora (e.g., financial, legal, and industrial), fine-tuning a pre-trained model for a specific task has been shown to provide a performance improvement. In this paper, we compare the performance of four different PLMs on three public domain-free datasets and a real-world dataset containing domain-specific words, against a simple SVM linear classifier with TF-IDF vectorized text. The experimental results on the four datasets show that using PLMs, even fine-tuned, does not provide a significant gain over the linear SVM classifier. Hence, we recommend that for text classification tasks, traditional SVM along with careful feature engineering can provide cheaper and superior performance compared to PLMs.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
SCOPUS_ID:85103983514
|
A Comparison of Self-Supervised Speech Representations As Input Features for Unsupervised Acoustic Word Embeddings
|
Many speech processing tasks involve measuring the acoustic similarity between speech segments. Acoustic word embeddings (AWE) allow for efficient comparisons by mapping speech segments of arbitrary duration to fixed-dimensional vectors. For zero-resource speech processing, where unlabelled speech is the only available resource, some of the best AWE approaches rely on weak top-down constraints in the form of automatically discovered word-like segments. Rather than learning embeddings at the segment level, another line of zero-resource research has looked at representation learning at the short-time frame level. Recent approaches include self-supervised predictive coding and correspondence autoencoder (CAE) models. In this paper we consider whether these frame-level features are beneficial when used as inputs for training an unsupervised AWE model. We compare frame-level features from contrastive predictive coding (CPC), autoregressive predictive coding and a CAE to conventional MFCCs. These are used as inputs to a recurrent CAE-based AWE model. In a word discrimination task on English and Xitsonga data, all three representation learning approaches outperform MFCCs, with CPC consistently showing the biggest improvement. In cross-lingual experiments we find that CPC features trained on English can also be transferred to Xitsonga.
|
[
"Low-Resource NLP",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Representation Learning",
"Responsible & Trustworthy NLP",
"Multimodality"
] |
[
80,
72,
70,
12,
4,
74
] |
SCOPUS_ID:85078116986
|
A Comparison of Semantic Similarity Methods for Maximum Human Interpretability
|
The inclusion of semantic information in any similarity measure improves its efficiency and provides human-interpretable results for further analysis. A similarity calculation method that focuses only on features related to the text's words will give less accurate results. This paper presents three different methods that not only focus on the text's words but also incorporate semantic information of texts in their feature vectors and compute semantic similarities. These methods are based on corpus-based and knowledge-based approaches, namely: cosine similarity using tf-idf vectors, cosine similarity using word embeddings and soft cosine similarity using word embeddings. Among these three, cosine similarity using tf-idf vectors performed best in finding similarities between short news texts. The similar texts given by the method are easy to interpret and can be used directly in other information retrieval applications.
|
[
"Semantic Text Processing",
"Semantic Similarity",
"Representation Learning",
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP"
] |
[
72,
53,
12,
81,
4
] |
https://aclanthology.org//2021.mtsummit-research.15/
|
A Comparison of Sentence-Weighting Techniques for NMT
|
Sentence weighting is a simple and powerful domain adaptation technique. We carry out domain classification for computing sentence weights with 1) language model cross-entropy difference, 2) a convolutional neural network, and 3) a Recursive Neural Tensor Network. We compare these approaches with regard to domain classification accuracy and study the posterior probability distributions. Then we carry out NMT experiments in the scenario where we have no in-domain parallel corpora and only very limited in-domain monolingual corpora. Here, we use the domain classifier to reweight the sentences of our out-of-domain training corpus. This leads to improvements of up to 2.1 BLEU for German-to-English translation.
|
[
"Machine Translation",
"Information Extraction & Text Mining",
"Text Classification",
"Text Generation",
"Information Retrieval",
"Multilinguality"
] |
[
51,
3,
36,
47,
24,
0
] |
SCOPUS_ID:85081181886
|
A Comparison of Several Word Clustering Models
|
The sparse-data problem is a main issue that influences the performance of statistical language models; a statistical language model based on word classes is an effective way to address it. This paper presents a definition of word similarity utilizing the mutual information of adjoining words, extends it to a definition of word-set similarity based on word similarity, and puts forward a bottom-up hierarchical word clustering algorithm that can reach a global optimum. Experimental results show that the word clustering algorithm executes quickly and has good clustering performance. We then interpolated the class-based models with the word-based models and found that this mitigates the remaining sparse-data problems of statistical language models.
|
[
"Language Models",
"Semantic Text Processing",
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
52,
72,
3,
29
] |
https://aclanthology.org//W09-2205/
|
A Comparison of Structural Correspondence Learning and Self-training for Discriminative Parse Selection
|
[
"Low-Resource NLP",
"Responsible & Trustworthy NLP"
] |
[
80,
4
] |
|
SCOPUS_ID:85126937389
|
A Comparison of Support Vector Machine and Naïve Bayes Classifier in Binary Sentiment Reviews for PeduliLindungi Application
|
COVID-19 statistics in Indonesia show more than 4.2 million confirmed cases with more than 140 thousand deaths. The Indonesian government has made several policies to reduce the number of COVID-19 cases, one of which is implementing the PeduliLindungi application. The government has socialized and recommended this application as an effort to fulfill its tracking, tracing, and fencing program. Various kinds of responses to this application have appeared in the community; therefore, sentiment analysis is needed to discover public trends so that the government can evaluate the policies it has made. This study aims to determine the better model in a comparison of the Naïve Bayes algorithm and the Support Vector Machine, and also to see whether a simpler model such as Naïve Bayes is still good at handling binary sentiment for PeduliLindungi data reviews. The data was obtained by web scraping reviews of the PeduliLindungi application from the Google Play Store. The Naïve Bayes accuracy value is 81%, lower than the Support Vector Machine's accuracy of 84%. Although the Support Vector Machine is the best model we have, Naïve Bayes can still be used to handle binary sentiment data because the difference in accuracy values is not large.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
http://arxiv.org/abs/2008.04636v1
|
A Comparison of Synthetic Oversampling Methods for Multi-class Text Classification
|
The authors compared oversampling methods for the problem of multi-class topic classification. The SMOTE algorithm underlies one of the most popular oversampling methods. It consists in choosing two examples of a minority class and generating a new example based on them. In the paper, the authors compared the basic SMOTE method with two of its modifications (Borderline SMOTE and ADASYN) and the random oversampling technique, using a text classification task as an example. The paper discusses the k-nearest neighbor algorithm, the support vector machine algorithm and three types of neural networks (feedforward network, long short-term memory (LSTM) and bidirectional LSTM). The authors combine these machine learning algorithms with different text representations and compare the synthetic oversampling methods. In most cases, the use of oversampling techniques can significantly improve the quality of classification. The authors conclude that for this task, the quality of the KNN and SVM algorithms is more influenced by class imbalance than that of the neural networks.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
SCOPUS_ID:85063074242
|
A Comparison of Techniques for Language Model Integration in Encoder-Decoder Speech Recognition
|
Attention-based recurrent neural encoder-decoder models present an elegant solution to the automatic speech recognition problem. This approach folds the acoustic model, pronunciation model, and language model into a single network and requires only a parallel corpus of speech and text for training. However, unlike in conventional approaches that combine separate acoustic and language models, it is not clear how to use additional (unpaired) text. While there has been previous work on methods addressing this problem, a thorough comparison among methods is still lacking. In this paper, we compare a suite of past methods and some of our own proposed methods for using unpaired text data to improve encoder-decoder models. For evaluation, we use the medium-sized Switchboard data set and the large-scale Google voice search and dictation data sets. Our results confirm the benefits of using unpaired text across a range of methods and data sets. Surprisingly, for first-pass decoding, the rather simple approach of shallow fusion performs best across data sets. However, for Google data sets we find that cold fusion has a lower oracle error rate and outperforms other approaches after second-pass rescoring on the Google voice search data set.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
http://arxiv.org/abs/1905.04727v1
|
A Comparison of Techniques for Sentiment Classification of Film Reviews
|
We undertake the task of comparing lexicon-based sentiment classification of film reviews with machine learning approaches. We look at existing methodologies and attempt to emulate and improve on them using a 'given' lexicon and a bag-of-words approach. We also utilise syntactical information such as part-of-speech and dependency relations. We will show that a simple lexicon-based classification achieves good results however machine learning techniques prove to be the superior tool. We also show that more features do not necessarily deliver better performance as well as elaborate on three further enhancements not tested in this article.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85141728769
|
A Comparison of Text Classification Methods: Towards Fake News Detection for Indonesian Websites
|
Fake news reports false or distorted information that aims to mislead us and undoubtedly has a negative impact on society. For example, medical research in Taiwan shows that fake news about the COVID-19 vaccine significantly reduces the number of doses absorbed by the public, as those exposed to fake news become hesitant and even anti-vaccine. Therefore, early automatic detection of fake news on the internet is crucial. In detecting fake news, machine learning algorithms, especially text classification algorithms, are used as a solution to this problem. The search for the best method with high accuracy needs to be carried out continuously. This paper compares four main algorithms, namely Support Vector Machine (SVM), Stochastic Gradient Descent (SGD), Logistic Regression (LR), and Naïve Bayes. The experiment was carried out using 200 Indonesian news datasets, consisting of 100 fake news and 100 real news items. The performance of each algorithm was evaluated with accuracy, recall, precision, and F1-score as the harmonic mean. The results showed that Logistic Regression was able to separate fake news and real news with the highest F1-score, reaching 90.9%. This paper also proposes a framework for detecting fake news that can be implemented on public websites.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Ethical NLP",
"Reasoning",
"Fact & Claim Verification",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
3,
24,
17,
8,
46,
36,
4
] |
SCOPUS_ID:85076233064
|
A Comparison of Text Classifiers on IT Incidents Using WEKA
|
IT service management and incident management is a hot topic in every company that offers IT services, and it requires human effort to manage. In the ITIL framework for IT service management, it is always useful to link incidents with configuration items (CIs), in other words the assets or components necessary to deliver IT services; in many IT service management tools this linking is performed manually by IT support technicians. The aim of this study is to remove this manual linking step by applying text classification methods to provide an automatic assignment of CIs to incidents. Four text classification methods, Naïve Bayes Multinomial, k-Nearest Neighbor, Support Vector Machine and J48 decision tree classifiers, are used on three different sets of incidents extracted from the same database using different filters. The impact of some pre-processing steps is compared on different sizes of datasets.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:0043238076
|
A Comparison of Text-Based Methods for Detecting Duplication in Scanned Document Databases
|
This paper presents an experimental evaluation of several text-based methods for detecting duplication in scanned document databases using uncorrected OCR output. This task is made challenging both by the wide range of degradations printed documents can suffer, and by conflicting interpretations of what it means to be a "duplicate." We report results for four sets of experiments exploring various aspects of the problem space. While the techniques studied are generally robust in the face of most types of OCR errors, there are nonetheless important differences which we identify and discuss in detail.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85126733255
|
A Comparison of Topic Modeling Algorithms on Visual Social Media Networks
|
Topic modeling algorithms are statistical algorithms that produce meaningful information from unstructured data. Topic modeling has been applied to many social media platforms such as Instagram, Twitter, and Facebook. Images are an important type of multimedia, as they contain rich visual content that conveys semantic information which can be effectively extracted. In this paper, we utilize topic modeling to find user interests in Instagram, which is significant for various fields, such as recommendations, sentiment analysis, or improving products. We study three algorithms: Latent Semantic Indexing, Latent Dirichlet Allocation, and Non-Negative Matrix Factorization. We evaluate our models using coherence and similarity measures. NMF was found to be the most successful algorithm, based on similarity measures as well as human interpretation of results.
|
[
"Visual Data in NLP",
"Topic Modeling",
"Information Extraction & Text Mining",
"Multimodality"
] |
[
20,
9,
3,
74
] |
http://arxiv.org/abs/1806.06957v2
|
A Comparison of Transformer and Recurrent Neural Networks on Multilingual Neural Machine Translation
|
Recently, neural machine translation (NMT) has been extended to multilinguality, that is to handle more than one translation direction with a single system. Multilingual NMT showed competitive performance against pure bilingual systems. Notably, in low-resource settings, it proved to work effectively and efficiently, thanks to shared representation space that is forced across languages and induces a sort of transfer-learning. Furthermore, multilingual NMT enables so-called zero-shot inference across language pairs never seen at training time. Despite the increasing interest in this framework, an in-depth analysis of what a multilingual NMT model is capable of and what it is not is still missing. Motivated by this, our work (i) provides a quantitative and comparative analysis of the translations produced by bilingual, multilingual and zero-shot systems; (ii) investigates the translation quality of two of the currently dominant neural architectures in MT, which are the Recurrent and the Transformer ones; and (iii) quantitatively explores how the closeness between languages influences the zero-shot translation. Our analysis leverages multiple professional post-edits of automatic translations by several different systems and focuses both on automatic standard metrics (BLEU and TER) and on widely used error categories, which are lexical, morphology, and word order errors.
|
[
"Language Models",
"Low-Resource NLP",
"Machine Translation",
"Semantic Text Processing",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multilinguality"
] |
[
52,
80,
51,
72,
47,
4,
0
] |
http://arxiv.org/abs/2210.00367v1
|
A Comparison of Transformer, Convolutional, and Recurrent Neural Networks on Phoneme Recognition
|
Phoneme recognition is a very important part of speech recognition that requires the ability to extract phonetic features from multiple frames. In this paper, we compare and analyze CNN, RNN, Transformer, and Conformer models using phoneme recognition. For CNN, the ContextNet model is used for the experiments. First, we compare the accuracy of various architectures under different constraints, such as the receptive field length, parameter size, and layer depth. Second, we interpret the performance difference of these models, especially when the observable sequence length varies. Our analyses show that Transformer and Conformer models benefit from the long-range accessibility of self-attention through input frames.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85100449713
|
A Comparison of Transformer, Recurrent Neural Networks and SMT in Tamil to Sinhala MT
|
Neural Machine Translation (NMT) is currently the most promising approach for machine translation. The attention mechanism is a successful technique in modern Natural Language Processing (NLP), especially in tasks like machine translation. The recently proposed Transformer network architecture is based entirely on attention mechanisms and achieves new state-of-the-art results in neural machine translation, outperforming other sequence-to-sequence models. Although it is successful in resource-rich settings, its applicability to low-resource language pairs is still debatable. Additionally, when the language pair is morphologically rich and the corpora are multi-domain, the lack of a large parallel corpus becomes a significant barrier. In this study, we explore different NMT algorithms, Long Short Term Memory (LSTM) and Transformer based NMT, to translate the Tamil to Sinhala language pair. The Transformer clearly outperforms LSTM by 2.43 BLEU points in the Tamil to Sinhala direction. This work also provides a preliminary comparison of statistical machine translation (SMT) and Neural Machine Translation (NMT) for Tamil to Sinhala in an open-domain context.
|
[
"Language Models",
"Machine Translation",
"Semantic Text Processing",
"Text Generation",
"Multilinguality"
] |
[
52,
51,
72,
47,
0
] |
http://arxiv.org/abs/2009.06257v1
|
A Comparison of Two Fluctuation Analyses for Natural Language Clustering Phenomena: Taylor and Ebeling & Neiman Methods
|
This article considers the fluctuation analysis methods of Taylor and Ebeling & Neiman. While both have been applied to various phenomena in the statistical mechanics domain, their similarities and differences have not been clarified. After considering their analytical aspects, this article presents a large-scale application of these methods to text. It is found that both methods can distinguish real text from independently and identically distributed (i.i.d.) sequences. Furthermore, it is found that the Taylor exponents acquired from words can roughly distinguish text categories; this is also the case for Ebeling and Neiman exponents, but to a lesser extent. Additionally, both methods show some possibility of capturing script kinds.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
https://aclanthology.org//2011.mtsummit-papers.40/
|
A Comparison of Unsupervised Bilingual Term Extraction Methods Using Phrase-Tables
|
[
"Low-Resource NLP",
"Machine Translation",
"Information Extraction & Text Mining",
"Structured Data in NLP",
"Term Extraction",
"Multimodality",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multilinguality"
] |
[
80,
51,
3,
50,
1,
74,
47,
4,
0
] |
|
SCOPUS_ID:85132970194
|
A Comparison of Web Services for Sentiment Analysis in Digital Mental Health Interventions
|
The use of web services allows for an easy and cost-effective way to implement natural language processing capabilities such as sentiment analysis in digital interventions such as those used in mental healthcare. To the best of our knowledge, the majority of studies to date focus on the use of sentiment analysis for the analysis of user reviews and social platforms. This study thus aims to explore the use of 18 currently available web services for the analysis of user-submitted content from a digital mental health intervention. The web services are compared on the basis of their accuracy, precision, recall, F-measure and mean square error. Given the sensitive nature of user content from digital mental health interventions, we also explored how the various web services handled the data submitted to them for analysis. The results of the study give other researchers a better idea of the performance and suitability of the various web services for use in digital mental health interventions.
|
[
"Responsible & Trustworthy NLP",
"Ethical NLP",
"Sentiment Analysis"
] |
[
4,
17,
78
] |
http://arxiv.org/abs/1611.02956v3
|
A Comparison of Word Embeddings for English and Cross-Lingual Chinese Word Sense Disambiguation
|
Word embeddings are now ubiquitous forms of word representation in natural language processing. There have been applications of word embeddings for monolingual word sense disambiguation (WSD) in English, but few comparisons have been done. This paper attempts to bridge that gap by examining popular embeddings for the task of monolingual English WSD. Our simplified method leads to comparable state-of-the-art performance without expensive retraining. Cross-Lingual WSD - where the word senses of a word in a source language e come from a separate target translation language f - can also assist in language learning; for example, when providing translations of target vocabulary for learners. Thus we have also applied word embeddings to the novel task of cross-lingual WSD for Chinese and provide a public dataset for further benchmarking. We have also experimented with using word embeddings for LSTM networks and found surprisingly that a basic LSTM network does not work well. We discuss the ramifications of this outcome.
|
[
"Language Models",
"Machine Translation",
"Semantic Text Processing",
"Word Sense Disambiguation",
"Representation Learning",
"Text Generation",
"Cross-Lingual Transfer",
"Multilinguality"
] |
[
52,
51,
72,
65,
12,
47,
19,
0
] |
SCOPUS_ID:85123221498
|
A Comparison of Word Embeddings to Study Complications in Neurosurgery
|
Our study aimed to compare the capability of different word embeddings to capture the semantic similarity of clinical concepts related to complications in neurosurgery at the level of medical experts. Eighty-four sets of word embeddings (based on Word2vec, GloVe, FastText, PMI, and BERT algorithms) were benchmarked in a clustering task. FastText model showed the best close to the medical expertise capability to group medical terms by their meaning (adjusted Rand index = 0.682). Word embedding models can accurately reflect clinical concepts' semantic and linguistic similarities, promising their robust usage in medical domain-specific NLP tasks.
|
[
"Representation Learning",
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Text Clustering"
] |
[
12,
3,
72,
29
] |
http://arxiv.org/abs/1906.05468v1
|
A Comparison of Word-based and Context-based Representations for Classification Problems in Health Informatics
|
Distributed representations of text can be used as features when training a statistical classifier. These representations may be created as a composition of word vectors or as context-based sentence vectors. We compare the two kinds of representations (word versus context) for three classification problems: influenza infection classification, drug usage classification and personal health mention classification. For statistical classifiers trained for each of these problems, context-based representations based on ELMo, Universal Sentence Encoder, Neural-Net Language Model and FLAIR are better than Word2Vec, GloVe and the two adapted using the MESH ontology. There is an improvement of 2-4% in the accuracy when these context-based representations are used instead of word-based representations.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
http://arxiv.org/abs/cmp-lg/9809003v1
|
A Comparison of WordNet and Roget's Taxonomy for Measuring Semantic Similarity
|
This paper presents the results of using Roget's International Thesaurus as the taxonomy in a semantic similarity measurement task. Four similarity metrics were taken from the literature and applied to Roget's. The experimental evaluation suggests that the traditional edge-counting approach does surprisingly well (a correlation of r=0.88 with a benchmark set of human similarity judgements, against an upper bound of r=0.90 for human subjects performing the same task).
|
[
"Semantic Text Processing",
"Semantic Similarity"
] |
[
72,
53
] |
https://aclanthology.org//W98-0716/
|
A Comparison of WordNet and Roget’s Taxonomy for Measuring Semantic Similarity
|
[
"Knowledge Representation",
"Semantic Text Processing",
"Semantic Similarity"
] |
[
18,
72,
53
] |
|
SCOPUS_ID:84989811357
|
A Comparison of methods for identifying the translation of words in a comparable corpus: Recipes and limits
|
Identifying translations in comparable corpora is a challenge that has attracted many researchers for a long time. It has applications in several areas including Machine Translation and Cross-lingual Information Retrieval. In this study we compare three state-of-the-art approaches for this task: the so-called context-based projection method, the projection of monolingual word embeddings, as well as a method dedicated to identifying translations of rare words. We carefully explore the hyper-parameters of each method and measure their impact on the task of identifying the translation of English words in Wikipedia into French. Contrary to standard practice, we designed a test case where we do not resort to heuristics in order to pre-select the target vocabulary among which to find translations, therefore pushing each method to its limit. We show that all the approaches we tested have a clear bias toward frequent words. In fact, the best approach we tested could identify the translation of a third of a set of frequent test words, while it could only translate around 10% of rare words.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85026728689
|
A Comparison of the International Charters on Geographical Education
|
This article uses discourse analysis techniques associated with Foucauldian archaeology to examine the two international charters developed by the International Geographical Union Commission on Geographical Education (IGU-CGE), the original one in 1992 and the revised version endorsed in 2016 at the Beijing conference. The examination considers the consultation and development processes before outlining similarities and differences in the messages communicated and how discourses have changed through time. The article concludes with recommendations for the geography education community for the future.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85133006899
|
A Comparison of Transformer-Based Language Models on NLP Benchmarks
|
Since the advent of BERT, Transformer-based language models (TLMs) have shown outstanding effectiveness in several NLP tasks. In this paper, we aim at bringing order to the landscape of TLMs and their performance on important benchmarks for NLP. Our analysis sheds light on the advantages that some TLMs take over the others, but also unveils issues in making a complete and fair comparison in some situations.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
https://aclanthology.org//W19-5324/
|
A Comparison on Fine-grained Pre-trained Embeddings for the WMT19Chinese-English News Translation Task
|
This paper describes our submission to the WMT 2019 Chinese-English (zh-en) news translation shared task. Our systems are based on RNN architectures with pre-trained embeddings that utilize character and sub-character information. We compare models at these different granularity levels using different evaluation metrics. We find that finer-granularity embeddings can help the model according to character-level evaluation, and that the pre-trained embeddings can also be marginally beneficial for model performance when the training data is limited.
|
[
"Language Models",
"Machine Translation",
"Semantic Text Processing",
"Representation Learning",
"Text Generation",
"Multilinguality"
] |
[
52,
51,
72,
12,
47,
0
] |
SCOPUS_ID:85097251837
|
A Competence-Aware Curriculum for Visual Concepts Learning via Question Answering
|
Humans can progressively learn visual concepts from easy to hard questions. To mimic this efficient learning ability, we propose a competence-aware curriculum for visual concept learning in a question-answering manner. Specifically, we design a neural-symbolic concept learner for learning the visual concepts and a multi-dimensional Item Response Theory (mIRT) model for guiding the learning process with an adaptive curriculum. The mIRT effectively estimates the concept difficulty and the model competence at each learning step from accumulated model responses. The estimated concept difficulty and model competence are further utilized to select the most profitable training samples. Experimental results on CLEVR show that with a competence-aware curriculum, the proposed method achieves state-of-the-art performance with superior data efficiency and convergence speed. Specifically, the proposed model uses only 40% of the training data and converges three times faster compared with other state-of-the-art methods.
|
[
"Visual Data in NLP",
"Natural Language Interfaces",
"Question Answering",
"Multimodality"
] |
[
20,
11,
27,
74
] |
SCOPUS_ID:85099006756
|
A Compiler-based Approach for Natural Language to Code Conversion
|
There is a gap observed between the natural language (NL) of speech and writing a program to generate code. Programmers must know the syntax of a programming language in order to code. The aim of the proposed model is to do away with the syntactic structure of a programming language so that the user can specify instructions in human-interactive form, using either text or speech. The designed solution is an application based on speech recognition and user interaction that makes coding faster and more efficient. Lexical, syntax and semantic analysis is performed on the user's instructions and then the code is generated. C is used as the programming language in the proposed model. The code editor is a web page, and the user's instructions are sent to a Flask server for processing. Using the Python libraries NLTK and ply, the human language instructions are converted to C code, which is returned to the client. Lex is used for tokenization, and the LALR parser of Yacc processes the syntax specifications to generate an output procedure. The results are recorded and analyzed for the time taken to convert the NL commands to code, and the efficiency of the implementation is measured with accuracy, precision and recall.
|
[
"Programming Languages in NLP",
"Speech & Audio in NLP",
"Syntactic Text Processing",
"Multimodality",
"Text Generation",
"Responsible & Trustworthy NLP",
"Code Generation",
"Green & Sustainable NLP"
] |
[
55,
70,
15,
74,
47,
4,
44,
68
] |
SCOPUS_ID:85063615546
|
A Complaint Text Classification Model Based on Character-Level Convolutional Network
|
With the increasing demand for service quality, a growing number of people are expressing complaints on the Web about services from different businesses. The correct classification of complaint reasons can substantially improve the quality of business service. Existing text classification methods applied to various datasets are mushrooming; however, analysis of complaint texts is rare in the current literature. There are still great challenges in classifying complaint texts. On the one hand, complaint texts contain obvious negative sentiments which are useless for complaint text classification. On the other hand, complaint texts have more semantic and grammatical errors caused by negative emotions, especially in Chinese, which increases the difficulty of modeling. In response to these challenges, we propose a novel complaint text classification model based on a character-level convolutional network. First, we employ a Negative Elements Removal (NER) module to denoise complaint texts. Second, in order to reduce the effects of semantic and grammatical errors, a character-based convolutional network for complaint texts is proposed. Experiments demonstrate that our model achieves state-of-the-art results on Chinese and English complaint texts compared with traditional methods and deep learning methods.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
http://arxiv.org/abs/2204.02023v1
|
A Complementary Joint Training Approach Using Unpaired Speech and Text for Low-Resource Automatic Speech Recognition
|
Unpaired data has been shown to be beneficial for low-resource automatic speech recognition~(ASR), where it can be involved in the design of hybrid models with multi-task training or language model dependent pre-training. In this work, we leverage unpaired data to train a general sequence-to-sequence model. Unpaired speech and text are used in the form of data pairs by generating the corresponding missing parts prior to model training. Inspired by the complementarity of speech-PseudoLabel pairs and SynthesizedAudio-text pairs in both acoustic features and linguistic features, we propose a complementary joint training~(CJT) method that trains a model alternately with the two data pairs. Furthermore, label masking for pseudo-labels and gradient restriction for synthesized audio are proposed to further cope with the deviations from real data, termed CJT++. Experimental results show that compared to speech-only training, the proposed basic CJT achieves great performance improvements on clean/other test sets, and the CJT++ re-training yields further performance enhancements. The proposed method also outperforms the wav2vec2.0 model with the same model size and beam size, particularly in extreme low-resource cases.
|
[
"Low-Resource NLP",
"Speech & Audio in NLP",
"Multimodality",
"Text Generation",
"Speech Recognition",
"Responsible & Trustworthy NLP"
] |
[
80,
70,
74,
47,
10,
4
] |
https://aclanthology.org//W02-2106/
|
A Complete, Efficient Sentence-Realization Algorithm for Unification Grammar
|
[
"Responsible & Trustworthy NLP",
"Text Generation",
"Green & Sustainable NLP"
] |
[
4,
47,
68
] |
|
https://aclanthology.org//W97-1101/
|
A Complexity Measure for Diachronic Chinese Phonology
|
[
"Phonology",
"Syntactic Text Processing"
] |
[
6,
15
] |
|
SCOPUS_ID:84979610005
|
A Compliant Document Image Classification System Based on One-Class Classifier
|
Document image classification in a professional context requires respecting constraints such as dealing with a large variability of documents and/or number of classes. Whereas most methods deal with all classes at the same time, we address this problem by presenting a new compliant system based on the specialization of the features and the parametrization of the classifier separately, class per class. We first compute a generalized vector of features based on global image characterization and structural primitives. Then, for each class, the feature vector is specialized by ranking the features according to a stability score. Finally, a one-class K-nn classifier is trained using these specific features. Conducted experiments reveal good classification rates, proving the ability of our system to deal with a large range of document classes.
|
[
"Visual Data in NLP",
"Text Classification",
"Multimodality",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
20,
36,
74,
24,
3
] |
SCOPUS_ID:85049204886
|
A Composite Natural Language Processing and Information Retrieval Approach to Question Answering Using a Structured Knowledge Base
|
With the inception of the World Wide Web, the amount of data present on the Internet is tremendous. This makes the task of navigating through this enormous amount of data quite difficult for the user. As users struggle to navigate through this wealth of information, the need for the development of an automated system that can extract the required information becomes urgent. This paper presents a Question Answering system to ease the process of information retrieval. Question Answering systems have been around for quite some time and are a sub-field of information retrieval and natural language processing. The task of any Question Answering system is to seek an answer to a free form factual question. The difficulty of pinpointing and verifying the precise answer makes question answering more challenging than simple information retrieval done by search engines. The research objective of this paper is to develop a novel approach to Question Answering based on a composition of conventional approaches of Information Retrieval (IR) and Natural Language processing (NLP). The focus is on using a structured and annotated knowledge base instead of an unstructured one. The knowledge base used here is DBpedia and the final system is evaluated on the Text REtrieval Conference (TREC) 2004 questions dataset.
|
[
"Semantic Text Processing",
"Structured Data in NLP",
"Question Answering",
"Knowledge Representation",
"Natural Language Interfaces",
"Information Retrieval",
"Multimodality"
] |
[
72,
50,
27,
18,
11,
24,
74
] |
SCOPUS_ID:85129823962
|
A Compositional Adaptation-based Approach for Recommending Learning Resources in Software Development
|
In this paper, we discuss the application of a compositional adaptation approach to recommend learning resources to users in the area of software development. This approach makes use of a domain-specific ontology in this area to find those words which are used in the technical description of the stored cases. A point peculiar to representing cases in the proposed approach is to take into account the characteristics of the included learning resources, which justify the way they support the essential operations in the case solution. In this way, only those components that comply with the user's request would be considered in the final solution. In the paper, the performance of the proposed approach for recommending learning resources, together with the status of user experience in his/her interaction with the resulting recommender system, has been evaluated. Results demonstrate that the learning resources obtained through this approach are sufficiently beneficial for the users. Although the proposed approach has been applied to recommending learning resources in the area of software development, it can be equally applied to any technological area by developing a domain-specific ontology for that area. This is mainly because any technological area has its own specific objects/entities holding their own semantic similarities that finally lead to forming a domain-specific ontology for that area.
|
[
"Knowledge Representation",
"Semantic Text Processing",
"Semantic Similarity"
] |
[
18,
72,
53
] |
http://arxiv.org/abs/1604.00100v1
|
A Compositional Approach to Language Modeling
|
Traditional language models treat language as a finite state automaton on a probability space over words. This is a very strong assumption when modeling something inherently complex such as language. In this paper, we challenge this by showing how the linear chain assumption inherent in previous work can be translated into a sequential composition tree. We then propose a new model that marginalizes over all possible composition trees thereby removing any underlying structural assumptions. As the partition function of this new model is intractable, we use a recently proposed sentence level evaluation metric Contrastive Entropy to evaluate our model. Given this new evaluation metric, we report more than 100% improvement across distortion levels over current state of the art recurrent neural network based language models.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
https://aclanthology.org//W07-1425/
|
A Compositional Approach toward Dynamic Phrasal Thesaurus
|
[
"Reasoning",
"Textual Inference"
] |
[
8,
22
] |
|
http://arxiv.org/abs/1509.06594v1
|
A Compositional Explanation of the Pet Fish Phenomenon
|
The `pet fish' phenomenon is often cited as a paradigm example of the `non-compositionality' of human concept use. We show here how this phenomenon is naturally accommodated within a compositional distributional model of meaning. This model describes the meaning of a composite concept by accounting for interaction between its constituents via their grammatical roles. We give two illustrative examples to show how the qualitative phenomena are exhibited. We go on to apply the model to experimental data, and finally discuss extensions of the formalism.
|
[
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP"
] |
[
81,
4
] |
SCOPUS_ID:85104993063
|
A Comprehensive Analysis of Deep Learning Techniques for Documentation Classification
|
The continuously increasing volume of documents in different fields has rendered document classification by manual labor infeasible. This has led to the genesis of automatic classification with the help of a myriad of techniques from data mining, machine learning, and deep learning. This automation prevents human error and, more importantly, builds up speed, which is critical given the volume of the data. Document classification has seen many developments in the past decade. This paper presents a comprehensive analysis of various state-of-the-art deep-learning algorithms for document classification and performs extensive experiments on these techniques. The performance of these algorithms is analyzed quantitatively using established metrics.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
http://arxiv.org/abs/2009.01989v1
|
A Comprehensive Analysis of Information Leakage in Deep Transfer Learning
|
Transfer learning is widely used for transferring knowledge from a source domain to a target domain where labeled data is scarce. Recently, deep transfer learning has achieved remarkable progress in various applications. However, in many real-world scenarios the source and target datasets belong to two different organizations, which raises potential privacy issues in deep transfer learning. In this study, to thoroughly analyze the potential privacy leakage in deep transfer learning, we first divide previous methods into three categories. Based on that, we demonstrate specific threats that lead to unintentional privacy leakage in each category. Additionally, we also provide some solutions to prevent these threats. To the best of our knowledge, our study is the first to provide a thorough analysis of the information leakage issues in deep transfer learning methods and to propose potential solutions to the issue. Extensive experiments on two public datasets and an industry dataset are conducted to show the privacy leakage under different deep transfer learning settings and the effectiveness of the defense solutions.
|
[
"Language Models",
"Semantic Text Processing",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
17,
4
] |
SCOPUS_ID:85111987422
|
A Comprehensive Analysis on Question Classification Using Machine Learning and Deep Learning Techniques
|
The competence of any online Web site depends on the type of experience it gives to its users, which in turn depends largely on the content put up on the Web site. Hence, the content being put online should be carefully curated. Many Web sites provide content to their users in terms of questions and answers, for example the online Web site Quora, which hosts large-scale data in the form of users' questions and answers. Users both put up questions and provide answers to those questions. In this paper, a system is proposed that uses a significant amount of data from Kaggle with different approaches to predict insincere questions. The goal of the proposed work is to develop a model that takes a question in the English language as input and produces either 0 or 1 as output, where 0 represents a sincere question and 1 an insincere one. The data are transformed into results, including the F1 score, using preprocessing, TF-IDF, and pre-trained word embeddings. For each method, preprocessing tasks such as tokenization, case normalization, and punctuation removal are applied. Each individual word after preprocessing is represented as a word vector in this work. These word vectors are then used for the sincere/insincere question classification, serving as input to machine learning models such as SVM, Naïve Bayes, and logistic regression, and to a deep learning model, an RNN with word embeddings. The system detects toxic content in the questions, labeling insincere questions as 1 and sincere ones as 0. The accuracy of these different methods is critically examined with the help of LSTM, word embeddings, and model ensembling.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
SCOPUS_ID:85125422936
|
A Comprehensive Approach of Exploring Usability Problems in Enterprise Resource Planning Systems
|
Enterprise Resource Planning (ERP) is a frequently used system among organizations to automate their workflows, and companies’ performances are highly dependent on the ERP system. The usability issues of ERP systems may cause performance degradation, resulting in the company’s loss in terms of cost. Previously, several studies reported many usability problems of ERP systems. It can be helpful for the developers and designers of ERP systems to use design recommendations as a quick reference to avoid recurrent usability problems of ERP systems. Currently, this area lacks effective consolidation of the previously reported usability problem data. This paper presents a unique approach to developing a precise checklist of ERP usability problems using the topic modeling technique. Our analysis found six different usability problem-related topics that can be generalized for various ERP systems. We have successfully validated our checklist in three different usability studies of ERP systems. The most frequently found usability problems are “difficulty searching and finding the desired item/information in the interface and error handling” and “missing data and information”. The outcome of our paper is the provision of recommendations to avoid the usability problems of ERP systems and help organizations efficiently prevent frequent issues during the development and maintenance of ERP systems.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
https://aclanthology.org//2021.eancs-1.3/
|
A Comprehensive Assessment of Dialog Evaluation Metrics
|
Automatic evaluation metrics are a crucial component of dialog systems research. Standard language evaluation metrics are known to be ineffective for evaluating dialog. As such, recent research has proposed a number of novel, dialog-specific metrics that correlate better with human judgements. Due to the fast pace of research, many of these metrics have been assessed on different datasets and there has as yet been no time for a systematic comparison between them. To this end, this paper provides a comprehensive assessment of recently proposed dialog evaluation metrics on a number of datasets. In this paper, 23 different automatic evaluation metrics are evaluated on 10 different datasets. Furthermore, the metrics are assessed in different settings, to better qualify their respective strengths and weaknesses. This comprehensive assessment offers several takeaways pertaining to dialog evaluation metrics in general. It also suggests how to best assess evaluation metrics and indicates promising directions for future work.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85084283402
|
A Comprehensive Comparison of Machine Learning Based Methods Used in Bengali Question Classification
|
A QA classification system maps questions asked by humans to an appropriate answer category. A sound question classification (QC) model is the prerequisite of a sound QA system. This work demonstrates the phases of assembling a QA type classification model. We present a comprehensive comparison (performance and computational complexity) among several machine learning based approaches used in QC for the Bengali language.
|
[
"Text Classification",
"Question Answering",
"Natural Language Interfaces",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
27,
11,
24,
3
] |
http://arxiv.org/abs/2106.11483v8
|
A Comprehensive Comparison of Pre-training Language Models
|
Recently, the development of pre-trained language models has brought natural language processing (NLP) tasks to a new state of the art. In this paper we explore the efficiency of various pre-trained language models. We pre-train a list of transformer-based models with the same amount of text and the same number of training steps. The experimental results show that the greatest improvement over the original BERT comes from adding an RNN layer to capture more contextual information for short-text understanding. But the conclusion is: there is no remarkable improvement in short-text understanding among similar BERT structures. A data-centric method [12] can achieve better performance.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
http://arxiv.org/abs/2110.05115v1
|
A Comprehensive Comparison of Word Embeddings in Event & Entity Coreference Resolution
|
Coreference Resolution is an important NLP task and most state-of-the-art methods rely on word embeddings for word representation. However, one issue that has been largely overlooked in literature is that of comparing the performance of different embeddings across and within families in this task. Therefore, we frame our study in the context of Event and Entity Coreference Resolution (EvCR & EnCR), and address two questions : 1) Is there a trade-off between performance (predictive & run-time) and embedding size? 2) How do the embeddings' performance compare within and across families? Our experiments reveal several interesting findings. First, we observe diminishing returns in performance with respect to embedding size. E.g. a model using solely a character embedding achieves 86% of the performance of the largest model (Elmo, GloVe, Character) while being 1.2% of its size. Second, the larger model using multiple embeddings learns faster overall despite being slower per epoch. However, it is still slower at test time. Finally, Elmo performs best on both EvCR and EnCR, while GloVe and FastText perform best in EvCR and EnCR respectively.
|
[
"Coreference Resolution",
"Semantic Text Processing",
"Information Extraction & Text Mining",
"Representation Learning"
] |
[
13,
72,
3,
12
] |
https://aclanthology.org//2007.sigdial-1.31/
|
A Comprehensive Disfluency Model for Multi-Party Interaction
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
http://arxiv.org/abs/2303.07196v1
|
A Comprehensive Empirical Evaluation of Existing Word Embedding Approaches
|
Vector-based word representations help countless Natural Language Processing (NLP) tasks capture both semantic and syntactic regularities of the language. In this paper, we present the characteristics of existing word embedding approaches and analyze them with regards to many classification tasks. We categorize the methods into two main groups - Traditional approaches mostly use matrix factorization to produce word representations, and they are not able to capture the semantic and syntactic regularities of the language very well. Neural-Network based approaches, on the other hand, can capture sophisticated regularities of the language and preserve the word relationships in the generated word representations. We report experimental results on multiple classification tasks and highlight the scenarios where one approach performs better than the rest.
|
[
"Semantic Text Processing",
"Text Classification",
"Syntactic Text Processing",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
15,
12,
24,
3
] |
http://arxiv.org/abs/2201.02772v2
|
A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval
|
Cross-Modal Retrieval (CMR) is an important research topic across multimodal computing and information retrieval, which takes one type of data as the query to retrieve relevant data of another type. It has been widely used in many real-world applications. Recently, the vision-language pre-trained models represented by CLIP demonstrate its superiority in learning the visual and textual representations and gain impressive performance on various vision and language related tasks. Although CLIP as well as the previous pre-trained models have shown great performance improvement in the unsupervised CMR, the performance and impact of these pre-trained models on the supervised CMR were rarely explored due to the lack of common representation for the multimodal class-level associations. In this paper, we take CLIP as the current representative vision-language pre-trained model to conduct a comprehensive empirical study. We evaluate its performance and impact on the supervised CMR, and attempt to answer several key research questions. To this end, we first propose a novel model CLIP4CMR (CLIP enhanced network for Cross-Modal Retrieval) that employs the pre-trained CLIP as backbone network to perform the supervised CMR. Then by means of the CLIP4CMR framework, we revisit the design of different learning objectives in current CMR methods to provide new insights on model design. Moreover, we investigate the most concerned aspects in applying CMR, including the robustness to modality imbalance and sensitivity to hyper-parameters, to provide new perspectives for practical applications. Through extensive experiments, we show that CLIP4CMR achieves the SOTA results with prominent improvements on the benchmark datasets, and can be used as a fundamental framework to empirically study the key research issues of the supervised CMR, with significant implications for model design and practical considerations.
|
[
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Multimodality"
] |
[
20,
52,
72,
24,
74
] |