Columns: id (string, length 20–52) · title (string, length 3–459) · abstract (string, length 0–12.3k) · classification_labels (list) · numerical_classification_labels (list)
SCOPUS_ID:85102631442
A Comparative Study of Deep Neural Network Models on Multi-Label Text Classification in Finance
Multi-Label Text Classification (MLTC) is a well-known NLP task that allows the classification of texts into multiple categories indicating their most relevant domains. However, models trained on texts written by web users must deal with redundancy and ambiguity of linguistic information. In this work, we propose a comparative study of different neural network models for a multi-label text categorisation task in the finance domain. Our main contribution consists of presenting a new annotated dataset that contains ∼26k user posts associated with finance categories. To build that dataset, we defined 10 domain-specific categories that cover financial texts. To serve as a baseline, we present a comparative study analysing both the performance and training time of different learning models for the task of multi-label text categorisation on the new dataset. The results show that transformer-based language models outperformed RNN-based neural networks in all scenarios in terms of precision. However, transformers took much more time than RNN models to train for an epoch.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
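The abstract above contrasts transformer and RNN classifiers for multi-label categorisation. As a minimal sketch of the multi-label setup itself (not the paper's models), assuming scikit-learn and invented toy posts and categories:

```python
# Minimal multi-label text classification baseline; the posts, categories,
# and classifier are illustrative stand-ins, not the paper's data or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

posts = ["bitcoin hits a new all-time high", "best savings account rates this year"]
labels = [["crypto", "markets"], ["banking"]]    # each post may carry several categories

mlb = MultiLabelBinarizer()                      # label sets -> binary indicator vectors
Y = mlb.fit_transform(labels)
X = TfidfVectorizer().fit_transform(posts)

clf = OneVsRestClassifier(LogisticRegression())  # one binary classifier per category
clf.fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))
```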
SCOPUS_ID:85100669951
A Comparative Study of Dictionary-based and Machine Learning-based Named Entity Recognition in Pashto
Information Extraction (IE) is the process of extracting structured information from unstructured text using natural language processing (NLP). One important sub-task of IE is the extraction of names of persons, places, and organizations, called Named Entity Recognition (NER). NER plays an important role in many NLP applications such as Question Answering, Machine Translation, and Text Summarization. It has been widely studied for high-resource languages like English. However, no research has taken place in this regard for Pashto. We hypothesized that, based on the NER research done for English and other languages, such a system can be developed for Pashto. We have developed two NER systems for detecting names of persons, places, and organizations in Pashto text: first, a dictionary-based NER that uses three dictionaries containing names of persons, locations, and organizations, respectively; second, a learning-based approach that uses a Hidden Markov Model (HMM) for the task. We have evaluated both systems on a dataset collected from sports news. Our evaluation showed an F-Measure of 82% for the HMM and 60% for the dictionary-based NER. Our findings highlight that the HMM outperforms dictionary-based NER.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
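As a sketch of the dictionary-based approach described above, assuming placeholder English tokens in place of Pashto script and invented gazetteer entries:

```python
# Dictionary (gazetteer) NER sketch; entries and tokens are illustrative.
GAZETTEERS = {
    "PER": {"Ahmad", "Karim"},
    "LOC": {"Kabul", "Kandahar"},
    "ORG": {"UNICEF"},
}

def dictionary_ner(tokens):
    """Tag each token with the first gazetteer that contains it, else 'O'."""
    tags = []
    for tok in tokens:
        tag = "O"
        for label, names in GAZETTEERS.items():
            if tok in names:
                tag = label
                break
        tags.append((tok, tag))
    return tags

print(dictionary_ner("Ahmad travelled from Kabul to Kandahar".split()))
```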
SCOPUS_ID:85115444224
A Comparative Study of Different Models in Ancient Poetry Translation
Ancient poetry is an important part of Chinese culture, and projects such as Jiuge have combined ancient poetry with deep learning. The language of ancient poetry is often highly refined, and rich imagination is needed to understand its meaning; as a result, automatic translation is difficult to implement. This paper makes a preliminary attempt in this direction: based on a dataset we collected ourselves, we adopt deep encoder-decoder models, namely GRU, LSTM, and Transformer, to train translation models. We compare the results of the three models, each of which has its own advantages and disadvantages. However, due to the size of the dataset and the models themselves, the results are not yet ideal and still need to be improved.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
SCOPUS_ID:85142535193
A Comparative Study of Different Sentiment Analysis Classifiers for Cybercrime Detection on Social Media Platforms
In the current scenario, social media has made it very easy to access and exploit different types of data from various platforms, which are freely available to everyone to share their opinions openly. With this open access, the privacy and security of all social media users is a matter of concern. Sentiment analysis plays an essential role in social media security, as it is used in many application domains such as risk management, anomaly detection, and disaster relief. This article surveys sentiment analysis approaches and methods for social media defence and assessment. A study on the security challenges related to user security breaches, e-commerce websites, fake news, cyber-bullying, and credibility on social media is presented in this paper. Here, the major challenge under consideration is fake news detection on several social media platforms. A brief outline of two classifiers, namely Multinomial Naïve Bayes and the Passive-Aggressive Classifier, is presented first. Then the two classifiers are compared to analyze the techniques and methodologies on a Twitter dataset. The results from both classifiers are discussed based on the performance metrics and the accuracy score. Finally, the article draws some important conclusions and future directions in this domain.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
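A minimal sketch of the two-classifier comparison named in the abstract, assuming scikit-learn and invented toy headlines in place of the Twitter dataset:

```python
# Compare Multinomial Naive Bayes and the Passive-Aggressive Classifier on
# toy fake-news data; texts and labels are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score

texts = ["official sources confirm the report", "shocking secret they hide from you",
         "city council approves new budget", "miracle cure doctors won't tell you"]
y = [0, 1, 0, 1]  # 0 = genuine, 1 = fake (toy labels)

X = TfidfVectorizer().fit_transform(texts)
for clf in (MultinomialNB(), PassiveAggressiveClassifier()):
    clf.fit(X, y)
    print(type(clf).__name__, accuracy_score(y, clf.predict(X)))
```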
SCOPUS_ID:85125012158
A Comparative Study of Different Text Classification Approaches for Bangla News Classification
At present, everything is getting digitized and technology has taken control over much of our lives. As a result, a massive number of textual documents are generated on online platforms, and news articles are no exception. People prefer to get connected with online news portals as they are updated every single hour. Newspaper articles have many categories, such as politics, sports, business, and entertainment. Recently, we have noticed the rapid growth of Bangla online news portals on the internet. Recommending the preferable news category helps online readers locate desired articles. Manually categorizing news articles takes huge time and effort, so text categorization is necessary in the modern day, as enormous amounts of uncategorized data are the issue here. Although research on categorizing news articles has improved greatly for languages such as English, Arabic, Chinese, Urdu, and Hindi, the Bangla language has seen little development. However, some approaches using machine learning algorithms have been applied to categorize Bangla news articles, with minimal resources. We have applied five machine learning classifiers and two neural networks to categorize Bangla news articles, of which the LSTM neural network performed best. To compare the applied algorithms and determine which one performs better, we have used four metrics that measure performance.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85118902266
A Comparative Study of Educational Texts for Native, Foreign, and Bilingual Young Speakers of Russian: Are Simplified Texts Equally Simple?
Studies on simple language and simplification are often based on datasets of texts, either for children or for learners of a second language. In both cases, these texts represent an example of simple language, but simplification likely involves different strategies. As such, this data may not be entirely homogeneous in terms of text simplicity. This study investigates linguistic properties and specific simplification strategies used in Russian texts for primary school children with different language backgrounds and levels of language proficiency. To explore the structure and variability of simple texts for young readers of different age groups, we have trained models for multiclass and binary classification. The models were based on quantitative features of texts. Subsequently, we evaluated the simplification strategies applied for readers of the same age with different linguistic backgrounds. This study is particularly relevant for Russian-language material, where the concept of easy and plain language has not been sufficiently investigated. The study revealed that the three types of texts cannot easily be distinguished from each other, judging by the performance of multiclass models based on various quantitative features. Therefore, it can be said that texts of all types exhibit a similar level of accessibility to young readers. In contrast, binary classification tasks demonstrated better results, especially in the R-native vs. non-R-native track (with a 0.78 F1-score); these results may indicate that the strategies used for adapting or creating texts for each type of audience are different.
[ "Paraphrasing", "Information Extraction & Text Mining", "Text Classification", "Text Generation", "Information Retrieval", "Multilinguality" ]
[ 32, 3, 36, 47, 24, 0 ]
https://aclanthology.org//W19-6714/
A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure’s Ability To Gauge Human Evaluation Biases
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85105432234
A Comparative Study of Ensemble Approaches to Fact-Checking for the FEVER Shared Task
The global surge of information motivates automated rumour detection. We use fact-checking to detect rumours of the misinformation type. The FEVER shared task is the fact-checking task used for this comparative study. The task is divided into Document Retrieval, Sentence Selection, and Claim Verification components. We standardise TF-IDF for Document Retrieval before creating pipelines of Sentence Selection and Claim Verification algorithms. We evaluate each unique combination on the FEVER score, then compare the four pipelines to the baseline and the state of the art. Our results show that the 2-way classification task using the Siamese BiLSTM achieves better Evidence Retrieval F1 scores than the state-of-the-art models, and that the pipeline combinations rival the state of the art for the shared task. The novelty of this research lies in the standardised text processing on novel pipeline combinations, allowing for comparable results, as well as the evaluation of the Siamese BiLSTM.
[ "Language Models", "Document Retrieval", "Semantic Text Processing", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 52, 56, 72, 17, 8, 46, 24, 4 ]
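A sketch of the standardised TF-IDF document retrieval component described above, assuming scikit-learn; the documents and claim are invented placeholders:

```python
# Rank documents by TF-IDF cosine similarity to a claim, the first stage of a
# FEVER-style pipeline; document texts here are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["The Eiffel Tower is in Paris.", "Python is a programming language.",
        "Paris is the capital of France."]
claim = "The Eiffel Tower is located in France."

vec = TfidfVectorizer()
D = vec.fit_transform(docs)
q = vec.transform([claim])
scores = cosine_similarity(q, D).ravel()
ranked = scores.argsort()[::-1]           # document indices, best first
print([(docs[i], round(float(scores[i]), 3)) for i in ranked[:2]])
```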
SCOPUS_ID:85145471329
A Comparative Study of Ensemble Techniques Based on Genetic Programming: A Case Study in Semantic Similarity Assessment
The challenge of assessing semantic similarity between pieces of text through computers has attracted considerable attention from industry and academia. New advances in neural computation have developed very sophisticated concepts, establishing a new state of the art in this respect. In this paper, we go one step further by proposing new techniques built on the existing methods. To do so, we bring to the table the stacking concept that has given such good results and propose a new architecture for ensemble learning based on genetic programming. As there are several possible variants, we compare them all and try to establish which one is the most appropriate to achieve successful results in this context. Analysis of the experiments indicates that Cartesian Genetic Programming seems to give better average results.
[ "Programming Languages in NLP", "Semantic Text Processing", "Semantic Similarity", "Multimodality" ]
[ 55, 72, 53, 74 ]
SCOPUS_ID:85122299998
A Comparative Study of Extractive and Abstractive Approaches for Automatic Text Summarization on Scientific Texts
Automatic summarization of long documents is a challenging task that has not been well studied. Existing text summarization approaches are developed and tested mainly on relatively short documents such as news articles and web pages. In this paper, we aim to study the performance of some existing state-of-the-art text summarization algorithms on scientific papers, which are relatively long documents. For the conducted experiments we used the Yale Scientific Article Summarization Dataset. Summarizing scientific texts is a challenge in itself: they cover many different topics with uncommon words, contain long and complex sentences, and are hard to understand even for humans. The dataset consists of 1000 scientific papers with both human-generated summaries and the original abstracts. We have used both abstractive and extractive text summarization algorithms, and we propose a chunk-based approach for the abstractive algorithms (Google Pegasus and T5). The ROUGE score is used to evaluate and compare the results.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
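The paper's exact chunking scheme is not given here; the sketch below shows the general idea of chunk-based abstractive summarization: split a long paper into chunks that fit the model's input limit, summarize each, and concatenate. `summarize` is a stand-in for a Pegasus/T5 call (e.g. a transformers summarization pipeline).

```python
# General chunk-based summarization skeleton; `summarize` is a stand-in for an
# abstractive model call, so the snippet runs without any model download.
def chunks(tokens, size=512):
    for i in range(0, len(tokens), size):
        yield tokens[i:i + size]

def summarize_long(text, summarize, size=512):
    tokens = text.split()                              # crude whitespace tokenization
    parts = [summarize(" ".join(c)) for c in chunks(tokens, size)]
    return " ".join(parts)                             # concatenate per-chunk summaries

# Toy stand-in "model": keep the first ten words of each chunk.
print(summarize_long("word " * 1200, lambda t: " ".join(t.split()[:10])))
```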
http://arxiv.org/abs/2204.05514v1
A Comparative Study of Faithfulness Metrics for Model Interpretability Methods
Interpretation methods to reveal the internal reasoning processes behind machine learning models have attracted increasing attention in recent years. To quantify the extent to which the identified interpretations truly reflect the intrinsic decision-making mechanisms, various faithfulness evaluation metrics have been proposed. However, we find that different faithfulness metrics show conflicting preferences when comparing different interpretations. Motivated by this observation, we aim to conduct a comprehensive and comparative study of the widely adopted faithfulness metrics. In particular, we introduce two assessment dimensions, namely diagnosticity and time complexity. Diagnosticity refers to the degree to which the faithfulness metric favours relatively faithful interpretations over randomly generated ones, and time complexity is measured by the average number of model forward passes. According to the experimental results, we find that sufficiency and comprehensiveness metrics have higher diagnosticity and lower time complexity than the other faithfulness metrics.
[ "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP" ]
[ 81, 4 ]
http://arxiv.org/abs/1902.06242v1
A Comparative Study of Feature Selection Methods for Dialectal Arabic Sentiment Classification Using Support Vector Machine
Unlike other languages, Arabic has a morphological complexity that makes Arabic sentiment analysis a challenging task. Moreover, the presence of dialects in Arabic texts makes the sentiment analysis task even more challenging, due to the absence of specific rules that govern the writing or speaking system. Generally, one of the problems of sentiment analysis is the high dimensionality of the feature vector. To resolve this problem, many feature selection methods have been proposed. These selection methods have been investigated widely for English, but far less for dialectal Arabic. This work investigated the effect of feature selection methods and their combinations on dialectal Arabic sentiment classification. The feature selection methods are Information Gain (IG), Correlation, Support Vector Machine (SVM), Gini Index (GI), and Chi-Square. A number of experiments were carried out on dialectal Jordanian reviews using an SVM classifier. Furthermore, the effects of different term weighting schemes, stemmers, stop-word removal, and feature models on performance were investigated. The experimental results showed that the best performance of the SVM classifier was obtained after the SVM and correlation feature selection methods had been combined with the uni-gram model.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
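A minimal sketch of one of the surveyed selection methods (Chi-Square) feeding an SVM, assuming scikit-learn; the toy English reviews stand in for the dialectal Jordanian data:

```python
# Chi-Square feature selection over uni-gram counts, then an SVM classifier;
# the texts, labels, and k are illustrative stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

texts = ["great product highly recommend", "terrible waste of money",
         "excellent service and quality", "awful experience never again"]
y = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(texts)   # uni-gram features, as in the paper
selector = SelectKBest(chi2, k=5)            # keep the 5 highest-scoring features
X_sel = selector.fit_transform(X, y)

clf = LinearSVC().fit(X_sel, y)
print(X.shape, "->", X_sel.shape)
```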
http://arxiv.org/abs/2009.11898v1
A Comparative Study of Feature Types for Age-Based Text Classification
The ability to automatically determine the age audience of a novel provides many opportunities for the development of information retrieval tools. Firstly, developers of book recommendation systems and electronic libraries may be interested in filtering texts by the age of the most likely readers. Further, parents may want to select literature for children. Finally, it will be useful for writers and publishers to determine which features influence whether the texts are suitable for children. In this article, we compare the empirical effectiveness of various types of linguistic features for the task of age-based classification of fiction texts. For this purpose, we collected a text corpus of book previews labeled with one of two categories -- children's or adult. We evaluated the following types of features: readability indices, sentiment, lexical, grammatical and general features, and publishing attributes. The results obtained show that the features describing the text at the document level can significantly increase the quality of machine learning models.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
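As a sketch of one feature family from the study above, readability indices, assuming the third-party textstat package is installed; the preview text is an invented stand-in for the book-preview corpus:

```python
# Document-level readability features of the kind compared in the paper;
# textstat is an assumed dependency, and the preview text is invented.
import textstat

preview = ("The little rabbit hopped across the meadow. "
           "He was looking for his friend the squirrel.")

features = {
    "flesch_reading_ease": textstat.flesch_reading_ease(preview),
    "flesch_kincaid_grade": textstat.flesch_kincaid_grade(preview),
    "sentence_count": textstat.sentence_count(preview),
}
print(features)  # features that could feed a children's/adult classifier
```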
SCOPUS_ID:85125791201
A Comparative Study of Fuzzy Topic Models and LDA in terms of Interpretability
In many domains that employ machine learning models, both high-performing and interpretable models are needed. A typical machine learning task is text classification, where models are hardly interpretable. Topic models, used as topic embeddings, carry the potential to better understand the decisions made by text classification algorithms. With this goal in mind, we propose two new fuzzy topic models: FLSA-W and FLSA-V. Both models are derived from the topic model Fuzzy Latent Semantic Analysis (FLSA). After training each model ten times, we use the mean coherence score to compare the different models with the benchmark models Latent Dirichlet Allocation (LDA) and FLSA. Our proposed models generally lead to higher coherence scores and lower standard deviations than the benchmark models. These proposed models are specifically useful as topic embeddings in text classification, since their coherence scores do not drop for a high number of topics, as opposed to the decay that occurs with LDA and FLSA.
[ "Topic Modeling", "Information Extraction & Text Mining", "Information Retrieval", "Semantic Text Processing", "Representation Learning", "Explainability & Interpretability in NLP", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 9, 3, 24, 72, 12, 81, 36, 4 ]
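The FLSA variants are not reproduced here; as a sketch of the benchmark side of the comparison, LDA evaluated with a coherence score, assuming gensim and a tiny invented corpus (the paper's exact coherence measure may differ from the u_mass variant used below):

```python
# Train LDA and compute a coherence score, the evaluation protocol the paper
# averages over ten runs; corpus and topic count are toy stand-ins.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = [["topic", "model", "text"], ["fuzzy", "topic", "embedding"],
         ["text", "classification", "model"], ["fuzzy", "semantic", "analysis"]]

dct = Dictionary(texts)
corpus = [dct.doc2bow(t) for t in texts]
lda = LdaModel(corpus, num_topics=2, id2word=dct, random_state=0)

coherence = CoherenceModel(model=lda, corpus=corpus, dictionary=dct,
                           coherence="u_mass").get_coherence()
print(coherence)
```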
SCOPUS_ID:85131125621
A Comparative Study of Information Extraction Strategies Using an Attention-Based Neural Network
This article focuses on information extraction in historical handwritten marriage records. Traditional approaches rely on a sequential pipeline of two consecutive tasks: handwriting recognition is applied before named entity recognition. More recently, joint approaches that handle both tasks at the same time have been investigated, yielding state-of-the-art results. However, as these approaches have been used in different experimental conditions, they have not been fairly compared yet. In this work, we conduct a comparative study of sequential and joint approaches based on the same attention-based architecture, in order to quantify the gain that can be attributed to the joint learning strategy. We also investigate three new joint learning configurations based on multi-task or multi-scale learning. Our study shows that relying on a joint learning strategy can lead to an 8% increase of the complete recognition score. We also highlight the interest of multi-task learning and demonstrate the benefit of attention-based networks for information extraction. Our work achieves state-of-the-art performance in the ICDAR 2017 Information Extraction competition on the Esposalles database at line-level, without any language modelling or post-processing.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Named Entity Recognition", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 52, 80, 72, 34, 4, 3 ]
SCOPUS_ID:85125251302
A Comparative Study of Key Themes of Scientific Research Post COVID-19 in the United Arab Emirates and WHO Using Text Mining Approach
Objective: The objective of this paper is to analyze approved areas of medical research related to COVID-19 from the United Arab Emirates (UAE) and the World Health Organization (WHO) in order to identify key topics and themes for these two entities. The paper attempts to understand the key focus areas of government and private agencies for further medical research in response to COVID-19. Research Design and Methods: In view of the availability of large volumes of documents and advancements in computing systems, text mining has emerged as a significant tool to analyze large volumes of unstructured data. For this paper, we have applied latent semantic analysis (LSA) and singular value decomposition for text clustering. Findings: The term analysis results show various focus areas of the medical research communities of the UAE and WHO. Nutrition is a key theme of research in the UAE, whereas alternative medicine and infection studies emerged as key focus areas for WHO. Further topic-modeling analysis indicates that topics such as pneumonia and prevention approaches have been a focus of approved research for WHO. Contribution/Value Added: The study contributes to the text mining literature by providing a framework for analyzing research or policy documents at the country or organization level. This can help in understanding the key themes of the COVID-19 response of various countries and organizations and in identifying their focus areas.
[ "Topic Modeling", "Information Extraction & Text Mining", "Text Clustering" ]
[ 9, 3, 29 ]
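A compact sketch of the described LSA/SVD-plus-clustering pipeline, assuming scikit-learn; the four toy documents stand in for the UAE/WHO research documents:

```python
# TF-IDF -> truncated SVD (the LSA step) -> clustering of documents into themes;
# documents, component count, and cluster count are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = ["nutrition and immunity in covid patients",
        "vaccine trials and prevention approaches",
        "dietary supplements during the pandemic",
        "pneumonia outcomes and infection control"]

X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # latent topics
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(list(zip(docs, labels)))
```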
SCOPUS_ID:85102632838
A Comparative Study of Korean Feature Granularity Based on Hybrid Neural Network
In natural language processing, the selection of the token unit is a very important step: the original text must be segmented at some granularity before subsequent processing and analysis can be carried out. This paper discusses the influence of different segmentation granularities on a Korean text classification task. Based on the compositional characteristics of the Korean language, combined with linguistic knowledge, the text was divided into six different levels: phoneme, syllable, subword, word, word spacing without suffix, and word spacing. Text semantic representations were then built in a vector space model. Based on the performance of the different granularities in five classic classifiers and six deep learning models, Korean text feature representations were analyzed and compared across the six granularities. Korean scientific literature was classified, and the results show that the spacing-without-suffix level performs best in seven classifiers, with the highest classification accuracy reaching 91.94%.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
http://arxiv.org/abs/2006.00031v1
A Comparative Study of Lexical Substitution Approaches based on Neural Language Models
Lexical substitution in context is an extremely powerful technology that can be used as a backbone of various NLP applications, such as word sense induction, lexical relation extraction, data augmentation, etc. In this paper, we present a large-scale comparative study of popular neural language and masked language models (LMs and MLMs), such as context2vec, ELMo, BERT, and XLNet, applied to the task of lexical substitution. We show that the already competitive results achieved by SOTA LMs/MLMs can be further improved if information about the target word is injected properly, and we compare several target injection methods. In addition, we provide an analysis of the types of semantic relations between the target and the substitutes generated by different models, offering insights into what kinds of words are really generated, or given by annotators, as substitutes.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
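A sketch of the plain masked-LM substitution baseline (not the paper's target-injection methods), assuming Hugging Face transformers is installed; the model is downloaded on first run:

```python
# Generate substitute candidates by masking the target word; the sentence is a
# toy example and bert-base-uncased is one of several models the paper studies.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
sentence = "The acting was great but the plot was [MASK]."
for cand in fill(sentence, top_k=5):
    print(cand["token_str"], round(cand["score"], 3))  # candidate substitutes
```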
SCOPUS_ID:85104831758
A Comparative Study of Lexical and Semantic Emoji Suggestion Systems
Emoji suggestion systems based on typed text have been proposed to encourage emoji usage and enrich text messaging; however, such systems’ actual effects on the chat experience are unknown. We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared these in two different studies. To investigate the effect of emoji suggestion in online conversations, we conducted a laboratory text-messaging study with 24 participants and a 15-day longitudinal field deployment with 18 participants. We found that participants picked more semantic suggestions than lexical suggestions and perceived the semantic suggestions as more relevant to the message content. Our subjective data showed that although the suggestion mechanism did not affect the chatting experience significantly, different mechanisms could change the composing behavior of the users and facilitate their emoji-searching needs in different ways.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85131926442
A Comparative Study of Machine Learning Based Image Captioning Models
Automated image captioning is a crucial concept for numerous real-world applications, as it is useful in robotics, image indexing, and self-driving vehicles, and greatly helpful for people with impaired eyesight. An image provided in real time can be converted into text using image captioning models developed with machine learning algorithms. Understanding an image mostly depends on the features of the image, and machine learning techniques are widely used for image captioning tasks. This research study performs a comparative analysis of three Machine Learning (ML) approaches, i.e. k-Nearest Neighbor (KNN), a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM), and an Attention-Based LSTM. In addition, an improved KNN algorithm with reduced time complexity and an improved CNN with LSTM and Attention-Based LSTM model with an added beam search method are proposed to further improve the underlying approaches. The performance of the three selected models is empirically evaluated using BLEU, ROUGE, and METEOR scores on the widely used Flickr8k dataset, and the experimental results demonstrate the superiority of the Attention-Based LSTM over the other two approaches. Finally, the current study's findings help guide researchers and practitioners in selecting the appropriate approach for image captioning, with empirical evidence in terms of standard evaluation metrics.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 47, 74 ]
http://arxiv.org/abs/1402.4380v1
A Comparative Study of Machine Learning Methods for Verbal Autopsy Text Classification
A Verbal Autopsy is the record of an interview about the circumstances of an uncertified death. In developing countries, if a death occurs away from health facilities, a field-worker interviews a relative of the deceased about the circumstances of the death; this Verbal Autopsy can be reviewed off-site. We report on a comparative study of the processes involved in Text Classification applied to classifying Cause of Death: feature value representation; machine learning classification algorithms; and feature reduction strategies, in order to identify the suitable approaches applicable to the classification of Verbal Autopsy text. We demonstrate that normalised term frequency and the standard TF-IDF achieve comparable performance across a number of classifiers. The results also show that the Support Vector Machine is superior to the other classification algorithms employed in this research. Finally, we demonstrate the effectiveness of employing a "locally-semi-supervised" feature reduction strategy in order to increase performance accuracy.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85060735596
A Comparative Study of Machine Learning Techniques for Real-time Multi-tier Sentiment Analysis
Nowadays, big data, both structured and unstructured, are generated from social media. Social media are powerful marketing tools, and social big data require real-time tracking and analytics because speed may indeed be the most important competitive business advantage. Compared to batch processing of sentiment analysis on a big data analytics platform, real-time analytics is data-intensive in nature and requires efficient collection and processing of large volumes of high-velocity data. Real-time multiclass sentiment analysis is oriented towards classifying text into more detailed sentiment labels in a real-time manner. However, multiclass sentiment analysis with a single-tier architecture, where a single classification model is developed and trained on the entire labeled data, may decrease classification accuracy. In this paper, a Real-time Multi-tier Sentiment Analysis system (RMSA) is proposed to achieve high-performance multi-class classification in a real-time manner. Lexicon- and learning-based classification schemes with a multi-tier architecture are combined to develop the proposed system. Real-time Twitter stream data is collected by Apache Flume, and large volumes of high-velocity social data are efficiently analyzed by Spark. To improve classification accuracy, the most suitable classifier is selected by comparing the accuracy of three different learning-based multiclass classification techniques: Naïve Bayes, Linear SVC, and Logistic Regression. The evaluation results show that real-time multi-tier sentiment analysis achieves promising accuracy and that Linear SVC is better than the other techniques for this task.
[ "Information Extraction & Text Mining", "Green & Sustainable NLP", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 3, 68, 36, 78, 24, 4 ]
SCOPUS_ID:85069215657
A Comparative Study of Machine Learning and Deep Learning Techniques for Sentiment Analysis
In this day and age, an increasing number of people are using online social networks and services not only to connect and communicate but also to voice their opinions. Sentiment analysis is the identification and categorization of these opinions to determine the public's opinion towards a particular topic, problem, product, etc. The importance of sentiment analysis is increasing day by day. Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Deep learning is a subfield of machine learning concerned with algorithms that are neural implementations, most commonly seen as neural networks, deep belief networks, etc. It is crucial to employ the most feasible and accurate technique while analyzing sentiments for given data, as this affects producers as well as consumers. This paper puts forward a study that compares various machine learning, deep learning, and hybrid techniques. It compares their accuracy for sentiment analysis, and it can be concluded that in most cases deep learning techniques give better results. However, in some cases the difference in the accuracies of the two kinds of techniques is not substantial, and it is then better to use machine learning methods as they are easier to implement.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85116441915
A Comparative Study of Methods for Visualizable Semantic Embedding of Small Text Corpora
Text embedding has recently emerged as a very useful and successful method for semantic representation. Following initial word-level embedding methods such as Latent Semantic Analysis (LSA) and topic-based bag-of-words approaches like Latent Dirichlet Allocation (LDA), the focus has turned to language models and text encoders implemented as neural networks - ranging from word-level models to those embedding whole documents. The distinctive feature of these models is their ability to infer semantic spaces at all levels based purely on data, with no need for complexities such as syntactic analysis or ontology building. Many of these models are available pre-trained on enormous amounts of data, providing downstream applications with general-purpose semantic spaces. In particular, embedding models at the sentence level or higher are most useful in applications because the meaning of text only becomes clear at that level. Most text embedding methods produce text embeddings in high-dimensional spaces, with a dimensionality ranging from a few hundred to thousands. However, it is often useful to visualize semantic spaces in very low dimension, which requires the use of dimensionality reduction methods. It is not clear what language models and what method of dimensionality reduction would work well in these cases. In this paper, we compare four text embedding methods in combination with three methods of dimensionality reduction to map three related real-world datasets comprising textual descriptions of items in a particular domain (sports) to a 2-dimensional semantic visualization space. The results provide several insights into the utility of these methods for data of this type.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
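One cell of the comparison grid described above, as a sketch: a TF-IDF embedding reduced to 2-D with PCA, assuming scikit-learn. The paper also uses neural sentence encoders and other reducers such as t-SNE, which would slot in at the marked lines; the sports-flavoured items are invented.

```python
# Embed short texts and project them to a 2-D visualization space;
# embedding and reducer are swappable, as in the paper's comparison.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

items = ["a fast counterattack down the wing", "a three-point shot at the buzzer",
         "a slow build-up through midfield", "a rebound and a quick layup"]

X = TfidfVectorizer().fit_transform(items).toarray()  # swap in a sentence encoder here
xy = PCA(n_components=2).fit_transform(X)             # swap in t-SNE/UMAP here
for text, (x, y) in zip(items, xy):
    print(f"({x:+.2f}, {y:+.2f})  {text}")
```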
http://arxiv.org/abs/2107.02852v2
A Comparative Study of Modular and Joint Approaches for Speaker-Attributed ASR on Monaural Long-Form Audio
Speaker-attributed automatic speech recognition (SA-ASR) is a task to recognize "who spoke what" from multi-talker recordings. An SA-ASR system usually consists of multiple modules such as speech separation, speaker diarization and ASR. On the other hand, considering the joint optimization, an end-to-end (E2E) SA-ASR model has recently been proposed with promising results on simulation data. In this paper, we present our recent study on the comparison of such modular and joint approaches towards SA-ASR on real monaural recordings. We develop state-of-the-art SA-ASR systems for both modular and joint approaches by leveraging large-scale training data, including 75 thousand hours of ASR training data and the VoxCeleb corpus for speaker representation learning. We also propose a new pipeline that performs the E2E SA-ASR model after speaker clustering. Our evaluation on the AMI meeting corpus reveals that after fine-tuning with a small real data, the joint system performs 8.9--29.9% better in accuracy compared to the best modular system while the modular system performs better before such fine-tuning. We also conduct various error analyses to show the remaining issues for the monaural SA-ASR.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
SCOPUS_ID:85113413233
A Comparative Study of N-gram and Skip-gram for Clinical Concepts Extraction
State-of-the-art technologies for clinical knowledge extraction are essential in a clinical decision support system (CDSS) to make a prediction of a diagnosis. Automatic analysis of a patient's health data is a requirement in such a process. The unstructured part of the data in electronic health records (EHR) is critical, as it may contain hidden risk factors. We present in this paper a comparative study of two well-known techniques, N-gram and Skip-gram, to enhance the extraction of risk-factor concepts from clinical narratives after applying initial natural language processing (NLP) techniques. We evaluate both techniques using a case-study dataset of records of patients with venous thromboembolism (VTE). The comparative study showed that N-gram achieved better precision, while Skip-gram produced better performance in terms of recall.
[ "Information Extraction & Text Mining" ]
[ 3 ]
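The two feature types side by side, as a sketch assuming NLTK is installed; the toy clinical phrase is invented:

```python
# Contiguous n-grams versus skip-grams over the same token sequence.
from nltk.util import ngrams, skipgrams

tokens = "history of deep vein thrombosis".split()

print(list(ngrams(tokens, 2)))        # contiguous bigrams
print(list(skipgrams(tokens, 2, 1)))  # bigrams allowing up to 1 skipped token
```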
SCOPUS_ID:85132901723
A Comparative Study of NLP based Semantic Web Standard model using SPARQL database
The work presents a natural language interface (NLI) to knowledge bases (KBs), a problem that draws considerable interest from many sectors. The framework uses CoreNLP software for natural language processing and queries KBs using the SPARQL query language. Natural Language Processing (NLP) interprets the semantics of a natural language query in order to retrieve and produce related information. Queries can be asked of KBs containing linked data with properly defined relationships. The linked data fits the RDF model in terms of semantic triples: subject-predicate-object relationships. Any KB can be understood semantically with this NLI. Given the correct training data, the system learns to grasp the semantics of the RDF data stored in the KB. Thanks to this capacity to understand RDF data, questions can be translated into SPARQL and answered using relational knowledge in the KB.
[ "Knowledge Representation", "Semantic Text Processing" ]
[ 18, 72 ]
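A sketch of the final step described above, running a SPARQL query against RDF triples, assuming rdflib; the namespace, triples, and query are invented, and the CoreNLP-based translation from natural language to SPARQL is outside this snippet:

```python
# Build a tiny RDF graph and execute a SPARQL query against it.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Paris, EX.capitalOf, EX.France))          # subject-predicate-object triple
g.add((EX.Paris, EX.population, Literal(2148000)))

# e.g. the NL question "What is the capital of France?" translated to SPARQL:
query = """
PREFIX ex: <http://example.org/>
SELECT ?city WHERE { ?city ex:capitalOf ex:France . }
"""
for row in g.query(query):
    print(row.city)
```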
SCOPUS_ID:85099594742
A Comparative Study of Named Entity Recognition on Myanmar Language
This paper presents the development of a Myanmar Named Entity Recognition (NER) system using Conditional Random Fields (CRFs). To develop the system, a manually annotated named entity (NE) corpus collected from Myanmar news websites and the Asian Language Treebank (ALT) parallel corpus has been used. We compare the performance of a system receiving syllable-based input to one receiving character-based input. We observed that the training data has a large impact on the performance of the system. The experimental results show that the syllable-based system performs better than the character-based system, achieving precision, recall, and F1-score values of 93.62%, 91.64%, and 92.62%, respectively.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85132698296
A Comparative Study of Natural Language Processing Algorithms Based on Cities Changing Diabetes Vulnerability Data
(1) Background: Poor adherence to management behaviors in Chinese Type 2 diabetes mellitus (T2DM) patients leads to an uncontrolled prognosis of diabetes, which results in significant economic costs for China. It is imperative to quickly locate vulnerability factors in the management behavior of patients with T2DM. (2) Methods: In this study, a thematic analysis of the collected interview materials was conducted to construct the themes of T2DM management vulnerability. We explored the applicability of the pre-trained models based on the evaluation metrics in text classification. (3) Results: We constructed 12 themes of vulnerability related to the health and well-being of people with T2DM in Tianjin. We considered that Bidirectional Encoder Representation from Transformers (BERT) performed better in this Natural Language Processing (NLP) task with a shorter completion time. With the splitting ratio of 6:3:1 and batch size of 64 for BERT, the test accuracy was 97.71%, the completion time was 10 min 24 s, and the macro-F1 score was 0.9752. (4) Conclusions: Our results proved the applicability of NLP techniques in this specific Chinese-language medical environment. We filled the knowledge gap in the application of NLP technologies in diabetes management. Our study provided strong support for using NLP techniques to rapidly locate vulnerability factors in T2DM management.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85061895487
A Comparative Study of Neural Network Models for Sentence Classification
This paper presents an extensive comparative study of four neural network models, including feed-forward networks, convolutional networks, recurrent networks and long short-term memory networks, on two sentence classification datasets of English and Vietnamese text. We show that on the English dataset, the convolutional network models without any feature engineering outperform some competitive sentence classifiers with rich hand-crafted linguistic features. We demonstrate that the GloVe word embeddings are consistently better than both Skip-gram word embeddings and word count vectors. We also show the superiority of convolutional neural network models on a Vietnamese newspaper sentence dataset over strong baseline models. Our experimental results suggest some good practices for applying neural network models in sentence classification.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
SCOPUS_ID:85097199816
A Comparative Study of Opinion Summarization Techniques
On Web 3.0 platforms, an enormous amount of information is shared whereby individuals express their thoughts and opinions and learn from others' experiences. Many e-commerce websites, including eBay, Amazon, and Yahoo Shopping, provide a service for posting opinionated reviews, allowing consumers to post their opinions as free text. Summarizing text is an interesting task in Natural Language Processing (NLP). The proposed work presents a comparative study of different techniques used for opinion summarization. It covers both abstractive and extractive approaches, where summaries of sentences are produced by considering aspects. This article addresses gaps in previous studies by proposing a novel graph-based technique for generating abstractive summaries of duplicate sentences. The method constructs graphs, ensures sentence correctness using constraints, and finally scores the sentences individually by fusing sentiments using SentiWordNet. The extractive approach uses principal component analysis (PCA): it summarizes text by reducing the number of dimensions in the data (aspects) and finds a summary of the reviews by ranking the most relevant ones according to the prime aspects, without loss of information relevant to a particular domain. The analysis is conducted on the standard Opinosis dataset, and the two techniques are compared to determine which method generates a more coherent and complete summary.
[ "Opinion Mining", "Structured Data in NLP", "Summarization", "Multimodality", "Text Generation", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 49, 50, 30, 74, 47, 78, 3 ]
SCOPUS_ID:85075045115
A Comparative Study of Optical Character Recognition in Health Information System
Most health institutes are transitioning between documents in physical format and digital format, so it is pertinent and important to develop applications that help health professionals with this transition. An application to aid the process of digitization of documents was developed using a Python library. To help decide which library to use, a study was conducted on the precision and execution speed of PyOCR, PyTesseract, and TesseOCR.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
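A minimal sketch of one of the three compared libraries, PyTesseract, timed as in the study; this assumes the Tesseract binary is installed, and `scanned_record.png` is a hypothetical placeholder path:

```python
# OCR a scanned page and measure execution time, one of the study's criteria;
# the input file is a placeholder, not an asset from the paper.
import time
from PIL import Image
import pytesseract

img = Image.open("scanned_record.png")   # hypothetical scanned health document

start = time.perf_counter()
text = pytesseract.image_to_string(img)  # OCR the page
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s\n{text}")
```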
SCOPUS_ID:85087051141
A Comparative Study of Parametric Versus Non-Parametric Text Classification Algorithms
The evolution of modern technologies has allowed text to be stored in various digital formats such as e-mails, e-documents, libraries, etc. The amount of text data produced daily is increasing dramatically. Discovering useful patterns in text, which can be represented in unstructured, semi-structured, or structured format, is a difficult task that requires a good understanding of machine learning algorithms. Finding a suitable algorithm for text mining tasks such as classification, clustering, or natural language processing is a demanding situation that tests researchers' abilities. This paper provides an overview of the text mining process and presents a comparison of the performance and limitations of two predictive models, generated using the parametric Naïve Bayes algorithm and a non-parametric deep learning neural network. The RapidMiner data science software platform has been used for the models' implementation and for e-mail classification.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85060038942
A Comparative Study of Polarity Lexicons to Identify Extreme Opinions
This paper compares a method for automatically building a sentiment lexicon with four well-known sentiment lexicons. For this purpose, an indirect evaluation is carried out: the lexicons are integrated into supervised sentiment classifiers and their performance is evaluated on two sentiment classification tasks, identifying i) the most negative vs. not most negative opinions, and ii) the most positive vs. not most positive opinions. Moreover, a set of textual features is integrated into the classifiers so as to analyze how these textual features improve lexicon performance.
[ "Text Classification", "Polarity Analysis", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 33, 78, 24, 3 ]
https://aclanthology.org//2022.repl4nlp-1.6/
A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition
Pre-trained language models (PLM) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data. However, their performance in low-resource scenarios, where such data is not available, remains an open question. We introduce an encoder evaluation framework, and use it to systematically compare the performance of state-of-the-art pre-trained representations on the task of low-resource NER. We analyze a wide range of encoders pre-trained with different strategies, model architectures, intermediate-task fine-tuning, and contrastive learning. Our experimental results across ten benchmark NER datasets in English and German show that encoder performance varies significantly, suggesting that the choice of encoder for a specific low-resource scenario needs to be carefully evaluated.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Representation Learning", "Named Entity Recognition", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 3, 12, 34, 4 ]
SCOPUS_ID:85136970139
A Comparative Study of Pre-trained Word Embeddings for Arabic Sentiment Analysis
In this paper, we conduct a series of experiments to systematically study both context-independent and context-dependent word embeddings for the purpose of Arabic sentiment analysis. We use pre-trained word embeddings as fixed feature extractors to provide input features for a CNN model. Experimental results with two different Arabic sentiment analysis datasets indicate that the pre-trained contextualized AraBERT model is the most suitable for such tasks. AraBERT reaches accuracy scores of 91.4% and 95.49% on the large Arabic book reviews dataset (LABR) and the hotel Arabic-reviews dataset (HARD), respectively.
[ "Representation Learning", "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 12, 52, 72, 78 ]
SCOPUS_ID:85082303429
A Comparative Study of Pretrained Language Models on Thai Social Text Categorization
The ever-growing volume of data of user-generated content on social media provides a nearly unlimited corpus of unlabeled data even in languages where resources are scarce. In this paper, we demonstrate that state-of-the-art results on two Thai social text categorization tasks can be realized by pretraining a language model on a large noisy Thai social media corpus of over 1.26 billion tokens and later fine-tuned on the downstream classification tasks. Due to the linguistically noisy and domain-specific nature of the content, our unique data preprocessing steps designed for Thai social media were utilized to ease the training comprehension of the model. We compared four modern language models: ULMFiT, ELMo with biLSTM, OpenAI GPT, and BERT. We systematically compared the models across different dimensions including speed of pretraining and fine-tuning, perplexity, downstream classification benchmarks, and performance in limited pretraining data.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
http://arxiv.org/abs/2211.08170v1
A Comparative Study of Question Answering over Knowledge Bases
Question answering over knowledge bases (KBQA) has become a popular approach to help users extract information from knowledge bases. Although several systems exist, choosing one suitable for a particular application scenario is difficult. In this article, we provide a comparative study of six representative KBQA systems on eight benchmark datasets. In that, we study various question types, properties, languages, and domains to provide insights on where existing systems struggle. On top of that, we propose an advanced mapping algorithm to aid existing models in achieving superior results. Moreover, we also develop a multilingual corpus COVID-KGQA, which encourages COVID-19 research and multilingualism for the diversity of future AI. Finally, we discuss the key findings and their implications as well as performance guidelines and some future improvements. Our source code is available at \url{https://github.com/tamlhp/kbqa}.
[ "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Knowledge Representation", "Multilinguality" ]
[ 72, 27, 11, 18, 0 ]
SCOPUS_ID:85123312148
A Comparative Study of Recent Feature Selection Techniques Used in Text Classification
As we all know, handling large amounts of data is a problem these days. Despite having so many resources to store, train on, and process the data, it is still necessary to reduce these datasets in order to reduce computational complexity, save time and cost, and retrieve valuable information from large text documents. The performance of a machine learning algorithm depends upon the dataset used. When the dataset is large, the learning algorithm tries to accommodate all the features, which increases the dimensionality of the data. This high-dimensional data is not useful, as it might contain irrelevant and redundant features, and it becomes important to remove them. Thus, pre-processing of data is required to compress and analyse the dataset for the purpose of text classification (TC). This can be achieved by using feature selection (FS) techniques. The fundamental goal of FS techniques is to recognize pertinent features and to get rid of repetitive attributes in high-dimensional data [1]. Nowadays, major FS methods use optimization algorithms [2] to obtain an ideal feature subset from the high-dimensional feature space, which decreases computational expense and increases classifier precision. Some recent feature selection techniques that can prove useful for text classification (TC) are discussed in this paper.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
http://arxiv.org/abs/1801.05420v2
A Comparative Study of Rule Extraction for Recurrent Neural Networks
Understanding recurrent networks through rule extraction has a long history. This has taken on new interest due to the need to interpret or verify neural networks. One basic form for representing stateful rules is deterministic finite automata (DFA). Previous research shows that extracting DFAs from trained second-order recurrent networks is not only possible but also relatively stable. Recently, several new types of recurrent networks with more complicated architectures have been introduced. These handle challenging learning tasks usually involving sequential data. However, it remains an open problem whether DFAs can be adequately extracted from these models. Specifically, it is not clear how DFA extraction will be affected when applied to different recurrent networks trained on data sets with different levels of complexity. Here, we investigate DFA extraction on several widely adopted recurrent networks that are trained to learn a set of seven regular Tomita grammars. We first formally analyze the complexity of Tomita grammars and categorize these grammars according to that complexity. Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars. Our experiments show that for most recurrent networks, extraction performance decreases as the complexity of the underlying grammar increases. On grammars of lower complexity, most recurrent networks obtain desirable extraction performance, while on grammars with the highest level of complexity, several complicated models fail, with only certain recurrent networks achieving satisfactory extraction performance.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85135745767
A Comparative Study of Self-Supervised Speech Representation Based Voice Conversion
We present a large-scale comparative study of self-supervised speech representation (S3R)-based voice conversion (VC). In the context of recognition-synthesis VC, S3Rs are attractive owing to their potential to replace expensive supervised representations such as phonetic posteriorgrams (PPGs), which are commonly adopted by state-of-the-art VC systems. Using S3PRL-VC, an open-source VC software we previously developed, we provide a series of in-depth objective and subjective analyses under three VC settings: intra-/cross-lingual any-to-one (A2O) and any-to-any (A2A) VC, using the voice conversion challenge 2020 (VCC2020) dataset. We investigated S3R-based VC in various aspects, including model type, multilinguality, and supervision. We also studied the effect of a post-discretization process with k-means clustering and showed how it improves in the A2A setting. Finally, the comparison with state-of-the-art VC systems demonstrates the competitiveness of S3R-based VC and also sheds light on the possible improving directions.
[ "Multilinguality", "Low-Resource NLP", "Semantic Text Processing", "Speech & Audio in NLP", "Representation Learning", "Multimodality", "Cross-Lingual Transfer", "Responsible & Trustworthy NLP" ]
[ 0, 80, 72, 70, 12, 74, 19, 4 ]
SCOPUS_ID:85123752577
A Comparative Study of Sentiment Analysis Tools
The COVID-19 outbreak compelled people to stay at home due to complete lockdowns in all working areas. The immense use of the World Wide Web and social media to exchange and share opinions generated enormous web data to be utilized in research work in the Natural Language Processing (NLP) field. As a dominant area of NLP, sentiment analysis uses numerous tools to classify human sentiments as positive (1), negative (-1), and neutral (0) so as to reach various conclusions. This research work focuses on sentiment analysis of four datasets, web-scraped from four different sources: Twitter, Facebook, Economic Times headlines, and stock-market news articles. Seven contemporary and widely used sentiment analysis tools are considered here: Stanford, SVC, TextBlob, Henry, Loughran-McDonald, Logistic Regression, and VADER. Each of the four scraped datasets is processed individually, and the results are analysed in two ways: the Facebook-scraped data generates the highest overall positive sentiment score, 38.17%, and the VADER tool performs best among the seven tools, calculating an overall positive sentiment score of 56.63%.
[ "Sentiment Analysis" ]
[ 78 ]
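One of the seven compared tools, VADER, as a minimal sketch assuming the vaderSentiment package is installed; the example tweet and the ±0.05 compound thresholds follow common convention rather than the paper:

```python
# Score one text with VADER and map the compound score to {1, 0, -1}.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("Markets rallied today, great news for investors!")
compound = scores["compound"]
# convention: compound >= 0.05 positive, <= -0.05 negative, else neutral
label = 1 if compound >= 0.05 else (-1 if compound <= -0.05 else 0)
print(scores, label)
```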
SCOPUS_ID:85124044127
A Comparative Study of Sentiment Analysis Using NLP and Different Machine Learning Techniques on US Airline Twitter Data
Today's business ecosystem has become very competitive, and customer satisfaction has become a major focus for business growth. Business organizations are spending a lot of money and human resources on various strategies to understand and fulfill their customers' needs. But, because of defective manual analysis of the multifarious needs of customers, many organizations are failing to achieve customer satisfaction. As a result, they are losing customers' loyalty and spending extra money on marketing. We can solve these problems by implementing sentiment analysis, a combined technique of Natural Language Processing (NLP) and Machine Learning (ML). Sentiment analysis is broadly used to extract insights from wider public opinion on certain topics, products, and services, and it can be applied to any online available data. In this paper, we have introduced two NLP techniques (Bag-of-Words and TF-IDF) and various ML classification algorithms (Support Vector Machine, Logistic Regression, Multinomial Naive Bayes, Random Forest) to find an effective approach for sentiment analysis on a large, imbalanced, and multi-class dataset. Our best approaches provide 77% accuracy using Support Vector Machine and Logistic Regression with the Bag-of-Words technique.
[ "Sentiment Analysis" ]
[ 78 ]
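The Bag-of-Words plus SVM combination that the abstract above reports as its best performer can be sketched with scikit-learn as below; the toy tweets and labels are placeholders, not the US Airline dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy tweets standing in for the airline data; 0 = negative, 1 = neutral, 2 = positive.
tweets = ["flight delayed again, awful customer service",
          "crew was friendly and boarding was fast",
          "plane landed on time at the gate"]
labels = [0, 2, 1]

bow_svm = Pipeline([
    ("vec", CountVectorizer()),   # Bag-of-Words term counts
    ("clf", LinearSVC()),         # linear SVM, one of the paper's best performers
])
bow_svm.fit(tweets, labels)
print(bow_svm.predict(["boarding was fast and the crew friendly"]))
# Swapping CountVectorizer for TfidfVectorizer gives the TF-IDF variant.
```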
SCOPUS_ID:85062841415
A Comparative Study of Sentiment-Based Graphs of Text Summaries
The sentiment included in a sentence can indicate whether the sentence has positive, negative or neutral polarity. The polarity of sentences is deemed important in text summarization, especially when summarizing narrative texts. This paper proposes to discover the patterns and sentiment scores of the summaries generated by established summarization methods: Luhn, Latent Semantic Analysis (LSA) and LexRank. This is done by conducting a study and comparison of the generated sentiment-based graphs of the summaries. A comparative study is conducted on the sentiment-based graphs of the generated summaries with two different sentiment lexicons, namely SentiWordNet and VADER. The analysis compares the patterns of the sentiment-based graphs as well as their sentiment scores. In the experiments conducted, an obvious pattern emerges for the two sentiment lexicons. This implies that a sentiment-based graph's pattern and score are helpful in generating a compact summary. The analysis will facilitate future research on sentiment-based summarization and motivates a new method, a graph-based summarization approach that extracts a summary based on its sentiment score.
[ "Structured Data in NLP", "Summarization", "Multimodality", "Text Generation", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 50, 30, 74, 47, 78, 3 ]
http://arxiv.org/abs/2003.04972v1
A Comparative Study of Sequence Classification Models for Privacy Policy Coverage Analysis
Privacy policies are legal documents that describe how a website will collect, use, and distribute a user's data. Unfortunately, such documents are often overly complicated and filled with legal jargon, making it difficult for users to fully grasp what exactly is being collected and why. Our solution to this problem is to provide users with a coverage analysis of a given website's privacy policy using a wide range of classical machine learning and deep learning techniques. Given a website's privacy policy, the classifier identifies the associated data practice for each logical segment. These data practices/labels are taken directly from the OPP-115 corpus. For example, the data practice "Data Retention" refers to how long a website stores a user's information. The coverage analysis allows users to determine how many of the ten possible data practices are covered, along with identifying the sections that correspond to the data practices of particular interest.
[ "Text Classification", "Ethical NLP", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 17, 4, 24, 3 ]
SCOPUS_ID:85141202766
A Comparative Study of Short Text Classification with Spiking Neural Networks
Short text classification is an important task widely used in many applications. However, few works have investigated applying Spiking Neural Networks (SNNs) to text classification. To the best of our knowledge, there have been no attempts to apply SNNs as classifiers of short texts. In this paper, we offer a comparative study of short text classification using SNNs. To this end, we selected and evaluated three popular implementations of SNNs: evolving Spiking Neural Networks (eSNN), the NeuCube implementation of SNNs, as well as the SNNTorch implementation that is available as a Python language package. In order to test the selected classifiers, we selected and preprocessed three publicly available datasets: the 20-newsgroup dataset as well as imbalanced and balanced PubMed datasets of medical publications. The preprocessed 20-newsgroup dataset consists of the first 100 words of each text, while for the classification of the PubMed datasets we use only the title of each publication. As the text representation of documents, we applied TF-IDF encoding. In this work, we also offer a new encoding method for eSNN networks that can effectively encode values of input features having non-uniform distributions; the designed method works especially effectively with TF-IDF encoding. The results of our study suggest that SNN networks may provide classification quality in some cases matching or outperforming other types of classifiers.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85065740702
A Comparative Study of Supervised and Unsupervised Classifiers Utilizing Extractive Text Summarization Techniques to Support Automated Customer Query Question-Answering
Customer service largely involves one-way communication, where the organization usually controls the point of interaction through a call center, a helpdesk email address, or even a postal address. The challenges faced by this model are 1) response time (the time it takes a customer to get a response about an inquiry they have made) and 2) response rate (the rate at which customer inquiries are retrieved and attended to). This paper looks at the use of machine learning algorithms and classifiers, utilizing extractive text summarization techniques for semantic and key-phrase extraction of customer queries, to facilitate customer response retrieval from a Frequently Asked Questions database. A comparative study of two text summarization approaches (supervised and unsupervised) is carried out by implementing a prototype of an automated agent that responds to customer queries in an electronic media domain. The study illustrates the use of machine learning and text summarization techniques to develop tools that can assist organizations in managing their customer interactions effectively and in implementing robust, efficient, and effective electronic-media-enabled customer support mechanisms.
[ "Low-Resource NLP", "Text Classification", "Question Answering", "Summarization", "Natural Language Interfaces", "Text Generation", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 80, 36, 27, 30, 11, 47, 4, 24, 3 ]
SCOPUS_ID:85137780551
A Comparative Study of Supervised and Unsupervised Machine Learning Algorithms on Consumer Reviews
For any organization involving consumers, reviews and feedback are quite important. Bulk data is generated from various social networking sites in the form of reviews and feedback. In order to understand consumers' perception of an item, this research scrutinizes various supervised and unsupervised machine learning algorithms on two datasets. A comparative analysis is made to assess the efficiency of these algorithms on distinct datasets for text classification. This research is an attempt to find the best-fit classifier for consumers' perception using sentiment analysis. To accomplish this objective, text preprocessing techniques are first applied to the datasets, then feature extraction techniques are applied to the processed data. Thereafter, classification and clustering are applied using supervised and unsupervised machine learning algorithms, respectively. Further, these algorithms are evaluated, and the results reveal that supervised machine learning algorithms, especially Support Vector Machine (SVM), outperform unsupervised machine learning algorithms for the garments dataset, while Naive Bayes (NB) and Logistic Regression (LR) outperform unsupervised machine learning algorithms for the restaurant dataset.
[ "Low-Resource NLP", "Information Extraction & Text Mining", "Text Classification", "Text Clustering", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 80, 3, 36, 29, 24, 4 ]
SCOPUS_ID:85084984934
A Comparative Study of Support Vector Machine and Naive Bayes Classifier for Sentiment Analysis on Amazon Product Reviews
This paper presents a comparison between two machine learning approaches for analyzing the sentiment of customers' reviews on Amazon products. Reviews of a product help customers to understand its quality. Incorporating multiple product review factors, including product quality, content, time of the review relative to product durability, and historically older positive customer reviews, will affect product rankings accordingly. Conversely, manually analyzing comments at a large scale is time-consuming, inefficient, and unproductive. In this era of artificial intelligence, machine learning is a convenient way to train such models, and it would be much easier to go through thousands of comments if a model were adopted to polarize those reviews and learn from them. In this research work, the sentiment of the consumer is first analyzed by the Naive Bayes classifier. The support vector machine (SVM) then classifies the sentiments of the users into binary categories. The data is passed through the models after preprocessing, with term frequency-inverse document frequency (TF-IDF) used to evaluate the features. To sum up, the goal of this research is to find the comparatively better machine learning approach among SVM and the Naive Bayes classifier based on statistical measurement.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
http://arxiv.org/abs/2110.03142v1
A Comparative Study of Transformer-Based Language Models on Extractive Question Answering
Question Answering (QA) is a task in natural language processing that has seen considerable growth after the advent of transformers. There has been a surge in QA datasets that have been proposed to challenge natural language processing models to improve human and existing model performance. Many pre-trained language models have proven to be incredibly effective at the task of extractive question answering. However, generalizability remains a challenge for the majority of these models. That is, some datasets require models to reason more than others. In this paper, we train various pre-trained language models and fine-tune them on multiple question answering datasets of varying levels of difficulty to determine which of the models are capable of generalizing the most comprehensively across different datasets. Further, we propose a new architecture, BERT-BiLSTM, and compare it with other language models to determine if adding more bidirectionality can improve model performance. Using the F1-score as our metric, we find that the RoBERTa and BART pre-trained models perform the best across all datasets and that our BERT-BiLSTM model outperforms the baseline BERT model.
[ "Language Models", "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Information Extraction & Text Mining" ]
[ 52, 72, 27, 11, 3 ]
http://arxiv.org/abs/2111.15417v1
A Comparative Study of Transformers on Word Sense Disambiguation
Recent years of research in Natural Language Processing (NLP) have witnessed dramatic growth in training large models for generating context-aware language representations. In this regard, numerous NLP systems have leveraged the power of neural network-based architectures to incorporate sense information in embeddings, resulting in Contextualized Word Embeddings (CWEs). Despite this progress, the NLP community has not witnessed any significant work performing a comparative study on the contextualization power of such architectures. This paper presents a comparative study and an extensive analysis of nine widely adopted Transformer models. These models are BERT, CTRL, DistilBERT, OpenAI-GPT, OpenAI-GPT2, Transformer-XL, XLNet, ELECTRA, and ALBERT. We evaluate their contextualization power using two lexical sample Word Sense Disambiguation (WSD) tasks, SensEval-2 and SensEval-3. We adopt a simple yet effective approach to WSD that uses a k-Nearest Neighbor (kNN) classification on CWEs. Experimental results show that the proposed techniques also achieve superior results over the current state-of-the-art on both the WSD tasks.
[ "Language Models", "Semantic Text Processing", "Word Sense Disambiguation", "Representation Learning" ]
[ 52, 72, 65, 12 ]
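The kNN-on-CWE approach described above can be approximated as follows: extract the contextualized vector of the target word from a transformer's last layer and classify it with a k-Nearest Neighbor classifier. This is a hedged sketch using BERT only (the paper compares nine models); the example sentences and sense labels are invented for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.neighbors import KNeighborsClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def target_embedding(sentence: str, target: str) -> torch.Tensor:
    """Mean of the last-layer vectors of the subword pieces of `target`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, 768)
    target_ids = tok(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):         # locate the target span
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError("target word not found in sentence")

# (sentence, sense label) training items for one lexical-sample word.
train = [("He sat on the bank of the river.", "river_bank"),
         ("She deposited cash at the bank.", "finance_bank")]
X = torch.stack([target_embedding(s, "bank") for s, _ in train]).numpy()
y = [label for _, label in train]
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)

query = target_embedding("The boat drifted toward the bank.", "bank").numpy()
print(knn.predict([query]))   # expected: river_bank
```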
SCOPUS_ID:85077127495
A Comparative Study of Using Bag-of-Words and Word-Embedding Attributes in the Spoiler Classification of English and Thai Text
This research compares the effectiveness of using traditional bag-of-words and word-embedding attributes to classify movie comments as spoiler or non-spoiler. Both approaches were applied to comments in English, an inflectional language, and in Thai, a non-inflectional language. Experimental results suggested that in terms of classification performance, word embedding was not clearly better than bag of words. Yet, a decision to choose it over bag of words could be justified by its scalability. Between Word2Vec and FastText embeddings, the former was favorable when few out-of-vocabulary (OOV) words were present. Finally, although FastText was expected to be helpful with a large number of OOV words, its benefit was hardly seen for the Thai language.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
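The OOV behavior contrasted in the abstract above can be demonstrated with gensim: Word2Vec has no vector for an unseen word, while FastText composes one from character n-grams. A small sketch with toy sentences (not the paper's data):

```python
from gensim.models import Word2Vec, FastText

sentences = [["the", "movie", "ending", "was", "spoiled"],
             ["great", "acting", "no", "spoilers", "here"]]

w2v = Word2Vec(sentences, vector_size=50, min_count=1)
ft = FastText(sentences, vector_size=50, min_count=1)

print("spoilerific" in w2v.wv.key_to_index)  # False: OOV for Word2Vec
# w2v.wv["spoilerific"] would raise KeyError, so OOV words need a fallback.
oov_vec = ft.wv["spoilerific"]  # FastText composes a vector from char n-grams
print(oov_vec.shape)            # (50,)
```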
SCOPUS_ID:85107689427
A Comparative Study of Using Pre-Trained Language Models for Toxic Comment Classification
As user-generated content thrives, so does the spread of toxic comments. Therefore, detecting toxic comments has become an active research area, and it is often handled as a text classification task. As recent popular methods for text classification tasks, pre-trained language model-based methods are at the forefront of natural language processing, achieving state-of-the-art performance on various NLP tasks. However, there is a paucity of studies using such methods for toxic comment classification. In this work, we study how to best make use of pre-trained language model-based methods for toxic comment classification and the performance of different pre-trained language models on these tasks. Our results show that, of the three most popular language models, i.e. BERT, RoBERTa, and XLM, BERT and RoBERTa generally outperform XLM on toxic comment classification. We also show that using a basic linear downstream structure outperforms complex ones such as CNN and BiLSTM. What is more, we find that further fine-tuning a pre-trained language model with light hyper-parameter settings brings improvements to the downstream toxic comment classification task, especially when the task has a relatively small dataset.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
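A minimal sketch of the winning recipe described above (a pre-trained encoder with a basic linear downstream head, lightly fine-tuned); the model name, toy comments, and hyperparameters are illustrative assumptions, not the paper's exact settings.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# AutoModelForSequenceClassification adds a single linear head on the pooled output.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # toxic vs. non-toxic

texts = ["you are an idiot", "thanks for the helpful answer"]
labels = torch.tensor([1, 0])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

# "Light" fine-tuning in the spirit of the paper: small LR, few steps.
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):
    out = model(**batch, labels=labels)
    out.loss.backward()
    optim.step()
    optim.zero_grad()
```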
http://arxiv.org/abs/1703.00993v1
A Comparative Study of Word Embeddings for Reading Comprehension
The focus of past machine learning research for Reading Comprehension tasks has been primarily on the design of novel deep learning architectures. Here we show that seemingly minor choices made on (1) the use of pre-trained word embeddings, and (2) the representation of out-of-vocabulary tokens at test time, can turn out to have a larger impact than architectural choices on the final performance. We systematically explore several options for these choices, and provide recommendations to researchers working in this area.
[ "Machine Reading Comprehension", "Reasoning", "Semantic Text Processing", "Representation Learning" ]
[ 37, 8, 72, 12 ]
SCOPUS_ID:85115879697
A Comparative Study of Word Embeddings for the Construction of a Social Media Expert Filter
With the proliferation of fake news and misinformation on social media, being able to identify a reliable source of information has become increasingly important. In this paper we present a new algorithm for filtering expert users in social networks according to a certain topic under study. For fine-tuning the algorithm, a comparative study of results according to different word embeddings as well as different representation models, such as Skip-Gram and CBOW, is provided in the paper.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
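The Skip-Gram and CBOW representation models compared in the abstract above differ in gensim only by the sg flag, as the sketch below shows; the toy tweets are placeholders, not the paper's corpus.

```python
from gensim.models import Word2Vec

tweets = [["vaccine", "trial", "results", "published"],
          ["new", "vaccine", "study", "peer", "reviewed"]]

cbow = Word2Vec(tweets, sg=0, vector_size=100, window=5, min_count=1)       # CBOW
skipgram = Word2Vec(tweets, sg=1, vector_size=100, window=5, min_count=1)   # Skip-Gram
print(skipgram.wv.most_similar("vaccine", topn=3))
```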
http://arxiv.org/abs/cmp-lg/9705012v1
A Comparative Study of the Application of Different Learning Techniques to Natural Language Interfaces
In this paper we present the first results from a comparative study. Its aim is to test the feasibility of different inductive learning techniques to perform the automatic acquisition of linguistic knowledge within a natural language database interface. In our interface architecture the machine learning module replaces an elaborate semantic analysis component. The learning module learns the correct mapping of a user's input to the corresponding database command based on a collection of past input data. We use an existing interface to a production planning and control system for evaluation and compare the results achieved by different instance-based and model-based learning algorithms.
[ "Natural Language Interfaces" ]
[ 11 ]
SCOPUS_ID:84965441316
A Comparative Study of the Effects of a Developmentally Based Instructional Model on Young Children with Autism and Young Children with Other Disorders of Behavior and Development
The progress made by two different groups of preschool children, those with autism or related disorders and those with other emotional/behavioral and developmental disorders, in a particular instruction model was examined. The model was developmentally based and heavily influenced by Piaget's theory of cognitive development, pragmatics theory of language development, and Mahler's theory of the development of interpersonal relationships. Both groups of children made greater progress than was predicted by their initial developmental rates in cognitive and language areas. An important and unexpected finding was the similar amount of progress made by the two groups: specifically, the group with autism did not make less progress than the comparison group, which ran contrary to our hypothesis.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85087056151
A Comparative Study of the Performance of Unsupervised Text Segmentation Techniques on Dialogue Transcripts
Contact centers provide customer interaction support to numerous organizations. In 2017, the contact center industry generated 200 billion in revenue worldwide, contributing a significant proportion of market share, and yet businesses lost 75 billion due to poor customer satisfaction. Around 48% of consumers prefer using phones as their mode of communication with contact centers. Analysis of these calls can give insights into customer views and help businesses improve their customer engagement. To understand the structure and flow of a conversation, the conversation transcript can be segmented into meaningful sections such as 'greeting exchange', 'problem description' and 'problem resolution', to name a few. In this paper, we present a comparative study of various unsupervised methods of dialogue segmentation. We choose three classic unsupervised text segmentation techniques, TextTiling, TopicTiling, and Content Vector Segmentation, and evaluate their performance on 50 manually labeled dialogue conversation transcripts. The transcripts used span contact center calls, live chat, interactions with chat-bots and talk show conversations. Additionally, we build on the TextTiling algorithm by incorporating semantic word embeddings for text representation. We show that this modification outperforms the three benchmarked approaches with a mean Pk value of 0.31, indicating that 69% of the boundaries are identified accurately on average.
[ "Low-Resource NLP", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Text Segmentation", "Responsible & Trustworthy NLP" ]
[ 80, 72, 15, 12, 11, 38, 21, 4 ]
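The TextTiling baseline benchmarked above is available in NLTK. Below is a hedged sketch on a short invented transcript; TextTiling expects reasonably long input with blank-line paragraph breaks, so the window parameters are shrunk here for the toy example.

```python
# pip install nltk
import nltk
from nltk.tokenize import TextTilingTokenizer

nltk.download("stopwords", quiet=True)  # TextTiling relies on the English stopword list

transcript = (
    "Hello, thank you for calling, how are you today. I am fine, thanks for asking.\n\n"
    "I am calling because my router keeps dropping the connection every few minutes "
    "and restarting it does not help at all, it has been like this since Monday.\n\n"
    "Okay, I have reset the line from our side, please reboot the device and the "
    "connection should be stable now. Great, that seems to have fixed it, thank you."
)
# Small window (w) and block (k) sizes so the toy transcript is long enough to segment.
tt = TextTilingTokenizer(w=10, k=2)
for i, segment in enumerate(tt.tokenize(transcript)):
    print(f"--- segment {i} ---", segment.strip()[:60])
```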
SCOPUS_ID:85143297984
A Comparative Study of Classification and Clustering Methods from Text of Books
Book collections in libraries are an important means of information, but without proper assignment of books into appropriate categories, searching for books on similar topics is very troublesome for both librarians and readers. This is a difficult problem because it requires analyzing large sets of real text data, such as the full content of books. For this purpose, we propose a model system that automatically assigns books to appropriate categories by analyzing the text of their content. Our research was tested on a database consisting of 552 documents, each containing the full content of a book. All books are from Project Gutenberg, in the Art, Biology, Mathematics, Philosophy, or Technology category. Well-known natural language processing (NLP) techniques were used for proper preprocessing of the book content and for data analysis. Then, two different machine learning approaches were used, classification (supervised learning) and clustering (unsupervised learning), in order to properly assign books to the selected categories. Measures of accuracy, precision and recall were used to evaluate the quality of classification. In our research, good classification results were obtained, with accuracy even above 90%. Also, the use of clustering algorithms allowed for effective assignment of books to categories.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Text Clustering" ]
[ 3, 24, 36, 29 ]
SCOPUS_ID:85144415660
A Comparative Study of Question Answering over Knowledge Bases
Question answering over knowledge bases (KBQA) has become a popular approach to help users extract information from knowledge bases. Although several systems exist, choosing one suitable for a particular application scenario is difficult. In this article, we provide a comparative study of six representative KBQA systems on eight benchmark datasets. In particular, we study various question types, properties, languages, and domains to provide insights on where existing systems struggle. On top of that, we propose an advanced mapping algorithm to aid existing models in achieving superior results. Moreover, we also develop a multilingual corpus COVID-KGQA, which encourages COVID-19 research and multilingualism for the diversity of future AI. Finally, we discuss the key findings and their implications as well as performance guidelines and some future improvements. Our source code is available at https://github.com/tamlhp/kbqa.
[ "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Knowledge Representation", "Multilinguality" ]
[ 72, 27, 11, 18, 0 ]
SCOPUS_ID:85121934925
A Comparative Study of Transformers on Word Sense Disambiguation
Recent years of research in Natural Language Processing (NLP) have witnessed dramatic growth in training large models for generating context-aware language representations. In this regard, numerous NLP systems have leveraged the power of neural network-based architectures to incorporate sense information in embeddings, resulting in Contextualized Word Embeddings (CWEs). Despite this progress, the NLP community has not witnessed any significant work performing a comparative study on the contextualization power of such architectures. This paper presents a comparative study and an extensive analysis of nine widely adopted Transformer models. These models are BERT, CTRL, DistilBERT, OpenAI-GPT, OpenAI-GPT2, Transformer-XL, XLNet, ELECTRA, and ALBERT. We evaluate their contextualization power using two lexical sample Word Sense Disambiguation (WSD) tasks, SensEval-2 and SensEval-3. We adopt a simple yet effective approach to WSD that uses a k-Nearest Neighbor (kNN) classification on CWEs. Experimental results show that the proposed techniques also achieve superior results over the current state-of-the-art on both the WSD tasks.
[ "Language Models", "Semantic Text Processing", "Word Sense Disambiguation", "Representation Learning" ]
[ 52, 72, 65, 12 ]
http://arxiv.org/abs/2208.01355v1
A Comparative Study on COVID-19 Fake News Detection Using Different Transformer Based Models
The rapid advancement of social networks and the convenience of internet availability have accelerated the rampant spread of false news and rumors on social media sites. Amid the COVID-19 epidemic, this misleading information has aggravated the situation by putting people's mental and physical lives in danger. To limit the spread of such inaccuracies, identifying fake news on online platforms could be the first and foremost step. In this research, the authors have conducted a comparative analysis by implementing five transformer-based models, BERT, BERT without LSTM, ALBERT, RoBERTa, and a hybrid of BERT & ALBERT, in order to detect fraudulent COVID-19 news from the internet. The COVID-19 Fake News Dataset has been used for training and testing the models. Among all these models, the RoBERTa model performed better than the other models by obtaining an F1 score of 0.98 in both the real and fake classes.
[ "Language Models", "Semantic Text Processing", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 17, 8, 46, 4 ]
http://arxiv.org/abs/2104.07924v1
A Comparative Study on Collecting High-Quality Implicit Reasonings at a Large-scale
Explicating implicit reasoning (i.e. warrants) in arguments is a long-standing challenge for natural language understanding systems. While recent approaches have focused on explicating warrants via crowdsourcing or expert annotations, the quality of warrants has been questionable due to the extreme complexity and subjectivity of the task. In this paper, we tackle the complex task of warrant explication and devise various methodologies for collecting warrants. We conduct an extensive study with trained experts to evaluate the resulting warrants of each methodology and find that our methodologies allow for high-quality warrants to be collected. We construct a preliminary dataset of 6,000 warrants annotated over 600 arguments for 3 debatable topics. To facilitate research in related downstream tasks, we release our guidelines and preliminary dataset.
[ "Reasoning" ]
[ 8 ]
SCOPUS_ID:85135226966
A Comparative Study on Conceptualisations and Linguistic Encodings of Smell Sense in Persian and Russian from Cultural-Cognitive Point of View
This research aims at studying the conceptualisations and linguistic encodings of the smell sense in Persian and Russian from the point of view of Cultural-Cognitive Linguistics, using Sharifian's (2017) and Kövecses' (2018) frameworks. Research data were gathered from different weblogs and sites on the internet, but for extracting the synonyms and collocations of the word "smell", dictionaries of these languages were used too. The results show that in both languages the smell sense serves as both the source and the target domain in metaphors, and both the higher and lower senses are used as the source domain in their synesthetic constructions. Two macro-metaphors, GOOD IS SMELLY and BAD IS SMELLY, can be seen in both languages. Suspecting, finding out/knowing, vanishing, filling, representing (something), and getting into trouble/encountering a difficulty are some of the shared conceptualisations in these two languages. Apart from conceptualisations, some similarities and differences can be seen in the linguistic encodings of these two languages. The similarities confirm Kövecses' (2010) belief that some conceptual metaphors in the sensory domain of languages are nearly universal because of the common experiences of all human beings. The differences reflect Sharifian's (2017) idea that the origin of conceptualisations is cultural cognition, which is not entirely uniform because of its asymmetric distribution, even within one community.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
http://arxiv.org/abs/1701.08694v1
A Comparative Study on Different Types of Approaches to Bengali document Categorization
Document categorization is a technique by which the category of a document is determined. In this paper, three well-known supervised learning techniques, Support Vector Machine (SVM), Naïve Bayes (NB) and Stochastic Gradient Descent (SGD), are compared for Bengali document categorization. Besides the classifier, classification also depends on how features are selected from the dataset. For analyzing the classifiers' performance in predicting a document against twelve categories, several feature selection techniques are also applied in this article, namely the Chi-square distribution and normalized TF-IDF (term frequency-inverse document frequency) with a word analyzer. Thus, we attempt to explore the efficiency of the three classification algorithms by using these two different feature selection techniques.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
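The Chi-square feature selection over TF-IDF features described above corresponds to scikit-learn's SelectKBest(chi2); the sketch below uses invented English stand-in documents rather than the Bengali corpus, and k is shrunk to fit the toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# English stand-ins; the paper classifies Bengali documents into twelve categories.
docs = ["dhaka wins the cricket match today",
        "stock market falls in dhaka exchange",
        "bowler takes five wickets in the match",
        "inflation hits the stock exchange hard"]
labels = ["sports", "economy", "sports", "economy"]

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),       # normalized TF-IDF with a word analyzer
    ("chi2", SelectKBest(chi2, k=3)),   # keep the k most class-dependent terms
    ("svm", LinearSVC()),
])
pipe.fit(docs, labels)
print(pipe.predict(["five wickets fell in the match"]))
```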
http://arxiv.org/abs/1911.08870v1
A Comparative Study on End-to-end Speech to Text Translation
Recent advances in deep learning show that the end-to-end speech to text translation model is a promising approach for direct speech translation. In this work, we provide an overview of different end-to-end architectures, as well as the usage of an auxiliary connectionist temporal classification (CTC) loss for better convergence. We also investigate pre-training variants such as initializing different components of a model using pre-trained models, and their impact on the final performance, which gives boosts of up to 4% in BLEU and 5% in TER. Our experiments are performed on 270h IWSLT TED-talks En->De, and 100h LibriSpeech Audiobooks En->Fr. We also show improvements over the current end-to-end state-of-the-art systems on both tasks.
[ "Machine Translation", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Speech Recognition", "Multilinguality" ]
[ 51, 70, 74, 47, 10, 0 ]
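The auxiliary CTC loss mentioned above is typically combined with the attention decoder's cross-entropy as a weighted sum. Below is a hedged PyTorch sketch with random tensors standing in for real model outputs; the 0.3 weight is an arbitrary choice, not the paper's.

```python
import torch
import torch.nn as nn

# Shapes: T = encoder frames, N = batch size, C = vocab size (blank = 0), U = target length.
T, N, C, U = 50, 4, 32, 12
enc_logits = torch.randn(T, N, C, requires_grad=True)   # stands in for encoder outputs
dec_logits = torch.randn(N, U, C, requires_grad=True)   # stands in for decoder outputs
targets = torch.randint(1, C, (N, U))                   # labels, excluding the blank id
input_lens = torch.full((N,), T, dtype=torch.long)
target_lens = torch.full((N,), U, dtype=torch.long)

# Auxiliary CTC loss on the encoder branch.
ctc_loss = nn.CTCLoss(blank=0)(enc_logits.log_softmax(-1), targets, input_lens, target_lens)
# Cross-entropy on the attention decoder branch.
ce_loss = nn.CrossEntropyLoss()(dec_logits.reshape(-1, C), targets.reshape(-1))

lam = 0.3                                               # CTC weight; a tunable choice
loss = lam * ctc_loss + (1 - lam) * ce_loss
loss.backward()
```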
SCOPUS_ID:85097533665
A Comparative Study on Ethics Guidelines for Artificial Intelligence Across Nations
This study aimed to investigate the commonalities and differences among AI research and development (R&D) guidelines across nations. Content analysis was conducted on AI R&D guidelines issued by more economically developed countries because they may guide the trend of AI-based applications in education. Specifically, this study consisted of three phases: (1) information retrieval, (2) key term extraction, and (3) data visualization. First, Fisher's exact test was employed to ensure that different AI R&D guidelines (e.g., the latest ones in the US, EU, Japan, Mainland China, and Taiwan) were comparable. Second, the Key Word Extraction System was developed to retrieve essential information from the guidelines. Third, data visualization techniques were applied to key terms across multiple guidelines. A word cloud revealed the similarity among guidelines (e.g., key terms that the guidelines share in common) while a color-coding scheme showed the differences (e.g., the occurrence of a key term across guidelines and its frequency within a guideline). Importantly, three key terms, namely AI, human, and development, are identified as essential commonalities across guidelines. As for key terms extracted only from particular guidelines, interestingly, results with the color-coding scheme suggested that these key terms were weighted differently depending on the developmental emphasis of a nation. Collectively, we discuss how these findings concerning ethics guidelines may shed light on AI research and development in educational technology.
[ "Responsible & Trustworthy NLP", "Ethical NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 4, 17, 24, 3 ]
SCOPUS_ID:85149761252
A Comparative Study on Improving Word Embeddings Beyond Word2Vec and GloVe
NLP, or Natural Language Processing, forms a subarea of Machine Learning with linguistic roots that has applications in analyzing and predicting natural language data, namely speech and text. Deep neural networks are used in cutting-edge NLP, and the process includes several steps: data collection, preprocessing, language modeling, parsing, and prediction. This research focuses on the language modeling aspect, where words must be represented using embeddings. The study emphasizes improving post-embedding model performance, particularly for unsupervised models, given that the word embeddings used by these models have remained relatively unchanged. Word2Vec, for example, employs an unsupervised shallow 2-layered network that has suited its function thus far, but making this network denser and adding supervised elements appears to be a viable means of improving spatial-semantic embedding. A better representation early in the model pipeline will significantly help any computer understand the semantic similarity of the data it is examining, which will considerably enhance the performance of any model that employs the stated representation.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Representation Learning", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 12, 4 ]
SCOPUS_ID:85149434825
A Comparative Study on Language Models for Dravidian Languages
We train embeddings for four Dravidian languages, a family of languages spoken by the people of South India. The embeddings are trained using the latest deep learning language models, to successfully encode semantic properties of words. We demonstrate the effect of vocabulary size on word similarity and model performance. We evaluate our models on the downstream task of text classification and small custom similarity tasks. Our best model attains accuracy on par with the current state of the art while being only a fraction of its size. Our models are released on the popular open-source platform HuggingFace. We hope that by publicly releasing our trained models, we will help in accelerating research and easing the effort involved in training embeddings for downstream tasks.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
SCOPUS_ID:85123758913
A Comparative Study on Language Models for Task-Oriented Dialogue Systems
The recent development of language models has shown promising results by achieving state-of-the-art performance on various natural language tasks by fine-tuning pre-trained models. In task-oriented dialogue (ToD) systems, language models can be used for end-to-end training without relying on dialogue state tracking to track the dialogue history but allowing the language models to generate responses according to the context given as input. This paper conducts a comparative study to show the effectiveness and strength of using recent pre-trained models for fine-tuning, such as BART and T5, on end-to-end ToD systems. The experimental results show substantial performance improvements after language model fine-tuning. The models produce more fluent responses after adding knowledge to the context that guides the model to avoid hallucination and generate accurate entities in the generated responses. Furthermore, we found that BART and T5 outperform GPT-based models in BLEU and F1 scores and achieve state-of-the-art performance in a ToD system.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
SCOPUS_ID:85137975996
A Comparative Study on Language Models for the Kannada Language
We train word embeddings for Kannada, a Dravidian language spoken by the people of Karnataka, a southern state in India. The word embeddings are trained using the latest deep learning language models, to successfully encode semantic properties of words. We release our best models on HuggingFace, a popular open source repository of language models to be used for further Indic NLP research. We evaluate our models on the downstream task of text classification and small custom analogy and similarity tasks. Our best model attains accuracy on par with the current State of the Art while being only a fraction of its size. We hope that by publicly releasing our trained models, we will help in accelerating research and easing the effort involved in training embeddings for downstream tasks.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
http://arxiv.org/abs/1311.0833v1
A Comparative Study on Linguistic Feature Selection in Sentiment Polarity Classification
Sentiment polarity classification is perhaps the most widely studied sentiment analysis task. It classifies an opinionated document as expressing a positive or negative opinion. In this paper, using a movie review dataset, we perform a comparative study with different single kinds of linguistic features and combinations of these features. We find that classic topic-based classifiers (Naive Bayes and Support Vector Machine) do not perform as well on sentiment polarity classification. We also find that with some combinations of different linguistic features, the classification accuracy can be boosted considerably, and we give some reasonable explanations for these improvements.
[ "Text Classification", "Polarity Analysis", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 33, 78, 24, 3 ]
SCOPUS_ID:85133189449
A Comparative Study on Mapping Experience of Typical Battery Electric Vehicles Based on Big Data Text Mining Technology
Battery electric vehicles (BEV) are the core innovation of the low-carbon travel transformation. However, there are still few evaluation studies of their users' experience. This paper is based on big-data text mining with natural language processing, taking user experience reviews of typical battery electric vehicles as the research object. It compares reviews, posted on a Quora-like platform with Chinese characteristics, by users of two typical electric vehicles popular in China: the Tesla Model 3, representative of international brands, and the BYD Han EV, representative of domestic brands. After data collection, filtering, extraction, analysis, and mapping, the specific findings are as follows. Firstly, in terms of the vehicle hardware itself, the user experience of both focuses on the "battery"; Model 3 users focus on the "brake", while Han EV users pay more attention to "appearance" and "rear space". Secondly, in terms of vehicle software configuration, both pay attention to the "system"; Han EV users focus more on less obvious features, while the Model 3 user experience focuses on "charging", "marketing" and "performance". Thirdly, in terms of subjective feeling, both focus on "driving" and "experience", and Model 3 users focus more on "owner" and "price". Overall, we conclude that, in the context of the experience economy, artificial intelligence technology can use big data to actively "restore" user experience in real scenarios and to improve on traditional subjective questionnaires based on small data. Grounded in research on group characteristics, future work can explore individual user experience characteristics that closely fit those of the user group.
[ "Information Extraction & Text Mining" ]
[ 3 ]
http://arxiv.org/abs/2106.05111v1
A Comparative Study on Neural Architectures and Training Methods for Japanese Speech Recognition
End-to-end (E2E) modeling is advantageous for automatic speech recognition (ASR), especially for Japanese, since word-based tokenization of Japanese is not trivial and E2E modeling is able to model character sequences directly. This paper focuses on the latest E2E modeling techniques and investigates their performance on character-based Japanese ASR by conducting comparative experiments. The results are analyzed and discussed in order to understand the relative advantages of long short-term memory (LSTM) and Conformer models in combination with connectionist temporal classification, transducer, and attention-based loss functions. Furthermore, the paper investigates the effectiveness of recent training techniques such as data augmentation (SpecAugment), variational noise injection, and exponential moving average. The best configuration found in the paper achieved state-of-the-art character error rates of 4.1%, 3.2%, and 3.5% for the Corpus of Spontaneous Japanese (CSJ) eval1, eval2, and eval3 tasks, respectively. The system is also shown to be computationally efficient thanks to the efficiency of Conformer transducers.
[ "Green & Sustainable NLP", "Speech & Audio in NLP", "Text Generation", "Responsible & Trustworthy NLP", "Speech Recognition", "Multimodality" ]
[ 68, 70, 47, 4, 10, 74 ]
http://arxiv.org/abs/2110.05249v1
A Comparative Study on Non-Autoregressive Modelings for Speech-to-Text Generation
Non-autoregressive (NAR) models simultaneously generate multiple outputs in a sequence, which significantly reduces the inference time at the cost of an accuracy drop compared to autoregressive baselines. Showing great potential for real-time applications, an increasing number of NAR models have been explored in different fields to mitigate the performance gap against AR models. In this work, we conduct a comparative study of various NAR modeling methods for end-to-end automatic speech recognition (ASR). Experiments are performed in the state-of-the-art setting using ESPnet. The results on various tasks provide interesting findings for developing an understanding of NAR ASR, such as the accuracy-speed trade-off and robustness against long-form utterances. We also show that the techniques can be combined for further improvement and applied to NAR end-to-end speech translation. All the implementations are publicly available to encourage further research in NAR speech processing.
[ "Text Generation", "Speech Recognition", "Speech & Audio in NLP", "Multimodality" ]
[ 47, 10, 70, 74 ]
http://arxiv.org/abs/1508.03721v1
A Comparative Study on Regularization Strategies for Embedding-based Neural Networks
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and combining different regularizations. The results provide a picture of tuning hyperparameters for neural NLP models.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
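Two of the compared strategies, penalizing weights with embeddings excluded, and dropout, can be expressed in PyTorch via parameter groups, as in this illustrative sketch; the tiny model is invented here, not one of the paper's testbeds.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, vocab=1000, dim=50, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.drop = nn.Dropout(0.5)    # dropout, one of the compared strategies
        self.fc = nn.Linear(dim, classes)

    def forward(self, ids):
        return self.fc(self.drop(self.emb(ids).mean(dim=1)))

model = TinyClassifier()
# Penalizing weights with embeddings excluded: split into two parameter groups.
decay, no_decay = [], []
for name, p in model.named_parameters():
    (no_decay if name.startswith("emb") else decay).append(p)
optim = torch.optim.SGD([
    {"params": decay, "weight_decay": 1e-4},    # L2 penalty on non-embedding weights
    {"params": no_decay, "weight_decay": 0.0},  # embeddings left unpenalized
], lr=0.1)
```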
SCOPUS_ID:85116834628
A Comparative Study on Sentiment Analysis Influencing Word Embedding Using SVM and KNN
Sentiment analysis is one of the most active research areas relating natural language and social networks. In our proposed work, we perform sentiment analysis on an annotated list of positive and negative sentiment words from the opinion-lexicon-English dataset. To perform our task, we used pretrained word embeddings that convert words into numeric vectors and form the basis for a classifier. Word2Vec, a commonly used algorithm, includes the CBOW and Skip-gram models for learning word embeddings, which are used for calculating word vectors. Finally, the feature vectors are used to train SVM and KNN classifiers. We obtained a testing accuracy of 96.2% for the SVM classifier and 93.4% for the KNN classifier.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 78, 24, 3 ]
http://arxiv.org/abs/2203.16834v3
A Comparative Study on Speaker-attributed Automatic Speech Recognition in Multi-party Meetings
In this paper, we conduct a comparative study on speaker-attributed automatic speech recognition (SA-ASR) in the multi-party meeting scenario, a topic with increasing attention in meeting rich transcription. Specifically, three approaches are evaluated in this study. The first approach, FD-SOT, consists of a frame-level diarization model to identify speakers and a multi-talker ASR to recognize utterances. The speaker-attributed transcriptions are obtained by aligning the diarization results and recognized hypotheses. However, such an alignment strategy may suffer from erroneous timestamps due to the modular independence, severely hindering the model performance. Therefore, we propose the second approach, WD-SOT, to address alignment errors by introducing a word-level diarization model, which can get rid of such timestamp alignment dependency. To further mitigate the alignment issues, we propose the third approach, TS-ASR, which trains a target-speaker separation module and an ASR module jointly. By comparing various strategies for each SA-ASR approach, experimental results on a real meeting scenario corpus, AliMeeting, reveal that the WD-SOT approach achieves 10.7% relative reduction on averaged speaker-dependent character error rate (SD-CER), compared with the FD-SOT approach. In addition, the TS-ASR approach also outperforms the FD-SOT approach and brings 16.5% relative average SD-CER reduction.
[ "Text Generation", "Speech & Audio in NLP", "Speech Recognition", "Multimodality" ]
[ 47, 70, 10, 74 ]
SCOPUS_ID:85107226908
A Comparative Study on TF-IDF feature weighting method and its analysis using unstructured dataset
Text classification is the process of categorizing text into relevant categories, and its algorithms are at the core of many Natural Language Processing (NLP) applications. Term Frequency-Inverse Document Frequency (TF-IDF) and NLP are among the most widely used information retrieval methods in text classification. We have investigated and analyzed feature weighting methods for text classification on unstructured data. The proposed model considers two feature types, N-grams and TF-IDF, on the IMDB movie reviews and Amazon Alexa reviews datasets for sentiment analysis. We then used state-of-the-art classifiers to validate the method, i.e., Support Vector Machine (SVM), Logistic Regression, Multinomial Naïve Bayes (Multinomial NB), Random Forest, Decision Tree, and k-nearest neighbors (KNN). Of the two feature extraction methods, TF-IDF features yield a significant performance increase over N-grams. TF-IDF achieved the maximum accuracy (93.81%), precision (94.20%), recall (93.81%), and F1-score (91.99%) with the Random Forest classifier.
[ "Information Extraction & Text Mining", "Structured Data in NLP", "Text Classification", "Information Retrieval", "Multimodality" ]
[ 3, 50, 36, 24, 74 ]
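The two feature sets compared above, N-gram counts versus TF-IDF, fed into a Random Forest can be reproduced in outline with scikit-learn; the four reviews below are invented placeholders for the IMDB/Alexa data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import Pipeline

reviews = ["loved this movie, brilliant plot",
           "alexa stopped responding, waste of money",
           "sound quality is great and setup was easy",
           "boring film, terrible acting"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

ngram_rf = Pipeline([("vec", CountVectorizer(ngram_range=(1, 2))),   # uni+bigram counts
                     ("rf", RandomForestClassifier(random_state=0))])
tfidf_rf = Pipeline([("vec", TfidfVectorizer()),                     # TF-IDF weighting
                     ("rf", RandomForestClassifier(random_state=0))])
for name, pipe in [("n-grams", ngram_rf), ("tf-idf", tfidf_rf)]:
    pipe.fit(reviews, labels)
    print(name, pipe.predict(["great plot, loved it"]))
```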
http://arxiv.org/abs/2212.09873v1
A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models
There is growing interest in incorporating eye-tracking data and other implicit measures of human language processing into natural language processing (NLP) pipelines. The data from human language processing contain unique insight into human linguistic understanding that could be exploited by language models. However, many unanswered questions remain about the nature of this data and how it can best be utilized in downstream NLP tasks. In this paper, we present eyeStyliency, an eye-tracking dataset for human processing of stylistic text (e.g., politeness). We develop a variety of methods to derive style saliency scores over text using the collected eye dataset. We further investigate how this saliency data compares to both human annotation methods and model-based interpretability metrics. We find that while eye-tracking data is unique, it also intersects with both human annotations and model-based importance scores, providing a possible bridge between human- and machine-based perspectives. In downstream few-shot learning tasks, adding salient words to prompts generally improved style classification, with eye-tracking-based and annotation-based salient words achieving the highest accuracy.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
http://arxiv.org/abs/2111.08658v1
A Comparative Study on Transfer Learning and Distance Metrics in Semantic Clustering over the COVID-19 Tweets
This paper is a comparative study in the context of topic detection on COVID-19 data. There are various approaches to topic detection, among which the clustering approach is selected in this paper. Clustering requires a distance measure, and calculating distance requires an embedding. The aim of this research is to simultaneously study three factors, embedding methods, distance metrics and clustering methods, and their interaction. A dataset of one month of tweets collected with COVID-19-related hashtags is used for this study. Five embedding methods, ranging from earlier to more recent ones, are selected: Word2Vec, fastText, GloVe, BERT and T5. Five clustering methods are investigated in this paper: k-means, DBSCAN, OPTICS, spectral and Jarvis-Patrick. Euclidean distance and cosine distance, the most important distance metrics in this field, are also examined. First, more than 7,500 tests are performed to tune the parameters. Then, all 50 combinations of embedding methods with distance metrics and clustering methods are evaluated with the silhouette metric. The results of these 50 tests are examined first; then, the rank of each method across all of its tests is taken into account. Finally, the major variables of the research (embedding methods, distance metrics and clustering methods) are studied separately, averaging over the control variables to neutralize their effect. The experimental results show that T5 strongly outperforms the other embedding methods in terms of the silhouette metric. In terms of distance metrics, cosine distance is slightly better. DBSCAN is also superior to the other clustering methods.
[ "Language Models", "Semantic Text Processing", "Representation Learning", "Text Clustering", "Information Extraction & Text Mining" ]
[ 52, 72, 12, 29, 3 ]
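The clustering-plus-silhouette evaluation loop described above can be sketched as follows; TF-IDF vectors stand in for the T5/BERT embeddings the paper actually compares, and the toy tweets and DBSCAN parameters are illustrative guesses.

```python
from sklearn.cluster import KMeans, DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

tweets = ["covid vaccine rollout starts",
          "new vaccine doses arrive",
          "lockdown extended in the city",
          "city announces longer lockdown",
          "schools reopen after lockdown"]
X = TfidfVectorizer().fit_transform(tweets)   # stand-in for learned embeddings

for name, algo in [("k-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
                   ("dbscan", DBSCAN(eps=1.2, min_samples=2, metric="cosine"))]:
    labels = algo.fit_predict(X)
    if len(set(labels)) > 1:                  # silhouette needs at least 2 clusters
        print(name, silhouette_score(X, labels, metric="cosine"))
```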
SCOPUS_ID:85081603635
A Comparative Study on Transformer vs RNN in Speech Applications
Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed Transformer and conventional recurrent neural networks (RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with Transformer for each task, including the surprising superiority of Transformer in 13 of 15 ASR benchmarks in comparison with RNN. We are preparing to release Kaldi-style reproducible recipes using open-source and publicly available datasets for all the ASR, ST, and TTS tasks so that the community can build on our results.
[ "Multilinguality", "Language Models", "Machine Translation", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 0, 52, 51, 72, 70, 47, 10, 74 ]
https://aclanthology.org//W01-1412/
A Comparative Study on Translation Units for Bilingual Lexicon Extraction
[ "Multilinguality", "Machine Translation", "Text Generation", "Information Extraction & Text Mining" ]
[ 0, 51, 47, 3 ]
SCOPUS_ID:85126547083
A Comparative Study on Utilization of Semantic Information in Fuzzy Co-clustering
Fuzzy co-clustering is a technique for extracting co-clusters of mutually familiar pairs of objects and items from co-occurrence information among them, and it has been utilized in document analysis of document-keyword relations and in market analysis of customers' purchase preferences for products. Recently, multi-view data clustering has attracted much attention, with the goal of revealing the intrinsic features among multi-source data stored across different organizations. In this paper, three-mode document data analysis is considered under multi-view analysis of document-keyword relations in conjunction with semantic information among keywords, and the results of two different approaches are compared. Fuzzy Bag-of-Words (Fuzzy BoW) introduces semantic information among keywords such that co-occurrence degrees are counted with the support of a fuzzy mapping of semantically similar keywords. On the other hand, three-mode fuzzy co-clustering simultaneously considers the cluster-wise aggregation degree among documents, keywords and semantic similarities. Numerical results with a Japanese novel demonstrate the different features of these two approaches.
[ "Semantic Text Processing", "Semantic Similarity", "Information Extraction & Text Mining", "Text Clustering" ]
[ 72, 53, 3, 29 ]
SCOPUS_ID:85136241452
A Comparative Study on Various Approaches of Sentimental Analysis
On social networking platforms, millions of people express their thoughts in the form of text and images every day. A tweet or text posted on online social media sites is very useful for gauging users' sentiments about products, news, etc., and from these brief and highly unstructured datasets great insights can be gained. Several organizations now use Twitter to inform the public about their services and products. Sentiment analysis is a method to evaluate the meaning of such data and is one of the fields of Natural Language Processing (NLP) and text data mining. The sentiment analysis of unstructured data has many facets. This paper presents a comparative study of various methods and approaches for extracting sentiments from unstructured data.
[ "Multimodality", "Structured Data in NLP", "Sentiment Analysis" ]
[ 74, 50, 78 ]
SCOPUS_ID:85135737633
A Comparative Study on Various Deep Learning Techniques for Arabic NLP Syntactic Tasks on Noisy Data
Natural language processing (NLP) has three basic tasks divided into two levels: the lexical level, which includes the tokenization task, and the syntactic level, which includes the Part-of-Speech (POS) tagging and Named Entity Recognition (NER) tasks. Recent research has demonstrated the effectiveness of deep learning in many NLP tasks including NER, POS tagging, sentiment analysis, language modeling, and others. This study focused on utilizing the Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BLSTM), Bidirectional Long Short-Term Memory with Conditional Random Field (BLSTM-CRF), and Long Short-Term Memory with Conditional Random Field (LSTM-CRF) deep learning techniques for tasks at the syntactic level, and on comparing their performance on noisy data. The models were trained and tested using the KALIMAT corpus with simulated noise on the testing dataset. The F1-score was used for evaluation, and the results of our experiments showed that the BLSTM-CRF model surpassed the other models on the NER task at a low noise level, while the LSTM-CRF model obtained a higher F1-score at higher noise levels. With respect to the POS task, the BLSTM-CRF model achieved the highest F1-score at all noise levels compared to the other competing models.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 52, 72, 15, 34, 3 ]
SCOPUS_ID:85057768529
A Comparative Study on Various Deep Learning Techniques for Thai NLP Lexical and Syntactic Tasks on Noisy Data
In Natural Language Processing (NLP), there are three fundamental tasks: Tokenization, which belongs to the lexical level, and Part-of-Speech tagging (POS) and Named Entity Recognition (NER), which belong to the syntactic level. Recently, there have been many deep learning studies showing success in many domains. However, there has been no comparative study for Thai NLP to suggest the most suitable technique for each task yet. In this paper, we aim to provide a performance comparison among various deep learning-based techniques on these three NLP tasks and to study the effect of synthesized OOV words; an OOV handling algorithm based on Levenshtein distance is provided, because most existing works rely on a fixed vocabulary in the trained model and are not fit for noisy text in real use cases. Our three experiments were conducted on BEST 2010 I2R, a standard Thai NLP corpus, using the F1 measure, with different percentages of noise synthesized. Firstly, for Tokenization, the results show that Synthai, a joint bidirectional LSTM, has the best performance. Additionally, for POS, bidirectional LSTM with CRF obtained the best performance. For NER, variational bidirectional LSTM with CRF outperformed other methods. Finally, noise reduces the performance of all algorithms on these foundational tasks, and the results show that our OOV handling technique can improve performance on noisy data.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Named Entity Recognition", "Tagging", "Text Segmentation", "Information Extraction & Text Mining" ]
[ 52, 72, 15, 34, 63, 21, 3 ]
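The OOV handling idea named in the abstract is simple to illustrate: map an out-of-vocabulary token to its nearest in-vocabulary word by edit distance. The toy vocabulary below is an assumption for illustration.

```python
# Levenshtein-based OOV handling: classic row-wise DP edit distance, then
# pick the closest known word for any token outside the vocabulary.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def map_oov(token: str, vocab: set[str]) -> str:
    if token in vocab:
        return token
    return min(vocab, key=lambda w: levenshtein(token, w))

vocab = {"bangkok", "tokenization", "tagging", "recognition"}
print(map_oov("tokenizaton", vocab))  # -> "tokenization" (distance 1)
```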
SCOPUS_ID:85081310385
A Comparative Study on Various Text Classification Methods
With the exponential growth in modes of information exchange, the spread of text has become not only substantially faster but also more widespread. Because of this, text has become an indispensable part of all kinds of decision-making. Hence, it is imperative to analyse the methods that can help make sense of this text as efficiently as possible. We make an attempt at this by discussing various tools that make the task more productive. We analyse the relationship between the way an algorithm works and how it performs on various sets of data with different types of featurization. We analyse featurization techniques such as bag of words/N-grams, Tf-Idf vectorization, average Word2Vec, and Tf-Idf-weighted Word2Vec.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
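The four featurizations this paper names can be sketched side by side on a toy corpus; the documents, dimensions, and training epochs below are illustrative assumptions.

```python
# Bag-of-words, TF-IDF, average Word2Vec, and TF-IDF-weighted Word2Vec on a
# tiny corpus, using scikit-learn and gensim.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from gensim.models import Word2Vec

docs = ["the movie was great", "the plot was dull", "great acting and plot"]
tokenized = [d.split() for d in docs]

bow = CountVectorizer().fit_transform(docs)          # bag of words / n-grams
tfidf_vec = TfidfVectorizer().fit(docs)
tfidf = tfidf_vec.transform(docs)

w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, epochs=50)

def avg_w2v(tokens):                                 # average Word2Vec
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

def tfidf_w2v(tokens):                               # TF-IDF weighted Word2Vec
    idf = dict(zip(tfidf_vec.get_feature_names_out(), tfidf_vec.idf_))
    weights = [idf.get(t, 1.0) for t in tokens]
    vecs = np.array([w2v.wv[t] for t in tokens])
    return np.average(vecs, axis=0, weights=weights)

print(bow.shape, tfidf.shape, avg_w2v(tokenized[0]).shape, tfidf_w2v(tokenized[0]).shape)
```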
SCOPUS_ID:85098583084
A Comparative Study on Vectorization and Classification Techniques in Sentiment Analysis to Classify Student-Lecturer Comments
Sentiment analysis is one of the important fields in educational data mining. In this paper, a large dataset of more than 52,000 comments was used in experiments to develop a state-of-the-art classification model. A correlation test was conducted between the sentiment analysis results and scale-rated survey results, and the result (r(203)=.79, p<.001) shows that sentiment analysis can be accepted as a reasonable method for course and lecturer evaluation. A comparative analysis was done between different vectorization and classification techniques. The results of the experiment show that the classifier built using Random Forest was the most effective and efficient classification model, with a state-of-the-art prediction accuracy of 97% for 3-class classification. Moreover, to improve the diversity of the comments, a 5-class dataset was formed, and the experiment resulted in an efficient classification model with an accuracy of 92%. The Tf-Idf vectorization technique performed better than Count (Binary) vectorization.
[ "Information Extraction & Text Mining", "Green & Sustainable NLP", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 3, 68, 36, 78, 24, 4 ]
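The winning combination reported above (TF-IDF features feeding a Random Forest) fits in a few lines of scikit-learn. The toy comments and labels below are assumptions standing in for the 52,000-comment dataset.

```python
# TF-IDF + Random Forest pipeline for 3-class comment classification.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

comments = ["great lecturer, clear slides", "boring course, too fast",
            "okay overall", "explains well", "hard to follow", "average pace"]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]   # 3-class setup as in the paper

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.5, random_state=42, stratify=labels)

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=42))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```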
https://aclanthology.org//W16-2212/
A Comparative Study on Vocabulary Reduction for Phrase Table Smoothing
This work systematically analyzes the smoothing effect of vocabulary reduction for phrase translation models. We extensively compare various word-level vocabularies to show that the performance of smoothing is not significantly affected by the choice of vocabulary. This result provides empirical evidence that the standard phrase translation model is extremely sparse. Our experiments also reveal that vocabulary reduction is more effective for smoothing large-scale phrase tables.
[ "Machine Translation", "Structured Data in NLP", "Multimodality", "Text Generation", "Multilinguality" ]
[ 51, 50, 74, 47, 0 ]
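The smoothing mechanism analyzed above can be illustrated with a toy back-off: estimate a phrase translation probability both over full words and over a reduced vocabulary, and interpolate. The frequency-class reduction, the data, and the interpolation weight below are assumptions for illustration, not the paper's exact method.

```python
# Toy phrase-table smoothing via vocabulary reduction: unseen phrase pairs
# borrow probability mass from counts over coarse word classes.
from collections import Counter

phrase_pairs = [("the house", "das haus"), ("the car", "das auto"),
                ("a boat", "ein boot")]

word_freq = Counter(w for src, _ in phrase_pairs for w in src.split())

def reduce_phrase(phrase):  # map each word to a coarse frequency class
    return " ".join("HIGH" if word_freq[w] > 1 else "LOW" for w in phrase.split())

full = Counter(phrase_pairs)
reduced = Counter((reduce_phrase(s), t) for s, t in phrase_pairs)
src_full = Counter(s for s, _ in phrase_pairs)
src_red = Counter(reduce_phrase(s) for s, _ in phrase_pairs)

def smoothed_prob(src, tgt, lam=0.7):
    p_full = full[(src, tgt)] / src_full[src] if src_full[src] else 0.0
    r = reduce_phrase(src)
    p_red = reduced[(r, tgt)] / src_red[r] if src_red[r] else 0.0
    return lam * p_full + (1 - lam) * p_red

print(smoothed_prob("the boat", "das auto"))  # > 0 although the pair is unseen
```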
SCOPUS_ID:85100668894
A Comparative Study on Word Embeddings in Deep Learning for Text Classification
Word embeddings act as an important component of deep models for providing input features in downstream language tasks, such as sequence labelling and text classification. In the last decade, a substantial number of word embedding methods have been proposed for this purpose, mainly falling into the categories of classic and context-based word embeddings. In this paper, we conduct controlled experiments to systematically examine both classic and contextualised word embeddings for the purposes of text classification. To encode a sequence from word representations, we apply two encoders, namely CNN and BiLSTM, in the downstream network architecture. To study the impact of word embeddings on different datasets, we select four benchmarking classification datasets with varying average sample length, comprising both single-label and multi-label classification tasks. The evaluation results with confidence intervals indicate that CNN as the downstream encoder outperforms BiLSTM in most situations, especially for document context-insensitive datasets. This study recommends choosing CNN over BiLSTM for document classification datasets where the context in sequence is not as indicative of class membership as it is in sentence datasets. For word embeddings, concatenation of multiple classic embeddings or increasing their size does not lead to a statistically significant difference in performance despite a slight improvement in some cases. For context-based embeddings, we studied both ELMo and BERT. The results show that BERT overall outperforms ELMo, especially for long document datasets. Compared with classic embeddings, both achieve an improved performance for short datasets, while this improvement is not observed in longer datasets.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 12, 24, 3 ]
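The two downstream encoders this paper compares are easy to sketch in PyTorch: a CNN that convolves over word embeddings and max-pools, and a BiLSTM whose final hidden states feed the classifier. All dimensions and the random batch below are assumptions.

```python
# CNN vs BiLSTM sentence encoders over the same embedded batch.
import torch
import torch.nn as nn

emb = nn.Embedding(5000, 100)
tokens = torch.randint(0, 5000, (8, 40))     # fake batch: 8 docs, 40 tokens
x = emb(tokens)                              # (8, 40, 100)

# CNN encoder: Conv1d expects (batch, channels, seq_len)
conv = nn.Conv1d(in_channels=100, out_channels=128, kernel_size=3)
cnn_feat = torch.relu(conv(x.transpose(1, 2))).max(dim=2).values  # (8, 128)

# BiLSTM encoder: concatenate last forward and backward hidden states
lstm = nn.LSTM(100, 64, batch_first=True, bidirectional=True)
_, (h, _) = lstm(x)                          # h: (2, 8, 64)
bilstm_feat = torch.cat([h[0], h[1]], dim=1)                      # (8, 128)

classifier = nn.Linear(128, 4)               # e.g. a 4-class dataset
print(classifier(cnn_feat).shape, classifier(bilstm_feat).shape)
```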
SCOPUS_ID:85143738724
A Comparative Study on the Application of Text Mining in Cybersecurity
Aims: This paper aims to conduct a Systematic Literature Review (SLR) of the applications of text mining in cybersecurity. Objectives: The amount of data generated worldwide has driven changes in many activities associated with cybersecurity and demands a high level of automation. Methods: In the cybersecurity domain, text mining is an alternative for improving the usefulness of various activities that entail unstructured data. This study searched databases and found 516 papers from 2015 to 2021, of which 75 papers were selected for analysis. A detailed evaluation of the selected studies covers the sources, techniques, and information extraction used in cybersecurity applications. Results: This study identifies gaps for future studies, such as text processing, availability of datasets, innovative methods, and intelligent text mining. Conclusion: This study concludes with interesting findings on employing text mining in cybersecurity applications; researchers need to exploit all related text mining techniques and algorithms to detect threats and protect the organization.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85105115866
A Comparative Study on the Perception Performance of Handwriting in Korean and English Using Machine Learning
Currently, most text is produced on computers and mobile devices, but in some areas hand-written documents are still used. In addition, records written by hand before the advent of computers and mobile devices remain undigitized in archives. In this paper, we explore the factors necessary for developing Hangeul handwriting recognition technology by comparing handwriting recognition performance between Hangeul and English, building on prior studies of Optical Character Recognition (OCR) technology for handwriting recognition and prior work on the recognition of Korean handwriting.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
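As a point of reference for the kind of OCR pipeline such studies build on, here is a minimal sketch using the open-source Tesseract engine via pytesseract. The image path is a placeholder assumption, and Tesseract's printed-text models degrade notably on handwriting, which is exactly the gap the paper examines.

```python
# Minimal OCR call: load an image and ask Tesseract for Korean text.
from PIL import Image
import pytesseract

image = Image.open("handwritten_sample.png")           # placeholder path
print(pytesseract.image_to_string(image, lang="kor"))  # use "eng" for English
```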
SCOPUS_ID:85107353465
A Comparative Study on the Performance of Deep Learning Algorithms for Detecting the Sentiments Expressed in Modern Slangs
Sentiment analysis is a text investigation technique that identifies polarity within text, whether in an entire document, a sentence, or elsewhere. Understanding individuals' feelings is fundamental for organizations, since customers can now communicate their thoughts and emotions more openly than ever before. In this paper, the proposed model performs sentiment analysis on Twitter slang, i.e., tweets that contain words that are not orthodox English words but have emerged over time. To do so, the proposed model finds the root words of the slang using a Snowball stemmer, vectorizes the root words, and then passes them through a neural network to build the model. The tweets also pass through six levels of pre-processing to extract essential features, and are then classified as positive, neutral, or negative. Sentiment analysis of the slang used in 1,600,000 tweets is performed using long short-term memory (LSTM) network, logistic regression (LR), and convolutional neural network (CNN) algorithms for classification. Among these algorithms, the LSTM network gives the highest accuracy of 78.99%.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
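The abstract mentions six levels of pre-processing without listing them, so the six steps below are plausible assumptions, ending with the Snowball stemmer that the paper does name.

```python
# Assumed six-step tweet pre-processing pipeline ending in Snowball stemming.
import re
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
STOPWORDS = {"the", "a", "an", "is", "are", "i", "it"}  # tiny illustrative list

def preprocess(tweet: str) -> list[str]:
    tweet = tweet.lower()                         # 1. lowercasing
    tweet = re.sub(r"https?://\S+", "", tweet)    # 2. strip URLs
    tweet = re.sub(r"[@#]\w+", "", tweet)         # 3. strip mentions/hashtags
    tweet = re.sub(r"[^a-z\s]", "", tweet)        # 4. strip punctuation/digits
    tokens = [t for t in tweet.split() if t not in STOPWORDS]  # 5. stopwords
    return [stemmer.stem(t) for t in tokens]      # 6. stem to root words

print(preprocess("Luvvv this phone, totes amazeballs!! @bestbuy https://t.co/x"))
```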
SCOPUS_ID:85126711205
A Comparative Study on the Quality of English-Chinese Machine Translation in the Era of Artificial Intelligence
By reviewing the current state of research on machine translation quality, this paper evaluates translation quality under the guidance of the traditional translation standards of faithfulness, expressiveness, and elegance. It makes a qualitative and quantitative analysis of current mainstream online machine translation systems such as Google Translate and Baidu Translate, compares their differences, analyzes their translation quality, and summarizes and discusses common typical errors and their patterns, in order to improve machine translation and its quality.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
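The abstract does not name a quantitative metric, so purely as an assumption, the sketch below uses corpus BLEU (via the sacrebleu package) as one common way to score system outputs against a human reference; the example sentences are illustrative.

```python
# Corpus BLEU between two hypothetical MT outputs and one reference stream.
import sacrebleu

hypotheses = ["The weather is nice today.", "He go to school by the bus."]
references = [["The weather is fine today.", "He goes to school by bus."]]

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```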
SCOPUS_ID:85149168878
A Comparative Survey of Multimodal Multilabel Sentiment Analysis and Its Applications Initiated Due to the Impact of COVID-19
This study presents a detailed survey of different works related to sentiment analysis. The COVID-19 pandemic and its impact on people's mental health are the driving force behind this survey. The survey can help in studying sentiment analysis and the approaches taken in many studies to detect human emotions via advanced technology. It can also help improve present systems by finding their shortcomings and increasing their accuracy. Various lexicon- and ML-based systems and models, such as Word2Vec and LSTM, were studied in the surveyed papers. Some of the current and future directions highlighted are Twitter sentiment analysis, review-based market analysis, determining changing behavior and emotions over a given time period, and assessing the mental health of employees and students. This survey provides details on trends and topics in sentiment analysis and an in-depth understanding of the various technologies used in different studies. It also gives an insight into the wide variety of applications related to sentiment analysis.
[ "Multimodality", "Ethical NLP", "Sentiment Analysis", "Emotion Analysis", "Responsible & Trustworthy NLP" ]
[ 74, 17, 78, 61, 4 ]
http://arxiv.org/abs/1906.08990v1
A Comparative Survey of Recent Natural Language Interfaces for Databases
Over the last few years, natural language interfaces (NLIs) for databases have gained significant traction in both academia and industry. These systems use very different approaches, as described in recent survey papers. However, they have not been systematically compared against a set of benchmark questions in order to rigorously evaluate their functionality and expressive power. In this paper, we give an overview of 24 recently developed NLIs for databases. Each system is evaluated using a curated list of ten sample questions to show its strengths and weaknesses. We categorize the NLIs into four groups based on the methodology they use: keyword-, pattern-, parsing-, and grammar-based NLIs. Overall, we learned that keyword-based systems suffice to answer simple questions. To solve more complex questions involving subqueries, a system needs to apply some form of parsing to identify structural dependencies. Grammar-based systems are overall the most powerful, but are highly dependent on their manually designed rules. In addition to providing a systematic analysis of the major systems, we derive lessons learned that are vital for designing NLIs that can answer a wide range of user questions.
[ "Natural Language Interfaces" ]
[ 11 ]
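The "keyword-based" category from this survey can be illustrated with a toy NLI that maps keywords in a question onto a fixed schema and emits SQL. The schema, data, and mapping rule below are illustrative assumptions; real systems are far richer.

```python
# Toy keyword-based NLI: keywords select the table, the word after "in"
# becomes a WHERE filter. Executed against an in-memory SQLite database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, city TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [("Ada", "Berlin"), ("Bob", "Paris"), ("Cleo", "Berlin")])

def keyword_nli(question: str) -> str:
    tokens = question.lower().rstrip("?").split()
    sql = "SELECT name FROM customers"
    if "in" in tokens:                  # treat the word after "in" as a city
        city = tokens[tokens.index("in") + 1]
        sql += f" WHERE lower(city) = '{city}'"
    return sql

query = keyword_nli("Which customers are in Berlin?")
print(query, "->", db.execute(query).fetchall())
```

As the survey notes, such systems handle simple questions well but fail as soon as the question requires structural analysis, e.g. subqueries.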
SCOPUS_ID:85146498174
A Comparative Survey on Parts of Speech Taggers for the Marathi Language
Natural Language Processing relies heavily on POS taggers. A POS tagger is a tool for labeling each word in a phrase with its part-of-speech tag, and NLP applications performing various tasks use POS tagging as a crucial initial step. While POS taggers for English are widely available, there is no comparable tagger for Marathi. Marathi is a morphologically complex language with regional speech differences. Because of the ambiguity in the language, as well as its highly inflectional structure and free word order, establishing a successful POS tagger for Marathi is difficult. This article provides a comprehensive overview of POS tagging for the Marathi language and its variations, and investigates various POS tagging models and approaches. Tagging in natural language processing is analogous to tokenization in programming languages. Choosing the appropriate tag for the context can be challenging for POS taggers, and research has been conducted to find a solution to this problem.
[ "Tagging", "Speech & Audio in NLP", "Syntactic Text Processing", "Multimodality" ]
[ 63, 70, 15, 74 ]
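One of the simplest tagger designs such surveys cover is a unigram tagger with a default-tag backoff, sketched below with NLTK. The two "training" sentences are placeholder data: a real Marathi tagger would need a tagged Marathi corpus, which is exactly the resource gap the paper discusses.

```python
# Unigram POS tagger: tag each word with its most frequent training tag,
# falling back to a default tag for unseen words.
import nltk

train = [[("dogs", "NOUN"), ("bark", "VERB")],
         [("cats", "NOUN"), ("sleep", "VERB"), ("quietly", "ADV")]]

tagger = nltk.UnigramTagger(train, backoff=nltk.DefaultTagger("NOUN"))
print(tagger.tag(["cats", "bark", "loudly"]))
# [('cats', 'NOUN'), ('bark', 'VERB'), ('loudly', 'NOUN')]  <- backoff guess
```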
SCOPUS_ID:85130976758
A Comparative Text Classification Study with Deep Learning-Based Algorithms
As a well-known Natural Language Processing (NLP) task, text classification can be defined as the process of categorizing documents according to their content. In this process, selecting classification algorithms and tuning classification parameters are crucial for efficient classification. In recent years, many deep learning algorithms have been used successfully in text classification tasks. This paper presents a comparative study utilizing and optimizing several deep learning-based algorithms. We implemented deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and gated recurrent units (GRU). In addition, we performed extensive experiments tuning hyperparameters to improve classification accuracy, and implemented word embedding techniques to acquire feature vectors of the text data. We then compared the word embedding results with traditional TF-IDF vectorization results for DNN and CNN. In our experiments, we used an open-source Turkish news benchmarking dataset to compare our results with previous studies in the literature. Our experimental results revealed significant improvements in classification performance when using word embeddings with deep learning-based algorithms and tuning hyperparameters. Furthermore, our work outperformed previous results on the selected dataset.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
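The hyperparameter tuning described above amounts to retraining the same model over a grid of settings and comparing validation scores. Below is a minimal Keras sketch of that loop; the random data, grid values, and architecture are placeholder assumptions standing in for the Turkish news dataset.

```python
# Small Keras text CNN retrained over a grid of filter counts and learning
# rates; the setting with the best validation accuracy would be kept.
import numpy as np
import tensorflow as tf

X = np.random.randint(1, 5000, size=(200, 50))   # 200 fake docs, 50 tokens
y = np.random.randint(0, 5, size=(200,))         # 5 fake news categories

def build_model(filters, lr):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(5000, 64),
        tf.keras.layers.Conv1D(filters, 3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

for filters in (64, 128):
    for lr in (1e-3, 1e-4):
        hist = build_model(filters, lr).fit(X, y, epochs=2, verbose=0,
                                            validation_split=0.2)
        print(filters, lr, round(hist.history["val_accuracy"][-1], 3))
```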
SCOPUS_ID:70450177411
A Comparative Web Browser (CWB) for browsing and comparing web pages
In this paper, we propose a new type of Web browser, called the Comparative Web Browser (CWB), which concurrently presents multiple Web pages in a way that automatically synchronizes their content. The ability to view multiple Web pages at one time is useful when we wish to make a comparison on the Web, such as when comparing similar products or news articles from different newspapers. The CWB is characterized by (1) automatic content-based retrieval of passages from another Web page based on the passage of the Web page the user is reading, and (2) automatic transformation of a user's behavior (scrolling, clicking, or moving backward or forward) on one Web page into a series of behaviors on the other Web pages. The CWB tries to concurrently present "similar" passages from different Web pages, and for this purpose it automatically navigates to Web pages that contain passages similar to those of the initial Web page. Furthermore, we propose an enhancement to the CWB that enables it to use linkage information to find related documents based on link structure.
[ "Passage Retrieval", "Information Retrieval" ]
[ 66, 24 ]
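The CWB's core matching step, retrieving the passage on another page most similar to the one the user is reading, can be sketched with TF-IDF cosine similarity; the passages below are illustrative assumptions, and the actual CWB retrieval method may differ.

```python
# Given the passage being read, pick the most similar passage from another
# page by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reading = "The new phone has a bright display and a two-day battery."
other_page = ["Shipping options and return policy for all orders.",
              "Its screen is vivid and the battery easily lasts two days.",
              "Accessories include cases, chargers and headphones."]

vec = TfidfVectorizer().fit([reading] + other_page)
sims = cosine_similarity(vec.transform([reading]), vec.transform(other_page))
best = sims.argmax()
print(f"synchronize to passage {best}: {other_page[best]}")
```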