id (string, 20-52 chars) | title (string, 3-459 chars) | abstract (string, 0-12.3k chars) | classification_labels (list) | numerical_classification_labels (list)
---|---|---|---|---|
https://aclanthology.org//W07-0606/
|
A Cognitive Model for the Representation and Acquisition of Verb Selectional Preferences
|
[
"Cognitive Modeling",
"Semantic Text Processing",
"Linguistics & Cognitive NLP",
"Representation Learning"
] |
[
2,
72,
48,
12
] |
|
SCOPUS_ID:84988239801
|
A Cognitive Query Model for Arabic based on probabilistic associative morpho-phonetic Sub-Networks
|
This paper discusses some novel aspects of formalizing a Cognitive Query Model for Arabic based on constructing query-associative morpho-phonetic sub-networks in the context of Arabic query analysis and expansion. As humans tend to use a limited number of words, with possibly incomplete and ambiguous representations, when requesting information, predicting the intended information conveyed by a query's keywords can dramatically affect inter-cognitive communication. Based on the Associative Probabilistic Bi-directional Root-Pattern Relations introduced in the APRoPAT statistical language model, a cognitively motivated representation for query semantic network construction is proposed. The model attempts to predict the most plausible intended query information by constructing a morpho-phonetic cognitive sub-network from the query terms and instantiating the most probable query root-pattern phonetic vectors within the global associative network expressed by the APRoPAT model.
|
[
"Language Models",
"Cognitive Modeling",
"Semantic Text Processing",
"Syntactic Text Processing",
"Linguistics & Cognitive NLP",
"Phonetics"
] |
[
52,
2,
72,
15,
48,
64
] |
http://arxiv.org/abs/2105.07144v3
|
A Cognitive Regularizer for Language Modeling
|
The uniform information density (UID) hypothesis, which posits that speakers behaving optimally tend to distribute information uniformly across a linguistic signal, has gained traction in psycholinguistics as an explanation for certain syntactic, morphological, and prosodic choices. In this work, we explore whether the UID hypothesis can be operationalized as an inductive bias for statistical language modeling. Specifically, we augment the canonical MLE objective for training language models with a regularizer that encodes UID. In experiments on ten languages spanning five language families, we find that using UID regularization consistently improves perplexity in language models, having a larger effect when training data is limited. Moreover, via an analysis of generated sequences, we find that UID-regularized language models have other desirable properties, e.g., they generate text that is more lexically diverse. Our results not only suggest that UID is a reasonable inductive bias for language modeling, but also provide an alternative validation of the UID hypothesis using modern-day NLP tools.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
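The UID regularizer described in the abstract above can be operationalized compactly. Below is a minimal PyTorch sketch, assuming the variance of per-token surprisals as the UID penalty and a weighting coefficient `beta`; the paper's exact formulation and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def uid_regularized_loss(logits, targets, beta=0.1, pad_id=0):
    """Canonical MLE (cross-entropy) loss plus a UID penalty on the
    variance of per-token surprisals. logits: (batch, seq, vocab)."""
    log_probs = F.log_softmax(logits, dim=-1)
    # Per-token surprisal: -log p(w_t | context)
    surprisal = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = (targets != pad_id).float()
    n_tokens = mask.sum().clamp(min=1.0)
    mle_loss = (surprisal * mask).sum() / n_tokens
    mean_surprisal = (surprisal * mask).sum() / n_tokens
    var_surprisal = (((surprisal - mean_surprisal) ** 2) * mask).sum() / n_tokens
    return mle_loss + beta * var_surprisal  # uniform surprisal is rewarded
```

A uniform surprisal profile minimizes the variance term, so the regularizer pushes the model toward uniformly distributed information while the MLE term preserves the standard objective.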
SCOPUS_ID:85147730994
|
A Cognitive Solver with Autonomously Knowledge Learning for Reasoning Mathematical Answers
|
Reasoning answers to mathematical problems requires machines to think and operate like a human to learn knowledge from mathematical data, which is one of the fundamental tasks for exploring general artificial intelligence. Most solutions focus on mimicking how humans understand problems, generating the expressions necessary for the answers. However, they are still far from sufficient, since they ignore the core human ability to acquire knowledge from experience. In this paper, we propose a Cognitive Solver (CogSolver) that is capable of autonomously learning knowledge from scratch to solve mathematical problems, inspired by two cognitive science theories. Specifically, we draw one insight from dual process theory to establish an intelligent BRAIN-ARM framework, and refer to an information processing theory to summarize the knowledge learning process into Store-Apply-Update steps. In CogSolver, the BRAIN system stores three types of mathematical knowledge: semantic knowledge, relation knowledge, and mathematical rule knowledge. The ARM system then applies the knowledge in BRAIN to answer the problems. Specifically, we design a knowledge-aware module and a commutative module in ARM to improve its reasoning ability, where the knowledge is organically integrated into the answer-reasoning process. After solving the problems, BRAIN updates the stored knowledge according to the feedback of ARM, where we develop knowledge filters to eliminate redundant entries and form a more reasonable knowledge base. CogSolver carries out these three steps iteratively, behaving more like a human. We conduct extensive experiments on real-world math word problem datasets. The experimental results demonstrate the improvement in answer reasoning and clearly show how CogSolver gains knowledge from the problems, leading to superior interpretability. Our codes are available at https://github.com/bigdata-ustc/CogSolver.
|
[
"Reasoning",
"Numerical Reasoning",
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
8,
5,
48,
57
] |
http://arxiv.org/abs/2207.11716v1
|
A Cognitive Study on Semantic Similarity Analysis of Large Corpora: A Transformer-based Approach
|
Semantic similarity analysis and modeling is a fundamental and widely acclaimed task in many pioneering applications of natural language processing today. Owing to their ability to recognize sequential patterns, many neural networks such as RNNs and LSTMs have achieved satisfactory results in semantic similarity modeling. However, these solutions are considered inefficient due to their inability to process information in a non-sequential manner, leading to improper extraction of context. Transformers serve as the state-of-the-art architecture due to advantages such as non-sequential data processing and self-attention. In this paper, we perform semantic similarity analysis and modeling on the U.S. Patent Phrase to Phrase Matching Dataset using both traditional and transformer-based techniques. We experiment with four different variants of Decoding-enhanced BERT (DeBERTa) and enhance their performance with K-Fold Cross-Validation. The experimental results demonstrate our methodology's enhanced performance compared to traditional techniques, with an average Pearson correlation score of 0.79.
|
[
"Language Models",
"Semantic Text Processing",
"Semantic Similarity"
] |
[
52,
72,
53
] |
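As a small illustration of the evaluation protocol in the abstract above (K-Fold cross-validation scored by Pearson correlation), here is a sketch in which a TF-IDF + Ridge pipeline stands in for the fine-tuned DeBERTa variants; the model choice is purely illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

def kfold_pearson(texts, scores, k=4, seed=0):
    """Average Pearson r over k folds for a similarity-score regressor."""
    texts, scores = np.asarray(texts), np.asarray(scores)
    corrs = []
    for train_idx, test_idx in KFold(k, shuffle=True, random_state=seed).split(texts):
        model = make_pipeline(TfidfVectorizer(), Ridge())  # stand-in for DeBERTa
        model.fit(texts[train_idx], scores[train_idx])
        preds = model.predict(texts[test_idx])
        corrs.append(pearsonr(scores[test_idx], preds)[0])
    return float(np.mean(corrs))
```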
SCOPUS_ID:84873853655
|
A Cognitive linguistics view of terminology and specialized language
|
This book explores the importance of Cognitive Linguistics for specialized language within the context of Frame-based Terminology (FBT). FBT uses aspects of Frame Semantics, coupled with premises from Cognitive Linguistics to structure specialized domains and create non-language-specific knowledge representations. Corpus analysis provides information regarding the syntax, semantics, and pragmatics of specialized knowledge units. Also studied is the role of metaphor and metonymy in specialized texts. The first section explains the purpose and structure of the book. The second section gives an overview of basic concepts, theories, and applications in Terminology and Cognitive Linguistics. The third section explains the Frame-based Terminology approach. The fourth section explores the role of contextual information in specialized knowledge representation as reflected in linguistic contexts and graphical information. The final section highlights the conclusions that can be derived from this study.
|
[
"Cognitive Modeling",
"Semantic Text Processing",
"Explainability & Interpretability in NLP",
"Knowledge Representation",
"Linguistics & Cognitive NLP",
"Responsible & Trustworthy NLP"
] |
[
2,
72,
81,
18,
48,
4
] |
SCOPUS_ID:85038405997
|
A Cognitive, Usage-Based View on Lexical Pragmatics: Response to Hall
|
In her chapter on lexical pragmatics, Alison Hall aims at resolving the problem of contextual modulation of word meaning, where the latter is often seen as highly schematic and invariant across contexts. She suggests a model that preserves the schematic meaning yet allows for stored contextualised conceptual clusters. However, as we will show, her notion of “context-free decoded word meaning” leads to theoretical inconsistencies and does not give a sufficiently organised view on processes of contextual modulation which is often more systematic than Hall’s account suggests. In fact, her account is only one step away from a usage-based, cognitive approach which we argue presents a more viable alternative to answer the fundamental question of lexical (or linguistic) meaning and contextual modulation. The usage-based perspective of grammar as a structured inventory of symbolic units allows the seamless integration of both schematic (i.e., contextually neutral) meanings and specific (contextually-enriched) instantiations. In addition, its encyclopaedic view on meaning and its integration of general semantic operations like metaphor and metonymy resolve some of the vexed issues that have troubled linguistic theories when dealing with contextual modulation and/or semantic multiplicity.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
71,
72,
48,
57
] |
SCOPUS_ID:85079794574
|
A Cognitive-Semiotic Construal of Metaphor in Discourse: A Corpus-based Approach
|
Cognitive semiotics is a new field for the study of meaning in trans-disciplines, such as semiotics, cognitive linguistics, and corpus linguistics. This paper aims at studying how cognitive semiotics is employed to construe conceptual metaphors in discourse. We conducted a corpus-based study, with Lakoff and Johnson's Conceptual Metaphor Theory (CMT) and Fauconnier and Turner's Blending Theory (BT), to illustrate our cognitive-semiotic model for metaphors in Dragon Seed, written by Nobel Prize winner Pearl S. Buck. The major finding is that metaphors are mental constructions involving many spaces and mappings in the cognitive-semiotic network. These integration networks are related to encoders' cognitive, cultural, and social contexts. Additionally, cognitive semiotics can be employed to construe conceptual metaphors in discourse vividly and comprehensively and thus is helpful to reveal the ideology and the theme of the discourse.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
71,
72,
48,
57
] |
SCOPUS_ID:85068310394
|
A Coherence Model for Sentence Ordering
|
Text generation applications such as machine translation and automatic summarization require an additional post-processing step to enhance the readability and coherence of output texts. In this work, we identify a set of coherence features from different levels of discourse analysis; each feature contributes either positively or negatively to the coherence of the output. We propose a new model that combines these features to produce more coherent summaries for our target application, extractive summarization. The model uses a genetic algorithm to search for a better ordering of the extracted sentences to form output summaries. Experiments on two datasets using an automatic coherence assessment measure show promising results.
|
[
"Summarization",
"Reasoning",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
8,
47,
3
] |
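The genetic-algorithm search mentioned in the abstract above can be sketched briefly. In this minimal version, the fitness is simple lexical overlap between adjacent sentences, a stand-in for the paper's discourse-level coherence features; the population size, crossover, and mutation scheme are assumptions.

```python
import random

def fitness(order, sents):
    # Crude coherence proxy: Jaccard word overlap between adjacent sentences.
    score = 0.0
    for a, b in zip(order, order[1:]):
        wa, wb = set(sents[a].lower().split()), set(sents[b].lower().split())
        score += len(wa & wb) / max(1, len(wa | wb))
    return score

def order_sentences(sents, pop_size=50, generations=200, mutation_rate=0.3):
    n = len(sents)
    if n < 2:
        return list(sents)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, sents), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            # Order crossover: prefix from p1, remaining genes in p2's order.
            child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
            if random.random() < mutation_rate:
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda o: fitness(o, sents))
    return [sents[i] for i in best]
```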
SCOPUS_ID:85131162666
|
A Coherent Approach to Analyze Sentiment of Cryptocurrency
|
In this paper, we analyze real-time Twitter data on some popular cryptocurrencies, such as Bitcoin. Bitcoin has been by far the largest cryptocurrency in terms of market size; its market capitalization currently sits at over 1 trillion US dollars. Even though it has gained popularity during this decade, the cryptocurrency has still seen many significant price swings in both daily and long-term valuations. In recent times, the influence of social media platforms such as Twitter can be seen on cryptocurrency as well. Twitter is used as a news source by many users who need to buy or sell Bitcoin. Therefore, understanding the sentiment behind tweets, which has a direct impact on price direction, can help users trade cryptocurrency better. Real-time data extracted using the APIs can give the user the tweet sentiment at the time of purchase, providing an advantage in buying or selling the cryptocurrency. By combining the commonly used sentiment analysis tools VADER and TextBlob into a single model, we can obtain a more accurate sentiment analysis of the tweets.
|
[
"Sentiment Analysis"
] |
[
78
] |
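The combination the abstract above proposes is straightforward to sketch: both VADER's compound score and TextBlob's polarity live in [-1, 1], so one natural (assumed) combination is a plain average.

```python
# Requires: pip install vaderSentiment textblob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

_vader = SentimentIntensityAnalyzer()

def combined_sentiment(tweet: str) -> float:
    """Equal-weight blend of VADER and TextBlob scores, both in [-1, 1]."""
    vader_score = _vader.polarity_scores(tweet)["compound"]
    blob_score = TextBlob(tweet).sentiment.polarity
    return (vader_score + blob_score) / 2.0

print(combined_sentiment("Bitcoin is soaring today, great time to hold!"))
```

A weighted blend (or a classifier stacked on both scores) is an obvious refinement if labeled tweets are available.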
http://arxiv.org/abs/2301.08130v2
|
A Cohesive Distillation Architecture for Neural Language Models
|
A recent trend in Natural Language Processing is the exponential growth in Language Model (LM) size, which prevents research groups without the necessary hardware infrastructure from participating in the development process. This study investigates methods for Knowledge Distillation (KD) to provide efficient alternatives to large-scale models. In this context, KD means extracting information about language encoded in a neural network and in lexical knowledge databases. We developed two methods to test our hypothesis that efficient architectures can gain knowledge from LMs and extract valuable information from lexical sources. First, we present a technique to learn confident probability distributions for Masked Language Modeling by prediction weighting of multiple teacher networks. Second, we propose a method for Word Sense Disambiguation (WSD) and lexical KD that is general enough to be adapted to many LMs. Our results show that KD with multiple teachers leads to improved training convergence. When using our lexical pre-training method, LM characteristics are not lost, leading to increased performance on Natural Language Understanding (NLU) tasks over the state of the art while adding no parameters. Moreover, the improved semantic understanding of our model increased task performance beyond WSD and NLU in a real-problem scenario (plagiarism detection). This study suggests that sophisticated training methods and network architectures can be superior to scaling trainable parameters. On this basis, we suggest that the research area should encourage the development and use of efficient models and weigh the impact of growing LM size equally against task performance.
|
[
"Language Models",
"Responsible & Trustworthy NLP",
"Semantic Text Processing",
"Green & Sustainable NLP"
] |
[
52,
4,
72,
68
] |
SCOPUS_ID:85107791309
|
A Collaboration Multi-Domain Sentiment Classification on Specific Domain and Global Features
|
Sentiment classification has been attracting increasing attention with the growth of textual data created on the Internet. Text review data cover a wide range of fields, and sentiment classification is widely known as a highly domain-dependent problem. Unfortunately, existing methods achieve good results only in domains with a large amount of labeled training data. Some researchers apply classifiers learned from a source domain to a target domain through transfer learning, which still requires the target domain to have enough unlabeled data to learn the similarity between the domains. In this paper, we propose a collaborative domain-specific and global multi-domain sentiment classification approach with logistic regression. We train a domain-specific sentiment classifier for each source domain, reconstruct the source domain datasets, and train global sentiment classifiers. The domain-specific sentiment classifier captures domain-specific sentiment features, and the global sentiment classifier captures general sentiment knowledge. Finally, taking the output of the first layer as the input of the second layer, a two-level cross-domain sentiment classification model is constructed with logistic regression. Experimental results on benchmark datasets show that the proposed approach can effectively improve the performance of multi-domain sentiment classification and significantly outperforms baseline methods.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85110904742
|
A Collaborative AI-Enabled Pretrained Language Model for AIoT Domain Question Answering
|
Large-scale knowledge in the artificial intelligence of things (AIoT) field urgently needs effective models to understand human language and automatically answer questions. Pretrained language models achieve state-of-the-art performance on some question answering (QA) datasets, but few models can answer questions on AIoT domain knowledge. Currently, the AIoT domain lacks sufficient QA datasets and large-scale pretraining corpora. In this article, we propose RoBERTa_AIoT to address the problem of the lack of high-quality large-scale labeled AIoT QA datasets. We construct an AIoT corpus to further pretrain RoBERTa and BERT. RoBERTa_AIoT and BERT_AIoT leverage unsupervised pretraining on a large corpus composed of AIoT-oriented Wikipedia webpages to learn more domain-specific context and improve performance on the AIoT QA tasks. To fine-tune and evaluate the model, we construct three AIoT QA datasets based on community QA websites. We evaluate our approach on these datasets, and the experimental results demonstrate the significant improvements of our approach.
|
[
"Language Models",
"Natural Language Interfaces",
"Semantic Text Processing",
"Question Answering"
] |
[
52,
11,
72,
27
] |
SCOPUS_ID:85141165752
|
A Collaborative Approach to Support Medication Management in Older Adults with Mild Cognitive Impairment Using Conversational Assistants (CAs)
|
Improving medication management for older adults with Mild Cognitive Impairment (MCI) requires designing systems that support functional independence and provide compensatory strategies as their abilities change. Traditional medication management interventions emphasize forming new habits alongside the traditional path of learning to use new technologies. In this study, we navigate designing for older adults with gradual cognitive decline by creating a conversational check-in system for routine medication management. We present the design of MATCHA (Medication Action To Check-In for Health Application), informed by exploratory focus groups and design sessions conducted with older adults with MCI and their caregivers, alongside our evaluation based on a two-phased deployment period of 20 weeks. Our results indicate that a conversational check-in medication management assistant increased system acceptance while also potentially decreasing the likelihood of accidental over-medication, a common concern for older adults dealing with MCI.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85083071248
|
A Collaborative Framework Based for Semantic Patients-Behavior Analysis and Highlight Topics Discovery of Alcoholic Beverages in Online Healthcare Forums
|
Medical data in online groups and social media contain valuable information, provided by both healthcare professionals and patients. In fact, patients can talk freely and share their personal experiences. These resources are a valuable opportunity for health professionals, who can access patients' opinions as well as discussions between patients. Recently, processing data from health communities and extracting knowledge from them has become a significant technical challenge. There are many online groups and forums where users can discuss healthcare issues. We can therefore examine these text documents to discover knowledge and evaluate patients' behavior based on their opinions and discussions. For example, there are many question-and-answer groups on Twitter and Facebook. Given the importance of this research, in this paper we present a semantic framework based on a topic model (LDA) and Random Forest (RF) to predict and retrieve latent topics of healthcare text documents from an online forum. We extract our healthcare records (patient questions) from the patient.info website as a real dataset. Experiments on our dataset show that social media forums could help in detecting significant patient safety problems in healthcare.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85122576192
|
A Collaborative Optimization-Guided Entity Extraction Scheme
|
Entity extraction, one of the most basic tasks in information extraction and retrieval, has always been an important research area in natural language processing. Considering that most traditional entity extraction methods need their hyperparameters adjusted manually, which takes a lot of time and easily falls into local optima, this paper proposes a novel scheme to extract named entities in which the model hyperparameters are adjusted automatically to improve extraction performance. The proposed scheme is composed of bi-directional encoder representations from transformers (BERT) and a conditional random field (CRF). Specifically, through the fusion of a collaborative computing paradigm, a particle swarm optimization (PSO) algorithm is used to search for the best hyperparameter values automatically in a cooperative way. Experimental results on two public datasets and a steel-inquiry dataset verify that the proposed scheme can effectively improve the performance of entity extraction.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
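The collaborative PSO hyperparameter search from the abstract above can be sketched generically. The objective below is a toy stand-in; in the paper's setting it would be, say, 1 minus the validation F1 of the BERT-CRF extractor trained with the candidate learning rate and dropout. The bounds and PSO coefficients are assumptions.

```python
import random

def pso(objective, bounds, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    """Minimize a black-box objective over box-bounded hyperparameters."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
        gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    return gbest

# Toy objective standing in for "1 - validation F1 of BERT-CRF":
best_lr, best_dropout = pso(lambda p: (p[0] - 3e-5) ** 2 + (p[1] - 0.1) ** 2,
                            bounds=[(1e-5, 1e-4), (0.0, 0.5)])
```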
SCOPUS_ID:79961214576
|
A Collision Theory inspired model for categorization of Wikipedia documents
|
Research in Information Retrieval has been inspired by various models in mathematics and physics. This paper presents a novel approach for text categorization of semi-structured Wikipedia documents by using the principles in Collision Theory. Profiles are created by analyzing the distribution of features within different structural elements. Mean Free Path (MFP), a concept derived from the Collision Theory which considers positions and weights of features in test documents, is applied to identify the correct category.
|
[
"Linguistic Theories",
"Text Classification",
"Linguistics & Cognitive NLP",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
57,
36,
48,
24,
3
] |
SCOPUS_ID:85081950265
|
A Combination Part of Speech Tagger using Selected Voting Methods
|
The development of resources in any language is an expensive process; many languages, including the indigenous languages of South Africa, can be classified as resource-scarce, or lacking in tagging resources. This study investigates and applies techniques and methodologies for optimising the use of available resources and improving the accuracy of a tagger, using Afrikaans as a resource-scarce language, and aims to determine whether combination techniques can be effectively applied to improve the accuracy of a tagger for Afrikaans. To do this, existing methodologies for combining classification algorithms are investigated. Four taggers, trained using MBT, SVMlight, MXPOST and TnT respectively, are then combined into a combination tagger using weighted voting. Weights are calculated by means of total precision, tag precision, and a combination of precision and recall. Although the combination of taggers does not consistently lead to an error rate reduction with regard to the baseline, it achieves an error rate reduction of up to 14.54% in some cases.
|
[
"Tagging",
"Syntactic Text Processing"
] |
[
63,
15
] |
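Weighted voting over tagger outputs, as in the study above, reduces to a few lines. The taggers and weights below are placeholders, not the MBT/SVMlight/MXPOST/TnT models themselves; in the paper the weights come from total or per-tag precision.

```python
from collections import defaultdict

def vote(tag_sequences, weights):
    """tag_sequences: one tag list per tagger for the same sentence."""
    voted = []
    for position_tags in zip(*tag_sequences):
        scores = defaultdict(float)
        for tagger_idx, tag in enumerate(position_tags):
            scores[tag] += weights[tagger_idx]  # e.g., weight = tagger precision
        voted.append(max(scores, key=scores.get))
    return voted

# Three hypothetical taggers disagreeing on the second token:
outputs = [["DET", "NOUN", "VERB"], ["DET", "VERB", "VERB"], ["DET", "NOUN", "VERB"]]
print(vote(outputs, weights=[0.95, 0.91, 0.93]))  # -> ['DET', 'NOUN', 'VERB']
```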
SCOPUS_ID:85086034781
|
A Combination of DWT CLAHE and Wiener Filter for Effective Scene to Text Conversion and Pronunciation
|
An effective scene-to-text conversion and pronunciation system is realized. An intelligent combination of the Discrete Wavelet Transform (DWT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Wiener filtering and adaptive weighted averaging is utilized for image enhancement. Subsequently, the Maximally Stable Extremal Region (MSER) method is used to detect the text regions, after which geometrical and contour-based approaches filter out the non-text MSERs. The connected-component concept is used to group the text candidates. In the next step, Optical Character Recognition (OCR) recognizes the text, and the Microsoft text-to-speech synthesizer pronounces the extracted text. The system's applicability is tested using the standard robust reading competition dataset. The designed method secures 93% precision in text segmentation and 89.9% precision in end-to-end recognition.
|
[
"Visual Data in NLP",
"Speech & Audio in NLP",
"Multimodality"
] |
[
20,
70,
74
] |
SCOPUS_ID:85118314026
|
A Combination of Enhanced WordNet and BERT for Semantic Textual Similarity
|
The task of measuring sentence similarity deals with computing the likeness between a pair of sentences by adopting Natural Language Processing techniques (Euclidean distance, Jaccard distance, Manhattan distance, etc.) as well as embedding techniques (word2vec, GloVe, Flair, etc.). For the purpose of determining sentence similarity, this paper proposes a novel, ensemble learning approach which uses the WordNet corpus and the Bidirectional Encoder Representations from Transformers (BERT) in order to consider the context of words in sentences while computing the similarity scores. The accuracy of the proposed model is computed by calculating the Pearson and Spearman scores for the sentence pairs from the Sentences Involving Compositional Knowledge (SICK) dataset. On analyzing the results, the proposed approach is observed to outperform existing state-of-the-art semantic textual similarity models since it returns the highest correlation scores. Further, this paper also introduces a possible machine learning approach for the same and evaluates its scope and drawbacks.
|
[
"Language Models",
"Semantic Text Processing",
"Semantic Similarity",
"Representation Learning"
] |
[
52,
72,
53,
12
] |
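In the spirit of the ensemble above, here is a rough sketch that blends a contextual BERT-style similarity with a WordNet-based one. The 50/50 blend, the sentence-transformers model, and the path-similarity scoring are all illustrative assumptions rather than the paper's exact design.

```python
# Requires: pip install sentence-transformers nltk, plus nltk.download('wordnet')
from itertools import product
from nltk.corpus import wordnet as wn
from sentence_transformers import SentenceTransformer, util

_bert = SentenceTransformer("all-MiniLM-L6-v2")

def wordnet_sim(s1: str, s2: str) -> float:
    """Average, over word pairs, of the best synset path similarity."""
    sims = []
    for w1, w2 in product(s1.lower().split(), s2.lower().split()):
        pair = [a.path_similarity(b) or 0.0
                for a, b in product(wn.synsets(w1), wn.synsets(w2))]
        if pair:
            sims.append(max(pair))
    return sum(sims) / len(sims) if sims else 0.0

def combined_sim(s1: str, s2: str) -> float:
    emb = _bert.encode([s1, s2])
    bert_score = float(util.cos_sim(emb[0], emb[1]))
    return 0.5 * bert_score + 0.5 * wordnet_sim(s1, s2)
```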
SCOPUS_ID:85090577985
|
A Combination of Frequent Pattern Mining and Graph Traversal Approaches for Aspect Elicitation in Customer Reviews
|
Due to the remarkable increase in e-commerce transactions, people try to make appropriate purchase choices by considering the experiences other people share in product and service reviews. Automatic analysis of such corpora requires enhanced algorithms based on natural language processing and opinion mining. Moreover, linguistic differences make extending existing algorithms from one language to another challenging and in some cases impossible. Opinion mining addresses different aspects of review analysis, such as spam detection, aspect elicitation, and polarity allocation. In this article, we focus on the detection of explicit aspects and propose a methodology to handle some difficult and problematic aspect compounds in multi-word format in the Persian language. Our approach constructs a directed weighted graph (the ADG structure) from information yielded by the FP-Growth frequent pattern identification algorithm on our corpus of Persian sentences. Traversing special paths within the ADG graph according to our developed rules leads us to the extraction of problematic multi-word aspects. We utilize the Neo4j NoSQL graph database environment and its Cypher query language to create the ADG graph and access the desired paths that reflect our developed rules on the ADG structure, which lead us to extract the multi-word aspects. The evaluation of our methodology against existing approaches to aspect derivation in the Persian language, including ELDA, SAM, an MMI-based and an LRT-based algorithm, indicates the robustness of our approach.
|
[
"Opinion Mining",
"Multimodality",
"Structured Data in NLP",
"Sentiment Analysis"
] |
[
49,
74,
50,
78
] |
SCOPUS_ID:85102892366
|
A Combination of Lexicon-Based and Classified-Based methods for Sentiment Classification based on Bert
|
Sentiment classification is a crucial problem in natural language processing and is essential for understanding user opinions. There are two main approaches to this problem: the classifier-based method and the lexicon-based method. However, neither method performs well on long sequences, and each has its own advantages and disadvantages. This paper introduces a new method called Lexiconed BERT, which combines the strengths and filters out the weaknesses of the two methods. The evaluation shows that our model achieves excellent results on long sentences and reduces resource consumption significantly.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
78,
24,
3
] |
SCOPUS_ID:85089201508
|
A Combination of Machine Learning and Lexicon Based Techniques for Sentiment Analysis
|
Today millions of web users put their opinions on the internet about various topics. Developing methods that automatically categorize these opinions as positive, negative or neutral is important. Opinion mining, or sentiment analysis, is the mining of behavior, opinions and sentiments from text, chat, etc. using natural language processing and information retrieval methods. This paper studies the effect of combining machine learning methods in a meta-classifier for sentiment analysis. The machine learning methods use the output of lexicon-based techniques: the scores from the SentiWordNet dictionary, Liu's sentiment list and SentiStrength, together with sentiment-word ratios, are computed and used as input to the machine learning techniques. The adjectives, adverbs and verbs of an opinion are used for opinion modeling, and the scores of these words are extracted from the lexicons. Experimental results show that the meta-classifier improves classification accuracy by 0.9% and 1.09% for Amazon and IMDB reviews, respectively, in comparison with the four machine learning techniques evaluated here.
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
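The feature pipeline described above (lexicon scores fed into machine-learning classifiers) can be sketched as follows. Only SentiWordNet is shown; the paper also uses Liu's sentiment list and SentiStrength, and taking the first sense per word is a simplifying assumption.

```python
# Requires nltk with the 'punkt', 'wordnet' and 'sentiwordnet' data downloaded.
import nltk
from nltk.corpus import sentiwordnet as swn
from sklearn.linear_model import LogisticRegression

def lexicon_features(text: str):
    """Average positive/negative SentiWordNet scores per token."""
    tokens = nltk.word_tokenize(text.lower())
    pos_score = neg_score = 0.0
    for tok in tokens:
        synsets = list(swn.senti_synsets(tok))
        if synsets:  # first sense as a heuristic
            pos_score += synsets[0].pos_score()
            neg_score += synsets[0].neg_score()
    n = max(1, len(tokens))
    return [pos_score / n, neg_score / n]

# Feed the lexicon scores to any sklearn classifier (here, logistic regression):
X = [lexicon_features(t) for t in ["great movie, loved it", "terrible, awful film"]]
clf = LogisticRegression().fit(X, [1, 0])
```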
SCOPUS_ID:85125262042
|
A Combination of Resampling Method and Machine Learning for Text Classification on Imbalanced Data
|
Imbalanced data affect the accuracy of text classification. To address this issue, 11 different algorithms are used to resample the dataset. Results show that five different oversampling methods and the SmoteTomek method can rebalance the dataset effectively, noticeably improving the recognition rate of models on the minority class, while undersampling methods decrease the overall accuracy of models on the imbalanced dataset. Meanwhile, seven different machine learning algorithms are used to train models on datasets resampled by the SmoteTomek algorithm. In this combination, the Naive Bayes and Logistic Regression algorithms perform best: they significantly improve the predictive ability of models on the minority class without decreasing overall accuracy. So, in handling multi-class imbalanced text classification, Naive Bayes or Logistic Regression combined with the SmoteTomek resampling method should be preferred.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
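The preferred combination reported above, SmoteTomek resampling followed by Naive Bayes or logistic regression, maps directly onto imbalanced-learn and scikit-learn. TF-IDF features are an assumption; any vectorization works.

```python
# Requires: pip install imbalanced-learn scikit-learn
from imblearn.combine import SMOTETomek
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def train_on_imbalanced(texts, labels):
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    # Oversample minority classes (SMOTE) and clean boundaries (Tomek links).
    X_res, y_res = SMOTETomek(random_state=42).fit_resample(X, labels)
    clf = MultinomialNB().fit(X_res, y_res)
    return vec, clf
```

Note that SMOTE interpolates between minority-class neighbours, so each minority class needs more samples than the `k_neighbors` setting (5 by default).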
SCOPUS_ID:85125579672
|
A Combination of Resampling and Ensemble Method for Text Classification on Imbalanced Data
|
One of the major factors that can affect the accuracy of text classification is an imbalanced dataset. To find a suitable method to handle this issue, six different ensemble methods are used to train models on an imbalanced dataset. The results show that, without resampling the dataset, the Stacking algorithm performs better than the other ensemble methods and can increase the recall of the minority class by 19.3%. Ensemble algorithms combined with resampling methods are also used to train models. Results show that ensemble algorithms combined with an undersampling method (RUS) can improve the predictive ability of models on the minority class, but reduce the accuracy of models on the majority classes because of feature dropping, while the Voting algorithm combined with an oversampling method (SmoteTomek) can improve the recall of the minority class by 40.4% without decreasing the accuracy of models on the majority classes. Overall, in training a text classification model on multi-class imbalanced datasets, the Voting algorithm combined with SmoteTomek can be a preference.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85135747424
|
A Combination of Sentiment Analysis Systems for the Study of Online Travel Reviews: Many Heads are Better than One
|
This study presents an analysis of the Rest-Mex forum task 2021, the first international evaluation event using tourism-related data (Online Travel Reviews, OTRs) from Mexico. In that forum, 14 specialized sentiment analysis systems were presented. The main contribution of this research is a method to successfully combine those 14 systems, each specialized in sentiment analysis for OTRs. The outputs of the 14 systems were used to evaluate the proposed combination schemes. The systems were trained and tested with 7,413 OTRs from the city of Guanajuato, Mexico, a well-known cultural destination, all collected from TripAdvisor. We propose three schemes for combining the systems to predict the polarity of OTRs efficiently. The combination based on deep learning significantly improves each of the results obtained by the sentiment analysis systems at the individual level, and the results were improved for 4 out of the 5 polarity classes in the collection. To the best of our knowledge, this is the first paper that reports results from combining different specialized sentiment analysis systems for OTRs.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85145235609
|
A Combination of BERT and Transformer for Vietnamese Spelling Correction
|
Recently, many studies have shown the efficiency of using Bidirectional Encoder Representations from Transformers (BERT) in various Natural Language Processing (NLP) tasks. Specifically, the English spelling correction task using an Encoder-Decoder architecture that takes advantage of BERT has achieved state-of-the-art results. However, to our knowledge, there is no such implementation for Vietnamese yet. Therefore, in this study, a combination of the Transformer architecture (state of the art for Encoder-Decoder models) and BERT is proposed to deal with Vietnamese spelling correction. The experimental results show that our model outperforms other approaches as well as the Google Docs spell-checking tool, achieving an 86.24 BLEU score on this task.
|
[
"Language Models",
"Text Error Correction",
"Semantic Text Processing",
"Syntactic Text Processing"
] |
[
52,
26,
72,
15
] |
SCOPUS_ID:85056151999
|
A Combined Approach Using Semantic Role Labelling and Word Sense Disambiguation for Question Generation and Answer Extraction
|
Most question answering systems are used to predict an expected answer type given a question. In this work, we present a question answering system based on the combined approach of Word Sense Disambiguation (WSD) and Semantic Role Labeling (SRL). Our motivation is to generate reasonable questions and to solve the co-reference problems that arise during answer extraction. The proposed model is a factoid, sense-based question generation system. We use the Lesk algorithm for WSD and the Senna tool for SRL. Based on the sense associated with the sentence, the system generates semantically resolvable questions. Using deep syntactic and semantic analysis, we extract an answer for the given question; the Hobbs algorithm resolves the co-reference problems that arise in answer extraction. The experimental results are promising for the proposed approach.
|
[
"Semantic Text Processing",
"Word Sense Disambiguation",
"Semantic Parsing",
"Question Answering",
"Question Generation",
"Natural Language Interfaces",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
72,
65,
40,
27,
76,
11,
47,
3
] |
SCOPUS_ID:84964766215
|
A Combined Approach for Disease/Disorder Template Filling
|
Disease/disorder template filling is a complicated relation extraction task, requiring a combination of several methods to solve. The aim of this paper is to propose a combined approach for disorder template filling. The system combines three methods: rule-based, regular-expression, and machine-learning-based. This system adds several features for the machine-learning-based method in comparison with the system we proposed in Task 2 of the ShARe/CLEF eHealth Evaluation Lab 2014 [6]. The rule set is established by observing instances of diseases/disorders in their dependency-tree presentations. The regular-expression method uses the rules in HeidelTime [2]. The machine learning method uses the SVM algorithm to train the classification model based on the added features; this addition increases the result on the DocTime Class attribute by up to 6%. The system obtains an overall accuracy of 0.833, an F1-score of 0.445, a precision of 0.406, and a recall of 0.516.
|
[
"Relation Extraction",
"Information Extraction & Text Mining"
] |
[
75,
3
] |
SCOPUS_ID:85105956410
|
A Combined Approach for Text Detection in Images Using MLP Neural Networks and Image Processing
|
Text detection in images is an important research area that has attracted the attention of many researchers. Due to growing image databases, retrieving information from databases in less time and with maximum precision has become more important. The low-level features extracted from images and videos include metrics of color, texture and shape; although these features can easily be obtained, they are not accurate enough to extract the content. Today, text retrieval in images is an important part of content recovery. Detection of vehicle plates, text in ads or video frames, book titles, and addresses on mailing envelopes are among the applications of text mining. In the current article, a method combining image processing and neural networks for text detection in images is presented. In this approach, after localizing text in the image using an MLP (Multi-Layer Perceptron) neural network, the text in the images is detected. Finally, the performance of the MLP and LVQ (Learning Vector Quantization) neural networks for text detection is compared. The results show that the MLP neural network, with the algorithm implemented in this work, has better performance.
|
[
"Visual Data in NLP",
"Information Retrieval",
"Multimodality"
] |
[
20,
24,
74
] |
http://arxiv.org/abs/1807.02911v3
|
A Combined CNN and LSTM Model for Arabic Sentiment Analysis
|
Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features, and Long Short-Term Memory (LSTM) networks have proven good at learning sequential data. Both approaches have been reported to provide improved results in areas such as image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic sentiment classification because Arabic is a morphologically rich language. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with the limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we consider the morphological diversity of particular Arabic words by using different sentiment classification levels.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
78,
24,
3
] |
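A generic Keras sketch of the CNN+LSTM integration investigated above: a convolutional layer extracts local n-gram features, an LSTM models their sequence, and a dense layer classifies the sentiment. Layer sizes, vocabulary size, and the binary output are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

model = Sequential([
    Embedding(input_dim=50_000, output_dim=128),   # token embeddings
    Conv1D(64, kernel_size=5, activation="relu"),  # local n-gram features
    MaxPooling1D(pool_size=2),
    LSTM(64),                                      # sequential modelling
    Dense(1, activation="sigmoid"),                # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```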
SCOPUS_ID:85103798480
|
A Combined Extractive with Abstractive Model for Summarization
|
Aiming at the difficulties of document-level summarization, this paper presents a two-stage, extractive-then-abstractive summarization model. In the first stage, we extract important sentences by combining a sentence-similarity matrix (used only in the first round) or a pseudo-title, taking full account of features such as sentence position and paragraph position, to extract coarse-grained sentences from a document while considering how the most important sentences in the document differ. The second stage is abstractive: we use a beam search algorithm to restructure and rewrite the syntactic blocks of the extracted sentences. The newly generated summary sentence serves as the pseudo-summary for the next round, and the globally optimal pseudo-title acts as the final summary. Extensive experiments have been performed on the corresponding dataset, and the results show that our model obtains better results.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85125416882
|
A Combined Model of NLP with Business Process Modelling for Sentiment Analysis
|
Natural language processing is one of the main sub-fields of artificial intelligence, in which end users of the internet use their native language in real time to extract required information. This work concentrates on sentiment analysis on social media, which comprises text, numbers, hashtags, symbolic representations and much more; handling such unstructured data is tedious. Hence, the solution makes use of basic NLP tools and then implements a Markov decision process to translate the native-language input into an SQL query. An implementation idea for business process models is also incorporated to capture the different analysis states of the input data. The accuracy achieved through deep learning models in this proposed work is greater than that of other machine learning and plain corpus methodologies.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85087398989
|
A Combined Weighting for the Feature-Based Method on Topological Parameters in Semantic Taxonomy Using Social Media
|
Textual analysis has become a most important task due to the rapid increase in the number of texts continuously generated in several forms, such as posts and chats on social media, emails, articles, and news. Managing these texts requires efficient and effective methods that can handle the linguistic issues arising from the complexity of natural languages. In recent years, the exploitation of semantic features from lexical sources has been widely investigated by researchers to deal with the issues of synonymy and ambiguity in social media tasks such as document clustering. The main challenges of exploiting lexical knowledge sources such as WordNet 3.1 in these tasks are how to integrate the various types of semantic relations to capture additional semantic evidence, and how to settle the high dimensionality of current semantic representation approaches. In this paper, we propose a feature weighting for a new semantic feature-based method that combines four elements: synonymy, hypernymy, non-taxonomic relations, and glosses. This research thus proposes a new knowledge-based semantic representation approach for text mining that can handle the linguistic issues as well as the high-dimensionality issue. The proposed approach consists of two main components: a feature-based method for incorporating the relations in the lexical sources, and a topic-based reduction method to overcome the high-dimensionality issue. The proposed approach is evaluated using WordNet 3.1 on text clustering and text classification.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Text Clustering",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
29,
24,
3
] |
SCOPUS_ID:85084930267
|
A Combined method to model policy interventions for local communities based on people knowledge
|
Policy interventions to promote innovative industries in peripheral regions are often hampered by a lack of information on the functioning of the local socio-economic systems, due to their complexity. This might result in mismatches between policy objectives and the actual needs and capabilities of local communities. To overcome this drawback, it is crucial to obtain appropriate knowledge of the local system, which is nevertheless typically embedded in local actors' minds in uncodified and tacit form. Fuzzy Cognitive Maps (FCMs) have been employed to decode this kind of knowledge in a reproducible manner. However, some problems remain as to how to integrate the necessary vagueness of local actors' heuristics with experts' knowledge into a rational framework. The following methodology customization is proposed: • Combine the FCMs with Discourse Analysis to obtain the relevant narratives (i.e. concepts, visions, insights, etc.) needed to define system boundaries and variables. • Employ individual interviews, rather than a participatory approach, to define the causal relations among system variables. • Integrate the tacit and uncodified knowledge embedded in local actors with experts' scientific knowledge.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85109158489
|
A Combined-Convolutional Neural Network for Chinese News Text Classification
|
At present, most research on news classification concerns English, and traditional machine learning methods suffer from incomplete extraction of local text-block features when processing long texts. To solve the lack of a special term set for Chinese news classification, a vocabulary suitable for Chinese text classification is produced by constructing a data-indexing method, and text features are constructed in combination with word2vec pre-trained word vectors. To solve the problem of incomplete feature extraction, the effects of different convolution and pooling operations on the classification results are studied by improving the structure of the classical convolutional neural network model. To improve the precision of Chinese news text classification, this paper proposes and implements a combined-convolutional neural network model and designs an effective method for model regularization and optimization. The experimental results show that the precision of the combined-convolutional neural network model on Chinese news text classification reaches 93.69%, which is 6.34% and 1.19% higher than the best traditional machine learning method and the classic convolutional neural network model, respectively, and it is better than the comparison models in recall and F-measure.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
SCOPUS_ID:85105637998
|
A Comment on Lemanski’s “Concept Diagrams and the Context Principle”
|
In this paper I make some critical comments on Jens Lemanski’s article “Concept Diagrams and the Context Principle,” mainly on his “rational representationalism.” This is a view concerning the question whether it is either concepts, judgements, or inferences that may count as the primary element of a “logic.” It suggests that concepts are considered to be primary for the explanation of linguistic meaning, whereas judgements are considered to be primary regarding our understanding of language. Criticism is put forward on the issues whether Schopenhauer proposed a “phylogenetic abstraction theory” (as Lemanski puts it) and a use theory of meaning. Also, a critique of the general idea of rational representationalism that Schopenhauerian concept diagrams can play the role of mediation between intuitive representation and rationality is developed.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
https://aclanthology.org//W17-3519/
|
A Commercial Perspective on Reference
|
I briefly describe some of the commercial work which XXX is doing in referring expression algorithms, and highlight differences between what is commercially important (at least to XXX) and the NLG research literature. In particular, XXX is less interested in generic reference algorithms than in high-quality algorithms for specific types of references, such as components of machines, named entities, and dates.
|
[
"Text Generation"
] |
[
47
] |
SCOPUS_ID:85117917779
|
A Common Formal Framework for Factorial and Probabilistic Topic Modelling Techniques
|
Topic modelling is nowadays one of the most popular techniques used to extract knowledge from texts. Several families of methods address this problem, among them 1) factorial methods, 2) probabilistic methods and 3) natural language processing methods. In this paper, a common conceptual framework is provided for factorial and probabilistic methods by identifying common elements and describing them with common, homogeneous notation, and 7 different methods are described accordingly. Under a common notation it is easy to make a comparative analysis and see how flexible, or more or less realistic, the assumptions made by the different methods are. This is the first step towards a wider analysis in which all the families can be related to this common conceptual framework, deepening the understanding of the strengths and weaknesses of each method and elaborating general guidelines that provide application criteria. The paper ends with a discussion comparing the presented methods and future research lines.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
https://aclanthology.org//2021.mtsummit-up.18/
|
A Common Machine Translation Post-Editing Training Protocol by GALA
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
http://arxiv.org/abs/2001.06381v2
|
A Common Semantic Space for Monolingual and Cross-Lingual Meta-Embeddings
|
This paper presents a new technique for creating monolingual and cross-lingual meta-embeddings. Our method integrates multiple word embeddings created from complementary techniques, textual sources, knowledge bases and languages. Existing word vectors are projected to a common semantic space using linear transformations and averaging. With our method the resulting meta-embeddings maintain the dimensionality of the original embeddings without losing information while dealing with the out-of-vocabulary problem. An extensive empirical evaluation demonstrates the effectiveness of our technique with respect to previous work on various intrinsic and extrinsic multilingual evaluations, obtaining competitive results for Semantic Textual Similarity and state-of-the-art performance for word similarity and POS tagging (English and Spanish). The resulting cross-lingual meta-embeddings also exhibit excellent cross-lingual transfer learning capabilities. In other words, we can leverage pre-trained source embeddings from a resource-rich language in order to improve the word representations for under-resourced languages.
|
[
"Multilinguality",
"Semantic Text Processing",
"Cross-Lingual Transfer",
"Representation Learning"
] |
[
0,
72,
19,
12
] |
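The core operation described above, projecting each source embedding space into a common space with a linear transformation and then averaging, can be sketched with numpy. Least squares over a shared vocabulary stands in for the paper's learned transformations.

```python
import numpy as np

def project(source_vecs, target_vecs):
    """Linear map W minimizing ||source @ W - target|| over shared words."""
    W, *_ = np.linalg.lstsq(source_vecs, target_vecs, rcond=None)
    return W

def meta_embed(spaces, anchor):
    """spaces: list of (V, d_i) matrices over the same word list;
    anchor: the (V, d) space all others are projected into."""
    projected = [space @ project(space, anchor) for space in spaces]
    return np.mean(projected + [anchor], axis=0)  # average in the common space
```

Out-of-vocabulary handling and the exact projection objective would follow the paper; this only shows the project-then-average skeleton.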
https://aclanthology.org//W00-1009/
|
A Common Theory of Information Fusion from Multiple Text Sources Step One: Cross-Document Structure
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
11,
38,
48,
57
] |
|
http://arxiv.org/abs/0909.2718v1
|
A Common XML-based Framework for Syntactic Annotations
|
It is widely recognized that the proliferation of annotation schemes runs counter to the need to re-use language resources, and that standards for linguistic annotation are becoming increasingly mandatory. To answer this need, we have developed a framework comprised of an abstract model for a variety of different annotation types (e.g., morpho-syntactic tagging, syntactic annotation, co-reference annotation, etc.), which can be instantiated in different ways depending on the annotator's approach and goals. In this paper we provide an overview of the framework, demonstrate its applicability to syntactic annotation, and show how it can contribute to comparative evaluation of parser output and diverse syntactic annotation schemes.
|
[
"Syntactic Text Processing"
] |
[
15
] |
http://arxiv.org/abs/2212.00298v1
|
A Commonsense-Infused Language-Agnostic Learning Framework for Enhancing Prediction of Political Polarity in Multilingual News Headlines
|
Predicting the political polarity of news headlines is a challenging task that becomes even more challenging in a multilingual setting with low-resource languages. To deal with this, we propose to utilise inferential commonsense knowledge via a Translate-Retrieve-Translate strategy to introduce a learning framework. To begin with, we use translation and retrieval to acquire the inferential knowledge in the target language. We then employ an attention mechanism to emphasise important inferences. We finally integrate the attended inferences into a multilingual pre-trained language model for the task of bias prediction. To evaluate the effectiveness of our framework, we present a dataset of over 62.6K multilingual news headlines in five European languages annotated with their respective political polarities. We evaluate several state-of-the-art multilingual pre-trained language models, since their performance tends to vary across (low/high-resource) languages. Evaluation results demonstrate that our proposed framework is effective regardless of the models employed. Overall, the best-performing model trained with only headlines shows 0.90 accuracy and F1, and 0.83 Jaccard score. With attended knowledge in our framework, the same model shows an increase of 2.2% in accuracy and F1, and 3.6% in Jaccard score. Extending our experiments to individual languages reveals that the models we analyze perform significantly worse for Slovenian than for the other languages in our dataset. To investigate this, we assess the effect of translation quality on prediction performance; the disparity in performance is most likely due to poor translation quality. We release our dataset and scripts at: https://github.com/Swati17293/KG-Multi-Bias for future research. Our framework has the potential to benefit journalists, social scientists, news producers, and consumers.
|
[
"Language Models",
"Machine Translation",
"Semantic Text Processing",
"Commonsense Reasoning",
"Text Generation",
"Reasoning",
"Information Retrieval",
"Multilinguality"
] |
[
52,
51,
72,
62,
47,
8,
24,
0
] |
SCOPUS_ID:85060024758
|
A Community Based Web Summarization in Near Linear Time
|
The rapid growth of web users searching for their topics of interest on the web poses challenges to the system, in particular to search engines. Web content summarization is one crucial application that helps improve the performance of search engines. However, summarizing the totality of web content is a laborious task due to the massiveness of web data. Segmenting the web into communities and extracting only relevant pages from those communities for summarization could be a viable solution. This paper presents a novel technique for web summarization that extracts pages from the web according to their degree of authenticity. For this, a large collection of pages is crawled from the web and the communities are identified in linear time based on edge streaming in a graph. Then, through link analysis, the more authentic pages are identified for summarization. The proposed method is validated through experimentation using real and synthetic data. The results indicate that the proposed model is useful for building an optimized search engine.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
https://aclanthology.org//W00-0311/
|
A Compact Architecture for Dialogue Management Based on Scripts and Meta-Outputs
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
SCOPUS_ID:85056508289
|
A Compact Encoding for Efficient Character-level Deep Text Classification
|
This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent, requires no pretraining, and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors, and it can even represent words unseen in the training dataset. Similarly, being compact yet sparse, it is ideal for speeding up training times with tensor processing libraries. As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for character-level text classification. We apply two CNN variants coupled with it. Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training on commodity hardware.
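As one plausible instantiation of the frequency-based coding idea (the abstract does not name a specific code, so Huffman coding is an assumption here), the sketch below builds lossless variable-length codes in which frequent characters receive shorter codes:

```python
# Sketch: lossless variable-length character codes; frequent chars get short codes.
import heapq
from collections import Counter

def char_codes(text: str) -> dict:
    """Huffman-style {char: bitstring} table built from character frequencies."""
    heap = [[freq, [char, ""]] for char, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {char: code for char, code in heap[0][1:]}

codes = char_codes("character-level text classification")
encoded = "".join(codes[c] for c in "text")  # reversible, no information loss
print(sorted(codes.items(), key=lambda kv: len(kv[1]))[:5])  # shortest codes
```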
|
[
"Information Extraction & Text Mining",
"Green & Sustainable NLP",
"Text Classification",
"Information Retrieval",
"Responsible & Trustworthy NLP"
] |
[
3,
68,
36,
24,
4
] |
SCOPUS_ID:85068988917
|
A Compact Framework for Voice Conversion Using Wavenet Conditioned on Phonetic Posteriorgrams
|
Voice conversion can benefit from a WaveNet vocoder, which improves the naturalness and quality of the converted speech. However, current approaches train the conversion module and the WaveNet vocoder separately towards different optimization objectives, which can make model tuning and coordination difficult. In this paper, we propose a compact framework to unify the conversion and vocoder parts. A multi-head self-attention structure and a bidirectional long short-term memory (BLSTM) recurrent neural network (RNN) are employed to encode speaker-independent phonetic posteriorgrams (PPGs) into an intermediate representation, which is used as the conditioning input of WaveNet to generate the target speaker's waveform. In this way, we unify the conversion and vocoder parts into a compact system in which all parameters can be tuned simultaneously for global optimization. We compared the proposed method with a baseline system consisting of a separately trained conversion module and WaveNet vocoder. Subjective evaluations show that the proposed method achieves better results in both naturalness and speaker similarity.
|
[
"Phonetics",
"Speech & Audio in NLP",
"Syntactic Text Processing",
"Multimodality"
] |
[
64,
70,
15,
74
] |
http://arxiv.org/abs/2208.12367v2
|
A Compact Pretraining Approach for Neural Language Models
|
Domain adaptation for large neural language models (NLMs) is coupled with massive amounts of unstructured data in the pretraining phase. In this study, however, we show that pretrained NLMs learn in-domain information more effectively and faster from a compact subset of the data that focuses on the key information in the domain. We construct these compact subsets from the unstructured data using a combination of abstractive summaries and extractive keywords. In particular, we rely on BART to generate abstractive summaries, and KeyBERT to extract keywords from these summaries (or the original unstructured text directly). We evaluate our approach using six different settings: three datasets combined with two distinct NLMs. Our results reveal that the task-specific classifiers trained on top of NLMs pretrained using our method outperform methods based on traditional pretraining, i.e., random masking on the entire data, as well as methods without pretraining. Further, we show that our strategy reduces pretraining time by up to five times compared to vanilla pretraining. The code for all of our experiments is publicly available at https://github.com/shahriargolchin/compact-pretraining.
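A minimal sketch of the subset-construction step, assuming the `transformers` and `keybert` packages; the checkpoint names below are common defaults rather than the paper's exact configuration (the repository above has the authoritative code).

```python
# Sketch: build a compact pretraining example = abstractive summary + keywords.
from transformers import pipeline
from keybert import KeyBERT

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
kw_model = KeyBERT()

document = ("Domain corpora are large and noisy; pretraining on all of it is "
            "slow, while much of the text is redundant for the target task.")
summary = summarizer(document, max_length=48, min_length=8)[0]["summary_text"]
keywords = [kw for kw, _ in kw_model.extract_keywords(summary, top_n=5)]

# The compact subset pairs each summary with its keywords instead of raw text.
compact_example = {"summary": summary, "keywords": keywords}
print(compact_example)
```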
|
[
"Language Models",
"Structured Data in NLP",
"Semantic Text Processing",
"Multimodality"
] |
[
52,
50,
72,
74
] |
SCOPUS_ID:84885552274
|
A Companion to the Latin Language
|
A Companion to the Latin Language presents a collection of original essays from international scholars that track the development and use of the Latin language from its origins to its modern day usage. • Brings together contributions from internationally renowned classicists, linguists and Latin language specialists • Offers, in a single volume, a detailed account of different literary registers of the Latin language • Explores the social and political contexts of Latin • Includes new accounts of the Latin language in light of modern linguistic theory • Supplemented with illustrations covering the development of the Latin alphabet.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
https://aclanthology.org//W12-3904/
|
A Comparable Corpus Based on Aligned Multilingual Ontologies
|
[
"Multilinguality"
] |
[
0
] |
|
SCOPUS_ID:85128223902
|
A Comparative Analysis for Optical Character Recognition for Text Extraction from Images Using Artificial Neural Network Fuzzy Inference System
|
Artificial neural networks (ANNs) have the capability to learn input-output relationships from raw data, which makes them valuable in industrial settings where such relationships are poorly understood. Researchers have tried to extract the information embedded within ANNs as sets of rules for inference systems, in order to mitigate the black-box nature of ANNs. When an ANN is applied within a fuzzy inference system, the extracted rules yield high classification accuracy. In this paper, a Multi-Layer Neural Feed-Forward Network using an Artificial Neural Network Fuzzy Inference System (MLNFFN-ANNFIS) is proposed for accurate character recognition from images. The technique targets business areas with less complicated problems, for which a simpler approach is preferable to a complex one. This paper proposes an Optical Character Recognition model for text extraction from images using an Artificial Neural Network Fuzzy Inference System for accurate text detection in images. The proposed technique is simpler and more effective than most previously proposed techniques. The proposed model is compared with various traditional models, and the results indicate that it achieves higher accuracy and improved performance.
|
[
"Visual Data in NLP",
"Multimodality",
"Information Extraction & Text Mining"
] |
[
20,
74,
3
] |
SCOPUS_ID:85130264052
|
A Comparative Analysis of Automatic Extractive and Abstractive Text Summarization
|
With the advancement of the web, a large amount of data is being generated by people on the web. A shorter yet logical and consistent version of such a large amount of text is very beneficial for easily drawing significant conclusions. Summarization is a way to derive a coherent and fluent summary. Summarization can be categorized as extractive summarization and abstractive summarization. In this paper, we compare a few extractive and abstractive methods of summarization on the WikiHow dataset. The experimental results show that abstractive summarization methods outperform extractive summarization methods when the average precision score is compared. Although pre-trained transformer technology was applied for the abstractive summaries, basic extractive summarization techniques still performed well. This analysis helps researchers understand the summarization process and compare the performance of summarization techniques.
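To make the extractive side concrete, here is a minimal, hedged baseline of the kind such comparisons include (not a method from the paper): each sentence is scored by its mean TF-IDF weight, and the top-k sentences are kept in their original order.

```python
# Sketch: TF-IDF sentence-scoring extractive summarizer.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(text: str, k: int = 2) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()  # mean term weight per sentence
    keep = sorted(np.argsort(scores)[-k:])           # top-k, original order
    return ". ".join(sentences[i] for i in keep) + "."

doc = ("WikiHow articles explain everyday tasks step by step. "
       "Extractive methods select sentences already present in the text. "
       "Abstractive methods paraphrase and generate new sentences. "
       "Overlap-based precision scores are often used for evaluation.")
print(extractive_summary(doc, k=2))
```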
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85116460585
|
A Comparative Analysis of Chinese and American Newspaper Reports on China’s Belt and Road Initiative
|
This study adopts a critical discourse analysis approach in comparing the portrayal of China’s Belt and Road Initiative (BRI) in two English language newspapers, China Daily (CD) and New York Times (NYT). It focuses on how these two influential newspapers employ various discursive strategies to portray the BRI and its various players to unpack the embedded ideologies underlying the news reports. The data comprises 165 news reports on the BRI published between 2013 and 2019. By examining key news features such as headlines, leads and quotes, three contrastive themes were uncovered: BRI as a unifying agent or disruptive force; bolstering support for or casting aspersions on the BRI; and a rising China versus a fading US. These themes coalesce and converge into two parallel but distinct discourses. While CD unequivocally depicts the BRI as a collaborative project that seeks to unify and bring widespread benefits to member countries, NYT presents a more complex picture that discursively constructs China’s BRI as a geopolitical threat to the waning global influence of the US. These divergent discourses are discussed in light of motivated reasoning theory and in relation to the varying ideological standpoints from which the two newspapers operate.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
http://arxiv.org/abs/1905.08780v1
|
A Comparative Analysis of Distributional Term Representations for Author Profiling in Social Media
|
Author Profiling (AP) aims at predicting specific characteristics of a group of authors by analyzing their written documents. Much research has focused on determining suitable features for modeling authors' writing patterns. Reported results indicate that content-based features continue to be the most relevant and discriminative features for solving this task. Thus, in this paper, we present a thorough analysis regarding the appropriateness of different distributional term representations (DTRs) for the AP task. We introduce a novel framework for supervised AP using these representations and, building on it, conduct a comparative analysis of representations such as DOR, TCOR, SSR, and word2vec for the AP problem. We also compare the performance of the DTRs against classic approaches, including popular topic-based methods. The obtained results indicate that DTRs are suitable for solving the AP task in social media domains, as they achieve competitive results while providing meaningful interpretability.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85137164739
|
A Comparative Analysis of Generative Neural Attention-based Service Chatbot
|
Companies constantly rely on customer support to deliver pre- and post-sale services to their clients through websites, mobile devices or social media platforms such as Twitter. In assisting customers, companies employ virtual service agents (chatbots) to provide support via communication devices. The primary focus is to automate the generation of conversational chat between a computer and a human by constructing virtual service agents that can predict appropriate and automatic responses to customers' queries. This paper presents and implements a seq2seq-based learning model built on encoder-decoder architectures, trained as a generative chatbot on customer-support Twitter datasets. The model is based on deep Recurrent Neural Network (RNN) structures with uni-directional and bi-directional encoders of the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) types. The RNNs are augmented with an attention layer to focus on important information between input and output sequences. Word-level embeddings such as Word2Vec, GloVe, and FastText are employed as input to the model. On top of the base architecture, a comparative analysis is conducted in which baseline models are compared with and without attention, as well as with different types of input embedding for each experiment. Bilingual Evaluation Understudy (BLEU) was employed to evaluate the model's performance. Results revealed that while biLSTM performs better with GloVe, biGRU operates better with FastText. The findings thus indicate that the attention-based, bi-directional RNN (LSTM or GRU) model significantly outperformed baseline approaches in BLEU score, which is promising for future work.
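A minimal sketch of the BLEU evaluation step with NLTK; the reference/candidate pair is an invented placeholder, and smoothing is added because short chatbot replies often have zero higher-order n-gram overlap.

```python
# Sketch: sentence-level BLEU for a generated chatbot reply.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["thanks", "for", "reaching", "out", "we", "will", "check", "your", "order"]]
candidate = ["thanks", "for", "reaching", "out", "we", "are", "checking", "your", "order"]

smooth = SmoothingFunction().method1
print(f"BLEU: {sentence_bleu(reference, candidate, smoothing_function=smooth):.3f}")
```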
|
[
"Language Models",
"Semantic Text Processing",
"Representation Learning",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
52,
72,
12,
11,
38
] |
SCOPUS_ID:85111979898
|
A Comparative Analysis of Japan and India COVID-19 News Using Topic Modeling Approach
|
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is wreaking havoc. This virus has infected more than 62.01 million and killed around 1.44 million people worldwide in less than a year. For the past 11 months, this has been the most critical issue the world is dealing with. Hence, there is a rapid accumulation of coronavirus-related news. Natural language processing (NLP) and machine learning (ML) methods such as topic modeling receive much attention because of their ability to discover hidden themes and issues in large unstructured text data. We collected 63,424 COVID-19/coronavirus-themed news articles from Japanese and Indian English newspapers and applied the recently proposed Top2Vec model to analyze and extract the major topics. Our research finds that both countries' media reported heavily on the problems that coronavirus caused in the sports, education, and entertainment sectors. Our findings also point out that Indian media gave very little space to issues such as unemployment and the migrant crisis that impacted millions during this period. This research can be used as a template to understand and analyze how the pandemic impacted other countries. It also brought to our attention the media's failure to prioritize issues of critical importance to society (the migrant crisis) over trivial news (celebrities' social media posts).
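A minimal sketch of the topic-extraction step with the `top2vec` package; `load_news_articles` is a hypothetical loader standing in for the collected corpus.

```python
# Sketch: fit Top2Vec and inspect the largest topics.
from top2vec import Top2Vec

docs = load_news_articles()  # hypothetical: returns a list of article strings
model = Top2Vec(documents=docs, speed="learn", workers=4)

topic_words, word_scores, topic_nums = model.get_topics()
for words, num in zip(topic_words[:5], topic_nums[:5]):
    print(num, list(words[:8]))  # top terms of the five largest topics
```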
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85150040915
|
A Comparative Analysis of Math Word Problem Solving on Characterized Datasets
|
Benefiting from neural network research, a number of neural solvers have been developed for automatically solving math word problems (MWPs). These neural solvers are evaluated on several benchmark datasets with diverse characteristics, which makes their performance hard to compare. To address this problem, a comparative analysis is conducted in this paper to explore how the performance of neural solvers varies across MWPs with different characteristics. The architectures of the typical neural solvers are studied, and a four-dimensional index model is proposed to characterize the benchmark dataset into different subsets. The experimental results show that Seq2Seq-based solvers perform well on most of the subsets, while Graph2Tree-based solvers seem to have more potential for solving problems with complex expression structures.
|
[
"Reasoning",
"Numerical Reasoning"
] |
[
8,
5
] |
SCOPUS_ID:85148422676
|
A Comparative Analysis of Sentence Embedding Techniques for Document Ranking
|
Due to the exponential increase in the information on the web, extracting relevant documents for users in a reasonable time becomes a cumbersome task. Also, when user feedback is scarce or unavailable, content-based approaches to extract and rank relevant documents are critical as they suffer from the problem of determining semantic similarity between texts of user queries and documents. Various sentence embedding models exist today that acquire deep semantic representations through training on a large corpus, with the goal of providing transfer learning to a broad range of natural language processing tasks such as document similarity, text summarization, text classification, sentiment analysis, etc. So, in this paper, a comparative analysis of six pretrained sentence embedding techniques has been done to identify the best model suited for document ranking in IR systems. These are SentenceBERT, Universal Sentence Encoder, InferSent, ELMo, XLNet, and Doc2Vec. Four standard datasets CACM, CISI, ADI, and Medline are used to perform all the experiments. It is found that Universal Sentence Encoder and SentenceBERT outperform other techniques on all four datasets in terms of MAP, recall, F-measure, and NDCG. This comparative analysis offers a synthesis of existing work as a single point of entry for practitioners who seek to use pretrained sentence embedding models for document ranking and for scholars who wish to undertake work in a similar domain. The work can be expanded in many directions in the future as various researchers can combine these strategies to build a hybrid document ranking system or query reformulation system in IR.
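A minimal ranking sketch with the `sentence-transformers` package; the checkpoint is a common general-purpose model, not necessarily one of the six benchmarked above, and the documents are invented placeholders.

```python
# Sketch: rank documents by cosine similarity to the query embedding.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Information retrieval evaluates ranked lists with MAP and NDCG.",
    "Convolutional networks are popular in computer vision.",
    "Query expansion can improve recall in search systems.",
]
query = "how are search result rankings evaluated"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]  # one score per document
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.3f}  {docs[idx]}")
```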
|
[
"Language Models",
"Document Retrieval",
"Semantic Text Processing",
"Representation Learning",
"Information Retrieval"
] |
[
52,
56,
72,
12,
24
] |
SCOPUS_ID:85105865696
|
A Comparative Analysis of Sentiment Analysis Using RNN-LSTM and Logistic Regression
|
Social media analytics can make a big difference in the success or failure of an organization. The data gathered from social media can be analyzed to identify promising products and to extract important information about people's needs. This can be done by applying sentiment analysis to the available data, assessing customers' feelings about a product or service, and determining whether it is actually liked by them or not. Tracking customer data helps an organization in many ways. This study was done to become familiar with the concept of data analytics and the important role social media plays in it. Web scraping of Twitter and YouTube data was performed, after which a standard dataset was selected for the remaining analytics. Sentiment analysis was used to capture people's emotions. Logistic regression and RNN-LSTM models were used to perform the analysis, and then the results were compared.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
SCOPUS_ID:85124025585
|
A Comparative Analysis of Sentiment Classification Based on Deep and Traditional Ensemble Machine Learning Models
|
The era of the internet has transformed the way people share their thoughts and viewpoints. It is now achieved mostly through blog entries, product review blogs, social networks, and so on. We consume immersive media through online networks, where users inform and affect others through the internet. In this research, document-level sentiment analysis over positive and negative sentiments is performed using deep and traditional ensemble models. We evaluate the performance of recent deep learning ensemble models and traditional ensemble models to determine which obtains the highest accuracy for binary sentiment classification. Three traditional ensemble models (i.e., Voting Ensemble, Bagging Ensemble, and Boosting Ensemble) and three deep learning ensemble models (i.e., 7-Layer Convolutional Neural Network (7-L CNN) + Gated Recurrent Unit (GRU), 7-L CNN + GRU + GloVe embedding, and 7-L CNN + Long Short-Term Memory (LSTM) + Attention Layer) are applied to two different datasets to perform sentiment classification. The deep learning ensemble models perform better than the traditional ensemble models in most cases. On both datasets, the deep learning ensemble models, namely 7-L CNN + GRU + GloVe and 7-L CNN + LSTM + Attention Layer, achieve the highest accuracy, securing 94.19% and 96.37% on the product-review and restaurant-review datasets, respectively.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning",
"Sentiment Analysis",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
12,
78,
36,
3
] |
SCOPUS_ID:85111966224
|
A Comparative Analysis of Supervised Word Sense Disambiguation in Information Retrieval
|
As the amount of information increases every day, there is a need for information retrieval to find useful information in large amounts of data. This paper presents and evaluates supervised word sense disambiguation (WSD) algorithms in information retrieval (IR). Word ambiguity is a key issue in information retrieval systems. Supervised WSD is considered more effective than other methods in information retrieval. The effective use of supervised WSD in IR is the main objective of this paper. The paper defines the role of supervised WSD in IR and discusses the best-known supervised algorithms in detail. We use the Waikato Environment for Knowledge Analysis (WEKA) tool to assess four supervised WSD algorithms, Naïve Bayes, support vector machine (SMO), decision tree (J48), and KNN (IBk), through a series of experiments, and the results indicate that algorithm performance depends on the features of the datasets.
|
[
"Semantic Text Processing",
"Information Retrieval",
"Word Sense Disambiguation"
] |
[
72,
24,
65
] |
SCOPUS_ID:85073118365
|
A Comparative Analysis of TF-IDF, LSI and LDA in Semantic Information Retrieval Approach for Paper-Reviewer Assignment
|
The intelligent task of semantically assigning a paper to a reviewer with respect to the reviewer's knowledge domain remains challenging for academic conferences. In the literature, a number of automated reviewer-assignment systems have been presented that rely on distributional semantic models such as Term Frequency-Inverse Document Frequency (TF-IDF), Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to capture semantics. This study therefore presents a comparative study of the three models based on the suitability scores they derive between a paper under review and a reviewer's representative papers. The experimental results show that TF-IDF outperformed the accuracy of the other two models by a substantial margin.
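A minimal sketch of the suitability-score computation with gensim; the toy texts stand in for a reviewer's representative papers and a submission.

```python
# Sketch: TF-IDF suitability scores between a submission and reviewer papers.
from gensim import corpora, models, similarities

reviewer_papers = [
    "topic models for scientific document retrieval".split(),
    "latent dirichlet allocation for expert finding".split(),
]
dictionary = corpora.Dictionary(reviewer_papers)
bow_corpus = [dictionary.doc2bow(doc) for doc in reviewer_papers]

tfidf = models.TfidfModel(bow_corpus)
index = similarities.MatrixSimilarity(tfidf[bow_corpus], num_features=len(dictionary))

submission = dictionary.doc2bow("document retrieval with topic models".split())
print(index[tfidf[submission]])  # cosine similarity to each reviewer paper

# Swapping in LSI is one extra line:
# lsi = models.LsiModel(tfidf[bow_corpus], id2word=dictionary, num_topics=2)
```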
|
[
"Indexing",
"Information Retrieval"
] |
[
69,
24
] |
SCOPUS_ID:85079508258
|
A Comparative Analysis of Text Classification Algorithms for Ambiguity Detection in Requirement Engineering Document Using WEKA
|
The volume of digital documents is increasing day by day, and thus the automatic categorization of documents is very important for information and knowledge discovery. Classification is the most common method for mining rules from large databases. Ambiguity is a major problem in Requirement Engineering (RE) documents. Our proposed work uses WEKA text classification techniques to identify and classify ambiguity in RE documents. The present study applies different algorithms to the ambiguity-detection dataset and, on the basis of statistical measures such as accuracy, time, and error rate, identifies suitable algorithms for this purpose. The main aim of this paper is to conduct a comparative study of various classification techniques and methodologies, with a detailed analysis of the statistical parameters used in classification algorithms, in order to assess the quality of classification.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85065234628
|
A Comparative Analysis of Top 5 Fast Food Restaurants Through Text Mining
|
Text mining is a systematic way of mining content to learn about users' feelings. Social media plays a vital role in letting people know about each other's views. An ever-increasing number of brands operate on Facebook, Instagram, Twitter and other social networks to provide guidance and interact with consumers. Social media helps companies enhance their businesses and gather audience feedback for improvement. Consequently, a lot of customer-produced content is freely accessible via social media sites. To increase their competitive advantage and adequately assess the competitive environment, companies need to analyze the content that affects their competitors on the same social networks. This paper presents a comparative analysis of five social media pages: we apply text mining to extract data from the Facebook or Twitter accounts of five fast food restaurants, i.e. KFC, McDonald's, Burger King, Hardees and Howdy.
|
[
"Information Extraction & Text Mining"
] |
[
3
] |
https://aclanthology.org//D19-6102/
|
A Comparative Analysis of Unsupervised Language Adaptation Methods
|
To overcome the lack of annotated resources in less-resourced languages, recent approaches have been proposed to perform unsupervised language adaptation. In this paper, we explore three recent proposals: Adversarial Training, Sentence Encoder Alignment and Shared-Private Architecture. We highlight the differences of these approaches in terms of unlabeled data requirements and capability to overcome additional domain shift in the data. A comparative analysis in two different tasks is conducted, namely on Sentiment Classification and Natural Language Inference. We show that adversarial training methods are more suitable when the source and target language datasets contain other variations in content besides the language shift. Otherwise, sentence encoder alignment methods are very effective and can yield scores on the target language that are close to the source language scores.
|
[
"Low-Resource NLP",
"Language Models",
"Semantic Text Processing",
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
80,
52,
72,
58,
4
] |
SCOPUS_ID:85062889913
|
A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature
|
Word embeddings can be used by the deep layers of neural networks to extract features and learn the stylometric patterns of authors, based on the context and co-occurrence of words, in the field of authorship attribution. In this paper, we investigate the effects of different types of word embeddings on authorship attribution of Bengali literature, specifically the skip-gram and continuous-bag-of-words (CBOW) models generated by Word2Vec and fastText, along with the word vectors generated by GloVe. We experiment with dense neural network models, such as convolutional and recurrent neural networks, analyse how different word embedding models affect the performance of the classifiers, and discuss their properties in this authorship attribution task for Bengali literature. The experiments are performed on a dataset we prepared, consisting of 2400 online blog articles from 6 contemporary authors.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85122866120
|
A Comparative Analysis of the Semantic and Pragmatic Function of Diminutives in Literary Translation from Spanish into German: the case of Últimas tardes con Teresa by Juan Marsé
|
This paper aims to offer a comparative analysis of the use of diminutives in Spanish and German from the point of view of semantics and pragmatics. Specifically, the linguistic corpus which is examined is the novel Últimas tardes con Teresa by Juan Marsé. The Spanish version is contrasted with the German translation by Andrea Rössler (1991). The vast range of examples that instantiate the variety of uses of diminutives in Spanish justifies the choice of this textual corpus. Not least, it shows how difficult it is to reproduce some semantic and pragmatic traits of the original Spanish text in the translation into German. Ultimately, the use of diminutives illustrates paradigmatically the stylistic features of a text that, considering its plot and linguistic characteristics, turns out to be polyphonic —a true challenge for any translation.
|
[
"Machine Translation",
"Semantic Text Processing",
"Discourse & Pragmatics",
"Text Generation",
"Multilinguality"
] |
[
51,
72,
71,
47,
0
] |
SCOPUS_ID:85142028376
|
A Comparative Analysis of Local Explainability of Models for Sentiment Detection
|
Sentiment analysis is one of the crucial tasks in Natural Language Processing (NLP) which refers to classifying natural language sentences by their positive or negative sentiments. In many existing deep learning-based models, providing an explanation of a sentiment might be as necessary as the prediction itself. In this study, we use four different classification models applied to the sentiment analysis of the Internet Movie Database (IMDB) reviews, and investigate the explainability of results using Local Interpretable Model-agnostic Explanation (LIME). Our results reveal how the attention-based models, such as Bidirectional LSTM (BiLSTM) and fine-tuned Bidirectional Encoder Representations from Transformers (BERT) would focus on the most relevant keywords.
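A minimal sketch of the LIME step; `clf` is an assumed fitted classifier exposing `predict_proba` over raw strings (for example a scikit-learn pipeline), since LIME only needs black-box probabilities even when the underlying model is a BiLSTM or BERT.

```python
# Sketch: local explanation of one sentiment prediction with LIME.
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["negative", "positive"])

review = "The film was surprisingly touching, despite a slow first act."
# clf: assumed fitted model/pipeline with predict_proba over raw strings.
exp = explainer.explain_instance(review, clf.predict_proba, num_features=6)
print(exp.as_list())  # (word, weight) pairs local to this one prediction
```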
|
[
"Language Models",
"Semantic Text Processing",
"Explainability & Interpretability in NLP",
"Sentiment Analysis",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
81,
78,
4
] |
SCOPUS_ID:85147990682
|
A Comparative Analysis of Machine Learning Based Sentiment Analysis
|
In today's world, everyone expresses themselves through social media. The most popular platforms for sharing opinions on any issue are Twitter, Facebook, YouTube, IMDB, etc. We can examine people's attitudes by evaluating messages, comments, responses, and reviews. Sentiment analysis is a Natural Language Processing technique for analyzing texts and determining how people feel about them. The purpose of sentiment analysis is for the computer to be able to detect and express emotions. This work applies machine learning to discover which algorithm yields the best accuracy for text-based sentiment analysis. We used two datasets in this project: one a dataset of tweets, the other a dataset of movie reviews. We analyze each machine learning algorithm's accuracy and find which algorithm gives the best accuracy on each dataset.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85125879436
|
A Comparative Analysis on Medical Article Classification Using Text Mining & Machine Learning Algorithms
|
The document classification task is one of the most widely studied research problems across multiple domains. The core motivation for the classification task is that manual classification is impractical due to exponentially growing document volumes. Thus, we urgently need to exploit automated computational approaches, such as machine learning models, along with data and text mining techniques. In this study, we concentrated on the classification of medical articles, specifically on common cancer types, due to the significance of the field and the decent number of available documents of interest. We deliberately targeted MEDLINE articles about common cancer types because most cancer types share a similar literature composition, which makes the classification effort relatively more complicated. To this end, we built multiple machine learning models, including both traditional and deep learning architectures. We achieved the best performance (≈82% F score) with the LSTM model. Overall, our results demonstrate the strong benefit of exploiting both text mining and machine learning methods to distinguish medical articles on common cancer types.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85138638918
|
A Comparative Analysis on Suicidal Ideation Detection Using NLP, Machine, and Deep Learning
|
Social networks are essential resources for obtaining information about people's opinions and feelings towards various issues, as people share their views with their friends and family. Suicidal ideation detection via online social network analysis has emerged as an important research topic in recent years, with significant difficulties in the fields of NLP and psychology. With proper exploitation of the information in social media, the complicated early symptoms of suicidal ideation can be discovered, and hence many lives can be saved. This study offers a comparative analysis of multiple machine learning and deep learning models for identifying suicidal thoughts on the social media platform Twitter. The principal purpose of our research is to achieve better model performance than prior work in recognizing early indications with high accuracy, helping to avert suicide attempts. We applied text pre-processing and feature extraction approaches such as CountVectorizer and word embedding, and trained several machine learning and deep learning models for this goal. Experiments were conducted on a dataset of 49,178 instances retrieved from live tweets matching 18 suicidal and non-suicidal keywords using the Python Tweepy API. Our experimental findings reveal that the RF model achieves the highest classification score among the machine learning algorithms, with an accuracy of 93% and an F1 score of 0.92. Moreover, training deep learning classifiers with word embeddings improves on the ML models, with the BiLSTM model reaching an accuracy of 93.6% and a 0.93 F1 score.
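A minimal sketch of the classical branch described above (CountVectorizer features with a Random Forest); `train_texts`, `train_labels`, and `test_texts` are assumed variables, since the 49,178-instance dataset cannot be reproduced here.

```python
# Sketch: bag-of-words + Random Forest for suicidal-ideation tweet classification.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("vec", CountVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
# Assumed inputs: lists of tweet strings and 0/1 labels (1 = suicidal).
pipe.fit(train_texts, train_labels)
predictions = pipe.predict(test_texts)
```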
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
SCOPUS_ID:85144232675
|
A Comparative Analysis on the Summarization of Legal Texts Using Transformer Models
|
Transformer models have transformed natural language processing tasks in machine learning and set a new standard for the state of the art. Thanks to the self-attention component, these models have achieved significant improvements in text generation tasks (such as extractive and abstractive text summarization). However, research on text summarization in the legal domain is still in its infancy, and as such, benchmarks and a comparative analysis of these state-of-the-art models are important for the future of text summarization in this highly specialized task. To contribute to this line of research, we propose a comparative analysis of different fine-tuned Transformer models and datasets in order to provide a better understanding of the task at hand and the challenges ahead. The results show that Transformer models have improved upon the text summarization task; however, consistent and generalized learning remains a challenge when training the models on long input texts. Finally, after analyzing the correlation between objective results and human opinion, we conclude that the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [13] metrics used in the current state of the art are limited and do not reflect the precise quality of a generated summary.
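The ROUGE limitation the authors point out is easy to see in code: a fluent legal paraphrase is penalized for low lexical overlap. A minimal sketch with the `rouge-score` package (the sentences are invented for illustration):

```python
# Sketch: ROUGE rewards n-gram overlap, not meaning preservation.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the court dismissed the appeal for lack of jurisdiction"
generated = "the appeal was rejected because the court had no jurisdiction"
for name, score in scorer.score(reference, generated).items():
    print(f"{name}: f={score.fmeasure:.3f}")  # low despite equivalent meaning
```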
|
[
"Language Models",
"Semantic Text Processing",
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
52,
72,
30,
47,
3
] |
SCOPUS_ID:85123783865
|
A Comparative Analysis with the Hybrid Algorithm Approach for Sentimental Analysis through Machine Learning
|
This research work focuses on recent studies that have used machine learning to address sentiment analysis problems related to sentiment polarity. In the preprocessing steps, the models apply stop-word removal and bag-of-words representations to the collected datasets. Even with the widespread use and acceptance of some approaches, a superior technique for categorising the polarity of text documents is hard to identify. Machine learning has lately attracted attention as a method for sentiment analysis. The present work proposes a machine learning-based hybrid algorithm that incorporates the N-gram technique for feature extraction and combines Decision Tree and Random Forest classifiers for sentiment classification. Naïve Bayes, linear classifier and support vector machine approaches are evaluated from the perspective of sentiment classification. Finally, a comparative study of the different supervised algorithms is conducted on a product-review dataset. The performance of the model is evaluated using the confusion matrix. In the comparative analysis of classification techniques, the combined technique shows better results than the previously used supervised techniques of Naïve Bayes, linear classifier and support vector machine.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85111871101
|
A Comparative Analyzing of SMS Spam Using Topic Models
|
Mobile phones and smartphones have changed, even revolutionized, the way we live, and the short message service (SMS) has become ubiquitous. For spammers, the success of the mobile messaging channel makes it a very attractive target. To impose an additional level of security in the pervasive environment, we create a more strongly authenticated system for SMS, which affects usability from the standpoint of user safety. In the modern era, financial institutions and related agencies see SMS as an important channel for communicating with their customers, which opens an easy door for spammers and puts customer safety at risk. Digital encryption methodologies support SMS exchange between two nodes that swap digitally signed SMS messages. These two nodes are protected by public key cryptography, and authentication is done with the ECDSA signature scheme. The two nodes act as sender and receiver, and when a sender sends an SMS to a receiver as unencrypted text, there is a possibility of information loss. In this paper, we propose a technique called Gaussian Naive Bayes Classification (GNBC) for SMS spam filtering that addresses the problems of the message topic model (MTM). It is believed that certain pre-processing rules and background terms make it the most appropriate model for fully representing SMS spam. Finally, we conclude that GNBC is more accurate for the SMS spam-filtering task.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85123295381
|
A Comparative Approach for Opinion Spam Detection Using Sentiment Analysis
|
Online reviews are among the most important sources of information about products and services. People trust online review comments when purchasing electronics, booking hotels, choosing colleges, selecting movies, etc. Sometimes reviews are intentionally written by fake reviewers for monetary gain, business rivalry, and similar motives, and in many cases these messages are found to be fake only after the fact. The spread of fake reviews has a significant social and economic impact on society. Hence, an accurate detection mechanism is needed to identify fake reviews. In this paper, an opinion spam detection mechanism using sentiment analysis (SA) is proposed for content-based applications. In this technique, the sentiment score of each sentence is computed, and a review is detected as fake or genuine depending on its sentiment score. The work also proposes a Long Short-Term Memory (LSTM) based deep learning approach to identify the topic of fake reviews. Combining these two approaches provides a more accurate opinion spam detection rate compared to other existing models. The benchmark “Deceptive Opinion Spam Corpus v1.4” dataset is used. Our model's accuracy is 92.46%, with a false acceptance rate of 9.23% and a false rejection rate of 5.50%.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85111811315
|
A Comparative Assessment of State-Of-The-Art Methods for Multilingual Unsupervised Keyphrase Extraction
|
Keyphrase extraction is a fundamental task in information management, which is often used as a preliminary step in various information retrieval and natural language processing tasks. The main contribution of this paper lies in providing a comparative assessment of prominent multilingual unsupervised keyphrase extraction methods that build on statistical (RAKE, YAKE), graph-based (TextRank, SingleRank) and deep learning (KeyBERT) methods. For the experimentations reported in this paper, we employ well-known datasets designed for keyphrase extraction from five different natural languages (English, French, Spanish, Portuguese and Polish). We use the F1 score and a partial match evaluation framework, aiming to investigate whether the number of terms of the documents and the language of each dataset affect the accuracy of the selected methods. Our experimental results reveal a set of insights about the suitability of the selected methods in texts of different sizes, as well as the performance of these methods in datasets of different languages.
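As a concrete example of one of the statistical methods compared, here is a minimal YAKE sketch (unsupervised and language-parameterized; lower scores indicate better keyphrases). The input text is an invented placeholder.

```python
# Sketch: unsupervised keyphrase extraction with YAKE.
import yake

text = ("Keyphrase extraction is a fundamental task in information management, "
        "often used as a preliminary step in retrieval and NLP pipelines.")
extractor = yake.KeywordExtractor(lan="en", n=2, top=5)  # up to bigram phrases
for phrase, score in extractor.extract_keywords(text):
    print(f"{score:.4f}  {phrase}")
```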
|
[
"Multilinguality",
"Low-Resource NLP",
"Information Extraction & Text Mining",
"Term Extraction",
"Responsible & Trustworthy NLP"
] |
[
0,
80,
3,
1,
4
] |
https://aclanthology.org//W19-7905/
|
A Comparative Corpus Analysis of PP Ordering in English and Chinese
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
|
SCOPUS_ID:85140652517
|
A Comparative Empirical Evaluation of Neural Language Models for Thai Question-Answering
|
Despite the significant and continuing efforts of engineers and researchers in developing natural language processing tools for the Thai language, Thai remains, alongside many others, a de facto low-resource language. Can unsupervisedly trained neural language models come to the rescue? The remarkable success of transformer-based language models in most natural language processing tasks promises the advent of a much-needed polyglot panacea. Unfortunately, it seems that sufficiently powerful models are not yet available for most languages other than English. To assess the situation, we propose to empirically and comparatively evaluate the performance of existing neural language models on the task of extractive question answering for the Thai language.
|
[
"Language Models",
"Natural Language Interfaces",
"Semantic Text Processing",
"Question Answering"
] |
[
52,
11,
72,
27
] |
http://arxiv.org/abs/2204.07056v1
|
A Comparative Evaluation Of Transformer Models For De-Identification Of Clinical Text Data
|
Objective: To comparatively evaluate several transformer model architectures at identifying protected health information (PHI) in the i2b2/UTHealth 2014 clinical text de-identification challenge corpus. Methods: The i2b2/UTHealth 2014 corpus contains N=1304 clinical notes obtained from N=296 patients. Using a transfer learning framework, we fine-tune several transformer model architectures on the corpus, including: BERT-base, BERT-large, ROBERTA-base, ROBERTA-large, ALBERT-base and ALBERT-xxlarge. During fine-tuning we vary the following model hyper-parameters: batch size, number of training epochs, learning rate and weight decay. We fine-tune models on a training dataset, evaluate and select the best-performing models on an independent validation dataset, and lastly assess generalization performance on a held-out test dataset. We assess model performance in terms of accuracy, precision (positive predictive value), recall (sensitivity) and F1 score (harmonic mean of precision and recall). We are interested in overall model performance (PHI identified vs. PHI not identified), as well as PHI-specific model performance. Results: We observe that the ROBERTA-large models perform best at identifying PHI in the i2b2/UTHealth 2014 corpus, achieving >99% overall accuracy and 96.7% recall/precision on the held-out test corpus. Performance was good across many PHI classes; however, accuracy/precision/recall decreased for the identification of the following entity classes: professions, organizations, ages, and certain locations. Conclusions: Transformers are a promising model class/architecture for clinical text de-identification. With minimal hyper-parameter tuning, transformers afford researchers/clinicians the opportunity to obtain (near) state-of-the-art performance.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85080921566
|
A Comparative Evaluation of Classification Algorithms for Sentiment Analysis Using Word Embeddings
|
Sentiment analysis is a growing field of research that lies at the intersection of many fields such as Natural Language Processing (NLP), computational linguistics and machine learning. It is concerned with extracting the sentiment polarity conveyed in a piece of text. Furthermore, one of the most influential recent developments in NLP is the word embedding (word distribution) approach, a powerful representation for capturing contextually close words. In this paper, we investigate enhancing a sentiment analysis system tailored to the Arabic language by applying word embeddings and evaluating the performance of 9 classification algorithms (Gaussian Naïve Bayes, Nu-Support Vector, Linear Support Vector, Logistic Regression, Stochastic Gradient Descent, Random Forest, k-nearest neighbors, Decision Tree, AdaBoost). We report improved accuracy for Arabic sentiment analysis on different datasets. We find that the Logistic Regression classifier, followed by the SVM and AdaBoost classifiers, outperforms the other classifiers.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
78,
24,
3
] |
SCOPUS_ID:85081305147
|
A Comparative Evaluation of Preprocessing Techniques for Short Texts in Spanish
|
Natural Language Processing (NLP) is used to identify key information, generate predictive models, and explain global events or trends. NLP also supports the knowledge-creation process. It is therefore important to apply refinement techniques in major stages such as preprocessing, where data is frequently processed with poor results. This paper analyzes and measures the impact of combinations of preprocessing techniques and libraries for short texts written in Spanish. These techniques were applied to tweets for sentiment analysis, considering evaluation parameters, processing time, and the characteristics of the techniques in each library. The experiments provide readers with insights for choosing the appropriate combination of techniques during preprocessing. The results show improvements of 5% to 9% in classification performance.
|
[
"Sentiment Analysis"
] |
[
78
] |
http://arxiv.org/abs/1907.08501v1
|
A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data
|
With the growing number and size of Linked Data datasets, it is crucial to make the data accessible and useful for users without knowledge of formal query languages. Two approaches towards this goal are knowledge graph visualization and natural language interfaces. Here, we investigate specifically question answering (QA) over Linked Data by comparing a diagrammatic visual approach with existing natural language-based systems. Given a QA benchmark (QALD7), we evaluate a visual method which is based on iteratively creating diagrams until the answer is found, against four QA systems that have natural language queries as input. Besides other benefits, the visual approach provides higher performance, but also requires more manual input. The results indicate that the methods can be used complementary, and that such a combination has a large positive impact on QA performance, and also facilitates additional features such as data exploration.
|
[
"Visual Data in NLP",
"Natural Language Interfaces",
"Question Answering",
"Multimodality"
] |
[
20,
11,
27,
74
] |
SCOPUS_ID:85139161010
|
A Comparative Lexical Analysis of Three Romanian Works – The Etymological Metalepsis Role and Etymological Indices
|
We introduce an etymological perspective into computational linguistics and apply it to the analysis of literary works with self-biographical content. Our hypothesis is that the etymological mixtures used by an author may be a powerful stylistic device. We propose and investigate, within the frame of computational linguistics, the notion of “etymological metalepsis”; we argue that this device is used by some writers to convey a sense of the historical frame their works depict, the specificities of the local context, and even the sentiments assumed and conveyed by the writer. Three works are analyzed and contrasted from the standpoint of the proposed stylistic device, reflecting the etymological mixture used by the author, based on a semi-automated lexicographic analysis and the etymology of the most frequent words. Stylometric indices are suggested in relation to etymological metalepsis. We also use tools of computational linguistics to elucidate and validate literary-critical observations previously published on these works.
|
[
"Indexing",
"Information Retrieval"
] |
[
69,
24
] |
SCOPUS_ID:85136332116
|
A Comparative Selection of Best Activation Pair Layer in Convolution Neural Network for Sentence Classification using Deep Learning Model
|
Many natural language processing jobs require the sentiment classification of text content. There is an urgent need, particularly with the rise of social media, to extract meaningful information from the vast volumes of data on the Internet using sentiment analysis. We are interested in adopting deep learning models to handle sentiment classification because of the advances that have been made. In this study, we present a framework combining fastText with a Convolutional Neural Network (CNN). To begin, we use fastText to create word vector representations that are fed into the CNN. fastText's purpose is to generate word representations that reflect distances between words. This allows the CNN parameters to be initialized at an advantageous point, helping the network perform much better in this setting. Second, we design a CNN architecture appropriate for sentiment analysis, using two pairs of convolutional and pooling layers. To our knowledge, this is the first time a 9-layer architecture based on fastText and a CNN has been used to assess the sentiment of phrases. We use the Rectified Linear Unit (ReLU), normalization, and dropout techniques to improve the accuracy and generalizability of our model. We test our methodology on a publicly available dataset of movie review extracts with five labels: negative, slightly negative, neutral, moderately positive, and positive. On this dataset, our ReLU pairwise network obtains a test accuracy of 96.4 percent, outperforming existing neural network models.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85084684263
|
A Comparative Sentiment Analysis of Sentence Embedding Using Machine Learning Techniques
|
Sentiment analysis is a process for identifying the opinion expressed in a text. It is also known as opinion mining or emotion Artificial Intelligence (AI). People post comments in social media about their experience of an event and are also interested to know whether the majority of other people had a positive or negative experience of the same event. This categorization can be achieved using sentiment analysis. Sentiment analysis takes unstructured text comments about product reviews, events, etc., from all comments posted by different users, and classifies the comments into categories such as positive, negative or neutral opinion. This is also known as polarity classification. Sentiment analysis can be performed using text analysis and computational linguistics. This work compares the performance of different machine learning algorithms in performing sentiment analysis of Twitter data. The proposed method uses term frequency to find the sentiment polarity of a sentence. The performance of the Multinomial Naive Bayes, SVM and Logistic Regression algorithms in sentence classification was compared. From the results, it is inferred that logistic regression achieved the greatest accuracy when used with n-gram and bigram models.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
78,
24,
3
] |
SCOPUS_ID:85136953132
|
A Comparative Study Between Rule-Based and Transformer-Based Election Prediction Approaches: 2020 US Presidential Election as a Use Case
|
Social media platforms (SMPs) have attracted people from all over the world because they allow users to discuss and share their opinions about any topic, including politics. The widespread use of these SMPs has radically transformed modern politics. Election campaigns and political discussions are increasingly held on these SMPs, and studying these discussions aids in predicting the outcomes of political events. In this study, we analyze and predict the 2020 US Presidential Election using Twitter data. Almost 2.5 million tweets were collected and categorized as Location-considered (LC) (USA only) or Location-unconsidered (LUC) (either location not mentioned or outside the USA). Two different sentiment analysis (SA) approaches are employed: dictionary-based SA and transformer-based SA. We investigated whether deploying deep learning techniques can improve prediction accuracy. Furthermore, we predict a vote share for each candidate at the LC and LUC levels. The predicted results are then compared with the predictions of five polls as well as the real results of the election. The results show that dictionary-based SA outperformed all five polls' predictions, as well as the transformers, with an MAE of 0.85 at the LC and LUC levels, and RMSEs of 0.867 and 0.858 at the LC and LUC levels, respectively.
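A minimal sketch of the dictionary-based branch using VADER, a common lexicon-and-rules analyzer for tweets; the paper does not name its exact dictionary, so treat this as an illustrative stand-in, and the tweet is invented.

```python
# Sketch: lexicon-based tweet polarity with VADER (illustrative stand-in).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweet = "Great rally today, the energy was incredible! #Election2020"
scores = analyzer.polarity_scores(tweet)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

compound = scores["compound"]
label = "positive" if compound >= 0.05 else "negative" if compound <= -0.05 else "neutral"
print(label, scores)
```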
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
SCOPUS_ID:85137811911
|
A Comparative Study Of Sentiment Analysis For Big Data On Hadoop
|
Nowadays, people express almost all their feelings and views about the surrounding world on social media applications. If these posts are analyzed accurately, they are a huge opportunity for organizations to increase their market value by using that information in decision-making. Opinion mining from social media, also known as sentiment analysis, provides up-to-date information, thanks to the proliferation of social media across all levels of society. Sentiment can be extracted from social posts using machine learning algorithms or lexicon-based approaches. The increasing number of people using social media means more data is produced continuously, and these huge amounts of data require an efficient framework to handle them. Hadoop provides a software framework for the distributed storage and processing of big data using the MapReduce programming model and the Hadoop Distributed File System (HDFS). This paper discusses different approaches to sentiment analysis of big data, especially on Hadoop, and presents their strengths and weaknesses through a comparative study.
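To make the MapReduce pattern concrete, here is a minimal, hedged Hadoop Streaming sketch (not a system from the surveyed papers): the mapper assigns each post a lexicon-based label and emits counts, and the reducer sums them. The tiny word lists are placeholders, not a real sentiment lexicon.

```python
# mapper.py -- run via: hadoop jar hadoop-streaming.jar \
#   -mapper mapper.py -reducer reducer.py -input posts/ -output sentiment_counts/
import sys

POSITIVE = {"good", "great", "love", "happy"}   # placeholder lexicon
NEGATIVE = {"bad", "awful", "hate", "sad"}

for line in sys.stdin:
    words = set(line.lower().split())
    label = "pos" if len(words & POSITIVE) >= len(words & NEGATIVE) else "neg"
    print(f"{label}\t1")
```

```python
# reducer.py -- sums the per-label counts emitted by the mapper.
import sys
from collections import Counter

counts = Counter()
for line in sys.stdin:
    label, n = line.rstrip("\n").split("\t")
    counts[label] += int(n)

for label, total in counts.items():
    print(f"{label}\t{total}")
```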
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85134756700
|
A Comparative Study for Supervised Learning Algorithms to Analyze Sentiment Tweets
|
Twitter's popularity has grown substantially in the last few years, influencing the social, political, and business aspects of life. People leave tweets on social media about an event and simultaneously inquire about other people's experiences and whether they had a positive or negative opinion about that event. Sentiment analysis can be used to obtain this categorization. Product reviews, events, and other topics from all users, comprising unstructured text comments, are gathered and categorized as positive, negative, or neutral using sentiment analysis. Such issues are called polarity classification. This study aims to use Twitter data about OK cuisine reviews obtained from the Amazon website and compare the effectiveness of three commonly used supervised learning classifiers: Naive Bayes, Logistic Regression, and Support Vector Machine. This is achieved using two feature extraction methods: CountVectorizer and Term Frequency-Inverse Document Frequency. The findings showed that the Support Vector Machine classifier achieved the highest accuracy, 91%, with CountVectorizer features, but it is time-consuming. When both accuracy and execution time matter, Logistic Regression is recommended.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
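A minimal scikit-learn sketch of the 3-classifier × 2-vectorizer grid the abstract above describes; the toy reviews, labels, and hyperparameters are illustrative assumptions, not the study's data.

```python
# Sketch: cross every classifier with every vectorization scheme, as in the study.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["loved this food", "terrible taste",
               "really good", "awful, never again"]
train_labels = ["pos", "neg", "pos", "neg"]
test_texts = ["good food", "taste was terrible"]

vectorizers = {"CountVectorizer": CountVectorizer,
               "TF-IDF": TfidfVectorizer}
classifiers = {
    "NaiveBayes": lambda: MultinomialNB(),
    "LogisticRegression": lambda: LogisticRegression(max_iter=1000),
    "LinearSVC": lambda: LinearSVC(),
}

for vname, make_vec in vectorizers.items():
    for cname, make_clf in classifiers.items():
        model = make_pipeline(make_vec(), make_clf())  # fresh state per run
        model.fit(train_texts, train_labels)
        print(f"{vname:16s} {cname:18s} -> {model.predict(test_texts)}")
```

In the study, accuracy (and wall-clock training time, which drove the Logistic Regression recommendation) would be measured on a held-out split rather than printed predictions.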
SCOPUS_ID:85116846471
|
A Comparative Study into Stock Market Prediction Through Various Sentiment Analysis Algorithms
|
Since early times, there has been a strong link between finances, currencies, and economic growth on the one hand and the future progress of humankind on the other. Nowadays, the future of economic development is dictated by the fortunes and vagaries of stock markets. Researchers have found that it is possible to make forecasts using large historical datasets of stock market prices and their fluctuations. The fact that stock markets play a vital role in national and global economies is today undeniable. Stock markets can be profitable through speculation, provided, of course, that future behavior can be forecast with a consistent degree of accuracy. In this study, the authors propose a model that helps predict stock market trends consistently and with minimal error. The model makes use of sentiment analysis of financial news together with the historical patterns of stocks in share markets, and can offer more accurate results by analyzing data from multiple news sources and the historical price movements of individual stocks. Using a two-step process, the model offers a minimum prediction accuracy of 72%. In the first step, the Naïve Bayes algorithm is used to evaluate text polarity to obtain a fix on public sentiment from the collected news feeds. In the second step, future stock prices are forecast by combining the text-polarity evaluation results with historical data on stock price movements.
|
[
"Sentiment Analysis"
] |
[
78
] |
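The two-step scheme above can be sketched as follows, under heavy assumptions: a Naive Bayes polarity model over headlines (step 1) whose probability output is joined with a short window of price returns to predict next-day direction (step 2). The headlines, labels, return windows, and fusion features are all fabricated for illustration; the paper's exact pipeline may differ.

```python
# Sketch of a two-step news-plus-price predictor (all data is toy data).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Step 1: headline polarity model (labels would come from an annotated corpus).
headlines = ["profits surge past forecasts", "regulator probes accounting fraud",
             "record quarterly revenue", "shares plunge on weak guidance"]
polarity = [1, 0, 1, 0]  # 1 = positive, 0 = negative
nb = make_pipeline(CountVectorizer(), MultinomialNB()).fit(headlines, polarity)

def sentiment_score(headline):
    return nb.predict_proba([headline])[0, 1]  # P(positive)

# Step 2: combine the sentiment score with two days of price returns.
samples = [("profits surge past forecasts", 0.01, 0.02),
           ("shares plunge on weak guidance", -0.03, -0.01)]
X = np.array([[sentiment_score(h), r1, r2] for h, r1, r2 in samples])
y = np.array([1, 0])  # next-day direction: 1 = up, 0 = down
direction = LogisticRegression().fit(X, y)
print(direction.predict(X))
```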
SCOPUS_ID:85144426943
|
A Comparative Study of BERT-Based Attention Flows Versus Human Attentions on Fill-in-Blank Task
|
The purpose of this small-scale study is to compare the BERT language model's attention flow—which quantifies the marginal contributions of each word and of aggregated word groups toward the fill-in-blank prediction—with human evaluators' opinions on the same task. Based on the limited number of experiments performed, we have the following findings: (1) Compared with human evaluators, the BERT base model pays less attention to verbs and more attention to nouns and other word types. That seems to agree with the natural partition hypothesis: nouns predominate over verbs in children's initial vocabularies because the meanings of nouns are easy to understand. The premise of such a hypothesis is that the BERT base model performs like a human child. (2) As sentences become longer and more complex, human evaluators can distinguish the major logical relation and are less distracted by other components of the structure. The attention flow scores calculated using the BERT base model, on the other hand, amortize across multiple words and word groups as sentences become longer and more complex. (3) The amortized attention flow scores calculated using the BERT base model provide a balanced global view of the different types of discourse relations embedded in long and complex sentences. For future work, more examples will be prepared for detailed and rigorous verification of these findings.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
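A sketch of how attention contributions for a fill-in-blank can be inspected with BERT. Note the caveat: this implements attention *rollout* (average heads, add the residual, multiply across layers) — a simpler, commonly used cousin of attention flow — so treat it as a related illustration rather than the paper's exact computation; the sentence is also invented.

```python
# Sketch: where does BERT "look" when filling a blank? (attention rollout)
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased",
                                        output_attentions=True)
model.eval()

text = f"The boy {tok.mask_token} the ball across the yard."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: one (1, heads, seq, seq) tensor per layer.
rollout = torch.eye(inputs.input_ids.shape[1])
for layer_att in out.attentions:
    a = layer_att[0].mean(dim=0)                # average over heads
    a = 0.5 * a + 0.5 * torch.eye(a.size(0))    # account for residual stream
    a = a / a.sum(dim=-1, keepdim=True)         # renormalize rows
    rollout = a @ rollout                       # compose across layers

mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero().item()
tokens = tok.convert_ids_to_tokens(inputs.input_ids[0])
for t, w in sorted(zip(tokens, rollout[mask_pos].tolist()),
                   key=lambda p: -p[1])[:5]:
    print(f"{t:12s} {w:.3f}")  # top tokens contributing to the blank
```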
SCOPUS_ID:85128880323
|
A Comparative Study of Bot Detection Techniques With an Application in Twitter Covid-19 Discourse
|
Bot Detection is crucial in a world where Online Social Networks (OSNs) play a pivotal role in our lives as public communication channels. This task becomes highly relevant in crises like the Covid-19 pandemic when there is a growing risk of proliferation of automated accounts designed to produce misinformation content. To address this issue, we first introduce a comparison between supervised Bot Detection models using Data Selection. The techniques used to develop the bot detection models use features such as the tweets’ metadata or accounts’ Digital Fingerprint. The techniques implemented in this work proved effective in detecting bots with different behaviors. Social Fingerprint-based methods have been found to be effective with bots that behave in a coordinated manner. Furthermore, all these approaches have produced excellent results compared to the Botometer v3. Second, we present and discuss a case study related to the Covid-19 pandemic that analyses the differences in the discourse between bots and humans on Twitter, a platform used worldwide to express opinions and engage in dialogue in a public arena. While bots and humans generally express themselves alike, the tweets’ content and sentiment analysis reveal some dissimilitudes, especially in tweets concerning President Trump. When the discourse switches to pandemic management by Trump, sentiment-related values display a drastic difference, showing that tweets generated by bots have a predominantly negative attitude. However, according to our findings, while automated accounts are numerous and active in discussing controversial issues, they usually do not seem to increase exposure to negative and inflammatory content for human users.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
71,
72,
78
] |
SCOPUS_ID:85083079919
|
A Comparative Study of Co-Occurrence Strategies for Building A Cross-Domain Sentiment Thesaurus
|
With the evolution of user-generated web content, people naturally and freely share their opinions across numerous domains. However, labeling training data for every domain would come at a massive cost and would prevent us from taking advantage of information shared across domains. As a result, cross-domain sentiment analysis is a challenging NLP task due to feature and polarity divergence. To build a sentiment-sensitive thesaurus that groups different features expressing the same sentiments for cross-domain sentiment classification, different co-occurrence measures are used. This paper presents a comparative study covering different co-occurrence methods for building a cross-domain sentiment thesaurus. This work also defines a Bidirectional Conditional Probability (BCP) to handle the asymmetric co-occurrence problem. Two machine learning classifiers (Naïve Bayes (NB) and Support Vector Machine (SVM)) and three feature selection methods (information gain, odds ratio, and chi-square) are used to evaluate the proposed model. Experimental results show that BCP outperforms four baseline co-occurrence measures (PMI, PMI-square, EMI, and G-means) in the task of cross-domain sentiment analysis.
|
[
"Sentiment Analysis"
] |
[
78
] |
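A sketch of co-occurrence scoring over a toy corpus, for the comparison above. PMI is the standard baseline; the BCP shown is only one plausible reading of the abstract (combining both conditional directions to fix the asymmetry of a single conditional probability) — the paper's exact formula may differ.

```python
# Sketch: document-level co-occurrence measures on a toy corpus.
import math
from collections import Counter
from itertools import combinations

docs = [{"battery", "excellent", "phone"},
        {"battery", "poor", "phone"},
        {"excellent", "plot", "book"},
        {"poor", "plot", "book"}]
n = len(docs)
word = Counter(w for d in docs for w in d)                       # doc frequency
pair = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def pmi(u, v):
    p_uv = pair[frozenset((u, v))] / n
    if p_uv == 0:
        return float("-inf")
    return math.log2(p_uv / ((word[u] / n) * (word[v] / n)))

def bcp(u, v):
    # Assumed form: mean of P(u|v) and P(v|u), making the score symmetric.
    c = pair[frozenset((u, v))]
    return 0.5 * (c / word[v] + c / word[u])

print("PMI(battery, phone) =", round(pmi("battery", "phone"), 3))
print("BCP(battery, phone) =", round(bcp("battery", "phone"), 3))
```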
SCOPUS_ID:85132442241
|
A Comparative Study of Conceptual Metaphors of Joyness and Fear in Adolescent’s Literature: A Cognitive Approach
|
Conceptual metaphor is one of the most important topics in cognitive linguistics. This approach holds that metaphor is a cognitive phenomenon and that what appears in language is just one aspect of this cognitive phenomenon. In this research, we analyze joyness and fear in eight teenage novels by two Iranian novelists (a man and a woman) and two Spanish novelists (a man and a woman)—two novels from each writer—and compare the conceptual metaphors whose target domains are joyness and fear in Persian and Spanish teenage literature. We chose these feelings from among the fundamental concepts of emotion (sadness, joyness, fear, anger, love, and shame). We used the library method for this research. For this purpose, conceptual metaphors of joyness and fear were extracted from the eight selected novels, and their conceptual mappings and source domains were identified. At the next level, we compared these conceptual mappings and identified their commonalities and differences. In this regard, some concepts of the source domain, such as color, physiological and behavioral effects, and light and darkness, were described in detail. The other main goal of this research is to investigate the impact of the author's gender on the quantity and quality of the metaphors of joyness and fear used. The analysis of the chosen Persian corpus shows that the most frequent conceptual metaphors are "joyness is a plant", "joyness is light", and "joyness is motion toward a higher place or level". In the cognitive domain of fear, the data show that the most frequent are "fear is motionlessness", "fear is fluid", and "fear is a building". In Spanish, the most frequent are "joyness is light", "joyness is direction", "joyness is color", "joyness is laughing", and "joyness is singing"; for fear, "fear is sweat", "fear is heartbeat", and "fear is the changing color of the face". The similarities between the Persian and Spanish conceptual metaphors show that these concepts are universal across languages.
|
[
"Cognitive Modeling",
"Linguistics & Cognitive NLP"
] |
[
2,
48
] |
SCOPUS_ID:85107418904
|
A Comparative Study of Conventional Machine Learning and Deep Learning Models to Find Semantic Similarity
|
In today's scenario, with many platforms trying to provide answers to every question users search for on the internet, finding things is easy for some but remains constrained for many users. It becomes arduous to find answers even though they already exist; the uncertainty is due to semantic ambiguity. Given freedom of thought and linguistic diversity, there is a need for new tools to resolve this ambiguity. Natural language processing (NLP), a multidisciplinary field incorporating machine learning and statistical techniques, provides powerful analysis. In this paper, basic machine learning models and the state-of-the-art Universal Sentence Encoder (in its transformer and deep averaging network variants) are compared using NLP techniques to detect duplicate questions in the Quora dataset. This paper aims to find which models outperform the others and which performance measures affect each model, giving a better understanding of the requirements for obtaining the best results in finding semantic similarity between textual questions.
|
[
"Language Models",
"Semantic Text Processing",
"Semantic Similarity"
] |
[
52,
72,
53
] |
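A sketch of how the strongest model family in the study above, the Universal Sentence Encoder, can score a Quora-style question pair. The TF-Hub handle below is the public USE v4 (the deep-averaging-network variant; the transformer variant lives at ".../universal-sentence-encoder-large/5"); the duplicate threshold is an illustrative assumption.

```python
# Sketch: duplicate-question scoring with the Universal Sentence Encoder.
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
u, v = embed([q1, q2]).numpy()  # two 512-dim sentence vectors

cosine = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
print(f"cosine similarity = {cosine:.3f}")
print("duplicate" if cosine > 0.8 else "not duplicate")  # threshold assumed
```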
SCOPUS_ID:85112145585
|
A Comparative Study of Conversational Proxemics for Virtual Agents
|
This paper explores proxemics—interpersonal distances—in conversations with virtual agents in virtual reality. While the real-world proxemics of human-human interaction have been well studied, the virtual-world proxemics of human-agent interaction are less well understood. We review research related to proxemics in virtual reality, noting that the previous research has not addressed proxemics with actual conversation, describe an empirical methodology for addressing our research questions, and present our results. The study used a repeated-measures, within-subjects design and had 23 participants. In the study, participants approached and conversed with a virtual agent in three conditions: no crowd, small crowd, and large crowd. The participant's distance from the agent with whom they conversed was recorded at 60 frames/second by VAIF's proxemics tool. Our results suggest that humans in a virtual world tend to position themselves closer to virtual agents than they would relative to humans in the physical world. However, the presence of other virtual agents did not appear to cause participants to change their proxemics.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85066608855
|
A Comparative Study of Deep Learning Approaches for Query-Focused Extractive Multi-Document Summarization
|
Query-focused multi-document summarization aims to produce a single, short document that summarizes a set of documents that are relevant to a given query. During the past few years, deep learning approaches have been utilized to generate summaries in an abstractive or extractive manner. In this study, we employ six deep neural network approaches to solving a query-focused extractive multi-document summarization task and compare their performances. To the best of our knowledge, our study is the first to compare deep learning techniques on extractive query-focused multi-document summarization. Our experiments with DUC 2005-2007 benchmark datasets show that Bi-LSTM with Max-pooling achieves the highest performance among the methods compared.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
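A hedged Keras sketch of the winning configuration above (Bi-LSTM with max-pooling), framed as a query-conditioned sentence relevance scorer for extractive selection. The fusion scheme, dimensions, and weight-shared encoder are assumptions, since the abstract does not specify them.

```python
# Sketch: score each document sentence against the query; extract top scorers.
import tensorflow as tf
from tensorflow.keras import layers

vocab, emb_dim, max_len = 20000, 128, 60

def encoder():
    inp = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab, emb_dim)(inp)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    x = layers.GlobalMaxPooling1D()(x)   # max-pooling over time steps
    return tf.keras.Model(inp, x)

sent_in = layers.Input(shape=(max_len,), dtype="int32", name="sentence")
query_in = layers.Input(shape=(max_len,), dtype="int32", name="query")
enc = encoder()                           # shared weights for both inputs
merged = layers.concatenate([enc(sent_in), enc(query_in)])
score = layers.Dense(1, activation="sigmoid")(merged)  # relevance in [0, 1]
model = tf.keras.Model([sent_in, query_in], score)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

At inference time, every sentence in the document set would be scored against the query and the highest-scoring sentences concatenated, subject to a length budget, to form the summary.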
SCOPUS_ID:85126397191
|
A Comparative Study of Deep Learning Methods for Hate Speech and Offensive Language Detection in Textual Data
|
The problem of hate speech on social network sites is very prevalent and is faced by every major social media platform. Several methods have been explored for intent-based text classification, each with its own pros and cons concerning the type of intent, the size of the dataset, the maximum length of the text, and so on. Several approaches have been presented in the literature for hate and offensive speech detection. The main objective of this work is to present a comparative study of selected deep learning methods for hate speech and offensive language detection: recurrent neural networks (RNN), convolutional neural networks (CNN), long short-term memory (LSTM), and bidirectional encoder representations from transformers (BERT). We have also investigated the effect of the class weighting technique on the performance of these deep learning methods. Our study finds that the pre-trained BERT model outperforms the other explored models for both unweighted and weighted hate speech classification. For offensive language classification, the RNN and CNN models outperform all other models in the unweighted and weighted cases, respectively. It turns out that the class weighting technique considerably boosts the classification performance of all four models for hate speech.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Information Extraction & Text Mining",
"Ethical NLP",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
24,
3,
17,
36,
4
] |
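A sketch of the class-weighting technique the study above evaluates, applied to a BERT classifier: scikit-learn computes inverse-frequency weights, which are handed to a weighted cross-entropy loss so the minority (hate) class is not swamped. The label distribution, label set, and checkpoint are illustrative assumptions.

```python
# Sketch: weighted cross-entropy for an imbalanced hate-speech label set.
import numpy as np
import torch
from sklearn.utils.class_weight import compute_class_weight
from transformers import AutoModelForSequenceClassification, AutoTokenizer

labels = np.array([0] * 900 + [1] * 80 + [2] * 20)  # neither/offensive/hate
weights = compute_class_weight("balanced", classes=np.unique(labels), y=labels)
weight_t = torch.tensor(weights, dtype=torch.float)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)
loss_fn = torch.nn.CrossEntropyLoss(weight=weight_t)  # the weighted loss

batch = tok(["you people are vermin", "have a nice day"],
            padding=True, truncation=True, return_tensors="pt")
targets = torch.tensor([2, 0])
logits = model(**batch).logits
loss = loss_fn(logits, targets)  # used instead of the unweighted default
loss.backward()
print(f"weighted loss: {loss.item():.4f}")
```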
SCOPUS_ID:85130308512
|
A Comparative Study of Deep Learning Techniques for Farmer Query Text Classification
|
This research proposes Long Short-Term Memory (LSTM) network models with different numbers of stacked LSTM layers for farmer query classification. A dataset of 105,724 farmer queries across 23 classes was used to train the proposed LSTM models. The Word2Vec, Global Vectors for Word Representation (GloVe), and FastText embedding techniques were compared on the query classification task. The Word2Vec embedding technique produced better results than GloVe and FastText for farmer query classification. After extensive simulation, the DLSTM network with three stacked LSTM layers achieved a testing accuracy of 90.35%. The classification performance of this network on farmer queries was superior to that of a Convolutional Neural Network (CNN), a single LSTM, and the other DLSTM models.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
12,
24,
3
] |
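A sketch of the best configuration above: Word2Vec vectors feeding three stacked LSTM layers and a 23-way softmax. Layer sizes, sequence length, and training details are assumptions, and the two queries are stand-ins for the real corpus.

```python
# Sketch: Word2Vec embeddings + three stacked LSTM layers for 23-class queries.
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec
from tensorflow.keras import layers

queries = [["how", "to", "treat", "leaf", "rust", "in", "wheat"],
           ["best", "fertilizer", "for", "paddy"]]
w2v = Word2Vec(sentences=queries, vector_size=100, min_count=1, epochs=20)

vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = padding
emb = np.zeros((len(vocab) + 1, 100))
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

model = tf.keras.Sequential([
    layers.Input(shape=(10,)),  # assumed max query length
    layers.Embedding(len(vocab) + 1, 100,
                     embeddings_initializer=tf.keras.initializers.Constant(emb),
                     trainable=False),
    layers.LSTM(128, return_sequences=True),  # stacked layer 1
    layers.LSTM(128, return_sequences=True),  # stacked layer 2
    layers.LSTM(128),                         # stacked layer 3
    layers.Dense(23, activation="softmax"),   # 23 query classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```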
SCOPUS_ID:85103853117
|
A Comparative Study of Deep Learning based Named Entity Recognition Algorithms for Cybersecurity
|
Named Entity Recognition (NER) is important in the cybersecurity domain. It helps researchers extract cyber threat information from unstructured text sources, and the extracted cyber-entities or key expressions can be used to model a cyber-attack described in open-source text. A large number of general-purpose NER algorithms have been published that work well for generic text analysis, but they do not perform well when applied to the cybersecurity domain. In this field, the available open-source text varies greatly in complexity and in the underlying structure of its sentences, and general-purpose NER algorithms can misrepresent domain-specific words such as "malicious" and "javascript". In this paper, we compare recent deep learning-based NER algorithms on a cybersecurity dataset that we collected from various sources, including the "Microsoft Security Bulletin" and "Adobe Security Updates". Some of the compared approaches were proposed in the literature but had not previously been applied to cybersecurity; others are innovations proposed by us. This comparative study helps us identify the NER algorithms that are robust and work well on sentences taken from a large number of cybersecurity sources. We tabulate their performance on the test set and identify the best NER algorithm for a cybersecurity corpus. We also discuss the different embedding strategies that aid the NER process for the chosen deep learning algorithms.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
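To illustrate the mismatch the abstract above describes, the sketch below runs a general-purpose pretrained NER pipeline over a security-flavored sentence; a domain-tuned tagger would be expected to recover cyber-specific labels that the generic model lacks. The default checkpoint and example sentence are illustrative.

```python
# Sketch: a general-purpose NER model applied to cybersecurity text.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # generic pretrained model

text = ("Adobe patched CVE-2021-28550 after a Trojan downloader abused "
        "malicious JavaScript in Acrobat Reader.")
for ent in ner(text):
    print(f"{ent['word']:20s} {ent['entity_group']:6s} {ent['score']:.2f}")

# A cybersecurity-specific tagger would additionally be trained to emit
# labels such as VULNERABILITY, MALWARE, and SOFTWARE for the spans above,
# which the generic label set (PER/ORG/LOC/MISC) cannot express.
```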