id | title | abstract | classification_labels | numerical_classification_labels
---|---|---|---|---|
http://arxiv.org/abs/2002.10116v1
|
A Hybrid Approach to Dependency Parsing: Combining Rules and Morphology with Deep Learning
|
Fully data-driven, deep learning-based models are usually designed as language-independent and have been shown to be successful for many natural language processing tasks. However, when the studied language is low-resourced and the amount of training data is insufficient, these models can benefit from the integration of natural language grammar-based information. We propose two approaches to dependency parsing, especially for languages with a restricted amount of training data. Our first approach combines a state-of-the-art deep learning-based parser with a rule-based approach, and the second one incorporates morphological information into the parser. In the rule-based approach, the parsing decisions made by the rules are encoded and concatenated with the vector representations of the input words as additional information to the deep network. The morphology-based approach proposes different methods to include the morphological structure of words in the parser network. Experiments are conducted on the IMST-UD Treebank, and the results suggest that integrating explicit knowledge about the target language into a neural parser through a rule-based parsing system and morphological analysis leads to more accurate annotations and hence increases parsing performance in terms of attachment scores. The proposed methods are developed for Turkish but can be adapted to other languages as well.
|
[
"Syntactic Parsing",
"Syntactic Text Processing",
"Morphology"
] |
[
28,
15,
73
] |
http://arxiv.org/abs/1303.1441v2
|
A Hybrid Approach to Extract Keyphrases from Medical Documents
|
Keyphrases are phrases, consisting of one or more words, that represent the important concepts in an article. Keyphrases are useful for a variety of tasks such as text summarization, automatic indexing, clustering/classification, and text mining. This paper presents a hybrid approach to keyphrase extraction from medical documents. The keyphrase extraction approach presented in this paper is an amalgamation of two methods: the first assigns weights to candidate keyphrases based on an effective combination of features such as position, term frequency, and inverse document frequency, and the second assigns weights to candidate keyphrases using knowledge about their similarity to the structure and characteristics of keyphrases available in memory (a stored list of keyphrases). An efficient candidate keyphrase identification method, the first component of the proposed keyphrase extraction system, is also introduced in this paper. The experimental results show that the proposed hybrid approach performs better than some state-of-the-art keyphrase extraction approaches.
|
[
"Term Extraction",
"Information Extraction & Text Mining"
] |
[
1,
3
] |
SCOPUS_ID:85138422430
|
A Hybrid Approach to Identify and Forecast Technological Opportunities based on Topic Modeling and Sentiment Analysis
|
This study proposes a hybrid approach to recognize both technological topics and application topics by separately mining intelligence from the different parts of Derwent patent documents. A topic modeling method is introduced to recognize topics from patents, while sentiment analysis combined with traditional bibliometric indicators is introduced to judge the value of topics from multiple aspects. The hybrid approach is demonstrated by a case study on dye-sensitized solar cells. The main contributions of this study are three-fold. First, we explore both technical innovation opportunities and application opportunities by mining different parts of Derwent patent documents. Second, we integrate sentiment analysis and bibliometric indicators to judge the value of topics from multiple aspects. Third, we propose a probability-based topic relation measurement method to identify the relationships of the applications with the core sub-technologies.
|
[
"Topic Modeling",
"Information Extraction & Text Mining",
"Sentiment Analysis"
] |
[
9,
3,
78
] |
https://aclanthology.org//W14-4408/
|
A Hybrid Approach to Multi-document Summarization of Opinions in Reviews
|
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85061924147
|
A Hybrid Approach to Paraphrase Detection
|
In this paper, we present a hybrid approach to the paraphrase detection task. The approach takes advantage of both feature-engineering and neural-based methods. First, we represent the words and entities in a given sentence by using their pre-trained vectors. Then, those pre-trained vectors are encoded by a bidirectional long short-term memory network. The output matrix is fed into an attention network to obtain an attention vector. The final representation of the sentence is the inner product of the matrix and the attention vector. We conduct experiments on the Microsoft Research Paraphrase corpus, a popular dataset used for benchmarking paraphrase detection methods. The experimental results show that our approach achieves competitive results.
|
[
"Language Models",
"Paraphrasing",
"Semantic Text Processing",
"Text Generation"
] |
[
52,
32,
72,
47
] |
SCOPUS_ID:85132420501
|
A Hybrid Approach to Paraphrase Detection Based on Text Similarities and Machine Learning Classifiers
|
In the realm of natural language processing (NLP), paraphrase detection is a highly common and significant activity, as it is involved in many complex NLP applications such as information retrieval, text mining, and plagiarism detection. The proposed model finds the best combination of three types of similarity techniques: string similarity, semantic similarity, and embedding similarity. These similarity scores, which range from 0 to 1, are then input to machine learning classifiers. The proposed model is benchmarked on the Microsoft Research Paraphrase Corpus (MSRP) dataset, and this approach to the paraphrase detection problem achieves an accuracy of 75.78% and an F1-score of 83.01%.
|
[
"Paraphrasing",
"Text Classification",
"Text Generation",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
32,
36,
47,
24,
3
] |
SCOPUS_ID:85124690603
|
A Hybrid Approach to Predict Election Candidate Success Using Candidate Speech and Voter Opinion
|
Election candidate success is a prediction of the winning rate of a candidate. Sentiment analysis of users' social media data plays a prominent role in this prediction. It refers to a classification problem whose main goal is to classify data into positive and negative sentiments. Sentiment analysis of users' Twitter data offers an effective way to measure voter opinion towards a candidate. As election forecasting based on voter opinion alone is difficult, the proposed system adopts a hybrid approach in which sentiment analysis is performed on users' Twitter data and personality prediction is performed on the candidate's speech data. The system highlights the performance of various classifiers. Experimental results show that the logistic regression classifier outperformed the other classifiers.
|
[
"Information Extraction & Text Mining",
"Text Classification",
"Speech & Audio in NLP",
"Sentiment Analysis",
"Information Retrieval",
"Multimodality"
] |
[
3,
36,
70,
78,
24,
74
] |
http://arxiv.org/abs/1611.01083v1
|
A Hybrid Approach to Word Sense Disambiguation Combining Supervised and Unsupervised Learning
|
In this paper, we find the meanings of words in distinct situations. Word Sense Disambiguation is used to find the meaning of a word in its live context using supervised and unsupervised approaches. Unsupervised approaches use an online dictionary for learning, while supervised approaches use manual learning sets. Hand-tagged data are populated, which may not be effective or sufficient for the learning procedure. This limitation of information is the main flaw of the supervised approach. Our proposed approach overcomes this limitation by using a learning set that is enriched dynamically with new data. A trivial filtering method is utilized to obtain appropriate training data. We introduce a mixed methodology combining the Modified Lesk approach and Bag-of-Words, with bags enriched using learning methods. Experiments establish the superiority of our approach over the individual Modified Lesk and Bag-of-Words approaches.
|
[
"Low-Resource NLP",
"Semantic Text Processing",
"Word Sense Disambiguation",
"Responsible & Trustworthy NLP"
] |
[
80,
72,
65,
4
] |
https://aclanthology.org//W01-1620/
|
A Hybrid Approach to the Development of Dialogue Systems directed by Semantics
|
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85113411165
|
A Hybrid Approach with Machine Learning Towards Opinion Mining for Complex Textual Content
|
Opinion mining is increasingly adopted in business enterprises, where sophisticated analytical approaches are applied to consumer opinions expressed in the form of text. A review of the existing literature shows that there is still open scope to further improve this technique. The proposed study considers a case study of a problem where opinion is shared in the form of text as well as symbols/emoticons, which is quite challenging for existing text analytics to extract knowledge from. Therefore, this paper introduces a novel solution in which two variants of approaches are used for this purpose, i.e., a hybrid approach and a machine learning approach, in order to perform opinion mining on such complex textual content. The study outcome shows that the proposed system offers satisfactory processing time and accuracy on large text datasets.
|
[
"Opinion Mining",
"Sentiment Analysis",
"Information Extraction & Text Mining"
] |
[
49,
78,
3
] |
SCOPUS_ID:85132397168
|
A Hybrid Arabic Text Summarization Approach based on Transformers
|
Recently, the amount of data in the world has increased tremendously. Most research and effort has focused on dealing with data in the English language; dealing with data in other languages, such as Arabic, has become a significant challenge. In this paper, we propose a sequential hybrid model based on transformers to summarize Arabic articles. We used the two main approaches to summarization to build our model. The first is the extractive approach, which takes the most important sentences from the article verbatim as the summary; we used several word embedding techniques to determine which sentences are most important, and we build this summary using deep learning transformers such as AraBert. The second is the abstractive approach, which is similar to human summarization in that the machine can use words that have the same meaning but are not found in the original text; we produce this kind of summary using the MT5 Arabic pre-trained transformer model. We applied these two summarization approaches sequentially to build our A3SUT hybrid model: the output of the extractive module is fed into the abstractive module. By applying this approach, we enhanced the summary's quality to be closer to a human summary. We tested our model on the ESAC dataset and evaluated the extractive summary using the ROUGE score, obtaining a precision of 0.5348, a recall of 0.5515, and an F1 score of 0.4932; the abstractive summary was evaluated by user satisfaction. We add features to our summary to make it more understandable and accessible by applying metadata generation ('data about data') and classification. Metadata generation supports summary identification and organization, and metadata captures and provides essential contextual details, as not all summaries are self-describing. We also classify the original text to determine the summary topic before reading. We achieve 97.5% classification accuracy using a Support Vector Machine (SVM) trained on the NADA corpus.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Summarization",
"Text Generation",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
30,
47,
36,
3
] |
http://arxiv.org/abs/2303.04134v1
|
A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery
|
Intent Detection is one of the tasks of the Natural Language Understanding (NLU) unit in task-oriented dialogue systems. Out of Scope (OOS) and Out of Domain (OOD) inputs can cause problems for these systems. In addition, a labeled dataset is needed to train a model for Intent Detection in task-oriented dialogue systems, and creating a labeled dataset is time-consuming and needs human resources. The purpose of this article is to address these problems. The task of identifying OOD/OOS inputs is named OOD/OOS Intent Detection, while discovering new intents and pseudo-labeling OOD inputs is known as Intent Discovery. In the OOD intent detection part, we make use of a Variational Autoencoder to distinguish between known and unknown intents independent of the input data distribution. After that, an unsupervised clustering method is used to discover the different unknown intents underlying OOD/OOS inputs. We also apply a non-linear dimensionality reduction on OOD/OOS representations to make distances between representations more meaningful for clustering. Our results show that the proposed model for both OOD/OOS Intent Detection and Intent Discovery achieves strong results and surpasses baselines in both English and Persian.
|
[
"Sentiment Analysis",
"Intent Recognition",
"Text Clustering",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Information Extraction & Text Mining"
] |
[
78,
79,
29,
11,
38,
3
] |
SCOPUS_ID:85135068944
|
A Hybrid Automatic Text Summarization Model for Judgment Documents
|
Judgment documents are the final carrier of judicial trial activities and are an indispensable component for assisting sentencing decision-making and standardizing the scale of judgment. At present, the number of public judgment documents in China has reached 120 million and continues to grow, which makes it significantly harder for users to obtain useful information. To solve this problem, this paper proposes a hybrid automatic text summarization model for judgment documents. The method is divided into two stages. In the first stage, extractive summarization technology is used to extract key sentences from the original text to form a key-sentence set. In the second stage, the key sentences extracted in the previous stage are copied or rewritten by a sequence generation model to produce the final summary. The ROUGE indexes of this method in automatic summarization experiments on judgment documents are 59.79, 37.71, and 52.67, which are 5.26, 7.25, and 10.41 higher than those of the benchmark model UniLM, respectively. The method proposed in this paper can be effectively applied in automatic summarization services for judgment documents, solve the associated problem of information overload and, finally, provide a new way for users to gain smooth access to judgment documents and information.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85090780953
|
A Hybrid BERT Model That Incorporates Label Semantics via Adjustive Attention for Multi-Label Text Classification
|
The multi-label text classification task aims to tag a document with a series of labels. Previous studies usually treated labels as symbols without semantics and ignored the relations among labels, which caused information loss. In this paper, we show that explicitly modeling label semantics can improve multi-label text classification. We propose a hybrid neural network model to simultaneously take advantage of both label semantics and fine-grained text information. Specifically, we utilize the pre-trained BERT model to compute context-aware representations of documents. Furthermore, we incorporate the label semantics in two stages. First, a novel label graph construction approach is proposed to capture the label structures and correlations. Second, we propose a new attention mechanism, adjustive attention, to establish the semantic connections between labels and words and to obtain label-specific word representations. The hybrid representation that combines context-aware features and label-specific word features is fed into a document encoder for classification. Experimental results on two publicly available datasets show that our model is superior to other state-of-the-art classification methods.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
12,
24,
3
] |
http://arxiv.org/abs/2008.06176v1
|
A Hybrid BERT and LightGBM based Model for Predicting Emotion GIF Categories on Twitter
|
Animated Graphical Interchange Format (GIF) images have been widely used on social media as an intuitive way of expressing emotion. Given their expressiveness, GIFs offer a more nuanced and precise way to convey emotions. In this paper, we present our solution for the EmotionGIF 2020 challenge, the shared task of SocialNLP 2020. To recommend GIF categories for unlabeled tweets, we regarded this problem as a matching task and proposed a learning-to-rank framework based on Bidirectional Encoder Representations from Transformers (BERT) and LightGBM. Our team won 4th place with a Mean Average Precision @ 6 (MAP@6) score of 0.5394 on the round 1 leaderboard.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85058377155
|
A Hybrid BLSTM-C Neural Network Proposed for Chinese Text Classification
|
Text classification has always been a concern in natural language processing, especially nowadays as data volumes become massive with the development of the Internet. The recurrent neural network (RNN) is one of the most popular methods for natural language processing due to its recurrent architecture, which gives it the ability to process serialized information. Meanwhile, the convolutional neural network (CNN) has shown its ability to extract features from visual imagery. This paper combines the advantages of RNNs and CNNs and proposes a model called BLSTM-C for Chinese text classification. BLSTM-C begins with a bidirectional long short-term memory (BLSTM) layer, a special kind of RNN, to obtain a sequence output based on both past and future context. It then feeds this sequence to a CNN layer, which is utilized to extract features from the preceding sequence. We evaluate the BLSTM-C model on several experiments such as sentiment classification and category classification, and the results show our model's satisfying performance on these text tasks.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
SCOPUS_ID:85076999703
|
A Hybrid Bat Algorithm Based on Combined Semantic Measures for Word Sense Disambiguation
|
The task of assigning an appropriate sense to an ambiguous word depending on its context is referred to as Word Sense Disambiguation (WSD). The objective of WSD is to attain improved accuracy for real-world applications such as information extraction, automatic summarisation, and machine translation. WSD is solved through the implementation of a computational intelligence approach known as the Bat Algorithm (BA). The BA has the potential to explore an expansive area of the search space as it is a population-based algorithm, which makes it considerably efficient in the diversification process. To further improve the search, a local search algorithm referred to as Hill Climbing (HC) is applied, which balances the exploration and exploitation aspects. The suggested algorithm has the ability to optimise the semantic value of the words in the input text. In this study, the semantic measure depends on the Leacock and Chodorow (LCH) algorithm and the extended Lesk (eLesk). The recommended algorithm is tested on certain benchmark datasets. According to the experimental results, our algorithm derives better quality performance than other relevant algorithms. It is, therefore, concluded that the method suggested in this study provides an effective solution for the WSD problem.
|
[
"Semantic Text Processing",
"Word Sense Disambiguation"
] |
[
72,
65
] |
SCOPUS_ID:85071097904
|
A Hybrid Bidirectional Recurrent Convolutional Neural Network Attention-Based Model for Text Classification
|
Text classification is an important application in natural language processing. At present, deep learning models such as convolutional neural networks and recurrent neural networks have achieved good results for this task, but multi-class text classification and fine-grained sentiment analysis are still challenging. In this paper, we propose a hybrid bidirectional recurrent convolutional neural network attention-based model to address this issue, named BRCAN. The model combines bidirectional long short-term memory and a convolutional neural network with an attention mechanism and word2vec to achieve fine-grained text classification. In our model, we apply word2vec to generate word vectors automatically and a bidirectional recurrent structure to capture the contextual information and long-term dependence of sentences. We also employ a max-pooling layer of the convolutional neural network, which judges which words play an essential role in text classification, and use the attention mechanism to give them higher weights to capture the key components in texts. We conduct experiments on four datasets: Yahoo! Answers and Sogou News for topic classification, and Yelp Reviews and Douban Movies Top250 short reviews for sentiment analysis. The experimental results show that BRCAN outperforms the state-of-the-art models.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85067388843
|
A Hybrid CNN-LSTM Model for Improving Accuracy of Movie Reviews Sentiment Analysis
|
Nowadays, social media has become a tremendous source for acquiring users' opinions. With the advancement of technology and the sophistication of the internet, a huge amount of data is generated from various sources like social blogs, websites, etc. In recent times, blogs and websites are real-time means of gathering product reviews. However, the excessive number of blogs on the cloud has enabled the generation of a huge volume of information in different forms like attitudes, opinions, and reviews. Therefore, a dire need emerges to find a method to extract meaningful information from big data, classify it into different categories, and predict end users' behaviors or sentiments. The Long Short-Term Memory (LSTM) model and the Convolutional Neural Network (CNN) model have been applied to different Natural Language Processing (NLP) tasks with remarkable and effective results. The CNN model efficiently extracts higher-level features using convolutional layers and max-pooling layers, while the LSTM model is capable of capturing long-term dependencies between word sequences. In this study, we propose a hybrid model using LSTM and a very deep CNN, named the Hybrid CNN-LSTM Model, to address the sentiment analysis problem. First, we use the Word to Vector (Word2Vec) approach to train initial word embeddings. Word2Vec translates text strings into vectors of numeric values, computes distances between words, and groups similar words based on their meanings. After word embedding is performed, the proposed model combines the set of features extracted by the convolution and global max-pooling layers with long-term dependencies. The proposed model also uses dropout, normalization, and rectified linear units to improve accuracy. Our results show that the proposed Hybrid CNN-LSTM Model outperforms traditional deep learning and machine learning techniques in terms of precision, recall, F-measure, and accuracy. Our approach achieved competitive results against state-of-the-art techniques on the IMDB movie review dataset and the Amazon movie reviews dataset.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis",
"Representation Learning"
] |
[
52,
72,
78,
12
] |
SCOPUS_ID:85120344433
|
A Hybrid CNN-LSTM: A Deep Learning Approach for Consumer Sentiment Analysis Using Qualitative User-Generated Contents
|
With the rapid growth of information and communication technology (ICT), the availability of web content on social media platforms is increasing day by day. Sentiment analysis of online reviews is drawing researchers' attention in various organizations such as academia, government, and private industry. Sentiment analysis has been a hot research topic in Machine Learning (ML) and Natural Language Processing (NLP), and Deep Learning (DL) techniques are now implemented in sentiment analysis to obtain excellent results. This study proposes a hybrid convolutional neural network-long short-term memory (CNN-LSTM) model for sentiment analysis. The proposed model is applied with dropout, max pooling, and batch normalization. Experimental analysis was carried out on the Airlinequality and Twitter airline sentiment datasets. We employed the Keras word embedding approach, which converts texts into vectors of numeric values, where similar words have small vector distances between them. We calculated various metrics, such as accuracy, precision, recall, and F1-measure, to measure the model's performance; on these metrics, the proposed model outperforms classical ML models for sentiment analysis. Our analysis demonstrates that the proposed model achieves 91.3% accuracy in sentiment analysis.
|
[
"Representation Learning",
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
12,
52,
72,
78
] |
SCOPUS_ID:85119665927
|
A Hybrid Capsule Network with Attention and BiLSTM for Opinion Mining in Text
|
Examining reviews of a product or web service helps increase the quality of that product or web service. Comments from online shopping sites such as Amazon, Flipkart, eBay, etc. not only assist users in purchasing a product but can also guide the producer/supplier in identifying the advantages and disadvantages of the goods. Mining online shopping websites and their information has thus become an important task. Web mining plays a major role in mining the details of online websites efficiently: it is the data mining process that learns without human intervention and mines information obtained from web documents and services. Sentiment categorization and web mining have recently become truly significant tasks, with profound business and research impact. Machine learning algorithms, and soon after deep learning methods, have been the market leaders in sentiment analysis. The advent of capsule networks has been a landmark event in deep learning; they have been truly proficient in image processing. For text classification, however, standalone capsule networks are not optimally suitable. Here, a hybrid BiLSTM-Capsule framework is introduced for sentiment analysis of web review texts from various datasets. The model begins with a bidirectional LSTM layer, followed by an attention layer and a final capsule layer. This review analysis helps to improve products from Amazon and increase movie quality. The analysis of outcomes on the MR, IMDB, SST, and Amazon datasets indicates that the introduced framework performs better than several benchmark deep learning models. Significantly, the BiLSTM-Capsule model can place words in a sentiment trend, showing the capsules' attributes without utilizing linguistic knowledge.
|
[
"Language Models",
"Opinion Mining",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
49,
72,
78
] |
http://arxiv.org/abs/1609.01597v1
|
A Hybrid Citation Retrieval Algorithm for Evidence-based Clinical Knowledge Summarization: Combining Concept Extraction, Vector Similarity and Query Expansion for High Precision
|
Novel information retrieval methods to identify citations relevant to a clinical topic can overcome the knowledge gap existing between the primary literature (MEDLINE) and online clinical knowledge resources such as UpToDate. Searching the MEDLINE database directly or with query expansion methods returns a large number of citations that are not relevant to the query. The current study presents a citation retrieval system that retrieves citations for evidence-based clinical knowledge summarization. This approach combines query expansion, a concept-based screening algorithm, and concept-based vector similarity. We also propose an information extraction framework for automated concept (Population, Intervention, Comparison, and Disease) extraction. We evaluated our proposed system on all topics (as queries) available from UpToDate for two diseases, heart failure (HF) and atrial fibrillation (AFib). The system achieved an overall F-score of 41.2% on HF topics and 42.4% on AFib topics against a gold standard of citations available in UpToDate. This is significantly high when compared to a query-expansion-based baseline (F-score of 1.3% on HF and 2.2% on AFib) and a system that uses query expansion with disease hyponyms and journal names, concept-based screening, and term-based vector similarity (F-score of 37.5% on HF and 39.5% on AFib). Evaluating the system with the top K relevant citations, where K is the number of citations in the gold standard, achieved a much higher overall F-score of 69.9% on HF topics and 75.1% on AFib topics. In addition, the system retrieved up to 18 new relevant citations per topic when tested on ten HF and six AFib clinical topics.
|
[
"Semantic Text Processing",
"Representation Learning",
"Summarization",
"Text Generation",
"Reasoning",
"Fact & Claim Verification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
12,
30,
47,
8,
46,
24,
3
] |
SCOPUS_ID:85092738254
|
A Hybrid Classification Approach using Topic Modeling and Graph Convolution Networks
|
Text classification has become a key operation in various natural language processing tasks. The efficiency of most classification algorithms depends predominantly on the quality of the input features. In this work, we propose a novel multi-class text classification technique that harvests features from two distinct feature extraction methods. First, a structured heterogeneous text graph built from document-word relations and word co-occurrences is leveraged using a Graph Convolution Network (GCN). Second, the documents are topic-modeled so that the document-topic scores can be used as features in the classification model. The graph is constructed using Point-wise Mutual Information (PMI) between pairs of co-occurring words and Term Frequency-Inverse Document Frequency (TF-IDF) scores for words in the documents. Experimentation reveals that our text classification model outperforms the existing techniques on five benchmark text classification datasets.
|
[
"Topic Modeling",
"Information Extraction & Text Mining",
"Information Retrieval",
"Structured Data in NLP",
"Text Classification",
"Multimodality"
] |
[
9,
3,
24,
50,
36,
74
] |
SCOPUS_ID:85085659397
|
A Hybrid Classification Method via Character Embedding in Chinese Short Text with Few Words
|
Recent decades have witnessed significant development in short text classification research. However, most existing methods focus only on texts that contain dozens of words, as on Twitter or microblogs, and do not take into consideration short texts with only a few words, such as news headlines or invoice names. Meanwhile, contemporary short text classification methods either expand the features of short texts with an external corpus or learn the feature representation from all the texts, and so do not take the differences between the words of a short text into full consideration. Notably, the classification of short texts with few words is usually determined by a few specific keywords, contrary to document classification or traditional short text classification. To address these problems, this paper proposes a hybrid classification method combining an attention mechanism and feature selection via character embedding for Chinese short texts with few words, called AFC. More specifically, first, character embeddings are computed to represent Chinese short texts with few words, which takes full advantage of the short text information without an external corpus. Second, an attention-based LSTM is introduced in our method to project the data into a weighted feature representation space, giving the keywords more influence in classification. Furthermore, the semantic similarity between content and class label information is calculated for feature selection, which reduces the possible negative influence of redundant information on classification. Experiments on real-world datasets demonstrate the effectiveness of our method compared to other competing methods.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
SCOPUS_ID:85088748221
|
A Hybrid Conversational Agent with Semantic Association of Autobiographic Memories for the Elderly
|
Socially Assistive Robots are becoming essential in the field of elderly care, as they can support caregivers in their tasks, for instance, by providing senior users with emotional and psychological support through verbal communication. In this paper, we present the results of a project in which we developed an interactive dialogue system so that a robot could engage elderly users in conversations about their personal life stories. A task that seems almost mundane for the average person is in fact extremely challenging for a machine to achieve. Through the development of a comprehensive platform with a variety of modules, the system is able to extract essential keywords from a user utterance and classify them according to sentence context and word meaning. These keywords are then indexed in a user-specific knowledge base, where semantic associations between items are made, relating them, for instance, by time or place. These items are used by the robot to generate responses to the user's speech by leveraging a hybrid template/data-driven mechanism. As the user interacts with the system, it learns more details, which further enrich the generated sentences. The system was evaluated in a human-in-the-loop experiment, where the results showed the ability of the system to understand human speech, memorize personal information about each user, and generate coherent responses in dialogic interactions. These results highlight the potential of a robot not only to provide companionship, but also to build a social relationship with its user.
|
[
"Natural Language Interfaces",
"Multimodality",
"Speech & Audio in NLP",
"Dialogue Systems & Conversational Agents"
] |
[
11,
74,
70,
38
] |
http://arxiv.org/abs/1702.02390v1
|
A Hybrid Convolutional Variational Autoencoder for Text Generation
|
In this paper we explore the effect of architectural choices on learning a Variational Autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties such as faster run time and convergence, ability to better handle long sequences and, more importantly, it helps to avoid some of the major difficulties posed by training VAE models on textual data.
|
[
"Language Models",
"Semantic Text Processing",
"Text Generation"
] |
[
52,
72,
47
] |
SCOPUS_ID:85122790083
|
A Hybrid Data Analytics Framework with Sentiment Convergence and Multi-Feature Fusion for Stock Trend Prediction
|
Stock market analysis plays an indispensable role in gaining knowledge about the stock market, developing trading strategies, and determining the intrinsic value of stocks. Nevertheless, predicting stock trends remains extremely difficult due to a variety of influencing factors, volatile market news, and sentiments. In this study, we present a hybrid data analytics framework that integrates convolutional neural networks and bidirectional long short-term memory (CNN-BiLSTM) to evaluate the impact of convergence of news events and sentiment trends with quantitative financial data on predicting stock trends. We evaluated the proposed framework using two case studies from the real estate and communications sectors based on data collected from the Dubai Financial Market (DFM) between 1 January 2020 and 1 December 2021. The results show that combining news events and sentiment trends with quantitative financial data improves the accuracy of predicting stock trends. Compared to benchmarked machine learning models, CNN-BiLSTM offers an improvement of 11.6% in real estate and 25.6% in communications when news events and sentiment trends are combined. This study provides several theoretical and practical implications for further research on contextual factors that influence the prediction and analysis of stock trends.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
SCOPUS_ID:85096448964
|
A Hybrid Deep Learning Approach for Stock Price Prediction
|
Prediction of stock prices has been the primary objective of investors, and any future decision taken by an investor directly depends on the stock prices associated with a company. This work presents a hybrid approach to the prediction of intra-day stock prices by considering both time-series and sentiment analysis. It uses the long short-term memory (LSTM) architecture for the time-series analysis of stock prices and the Valence Aware Dictionary and sEntiment Reasoner (VADER) for sentiment analysis. LSTM is a modified recurrent neural network (RNN) architecture; it is efficient at extracting patterns over sequential time-series data, where the data span long sequences, and it also overcomes the gradient vanishing problem of RNNs. VADER is a lexicon- and rule-based sentiment analysis tool attuned to sentiments expressed in social media and news articles. The results of both techniques are combined to forecast intra-day stock movement, and hence the model is named LSTM-VDR. The model is the first of its kind, a combination of LSTM and VADER to predict stock prices. The dataset contains closing prices of the stock and recent news articles combined from various online sources. This approach, when applied to the stock prices of Bombay Stock Exchange (BSE) listed companies, has shown improvements over prior studies.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
SCOPUS_ID:85056835262
|
A Hybrid Deep Learning Architecture for Paraphrase Identification
|
The binary classification task of Paraphrase Identification (PI) is vital in the field of Natural Language Processing. The objective of this study is to propose an optimized Deep Learning architecture, in combination with a word embedding technique, for the classification of sentence pairs as paraphrases or not. For the Paraphrase Identification task, this paper proposes a hybrid Deep Learning architecture that aims to capture as many features as possible from the input natural language sentences. The aim is to accurately classify whether a pair of sentences are paraphrases of each other. The importance of using an optimized word-embedding approach in combination with the proposed hybrid Deep Learning architecture is explained. This study also deals with the lack of training data required to generate a robust Deep Learning model. The intention is to harness the memorizing power of a Long Short Term Memory (LSTM) neural network and the feature-extracting capability of a Convolutional Neural Network (CNN), in combination with the optimized word-embedding approach, to capture wide sentential contexts and word order. The proposed model is compared with existing systems, and it surpasses all of them in performance in terms of accuracy.
|
[
"Paraphrasing",
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Text Generation",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
32,
72,
36,
12,
47,
24,
3
] |
SCOPUS_ID:85062502914
|
A Hybrid Deep Learning Framework for Bacterial Named Entity Recognition
|
Microorganisms have been confirmed to be essential to the fundamental function of various ecosystems, and the interactions among microorganisms affect human health and environmental ecosystems. A large number of microbial interactions with experimental confidence have been reported in the biomedical literature, and extracting and collating these interactions into a database will create a valuable data resource. Named Entity Recognition (NER) is the premise of and key to interaction extraction from the literature. In particular, bacterial named entity recognition is still a challenging task due to the specialized nature of bacterial names. In this paper, we propose a bacterial named entity recognition system based on a hybrid deep learning framework (HDL-CRF), which integrates two deep learning models, the bidirectional long short-term memory network and the convolutional neural network, as well as the conditional random field approach, for automatically extracting features. Finally, we show that this model outperforms previous methods in performance.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85124532254
|
A Hybrid Deep Learning Model for Emotion Detection in Emotion-sensitive Robo-Advisors
|
The rise of human-like conversational agents such as chatbots and robo-advisors has motivated researchers to exploit the property of 'anthropomorphizing' in conversational agents, which can facilitate an array of real-world applications. Since humans can naturally detect respondents' emotions embedded in their responses and constantly adapt to varying conversational contexts, it is desirable to equip chatbots or robo-advisors with corresponding emotion-sensitive conversation generation capabilities to better mimic human-like intelligence (i.e., anthropomorphizing). However, the design and development of emotion-sensitive conversational agents is still in its infancy. To fill the aforementioned research gap, we propose a deep learning-based emotion detection method for the development of emotion-sensitive conversational agents such as robo-advisors. In particular, we propose a new hybrid deep learning approach that combines the merits of pre-trained deep learning models such as BERT and Bi-LSTM by using a residual connection method, which supports a layer-wise connection approach to stabilize the fine-tuning process and bootstrap the overall emotion classification performance. Based on two well-known benchmark emotion corpora, namely IEMOCAP and FRIENDS, our rigorous experiments reveal that the proposed hybrid deep learning model with the residual connection method achieves promising emotion classification performance, which lays a solid foundation for the development of emotion-sensitive conversational agents such as chatbots and robo-advisors.
|
[
"Information Extraction & Text Mining",
"Text Classification",
"Natural Language Interfaces",
"Sentiment Analysis",
"Emotion Analysis",
"Information Retrieval",
"Dialogue Systems & Conversational Agents"
] |
[
3,
36,
11,
78,
61,
24,
38
] |
SCOPUS_ID:85099131245
|
A Hybrid Deep Learning Model for Long-Term Sentiment Classification
|
With the omnipresence of user feedback in social media, mining relevant opinions and extracting the underlying sentiment to analyze synthetic emotion towards a specific product, person, topic, or event has become a vast domain of research in recent times. A thorough survey of early unimodal and multimodal sentiment classification approaches reveals that researchers mostly relied on either corpus-based techniques or those based on machine learning algorithms. Lately, deep learning models have progressed profoundly in the area of image processing, and this success has been efficiently directed towards enhancements in sentiment categorization. A hybrid deep learning model consisting of a Convolutional Neural Network (CNN) and a stacked bidirectional Long Short Term Memory (BiLSTM) network over pre-trained word vectors is proposed in this paper to achieve long-term sentiment analysis. This work experiments with various hyperparameters and optimization techniques to rid the model of overfitting and to achieve optimal performance. The model has been validated on two standard sentiment datasets, the Stanford Large Movie Review dataset (IMDB) and the Stanford Sentiment Treebank2 dataset (SST2). It attains better accuracy than models such as CNN, LSTM, and a CNN-LSTM ensemble, and also produces a high F-measure.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
78,
24,
3
] |
SCOPUS_ID:85065765847
|
A Hybrid Deep Learning Model for Text Classification
|
Deep learning has shown its effectiveness in many tasks such as text classification and computer vision. Most text classification work concentrates on using convolutional neural networks and recurrent neural networks to obtain text feature representations, and an attention mechanism is often adopted to improve classification accuracy. Targeting task 6 of NLPCC2018, a hybrid deep learning model combining BiGRU, CNN, and an attention mechanism was proposed to improve text classification. The experimental results show that the F1-score of the proposed model exceeds that of the task's baseline model. Moreover, this hybrid deep learning model achieves higher precision, recall, and F1-score compared with other popular deep learning models, with an improvement of 5.4% in F1-score over the single CNN model.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85062775577
|
A Hybrid Deep Learning Model to Predict Business Closure from Reviews and User Attributes Using Sentiment Aligned Topic Model
|
Business closure is a very good indicator of the success or failure of a business. It can help investors and banks decide whether to invest in or lend to a particular business for future growth and benefits. Traditional machine learning techniques require extensive manual feature engineering and still do not perform satisfactorily, due to a significant class imbalance problem and little difference in the attributes of open and closed businesses. We have used historical data besides taking care of the class imbalance problem, and transfer learning has been used to tackle the issue of having small categorical datasets. A hybrid deep learning model is proposed to predict whether a business will be shut down within a specific period of time. A Sentiment Aligned Topic Model (SATM) is used to extract aspect-wise sentiment scores from user reviews. Our results show a marked improvement over traditional machine learning techniques, and they also show how the aspect-wise sentiment scores corresponding to each business, computed using SATM, help to give better results.
|
[
"Topic Modeling",
"Information Extraction & Text Mining",
"Sentiment Analysis"
] |
[
9,
3,
78
] |
SCOPUS_ID:85141617708
|
A Hybrid Deep Learning Technique for Sentiment Analysis in E-Learning Platform with Natural Language Processing
|
E-learning-based teaching methodologies are on the rise nowadays, and online classes have become highly popular, providing a virtual platform for online education from anywhere in the world. Widely distributed social networks generate different opinions on various perspectives of life through messages on the web. This textual information is a rich data source for performing sentiment analysis and opinion mining on what is expressed in the text. The text conveys the feelings of students through statements of agreement or disagreement in comment sections, revealing their negative or positive feelings towards the learning. The major goal of this paper is to design a new sentiment analysis model for the e-learning platform with the help of natural language processing techniques. Initially, standard text data with user reviews of e-learning platforms is gathered from benchmark resources. The gathered data is forwarded to a pre-processing technique, where unnecessary content is removed to maximize the performance of sentiment analysis. Further, word-to-vector conversion is carried out using the GloVe embedding scheme to obtain relevant data for sentiment analysis. Sentiment classification is then carried out by a Convolutional Neural Network (CNN) with a Gated Recurrent Unit (GRU). Finally, the sentiments are analyzed through hybrid deep learning in the field of e-learning. The investigation reveals promising results in sentiment analysis tasks.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85084307218
|
A Hybrid Deep Neural Network for Urdu Text Recognition in Natural Images
|
In this work, we present a benchmark and a hybrid deep neural network for Urdu text recognition in natural scene images. Recognizing text in natural scene images is a challenging task that has attracted the attention of the computer vision and pattern recognition communities. In recent years, scene text recognition has been widely studied, and state-of-the-art results are achieved using deep neural network models. However, most research has been performed on English text, and less attention has been given to other languages. In this paper, we investigate the problem of Urdu text recognition in natural scene images. Urdu is a cursive script written from right to left, in which two or more characters are joined to form a word. Recognizing cursive text in natural images is considered an open problem due to variations in its representation. A hybrid deep neural network architecture with skip connections, which combines convolutional and recurrent neural networks, is proposed to recognize Urdu scene text. We introduce a new dataset of 11,500 manually cropped Urdu word images from natural scenes and report baseline results. The network is trained on whole word images, avoiding traditional character-based classification. A data augmentation technique with contrast stretching and histogram equalization is used to further enlarge the dataset. The experimental results on original and augmented word images show state-of-the-art performance of the network.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85085738383
|
A Hybrid Dictionary Model for Ethical Analysis
|
With the adoption of social media and web services, people have become more likely to share opinions on the web about their daily activities. Thus, social networks end up being seen as an opportunity to bend the rules and do things that are inadmissible in society. This work aimed to design a dictionary template for sentiment analysis applied to unethical behaviors. It also analyses how prepared the machine is for human dialogue, and proposes a hybrid approach combining existing work on dictionary creation and standard conversation recognition on the Internet. Additionally, it analyzes current policies for removing inappropriate content and proposes some steps to be considered to reduce the spread of inappropriate content on the web.
|
[
"Responsible & Trustworthy NLP",
"Ethical NLP",
"Sentiment Analysis"
] |
[
4,
17,
78
] |
SCOPUS_ID:85074239572
|
A Hybrid Engine for Clinical Information Extraction from Radiology Reports
|
Clinical researchers and practitioners require data extracted from CT scan reports, but most reports are in an unstructured format that is not ready for analysis. Furthermore, a lack of annotated data makes it more difficult to apply natural language processing techniques to convert unstructured data into structured data. This study was therefore conducted to apply an automated engine employing topic modeling combined with a lexicon- and syntactic-rule-based approach to extract clinical information from CT scan reports. The prototype shows promising results for constructing clinical datasets for further clinical research.
|
[
"Multimodality",
"Structured Data in NLP",
"Information Extraction & Text Mining"
] |
[
74,
50,
3
] |
SCOPUS_ID:85119198400
|
A Hybrid Ensemble Word Embedding based Classification Model for Multi-document Summarization Process on Large Multi-domain Document Sets
|
Contextual text feature extraction and classification play a vital role in the multi-document summarization process. Natural language processing (NLP) is one of the essential text mining tools used to preprocess and analyze large document sets. Most conventional single-document feature extraction measures are independent of the contextual relationships among the different contextual feature sets used in the document categorization process. Also, conventional word embedding models such as TF-IDF, ITF-IDF and GloVe are difficult to integrate into the multi-domain feature extraction and classification process due to a high misclassification rate and large candidate sets. To address these concerns, an advanced multi-document summarization framework was developed and tested on a number of large training datasets. In this work, a hybrid multi-domain GloVe word embedding model and a multi-document clustering and classification model were implemented to improve the multi-document summarization process for multi-domain document sets. Experimental results prove that the proposed multi-document summarization approach has improved efficiency in terms of accuracy, precision, recall, F-score and run time (ms) compared with existing models.
|
[
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning",
"Summarization",
"Text Generation",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
72,
24,
12,
30,
47,
36,
3
] |
http://arxiv.org/abs/cmp-lg/9802002v1
|
A Hybrid Environment for Syntax-Semantic Tagging
|
This thesis describes the application of the relaxation labelling algorithm to NLP disambiguation. Language is modelled through context constraints inspired by Constraint Grammars. The constraints enable the use of a real value stating "compatibility". The technique is applied to POS tagging, shallow parsing and word sense disambiguation. Experiments and results are reported. The proposed approach enables the use of multi-feature constraint models, the simultaneous resolution of several NL disambiguation tasks, and the collaboration of linguistic and statistical models.
|
[
"Tagging",
"Syntactic Text Processing"
] |
[
63,
15
] |
SCOPUS_ID:84964746038
|
A Hybrid Feature Selection Method for Vietnamese Text Classification
|
Text classification is a very important task due to the huge number of electronic documents. One of the main challenges for text classification is the high dimensionality of feature spaces. There have been extensive studies on feature selection for English text classification; however, not many works have studied Vietnamese text classification. This paper evaluates the performance of three widely used feature selection methods [2][6][10]: Chi-square (CHI), Information Gain (IG), and Document Frequency (DF). Based on this evaluation, we propose a hybrid feature selection method, called SIGCHI, which combines the Chi-square and Information Gain feature selection methods. Our experimental results show that the proposed method performs significantly better than the other methods: the accuracy of the SIGCHI method is up to 15.03% higher than that of CHI, up to 18.65% higher than that of IG, and up to 27.72% higher than that of DF.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
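The SIGCHI record above fuses Chi-square and Information Gain scores for feature selection. A minimal scikit-learn sketch of how such a fusion can be wired up follows; the exact SIGCHI weighting is not given in the abstract, so the 50/50 average, the dataset, and the cutoff of 500 terms are assumptions for illustration:

```python
# Illustrative sketch (not the paper's exact SIGCHI formula): combine
# chi-square and information-gain scores to rank bag-of-words features.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2, mutual_info_classif

docs = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
X = CountVectorizer(max_features=2000, stop_words="english").fit_transform(docs.data)
y = docs.target

chi_scores, _ = chi2(X, y)                                      # chi-square per term
ig_scores = mutual_info_classif(X, y, discrete_features=True)   # information gain

def norm(s):
    # Rescale a score vector to [0, 1] so the two criteria are comparable.
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Average the normalized scores (one plausible fusion; the published
# method may weight the two criteria differently).
hybrid = 0.5 * norm(chi_scores) + 0.5 * norm(ig_scores)
top_k = np.argsort(hybrid)[::-1][:500]                          # keep the 500 best terms
X_reduced = X[:, top_k]
print(X_reduced.shape)
```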
SCOPUS_ID:85141877464
|
A Hybrid Framework Using PCA, EMD and LSTM Methods for Stock Market Price Prediction with Sentiment Analysis
|
The aim of investors is to obtain the maximum return when buying or selling stocks in the market. However, stock price shows non-linearity and non-stationarity and is difficult to accurately predict. To address this issue, a hybrid prediction model was formulated combining principal component analysis (PCA), empirical mode decomposition (EMD) and long short-term memory (LSTM) called PCA-EMD-LSTM to predict one step ahead of the closing price of the stock market in Thailand. In this research, news sentiment analysis was also applied to improve the performance of the proposed framework, based on financial and economic news using FinBERT. Experiments with stock market price in Thailand collected from 2018–2022 were examined and various statistical indicators were used as evaluation criteria. The obtained results showed that the proposed framework yielded the best performance compared to baseline methods for predicting stock market price. In addition, an adoption of news sentiment analysis can help to enhance performance of the original LSTM model.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
SCOPUS_ID:85066894660
|
A Hybrid Framework for Detecting Non-basic Emotions in Text
|
The task of Emotion Detection from Text has received substantial attention in recent years. Although most of the work in this field has been conducted considering only the basic set of six emotions, there are a number of applications wherein the importance of non-basic emotions (like interest, engagement, confusion, frustration, disappointment, boredom, hopefulness, satisfaction) is paramount. A number of applications like student feedback analysis, online forum analysis and product manual evaluation require the identification of non-basic emotions to suggest improvements and enhancements. In this study, we propose a hybrid framework for the detection and classification of such non-basic emotions from text. Our framework principally uses a Support Vector Machine to detect non-basic emotions. The emotions that go undetected by supervised learning are then detected using the lexical and semantic information from a word2vec predictive model. The results obtained using this framework are quite encouraging and comparable to state-of-the-art techniques.
|
[
"Text Classification",
"Sentiment Analysis",
"Emotion Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
78,
61,
24,
3
] |
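The two-stage design described above (SVM first, word2vec fallback for low-confidence cases) can be sketched roughly as follows; the toy texts, emotion seed words and confidence margin are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch: an SVM handles confident cases; a word2vec nearest-seed-word
# fallback covers the rest. Seed words, margin and corpus are placeholders.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_texts = ["the lecture was engaging", "i am so confused by this manual"]
train_labels = ["engagement", "confusion"]
emotions = {"engagement": ["engaging", "interest"],
            "confusion": ["confused", "unclear"]}

vec = TfidfVectorizer()
svm = LinearSVC().fit(vec.fit_transform(train_texts), train_labels)

# Train a tiny word2vec model on the same corpus (a real system would use
# a much larger corpus or pre-trained vectors).
w2v = Word2Vec([t.split() for t in train_texts], vector_size=50, min_count=1, epochs=50)

def classify(text, margin=0.2):
    scores = svm.decision_function(vec.transform([text]))
    if np.max(np.abs(scores)) >= margin:            # SVM is confident enough
        return svm.predict(vec.transform([text]))[0]
    # Fallback: average word vectors and pick the closest emotion seed set.
    words = [w for w in text.split() if w in w2v.wv]
    if not words:
        return "unknown"
    doc = np.mean([w2v.wv[w] for w in words], axis=0)
    def seed_sim(seeds):
        vecs = [w2v.wv[s] for s in seeds if s in w2v.wv]
        if not vecs:
            return -1.0
        return max(float(doc @ v) / (np.linalg.norm(doc) * np.linalg.norm(v) + 1e-12)
                   for v in vecs)
    return max(emotions, key=lambda e: seed_sim(emotions[e]))

print(classify("such an unclear chapter"))
```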
SCOPUS_ID:85061725353
|
A Hybrid Framework for Sentiment Analysis Using Genetic Algorithm Based Feature Reduction
|
Due to the rapid development of Internet technologies and social media, sentiment analysis has become an important opinion mining technique. Recent research work has described the effectiveness of different sentiment classification techniques ranging from simple rule-based and lexicon-based approaches to more complex machine learning algorithms. While lexicon-based approaches have suffered from the lack of dictionaries and labeled data, machine learning approaches have fallen short in terms of accuracy. This paper proposes an integrated framework which bridges the gap between lexicon-based and machine learning approaches to achieve better accuracy and scalability. To solve the scalability issue that arises as the feature-set grows, a novel genetic algorithm (GA)-based feature reduction technique is proposed. By using this hybrid approach, we are able to reduce the feature-set size by up to 42% without compromising the accuracy. The comparison of our feature reduction technique with more widely used principal component analysis (PCA) and latent semantic analysis (LSA) based feature reduction techniques have shown up to 15.4% increased accuracy over PCA and up to 40.2% increased accuracy over LSA. Furthermore, we also evaluate our sentiment analysis framework on other metrics including precision, recall, F-measure, and feature size. In order to demonstrate the efficacy of GA-based designs, we also propose a novel cross-disciplinary area of geopolitics as a case study application for our sentiment analysis framework. The experiment results have shown to accurately measure public sentiments and views regarding various topics such as terrorism, global conflicts, and social issues. We envisage the applicability of our proposed work in various areas including security and surveillance, law-and-order, and public administration.
|
[
"Sentiment Analysis"
] |
[
78
] |
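The GA-based feature reduction idea in the record above can be illustrated with a minimal genetic algorithm over binary feature masks. Population size, mutation rate and the accuracy-vs-size trade-off below are assumptions for the sketch, not the paper's configuration:

```python
# Minimal GA feature-selection sketch: binary chromosomes mask features;
# fitness trades cross-validated accuracy against feature count.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
n_feat, pop_size, gens = X.shape[1], 20, 15

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.1 * mask.sum() / n_feat        # penalize large feature sets

pop = rng.integers(0, 2, size=(pop_size, n_feat))
for _ in range(gens):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]       # elitist selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(n_feat) < 0.02] ^= 1                # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("kept", int(best.sum()), "of", n_feat, "features")
```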
SCOPUS_ID:85076995670
|
A Hybrid Framework of Emotion-Aware Seq2Seq Model for Emotional Conversation Generation
|
This paper describes RUCIR’s system in NTCIR-14 Short Text Conversation (STC) Chinese Emotional Conversation Generation (CECG) subtask. In our system, we use the Attention-based Sequence-to-Sequence (Seq2Seq) method as our basic structure to generate emotional responses. This paper introduces (1) an emotion-aware Seq2Seq model and (2) several features to boost the performance of emotion consistency. Official results show that our model performs the best in terms of the overall results across the five given emotion categories.
|
[
"Language Models",
"Semantic Text Processing",
"Dialogue Response Generation",
"Natural Language Interfaces",
"Text Generation",
"Dialogue Systems & Conversational Agents"
] |
[
52,
72,
14,
11,
47,
38
] |
SCOPUS_ID:85095115899
|
A Hybrid Fuzzy System via Topic Model for Recommending Highlight Topics of CQA in Developer Communities
|
Question-Answering (QA) websites provide a quickly growing source of useful information in numerous areas. While these platforms present novel opportunities for online users to supply solutions, they also pose numerous challenges with the ever-growing size of the QA community. QA sites provide platforms for users to cooperate in the form of asking questions or giving answers. Stack Overflow is a massive source of information for both industry and academic practitioners, and its analysis can supply useful insights. Topic modeling of Stack Overflow is very beneficial for pattern discovery and behavior analysis in programming knowledge. In this paper, we propose a framework based on the Latent Dirichlet Allocation (LDA) algorithm and fuzzy rules for question topic mining and recommending highlight latent topics in a community question-answering (CQA) forum of a developer community. We consider a real dataset and use 170,091 programmer questions from the R language forum on the Stack Overflow website. Our results show that LDA topic models via novel fuzzy rules can play an effective role in extracting meaningful concepts and semantic mining in question-answering forums in developer communities.
|
[
"Programming Languages in NLP",
"Topic Modeling",
"Question Answering",
"Multimodality",
"Natural Language Interfaces",
"Information Extraction & Text Mining"
] |
[
55,
9,
27,
74,
11,
3
] |
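The topic-mining step of the framework above, minus the fuzzy recommendation rules, reduces to fitting LDA over question texts. A minimal scikit-learn sketch with toy R-forum-style questions (placeholders, not the 170,091-question dataset):

```python
# Sketch of the LDA step: fit a topic model on question texts and inspect
# the top words per latent topic. Topic count and texts are assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

questions = [
    "how to merge two data frames in R",
    "ggplot2 axis labels overlap in my plot",
    "error installing package from CRAN mirror",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(questions)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:5]]   # 5 strongest words
    print(f"topic {k}: {top}")
```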
SCOPUS_ID:85149939800
|
A Hybrid Generative/Discriminative Model for Rapid Prototyping of Domain-Specific Named Entity Recognition
|
We propose PYHSCRF, a novel tagger for domain-specific named entity recognition that only requires a few seed terms, in addition to unannotated corpora, and thus permits the iterative and incremental design of named entity (NE) classes for new domains. The proposed model is a hybrid of a generative model named PYHSMM and a semi-Markov CRF-based discriminative model, which play complementary roles in generalizing seed terms and in distinguishing between NE chunks and non-NE words. It also allows a smooth transition to full-scale annotation because the discriminative model makes effective use of annotated data when available. Experiments involving two languages and three domains demonstrate that the proposed method outperforms baselines.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85093102791
|
A Hybrid Imbalanced Data Learning Framework to Tackle Opinion Imbalance in Movie Reviews
|
Opinion mining is an important buzzword in recent times for research and industry data science applications. Many concerns regarding opinion imbalance, particularly in movie reviews, were analyzed and handled for efficient recommendations. Opinion imbalance in terms of binary classes, which often compromises classifier prediction results, has scarcely been studied. In this work, a Hybrid Imbalanced Data Learning Framework (HIDLF) is proposed to handle the opinion imbalance in the movie review dataset and then classify the movie reviews through the proposed HIDLT-SVM algorithm, which is part of HIDLF, for effective movie review classification. Experimental comparisons of the proposed work are done on movie reviews with Logistic Regression, CART, and REP Tree. Different evaluation metrics are used for capable classification of opinions from the movie reviews. The results suggest that the proposed HIDLT-SVM framework performs better than the competing algorithms.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85041340662
|
A Hybrid Knowledge Mining Approach to Develop a System Framework for Odia Language Text Processing
|
Words evolved as a means of expressing people's inner feelings. Any language, spoken or written by human beings, uses many words in its sentences to express feelings and emotions. To supplement our own mother tongue, we borrow such words from other languages. Odisha, a state in the eastern part of India, has more than 33 million people speaking and writing this language. The culture and knowledge stored in many forms through Odia language text form a rich heritage. Odia is the mother language of the majority of the people of Odisha, at present as well as in the past. Mining information from various text forms such as reviews, news, and blogs is a natural language processing task that extracts opinions and classifies them on the basis of their polarity as positive, negative or neutral. In the last few years, an enormous increase in Odia language content has been seen on the Web. This proposal gives an overview of the work that has been done on the Odia language. The present work is a beginning toward the higher goal of mining opinions or sentiments of people. The phases of Natural Language Processing (NLP) included here are the lexical, morphological and syntactic-semantic stages, used to generate the root word, part of speech, suffix and synonym of words in Odia text.
|
[
"Opinion Mining",
"Sentiment Analysis"
] |
[
49,
78
] |
https://aclanthology.org//2022.case-1.4/
|
A Hybrid Knowledge and Transformer-Based Model for Event Detection with Automatic Self-Attention Threshold, Layer and Head Selection
|
Event and argument role detection are frequently conceived as separate tasks. In this work we conceive both processes as one task in a hybrid event detection approach. Its main component is based on automatic keyword extraction (AKE) using the self-attention mechanism of a BERT transformer model. As a bottleneck for AKE is defining the threshold of the attention values, we propose a novel method for automatic self-attention threshold selection. It is fueled by core event information, or simply the verb and its arguments as the backbone of an event. These are outputted by a knowledge-based syntactic parser. In a second step the event core is enriched with other semantically salient words provided by the transformer model. Furthermore, we propose an automatic self-attention layer and head selection mechanism, by analyzing which self-attention cells in the BERT transformer contribute most to the hybrid event detection and which linguistic tasks they represent. This approach was integrated into a pipeline event extraction approach and outperforms three state-of-the-art multi-task event extraction methods.
|
[
"Event Extraction",
"Language Models",
"Semantic Text Processing",
"Information Extraction & Text Mining"
] |
[
31,
52,
72,
3
] |
SCOPUS_ID:85135023611
|
A Hybrid Learning Approach for Text Classification Using Natural Language Processing
|
Text classification and categorization is a hot topic that involves assigning tags or categories to a text based on its content. It is one of the important tasks of automatic natural language processing (NLP) in many applications such as topic tagging, sentiment analysis, intent detection, spam filtering, and email routing. Machine learning text classification can help businesses automatically analyze and structure their textual documents promptly and inexpensively, automate processes and improve data-driven decisions. In this article, we propose a new algorithm to classify textual documents using a hybrid approach that combines a set of given algorithms, using the best one for each class. These documents can be classified into a set of possible class labels given a priori. Two machine learning algorithms are used to evaluate our proposed approach: Naive Bayesian (NB) and Logistic Regression (LR). The obtained results showed that the proposed hybrid algorithm is more efficient than the NB and LR algorithms, with an accuracy of 91.86%.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
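One plausible reading of "using the best algorithm for each class" is to let per-class validation F1 arbitrate between the two models whenever they disagree. The fusion rule below is an assumption for illustration, not necessarily the paper's exact scheme:

```python
# Hedged sketch: per-class F1 on a validation split decides which model's
# prediction wins on disagreement between Naive Bayes and Logistic Regression.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

data = fetch_20newsgroups(subset="train",
                          categories=["rec.autos", "sci.space", "talk.politics.misc"])
X = TfidfVectorizer(stop_words="english").fit_transform(data.data)
X_tr, X_val, y_tr, y_val = train_test_split(X, data.target, random_state=0)

nb = MultinomialNB().fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=500).fit(X_tr, y_tr)

# Per-class F1 of each model on the validation split (index = class label).
f1 = {m: f1_score(y_val, m.predict(X_val), average=None) for m in (nb, lr)}

def hybrid_predict(X_new):
    p_nb, p_lr = nb.predict(X_new), lr.predict(X_new)
    out = p_nb.copy()
    for i, (a, b) in enumerate(zip(p_nb, p_lr)):
        if a != b and f1[lr][b] > f1[nb][a]:    # trust the stronger class expert
            out[i] = b
    return out

print("hybrid accuracy:", (hybrid_predict(X_val) == y_val).mean())
```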
https://aclanthology.org//W18-3011/
|
A Hybrid Learning Scheme for Chinese Word Embedding
|
To improve word embedding, subword information has been widely employed in state-of-the-art methods. These methods can be classified to either compositional or predictive models. In this paper, we propose a hybrid learning scheme, which integrates compositional and predictive model for word embedding. Such a scheme can take advantage of both models, thus effectively learning word embedding. The proposed scheme has been applied to learn word representation on Chinese. Our results show that the proposed scheme can significantly improve the performance of word embedding in terms of analogical reasoning and is robust to the size of training data.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85084639022
|
A Hybrid Learning approach for Sentiment Classification in Telugu Language
|
Sentiment or opinion analysis is a subset of Natural Language Processing which is employed in a wide range of business verticals to disentangle, analyze, distinguish and comprehend the general opinion of user reviews, comments, feedback, news, and so on. Humans generate close to 2.5 quintillion bytes of data each day on the internet, and business leaders are tasked to derive hidden patterns and meaningful insights from this data to understand human behavior and take shrewd decisions. Since the last decade, enormous amounts of text data have been generated on the internet for Indian languages. Researchers have shown inquisitiveness in deriving and analyzing this data to extract relevant information. To the best of our knowledge, there has been no requisite amount of research in classifying sentiment in the Telugu language because of inadequate language resources and its status as a regional language. This research paper illustrates a methodical approach which leverages a lexicon-based approach and machine learning in the field of sentiment analysis to classify opinions in the Telugu language. Firstly, by employing the lexicon-based approach (Telugu SentiWordNet), we identified the subjective sentences in the Telugu corpus. Secondly, by utilizing machine learning algorithms (SVM, Naïve Bayes and Random Forest), we categorized the sentiment in the corpus. Our proposed methodology achieved a highest accuracy of 85%.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
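The lexicon step above can be approximated with NLTK's English SentiWordNet as a stand-in; the paper uses Telugu SentiWordNet, which is not shipped with NLTK, and the subjectivity threshold below is an assumed placeholder:

```python
# Sketch of the lexicon-based subjectivity step using English SentiWordNet
# as a stand-in for the Telugu resource used in the paper.
import nltk
nltk.download("sentiwordnet", quiet=True)
nltk.download("wordnet", quiet=True)
from nltk.corpus import sentiwordnet as swn

def word_polarity(word):
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    # Average positive-minus-negative score over all senses of the word.
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

sentence = "the movie was wonderful but the ending felt weak"
score = sum(word_polarity(w) for w in sentence.split())
# Threshold of 0.1 is an arbitrary choice for this sketch.
print("subjective" if abs(score) > 0.1 else "neutral", round(score, 3))
```

Sentences flagged as subjective would then be passed to the supervised classifiers (SVM, Naïve Bayes, Random Forest) in the second stage.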
SCOPUS_ID:85131769244
|
A Hybrid Linguistic and Knowledge-Based Analysis Approach for Fake News Detection on Social Media
|
The rapid development of different social media and content-sharing platforms has been largely exploited to spread misinformation and fake news that make people believe harmful stories, which allows influencing public opinion and could cause panic and chaos among the population. Thus, fake news detection has become an important research topic, aiming to flag specific content as fake or legitimate. Fake news detection solutions can be divided into three main categories: content-based, social context-based, and knowledge-based approaches. In this paper, we propose a novel hybrid fake news detection system that combines linguistic and knowledge-based approaches and inherits their advantages, by employing two different sets of features: (1) linguistic features (i.e., title, number of words, reading ease, lexical diversity, and sentiment), and (2) a novel set of knowledge-based features, called fact-verification features, that comprise three types of information, namely (i) reputation of the website where the news is published, (ii) coverage, i.e., number of sources that published the news, and (iii) fact-check, i.e., opinion of well-known fact-checking websites about the news, i.e., true or false. The proposed system only employs eight features, which is fewer than most state-of-the-art approaches. Also, the evaluation results on a fake news dataset show that the proposed system employing both types of features can reach an accuracy of 94.4%, which is better than that obtained from separately employing linguistic features (i.e., accuracy = 89.4%) and fact-verification features (i.e., accuracy = 81.2%).
|
[
"Reasoning",
"Fact & Claim Verification",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
8,
46,
17,
4
] |
SCOPUS_ID:85125547433
|
A Hybrid Machine Learning Approach for Sentiment Analysis of Beauty Products Reviews
|
Nowadays, social media platforms have become a mirror that reflects opinions and feelings about any specific product or event. These product reviews are capable of enhancing communication among entrepreneurs and their customers. These reviews need to be extracted and analyzed to predict the sentiment polarity, i.e., whether the review is positive or negative. This paper aims to predict the human sentiments expressed in beauty product reviews extracted from Amazon and improve the classification accuracy. The three phases instigated in our work are data pre-processing, feature extraction using the Bag-of-Words (BoW) method, and sentiment classification using Machine Learning (ML) techniques. A Global Optimization-based Neural Network (GONN) is proposed for the sentiment classification. Then an empirical study is conducted to analyze the performance of the proposed GONN and compare it with other machine learning algorithms, such as Random Forest (RF), Naive Bayes (NB), and Support Vector Machine (SVM). We dig further to cross-validate these techniques with ten folds to evaluate the most accurate classifier. These models have also been investigated on the Precision-Recall (PR) curve to assess and test the best technique. Experimental results demonstrate that the proposed method is the most appropriate for predicting classification accuracy on our defined dataset. Specifically, we show that our work trains the textual sentiment classifiers better, thereby enhancing the accuracy of sentiment prediction.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85114554678
|
A Hybrid Machine Learning Approach for Sentiment Analysis of Partially Occluded Faces
|
With millions of images and videos uploaded on social media every day, facial sentiment analysis has gained significant attention as means of gaining large scale insights into people's emotions and sentiments. While several models have been proposed for sentiment and emotion analysis of complete, camera-facing pictures, the analysis of images appearing in natural settings and crowded scenes poses more challenges. In such settings, images typically contain a mix of complete and partially occluded faces (i.e. obstructed faces) presented with different angles, resolutions and distances from the camera. In this paper, we propose a hybrid machine learning model combining convolutional neural networks (CNNs) and support vector machines (SVMs) to achieve accurate facial sentiment and emotion analysis of incomplete and partially occluded facial images. The proposed model was successfully tested using 4, 690 images containing 25, 400 faces, collected from a large-scale public event. The model was able to correctly classify the test dataset containing faces with different angles, camera distances, occlusion areas, and image resolutions. The results show a classification accuracy of 89.9% for facial sentiment analysis, and an accuracy of 87.4% when distinguishing between seven emotions in partially occluded faces. This makes our model suitable for real-life practical applications.
|
[
"Visual Data in NLP",
"Emotion Analysis",
"Multimodality",
"Sentiment Analysis"
] |
[
20,
61,
74,
78
] |
SCOPUS_ID:85081049642
|
A Hybrid Method Based on Particle Swarm Optimization for Restaurant Culinary Food Reviews
|
A review or opinion on culinary food restaurants given by consumers produces information that serves as decision support for culinary food seekers looking for the best place to buy these foods. From the obtained review text data, a text-mining-based sentiment analysis model is built using the best classification algorithm. A problem occurs while selecting attributes during data preprocessing: too many attributes are generated, and the best attributes with the best weight values need to be selected. In this paper, a hybrid model is proposed using Particle Swarm Optimization and Information Gain (PSO-IG). The proposed method is applied to four different methods, namely Support Vector Machine, Naïve Bayes, Decision Tree, and K-NN. Based on the results of experiments carried out on the proposed model, accuracy increased up to a highest level of 90.55%. The hybrid PSO-IG method is a solution for increasing the accuracy of review classification of culinary food restaurants as decision support information.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85143055939
|
A Hybrid Method for Automatic Simplification of Japanese Sentence Patterns Based on Morphological Analysis and Grammatical Rules
|
In Japanese grammar, there are various Japanese sentence patterns with complicated usages. Learners of Japanese as a second language (JSL) feel that it is quite difficult to learn Japanese sentence patterns. To help JSL learners with their study of Japanese sentence patterns, this work introduces Japanese example sentence simplification which is a task of replacing difficult Japanese sentence patterns with simple Japanese sentence patterns or phrases. In this task, given an example sentence as an input, a new example sentence where the difficult Japanese sentence patterns have been replaced will be generated without changing the meaning of the original example sentence. For this purpose, we proposed a hybrid method combining morphological analysis and manual grammatical rules for identifying the difficult Japanese sentence patterns and simplifying the original example sentence. The final human evaluation results demonstrated the effectiveness of our proposed method. This proposed method can be applied to development of a Japanese/Simple Japanese parallel corpus for automatic Japanese grammar simplification and a computer-assisted language learning system (CALL) for Japanese reading comprehension.
|
[
"Paraphrasing",
"Syntactic Text Processing",
"Text Generation",
"Morphology"
] |
[
32,
15,
47,
73
] |
SCOPUS_ID:85124700780
|
A Hybrid Method for Fake News Detection using Cosine Similarity Scores
|
In this work, we propose a novel hybrid method for fake news detection. Two approaches have been used to assess the authenticity of the news using web-scraped data. In the first approach the data is pre-processed using NLP techniques like extraction of raw text and the removal of special characters, white spaces, and stop words. This is followed by lemmatization, which groups words with similar meanings. After lemmatization we apply Term Frequency - Inverse Document Frequency (TF-IDF) vectorization to form a corpus which is further used to train the models. We propose the use of a cosine similarity score, obtained after performing topic modelling, along with the corpus to improve the classification accuracies. The classifiers are KNN, Decision Tree, Naive Bayes, Logistic Regression, Passive-Aggressive Classifier, and SVM, used to determine whether the news is reliable or unreliable. More focus has been given to improving the classification accuracy of the passive-aggressive classifier, which is the most widely used classifier in fake news detection. In the second approach, we use an ensemble learning technique called stacking along with the cosine similarity score to train another model which gives the result as reliable or unreliable. It is observed that the second approach shows good improvement in the accuracy of fake news detection.
|
[
"Topic Modeling",
"Information Retrieval",
"Ethical NLP",
"Responsible & Trustworthy NLP",
"Reasoning",
"Fact & Claim Verification",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
9,
24,
17,
4,
8,
46,
36,
3
] |
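A rough sketch of the first approach's feature pipeline: TF-IDF vectors augmented with a cosine-similarity score derived from LDA topic vectors, feeding a Passive-Aggressive classifier. The toy texts, labels and two-topic setting are assumptions, not the paper's data:

```python
# Illustrative fragment: TF-IDF features plus one cosine-similarity column
# from topic modelling, classified with a Passive-Aggressive model.
import numpy as np
from scipy.sparse import hstack
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics.pairwise import cosine_similarity

texts = ["scientists confirm water traces on mars",
         "nasa publishes new rover imagery",
         "miracle pill cures every disease overnight",
         "secret plot revealed by anonymous blog"]
labels = [0, 0, 1, 1]          # 0 = reliable, 1 = unreliable (toy labels)

X_tfidf = TfidfVectorizer().fit_transform(texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(CountVectorizer().fit_transform(texts))
# Cosine similarity of each document's topic mix to the corpus average,
# appended as one extra feature column.
sim = cosine_similarity(doc_topics, doc_topics.mean(axis=0, keepdims=True))
X = hstack([X_tfidf, sim])

clf = PassiveAggressiveClassifier(random_state=0).fit(X, labels)
print(clf.predict(X))
```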
SCOPUS_ID:84970046078
|
A Hybrid Method of Domain Lexicon Construction for Opinion Targets Extraction Using Syntax and Semantics
|
Opinion targets extraction from Chinese microblogs plays an important role in opinion mining. There has been significant progress in this area recently, especially the method based on conditional random fields (CRF). However, this method only takes lexicon-related features into consideration and does not excavate the implied syntactic and semantic knowledge. We propose a novel approach which incorporates a domain lexicon with groups of syntactic and semantic features. The approach acquires the domain lexicon in a novel way which explores syntactic and semantic information through Part-of-Speech, dependency structure, phrase structure, semantic role and semantic similarity based on word embedding. We then combine the domain lexicon with opinion targets extracted from CRF with groups of features for opinion targets extraction. Experimental results on the COAE2014 dataset show the outperformance of the approach compared with other well-known methods on the task of opinion targets extraction.
|
[
"Representation Learning",
"Semantic Text Processing",
"Syntactic Text Processing",
"Information Extraction & Text Mining"
] |
[
12,
72,
15,
3
] |
SCOPUS_ID:85122459783
|
A Hybrid Method of Long Short-Term Memory and Auto-Encoder Architectures for Sarcasm Detection
|
Sarcasm detection is considered one of the most challenging tasks in sentiment analysis and opinion mining applications in social media. Sarcasm identification is therefore essential for a good public opinion decision. There are some studies on sarcasm detection that apply the standard word2vec model and have shown great performance with word-level analysis. However, once a sequence of terms is tackled, the performance drops. This is because averaging the embedding of each term in a sentence to get the general embedding discards the important embeddings of some terms. LSTM showed significant improvement in terms of document embedding. However, within the classification LSTM requires additional information in order to precisely classify the document as sarcasm or not. This study aims to propose two techniques based on LSTM and an Auto-Encoder for improving sarcasm detection. A benchmark dataset has been used in the experiments, with several pre-processing operations applied, including stop word removal, tokenization and special character removal; the LSTM is configured to produce the document embedding, and the Auto-Encoder acts as the classifier trained on the proposed LSTM's output. Results showed that the proposed LSTM with Auto-Encoder outperformed the baseline by achieving 84% F-measure on the dataset. The main reason behind the superiority is that the proposed auto-encoder processes the document embedding as input and attempts to output the same embedding vector. This enables the architecture to learn the interesting embeddings that have a significant impact on sarcasm polarity.
|
[
"Language Models",
"Semantic Text Processing",
"Representation Learning",
"Sentiment Analysis",
"Stylistic Analysis"
] |
[
52,
72,
12,
78,
67
] |
SCOPUS_ID:85112063822
|
A Hybrid Method of Multi-class SVM and Classification Method Based on Reliability Score for Autocoding of the Family Income and Expenditure Survey
|
The classification of text descriptions based on corresponding classes is an important task in official statistics. We developed a hybrid method of SVM utilizing Word2Vec and the previously developed reliability-score-based classifier to improve both classification accuracy and generalization performance. However, in the previous study, as SVM was simply applied to the whole given data, there was room to classify those data more efficiently and improve the classification accuracy. Therefore, this paper proposes a classification method based on multi-class SVM, a combined method of SVM and the k-means method, to improve classification accuracy. The numerical example shows the proposed method gives a better result compared to the results of ordinary methods.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
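One way to realize the "combined method of SVM and k-means" described above is to cluster the training data and fit one SVM per cluster, routing each new sample to the nearest cluster's expert. The cluster count and dataset below are illustrative assumptions, not the paper's setup:

```python
# Hedged sketch of a k-means + per-cluster SVM ensemble.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

experts = {}
for c in range(4):
    idx = km.labels_ == c
    if len(np.unique(y[idx])) > 1:              # need at least two classes
        experts[c] = SVC().fit(X[idx], y[idx])

def predict(x):
    c = km.predict(x.reshape(1, -1))[0]
    if c in experts:
        return experts[c].predict(x.reshape(1, -1))[0]
    return y[km.labels_ == c][0]                # degenerate single-class cluster

print(predict(X[0]), "true:", y[0])
```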
SCOPUS_ID:85045235673
|
A Hybrid Method to Sentiment Analysis for Chinese Microblog
|
In recent years, more and more netizens are willing to express their opinions on social media platforms. Sentiment analysis is effective and valuable to extract useful information out of massive text documents. In this paper, we proposed a hybrid approach to the sentiment analysis problem for Chinese microblog. This hybrid approach combines the basic techniques of natural language processing (NLP) and machine learning to determine the semantic orientation for Chinese microblog. The hybrid method is tested on two public data sets and the results show that our method is effective.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85142801722
|
A Hybrid Methodology Based on CRISP-DM and TDSP for the Execution of Preprocessing Tasks in Mexican Environmental Laws
|
This article focuses, on the one hand, on showing some techniques applied during the preprocessing of texts, represented by environmental laws of Mexico. The need to carry out this type of analysis is due to several factors such as: the large number of existing legislative documents such as laws, programs, regulations, etc.; the modifications that are made to the legal system due to reforms and decrees; and, especially, the possible contradictions that may arise among one or more laws. On the other hand, certain tasks of the CRISP-DM methodology were selected, specifically for the data preparation phase, in the generic tasks of selection, cleaning, transformation, and formatting. This was done using the NLTK library through text preprocessing techniques of tokenization, segmentation, denoising and normalization. Among the most remarkable results is a combination of CRISP-DM and Microsoft's Team Data Science Process oriented to the preprocessing of Mexican federal environmental laws. In addition, this article shows a detailed application of the hybrid methodology with the execution of a specialized task related to the extraction of text from a PDF file using the PyPDF2 and Pdfplumber libraries.
|
[
"Information Extraction & Text Mining"
] |
[
3
] |
SCOPUS_ID:85132411755
|
A Hybrid Model Combined with SVM and CNN for Community Content Classification
|
Community websites bring many conveniences to people, and the classification of community content plays an important role in website management and information searching. As the carrier of community content, posts are difficult to classify manually. According to the characteristics of community content, a hybrid classification model for machine learning is proposed. This model consists of three steps. Firstly, to address the problem that posts have few features, a weighted word vector is proposed to enrich the features of posts. Secondly, since a single kernel function of SVM cannot completely match all data distributions, a mixed kernel function is employed to improve the model. Finally, in order to fully utilize the powerful feature extraction ability of a Convolutional Neural Network as well as the classification ability of SVM, a hybrid model is designed and implemented by replacing the softmax layer with an SVM classifier. The corresponding experimental results indicate that, compared with a traditional Convolutional Neural Network, the proposed hybrid model has better performance and stability, with classification accuracy improved by 0.9% to 1.4% in general.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85124046761
|
A Hybrid Model Combining Formulae with Keywords for Mathematical Information Retrieval
|
Formula retrieval is an important research topic in Mathematical Information Retrieval (MIR). Most studies have focused on formula comparison to determine the similarity between mathematical documents. However, two similar formulae may appear in entirely different knowledge domains and have different meanings. Based on the N-ary Tree-based Formula Embedding Model (NTFEM, our previous work in [Y. Dai, L. Chen, and Z. Zhang, An N-ary tree-based model for similarity evaluation on mathematical formulae, in Proc. 2020 IEEE Int. Conf. Systems, Man, and Cybernetics, 2020, pp. 2578-2584.]), we introduce a new hybrid retrieval model, NTFEM-K, which combines formulae with their surrounding keywords for more accurate retrieval. By using keyword extraction technology, we extract keywords from the context, which supplement the semantic information of the formula. Then, we get the vector representations of keywords by the FastText N-gram embedding model and the vector representations of formulae by NTFEM. Finally, documents are sorted according to the similarity between keywords, and then the ranking results are optimized by formula similarity. For performance evaluation, NTFEM-K is compared not only with NTFEM but also with hybrid retrieval models combining formulae with long text and hybrid retrieval models combining formulae with their keywords using other keyword extraction algorithms. Experimental results show that the accuracy of the top-10 results of NTFEM-K is at least 20% higher than that of NTFEM and can be 50% higher on some specific topics.
|
[
"Semantic Text Processing",
"Term Extraction",
"Representation Learning",
"Reasoning",
"Numerical Reasoning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
1,
12,
8,
5,
24,
3
] |
SCOPUS_ID:85055856960
|
A Hybrid Model Reuse Training Approach for Multilingual OCR
|
Nowadays, there is a great demand for multilingual optical character recognition (MOCR) in various web applications. And recently, Long Short-Term Memory (LSTM) networks have yielded excellent results on Latin-based printed recognition. However, it is not flexible enough to cope with challenges posed by web applications where we need to quickly get an OCR model for a certain set of languages. This paper proposes a Hybrid Model Reuse (HMR) training approach for multilingual OCR task, based on 1D bidirectional LSTM networks coupled with a model reuse scheme. Specifically, Fixed Model Reuse (FMR) scheme is analyzed and incorporated into our approach, which implicitly grabs the useful discriminative information from a fixed text generating model. Moreover, LSTM layers from pre-trained networks for unilingual OCR task are reused to initialize the weights of target networks. Experimental results show that our proposed HMR approach, without assistance of any post-processing techniques, is able to effectively accelerate the training process and finally yield higher accuracy than traditional approaches.
|
[
"Multilinguality",
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Multimodality"
] |
[
0,
20,
52,
72,
74
] |
SCOPUS_ID:85099589722
|
A Hybrid Model for Container-code Detection
|
In this paper, we propose a container code detection algorithm that combines PSENet and CRNN to handle images that are greatly affected by other containers as well as extraneous text information in the picture. The proposed algorithm is divided into three parts: an object detection module, a text detection module and a text recognition module. In the initial step, the object detection module is used to calculate the positions of the container-code areas that need to be predicted; we then use the text detection module based on pixel segmentation, and finally obtain the container code through the end-to-end text recognition module. This algorithm is able to detect different container codes under vertical and horizontal scaling. We show that, in a complex multi-container scenario, the detection performance is good and highly stable when trained with multi-angle containers, achieving a relative improvement of up to 95%.
|
[
"Programming Languages in NLP",
"Multimodality"
] |
[
55,
74
] |
SCOPUS_ID:85103728105
|
A Hybrid Model for Documents Representation
|
Text representation is a critical issue for exploring the insights behind the text. Many models have been developed to represent the text in defined forms such as numeric vectors where it would be easy to calculate the similarity between the documents using the well-known distance measures. In this paper, we aim to build a model to represent text semantically either in one document or multiple documents using a combination of hierarchical Latent Dirichlet Allocation (hLDA), Word2vec, and Isolation Forest models. The proposed model aims to learn a vector for each document using the relationship between its words’ vectors and the hierarchy of topics generated using the hierarchical Latent Dirichlet Allocation model. Then, the isolation forest model is used to represent multiple documents in one representation as one profile to facilitate finding similar documents to the profile. The proposed text representation model outperforms the traditional text representation models when applied to represent scientific papers before performing content-based scientific papers recommendation for researchers.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
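The profile idea in the record above, simplified (the hLDA weighting is omitted): documents become averaged word2vec vectors, and an Isolation Forest fit on one profile's documents scores how similar new documents are to that profile. The corpus and hyperparameters are toy assumptions:

```python
# Hedged sketch: averaged word2vec document vectors + an Isolation Forest
# "profile". Higher decision_function values = more similar to the profile.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import IsolationForest

profile_docs = ["neural networks for image classification",
                "convolutional models for vision tasks",
                "deep learning improves object detection"]
candidates = ["transformers for image recognition",
              "medieval history of european trade routes"]

sentences = [d.split() for d in profile_docs + candidates]
w2v = Word2Vec(sentences, vector_size=50, min_count=1, epochs=100)

def doc_vec(text):
    words = [w for w in text.split() if w in w2v.wv]
    return np.mean([w2v.wv[w] for w in words], axis=0)

profile = np.vstack([doc_vec(d) for d in profile_docs])
forest = IsolationForest(random_state=0).fit(profile)

for d in candidates:
    score = forest.decision_function(doc_vec(d).reshape(1, -1))[0]
    print(round(score, 3), d)
```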
http://arxiv.org/abs/1506.01171v1
|
A Hybrid Model for Enhancing Lexical Statistical Machine Translation (SMT)
|
The interest in statistical machine translation systems is currently increasing due to political and social events in the world. We propose a Statistical Machine Translation (SMT) based model that can be used to translate a sentence from the source language (English) to the target language (Arabic) automatically by efficiently incorporating different statistical and Natural Language Processing (NLP) models such as the language model, alignment model, phrase-based model, reordering model, and translation model. These models are combined to enhance the performance of statistical machine translation (SMT). Many implementation tools have been used in this work, such as Moses, Giza++, IRSTLM, KenLM, and BLEU. Based on the implementation and evaluation of this model, and on comparing the generated translations with other implemented machine translation systems like Google Translate, it was shown that the proposed model enhances the results of statistical machine translation and forms a reliable and efficient model in this field of research.
|
[
"Machine Translation",
"Green & Sustainable NLP",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multilinguality"
] |
[
51,
68,
47,
4,
0
] |
SCOPUS_ID:84962535121
|
A Hybrid Model for Experts Finding in Community Question Answering
|
As a means to share knowledge, the community question answering (CQA) service provides users a chance to obtain or provide help by raising or answering questions. After a question is posted, the system must find an appropriate individual to answer this question. Several approaches have recently been proposed to find experts in CQA. In this paper, a new method to find experts in CQA is proposed by considering user post contents, answer votes, the ratio of best answers, and user relations. The votes are used in post relation analysis to calculate user authority. The user's knowledge score can be calculated through topic analysis. Considering that a question usually includes many trivial words, an accurate distribution is nearly impossible to obtain with LDA. To solve this problem, the vocabulary is extended by including the link information shown in a question, and the top 10 relevant words from Wikipedia are provided for each tag. Tag-LDA models the user topic distribution and predicts the topic distribution of new questions. An experiment is conducted on the Stack Overflow dataset, the world's largest computer programming CQA site. Experimental results showed approximately 2.97% to 7.79% performance improvement in nDCG@N metrics.
|
[
"Natural Language Interfaces",
"Question Answering"
] |
[
11,
27
] |
SCOPUS_ID:85103281811
|
A Hybrid Model for Medical Paper Summarization Based on COVID-19 Open Research Dataset
|
Automatic generation of summaries or key phrases has been applied in a variety of domains, such as scientific papers and news. In response to the COVID-19 pandemic, the White House and some research groups have prepared the COVID-19 article dataset. In the struggle against COVID-19, automatic summarization or key phrase methods can be useful for those wanting a quick overview of the latest information on pandemic topics. This paper introduces the COVID-19 dataset from Kaggle and proposes a novel model which combines a conventional Seq2Seq model with an attention mechanism and a classical keyword extraction method. Our motivation is to obtain key information while maintaining the coherence of the result. Experimental results reveal that our model, on the COVID-19 dataset, achieves a considerable improvement over a classical Seq2Seq model with attention mechanism.
|
[
"Language Models",
"Semantic Text Processing",
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
52,
72,
30,
47,
3
] |
SCOPUS_ID:85105759882
|
A Hybrid Model for Named Entity Recognition on Chinese Electronic Medical Records
|
Electronic medical records (EMRs) contain valuable information about the patients, such as clinical symptoms, diagnostic results, and medications. Named entity recognition (NER) aims to recognize entities from unstructured text, which is the initial step toward the semantic understanding of the EMRs. Extracting medical information from Chinese EMRs could be a more complicated task because of the difference between English and Chinese. Some researchers have noticed the importance of Chinese NER and used the recurrent neural network (RNN) or convolutional neural network (CNN) to deal with this task. However, it is interesting to know whether the performance could be improved if the advantages of the RNN and CNN can both be utilized. Moreover, RoBERTa-WWM, as a pre-training model, can generate embeddings with word-level features, which is more suitable for Chinese NER compared with Word2Vec. In this article, we propose a hybrid model. This model first obtains the entities identified by bidirectional long short-term memory and CNN, respectively, and then uses two hybrid strategies to output the final results relying on these entities. We also conduct experiments on raw medical records from real hospitals. This dataset is provided by the China Conference on Knowledge Graph and Semantic Computing in 2019 (CCKS 2019). Results demonstrate that the hybrid model can improve performance significantly.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85145006395
|
A Hybrid Model for Spatio-Temporal Information Recognition in COVID-19 Trajectory Text
|
Since the outbreak of the COVID-19 epidemic at the end of 2019, normalized epidemic prevention and control has become one of the core tasks of the entire country. Health self-examination by checking the trajectories of diagnosed patients has gradually become a basic necessity and is essential to epidemic prevention. COVID-19 patients' spatio-temporal information helps the public check whether their own trajectories overlap with those of confirmed cases, which promotes the epidemic prevention work. This paper proposes a named entity recognition model to automatically identify the time and place information in COVID-19 patient trajectory text. The model consists of an ALBERT layer, a Bi-GRU layer, and a GlobalPointer layer. The first two layers jointly focus on extracting the context's characteristics and the semantic dependencies, and the GlobalPointer layer extracts the corresponding named entities from a global perspective, which improves the recognition ability for long nested place and time entities. Compared to conventional named entity recognition models, our proposed model is highly effective because it has a smaller parameter scale and faster training speed. We evaluate the proposed model using a dataset crawled from official COVID-19 trajectory texts. The F1-score of the model reaches 92.86%, which outperforms four traditional named entity recognition models.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
http://arxiv.org/abs/2208.06961v1
|
A Hybrid Model of Classification and Generation for Spatial Relation Extraction
|
Extracting spatial relations from texts is a fundamental task for natural language understanding and previous studies only regard it as a classification task, ignoring those spatial relations with null roles due to their poor information. To address the above issue, we first view spatial relation extraction as a generation task and propose a novel hybrid model HMCGR for this task. HMCGR contains a generation and a classification model, while the former can generate those null-role relations and the latter can extract those non-null-role relations to complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve the accuracy based on the reflexivity principle of spatial relation. Experimental results on SpaceEval show that HMCGR outperforms the SOTA baselines significantly.
|
[
"Relation Extraction",
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
75,
24,
36,
3
] |
https://aclanthology.org//1991.iwpt-1.9/
|
A Hybrid Model of Human Sentence Processing: Parsing Right-Branching, Center-Embedded and Cross-Serial Dependencies
|
A new cognitive architecture for the syntactic aspects of human sentence processing (called Unification Space) is tested against experimental data from human subjects. The data, originally collected by Bach, Brown and Marslen-Wilson (1986), concern the comprehensibility of verb dependency constructions in Dutch and German: right-branching, center-embedded, and cross-serial dependencies of one to four levels deep. A satisfactory fit is obtained between comprehensibility data and parsability scores in the model.
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
SCOPUS_ID:85144565129
|
A Hybrid Model of Latent Semantic Analysis with Graph-Based Text Summarization on Telugu Text
|
In this paper, we propose a hybrid model of latent semantic analysis with graph-based extractive text summarization on Telugu text. Latent semantic analysis (LSA) is an unsupervised method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a corpus of text. The Text rank algorithm is a graph-based ranking algorithm based on the similarity scores of the sentences. This hybrid method has been implemented on Eenadu Telugu e-news data. The ROUGE-1 measures are used to evaluate the summaries of the proposed model against human-generated summaries in this extractive text summarization. The proposed LSA with Text rank method has an F1-score of 0.97, as against F1-scores of 0.50 for LSA and 0.49 for Text rank alone. The hybrid model yields better performance compared with the individual latent semantic analysis and Text rank algorithms.
|
[
"Information Extraction & Text Mining",
"Structured Data in NLP",
"Summarization",
"Text Generation",
"Multimodality"
] |
[
3,
50,
30,
47,
74
] |
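A minimal sketch of the hybrid scoring in the record above: LSA salience from a truncated SVD over TF-IDF, TextRank centrality from PageRank over a sentence-similarity graph, and an averaged combination. The equal weighting and English toy sentences are assumptions (the paper works on Telugu e-news):

```python
# Hedged sketch: combine LSA salience and TextRank centrality to pick
# summary sentences. Weights and texts are illustrative choices.
import numpy as np
import networkx as nx
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["The flood damaged several villages in the district.",
             "Rescue teams evacuated hundreds of residents overnight.",
             "Local markets reopened after the waters receded.",
             "Officials announced compensation for affected farmers."]

X = TfidfVectorizer().fit_transform(sentences)

# LSA salience: magnitude of each sentence along the first latent dimension.
lsa = np.abs(TruncatedSVD(n_components=1, random_state=0).fit_transform(X)).ravel()

# TextRank centrality: PageRank over the weighted sentence-similarity graph.
graph = nx.from_numpy_array(cosine_similarity(X))
pr = nx.pagerank(graph)
tr = np.array([pr[i] for i in range(len(sentences))])

def norm(v):
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

hybrid = 0.5 * norm(lsa) + 0.5 * norm(tr)
best = np.argsort(hybrid)[::-1][:2]              # keep the top-2 sentences
print([sentences[i] for i in sorted(best)])
```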
SCOPUS_ID:85128863382
|
A Hybrid Model of Query Expansion using Word2Vec
|
The query expansion method is one of the most popular methods to reduce vocabulary mismatch in information retrieval tasks. Traditional query expansion methods that use pseudo-relevance feedback are not very efficient for document retrieval from a large collection of documents. The proposed method aims to minimize the vocabulary mismatch using a hybrid method of query expansion based on the word embedding technique. The proposed method uses both Word2Vec and a local method to predict the expansion terms. The mean average precision of the proposed method is 0.2992. The proposed model is also compared with the original query and the BM25 model.
|
[
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning"
] |
[
72,
24,
12
] |
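The Word2Vec side of the expansion can be sketched by taking each query term's nearest neighbours in the embedding space as candidate expansion terms; the toy corpus and the top-2 cutoff are assumptions, and the local pseudo-relevance-style component mentioned in the abstract is omitted here:

```python
# Sketch of embedding-based query expansion with gensim's Word2Vec.
from gensim.models import Word2Vec

corpus = [["ocean", "pollution", "harms", "marine", "life"],
          ["plastic", "waste", "pollutes", "the", "sea"],
          ["marine", "animals", "ingest", "plastic", "debris"]]
w2v = Word2Vec(corpus, vector_size=50, min_count=1, epochs=200)

query = ["ocean", "pollution"]
expansion = set()
for term in query:
    if term in w2v.wv:
        # Nearest neighbours in embedding space become candidate terms.
        expansion.update(w for w, _ in w2v.wv.most_similar(term, topn=2))

expanded_query = query + sorted(expansion - set(query))
print(expanded_query)
```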
http://arxiv.org/abs/1911.08117v1
|
A Hybrid Morpheme-Word Representation for Machine Translation of Morphologically Rich Languages
|
We propose a language-independent approach for improving statistical machine translation for morphologically rich languages using a hybrid morpheme-word representation where the basic unit of translation is the morpheme, but word boundaries are respected at all stages of the translation process. Our model extends the classic phrase-based model by means of (1) word boundary-aware morpheme-level phrase extraction, (2) minimum error-rate training for a morpheme-level translation model using word-level BLEU, and (3) joint scoring with morpheme- and word-level language models. Further improvements are achieved by combining our model with the classic one. The evaluation on English to Finnish using Europarl (714K sentence pairs; 15.5M English words) shows statistically significant improvements over the classic model based on BLEU and human judgments.
|
[
"Machine Translation",
"Semantic Text Processing",
"Morphology",
"Syntactic Text Processing",
"Representation Learning",
"Text Generation",
"Multilinguality"
] |
[
51,
72,
73,
15,
12,
47,
0
] |
SCOPUS_ID:85136615700
|
A Hybrid Multi-answer Summarization Model for the Biomedical Question-Answering System
|
In natural language processing, text summarization is a difficult problem that always attracts attention from the research community, especially when working on biomedical text data, which lacks supporting tools and techniques. In this scientific research report, we propose a multi-document summarization model for the responses in a biomedical question and answer system. Our model includes components combining many advanced techniques as well as some improved methods proposed by the authors. We present research methods applied to two main approaches: an extractive summarization architecture based on multiple scores and state-of-the-art techniques, presenting our novel prosper-thy-neighbor strategies to improve performance; and an EAHS model (Extractive-Abstractive Hybrid model) based on a denoising auto-encoder for pre-training sequence-to-sequence models (BART), in which we propose a question-driven filtering phase to optimize the selection of the most useful information. Our proposed model achieved positive results, with the best ROUGE-1/ROUGE-L scores and runner-up ROUGE-2 F1 score (among 24 participating teams in MEDIQA 2021).
|
[
"Question Answering",
"Summarization",
"Natural Language Interfaces",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
27,
30,
11,
47,
3
] |
SCOPUS_ID:85085377645
|
A Hybrid Multilingual Fuzzy-Based Approach to the Sentiment Analysis Problem Using SentiWordNet
|
Sentiment analysis, or in particular social network analysis (SNA), is a research area that has grown explosively. This domain has become a very active research issue in data mining and natural language processing. Sentiment analysis (opinion mining) consists in analyzing and extracting emotions, opinions or attitudes from product reviews, movie reviews, etc., and classifying them into classes such as positive, negative and neutral, or extracting a degree of importance (polarity). In this paper, we propose a new hybrid approach for classifying tweets into classes based on fuzzy logic and a lexicon-based approach using SentiWordNet. Our approach consists in classifying tweets into three classes: positive, negative or neutral, using SentiWordNet and fuzzy logic with its three important steps: fuzzification, rule inference/aggregation, and defuzzification. The dataset of tweets to classify and the result of the classification are stored in the Hadoop Distributed File System (HDFS), and we use Hadoop MapReduce for the application of our proposal.
|
[
"Information Extraction & Text Mining",
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Multilinguality"
] |
[
3,
49,
36,
78,
24,
0
] |
http://arxiv.org/abs/2006.09213v2
|
A Hybrid Natural Language Generation System Integrating Rules and Deep Learning Algorithms
|
This paper proposes an enhanced natural language generation system combining the merits of both rule-based approaches and modern deep learning algorithms, boosting its performance to the extent that the generated textual content exhibits agile human-writing styles while its content logic remains highly controllable. We also propose a novel approach called HMCU to measure the performance of natural language processing comprehensively and precisely.
|
[
"Text Generation"
] |
[
47
] |
SCOPUS_ID:85097055947
|
A Hybrid Neural Network BERT-Cap Based on Pre-Trained Language Model and Capsule Network for User Intent Classification
|
User intent classification is a vital component of a question-answering system or a task-based dialogue system. In order to understand the goals of users' questions or discourses, the system categorizes user text into a set of pre-defined user intent categories. User questions or discourses are usually short in length and lack sufficient context; thus, it is difficult to extract deep semantic information from these types of text and the accuracy of user intent classification may be affected. To better identify user intents, this paper proposes a BERT-Cap hybrid neural network model with focal loss for user intent classification to capture user intents in dialogue. The model uses multiple transformer encoder blocks to encode user utterances and initializes encoder parameters with a pre-trained BERT. Then, it extracts essential features using a capsule network with dynamic routing after utterances encoding. Experiment results on four publicly available datasets show that our model BERT-Cap achieves a F1 score of 0.967 and an accuracy of 0.967, outperforming a number of baseline methods, indicating its effectiveness in user intent classification.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Sentiment Analysis",
"Intent Recognition",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
78,
79,
11,
38,
36,
3
] |
https://aclanthology.org//D19-6002/
|
A Hybrid Neural Network Model for Commonsense Reasoning
|
This paper proposes a hybrid neural network (HNN) model for commonsense reasoning. An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers. HNN obtains new state-of-the-art results on three classic commonsense reasoning tasks, pushing the WNLI benchmark to 89%, the Winograd Schema Challenge (WSC) benchmark to 75.1%, and the PDP60 benchmark to 90.0%. An ablation study shows that language models and semantic similarity models are complementary approaches to commonsense reasoning, and HNN effectively combines the strengths of both. The code and pre-trained models will be publicly available at https://github.com/namisan/mt-dnn.
|
[
"Language Models",
"Semantic Text Processing",
"Commonsense Reasoning",
"Semantic Similarity",
"Reasoning"
] |
[
52,
72,
62,
53,
8
] |
SCOPUS_ID:85047981382
|
A Hybrid Optimization Framework Fusing Word- and Sentence-Level Information for Extractive Summarization
|
In order to fuse word-level and sentence-level information from different semantic spaces, the authors propose a hybrid optimization framework that optimizes word-level information while simultaneously incorporating sentence-level information as constraints. The optimization is conducted by iterative unit substitutions. The performance on DUC benchmark datasets demonstrates the effectiveness of the proposed framework in terms of ROUGE evaluation.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
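The "iterative unit substitutions" mentioned in this abstract can be pictured as a greedy local search: repeatedly swap a summary sentence for an unused one whenever the swap improves a word-level objective. The coverage objective and stopping rule below are invented stand-ins, not the paper's constrained formulation:

```python
def objective(summary):
    """Toy word-level objective: number of distinct words covered."""
    return len(set(w for s in summary for w in s.lower().split()))

def substitute_summarize(sentences, k=2, max_iters=50):
    summary, pool = list(sentences[:k]), list(sentences[k:])
    for _ in range(max_iters):
        best_gain, best_swap = 0, None
        for i in range(len(summary)):
            for j in range(len(pool)):
                trial = summary[:i] + [pool[j]] + summary[i + 1:]
                gain = objective(trial) - objective(summary)
                if gain > best_gain:
                    best_gain, best_swap = gain, (i, j)
        if best_swap is None:
            break                                    # local optimum reached
        i, j = best_swap
        summary[i], pool[j] = pool[j], summary[i]    # apply the best substitution
    return summary

sents = ["the cat sat on the mat",
         "dogs chase cats in the park",
         "the stock market rose sharply today",
         "investors welcomed the market gains"]
print(substitute_summarize(sents))
```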
SCOPUS_ID:85129500712
|
A Hybrid Optimized Deep Learning Framework to Enhance Question Answering System
|
One of the challenging tasks in big-data machine learning is the question-answering (QA) system. QA datasets contain several question types: multiple-choice questions, yes/no queries, wh-questions, short questions, factoid questions, etc. Hence, training the data and classifying questions and answers is a difficult task. To address these issues, the current research work has focused on constructing a novel Recurrent Hybrid Ant Colony and African Buffalo Model (RHAC-ABM) for the QA classification and answer selection process. Initially, the specific QA dataset was used to train the system; the trained datasets were then tokenized, and training flaws were removed systematically. Moreover, the proposed model was designed by upgrading the hybrid ant and African buffalo fitness to a dense layer. The hybrid function in the recurrent classification layer also affords a high accuracy rate for the query specification and answer selection process. Subsequently, the proficiency of the designed approach was validated against other related existing models by comparing the chief metrics. The developed RHAC-ABM attained 99.6% accuracy, 99.51% F-measure, 98.5% recall, 98.5% precision, and a low error rate of 1.4% for the question classification and answer selection process. Since these results are better than those of other models, they prove the robustness of the proposed system.
|
[
"Text Classification",
"Question Answering",
"Natural Language Interfaces",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
27,
11,
24,
3
] |
http://arxiv.org/abs/1909.13568v1
|
A Hybrid Persian Sentiment Analysis Framework: Integrating Dependency Grammar Based Rules and Deep Neural Networks
|
Social media hold valuable, vast and unstructured information on public opinion that can be utilized to improve products and services. The automatic analysis of such data, however, requires a deep understanding of natural language. Current sentiment analysis approaches are mainly based on word co-occurrence frequencies, which are inadequate in most practical cases. In this work, we propose a novel hybrid framework for concept-level sentiment analysis in the Persian language that integrates linguistic rules and deep learning to optimize polarity detection. When a pattern is triggered, the framework allows sentiments to flow from words to concepts based on symbolic dependency relations. When no pattern is triggered, the framework switches to its subsymbolic counterpart and leverages deep neural networks (DNN) to perform the classification. The proposed framework outperforms state-of-the-art approaches (including support vector machines and logistic regression) and DNN classifiers (long short-term memory and convolutional neural networks) by margins of 10-15% and 3-4% respectively, using benchmark Persian product and hotel review corpora.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85133326965
|
A Hybrid RNN based Deep Learning Approach for Text Classification
|
As text classification has grown in relevance over the last decade, a plethora of approaches have been created to meet the difficulties associated with it. To handle the complexities involved in the text classification process, the focus has shifted away from traditional machine learning methods and toward neural networks. In this work, the traditional RNN model is augmented with different layers to test text classification accuracy. The work involves the implementation of an RNN+LSTM+GRU model, which is compared with RCNN+LSTM and RNN+GRU models. The models are trained using GloVe word embeddings. The accuracy and recall obtained from the models are assessed, and the F1 score is used to compare their performance. The hybrid RNN model has three LSTM layers and two GRU layers, whereas the RCNN model contains four convolution layers and four LSTM layers, and the RNN model contains four GRU layers. The weighted average is found to be 0.74 for the hybrid RNN model, 0.69 for RCNN+LSTM, and 0.77 for RNN+GRU. The RNN+LSTM+GRU model shows moderate accuracy in the initial epochs, but the accuracy slowly increases as the number of epochs grows.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
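The layer composition this abstract reports (three LSTM layers followed by two GRU layers) is easy to reproduce as a skeleton. A PyTorch sketch follows; the hidden sizes, vocabulary size, and classification head are assumptions, and in practice the embedding layer would be initialised with GloVe vectors:

```python
import torch
import torch.nn as nn

class HybridRNN(nn.Module):
    """Hybrid RNN text classifier: 3 stacked LSTM layers + 2 stacked GRU layers."""
    def __init__(self, vocab_size=20000, embed_dim=100, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # load GloVe weights here
        self.lstm = nn.LSTM(embed_dim, 128, num_layers=3, batch_first=True)
        self.gru = nn.GRU(128, 64, num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        x, _ = self.lstm(x)
        x, _ = self.gru(x)
        return self.fc(x[:, -1])           # classify from the last time step

model = HybridRNN()
logits = model(torch.randint(0, 20000, (2, 30)))   # 2 texts of 30 tokens each
print(logits.shape)                                # -> torch.Size([2, 5])
```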
http://arxiv.org/abs/cmp-lg/9406014v1
|
A Hybrid Reasoning Model for Indirect Answers
|
This paper presents our implemented computational model for interpreting and generating indirect answers to Yes-No questions. Its main features are 1) a discourse-plan-based approach to implicature, 2) a reversible architecture for generation and interpretation, 3) a hybrid reasoning model that employs both plan inference and logical inference, and 4) use of stimulus conditions to model a speaker's motivation for providing appropriate, unrequested information. The model handles a wider range of types of indirect answers than previous computational models and has several significant advantages.
|
[
"Reasoning"
] |
[
8
] |
SCOPUS_ID:85131100658
|
A Hybrid Recommendation Integrating Semantic Learner Modelling and Sentiment Multi-Classification
|
Enhancing virtual learning platforms requires adopting new intelligent mechanisms so that the long-term learner experience can be improved. Sentiment analysis gives us insight into how suitable a specific scientific material is to be recommended to a learner. It depends on the feedback of similar learners, taking many factors into consideration such as preference, knowledge level, and learning pattern. In this work, a hybrid e-learning recommendation system is proposed based on individualization and sentiment analysis. A new approach is provided for modelling the semantic user model based on a generated semantic matrix that captures the learner's preferences from their selections of interest. The extracted semantic matrix is used for text representation by utilizing the ConceptNet knowledge base, which relies on a contextual graph and expanded terms to represent the correlation among terms and materials. Using the terms extracted from the semantic user model, Word-Embeddings-Based Sentiment Analysis (WEBSA) recommends the highest-rated learning materials to learners. Variant WEBSA models are proposed, relying on natural language processing (NLP) to generate effective vocabulary representations along with a customized convolutional neural network (CNN) for sentiment multi-classification tasks. To validate the language model, two datasets are used: a tailored dataset created by scraping reviews of different e-learning resources, and a public dataset. The experimental results show that the lowest error rate is achieved on our customized dataset, where the model named CNN-Specific-Task-CBOWBSA outperforms the others with 89.26% accuracy.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
78,
24,
3
] |
SCOPUS_ID:85130186083
|
A Hybrid Recommendation System of Upcoming Movies Using Sentiment Analysis of YouTube Trailer Reviews
|
Movies are one of the integral components of our everyday entertainment. In today's world, people prefer to watch movies on their personal devices, and many movies are available on all popular Over-the-Top (OTT) platforms, with multiple new movies released onto these platforms every day. A recommendation system is beneficial for guiding the user to a choice among this overload of content. Most research on recommendation systems has been conducted based on existing movies, but a recommendation system for forthcoming movies is needed to help viewers make a personalized decision about which upcoming movies to watch. In this article, we propose a framework combining sentiment analysis and a hybrid recommendation system for recommending movies that are not yet released but whose trailers are available. In the first module, we extracted comments about each movie trailer from the official YouTube channel for Netflix, computed the overall sentiment, and predicted the rating of the upcoming movie. In the second module, our proposed hybrid recommendation system produced a list of preferred upcoming movies for individual users. In the third module, we were finally able to recommend potentially popular forthcoming movies to the user according to their personal preferences, by fusing the predicted rating and the preferred list of upcoming movies from modules one and two. This study used publicly available data from The Movie Database (TMDb). We also created a dataset of new movies by randomly selecting a list of one hundred movies released between 2020 and 2021 on Netflix. Our experimental results established that the predicted ratings of unreleased movies had the lowest error. Additionally, we showed that the proposed hybrid recommendation system recommends movies according to the user's preferences as well as potentially promising forthcoming movies.
|
[
"Sentiment Analysis"
] |
[
78
] |
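One simple way to realise the first module's step of turning overall comment sentiment into a predicted rating is a linear mapping from mean polarity onto the rating scale. The mapping and the polarity values below are assumptions for illustration, not the paper's actual formula:

```python
def predict_rating(comment_polarities, low=1.0, high=10.0):
    """Map mean comment polarity in [-1, 1] linearly onto a [low, high] rating."""
    mean_p = sum(comment_polarities) / len(comment_polarities)
    return low + (mean_p + 1.0) / 2.0 * (high - low)

# Polarity scores for four trailer comments (invented values).
print(round(predict_rating([0.6, 0.2, -0.1, 0.8]), 2))   # -> 7.19
```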
SCOPUS_ID:85137331932
|
A Hybrid Recommender Model based on Information Retrieval for Mexican Tourism Text in Rest-Mex 2022
|
Nowadays, tourism is a principal economic sector for the world: it improves exports, increases the number of jobs, and develops the economy. In México, tourism represents 8.7% of GDP and generates 4.5 million direct jobs; however, this economic sector has been affected by the COVID-19 pandemic. For these reasons, a hybrid recommender model based on information retrieval is presented in this research to tackle the recommendation systems task of Rest-Mex 2022. A vector space model with a tf-idf weighting scheme and cosine similarity is implemented. Besides, a hybrid recommender model is generated by applying item-item collaborative filtering, content-based filtering, and a switching hybrid approach. Finally, our proposal won second and third place in the competition.
|
[
"Information Retrieval"
] |
[
24
] |
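The retrieval component this abstract names, a vector space model with tf-idf weighting and cosine similarity, maps directly onto scikit-learn. A minimal sketch with invented toy descriptions of Mexican destinations follows (the collaborative-filtering and switching layers are omitted):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented toy item descriptions; the shared task uses real Rest-Mex data.
docs = ["playa limpia y tranquila en Cancun",
        "museo de historia en la Ciudad de Mexico",
        "hotel frente a la playa con alberca"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)        # tf-idf vector space model

query_vec = vectorizer.transform(["playa y alberca"])
scores = cosine_similarity(query_vec, doc_matrix)[0]
for i in scores.argsort()[::-1]:                   # most similar items first
    print(round(float(scores[i]), 3), docs[i])
```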
SCOPUS_ID:85128605329
|
A Hybrid Relation Extraction Model for Knowledge Graph of Heroic Epic 'Gesar'
|
Extracting relations from unstructured text is the primary step of knowledge graph construction. This step is harder in the Tibetan heroic epic 'Epic Appreciation King Gesar' than in other Chinese stories because of the difficulty of identifying the named entities and the complicated character relationships. Given this, and building on the named entity corpus constructed earlier with the help of domain experts, a hybrid relation extraction model combining entity features and syntactic-semantic features, EBERT-BiLSTM, is proposed in this paper. The dataset, the corpus, and the principle and process of model construction are described in detail. Experiments show that EBERT-BiLSTM is more effective and performs better than BiLSTM-Attention and BERT alone. Finally, EBERT-BiLSTM is used to extract 320 character-relation triples and then construct the heroic epic 'Gesar' knowledge graph.
|
[
"Language Models",
"Semantic Text Processing",
"Relation Extraction",
"Structured Data in NLP",
"Knowledge Representation",
"Multimodality",
"Information Extraction & Text Mining"
] |
[
52,
72,
75,
50,
18,
74,
3
] |
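A stripped-down skeleton of a BERT-plus-BiLSTM relation classifier in the spirit of EBERT-BiLSTM follows. A plain nn.Embedding stands in for the BERT encoder and for the entity and syntactic-semantic features the paper adds, and all layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class HybridRelationClassifier(nn.Module):
    """BiLSTM over contextual token vectors, predicting the relation that
    holds between two entities mentioned in the input sentence."""
    def __init__(self, vocab_size=30000, dim=128, num_relations=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # stand-in for BERT outputs
        self.bilstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * dim, num_relations)

    def forward(self, token_ids):
        x, _ = self.bilstm(self.embed(token_ids))
        return self.fc(x.mean(dim=1))    # pool over tokens, score each relation

model = HybridRelationClassifier()
print(model(torch.randint(0, 30000, (4, 25))).shape)   # -> torch.Size([4, 10])
```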
SCOPUS_ID:85097287637
|
A Hybrid Representation of Word Images for Keyword Spotting
|
In query-by-example keyword spotting, how to represent word images is a very important issue. Meanwhile, the out-of-vocabulary (OOV) problem frequently occurs in keyword spotting, making OOV keyword spotting a challenging task. In this paper, a hybrid representation approach for word images is presented to accomplish OOV keyword spotting. Specifically, a sequence-to-sequence model is utilized to generate representation vectors of word images, while a CNN with the VGG16 architecture is used to obtain another type of representation vector. A score fusion scheme is then adopted to combine the two kinds of representation vectors. Experimental results demonstrate that the proposed hybrid representation approach is especially suited to solving the OOV keyword spotting problem.
|
[
"Visual Data in NLP",
"Multimodality",
"Semantic Text Processing",
"Representation Learning"
] |
[
20,
74,
72,
12
] |
SCOPUS_ID:85143975338
|
A Hybrid Response Generation Model for an Empathetic Conversational Agent
|
Research on the use of conversational agents or chatbots to provide alternative and accessible mental health interventions has gained much interest in recent years. Designed to engage human users through natural and empathetic conversations, these chatbots have shown potential applications in pre-emptive healthcare, which emphasizes the importance of helping individuals maintain optimal mental health and well-being. However, the use of rule-based or retrieval-based models limits a chatbot's ability to process user input and generate relevant and empathetic responses that dynamically adapt to the context of a conversation. Neural generative models, currently applied in open-domain dialogue and text generation systems, may be able to address the limitations of retrieval-based models. In this paper, we present VHope (Virtual Hope), a conversational agent that combines retrieval-based and generative models to perform its role as a therapist capable of generating empathetic responses that enrich the conversation. The best-performing generative model, derived from training DialoGPT with the EmpatheticDialogues dataset and a local mental well-being dataset, yielded a perplexity score of 9.977. Results from experts' evaluation of the conversation logs showed that the responses generated by VHope were 67% relevant, 78% human-like, and 79% empathic. These results further support the idea of modelling complex conversations with ease by using a neural model and a task-specific dataset. Future improvements may include the use of a larger, human-based empathetic dataset to enhance the retrieval model's conversation design and the generative model's fine-tuning.
|
[
"Dialogue Response Generation",
"Natural Language Interfaces",
"Ethical NLP",
"Text Generation",
"Dialogue Systems & Conversational Agents",
"Information Retrieval",
"Responsible & Trustworthy NLP"
] |
[
14,
11,
17,
47,
38,
24,
4
] |
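For reference, the perplexity figure quoted in this abstract follows the standard definition: the exponential of the average negative log-likelihood per token. A minimal sketch (the log-probability values below are invented):

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

print(round(perplexity([-2.1, -0.7, -3.2, -1.5]), 3))   # -> 6.521
```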
http://arxiv.org/abs/1904.09068v2
|
A Hybrid Retrieval-Generation Neural Conversation Model
|
Intelligent personal assistant systems that are able to have multi-turn conversations with human users are becoming increasingly popular. Most previous research has been focused on using either retrieval-based or generation-based methods to develop such systems. Retrieval-based methods have the advantage of returning fluent and informative responses with great diversity. However, the performance of the methods is limited by the size of the response repository. On the other hand, generation-based methods can produce highly coherent responses on any topics. But the generated responses are often generic and not informative due to the lack of grounding knowledge. In this paper, we propose a hybrid neural conversation model that combines the merits of both response retrieval and generation methods. Experimental results on Twitter and Foursquare data show that the proposed model outperforms both retrieval-based methods and generation-based methods (including a recently proposed knowledge-grounded neural conversation model) under both automatic evaluation metrics and human evaluation. We hope that the findings in this study provide new insights on how to integrate text retrieval and text generation models for building conversation systems.
|
[
"Natural Language Interfaces",
"Information Retrieval",
"Dialogue Systems & Conversational Agents"
] |
[
11,
24,
38
] |
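The pool-and-rerank idea in this abstract can be sketched end to end with toy components: retrieve candidates, add one generated response, and let a scorer pick the reply. The word-overlap retriever, template generator, and overlap scorer below are invented stand-ins for the paper's neural modules:

```python
def overlap(a, b):
    """Toy relevance score: count of shared words between two utterances."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query, repository, k=3):
    return sorted(repository, key=lambda r: overlap(query, r), reverse=True)[:k]

def generate(query):
    return "Could you say more about that?"   # stand-in for a seq2seq generator

def hybrid_respond(query, repository):
    candidates = retrieve(query, repository) + [generate(query)]
    # A real system scores response quality with a learned ranker.
    return max(candidates, key=lambda r: overlap(query, r))

repo = ["I love the coffee shops in Boston.",
        "The weather is great for hiking today."]
print(hybrid_respond("Any good coffee shops nearby?", repo))
```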
SCOPUS_ID:85073904762
|
A Hybrid Rules and Statistical Method for Arabic to English Machine Translation
|
Arabic is one of the six major world languages. It originated in the area currently known as the Arabian Peninsula. Arabic is the joint official language in Middle Eastern and African states. Large communities of Arabic speakers have existed outside of the Middle East since the end of the last century, particularly in the United States and Europe. So finding a quick and efficient Arabic machine translator has become an urgent necessity, due to the differences between the languages spoken in the world's communities and the vast development that has occurred worldwide. Arabic presents many of the significant challenges of other languages, such as word order and ambiguity. Word ordering is a problem because Arabic has four sentence structures that allow different word orders. Ambiguity in the Arabic language is a notorious problem because of the richness and complexity of Arabic morphology. The core problems in machine translation are reordering the words and estimating the right word translation among many options in the lexicon. The rule-based machine translation (RBMT) approach is the way to reorder words, and a statistical approach, such as Expectation Maximisation (EM), is the way to select the right word translations and count word frequencies. Combining RBMT with EM plays an important role in generating good-quality MT. This paper presents a combination of the rule-based machine translation (RBMT) approach with the Expectation Maximisation (EM) algorithm. These two techniques have been applied successfully to the word ordering and ambiguity problems in Arabic-to-English machine translation.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
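The EM step this abstract pairs with RBMT is the classic word-translation estimation; an IBM-Model-1-style sketch follows. The three transliterated sentence pairs are invented toy data, and real systems run this over large Arabic-English bitexts:

```python
from collections import defaultdict

# Toy transliterated Arabic-English sentence pairs (invented).
corpus = [(["kitab"], ["book"]),
          (["kitab", "kabir"], ["big", "book"]),
          (["bayt", "kabir"], ["big", "house"])]

t = defaultdict(lambda: 0.25)             # uniform initial t(e|f)

for _ in range(10):                       # EM iterations
    count, total = defaultdict(float), defaultdict(float)
    for src, tgt in corpus:
        for e in tgt:                     # E-step: expected alignment counts
            z = sum(t[(e, f)] for f in src)
            for f in src:
                c = t[(e, f)] / z
                count[(e, f)] += c
                total[f] += c
    for pair, c in count.items():         # M-step: re-estimate t(e|f)
        t[pair] = c / total[pair[1]]

print(round(t[("book", "kitab")], 3))     # converges towards 1.0
```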
http://arxiv.org/abs/1910.10363v2
|
A Hybrid Semantic Parsing Approach for Tabular Data Analysis
|
This paper presents a novel approach to translating natural language questions to SQL queries for given tables, which meets three requirements of a real-world data analysis application: cross-domain use, multilingualism, and enabling a quick start. Our proposed approach consists of: (1) a novel data abstraction step before the parser to make parsing table-agnostic; (2) a set of semantic rules for parsing abstracted data-analysis questions into intermediate logic forms as tree derivations to reduce the search space; (3) a neural-based model as a local scoring function on a span-based semantic parser for structured optimization and efficient inference. Experiments show that our approach outperforms state-of-the-art algorithms on WikiSQL, a large open benchmark dataset. We also achieve promising results on a small dataset of more complex queries in both English and Chinese, which demonstrates our approach's language expansion and quick-start ability.
|
[
"Programming Languages in NLP",
"Semantic Text Processing",
"Structured Data in NLP",
"Semantic Parsing",
"Multimodality"
] |
[
55,
72,
50,
40,
74
] |
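The rule-based part of such a pipeline, where questions are parsed into logic forms and then into SQL, can be caricatured with a couple of patterns. The templates below are invented toy rules; the paper's system abstracts the table schema first and scores tree derivations with a neural model:

```python
import re

def question_to_sql(question, table="data"):
    """Toy pattern-to-SQL rules standing in for the semantic-rule parser."""
    q = question.lower().strip("?")
    m = re.match(r"how many (\w+) have (\w+) (above|below) (\d+)", q)
    if m:
        col, cond_col, op, val = m.groups()
        sign = ">" if op == "above" else "<"
        return f"SELECT COUNT({col}) FROM {table} WHERE {cond_col} {sign} {val}"
    m = re.match(r"what is the average (\w+)", q)
    if m:
        return f"SELECT AVG({m.group(1)}) FROM {table}"
    return None                           # no rule fired

print(question_to_sql("How many players have salary above 100000?"))
print(question_to_sql("What is the average age?"))
```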
SCOPUS_ID:85089340098
|
A Hybrid Semantic Representation with Internal and External Knowledge for Word Similarity
|
Word similarity (WS) plays an important role in natural language processing. Existing approaches to WS are mainly based on word embeddings, which are obtained from massive, high-quality corpora; they neglect specific fields with insufficient corpora and do not consider the prior knowledge that can provide useful semantic information for calculating the similarity of word pairs. In this paper, we propose a hybrid word representation method and combine multiple sources of prior knowledge with context semantic information to address the WS task. First, the core of our method is the construction of a related word set for each word, including the word concept, character concepts, and word synonyms, extracted from existing knowledge bases to enrich semantic knowledge under small corpora. Then, we encode the related word set based on a pre-trained word embedding model and aggregate these vectors into a related vector with semantic weights to obtain the prior knowledge of the related word set. Finally, we incorporate the related vector into the context vector of the word to train a specific WS task. Compared with baseline models, experiments on similarity evaluation datasets validate the effectiveness of our hybrid model on the WS task.
|
[
"Knowledge Representation",
"Semantic Text Processing",
"Representation Learning"
] |
[
18,
72,
12
] |
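The aggregation step this abstract describes (encode the related word set, combine it with semantic weights, then mix it into the word's own vector) reduces to a weighted average of embeddings. In the sketch below, random vectors stand in for a pre-trained embedding model, and the weights and mixing coefficient alpha are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in pre-trained embeddings; a real system loads word2vec/GloVe vectors.
emb = {w: rng.normal(size=50) for w in ["bank", "finance", "money", "river"]}

def related_vector(related_words, weights):
    """Aggregate the related word set (concepts, synonyms) with semantic weights."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ np.stack([emb[x] for x in related_words])

def hybrid_representation(word, related_words, weights, alpha=0.5):
    """Mix the word's own vector with the prior-knowledge vector (alpha assumed)."""
    return alpha * emb[word] + (1 - alpha) * related_vector(related_words, weights)

v = hybrid_representation("bank", ["finance", "money"], [0.7, 0.3])
cos = v @ emb["finance"] / (np.linalg.norm(v) * np.linalg.norm(emb["finance"]))
print(round(float(cos), 3))               # similarity to one related word
```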
SCOPUS_ID:85097232634
|
A Hybrid Semantic Similarity Measurement for Geospatial Entities
|
Semantic similarity plays a critical role in geospatial cognition, semantic interoperability, information integration, and information retrieval and reasoning in geographic information science. Although some computational models for semantic similarity measurement have been proposed in the literature, these models overlook spatial distribution characteristics or geometric features and pay little attention to the types and ranges of properties. This paper presents a novel semantic similarity measurement approach that employs a richer structured semantic description containing properties as well as relations. The approach captures geo-semantic similarity more accurately and effectively by evaluating the contributions of ontological properties, measuring the effect of relative position in the ontology hierarchy, and computing geometric-feature similarity for geospatial entities. A water body ontology is used to illustrate the approach in a case study. A human-subject experiment was carried out, and the experimental results show that the proposed approach performs well, based on the high correlation between its computed similarity results and humans' judgements of similarity.
|
[
"Knowledge Representation",
"Semantic Text Processing",
"Semantic Similarity"
] |
[
18,
72,
53
] |
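The fusion this abstract describes, combining property similarity, position in the ontology hierarchy, and geometric-feature similarity, can be sketched as a weighted sum. The Wu-Palmer-style hierarchy term and the weights are illustrative assumptions, not the paper's actual formulation:

```python
def geo_similarity(prop_sim, depth_a, depth_b, lca_depth, geom_sim,
                   weights=(0.4, 0.3, 0.3)):
    """Fuse property, hierarchy and geometric similarities for two entities;
    the hierarchy term uses node depths and their lowest common ancestor."""
    hier_sim = 2.0 * lca_depth / (depth_a + depth_b)   # Wu-Palmer style
    wp, wh, wg = weights
    return wp * prop_sim + wh * hier_sim + wg * geom_sim

# e.g. "reservoir" vs "lake" in a water-body ontology (invented values)
print(round(geo_similarity(0.8, depth_a=5, depth_b=4, lca_depth=3, geom_sim=0.6), 3))
```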