Dataset schema (one record per paper: id, title, abstract, labels):
id — string (20–52 chars)
title — string (3–459 chars)
abstract — string (0–12.3k chars)
classification_labels — list
numerical_classification_labels — list
SCOPUS_ID:85112217361
A Deep Learning Approach Toward Determining the Effects of News Trust Factor Based on Source Polarity
Fake news is one of the biggest threats in the cyber-world today. It comes in several categories, such as clickbait, propaganda, satire/parody, sloppy journalism, misleading headlines, and biased or slanted news. Given the limited time generally available to readers, they are mostly exposed to a few of these forms, such as misleading headlines and propaganda. These types of fake news generally arise from the inclination of news portals/firms toward particular ideologies or political allegiances. The cumulative effect of such circulation of fake news leads to group enmity, political misalignment, disruption of communal harmony, and other society-paralyzing problems. The work in this paper seeks to identify and establish the effects of the polarity and inclination of a news portal on the trust factor of its news with the help of a systematic machine learning approach. It combines sentiment analysis and fake news detection in a multi-level classification model that validates the effect of source polarity and inclination on the news trust factor.
[ "Polarity Analysis", "Ethical NLP", "Sentiment Analysis", "Reasoning", "Fact & Claim Verification", "Responsible & Trustworthy NLP" ]
[ 33, 17, 78, 8, 46, 4 ]
SCOPUS_ID:85121808442
A Deep Learning Approach for Aspect Sentiment Triplet Extraction in Portuguese
Aspect Sentiment Triplet Extraction (ASTE) is a subtask of Aspect-Based Sentiment Analysis (ABSA). It aims to extract aspect-opinion pairs from a sentence and identify the sentiment polarity associated with them. For instance, given the sentence “Large rooms and great breakfast”, ASTE outputs the triplets T = {(rooms, large, positive), (breakfast, great, positive)}. Although several approaches to ABSA have recently been proposed, those for Portuguese have been mostly limited to extracting only aspects, without addressing the ASTE task. This work develops a Deep Learning based framework to perform Aspect Sentiment Triplet Extraction in Portuguese. The framework uses BERT as a context-aware sentence encoder, multiple parallel non-linear layers to obtain aspect and opinion representations, and a Graph Attention layer along with a biaffine scorer to determine the sentiment dependency between each aspect-opinion pair. The comparison results show that our proposed framework significantly outperforms the baselines in Portuguese and is competitive with its counterparts in English.
[ "Information Extraction & Text Mining", "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 3, 23, 78 ]
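To make the biaffine scoring idea from the ASTE abstract above concrete, here is a minimal PyTorch sketch: parallel non-linear layers produce aspect and opinion views of each token, and a biaffine form scores every (aspect, opinion) position pair per sentiment label. All dimensions and names are illustrative assumptions, not the paper's implementation.

```python
# Minimal biaffine aspect-opinion pair scorer (PyTorch). A sketch only:
# hidden states would come from a BERT encoder in the paper's framework.
import torch
import torch.nn as nn

class BiaffinePairScorer(nn.Module):
    def __init__(self, hidden_dim=768, rep_dim=128, num_labels=4):
        super().__init__()
        # Parallel non-linear layers produce separate aspect/opinion views.
        self.aspect_mlp = nn.Sequential(nn.Linear(hidden_dim, rep_dim), nn.ReLU())
        self.opinion_mlp = nn.Sequential(nn.Linear(hidden_dim, rep_dim), nn.ReLU())
        # Biaffine tensor: one (rep_dim+1) x (rep_dim+1) bilinear form per label.
        self.U = nn.Parameter(torch.randn(num_labels, rep_dim + 1, rep_dim + 1) * 0.01)

    def forward(self, hidden):                      # hidden: (batch, seq, hidden_dim)
        a = self.aspect_mlp(hidden)                 # (batch, seq, rep_dim)
        o = self.opinion_mlp(hidden)
        ones = torch.ones(*a.shape[:2], 1)
        a = torch.cat([a, ones], dim=-1)            # append bias term
        o = torch.cat([o, ones], dim=-1)
        # scores[b, l, i, j] = a[b, i] @ U[l] @ o[b, j]
        return torch.einsum("bid,ldk,bjk->blij", a, self.U, o)

scores = BiaffinePairScorer()(torch.randn(2, 10, 768))
print(scores.shape)  # torch.Size([2, 4, 10, 10]): a label score per (i, j) pair
```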
http://arxiv.org/abs/2005.04938v1
A Deep Learning Approach for Automatic Detection of Fake News
Fake news detection is a very prominent and essential task in the field of journalism. This challenging problem has so far been addressed mainly in the domain of politics, but it can be even more challenging on a multi-domain platform. In this paper, we propose two effective deep learning models for solving the fake news detection problem for online news content from multiple domains. We evaluate our techniques on two recently released fake news detection datasets, namely FakeNews AMT and Celebrity. The proposed systems yield encouraging performance, outperforming the current handcrafted-feature-engineering-based state-of-the-art system by significant margins of 3.08% and 9.3% for the two models, respectively. In order to exploit the datasets available for the related tasks, we perform cross-domain analysis (i.e., a model trained on FakeNews AMT and tested on Celebrity, and vice versa) to explore the applicability of our systems across domains.
[ "Reasoning", "Fact & Claim Verification", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 8, 46, 17, 4 ]
SCOPUS_ID:85126259808
A Deep Learning Approach for Bangla Image Captioning System
Naturalness and generalization are two challenges in generating automated image captions with a system, and there is a lack of research addressing these challenges for image captioning in the Bangla language. Furthermore, the lexical resources for image captioning in Bangla are not adequate. This research makes an effort to experiment with and analyze the results of image captioning for the Bangla language using deep learning on a novel dataset, "Ovro1.1", which comprises 2000 images, each with a caption describing the image. Images of Bangla culture, lifestyle, festivals, etc. are recorded with captions in this dataset, which makes the training more immersive for the Bangla language. Two neural networks are used in this model: a Convolutional Neural Network (CNN) extracts the features of the images into a vector representation, and a Recurrent Neural Network (RNN) is trained on that vector representation to generate the text output as the caption. This is an encoder-decoder architecture in which the CNN acts as the encoder and the RNN acts as the decoder. The Inception V4 architecture is used as the CNN encoder, and Long Short-Term Memory (LSTM) cells are used for decoding. The model is trained with the existing "BanglaLekha ImageCaption" dataset along with our developed "Ovro1.1" dataset, and the results are evaluated. The model achieves a BLEU score of 0.4224 and produces reasonably coherent captions for the images. However, the captions are constrained to within 50 words, and the model does not work well with conceptual art, cartoons, or recognizing specific persons or famous places.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 47, 74 ]
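The CNN-encoder/LSTM-decoder pattern described in the Bangla captioning abstract is easy to sketch. Below, a pre-extracted image feature vector (standing in for Inception V4 output) initializes an LSTM that is teacher-forced over caption tokens; every name and dimension is an illustrative assumption.

```python
# Sketch of the encoder-decoder captioning idea (PyTorch): a CNN feature
# vector conditions an LSTM that emits caption tokens. The image features
# here are a stand-in for Inception V4 outputs; all names are illustrative.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, feat_dim=1536, vocab_size=8000, embed_dim=256, hidden=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden)   # image feature -> initial state
        self.init_c = nn.Linear(feat_dim, hidden)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, img_feat, caption_ids):
        h0 = self.init_h(img_feat).unsqueeze(0)     # (1, batch, hidden)
        c0 = self.init_c(img_feat).unsqueeze(0)
        emb = self.embed(caption_ids)               # (batch, steps, embed_dim)
        hidden_states, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden_states)              # logits per step

img_feat = torch.randn(4, 1536)                     # pretend CNN-encoded images
tokens = torch.randint(0, 8000, (4, 12))            # teacher-forced caption tokens
logits = CaptionDecoder()(img_feat, tokens)
print(logits.shape)                                 # torch.Size([4, 12, 8000])
```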
SCOPUS_ID:85116743553
A Deep Learning Approach for Classifying Vulnerability Descriptions Using Self Attention Based Neural Network
Cyber threat intelligence (CTI) refers to essential knowledge used by organizations to prevent or mitigate cyber attacks. Vulnerability databases such as CVE and NVD are crucial to cyber threat intelligence and also provide information leveraged in hundreds of security products worldwide. However, previous studies have shown that these vulnerability databases sometimes contain errors and inconsistencies which have to be checked manually by security professionals. Such inconsistencies could threaten the integrity of security products and hamper attack mitigation efforts. Hence, to assist the security community with more accurate and time-saving validation of vulnerability data, we propose an automated vulnerability classification system based on deep learning. Our proposed system utilizes a self-attention deep neural network (SA-DNN) model and a text mining approach to identify the vulnerability category from the description text contained within a report. The performance of the SA-DNN-based vulnerability classification system is evaluated using 134,091 vulnerability reports from the CVE Details website. The experiments performed demonstrate the effectiveness of our approach and show that the SA-DNN model outperforms SVM and other deep learning methods, i.e., CNN-LSTM and graph convolutional neural networks.
[ "Information Extraction & Text Mining", "Text Classification", "Robustness in NLP", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 3, 36, 58, 24, 4 ]
SCOPUS_ID:85116942156
A Deep Learning Approach for Dengue Tweet Classification
Dengue is one of the most widespread vector-borne diseases known today. According to the National Institute of Allergy and Infectious Diseases (NIAID), dengue fever has been identified as a threat to public health [1]. More than 33% of the world's population is at risk, including many cities in Asia. In recent years, the use of social media (from tweets to Facebook posts) in healthcare has risen tremendously, because social media gives suffering patients a platform to voice their growing needs and to connect with one another. Tweets are too short to supply sufficient word occurrences for traditional classification methods to give reliable results. Moreover, natural language is extremely complex, which makes the classification of health-related problems difficult. The performance of most conventional classification systems depends on an appropriate data representation and tremendous effort in feature engineering. Deep learning is a newer area of machine learning that performs automatic feature extraction. In this study, a Convolutional Neural Network (CNN) is used to classify dengue-related tweets extracted from Twitter into seven classes: 'Infected', 'Informative', 'Vaccination', 'News', 'Awareness', 'Concern' and 'Others'. Experimental results show that the deep learning algorithm achieves higher accuracy than machine learning algorithms such as Support Vector Machine (SVM), Naïve Bayes (NB) and Decision Tree (DT) classifiers.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
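A CNN classifier for tweets, as in the dengue study above, is commonly realized as a Kim-style architecture: parallel convolutions over n-gram windows with max-over-time pooling. This is a generic sketch under assumed dimensions, not the paper's exact network.

```python
# A minimal Kim-style text CNN for seven tweet classes (PyTorch).
# Vocabulary size, filter sizes, and dimensions are illustrative.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, num_classes=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Parallel convolutions over 3-, 4-, and 5-gram windows.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5)
        )
        self.fc = nn.Linear(300, num_classes)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        # Max-over-time pooling of each feature map, then concatenate.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))    # class logits

logits = TweetCNN()(torch.randint(1, 20000, (8, 30)))
print(logits.shape)                                 # torch.Size([8, 7])
```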
http://arxiv.org/abs/1711.05350v1
A Deep Learning Approach for Expert Identification in Question Answering Communities
In this paper, we describe an effective convolutional neural network framework for identifying experts in a question answering community. The approach uses a convolutional neural network and combines user feature representations with question feature representations to compute scores, such that the user with the highest score is identified as the expert on a given question. Unlike prior work, this method does not identify experts by measuring answer content quality; it requires only the question sentence and user embedding features. Remarkably, our model can be applied to different languages and different domains. The proposed framework is trained on two datasets: the first is Stack Overflow and the second is Zhihu. The Top-1 accuracy results of our experiments show that our framework outperforms the best baseline framework for expert identification.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
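The score-and-rank formulation in the expert identification abstract can be sketched as follows: a question encoding is concatenated with each candidate's user embedding and passed through a small MLP, and the highest-scoring user is predicted as the expert. The encoder and dimensions below are stand-ins.

```python
# Sketch of the expert-scoring idea: encode the question, combine it with
# each candidate's user embedding, and rank users by score. Illustrative only.
import torch
import torch.nn as nn

class ExpertScorer(nn.Module):
    def __init__(self, q_dim=128, u_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(q_dim + u_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, q_vec, user_vecs):
        # Pair the single question vector with every candidate user.
        q = q_vec.expand(user_vecs.size(0), -1)     # (num_users, q_dim)
        return self.mlp(torch.cat([q, user_vecs], dim=1)).squeeze(1)

q_vec = torch.randn(1, 128)        # e.g., a CNN encoding of the question text
users = torch.randn(50, 64)        # learned embeddings of 50 candidate users
scores = ExpertScorer()(q_vec, users)
print(scores.argmax().item())      # index of the predicted expert
```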
http://arxiv.org/abs/1803.00344v1
A Deep Learning Approach for Multimodal Deception Detection
Automatic deception detection is an important task that has gained momentum in computational linguistics due to its potential applications. In this paper, we propose a simple yet tough to beat multi-modal neural model for deception detection. By combining features from different modalities such as video, audio, and text along with Micro-Expression features, we show that detecting deception in real life videos can be more accurate. Experimental results on a dataset of real-life deception videos show that our model outperforms existing techniques for deception detection with an accuracy of 96.14% and ROC-AUC of 0.9799.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/2112.08554v1
A Deep Learning Approach for Ontology Enrichment from Unstructured Text
Information security in the cyber world is a major cause for concern, with a significant increase in the number of attack surfaces. Existing information on vulnerabilities, attacks, controls, and advisories available on the web provides an opportunity to represent knowledge and perform security analytics to mitigate some of the concerns. Representing security knowledge in the form of an ontology facilitates anomaly detection, threat intelligence, reasoning and relevance attribution of attacks, and much more. This necessitates dynamic and automated enrichment of information security ontologies. However, existing ontology enrichment algorithms based on natural language processing and ML models have issues with the contextual extraction of concepts in words, phrases, and sentences. This motivates the need for sequential deep learning architectures that traverse dependency paths in text and extract embedded vulnerabilities, threats, controls, products, and other security-related concepts and instances from learned path representations. In the proposed approach, Bidirectional LSTMs trained on a large DBpedia dataset and a 2.8 GB Wikipedia corpus, along with the Universal Sentence Encoder, are deployed to enrich an ISO 27001-based information security ontology. The model is trained and tested in a high-performance computing (HPC) environment to handle the dimensionality of Wiki text. The approach yielded a test accuracy of over 80% when tested with concepts knocked out of the ontology and with web page instances, validating its robustness.
[ "Knowledge Representation", "Semantic Text Processing", "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 18, 72, 58, 4 ]
SCOPUS_ID:85080109534
A Deep Learning Approach for Optical Character Recognition of Handwritten Devanagari Script
Handwritten character recognition is one of the most challenging and demanding areas of interest for researchers in the domains of pattern recognition and image processing. Many researchers have worked on the recognition of characters of different languages, but comparatively little work has been carried out for the Devanagari script. In the past few years, however, work in this direction has increased to a great extent. Handwritten Devanagari character recognition is more challenging than the recognition of Roman characters. The complexity is mostly due to the presence of a header line, known as the shirorekha, that connects Devanagari characters to form a word. The presence of this header line makes the character segmentation process more difficult. The uniqueness of every individual's handwriting style adds to the complexity. In this paper, we propose the development of a Convolutional Neural Network (CNN) based Optical Character Recognition (OCR) system for handwritten Devanagari script, which is observed to recognize the characters accurately.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85060014650
A Deep Learning Approach for Part-of-Speech Tagging in Nepali Language
Part-of-Speech (POS) tagging is a fundamental task in various natural language processing (NLP) applications such as speech recognition, information extraction, and retrieval. POS tagging involves the annotation of an appropriate tag for each token in the corpus based on its context and the syntax of the language. In computational linguistics, an optimal POS tagger is of paramount importance, since tagging errors can critically affect the performance of complex NLP systems. Developing an efficient POS tagger for morphologically rich languages like Nepali is a challenging task. In this paper, a deep learning based POS tagger for Nepali text is proposed, built using Recurrent Neural Networks (RNN), Long Short-Term Memory networks (LSTM), Gated Recurrent Units (GRU), and their bidirectional variants. Performance metrics such as accuracy, precision, recall, and F1-score were chosen for model evaluation. It is observed from the results that our model shows significant improvement and outperforms state-of-the-art POS taggers with more than 99% accuracy.
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
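A bidirectional LSTM tagger of the kind described above assigns one tag distribution per token; the sketch below shows the core of such a model in PyTorch, with a placeholder vocabulary and tagset rather than Nepali-specific resources.

```python
# Minimal bidirectional LSTM tagger (PyTorch); tagset size and
# dimensions are placeholders, not Nepali-specific.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden=128, num_tags=40):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)  # one tag score set per token

    def forward(self, token_ids):                   # (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)                     # (batch, seq_len, num_tags)

tags = BiLSTMTagger()(torch.randint(1, 10000, (2, 15))).argmax(dim=-1)
print(tags.shape)                                   # torch.Size([2, 15])
```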
SCOPUS_ID:85133518598
A Deep Learning Approach for Plagiarism Detection System Using BERT
Natural language processing has changed with the advent of deep learning algorithms. Machine learning algorithms process numerical data; therefore, categorical data are converted into equivalent vectors for processing by machines. Word embeddings are real-valued vector representations of words that store semantic information. These embeddings are a significant tool of natural language processing, used in various tasks such as named entity recognition and parsing. Authorship attribution is a major problem in natural language processing. A framework for the identification of authorship attribution has two layers of processing: attribution (feature selection) and verification (classification). A solution to the problem is to obtain a similarity score over the content. Similarity between contents is identified by a plagiarism detection system by finding the PDS score of the given documents. This paper proposes a plagiarism detection algorithm using an explicit semantic detection approach. The system obtains contextualized word embeddings using a pre-trained BERT model; the STS-Benchmark dataset is used for fine-tuning the BERT model. The proposed algorithm compares the word embeddings of the suspicious content with the reference collection using a sentence similarity function. The experiments were performed using Python and the Keras deep learning framework. The research shows that the results obtained through experimentation improve the efficiency of the proposed system compared to existing systems.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
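The comparison step, matching suspicious content against a reference collection via sentence similarity over contextualized embeddings, can be approximated with the sentence-transformers library. The model name, example texts, and threshold below are assumptions for illustration, not the paper's fine-tuned BERT setup.

```python
# One way to realize the comparison step: embed the suspicious sentence and
# the reference collection, then flag high cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the fine-tuned BERT

references = [
    "The cat sat quietly on the mat near the window.",
    "Deep learning has transformed natural language processing.",
]
suspicious = "Natural language processing was transformed by deep learning."

ref_emb = model.encode(references, convert_to_tensor=True)
sus_emb = model.encode(suspicious, convert_to_tensor=True)

scores = util.cos_sim(sus_emb, ref_emb)[0]       # similarity to each reference
for ref, score in zip(references, scores):
    flag = "PLAGIARISM?" if score > 0.8 else "ok"   # 0.8 threshold is assumed
    print(f"{score:.2f}  {flag}  {ref}")
```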
SCOPUS_ID:85137266042
A Deep Learning Approach for Public Sentiment Analysis in COVID-19 Pandemic
Sentiment analysis is the process of extracting opinions into positive, negative, or neutral categories from a pool of text using Natural Language Processing (NLP). In the recent era, our society has been swiftly moving toward virtual platforms by joining virtual communities. Social media such as Facebook, Twitter, and WhatsApp are playing a vital role in developing virtual communities. A pandemic situation like COVID-19 accelerated people's involvement in social sites to express their concerns or views regarding crucial issues. Mining public sentiment from these social sites, especially from Twitter, will help various organizations understand people's thoughts about the COVID-19 pandemic and take the necessary steps. Analyzing public sentiment from COVID-19 tweets is the main objective of our study. We propose a deep learning architecture based on the Bidirectional Gated Recurrent Unit (BiGRU) to accomplish this objective. We developed two different corpora from unlabelled and labeled COVID-19 tweets and used the unlabelled corpus to build an improved labeled corpus. Our proposed architecture achieves a better accuracy of 87% on the improved labeled corpus for mining public sentiment from COVID-19 tweets.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85150217712
A Deep Learning Approach for Recognizing Textual Emotion from Bengali-English Code-Mixed Data
Emotion detection is a computational approach for finding the distinct emotion or feeling of an individual. Although Bengali is a low-resource language, the amount of Bengali-English code-mixed textual data has grown significantly because of the recent widespread use of social media applications among Bengali users. Gradually, the classification of emotions in Bengali-English code-mixed data has become a crucial challenge for applications in e-commerce, healthcare, reduction of suicide attempts, and crime detection. Nevertheless, the lack of Bengali language processing techniques and Bengali-English datasets has made emotion recognition more challenging. This research work offers a deep learning based approach for classifying emotions from Bengali-English code-mixed data into six basic categories: disgust, sadness, joy, anger, fear, and surprise. Due to the lack of a suitable dataset, a Bengali-English code-mixed corpus consisting of 10,221 sentences was created. In order to identify the best features, this work investigates several word embedding techniques, including Word2Vec, FastText, and the Keras embedding layer. Different machine learning and deep learning based algorithms, including the proposed technique using Word2Vec and BiLSTM, are applied to the developed corpus. In order to find the best technique, a comparative analysis among all the methods is presented, revealing that BiLSTM with the Word2Vec word embedding technique outperforms all other models, achieving the highest accuracy of 76.1%.
[ "Language Models", "Programming Languages in NLP", "Semantic Text Processing", "Representation Learning", "Multimodality" ]
[ 52, 55, 72, 12, 74 ]
SCOPUS_ID:85103286573
A Deep Learning Approach for Robust Detection of Bots in Twitter Using Transformers
During the last decades, the volume of multimedia content posted on social networks has grown exponentially, and such information is immediately propagated and consumed by a significant number of users. In this scenario, the disruption caused by fake news providers and bot accounts spreading propaganda and sensitive content throughout the network has fostered applied research into automatically measuring the reliability of social network accounts via Artificial Intelligence (AI). In this paper, we present a multilingual approach to the bot identification task in Twitter via Deep Learning (DL) approaches, to support end-users when checking the credibility of a certain Twitter account. To do so, several experiments were conducted using state-of-the-art multilingual language models to generate an encoding of the text-based features of the user account, which are later concatenated with the rest of the metadata to build a potential input vector on top of a dense network denoted as Bot-DenseNet. Consequently, this paper addresses the language constraint from previous studies, where the encoding of the user account considered either the metadata information alone or the metadata information together with some basic semantic text features. Moreover, the Bot-DenseNet produces a low-dimensional representation of the user account which can be used for any application within the Information Retrieval (IR) framework.
[ "Language Models", "Semantic Text Processing", "Robustness in NLP", "Ethical NLP", "Responsible & Trustworthy NLP", "Reasoning", "Fact & Claim Verification", "Multilinguality" ]
[ 52, 72, 58, 17, 4, 8, 46, 0 ]
SCOPUS_ID:85141868073
A Deep Learning Approach for Robust, Multi-oriented, and Curved Text Detection
Automatic text localization and segmentation in natural environments with vertical or curved texts are core elements of numerous tasks, including the identification of vehicles, self-driving cars, and preparing significant information from real scenes for visually impaired people. Nevertheless, texts in real environments appear at a wide variety of angles, profiles, dimensions, and colors, which makes them arduous to detect. In this paper, a new framework based on a convolutional neural network (CNN) is introduced to achieve high efficiency in detecting text even in the presence of a complex background. Thanks to a new inception layer and an improved ReLU layer, excellent detection results are obtained even against complex backgrounds. At first, four new m.ReLU layers are employed to explore low-level visual features. The new m.ReLU building block and inception layer are optimized to maximally capture vital information. The effect of stacking inception layers (kernels with dimensions of 3 × 3 or bigger) is explored, and it is demonstrated that this strategy captures varying-sized texts more successfully than a linear chain of convolution layers (Conv layers). The suggested text detection algorithm is evaluated on four well-known databases, namely ICDAR 2013, ICDAR 2015, ICDAR 2017, and ICDAR 2019. Text detection results on all of the mentioned databases, with a highest recall of 94.2%, precision of 95.6%, and F-score of 94.8%, illustrate that the developed strategy outperforms the state-of-the-art frameworks.
[ "Visual Data in NLP", "Syntactic Text Processing", "Robustness in NLP", "Responsible & Trustworthy NLP", "Text Segmentation", "Multimodality" ]
[ 20, 15, 58, 4, 21, 74 ]
SCOPUS_ID:85128477505
A Deep Learning Approach for Sentiment Analysis of COVID-19 Reviews
User-generated multi-media content, such as images, text, videos, and speech, has recently become more popular on social media sites as a means for people to share their ideas and opinions. One of the most popular social media sites for gauging public sentiment towards events that occurred during the COVID-19 period is Twitter, because Twitter posts are short and constantly being generated. This paper presents a deep learning approach for sentiment analysis of Twitter data related to COVID-19 reviews. The proposed algorithm is based on an LSTM-RNN network with enhanced feature weighting via attention layers; it uses an enhanced feature transformation framework built on the attention mechanism. A total of four class labels (sad, joy, fear, and anger) from publicly available Twitter data posted in the Kaggle database were used in this study. Based on the use of attention layers with the existing LSTM-RNN approach, the proposed deep learning approach significantly improved the performance metrics, with an increase of 20% in accuracy, 10% to 12% in precision, and 12–13% in recall as compared with the current approaches. Out of a total of 179,108 COVID-19-related tweets, tweets with positive, neutral, and negative sentiments were found to account for 45%, 30%, and 25%, respectively. This shows that the proposed deep learning approach is efficient and practical and can be easily implemented for sentiment classification of COVID-19 reviews.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
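Attention-based feature weighting over LSTM states, as described in the COVID-19 sentiment abstract, usually means scoring each timestep and pooling with the resulting softmax weights. The sketch below shows that pattern with illustrative dimensions and the four class labels mentioned above.

```python
# Sketch of attention-weighted pooling over LSTM states (PyTorch);
# vocabulary, dimensions, and class count are illustrative.
import torch
import torch.nn as nn

class AttentiveLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=128, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)            # scores each timestep
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))        # (batch, seq, hidden)
        weights = torch.softmax(self.attn(states), dim=1)   # (batch, seq, 1)
        context = (weights * states).sum(dim=1)             # weighted average
        return self.out(context)

logits = AttentiveLSTMClassifier()(torch.randint(1, 20000, (8, 40)))
print(logits.shape)  # torch.Size([8, 4]): sad / joy / fear / anger, for example
```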
SCOPUS_ID:85102411519
A Deep Learning Approach for Text Segmentation in Document Analysis
Text segmentation plays an essential role in both page segmentation and document reading comprehension. In this manuscript, we present a system to separate the page into homogeneous regions that can serve for information extraction. Our approach is based on the U-Net network to extract text lines; the text lines are then read by an OCR system developed on the basis of a Convolutional Recurrent Neural Network (CRNN). We group the text lines and OCR results simultaneously based on the idea of the DBSCAN algorithm. Our system also contains support modules, such as template matching and deskewing, to improve performance. To materialize and evaluate these ideas, we built a complete Vietnamese dataset for training and testing. As a result, we obtain over 90% accuracy in both the Vietnamese and English languages.
[ "Visual Data in NLP", "Text Segmentation", "Syntactic Text Processing", "Multimodality" ]
[ 20, 21, 15, 74 ]
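The DBSCAN-inspired grouping of text lines into homogeneous regions can be illustrated with scikit-learn: cluster the detected line boxes by the proximity of their centers. The coordinates and eps radius below are invented for illustration.

```python
# Sketch of the grouping step with DBSCAN over text-line box centers:
# lines whose centers sit close together fall into one region.
import numpy as np
from sklearn.cluster import DBSCAN

# (x_center, y_center) of detected text-line boxes, in pixels
centers = np.array([
    [100, 50], [105, 80], [98, 110],       # a left-column paragraph
    [400, 55], [405, 85],                  # a right-column block
    [250, 600],                            # an isolated footer line
])

labels = DBSCAN(eps=60, min_samples=1).fit_predict(centers)
for region in np.unique(labels):
    members = np.where(labels == region)[0]
    print(f"region {region}: text lines {members.tolist()}")
```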
SCOPUS_ID:85092637841
A Deep Learning Approach of Collaborative Filtering to Recommender System with Opinion Mining
To produce good-quality recommendations for large or enterprise-scale problems, a competent approach to recommender systems is required. This paper presents such an approach, which first generates a text score based on users' reviews with the help of opinion mining. It then feeds ratings corresponding to the text scores to a Convolutional Neural Network (CNN). The CNN learns and takes the dot product of the user and product matrices; it is a special kind of feed-forward deep learning network used to obtain better predictions in a product recommender system. The work done in this paper improves accuracy and user satisfaction to a great extent using the CNN. It also helps e-commerce companies increase revenue by recommending the closest products to users.
[ "Opinion Mining", "Sentiment Analysis" ]
[ 49, 78 ]
http://arxiv.org/abs/1409.8558v1
A Deep Learning Approach to Data-driven Parameterizations for Statistical Parametric Speech Synthesis
Nearly all Statistical Parametric Speech Synthesizers today use Mel Cepstral coefficients as the vocal tract parameterization of the speech signal. Mel Cepstral coefficients were never intended to work in a parametric speech synthesis framework, but as yet, there has been little success in creating a better parameterization that is more suited to synthesis. In this paper, we use deep learning algorithms to investigate a data-driven parameterization technique that is designed for the specific requirements of synthesis. We create an invertible, low-dimensional, noise-robust encoding of the Mel Log Spectrum by training a tapered Stacked Denoising Autoencoder (SDA). This SDA is then unwrapped and used as the initialization for a Multi-Layer Perceptron (MLP). The MLP is fine-tuned by training it to reconstruct the input at the output layer. This MLP is then split down the middle to form encoding and decoding networks. These networks produce a parameterization of the Mel Log Spectrum that is intended to better fulfill the requirements of synthesis. Results are reported for experiments conducted using this resulting parameterization with the ClusterGen speech synthesizer.
[ "Responsible & Trustworthy NLP", "Multimodality", "Speech & Audio in NLP", "Green & Sustainable NLP" ]
[ 4, 74, 70, 68 ]
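The parameterization pipeline above, train a tapered denoising autoencoder and then split it into encoder and decoder networks, can be sketched compactly in PyTorch. Frame dimensions, depths, and training settings here are illustrative assumptions.

```python
# Sketch of the parameterization idea: train a tapered denoising autoencoder
# on spectral frames, then split it into encoder and decoder halves.
# Dimensions are illustrative stand-ins for Mel Log Spectrum frames.
import torch
import torch.nn as nn

spec_dim, code_dim = 128, 32
autoencoder = nn.Sequential(
    nn.Linear(spec_dim, 64), nn.Tanh(),     # tapered encoder half
    nn.Linear(64, code_dim), nn.Tanh(),
    nn.Linear(code_dim, 64), nn.Tanh(),     # mirrored decoder half
    nn.Linear(64, spec_dim),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

frames = torch.randn(256, spec_dim)         # pretend spectral frames
for _ in range(100):
    noisy = frames + 0.1 * torch.randn_like(frames)   # denoising corruption
    loss = nn.functional.mse_loss(autoencoder(noisy), frames)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# "Split down the middle": first half encodes, second half decodes.
encoder, decoder = autoencoder[:4], autoencoder[4:]
code = encoder(frames)                      # low-dimensional parameterization
print(code.shape, nn.functional.mse_loss(decoder(code), frames).item())
```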
SCOPUS_ID:85105957301
A Deep Learning Approach to Distinguish 2019-nCoV and SARS-CoV Sequences
This paper presents a classification of protein sequences obtained from the 2019 Novel Coronavirus (2019-nCoV) and the 2003 SARS Coronavirus (SARS-CoV) using natural language processing. Very recent research has indicated that the 2019-nCoV bears almost 79% sequence identity to the SARS-CoV but is sufficiently unique to be considered the 7th entry in the list of human-infecting coronaviruses. 181 protein sequences of the 2019-nCoV (the number available at the time due to the limited number of cases) and 843 of the SARS-CoV were extracted from NCBI's GenBank. These were split into dimers, encoded into numerical features, and trained on two deep learning models, the CNN and the RNN. Results indicate an accuracy of 97.65% from both models, providing useful insights into sequence analysis of the 2019-nCoV and its differentiation from other coronaviruses.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
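The dimer encoding step described above is straightforward to show: each protein sequence is split into overlapping 2-mers over the 20-letter amino acid alphabet, giving 400 possible features. The toy sequence below is illustrative, not a GenBank record.

```python
# Split a protein sequence into overlapping 2-mers and map them to
# integer ids, ready for an embedding layer in a CNN or RNN.
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIMER_INDEX = {a + b: i for i, (a, b) in enumerate(product(AMINO_ACIDS, repeat=2))}

def encode_dimers(sequence: str) -> list[int]:
    """Return integer ids of overlapping dimers, e.g. 'MFVF' -> MF, FV, VF."""
    return [DIMER_INDEX[sequence[i:i + 2]] for i in range(len(sequence) - 1)]

print(len(DIMER_INDEX))            # 400 possible dimers
print(encode_dimers("MFVFLVLLP"))  # feature ids for a toy sequence
```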
SCOPUS_ID:85097298252
A Deep Learning Approach to Geographical Candidate Selection through Toponym Matching
Recognizing toponyms and resolving them to their real-world referents is required to provide advanced semantic access to textual data. This process is often hindered by the high degree of variation in toponyms. Candidate selection is the task of identifying the potential entities that can be referred to by a previously recognized toponym. While it has traditionally received little attention, candidate selection has a significant impact on downstream tasks (i.e. entity resolution), especially in noisy or non-standard text. In this paper, we introduce a deep learning method for candidate selection through toponym matching, using state-of-the-art neural network architectures. We perform an intrinsic toponym matching evaluation based on several datasets, which cover various challenging scenarios (cross-lingual and regional variations, as well as OCR errors) and assess its performance in the context of geographical candidate selection in English and Spanish.
[ "Cross-Lingual Transfer", "Multilinguality" ]
[ 19, 0 ]
http://arxiv.org/abs/2201.02735v1
A Deep Learning Approach to Integrate Human-Level Understanding in a Chatbot
In recent times, a large number of people have become involved in establishing their own businesses. Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply in less than a fraction of a second. Though chatbots perform well in task-oriented activities, in most cases they fail to understand personalized opinions, statements, or even queries, which later impacts the organization through poor service management. A lack of understanding capability in bots makes humans lose interest in continuing conversations with them. Usually, chatbots give absurd responses when they are unable to interpret a user's text accurately. By extracting client reviews from conversations with chatbots, organizations can reduce the major gap in understanding between users and the chatbot and improve the quality of their products and services. Thus, in our research we incorporated all the key elements that are necessary for a chatbot to analyse and understand an input text precisely and accurately. We performed sentiment analysis, emotion detection, intent classification, and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence. The efficiency of our approach is demonstrated by the detailed analysis.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
https://aclanthology.org//W09-0438/
A Deep Learning Approach to Machine Transliteration
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85062547231
A Deep Learning Approach to Sentiment Analysis in Turkish
This study proposes using deep learning for sentiment analysis in Turkish. Traditional machine learning methods such as logistic regression or Naive Bayes are often applied to this problem; however, their applicability is limited since they use a bag-of-words model that does not take into account the order of the words in a sentence. In this study, we compare these approaches with a modern technique, recurrent neural networks with LSTM units, on a dataset crawled from Turkish shopping and movie websites. Our results show that RNN-based approaches improve the classification accuracy.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85138252336
A Deep Learning Approach to UML Class Diagrams Discovery from Textual Specifications of Software Systems
Software engineering has developed tools to streamline developing software systems. Among these, the model-driven architecture proposes going from specifications written in natural language to the application code via two intermediate models: a platform-independent model and a platform-specific model. Since the models and code are written in a semi-formal language, it is possible to switch between them automatically, making the process automatable over three-quarters of its length. The first step, which creates a platform-independent model based on specifications, cannot be automated in the current state because of the complexity of natural language. We hypothesize that deep learning techniques developed within the natural language processing framework allow us to consider the automation of this step and produce a UML class diagram. We believe that entity detection can help us identify key concepts of the class diagram, that relation classification can help us identify links between concepts, and that coreference resolution can help us gather all the information relating to a concept. That’s why we propose an architecture that jointly resolves these three tasks and constructs the class diagram. We also provide an annotated dataset to solve these tasks. Our architecture, composed of a BERT-type encoding layer and three feed-forward-neural-network-type decoding layers, makes it possible to produce a simple class diagram that is imperfect but consistent. Therefore, the architecture validates the approach and presents the first encouraging results.
[ "Coreference Resolution", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 13, 24, 36, 3 ]
SCOPUS_ID:85136925142
A Deep Learning Approach to Solving Morphological Analogies
Analogical proportions are statements of the form “A is to B as C is to D”. They support analogical inference and provide a logical framework to address learning, transfer, and explainability concerns. This logical framework finds useful applications in AI and natural language processing (NLP). In this paper, we address the problem of solving morphological analogies using a retrieval approach named ANNr. Our deep learning framework encodes structural properties of analogical proportions and relies on a specifically designed embedding model capturing morphological characteristics of words. We demonstrate that ANNr outperforms the state of the art on 11 languages. We analyze ANNr results for Navajo and Georgian, languages on which the model performs worst and best, to explore potential correlations between the mistakes of ANNr and linguistic properties.
[ "Semantic Text Processing", "Morphology", "Syntactic Text Processing", "Representation Learning", "Reasoning" ]
[ 72, 73, 15, 12, 8 ]
SCOPUS_ID:85078340708
A Deep Learning Approach with Deep Contextualized Word Representations for Chemical-Protein Interaction Extraction from Biomedical Literature
Mining interactions between chemicals and proteins/genes is of crucial relevance for clinical medicine, adverse drug effects, and pharmacological research. Although chemical-protein interactions (CPIs) can be manually extracted, this process is expensive and time-consuming. Therefore, it is of considerable significance to automatically extract CPIs from biomedical literature. Currently, the popular methods for CPI extraction are based on deep learning to avoid sophisticated handcrafted features derived from linguistic analyses. However, the performance of existing methods is usually unsatisfactory. The reasons may be that (1) traditional word-embedding methods cannot adequately model context information, and (2) it is difficult to effectively distinguish which words play critical roles in long biomedical sentences. In this study, we propose a novel Deep-contextualized Stacked Bi-LSTM model (DS-LSTM) to tackle the drawbacks of existing methods. Specifically, our model mainly consists of three components: deep contextualized word representations, the entity attention mechanism, and stacked bidirectional long short-term memory networks (Bi-LSTMs). The deep contextualized word representations are introduced to effectively model complex characteristics of word use (e.g., syntax and semantics) and the variations of these words in the context (i.e., to model polysemy), thereby generating context information. The entity attention mechanism is applied to prioritize the weights of words associated with target entities to distinguish which words play critical roles in long biomedical sentences. We evaluate our model on the CHEMPROT corpus. Our approach achieves a micro-averaged F-score of 69.44%, which is significantly higher than existing state-of-the-art methods. Experimental results show that our approach can adequately model context information, effectively distinguish which words play critical roles in long biomedical sentences and, therefore, improve the overall performance.
[ "Representation Learning", "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 12, 52, 72, 3 ]
SCOPUS_ID:85092112146
A Deep Learning Architecture with Word Embeddings to Classify Sentiment in Twitter
Social media networks are one of the main platforms for expressing our feelings. The emotions we put into text tell a lot about our attitude toward a topic; therefore, text analysis is needed for detecting emotions in many fields. This paper introduces a deep learning model that classifies sentiment in tweets using different types of word embeddings. The main component of our model is the Convolutional Neural Network (CNN), and the main features used are word embeddings. Trials are run on both randomly initialized and pretrained word embeddings. The pretrained word embeddings come in different variants, such as Word2Vec, GloVe, and fastText. The model consists of three CNN streams that are concatenated and followed by a fully connected layer; each stream contains only one convolutional layer and one max-pooling layer. The model detects positive and negative emotions in the Stanford Twitter Sentiment (STS) dataset. The accuracy achieved is 78.5% with randomly initialized word embeddings and reaches a maximum of 84.9% with Word2Vec embeddings. The model not only proves that randomly initialized word embeddings can achieve good accuracy, it also shows the power of pretrained word embeddings, which help achieve a higher, competitive accuracy in sentiment classification.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 12, 78, 36, 3 ]
SCOPUS_ID:85115062402
A Deep Learning Based Approach for Classification of News as Real or Fake
In the recent past, the growth of both printed and digital media has greatly facilitated business and society. On account of the reach of social media, even the smallest news item or event can spread like wildfire. Often, the news gets amplified and drastically distorted, resulting in the generation of fake news. This fake news not only misleads the masses but also causes severe impacts in the real world. The rapid growth of fake news and its erosion of trust in the judiciary, democracy, and the public has made the development of a fake news detection system vital. In this paper, we propose a model to detect fake news, using deep learning algorithms to predict whether given data is real or fake. The experiments were executed using various deep learning algorithms such as Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) and Bidirectional LSTMs (Bi-LSTM), and we compared the results obtained. The model proposed here achieves a high accuracy of about 99%.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 24, 3, 17, 8, 46, 36, 4 ]
SCOPUS_ID:85135143854
A Deep Learning Based Approach to Structural Function Recognition of Scientific Literature Abstracts
[Purpose/Significance] Abstracts of scientific documents are often composed of sections with specific functions. Using deep learning to identify the structural functions of abstracts of scientific documents is conducive to in-depth analysis of the documents. [Method/Process] In this paper, identifying the structural functions of abstracts of scientific documents is transformed into a text classification problem, and the structural functions are divided into four categories: introduction, methods, results, and conclusions (IMRC). Based on the text content and contextual features of abstract sentences, classifiers are constructed on deep learning models such as BERT, BERT-BiLSTM, BERT-TextCNN, and ERNIE to automatically identify the structural functions of abstracts of scientific documents. [Results/Conclusions] Experiments are carried out on a dataset of 3,130 articles in the field of eHealth. The results show that the indicator scores for ERNIE are higher than those of the other models. The BERT-TextCNN model is better at dealing with short texts, while the BERT-BiLSTM model is better at handling long sentences. The method proposed in this paper is helpful for the fine-grained functional understanding of scientific literature abstracts, and is of great significance for the in-depth mining of scientific literature and literature-based knowledge discovery.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
http://arxiv.org/abs/1910.06707v1
A Deep Learning Based Chatbot for Campus Psychological Therapy
In this paper, we propose Evebot, an innovative, sequence to sequence (Seq2seq) based, fully generative conversational system for the diagnosis of negative emotions and prevention of depression through positively suggestive responses. The system consists of an assembly of deep-learning based models, including Bi-LSTM based model for detecting negative emotions of users and obtaining psychological counselling related corpus for training the chatbot, anti-language sequence to sequence neural network, and maximum mutual information (MMI) model. As adolescents are reluctant to show their negative emotions in physical interaction, traditional methods of emotion analysis and comforting methods may not work. Therefore, this system puts emphasis on using virtual platform to detect signs of depression or anxiety, channel adolescents' stress and mood, and thus prevent the emergence of mental illness. We launched the integrated chatbot system onto an online platform for real-world campus applications. Through a one-month user study, we observe better results in the increase in positivity than other public chatbots in the control group.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
SCOPUS_ID:85108009883
A Deep Learning Based Method for Structuring the Chinese Pathological Reports of Lung Specimen
As a kind of electronic report in text form, the Chinese pathology report of a lung specimen contains a large amount of information that is important for clinicians' further analysis and mining. However, varied expressions and the lack of a fixed format increase the difficulty of extracting and standardizing this information. In this paper, we focus on the extraction of lung lesion locations and the corresponding diagnoses from these reports. To overcome these difficulties, a structured processing method based on deep learning and the idea of part-of-speech (POS) tagging is proposed. Firstly, the data of lung pathology specimen reports are preprocessed to normalize the medical terms. Secondly, a bidirectional Long Short-Term Memory (Bi-LSTM) neural network is adopted to extract the information on lesion locations and pathological diagnoses from each report. Finally, the obtained information is screened by an information filtering method to generate the final structured results. Experimental results on self-constructed datasets indicate that the proposed method is beneficial for structuring pathology reports of lung specimens and obtains state-of-the-art results.
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
SCOPUS_ID:85127120709
A Deep Learning Based Methodology for Information Extraction from Documents in Robotic Process Automation
In recent years, thanks to Optical Character Recognition techniques and technologies for dealing with low scan quality and complex document structure, there has been a continuous evolution and automation of digitization processes to enable Robotic Process Automation. In this paper we propose a methodology based on both deep learning algorithms (such as generative adversarial networks) and statistical tools (such as the Hough transform) for the creation of a digitization system capable of managing critical issues such as low scan quality and complex document structure. The methodology is composed of 5 modules that manage the poor quality of scanned documents, identify the template and detect tables in documents, extract and organize the text into an easy-to-query schema, and perform queries on it through search patterns. For each module, different state-of-the-art algorithms are compared and analyzed, with the aim of identifying the best solution to be adopted in an industrial environment. The implemented methodology is measured against the business needs over real data by comparing the extracted information with the target values, and shows a performance of 90% in terms of the Gestalt Pattern Matching measure.
[ "Visual Data in NLP", "Multimodality", "Information Extraction & Text Mining" ]
[ 20, 74, 3 ]
SCOPUS_ID:85077006773
A Deep Learning Based Reasoner for Global Consistency in Named Entity Recognition
Named Entity Recognition (NER) is a basic task of Natural Language Processing (NLP) and a challenging one in a variety of specialized applications. This paper aims to solve the global consistency problem of NER and to improve performance. Inspired by the human reading process, we propose a NE-Reasoner model, which combines deep neural networks and a memory network to identify named entities with global consistency. The advantages of the model are: (1) a multi-layer deep architecture, allowing it to bootstrap the recognized entity set from coarse to fine; (2) a candidate-pool memory mechanism, allowing it to exchange identified entity information between layers; (3) a reasoner, combining the encoder-decoder and cached information to infer global entities. The experimental results show that the NE-Reasoner can identify ambiguous words and named entities that were rarely or never seen before.
[ "Named Entity Recognition", "Reasoning", "Information Extraction & Text Mining" ]
[ 34, 8, 3 ]
SCOPUS_ID:85098255765
A Deep Learning Classification Approach for Short Messages Sentiment Analysis
In today's world, people communicate with each other through calls and social media applications such as WhatsApp, Facebook, and Twitter. These applications generate social media data from which we can determine which sentences carry positive and which carry negative sentiment. In this work, sentiment analysis is performed with deep learning methods, namely deep neural networks, on a dataset of Hindi tweets, classifying Twitter posts under positive or negative sentiment polarity.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
SCOPUS_ID:85141170667
A Deep Learning Ensemble Hate Speech Detection Approach for Sinhala Tweets
We live in an era where social media platforms play a key role in society. These platforms support most native languages, which has enabled people to express their opinions conveniently. It is also very common to observe people expressing very hateful opinions on social media platforms. Several studies have been carried out in this area for the Sinhala language with traditional machine learning models, and none of them has shown promising results. Further, current approaches lag far behind the latest techniques used for high-resource languages. Hence, this study presents a deep learning based approach to hate speech detection, which has shown outstanding results for other languages. Moreover, a deep learning ensemble was constructed from these models to evaluate performance improvements. The models were trained and tested on a newly created dataset collected using the Twitter API, and model generalizability was further tested by applying the models to a completely new dataset. The results show that the proposed approach outperforms traditional machine learning models and generalizes well. Finally, experimentation with extra features reveals that they have a positive impact on performance.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
SCOPUS_ID:85100578498
A Deep Learning Framework for Automatic Detection of Hate Speech Embedded in Arabic Tweets
In this paper, we investigate the ability of CNN, CNN-LSTM, and BiLSTM-CNN deep learning networks to automatically classify or discover hateful content posted on social media. These deep networks were trained and tested using the ArHS dataset, which consists of 9833 tweets annotated to suit hate speech detection in Arabic. To the best of our knowledge, this is the largest Arabic dataset that handles the subclasses of hate speech. Moreover, we investigate the performance on two existing Arabic hate speech datasets along with the ArHS dataset, resulting in a combined dataset of 23,678 tweets. Three types of experiments are reported: first, binary classification of tweets into Hate or Normal; second, ternary classification of tweets into Hate, Abusive, or Normal; and lastly, multi-class classification of tweets into Misogyny, Racism, Religious Discrimination, Abusive, and Normal. Using the ArHS dataset, in the binary classification task the CNN model outperformed the other models and achieved an accuracy of 81%. In the ternary classification task, both the CNN and BiLSTM-CNN models achieved the best accuracy of 74%. Lastly, in the multi-class classification task, the CNN-LSTM and BiLSTM-CNN models both achieved the best results with an accuracy of 73%. On the combined dataset, in the binary classification task, the BiLSTM-CNN achieved an accuracy of 73%. In the ternary classification task, the BiLSTM-CNN achieved the best accuracy of 67%. Lastly, in the multi-class classification task, the CNN-LSTM and BiLSTM-CNN achieved the best accuracy of 65%.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Ethical NLP", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 24, 3, 17, 36, 4 ]
SCOPUS_ID:85018289905
A Deep Learning Framework for Coreference Resolution Based on Convolutional Neural Network
Recently, much research has shown that word embeddings are able to represent information from word-related contexts or nearest-neighbor words, and they have thus been applied successfully in many NLP tasks. In this paper, we propose a convolutional neural network model to extend word embeddings to mention/antecedent representations. These representations are obtained by convolving neighboring word embeddings and other contextual information for coreference resolution. We evaluate our system on the English portion of the CoNLL 2012 Shared Task dataset and show that the proposed system achieves performance competitive with state-of-the-art approaches. We also show that our proposed model especially improves the coreference resolution of long spans significantly.
[ "Coreference Resolution", "Semantic Text Processing", "Information Extraction & Text Mining", "Representation Learning" ]
[ 13, 72, 3, 12 ]
SCOPUS_ID:85130354841
A Deep Learning Framework for Detection of COVID-19 Fake News on Social Media Platforms
The fast growth of technology in online communication and social media platforms alleviated numerous difficulties during the COVID-19 epidemic. However, it was utilized to propagate falsehoods and misleading information about the disease and the vaccination. In this study, we investigate the ability of deep neural networks, namely, Long Short-Term Memory (LSTM), Bi-directional LSTM, Convolutional Neural Network (CNN), and a hybrid of CNN and LSTM networks, to automatically classify and identify fake news content related to the COVID-19 pandemic posted on social media platforms. These deep neural networks have been trained and tested using the “COVID-19 Fake News” dataset, which contains 21,379 real and fake news instances for the COVID-19 pandemic and its vaccines. The real news data were collected from independent and internationally reliable institutions on the web, such as the World Health Organization (WHO), the International Committee of the Red Cross (ICRC), the United Nations (UN), the United Nations Children’s Fund (UNICEF), and their official accounts on Twitter. The fake news data were collected from different fact-checking websites (such as Snopes, PolitiFact, and FactCheck). The evaluation results showed that the CNN model outperforms the other deep neural networks with the best accuracy of 94.2%.
[ "Language Models", "Semantic Text Processing", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 17, 8, 46, 4 ]
SCOPUS_ID:85141663947
A Deep Learning Method for Sentence Embeddings Based on Hadamard Matrix Encodings
Sentence embedding has recently been getting accrued attention from the Natural Language Processing (NLP) community. An embedding maps a sentence to a vector of real numbers, with applications to similarity and inference tasks. Our method uses word embeddings, dependency parsing, a Hadamard matrix with a spread-spectrum algorithm, and a deep learning neural network trained on the Sentences Involving Compositional Knowledge (SICK) corpus. The dependency parsing labels are associated with rows of a Hadamard matrix, and word embeddings are stored in corresponding rows of another matrix. Using the spread-spectrum encoding algorithm, the two matrices are combined into a single one-dimensional vector. This embedding is then fed to a neural network, achieving 80% accuracy, while the best score from the SEMEVAL 2014 competition is 84%. The advantages of this method stem from its ability to encode sentences of any size using only fully connected neural networks, while taking into account word order and handling long-range word dependencies.
[ "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning", "Syntactic Parsing", "Reasoning", "Textual Inference" ]
[ 72, 15, 12, 28, 8, 22 ]
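One plausible reading of the spread-spectrum combination above, sketched under CDMA-style assumptions: each word embedding is spread by the Hadamard row of its dependency label, the spread vectors are summed into one vector, and code orthogonality allows the content stored under each label to be recovered. Labels, dimensions, and embeddings are invented for illustration; this is not necessarily the paper's exact encoding.

```python
# CDMA-style spread-spectrum sentence encoding with a Hadamard matrix.
import numpy as np
from scipy.linalg import hadamard

n_labels, emb_dim = 8, 4
H = hadamard(n_labels)                        # rows are orthogonal +/-1 codes
LABELS = {"nsubj": 0, "root": 1, "dobj": 2}   # dependency label -> code row

rng = np.random.default_rng(0)
words = [("cats", "nsubj"), ("chase", "root"), ("mice", "dobj")]
embs = {w: rng.normal(size=emb_dim) for w, _ in words}

# Spread and sum: kron(code, embedding) for every word in the sentence.
sentence_vec = sum(np.kron(H[LABELS[lab]], embs[w]) for w, lab in words)
print(sentence_vec.shape)                     # (32,) = n_labels * emb_dim

# Despreading with a label's code recovers that word's embedding.
recovered = H[LABELS["dobj"]] @ sentence_vec.reshape(n_labels, emb_dim) / n_labels
print(np.allclose(recovered, embs["mice"]))   # True
```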
SCOPUS_ID:85121358905
A Deep Learning Model Based on BERT and Sentence Transformer for Semantic Keyphrase Extraction on Big Social Data
With the evolution of the Internet, social media platforms like Twitter have allowed public users to share information such as current affairs, events, opinions, news, and experiences. Extracting and analyzing keyphrases in Twitter content is an essential and challenging task. Keyphrases can precisely capture the main contribution of Twitter content, and keyphrase extraction is a vital component of many Natural Language Processing (NLP) applications. Extracting keyphrases is not only a time-consuming process but also requires much effort. Current works rely on graph-based models or machine learning models, whose performance depends on feature engineering or statistical measures. In recent years, the application of deep learning algorithms to Twitter data has gained more attention because automatic feature extraction can improve the performance of several tasks. This work aims to extract keyphrases from big social data using a sentence transformer with the Bidirectional Encoder Representations from Transformers (BERT) deep learning model. This BERT representation retains semantic and syntactic connectivity between tweets, enhancing performance in every NLP task on large datasets, and can automatically extract the most typical phrases in the tweets. The proposed Semkey-BERT model achieves an accuracy of 86%, higher than the other existing models.
[ "Language Models", "Semantic Text Processing", "Term Extraction", "Information Extraction & Text Mining" ]
[ 52, 72, 1, 3 ]
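A small sketch of the embed-and-rank idea behind the entry above, assuming the sentence-transformers library: candidate phrases and the tweet are embedded with a BERT-based sentence transformer and ranked by cosine similarity. The model name, toy tweet, and pre-extracted candidates are illustrative; the paper's candidate generation and scoring details may differ.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative checkpoint

tweet = "Heavy rain floods downtown streets after record storm"
candidates = ["heavy rain", "downtown streets", "record storm", "after"]

doc_emb = model.encode(tweet, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(doc_emb, cand_embs)[0]      # similarity to the tweet

# highest-scoring candidates are kept as keyphrases
for phrase, s in sorted(zip(candidates, scores), key=lambda p: -float(p[1])):
    print(f"{phrase}: {float(s):.3f}")
```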
SCOPUS_ID:85113767336
A Deep Learning Model Based on Neural Bag-of-Words Attention for Sentiment Analysis
In the field of Natural Language Processing, sentiment analysis is one of the core research directions. A key issue in sentiment analysis is how to avoid the shortcoming of using a fixed vector to calculate the attention distribution. In this paper, we propose a novel sentiment analysis model based on neural bag-of-words attention, which utilizes a Bidirectional Long Short-Term Memory (BiLSTM) network to capture the deep semantic features of text and fuses these features through an attention distribution based on the neural bag-of-words. The experimental results show that the proposed method improves accuracy by 2.53%–6.46% compared with the benchmarks.
[ "Sentiment Analysis" ]
[ 78 ]
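Since the entry above leaves the mechanism at a high level, here is a guess-level PyTorch sketch of attention whose query is the neural bag-of-words (the mean of the word embeddings) rather than a fixed trainable vector, applied over BiLSTM states; names and sizes are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NBOWAttention(nn.Module):
    """Attention whose query is the sentence's neural bag-of-words
    (mean of word embeddings) instead of a fixed trainable vector."""
    def __init__(self, emb_dim=100, hidden=128):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                              batch_first=True)
        self.query_proj = nn.Linear(emb_dim, 2 * hidden)

    def forward(self, word_embs):                   # (batch, seq, emb_dim)
        states, _ = self.bilstm(word_embs)          # (batch, seq, 2*hidden)
        q = self.query_proj(word_embs.mean(dim=1))  # NBOW query (batch, 2*hidden)
        attn = F.softmax(torch.bmm(states, q.unsqueeze(2)).squeeze(2), dim=1)
        return (attn.unsqueeze(2) * states).sum(dim=1)  # fused sentence vector

x = torch.randn(2, 12, 100)        # a batch of two 12-word sentences
print(NBOWAttention()(x).shape)    # torch.Size([2, 256])
```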
SCOPUS_ID:85021762829
A Deep Learning Model Enhanced with Emotion Semantics for Microblog Sentiment Analysis
Word embeddings based on neural language models can automatically learn effective word representations from massive unlabeled text datasets, and have made essential progress in many natural language processing tasks. Emoticons in microblogs are important emotion signals for microblog sentiment analysis, and many research works have exploited emoticons to improve sentiment classification performance for microblogs effectively. Commonly used emoticons are adopted to construct an emotion space as a feature representation matrix RE built from their word embeddings. On the basis of vector-based semantic composition, the projection to the emotion space is performed as a matrix–vector multiplication between RE and other embeddings. The results are then forwarded to MCNN to learn a sentiment classifier for microblogs. This new model is named EMCNN, short for Emotion-semantic enhanced MCNN, which seamlessly integrates emoticon-based emotion space projection into the deep learning model MCNN to enhance its ability to capture emotion semantics. On the datasets of the NLPCC microblog sentiment analysis task, EMCNN achieves the best performance in several sentiment classification experiments and surpasses the state-of-the-art results on all performance metrics. Compared to MCNN, EMCNN not only improves classification performance but also reduces training time, by 36.15% for subjectivity classification and 33.82% for 7-class sentiment classification.
[ "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Sentiment Analysis", "Emotion Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 12, 78, 61, 36, 3 ]
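The projection step described above is just a matrix–vector product; a tiny NumPy sketch with illustrative sizes (in the paper, RE stacks the word embeddings of the commonly used emoticons):

```python
import numpy as np

K, EMB_DIM = 10, 100
RE = np.random.randn(K, EMB_DIM)          # rows = emoticon embeddings

def project_to_emotion_space(word_vec):
    """Emotion-space projection: similarity of one embedding to each emoticon."""
    return RE @ word_vec                  # (K,) emotion features

w = np.random.randn(EMB_DIM)              # embedding of one microblog word
print(project_to_emotion_space(w).shape)  # (10,) -> input channel for MCNN
```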
SCOPUS_ID:85125179063
A Deep Learning Model Fused with Word Sense Knowledge for Textual Entailment Recognition
Textual entailment recognition is an essential research task in the field of natural language processing. Mainstream textual entailment recognition methods based on deep learning do not integrate word sense knowledge into the training data, so the inference knowledge the models learn is limited. In addition, polysemy has become a significant challenge for the fusion of semantic knowledge. Therefore, we propose a textual entailment recognition model fused with word sense knowledge. It enhances the explicit knowledge in the data by fusing synonyms from the CiLin thesaurus, uses word sense disambiguation to resolve the meanings of polysemous words, and initializes sentence encoding with RoBERTa word vectors and sememe-based word vector representations. Our model achieves an accuracy of 80.77% on the CNLI dataset and 80.74% on the XNLI dataset.
[ "Reasoning", "Semantic Text Processing", "Textual Inference", "Representation Learning" ]
[ 8, 72, 22, 12 ]
SCOPUS_ID:85077780895
A Deep Learning Model for Dimensional Valence-Arousal Intensity Prediction in Stock Market
This paper proposes a dimensional valence-arousal method to define sentiment status in the stock market. Much past research has focused on the valence sentiment of stock messages because it represents the stock trend, such as upward or downward. Typically, if the stock price jumps or collapses (a positive/negative trend) in the short term, the investor needs to trade immediately, but in some cases this does not hold. The valence-arousal method can therefore be used to define both the trend intensity and the trading intensity of a stock-market message. To obtain a powerful prediction model that learns the intensity of trend and trading in a stock message, we propose a keyword-based attention network built into Hierarchical Attention Networks (HAN), named the HKAN model, to learn the relation between dimensional sentiments (trend and trading) and stock messages. The experimental results show that our proposed HKAN model for stock VA prediction outperforms baseline models such as HAN and Hierarchical Hybrid Attention Networks.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85143373485
A Deep Learning Model for Opinion mining in Twitter Combining Text and Emojis
Several approaches have been proposed to study opinions on Social Network Sites (SNS). Unfortunately, those works are not topic-sensitive and do not investigate the impact of emojis on text-based classification. In this paper, we propose a novel approach to predict users' opinions expressed through textual tweets and emojis. We first construct an emoji sentiment lexicon, then extract opinions from the text before considering both the text and emojis to see how they enhance the expression of opinions in SNS discussions. We conduct a set of benchmarks using several well-known machine learning algorithms, leading to an accuracy of 83.7%.
[ "Visual Data in NLP", "Opinion Mining", "Sentiment Analysis", "Multimodality" ]
[ 20, 49, 78, 74 ]
SCOPUS_ID:85132914040
A Deep Learning Modified Neural Network(DLMNN) based proficient sentiment analysis technique on Twitter data
The rapid growth of social media over the internet generates massive information in real-time scenarios, which has a striking impact on big data analysis and has led to elevated usage of emotions and sentiments in social media. This paper offers a proficient sentiment analysis technique for Twitter data. The Twitter data are preprocessed with stemming, tokenisation, number removal, stop-word removal, etc. The preprocessed words are then passed into HDFS (Hadoop Distributed File System), where repeated words are eliminated using the MapReduce technique. The emoticons and non-emoticons are extracted as features, and the resulting features are ranked by their intended meaning. Classification is then performed using the DLMNN (Deep Learning Modified Neural Network). The experimental results were examined using output parameters such as accuracy, recall, precision, F-score and execution time, against conventional techniques such as ANN, SVM, K-Means and DCNN. The evaluation shows that DLMNN achieved the best performance, with precision of 95.78%, recall of 95.84%, F-score of 95.87% and accuracy of 91.65% compared with the existing models.
[ "Sentiment Analysis" ]
[ 78 ]
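A minimal sketch of the preprocessing stage named in the entry above (tokenisation, number removal, stop-word removal, stemming), assuming NLTK; the HDFS/MapReduce deduplication and the DLMNN classifier itself are beyond this snippet.

```python
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
# one-time setup: nltk.download("punkt"); nltk.download("stopwords")

stemmer = PorterStemmer()
stops = set(stopwords.words("english"))

def preprocess(tweet):
    tweet = re.sub(r"\d+", "", tweet.lower())   # number removal
    tokens = word_tokenize(tweet)               # tokenisation
    return [stemmer.stem(t) for t in tokens
            if t.isalpha() and t not in stops]  # stop words/punct removed

print(preprocess("Loving the 2 new features in this update!!!"))
# e.g. ['love', 'new', 'featur', 'updat']
```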
SCOPUS_ID:85085529136
A Deep Learning Sentiment Primarily Based Intelligent Product Recommendation System
In recent years, technological enhancements in computing have led to the development of sophisticated decision support systems that assist customers who use social networks to obtain services. In the past, some researchers classified product and hotel reviews into positive and negative slots, which were used to choose appropriate hotels, products and services for customers and to provide recommendations to the business personnel involved in hotels. Today, people form online groups and openly discuss not only the pros of, for example, hotels, but also air complaints. If feedback is not addressed properly by hotel service providers, it is likely to spread and the hotel's reputation will be downgraded. Food served to customers depends on the preparation as well as the price, location and times at which it is served. Further, the attitude of the sales people and hotel staff in general plays a key role in customers' decisions. Thus, online consumer feedback through social media is useful for consumer behavior analysis, which is crucial for the success of a business. A recommendation system that addresses these issues can give customers better options in their choice of hotels and services. In this proposal, two new classification algorithms are presented. One depends on a new kind of support vector machine, referred to as cluster support vector machines, to perform major and sub-classification of sentiments, as well as to form groups based on people's sentiments with respect to changes in times and locations. The intelligent cluster support vector machine algorithm proposed in this work improves classification accuracy to provide correct recommendations. The main advantage of the proposed work is that it helps connect people with similar interests, based on sentiments identified from tweets, and form interest groups for animated discussions on interesting topics. A new clustering algorithm is also proposed that is useful in forming such groups. Specifically, a new genetic weighted K-means clustering algorithm is proposed to detect correct cluster structures from two datasets, Twitter and Facebook. The genetic algorithm chosen here to perform clustering is an efficient technique that improves classification accuracy.
[ "Text Classification", "Text Clustering", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 29, 78, 24, 3 ]
SCOPUS_ID:85123638801
A Deep Learning System for Automatic Extraction of Typological Linguistic Information from Descriptive Grammars
Linguistic typology is an area of linguistics concerned with the analysis of and comparison between the natural languages of the world based on certain linguistic features. For that purpose, historically, the area has relied on the manual extraction of linguistic feature values from textual descriptions of languages. This makes it a laborious and time-expensive task that is also bound by human capacity. In this study, we present a deep learning system for the task of automatically extracting linguistic features from textual descriptions of natural languages. First, textual descriptions are manually annotated with special structures called semantic frames. Those annotations are learned by a recurrent neural network, which is then used to annotate un-annotated text. Finally, the annotations are converted to linguistic feature values using a separate rule-based module. Word embeddings, learned from general-purpose text, are used as a major source of knowledge by the recurrent neural network. We compare the proposed deep learning system to a previously reported machine learning based system for the same task, and the deep learning system wins in terms of F1 scores by a fair margin. Such a system is expected to be a useful contribution to the automatic curation of typological databases, which are otherwise developed manually.
[ "Multilinguality", "Typology", "Syntactic Text Processing", "Information Extraction & Text Mining" ]
[ 0, 45, 15, 3 ]
http://arxiv.org/abs/2303.10510v1
A Deep Learning System for Domain-specific speech Recognition
As human-machine voice interfaces provide easy access to increasingly intelligent machines, many state-of-the-art automatic speech recognition (ASR) systems are proposed. However, commercial ASR systems usually have poor performance on domain-specific speech especially under low-resource settings. The author works with pre-trained DeepSpeech2 and Wav2Vec2 acoustic models to develop benefit-specific ASR systems. The domain-specific data are collected using proposed semi-supervised learning annotation with little human intervention. The best performance comes from a fine-tuned Wav2Vec2-Large-LV60 acoustic model with an external KenLM, which surpasses the Google and AWS ASR systems on benefit-specific speech. The viability of using error prone ASR transcriptions as part of spoken language understanding (SLU) is also investigated. Results of a benefit-specific natural language understanding (NLU) task show that the domain-specific fine-tuned ASR system can outperform the commercial ASR systems even when its transcriptions have higher word error rate (WER), and the results between fine-tuned ASR and human transcriptions are similar.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
http://arxiv.org/abs/2004.10320v1
A Deep Learning System for Sentiment Analysis of Service Calls
Sentiment analysis is crucial for the advancement of artificial intelligence (AI). Sentiment understanding can help AI to replicate human language and discourse. Studying the formation and response of sentiment state from well-trained Customer Service Representatives (CSRs) can help make the interaction between humans and AI more intelligent. In this paper, a sentiment analysis pipeline is first carried out with respect to real-world multi-party conversations - that is, service calls. Based on the acoustic and linguistic features extracted from the source information, a novel aggregated method for voice sentiment recognition framework is built. Each party's sentiment pattern during the communication is investigated along with the interaction sentiment pattern between all parties.
[ "Sentiment Analysis" ]
[ 78 ]
http://arxiv.org/abs/1911.01421v1
A Deep Learning approach for Hindi Named Entity Recognition
Named Entity Recognition (NER) is one of the most important text processing requirements in many NLP tasks. In this paper we use a deep architecture to accomplish the task of recognizing named entities in a given Hindi text sentence. Bidirectional Long Short Term Memory (BiLSTM) based techniques have been used for the NER task in the literature. In this paper, we first tune a BiLSTM to work for Hindi NER in a low-resource scenario and propose two enhancements, namely (a) a de-noising auto-encoder (DAE) LSTM and (b) a conditioning LSTM, which show improvement on the NER task compared to the BiLSTM approach. We use pre-trained word embeddings to represent the words in the corpus, and the NER tags of the words are as defined by the annotated corpora used. Experiments have been performed to analyze the effects of different word embeddings and batch sizes, which is essential for training deep models.
[ "Language Models", "Semantic Text Processing", "Representation Learning", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 52, 72, 12, 34, 3 ]
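For reference, the BiLSTM baseline the entry above starts from looks roughly like the following PyTorch sketch; the DAE-LSTM and conditioning-LSTM enhancements extend this skeleton, and the vocabulary and tag counts are illustrative placeholders.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Baseline BiLSTM sequence tagger of the kind tuned for Hindi NER."""
    def __init__(self, vocab_size=30000, emb_dim=300, hidden=200, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # init from pre-trained
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                              batch_first=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):             # (batch, seq_len)
        h, _ = self.bilstm(self.embed(token_ids))
        return self.out(h)                    # per-token NER tag logits

ids = torch.randint(0, 30000, (1, 7))         # one 7-token sentence
print(BiLSTMTagger()(ids).shape)              # torch.Size([1, 7, 9])
```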
SCOPUS_ID:85113381472
A Deep Learning based Customer Sentiment Analysis Model to Enhance Customer Retention and Loyalty in the Payment Industry
Both industry and academia agree on the immense contribution of big data analytics and machine learning to competitive businesses. The payment industry would benefit from big data analytics and machine learning capabilities to harness its customers' opinions through sentiment analysis, thereby customizing services and products to fit customers' preferences. The challenge, however, is implementing this competitive edge in small and medium-sized payment solution providers. This paper proposes a deep learning-based customer sentiment analysis model and a related (SaMS-PSP) algorithm that implements sentiment analysis within SaMS-PSP. Through experiments, we demonstrate that our model has a clear performance advantage over conventional machine learning methods and is better suited to big data applications such as customer sentiment analysis. This research demonstrates that the sentiment analysis emotional polarity score can be used in a value-added customer orientation tool to promote customer retention and loyalty within SaMS-PSP.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85099568005
A Deep Learning based Interlingua Representation for Malayalam Documents
Compact representations of sentences, such as feature vectors, offer a better understanding of sentence formation, and many applications in natural language processing require such meaningful representations. If an interlingua is constructed on this basis, it will be useful for applications like machine translation, text summarization, etc. Here we propose a deep learning based interlingua, creating a structured, meaningful representation for Malayalam documents.
[ "Multilinguality", "Machine Translation", "Semantic Text Processing", "Representation Learning", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 0, 51, 72, 12, 30, 47, 3 ]
SCOPUS_ID:85124692911
A Deep Learning based Self-Assessment Tool for Personality Traits and Interview Preparations
Many people find it difficult to analyse their own personality and to see whether they fit a particular job profile. Analysing our personality is crucial, especially when preparing for various types of interviews, as our responses reflect how we think and act, thus forming our first impression on the panelists. Studying personality traits has proved to be an emerging stream in Machine Learning and Artificial Intelligence. Our idea is to create a platform to identify the personality traits of an individual and help suggest changes, if required, in these traits. Our aim is to provide a helping hand for analysing how a person performs in various types of interviews, such as video interviews, personal interviews, group discussions, etc., to ensure strong performance in final interviews. In our approach, we have used Natural Language Processing (NLP) techniques to analyse the user's input in the Group Discussion module, so as to provide additional context to the user. Sentiment analysis of the user's responses in the Scenario Based Questions module measures how affirmative or negative the user's response is with respect to the expected solutions. For the Video and Telephonic Interview modules, we have used the MobileNet architecture and a CNN algorithm to predict the user's confidence level based on his/her facial expressions and voice modulation.
[ "Visual Data in NLP", "Multimodality", "Sentiment Analysis" ]
[ 20, 74, 78 ]
SCOPUS_ID:85106638500
A Deep Learning based Sentiment Analysis on Bang-lish Disclosure
Sentiment analysis is a field of immense possibilities and applications despite being an age-old topic. Various applications of machine learning and natural language processing keep contributing to this field with innovative techniques, and variants of neural networks with attention mechanisms are a well-known tool in it. However, very few of these techniques have been applied to Bengali sentences written with English letters, which is a very common scenario in this era of social networking and online e-commerce sites. For product reviews, this is of utmost importance as it helps companies review and improve their products; in social networks, it may be used to analyze the emotions of the users. Moreover, on the online platforms of Bangladesh, most people use English letters to express their reviews in the Bengali language, which becomes a major issue for further analysis. In this paper, we propose a novel attention-based CNN model to solve the problem and analyze the performance of its variants using minimal NLP features, making it able to work on different platforms.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85141663516
A Deep Learning based hybrid model for improving accuracy of sentiment analysis
Text sentiment analysis has been of great importance over the last few years. It is widely used to determine a person's feelings, opinions, and emotions about any topic or person. In recent years, CNNs and LSTMs have been widely used to develop such models. CNNs have been shown to effectively extract local information between consecutive words, but they fall short in extracting contextual semantic information between words. Although an LSTM can extract some contextual information, it lags behind a CNN in extracting local information. To counter these problems, we use an attention mechanism in our multichannel CNN with a Bidirectional LSTM model to attend to those parts of a sentence that have a major influence in determining its sentiment. Experimental results show that our multichannel CNN model with Bidirectional LSTM and an attention mechanism achieves an accuracy of 93.61%, outperforming traditional LSTM-CNN and CNN models as well as many machine learning algorithms.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:85098274433
A Deep Learning-Based Approach for Identifying the Medicinal Uses of Plant-Derived Natural Compounds
Medicinal plants and their extracts have been used as important sources for drug discovery. In particular, plant-derived natural compounds, including phytochemicals, antioxidants, vitamins, and minerals, are gaining attention as they promote health and prevent disease. Although several in vitro methods have been developed to confirm the biological activities of natural compounds, there is still considerable room to reduce time and cost. To overcome these limitations, several in silico methods have been proposed for conducting large-scale analysis, but they are still limited in terms of dealing with incomplete and heterogeneous natural compound data. Here, we propose a deep learning-based approach to identify the medicinal uses of natural compounds by exploiting massive and heterogeneous drug and natural compound data. The rationale behind this approach is that deep learning can effectively utilize heterogeneous features to alleviate incomplete information. Based on latent knowledge, molecular interactions, and chemical property features, we generated 686 dimensional features for 4,507 natural compounds and 2,882 approved and investigational drugs. The deep learning model was trained using the generated features and verified drug indication information. When the features of natural compounds were applied as input to the trained model, potential efficacies were successfully predicted with high accuracy, sensitivity, and specificity.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85145652536
A Deep Learning-Based Entity-Relationship Extraction Method in the Field of Electric Power Public Opinion
Entity-relationship extraction can obtain key information elements from texts. Electric power opinion texts are characterized by complex entity relationships and little annotated data, so it is difficult to find related entity information in text data from the field of electric power opinion. To solve these problems, we propose the ABAC (ALBERT-BiLSTM-ATT-CRF) model to extract entity relationships from electric power opinion texts. It uses the pre-trained ALBERT model and combines the five-stroke sequences, radicals and pinyin of Chinese characters to extract features, and these features are fused to enhance the extraction of text feature vectors. The experimental results show that the accuracy of entity-relationship extraction is significantly improved, which verifies the effectiveness of the model designed in this paper for entity-relationship extraction in the field of electric power public opinion.
[ "Language Models", "Relation Extraction", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 75, 72, 3 ]
SCOPUS_ID:85147846574
A Deep Learning-Based Innovative Points Extraction Method
Most research on mining online reviews focuses on the influence of reviews on consumers or on sentiment analysis of consumer reviews, but few studies examine how to extract innovative ideas for products from review data. To this end, we propose a deep learning-based method to extract sentences containing innovative ideas from large amounts of review data. First, we select a product review dataset from the Internet and use a stacking-integrated word embedding method to generate a rich semantic representation of review sentences. The resulting representation of each sentence then undergoes feature extraction by a bidirectional gated recurrent unit (BiGRU) model combined with a self-attention mechanism, and finally the extracted features are classified into innovative sentences through softmax. The method proposed in this paper can efficiently and accurately extract innovative sentences from class-imbalanced review data and can be applied in most information extraction studies.
[ "Representation Learning", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 12, 72, 3 ]
SCOPUS_ID:85065497244
A Deep Learning-Based Named Entity Recognition in Biomedical Domain
In the biomedical field, huge amounts of data are produced every day, and these data drive the development of biomedical research in many ways. This paper focuses on biomedical named entity recognition (NER) with the aim of enhancing performance through deep learning. Deep learning techniques have made impressive results possible in natural language processing, producing large accuracy gains on NLP tasks compared with traditional methods. NER is a crucial initial step for information extraction in the biomedical domain. Here we use RNN, LSTM, and GRU models on the GENIA version 3.02 corpus and achieve an F-score of 90%, which is better than most state-of-the-art systems.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85128869983
A Deep Learning-Based Sentiment Classification Model for Real Online Consumption
Most e-commerce platforms allow consumers to post product reviews, causing more and more consumers to get into the habit of reading reviews before they buy. These online reviews serve as an emotional feedback of consumers’ product experience and contain a lot of important information, but inevitably there are malicious or irrelevant reviews. It is especially important to discover and identify the real sentiment tendency in online reviews in a timely manner. Therefore, a deep learning-based real online consumer sentiment classification model is proposed. First, the mapping relationship between online reviews of goods and sentiment features is established based on expert knowledge and using fuzzy mathematics, thus mapping the high-dimensional original text data into a continuous low-dimensional space. Secondly, after obtaining local contextual features using convolutional operations, the long-term dependencies between features are fully considered by a bidirectional long- and short-term memory network. Then, the degree of contribution of different words to the text is considered by introducing an attention mechanism, and a regular term constraint is introduced in the objective function. The experimental results show that the proposed convolutional attention–long and short-term memory network (CA–LSTM) model has a higher test accuracy of 83.3% compared with other models, indicating that the model has better classification performance.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
SCOPUS_ID:85128735112
A Deep Learning-Based System for Document Layout Analysis
Document image understanding is an essential process in the digital transformation era. These systems automatically convert a paper document to a digital document for storage and information extraction. In practice, document layout analysis is a critical step for the success of document image modeling. This paper introduces a page segmentation system based on deep neural networks. Our system uses two auto encoder-decoder networks to segment the text-line and non-text components simultaneously. Paragraph segmentation is then realized based on the text-line and separator masks, and the non-text elements are also identified. Our algorithm has been tested on RDCL2019. Experimental results show that our method is more stable and adapts more easily to new layout formats than previous commercial and published systems.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85119331042
A Deep Learning-based Approach for Emotions Classification in Big Corpus of Imbalanced Tweets
Detecting emotions in natural language is very effective for analyzing a user's mood about a product, news item, topic, and so on. However, it is a challenging task to extract important features from a burst of raw social text, as emotions are subjective with limited, fuzzy boundaries, and these subjective features can be conveyed through various perceptions and terminologies. In this article, we propose an IoT-based framework for emotion classification of tweets using a hybrid approach of Term Frequency Inverse Document Frequency (TFIDF) and a deep learning model. First, the raw tweets are filtered using a tokenization method to capture useful features without noisy information. Second, the TFIDF statistical technique is applied to estimate the importance of features both locally and globally. Third, the Adaptive Synthetic (ADASYN) class balancing technique is applied to solve the class imbalance between the different emotion classes. Finally, a deep learning model is designed to predict the emotions, with dynamic epoch curves plotted to show the behavior of the training and test data points. The proposed methodology is analyzed on two different Twitter emotion datasets and is shown to outperform popular state-of-the-art methods.
[ "Text Classification", "Sentiment Analysis", "Emotion Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 78, 61, 24, 3 ]
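A compact sketch of the TF-IDF → ADASYN → classifier pipeline described above, assuming scikit-learn and imbalanced-learn; a logistic regression stands in for the paper's deep model, and the tweets are toy data, so the exact resampling counts are illustrative.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import ADASYN

tweets = ["so happy today", "what a joyful day", "happy and glad",
          "feeling great today", "pure joy right now", "so glad now",
          "wonderful happy day", "joyful and glad", "great day today",
          "this is scary", "so scared today", "terrified right now",
          "what a scary day"]
labels = ["joy"] * 9 + ["fear"] * 4          # imbalanced classes

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)                 # local/global term weighting
X_bal, y_bal = ADASYN(n_neighbors=3).fit_resample(X, labels)
print(Counter(y_bal))                         # classes now roughly balanced

clf = LogisticRegression().fit(X_bal, y_bal)  # stand-in for the deep model
print(clf.predict(vec.transform(["so happy right now"])))  # likely ['joy']
```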
SCOPUS_ID:85146957972
A Deep Learning-based Event Extraction Method in the Field of Electric Power Public Opinion
Event extraction is a sub-task of information extraction in natural language processing that extracts relevant event information from unstructured text. In order to obtain hot events related to electric power public opinion in a timely manner and assist electric power staff in making quick decisions, this article proposes a deep learning-based event extraction model for electric power public opinion, composed of two parts: an event detection model and an argument role extraction model. The event detection model uses a BLSTM to obtain the specific event categories of electric power opinion text, and the argument role extraction model employs a BLSTM-CRF to extract features of electric power opinion text and obtain the argument roles contained within it. We solve the problem of overlapping roles by using an innovative location indexing annotation method. Finally, the events contained in the power opinion text are extracted through the joint extraction of event categories and argument roles. Experimental tests show that the proposed model achieves superior performance in terms of event extraction outcomes and accuracy.
[ "Event Extraction", "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 31, 52, 72, 3 ]
SCOPUS_ID:85143619946
A Deep Learning-based Unified Solution for Character Recognition
Optical Character Recognition (OCR) has become a crucial area of research due to the vast number of digitized documents and the drive to lessen dependency on paper. One can save time and money on data entry by automatically extracting information from paper and putting it where it needs to go. There has been much research on OCR systems for different languages, but a unified system that is agnostic to language does not exist. In this work, we propose a multi-headed resunet++ based solution that can recognize low-resource languages (Bangla, Assamese, etc.) and performs well on resource-rich languages (such as English, Arabic, etc.). The backbone of the solution, i.e., resunet++, was originally designed for medical image segmentation, which is very complex. As the low-resource languages are mostly cursive in style and complex in nature, this backbone can help share those higher-level features and pass them to the lower levels. Our proposed solution is applied to isolated characters of the Bangla, Assamese, and English languages. For Bangla, the segmentation is done by our own method, while the datasets were pre-segmented for the other two languages. Applying the solution, we achieved satisfactory performance.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85090287910
A Deep Level Tagger for Malayalam, a Morphologically Rich Language
In recent years, there has been tremendous growth in the amount of natural language text from various sources, and the computational analysis of this text has received considerable attention among NLP researchers. Automatic analysis and representation of natural language text is a step-by-step procedure, and deep level tagging is one such step applied to the text. In this paper, we demonstrate a methodology for deep level tagging of Malayalam text. Deep level tagging is the process of assigning deeper-level information to every noun and verb in the text along with the normal POS tags. In this study, we move in a direction that has not been much explored for the Malayalam language. Malayalam is a morphologically rich and agglutinative language, and its morphological features are effectively utilized for the computational analysis of Malayalam text. The language-level details required for the study are provided by Thunjath Ezhuthachan Malayalam University, Tirur.
[ "Semantic Text Processing", "Morphology", "Syntactic Text Processing", "Representation Learning", "Tagging" ]
[ 72, 73, 15, 12, 63 ]
http://arxiv.org/abs/1705.09975v1
A Deep Multi-View Learning Framework for City Event Extraction from Twitter Data Streams
Cities have been thriving places for citizens over the centuries due to their complex infrastructure. The emergence of Cyber-Physical-Social Systems (CPSS) and context-aware technologies boosts a growing interest in analysing, extracting and eventually understanding city events, which can subsequently be utilised to leverage citizens' observations of their cities. In this paper, we investigate the feasibility of using Twitter textual streams for extracting city events. We propose a hierarchical multi-view deep learning approach to contextualise citizen observations of various city systems and services. Our goal has been to build a flexible architecture that can learn representations useful for tasks, thus avoiding excessive task-specific feature engineering. We apply our approach to a real-world dataset consisting of event reports and tweets collected over four months from the San Francisco Bay Area, and additional datasets collected from London. The results of our evaluations show that our proposed solution outperforms the existing models and can be used for extracting city-related events with an average accuracy of 81% over all classes. To further evaluate the impact of our Twitter event extraction model, we used two sources of authorised reports: road traffic disruption data from the Transport for London API, and sociocultural events parsed from the Time Out London website. The analysis showed that 49.5% of the Twitter traffic comments are reported approximately five hours prior to the authorities' official records. Moreover, we discovered that among the scheduled sociocultural event topics, tweets reporting transportation, cultural and social events are 31.75% more likely to influence the distribution of the Twitter comments than sport, weather and crime topics.
[ "Event Extraction", "Information Extraction & Text Mining" ]
[ 31, 3 ]
SCOPUS_ID:85148038496
A Deep Multi-level Attentive Network for Multimodal Sentiment Analysis
Multimodal sentiment analysis has attracted increasing attention with broad application prospects. Most existing methods have focused on a single modality, which fails to handle social media data with multiple modalities. Moreover, in multimodal learning, most works simply combine the two modalities without exploring the complicated correlations between them, resulting in unsatisfying performance for multimodal sentiment classification. Motivated by this, we propose a Deep Multi-level Attentive network (DMLANet), which exploits the correlation between image and text modalities to improve multimodal learning. Specifically, we generate a bi-attentive visual map along the spatial and channel dimensions to magnify the representation power of the convolutional neural network. We then model the correlation between image regions and word semantics by extracting the textual features related to the bi-attentive visual features via semantic attention. Finally, self-attention is employed to automatically fetch the sentiment-rich multimodal features for classification. We conduct extensive evaluations on four real-world datasets, namely MVSA-Single, MVSA-Multiple, Flickr, and Getty Images, which verify our method's superiority.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Multimodality" ]
[ 20, 3, 36, 78, 24, 74 ]
SCOPUS_ID:85083388164
A Deep Multi-task Model for Dialogue Act Classification, Intent Detection and Slot Filling
An essential component of any dialogue system is language understanding, known as spoken language understanding (SLU). Dialogue act classification (DAC), intent detection (ID) and slot filling (SF) are significant aspects of every dialogue system. In this paper, we propose a deep learning-based multi-task model that can perform DAC, ID and SF tasks together. We use a deep bi-directional recurrent neural network (RNN) with long short-term memory (LSTM) and gated recurrent units (GRU) as the frameworks in our multi-task model. We apply attention to the LSTM/GRU output for DAC and ID; the attention outputs are fed to individual task-specific dense layers for DAC and ID, while the LSTM/GRU output is fed to a softmax layer for slot filling. Experiments on three datasets, i.e. ATIS, TRAINS and FRAMES, show that our proposed multi-task model performs better than the individual models as well as all the pipeline models. The experimental results prove that our attention-based multi-task model outperforms the state-of-the-art approaches for the SLU tasks. For DAC, relative to the individual model, we achieve an improvement of more than 2% for all the datasets. Similarly, for ID, we get an improvement of 1% on the ATIS dataset, while for the TRAINS and FRAMES datasets there is a significant improvement of more than 3% compared to the individual models. We also get a 0.8% enhancement for ATIS and a 4% enhancement for TRAINS and FRAMES for SF with respect to the individual models. These results clearly show that our approach is better than existing methods, and the validity of the obtained results is also demonstrated using statistical significance t-tests.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Semantic Parsing", "Sentiment Analysis", "Intent Recognition", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 24, 3, 40, 78, 79, 11, 38, 36, 4 ]
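As a rough illustration of the joint setup in the entry above (not the paper's exact architecture, which adds task-specific attention), a shared recurrent encoder with three heads in PyTorch; mean pooling stands in for attention, and the label-set sizes are placeholders.

```python
import torch
import torch.nn as nn

class MultiTaskSLU(nn.Module):
    """Shared BiLSTM with heads for dialogue acts, intents and slots."""
    def __init__(self, vocab=10000, emb=100, hid=128,
                 n_acts=5, n_intents=20, n_slots=60):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.act_head = nn.Linear(2 * hid, n_acts)        # utterance level
        self.intent_head = nn.Linear(2 * hid, n_intents)  # utterance level
        self.slot_head = nn.Linear(2 * hid, n_slots)      # token level

    def forward(self, ids):                    # (batch, seq)
        h, _ = self.encoder(self.embed(ids))   # (batch, seq, 2*hid)
        pooled = h.mean(dim=1)                 # stand-in for attention pooling
        return (self.act_head(pooled), self.intent_head(pooled),
                self.slot_head(h))

acts, intents, slots = MultiTaskSLU()(torch.randint(0, 10000, (2, 15)))
print(acts.shape, intents.shape, slots.shape)  # (2,5) (2,20) (2,15,60)
```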
SCOPUS_ID:85089219953
A Deep Multimodal Approach for Map Image Classification
Map images (e.g., illustrated maps, historical maps, and geographic maps) have been published around the world, not only to give locations but also to attract tourists or hand down the histories of places. The management of map data, however, has been an open issue for several research fields, including digital libraries, the humanities, and tourism studies. This paper explores an approach for classifying diverse map images by their themes using map content features. Specifically, we present a novel strategy for preprocessing the text positioned inside the map images, which is extracted using OCR. The activations of the textual feature-based model are joined with the visual features in an early fusion manner. Finally, we train a classifier model comprising a convolutional layer and a fully connected layer, which predicts the class of the input map. In experiments conducted on a new labeled dataset of map images, we demonstrate that our approach using the fused features achieves the best classification performance over any single modality. We have made our dataset available on the Internet to facilitate this new task.
[ "Visual Data in NLP", "Text Classification", "Multimodality", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 20, 36, 74, 24, 3 ]
SCOPUS_ID:85056125304
A Deep Multiple View Sentence Representation Model for Question Answering
Question answering (QA) between humans and computers is regarded as one of the most challenging problems in computer science, involving interdisciplinary techniques in natural language processing. Existing deep models rely on a single sentence representation or multiple-granularity representations for question answering matching, which cannot capture the semantic information well in the question answering matching process. To solve this problem, we propose a new deep multiple view sentence representation model (DMVSR) to match question and answer sentences semantically. After preprocessing with word embeddings, each QA sentence representation is generated by a bidirectional long short term memory (Bi-LSTM) network and a convolutional neural network (CNN). Through k-max pooling and a multi-layer perceptron, the final QA matching score is produced by aggregating interactions. Our model has several advantages: (1) using Bi-LSTM to capture the semantic information; (2) using CNN to implement feature extraction and feature selection in the semantic space; (3) matching QA sentence representations by aggregating interactions with semantic information. In the experiments, we investigate the effectiveness of the proposed deep neural network structures across all kinds of evidence, and demonstrate significant performance improvement against a series of standard and state-of-the-art baselines in terms of MAP, nDCG@3 and nDCG@5.
[ "Language Models", "Semantic Text Processing", "Question Answering", "Representation Learning", "Natural Language Interfaces" ]
[ 52, 72, 27, 12, 11 ]
SCOPUS_ID:85101216053
A Deep Network Model for Paraphrase Detection in Punjabi
A paraphrase is text that conveys the same meaning using different expressions. It is important in NLP as it underlies many applications such as information retrieval, information extraction, machine translation, query expansion, question answering, summarization and plagiarism detection. Paraphrase detection determines whether two given texts are semantically similar or not. Although paraphrase detection has a wide literature, there is no proper algorithm for paraphrase detection in the Punjabi language. A new paraphrase detection model for Punjabi is developed in this paper. We use two deep learning methods to map sentences to vectors, and these vectors are further used to detect paraphrases. Unlike other implementations of paraphrase detection, our model is simple and efficient. Qualitative and quantitative evaluations prove the efficiency of the model, which can be applied to various NLP applications. The proposed model is trained on Quora's question pair dataset, which opens new directions for paraphrasing in Indian languages.
[ "Paraphrasing", "Semantic Text Processing", "Green & Sustainable NLP", "Representation Learning", "Text Generation", "Responsible & Trustworthy NLP" ]
[ 32, 72, 68, 12, 47, 4 ]
http://arxiv.org/abs/1712.02820v1
A Deep Network Model for Paraphrase Detection in Short Text Messages
This paper is concerned with paraphrase detection. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Given two sentences, the objective is to detect whether they are semantically identical. An important insight from this work is that existing paraphrase systems perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts. Challenges with paraphrase detection on user generated short texts, such as Twitter, include language irregularity and noise. To cope with these challenges, we propose a novel deep neural network-based approach that relies on coarse-grained sentence modeling using a convolutional neural network and a long short-term memory model, combined with a specific fine-grained word-level similarity matching model. Our experimental results show that the proposed approach outperforms existing state-of-the-art approaches on user-generated noisy social media data, such as Twitter texts, and achieves highly competitive performance on a cleaner corpus.
[ "Paraphrasing", "Text Generation" ]
[ 32, 47 ]
http://arxiv.org/abs/1707.01555v1
A Deep Network with Visual Text Composition Behavior
While natural languages are compositional, how state-of-the-art neural models achieve compositionality is still unclear. We propose a deep network, which not only achieves competitive accuracy for text classification, but also exhibits compositional behavior. That is, while creating hierarchical representations of a piece of text, such as a sentence, the lower layers of the network distribute their layer-specific attention weights to individual words. In contrast, the higher layers compose meaningful phrases and clauses, whose lengths increase as the networks get deeper until fully composing the sentence.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/1706.08032v1
A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking
This paper introduces a novel deep learning framework including a lexicon-based approach for sentence-level prediction of sentiment label distribution. We propose to first apply semantic rules and then use a Deep Convolutional Neural Network (DeepCNN) for character-level embeddings in order to increase information for word-level embedding. After that, a Bidirectional Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature representation from the word-level embedding. We evaluate our approach on three Twitter sentiment classification datasets. Experimental results show that our model can improve the classification accuracy of sentence-level sentiment analysis in Twitter social networking.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 78, 24, 3 ]
http://arxiv.org/abs/2203.01594v1
A Deep Neural Framework for Image Caption Generation Using GRU-Based Attention Mechanism
Image captioning is a fast-growing research field at the intersection of computer vision and natural language processing that involves creating text explanations for images. This study aims to develop a system that uses a pre-trained convolutional neural network (CNN) to extract features from an image, integrates the features with an attention mechanism, and creates captions using a recurrent neural network (RNN). To encode an image into a feature vector of graphical attributes, we employ multiple pre-trained convolutional neural networks. A GRU-based language model is then chosen as the decoder to construct the descriptive sentence. To increase performance, we merge the Bahdanau attention model with the GRU so that learning can be focused on a specific portion of the image. On the MSCOCO dataset, the experimental results achieve competitive performance against state-of-the-art approaches.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 47, 74 ]
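A PyTorch sketch of the additive (Bahdanau) attention step named in the entry above: the GRU decoder state scores each CNN image region, and the softmax-weighted sum gives the attended image context. The feature, hidden, and attention sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    """Additive attention between a decoder hidden state and CNN regions."""
    def __init__(self, feat_dim=2048, hid_dim=512, attn_dim=256):
        super().__init__()
        self.W_feat = nn.Linear(feat_dim, attn_dim)
        self.W_hid = nn.Linear(hid_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, features, hidden):   # (b, regions, feat), (b, hid)
        score = self.v(torch.tanh(self.W_feat(features)
                                  + self.W_hid(hidden).unsqueeze(1)))
        alpha = torch.softmax(score, dim=1)        # (b, regions, 1)
        context = (alpha * features).sum(dim=1)    # attended image vector
        return context, alpha.squeeze(2)

feats = torch.randn(1, 64, 2048)   # e.g. an 8x8 CNN grid as 64 regions
h = torch.randn(1, 512)            # current GRU hidden state
ctx, alpha = BahdanauAttention()(feats, h)
print(ctx.shape, alpha.shape)      # (1, 2048) (1, 64)
```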
http://arxiv.org/abs/1908.11057v2
A Deep Neural Information Fusion Architecture for Textual Network Embeddings
Textual network embeddings aim to learn a low-dimensional representation for every node in a network so that both the structural and textual information from the network is well preserved in the representations. Traditionally, the structural and textual embeddings were learned by models that rarely take the mutual influences between them into account. In this paper, a deep neural architecture is proposed to effectively fuse the two kinds of information into one representation. The novelties of the proposed architecture lie in a newly defined objective function, the complementary information fusion method for structural and textual features, and the mutual gate mechanism for textual feature extraction. Experimental results show that the proposed model outperforms the comparison methods on all three datasets.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
http://arxiv.org/abs/1709.09783v1
A Deep Neural Network Approach To Parallel Sentence Extraction
Parallel sentence extraction is a task addressing the data sparsity problem found in multilingual natural language processing applications. We propose an end-to-end deep neural network approach to detect translational equivalence between sentences in two different languages. In contrast to previous approaches, which typically rely on multiple models and various word alignment features, by leveraging continuous vector representations of sentences we remove the need for any domain-specific feature engineering. Using a siamese bidirectional recurrent neural network, our results against a strong baseline based on a state-of-the-art parallel sentence extraction system show a significant improvement in both the quality of the extracted parallel sentences and the translation performance of statistical machine translation systems. We believe this study is the first to investigate deep learning for the parallel sentence extraction task.
[ "Multilinguality", "Machine Translation", "Text Generation", "Information Extraction & Text Mining" ]
[ 0, 51, 47, 3 ]
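A sketch of the siamese idea from the entry above in PyTorch: the same bidirectional recurrent encoder maps both sentences to vectors, and a classifier scores whether the pair is parallel. The pairing features, pooling, and sizes are illustrative simplifications of the paper's setup.

```python
import torch
import torch.nn as nn

class SiameseBiGRU(nn.Module):
    """Siamese bidirectional encoder scoring translational equivalence."""
    def __init__(self, vocab=50000, emb=300, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
        self.clf = nn.Linear(4 * hid, 1)     # on concatenated [u; v]

    def encode(self, ids):
        h, _ = self.encoder(self.embed(ids))
        return h.mean(dim=1)                 # (batch, 2*hid) sentence vector

    def forward(self, src_ids, tgt_ids):     # same weights for both languages
        u, v = self.encode(src_ids), self.encode(tgt_ids)
        return torch.sigmoid(self.clf(torch.cat([u, v], dim=1)))  # P(parallel)

src = torch.randint(0, 50000, (2, 10))       # source-language sentences
tgt = torch.randint(0, 50000, (2, 12))       # candidate translations
print(SiameseBiGRU()(src, tgt).shape)        # torch.Size([2, 1])
```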
SCOPUS_ID:85107723151
A Deep Neural Network Approach using Convolutional Network and Long Short Term Memory for Text Sentiment Classification
Current emotion-based text categorization methods incorporate a great deal of deep learning, such as LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) algorithms. Traditional algorithms extract relatively few text features, so their performance can be improved. Based on this fact, this paper adopts a text sentiment prediction method based on CNN and LSTM. First, the user's words are converted into vectors according to word frequencies, and the CNN convolves the user's words to extract the feature information in the user text. An LSTM then performs further feature extraction on the CNN output, enabling more dimensions of textual information to be used for classification. The experimental results show that the model constructed by this method is more effective at extracting multidimensional features from user text and effectively improves on the traditional algorithms, with better performance than both the CNN model and the LSTM model alone.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 78, 24, 3 ]
SCOPUS_ID:85077241235
A Deep Neural Network Model for Joint Entity and Relation Extraction
Joint extraction of entities and their relations from text is an essential issue in automatic knowledge graph construction, also known as the joint extraction of relational triplets. The relational triplets in a sentence can be complicated: multiple, different relational triplets may overlap, which is commonly seen in practice. However, multiple pairs of triplets cannot be efficiently extracted by most previous works. To mitigate this problem, we propose a deep neural network model based on sequence-to-sequence learning, namely the hybrid dual pointer network (HDP), which extracts multiple triplets from a given sentence by generating a hybrid dual pointer sequence. In experiments, we tested our model on the public New York Times (NYT) dataset. The experimental results demonstrate that our model outperforms the state-of-the-art work, achieving a 17.1% improvement in F1.
[ "Semantic Text Processing", "Relation Extraction", "Structured Data in NLP", "Knowledge Representation", "Multimodality", "Information Extraction & Text Mining" ]
[ 72, 75, 50, 18, 74, 3 ]
SCOPUS_ID:85056528697
A Deep Neural Network Model for Target-based Sentiment Analysis
In recent years, with the development of social networks, sentiment analysis has become one of the most important research topics in the field of natural language processing. Deep neural network models combining attention mechanisms have achieved remarkable success in target-based sentiment analysis. In current research, however, the attention mechanism is mostly combined with LSTM networks; such neural network-based architectures generally rely on complex computation and focus only on a single target, so it is difficult to effectively distinguish the different polarities of multiple targets in the same sentence. To address this problem, we propose a deep neural network model combining a convolutional neural network and a regional long short-term memory network (CNN-RLSTM) for target-based sentiment analysis. The approach reduces the training time of the neural network model through a regional LSTM. At the same time, the CNN-RLSTM uses a sentence-level CNN to extract sentiment features of the whole sentence and controls the transmission of information through different weight matrices, which allows it to effectively infer the sentiment polarities of different targets in the same sentence. Finally, experimental results on multi-domain datasets in two languages from SemEval2016 and auto data show that our approach yields better performance than SVM and several other neural network models.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:85105446916
A Deep Neural Network Model with Multihop Self-attention Mechanism for Topic Segmentation of Texts
Topic segmentation is an important task in the field of natural language processing (NLP), with applications in information retrieval, text summarization and e-learning. Current neural methods for topic segmentation represent a sentence by a single feature vector, which yields only one kind of semantic information. However, the dependencies between different parts of a sentence rely on more complex semantic information, which cannot be learned by a single-vector representation. In this paper, we present a deep neural model, named MHOPSA-SEG, that captures multi-aspect semantic information for topic segmentation of texts using a multi-hop attention mechanism to address this issue. At each attention step, the model assigns different weights to words depending on the previous memory weights, and can thus capture multiple semantic vector representations of a sentence. We conduct experiments on four datasets, including written texts and lecture transcripts, and the experimental results show that MHOPSA-SEG outperforms state-of-the-art models.
[ "Language Models", "Text Segmentation", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 52, 21, 72, 15 ]
http://arxiv.org/abs/1809.00934v1
A Deep Neural Network Sentence Level Classification Method with Context Information
In the sentence classification task, context formed from sentences adjacent to the sentence being classified can provide important information for classification. This context is, however, often ignored. Where methods do make use of context, only small amounts are considered, making it difficult to scale. We present a new method for sentence classification, Context-LSTM-CNN, that makes use of potentially large contexts. The method also utilizes long-range dependencies within the sentence being classified, using an LSTM, and short-span features, using a stacked CNN. Our experiments demonstrate that this approach consistently improves over previous methods on two different datasets.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85134218653
A Deep Neural Network-based Model for the Sentiment Analysis of Dravidian Code-mixed Social Media Posts
Sentiment analysis is one of the most essential tasks in natural language processing. The research community has recently presented a slew of papers aimed at detecting sentiment in English social media posts. Despite this, research on recognising sentiment in Dravidian Kannada-English, Malayalam-English, and Tamil-English posts has been limited. This study offers a dense neural network-based model for categorising posts in Kannada-English, Malayalam-English, and Tamil-English into five different sentiment classes. When character-level TF-IDF features are combined with a dense neural network, encouraging results are obtained. The proposed model achieved weighted F1-scores of 0.61, 0.72, and 0.60 for Kannada-English, Malayalam-English, and Tamil-English social media posts, respectively. The code for the proposed models is available at https://github.com/Abhinavkmr/Deep-Neural-Network-based-Model-for-the-Sentiment-Analysis-of-Dravidian-Social-Media-Posts.git.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85060036353
A Deep Recurrent Neural Network with BiLSTM model for Sentiment Classification
Sentiment classification analyses people's opinions or sentiments. Sentiment analysis systems are applied on social platforms and in almost every business, because opinions and sentiments reflect people's beliefs, choices, and activities. With these systems, it is possible to inform decisions ranging from business strategy to political agendas. In recent times, a huge number of people have been sharing their opinions on the Internet in Bengali. In this paper, a new approach to sentiment classification of Bengali text using a Recurrent Neural Network (RNN) is presented. Using a deep recurrent neural network with BiLSTM, an accuracy of 85.67% is achieved.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 78, 24, 3 ]
http://arxiv.org/abs/1705.04304v3
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and the continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL, the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher-quality summaries.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
https://aclanthology.org//2020.ngt-1.7/
A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards
Cross-lingual text summarization aims at generating a document summary in one language given input in another language. It is a practically important but under-explored task, primarily due to the dearth of available data. Existing methods resort to machine translation to synthesize training data, but such pipeline approaches suffer from error propagation. In this work, we propose an end-to-end cross-lingual text summarization model. The model uses reinforcement learning to directly optimize a bilingual semantic similarity metric between the summaries generated in a target language and gold summaries in a source language. We also introduce techniques to pre-train the model leveraging monolingual summarization and machine translation objectives. Experimental results in both English–Chinese and English–German cross-lingual summarization settings demonstrate the effectiveness of our methods. In addition, we find that reinforcement learning models with bilingual semantic similarity as rewards generate more fluent sentences than strong baselines.
[ "Language Models", "Low-Resource NLP", "Machine Translation", "Semantic Text Processing", "Information Extraction & Text Mining", "Semantic Similarity", "Summarization", "Text Generation", "Responsible & Trustworthy NLP", "Cross-Lingual Transfer", "Multilinguality" ]
[ 52, 80, 51, 72, 3, 53, 30, 47, 4, 19, 0 ]
http://arxiv.org/abs/1809.03118v1
A Deep Reinforced Sequence-to-Set Model for Multi-Label Text Classification
Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations; however, traditional methods tend to ignore these correlations. In order to capture the correlations between labels, the sequence-to-sequence (Seq2Seq) model views the MLTC task as a sequence generation problem and achieves excellent performance on this task. However, the Seq2Seq model is not inherently suitable for the MLTC task: it requires humans to predefine the order of the output labels, while the output labels in the MLTC task essentially form an unordered set rather than an ordered sequence, which conflicts with the Seq2Seq model's strict requirement on label order. In this paper, we propose a novel sequence-to-set framework utilizing deep reinforcement learning, which not only captures the correlations between labels but also reduces the dependence on label order. Extensive experimental results show that our proposed method outperforms competitive baselines by a large margin.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
http://arxiv.org/abs/1709.02349v2
A Deep Reinforcement Learning Chatbot
We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural network and latent variable neural network models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed significantly better than many competing systems. Due to its machine learning architecture, the system is likely to improve with additional data.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
http://arxiv.org/abs/1801.06700v1
A Deep Reinforcement Learning Chatbot (Short Version)
We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including neural network and template-based models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed significantly better than other systems. The results highlight the potential of coupling ensemble systems with deep reinforcement learning as a fruitful path for developing real-world, open-domain conversational agents.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85076840283
A Deep Self-learning Classification Framework for Incomplete Medical Patents with Multi-label
The classification of medical patents plays an important role for pharmaceutical companies, since well-labeled medical patents can significantly accelerate the process of new drug research. Previous studies using machine learning methods have focused on classifying medical patents with a single label. However, the classification of medical patents is a multi-label task, and the available data are often incomplete, with part of the patent information missing. In this paper, we propose a deep self-learning classification framework that can handle incomplete medical patents in the multi-label setting. It consists of a text processor and a patent classifier. For the text processor, a professional medical text thesaurus is built via the GloVe method, which can learn more specialized vocabulary. For the patent classifier, we adopt a bidirectional long short-term memory (Bi-LSTM) model, which can learn hidden knowledge from medical patents and automatically associate appropriate labels with them. Furthermore, an advanced focal loss function is designed to further improve classification accuracy. Experiments on the Thomson Reuters dataset demonstrate that our proposed method outperforms other existing methods in terms of precision and recall when dealing with incomplete, multi-label medical patents.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85103895843
A Deep Semantic Alignment Network for the Cross-Modal Image-Text Retrieval in Remote Sensing
Because of the rapid growth of multimodal data from the internet and social media, cross-modal retrieval has become an important and valuable task in recent years. The purpose of cross-modal retrieval is to obtain result data in one modality (e.g., image) that is semantically similar to the query data in another modality (e.g., text). In the field of remote sensing, despite a great number of existing works on image retrieval, there has been only a small amount of research on cross-modal image-text retrieval, due to the scarcity of datasets and the complicated characteristics of remote sensing image data. In this article, we introduce a novel cross-modal image-text retrieval network to establish a direct relationship between remote sensing images and their paired text data. Specifically, in our framework, we designed a semantic alignment module to fully explore the latent correspondence between images and text, in which we used attention and gate mechanisms to filter and optimize data features so that more discriminative feature representations can be obtained. Experimental results on four benchmark remote sensing datasets, including UCMerced-LandUse-Captions, Sydney-Captions, RSICD, and NWPU-RESISC45-Captions, showed that our proposed method outperformed other baselines and achieved state-of-the-art performance in remote sensing image-text retrieval tasks.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Information Retrieval", "Multimodality" ]
[ 20, 39, 47, 24, 74 ]
http://arxiv.org/abs/1812.00176v1
A Deep Sequential Model for Discourse Parsing on Multi-Party Dialogues
Discourse structures are beneficial for various NLP tasks such as dialogue understanding, question answering, sentiment analysis, and so on. This paper presents a deep sequential model for parsing discourse dependency structures of multi-party dialogues. The proposed model aims to construct a discourse dependency tree by predicting dependency relations and constructing the discourse structure jointly and alternately. It makes a sequential scan of the Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model decides to which previous EDU the current one should link and what the corresponding relation type is. The predicted link and relation type are then used to build the discourse structure incrementally with a structured encoder. During link prediction and relation classification, the model utilizes not only local information that represents the concerned EDUs, but also global information that encodes the EDU sequence and the discourse structure that is already built at the current step. Experiments show that the proposed model outperforms all the state-of-the-art baselines.
[ "Semantic Text Processing", "Semantic Parsing", "Discourse & Pragmatics", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 72, 40, 71, 11, 38 ]
SCOPUS_ID:85093838969
A Deep Transfer Learning Approach for Fake News Detection
Detecting fake, incorrect, or misleading information has attracted the attention of researchers and developers because of the huge information overload on the web. This problem can be considered equivalent to lie detection, truthfulness identification, or stance detection. In our particular work, we focus on deciding whether the title of a news article is consistent with its body text, a problem equivalent to fake information identification. In this paper, we propose a deep transfer learning approach in which the problem of detecting title-body consistency is posed from the viewpoint of Textual Entailment (TE), where the title is considered the hypothesis and the news body is treated as the premise. The idea is to decide whether the body entails the title or not. Evaluation on the existing benchmark dataset, namely the Fake News Challenge (FNC) dataset (released in Fake News Challenge Stage 1 (FNC-I): Stance Detection), shows the efficacy of our proposed approach in comparison to state-of-the-art systems.
[ "Language Models", "Semantic Text Processing", "Opinion Mining", "Ethical NLP", "Sentiment Analysis", "Reasoning", "Fact & Claim Verification", "Textual Inference", "Responsible & Trustworthy NLP" ]
[ 52, 72, 49, 17, 78, 8, 46, 22, 4 ]
SCOPUS_ID:85144347580
A Deep Transfer Learning Method for Cross-Lingual Natural Language Inference
Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), has been one of the central tasks in Artificial Intelligence (AI) and Natural Language Processing (NLP). RTE between two pieces of text is a crucial problem, and it poses further challenges when two different languages are involved, i.e., in the cross-lingual scenario. This paper proposes an effective transfer learning approach for cross-lingual NLI. We perform experiments on English-Hindi language pairs in the cross-lingual setting and find that our novel loss formulation can enhance the performance of the baseline model by up to 2%. To further assess the effectiveness of our method, we perform additional experiments on every possible language pair using four European languages, namely French, German, Bulgarian, and Turkish, on top of the XNLI dataset. Evaluation results yield up to 10% performance improvement over the respective baseline models, in some cases surpassing the state-of-the-art (SOTA). It is also worth noting that our proposed model has 110M parameters, far fewer than the SOTA model's 220M. Finally, we argue that our transfer learning-based loss objective is model agnostic and thus can be used with other deep learning-based architectures for cross-lingual NLI.
[ "Language Models", "Semantic Text Processing", "Reasoning", "Cross-Lingual Transfer", "Textual Inference", "Multilinguality" ]
[ 52, 72, 8, 19, 22, 0 ]
SCOPUS_ID:85118180387
A Deep Transfer Learning Method for Medical Question Matching
Question matching (QM) is a fundamental task of information retrieval (IR)-based question-answering (QA) systems. It can be formulated as a paraphrase identification (PI) problem and relies on large-scale labeled data, which is not easy to obtain, especially in specific domains such as medicine. In this paper, we investigate transfer learning for QM in the medical domain, aiming to adapt the shared knowledge learnt from questions about one disease to other diseases. We first compare six state-of-the-art deep learning methods for QM in Chinese on different diseases, and then apply a transfer learning framework to these deep learning methods. Experiments on a Chinese medical corpus covering three children's diseases show that the proposed transfer learning framework is efficient and can bring stable performance improvements for most deep learning-based QM models on small-scale medical corpora of different diseases.
[ "Language Models", "Paraphrasing", "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Text Generation" ]
[ 52, 32, 72, 27, 11, 47 ]
SCOPUS_ID:84969791928
A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data
Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.
[ "Visual Data in NLP", "Topic Modeling", "Information Extraction & Text Mining", "Multimodality" ]
[ 20, 9, 3, 74 ]
SCOPUS_ID:85077207714
A Deep learning approach for Arabic text classification
Advancements in information technology have produced massive amounts of textual material that is available online. Text classification algorithms are at the core of many natural language processing (NLP) applications. Several algorithms have been implemented to tackle the classification problem for English and other European languages, but few attempts have been made to solve the problem of Arabic text classification. In this paper, we demonstrate a feed-forward deep learning (DL) neural network for the Arabic text classification problem. The first layer uses term frequency-inverse document frequency (TF-IDF) vectors constructed from the most frequent words of the document collection. The output of the first layer is used as input to the second layer. To reduce the classification error rate, we used the Adam optimizer. We conducted a set of experiments on two multi-class Arabic datasets to evaluate our approach based on standard measures such as precision, recall, F-measure, support, accuracy, and time to build the model. We compared our approach with the logistic regression (LR) algorithm. The experiments showed that the deep learning approach outperformed the logistic regression algorithm for Arabic text classification.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]