Dataset schema:
  id: string (length 20 to 52 characters)
  title: string (length 3 to 459 characters)
  abstract: string (length 0 to 12.3k characters)
  classification_labels: list
  numerical_classification_labels: list
Each record below lists, in order: id, title, abstract (may be empty), classification_labels, numerical_classification_labels.
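For orientation, here is a minimal sketch of how records with the schema above could be loaded and filtered in Python. The file name records.jsonl and the JSON Lines layout are assumptions for illustration, not a stated distribution format of this dataset.

import json

# Load records with the five fields listed in the schema above.
records = []
with open("records.jsonl", encoding="utf-8") as fh:
    for line in fh:
        rec = json.loads(line)
        records.append({
            "id": rec["id"],
            "title": rec["title"],
            "abstract": rec.get("abstract", ""),  # abstract may be empty
            "classification_labels": rec["classification_labels"],
            "numerical_classification_labels": rec["numerical_classification_labels"],
        })

# Example: count how many records carry the "Sentiment Analysis" label.
n_sentiment = sum("Sentiment Analysis" in r["classification_labels"] for r in records)
print(len(records), n_sentiment)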
SCOPUS_ID:85144180199
A DCRC Model for Text Classification
Traditional text classification models have drawbacks, such as the inability to focus on the important parts of a text's contextual information. To address this problem, we fuse the bidirectional gated recurrent unit network BiGRU with a convolutional neural network that receives the text sequence as input, reducing the dimensionality of the input sequence and limiting the loss of text features caused by the length and context dependency of the input text. To extract the important features of the text, we add the bidirectional long short-term memory network BiLSTM to capture the main features and further reduce feature loss. We thus propose a BiGRU-CNN-BiLSTM model (DCRC model) based on CNN, GRU and LSTM, which is trained and validated on the THUCNews and Toutiao News datasets. In experimental comparisons, the model outperformed traditional models in terms of accuracy, recall and F1 score.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
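As a rough illustration of the kind of BiGRU + CNN + BiLSTM stack the abstract above describes, here is a hedged Keras sketch. The vocabulary size, sequence length, layer widths and filter settings are illustrative assumptions, not the authors' reported configuration.

from tensorflow.keras import layers, models

def build_dcrc_like_model(vocab_size=50000, max_len=256, num_classes=10):
    # Illustrative stack: BiGRU for context, Conv1D + pooling for local features
    # and dimensionality reduction, BiLSTM to capture the main sequence features.
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, 128)(inputs)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model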
SCOPUS_ID:84880647287
A DEVS-based M&S method for large-scale multi-agent systems
ABMS offers various simulation systems, tools, toolkits and languages for multi-agent system research. However, a dedicated M&S method is needed for L-systems (large-scale multi-agent systems) research, as current ABMS methods have difficulty dealing with the scale and heterogeneity issues of L-systems. This paper focuses on the modelling aspect of the method by combining cognitive modelling, agent organization theory and a DEVS-based framework. We first explain the motivation for our research and review the relevant literature behind our choices. We then present a design for constructing a DEVS-based system model: we choose PRS as our preferred cognitive architecture and construct a DEVS-based simulation framework guided by agent organization theory. Finally, we summarize the benefits of our approach compared to other methods.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 2, 48, 57 ]
SCOPUS_ID:85083756664
A DGA Domain Name Detection Method Based on Deep Learning Models with Mixed Word Embedding
DGA domain name detection plays a key role in preventing botnet attacks. It is practically significant for generating threat intelligence, blocking botnet command and control traffic, and maintaining cyber security. In recent years, DGA domain name detection algorithms have made great progress, moving from methods that use manually crafted features to methods that automatically extract features with deep learning. Multiple studies have indicated that deep learning methods perform better in DGA detection. However, DGA families are diverse and domain name data are imbalanced in the multi-class classification of different DGA families, so many existing deep learning models can still be improved. To address these problems, a mixed word embedding method is designed, based on character-level embedding and bigram-level embedding, to improve the information utilization of domain names. The paper also designs a deep learning model that uses the mixed word embedding. Finally, an experiment with multiple comparison models is conducted to test the model. The experimental results show that the model based on the mixed word embedding achieves better performance in DGA domain name detection and multi-class classification than models based on character-level embedding alone, especially for small DGA families with few samples, demonstrating the effectiveness of the proposed approach.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
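To make the "mixed" representation idea in the abstract above concrete, here is a hedged sketch that indexes a domain name both at the character level and at the bigram level so that a model could embed and concatenate both views. Vocabulary handling is simplified, and the original paper's exact scheme may differ.

def char_and_bigram_ids(domain, char_vocab, bigram_vocab, unk=1):
    # Character-level view and bigram-level view of the same domain string.
    chars = list(domain)
    bigrams = [domain[i:i + 2] for i in range(len(domain) - 1)]
    char_ids = [char_vocab.get(c, unk) for c in chars]
    bigram_ids = [bigram_vocab.get(b, unk) for b in bigrams]
    return char_ids, bigram_ids

# Toy vocabularies; in practice the bigram vocabulary is built from training domains.
char_vocab = {c: i + 2 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz0123456789-.")}
bigram_vocab = {}
print(char_and_bigram_ids("examplegenerated.com", char_vocab, bigram_vocab))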
SCOPUS_ID:85141474359
A DIACHRONIC DESCRIPTION OF THE RHETORIC OF THE 18TH CENTURY
The article outlines the main ideas of a research project aimed to create scholarly papers and databases reflecting the formation of the Russian rhetorical tradition. Up to now, literary norms of the Russian language have been studied either in relation to syntactic structures severed from the living practice of versification, or in terms of rhetorical techniques typical of individual literary schools. The main goal of the project is not only fixing, describing and interpreting the techniques for enhancing speech embellishment and expressiveness as such and identifying the main trends in the field of composition and quantitative data about the use of tropes and figures of speech by the poets of the 18th century, but also analyzing the theoretical recommendations of the first Russian treatises on the art of eloquence, based on the rich tradition of ancient and modern European rhetorical manuals, in their correspondence with the real poetic practice of the Russian poets. The sources of the study include “De arte Rhetorica. Libri X” by F. Prokopovich, guides to eloquence by M. V. Lomonosov, some other rhetorical works of the 18th century, as well as poetic works by F. Prokopovich, A. D. Kantemir, and M. V. Lomonosov. The methods used in the course of the project research and within the framework of the interpretation of the material of this article make it possible to ensure close interaction between the linguistic, linguopoetical, linguo-stylistic and critical literary approaches to the analysis of a literary text. In the 18th century, in the era of the formation of the Russian national language and Russian literature, rhetorical culture was the only current normative basis for the formation of new general literary rules and the genre-stylistic system, and the process of trope and figure of speech unfolding in artistic text was carried out along with the expansion of the potential of the lexical and the grammatical subsystems of the language. The creation of a database that demonstrates the specific features of “rhetorical portraits” of the most famous poets and rhetoricians of the era of the Russian baroque and classicism – Feofan Prokopovich, Antioch Kantemir, Mikhail Lomonosov, Vasily Trediakovsky, Aleksandr Sumarokov – may become an important result of the project.
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
SCOPUS_ID:85125558731
A DIAGNOSTIC STUDY OF VISUAL QUESTION ANSWERING WITH ANALOGICAL REASONING
The deep learning community has made rapid progress in low-level visual perception tasks such as object localization, detection and segmentation. However, for tasks such as Visual Question Answering (VQA) and visual language grounding that require high-level reasoning abilities, huge gaps still exist between artificial systems and human intelligence. In this work, we perform a diagnostic study of recent popular VQA models in terms of analogical reasoning. We term this setting Analogical VQA, where a system needs to reason over a group of images to find analogical relations among them in order to correctly answer a natural language question. To study the task in depth, we propose an initial diagnostic synthetic dataset, CLEVR-Analogy, which tests a range of analogical reasoning abilities (e.g. reasoning on object attributes, spatial relationships, existence, and arithmetic analogies). We benchmark various recent state-of-the-art methods on our dataset, compare the results against human performance, and discover that existing systems fall short when facing analogical reasoning involving spatial relationships. The dataset and code will be publicly available to facilitate future research.
[ "Visual Data in NLP", "Question Answering", "Natural Language Interfaces", "Reasoning", "Multimodality" ]
[ 20, 27, 11, 8, 74 ]
SCOPUS_ID:85071385638
A DIK-based question-answering architecture with multi-sources data for Medical Self-Service (KG)
Medical data is growing rapidly in both volume and velocity, which makes it difficult for users to quickly access valid information. We present a DIK-based Question-Answering Architecture for Medical Self-Service. In addition, we propose an attention-based model to extract high-quality medical entity concepts from Chinese Electronic Medical Records (EMR). We then model the medical data with the DIK architecture (Data graph, Information graph, and Knowledge graph) and construct a Question-Answering model (DIK-QA) for medical self-service that meets users' needs to quickly and accurately find the medical information they require in massive medical data. Finally, we implement this approach and apply it to real-world systems. The experimental results on our medical dataset show that DIK-QA can effectively handle 4W (who/what/why/how) questions and help users find the information they need accurately.
[ "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Natural Language Interfaces", "Multimodality" ]
[ 72, 50, 27, 18, 11, 74 ]
SCOPUS_ID:84911586165
A DINOSAUR CAPER: PSYCHOLINGUISTICS PAST, PRESENT, AND FUTURE
[ "Psycholinguistics", "Linguistics & Cognitive NLP" ]
[ 77, 48 ]
SCOPUS_ID:85124882915
A DISCOURSE ANALYSIS OF CAREER EXPERIENCES OF WOMEN IN THE DEVELOPING COUNTRY
The efforts to reduce the widening effects of structural inequality for women in South Africa have resulted in varied experiences (Burns, Tomita, & Lund, 2017). The study problematised the unresearched and not well-articulated social constructs within the career experiences of women working in a telecommunication company in South Africa. This article argues that the meaning ascribed to the social context and equity policy can better describe the dimensions of the broader issue of gender inequality in post-apartheid South Africa. The study contributes to discourse analysis methods: discourse analysis was used to explain the experiences of three women who are senior managers with at least ten years of experience. The discourse-based understanding of the women's experiences in this study was reframed into and within the interactions of equity policy deliberation, societal factors and the organisational context model. These interactions allowed interpretation of the career choices of women and what they mean for personal development. The model of career experience depicts strong alternative views on a career path for women. The results of this study provide unique findings for justice regulation in the workplace for women in South Africa.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85009081336
A DISCRIMINATIVE TRAINING PROCEDURE BASED ON LANGUAGE MODEL AND DICTIONARY FOR LVCSR
In today's HMM-based speech recognition systems, the parameters are most commonly estimated according to the Maximum Likelihood criterion. Because of limited training data, however, discriminative objectives provide better parameter estimates with respect to the Maximum A-Posteriori decision used for decoding. The question of which distribution functions to discriminate from which, and to what degree, is the most crucial one when performing discriminative parameter estimation. This is particularly difficult because, besides the distribution functions, the recognition procedure is restricted and guided by several other sources of information, such as the language model and transition matrices. This paper extends the approach presented in [10] to the case of triphones, refines the theory and estimation of the state-to-state confusion metric, and proposes an approximation that allows the application of the approach to context-dependent systems at reasonable computational cost. The evaluation is performed on continuous HMM speech recognition systems for the WSJ0 5k task. The results prove the practicability of the approach and its extensions.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
SCOPUS_ID:85102077013
A DNA Cryptographic Solution for Secured Image and Text Encryption
In recent days, DNA cryptography has been gaining popularity for providing better security for image and text data. This paper presents a DNA-based cryptographic solution for image and textual information. Image encryption involves scrambling at the pixel and bit levels based on hyperchaotic sequences. Both image and text encryption involve basic DNA encoding rules, key combination, and conversion of data into binary and other forms. This new DNA cryptographic approach adds more dynamicity and randomness, making the cipher and keys harder to break. The proposed image encryption technique presents better results for various parameters, such as Image Histogram, Correlation Coefficient, Information Entropy, Number of Pixels Change Rate (NPCR), Unified Average Changing Intensity (UACI), Key Space, and Sensitivity, compared with existing approaches. Improved time and space complexity and random key generation for text encryption show that DNA cryptography can be a better security solution for new applications.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:80052423628
A DNA assembly model of sentence generation
Recent results of corpus-based linguistics demonstrate that context-appropriate sentences can be generated by a stochastic constraint satisfaction process. Exploiting the similarity between constraint satisfaction and DNA self-assembly, we explore a DNA assembly model of sentence generation. The words and phrases in a language corpus are encoded as DNA molecules to build a language model of the corpus. Given a seed word, new sentences are constructed by a parallel DNA assembly process based on the probability distribution of the word and phrase molecules. Here, we present our DNA code word design and report on a successful demonstration of its feasibility in small-scale wet DNA experiments.
[ "Language Models", "Semantic Text Processing", "Text Generation" ]
[ 52, 72, 47 ]
SCOPUS_ID:85145877569
A DNN-Based Accurate Masking Using Significant Feature Sets
Monaural speech separation has remained a very challenging problem for a long time. It can be addressed with a supervised learning approach that uses features of the noisy input to predict an accurate time-frequency mask. Effective acoustic-phonetic features can help in accurate mask prediction at low Signal-to-Noise Ratios (SNRs). Individual features capture specific attributes of the audio signal; therefore, it is essential to employ a set of features. This work examines different combinations of monaural features as input and the ideal ratio mask as the training target for the DNN model. Feature combination sets are constructed by examining single features and then combining the most relevant ones. The results are evaluated for different feature combinations under non-stationary noises at low SNR levels. Feature performance is evaluated using intelligibility and quality measures. A combination of two features is considered the best feature combination, as it yields a significant increase in speech intelligibility compared to individual features and combinations of more than two features.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Syntactic Text Processing", "Phonetics", "Multimodality" ]
[ 52, 72, 70, 15, 64, 74 ]
SCOPUS_ID:85062795116
A DNN-Based Framework for Converting Sign Language to Mandarin-Tibetan Cross-Lingual Emotional Speech
We propose a method for converting sign language to Mandarin-Tibetan bi-lingual emotional speech using a deep neural network (DNN)-based framework. We use a support vector machine (SVM) to classify the categories of sign language from sign-language features extracted from sign-language images with a trained DNN model. The categories of sign language are then transcribed into Mandarin and Tibetan words. Meanwhile, we extract facial features from facial expression images with a DNN model and classify facial emotion labels with an SVM. We also realize DNN-based Mandarin-Tibetan bi-lingual speech synthesis through speaker-adaptive training of the DNN. Finally, we synthesize Mandarin or Tibetan emotional speech from the obtained sign-language transcriptions and the corresponding facial emotion labels. The objective evaluation shows that our method achieves a recognition rate of 90.7% on static sign language. The facial expression recognition rates on the extended Cohn-Kanade database (CK+) and the Japanese female facial expression (JAFFE) database are 94.6% and 80.3%, respectively. Subjective evaluation shows that the emotional mean opinion score of the synthesized emotional speech is 4.1. We also employ the pleasure-arousal-dominance (PAD) test to evaluate the emotional similarity between the facial expressions and the synthesized emotional speech. The results show that the PAD values of the facial expressions are very close to those of the synthesized emotional speech.
[ "Multilinguality", "Visual Data in NLP", "Information Extraction & Text Mining", "Information Retrieval", "Speech & Audio in NLP", "Cross-Lingual Transfer", "Text Classification", "Multimodality" ]
[ 0, 20, 3, 24, 70, 19, 36, 74 ]
SCOPUS_ID:85098165215
A DNN-HMM-DNN hybrid model for discovering word-like units from spoken captions and image regions
Discovering word-like units without textual transcriptions is an important step in low-resource speech technology. In this work, we demonstrate a model inspired by statistical machine translation and hidden Markov model / deep neural network (HMM/DNN) hybrid systems. Our learning algorithm is capable of discovering the visual and acoustic correlates of K distinct words in an unknown language by simultaneously learning the mapping from image regions to concepts (the first DNN), the mapping from acoustic feature vectors to phones (the second DNN), and the optimum alignment between the two (the HMM). In a simulated low-resource setting using the MSCOCO and SpeechCOCO datasets, our model achieves 62.4% alignment accuracy and outperforms the audio-only segmental embedded GMM approach on standard word discovery evaluation metrics.
[ "Multilinguality", "Visual Data in NLP", "Low-Resource NLP", "Machine Translation", "Captioning", "Speech & Audio in NLP", "Text Generation", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 0, 20, 80, 51, 39, 70, 47, 4, 74 ]
SCOPUS_ID:67349252831
A Danish phonetically annotated spontaneous speech corpus (DanPASS)
A corpus is described consisting of non-scripted monologues and dialogues, recorded by 27 speakers, comprising a total of 73,227 running words, corresponding to 9 h and 46 min of speech. The monologues were recorded as one-way communication with an unseen partner where the speaker performed three different tasks: (s)he described a network consisting of various geometrical shapes in various colours, (s)he guided the listener through four different routes in a virtual city map, and (s)he instructed the listener how to build a house from its individual pieces. The dialogues are replicas of the HCRC map tasks. Annotation is performed in Praat. The sound files are segmented into prosodic phrases, words, and syllables. The files are supplied, in separate interval tiers, with an orthographical representation, detailed part-of-speech tags, simplified part-of-speech tags, a phonemic notation, a semi-narrow phonetic notation, a symbolic representation of the pitch relation between each stressed and post-tonic syllable, and a symbolic representation of the phrasal intonation.
[ "Speech & Audio in NLP", "Syntactic Text Processing", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Phonetics", "Multimodality" ]
[ 70, 15, 11, 38, 64, 74 ]
SCOPUS_ID:85149873292
A Data Augmentation Method For English-Vietnamese Neural Machine Translation
The translation quality of machine translation systems depends on the parallel corpus used for training, in particular its quantity and quality. However, building a high-quality and large-scale parallel corpus is complex and expensive, particularly for a specific domain. Therefore, data augmentation techniques are widely used in machine translation. The input of the back-translation method is monolingual text, which is available from many sources, so this method can be easily and effectively implemented to generate synthetic parallel data. In practice, monolingual texts can be collected from different sources, among which texts from websites often contain errors in grammar and spelling, mismatched sentences or free-style writing. This reduces the quality of the output translation, leading to a low-quality parallel corpus generated by back-translation. In this study, we propose a method for improving the quality of monolingual texts for back-translation. Moreover, we supplement the data by pruning the translation table. We experimented with English-Vietnamese neural machine translation using the IWSLT2015 dataset for training and testing in the legal domain. The results show that the proposed method can effectively augment parallel data for machine translation, thereby improving translation quality. In our experimental cases, the BLEU score increased by 16.37 points compared to the baseline system.
[ "Multilinguality", "Low-Resource NLP", "Text Error Correction", "Machine Translation", "Syntactic Text Processing", "Text Generation", "Responsible & Trustworthy NLP" ]
[ 0, 80, 26, 51, 15, 47, 4 ]
http://arxiv.org/abs/2110.09570v1
A Data Bootstrapping Recipe for Low Resource Multilingual Relation Classification
Relation classification (sometimes called 'extraction') requires trustworthy datasets for fine-tuning large language models, as well as for evaluation. Data collection is challenging for Indian languages, because they are syntactically and morphologically diverse, as well as different from resource-rich languages like English. Despite recent interest in deep generative models for Indian languages, relation classification is still not well served by public data sets. In response, we present IndoRE, a dataset with 21K entity and relation tagged gold sentences in three Indian languages, plus English. We start with a multilingual BERT (mBERT) based system that captures entity span positions and type information and provides competitive monolingual relation classification. Using this system, we explore and compare transfer mechanisms between languages. In particular, we study the accuracy efficiency tradeoff between expensive gold instances vs. translated and aligned 'silver' instances. We release the dataset for future research.
[ "Multilinguality", "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 0, 80, 52, 72, 24, 3, 36, 4 ]
http://arxiv.org/abs/2205.03403v1
A Data Cartography based MixUp for Pre-trained Language Models
MixUp is a data augmentation strategy in which additional samples are generated during training by combining random pairs of training samples and their labels. However, selecting random pairs is potentially not an optimal choice. In this work, we propose TDMixUp, a novel MixUp strategy that leverages Training Dynamics and allows more informative samples to be combined for generating new data samples. Our proposed TDMixUp first measures confidence and variability (Swayamdipta et al., 2020) and the Area Under the Margin (AUM) (Pleiss et al., 2020) to identify the characteristics of training samples (e.g., easy-to-learn or ambiguous samples), and then interpolates these characterized samples. We empirically validate that our method not only achieves competitive performance using a smaller subset of the training data compared with strong baselines, but also yields lower expected calibration error on the pre-trained language model, BERT, in both in-domain and out-of-domain settings across a wide range of NLP tasks. We publicly release our code.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
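For context, here is a hedged PyTorch sketch of the generic MixUp interpolation that the abstract above builds on: interpolating hidden representations and soft labels of two samples. The training-dynamics-based selection of informative pairs described in TDMixUp is not reproduced here, and the tensor shapes are illustrative assumptions.

import torch

def mixup(h_a, h_b, y_a, y_b, alpha=0.4):
    # Sample the mixing coefficient from a Beta distribution and interpolate
    # both the hidden representations and the (soft) label vectors.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    h_mix = lam * h_a + (1.0 - lam) * h_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return h_mix, y_mix

# Toy usage: mix two sentence representations (e.g. [CLS] vectors) and labels.
h_a, h_b = torch.randn(768), torch.randn(768)
y_a, y_b = torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])
h_mix, y_mix = mixup(h_a, h_b, y_a, y_b)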
http://arxiv.org/abs/1203.5084v1
A Data Driven Approach to Query Expansion in Question Answering
Automated answering of natural language questions is an interesting and useful problem to solve. Question answering (QA) systems often perform information retrieval at an initial stage. Information retrieval (IR) performance, provided by engines such as Lucene, places a bound on overall system performance. For example, no answer-bearing documents are retrieved at low ranks for almost 40% of questions. In this paper, answer texts from previous QA evaluations held as part of the Text REtrieval Conferences (TREC) are paired with queries and analysed in an attempt to identify performance-enhancing words. These words are then used to evaluate the performance of a query expansion method. Data-driven extension words were found to help in over 70% of difficult questions. These words can be used to improve and evaluate query expansion methods. Simple blind relevance feedback (RF) was correctly predicted as unlikely to help overall performance, and a possible explanation is provided for its low value in IR for QA.
[ "Natural Language Interfaces", "Question Answering", "Information Retrieval" ]
[ 11, 27, 24 ]
https://aclanthology.org//W06-3005/
A Data Driven Approach to Relevancy Recognition for Contextual Question Answering
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
http://arxiv.org/abs/2002.05955v1
A Data Efficient End-To-End Spoken Language Understanding Architecture
End-to-end architectures have been recently proposed for spoken language understanding (SLU) and semantic parsing. Based on large amounts of data, these models jointly learn acoustic and linguistic-sequential features. Such architectures give very good results for domain, intent and slot detection, but their application to more complex semantic chunking and tagging tasks is less straightforward. For that reason, in many cases, models are combined with an external language model to enhance their performance. In this paper we introduce a data-efficient system that is trained end-to-end, with no additional pre-trained external module. One key feature of our approach is an incremental training procedure in which acoustic, language and semantic models are trained sequentially, one after the other. The proposed model has a reasonable size and achieves competitive results with respect to the state of the art while using a small training dataset. In particular, we reach a 24.02% Concept Error Rate (CER) on MEDIA/test while training on MEDIA/train without any additional data.
[ "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 4, 68 ]
SCOPUS_ID:85138934037
A Data Entry Optical Character Recognition Tool using Convolutional Neural Networks
Almost all institutions and organizations rely substantially on data to run their operations. Data is necessary for making informed decisions, adapting to change, and defining strategic objectives. Data administration has always relied on manual data entry. Manual input used to entail transferring data from various documents into record books, ledger books, and other such books. Manual data entry, as used in recent years, comprises manually entering particular and predetermined data, such as customer name, business kind, and money amount, into a target program from various sources, such as paper bills, invoices, orders, and receipts. Depending on the sort of business, the target program can be handwritten records, spreadsheets, or computer databases. Several businesses require manual data entry, which has a high rate of mistakes, because the manual approach places far too much reliance on the ability of humans to comprehend handwritten documents. As a result, a method for retrieving and storing information from images, particularly text, is required. OCR (optical character recognition) is a rapidly growing topic of research aimed at creating a computer system that can automatically extract and interpret text from images. OCR converts any type of text or text-containing documents, such as handwritten text, printed text, or scanned text images, into an editable digital format for deeper and more complex processing. As a result, OCR enables a machine to recognize text in such documents without the need for human intervention. In order to achieve successful automation, a few significant difficulties must be identified and resolved. One of the most pressing issues is the quality of character typefaces in paper documents, as well as image quality; the computer system may not correctly recognize characters as a result of these difficulties. We examine OCR using four different approaches in this research. We begin by laying out all of the possible issues that may arise during the OCR stages. We then go over the pre-processing, segmentation, normalization, feature extraction, classification, and post-processing aspects of an OCR system. As a result, this discussion paints a rather complete picture of the current state of the text recognition domain.
[ "Visual Data in NLP", "Programming Languages in NLP", "Multimodality" ]
[ 20, 55, 74 ]
SCOPUS_ID:85099573410
A Data Indexing Technique to Improve the Search Latency of and Queries for Large Scale Textual Documents
Boolean AND queries (BAQ) are one of the most important types of queries used in text searching. In this paper, a graph-based indexing technique is proposed to improve the search latency of BAQ. It shows how a graph structure represented using a hash table can reduce the number of intersections needed for the execution of BAQ. The performance of the proposed technique is compared with one of the most widely used index structures for textual documents called Inverted Index. A detailed performance analysis is performed through prototyping and measurement on a system subjected to a synthetic workload. To get further performance insights, the proposed graph-based indexing technique is also compared with an enterprise-level search engine called Elasticsearch which uses Inverted Index at its core. The analysis shows that the graph-based indexing technique can reduce the latency for executing BAQ significantly in comparison to the other techniques.
[ "Indexing", "Structured Data in NLP", "Information Retrieval", "Multimodality" ]
[ 69, 50, 24, 74 ]
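To ground the Boolean AND query (BAQ) discussion above, here is a hedged sketch of the inverted-index baseline that the paper compares against: each term maps to the set of document ids containing it, and an AND query intersects the posting sets, smallest first. The graph/hash-table structure proposed in the paper is an alternative to this baseline and is not reproduced here.

from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def and_query(self, terms):
        # Intersect posting sets, smallest first, to keep intermediate results small.
        sets = sorted((self.postings.get(t, set()) for t in terms), key=len)
        if not sets or not sets[0]:
            return set()
        result = set(sets[0])
        for s in sets[1:]:
            result &= s
            if not result:
                break
        return result

idx = InvertedIndex()
idx.add(1, "graph based indexing for text search")
idx.add(2, "inverted index for boolean queries")
print(idx.and_query(["index", "boolean"]))  # {2}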
SCOPUS_ID:85064530230
A Data Preprocessing Method to Classify and Summarize Aspect-Based Opinions Using Deep Learning
Opinion summarization based on aspect analysis of products, events or topics is a very interesting topic in natural language processing. Opinions about objects are often expressed in many different ways. Therefore, it is important to express the characteristics of a product, event or topic in the final summary compiled by an automatic summarizing system. This paper proposes a method for data preprocessing at the sentence level of a text using Convolutional Neural Networks. The corpus includes Vietnamese opinions on cars collected from social networking sites, forums, online newspapers and the websites of automobile dealers. The data processing phase standardizes the terms for product aspects that occur in aspect-expressing opinions; these are the aspects used by manufacturers. Similarly, standardization is performed for both positive and negative terms used in opinions: the sentiment terms in the opinions are replaced by standardized sentiment terms expressing the same sentiment polarities as those being replaced. This standardization is supported by a semantic and sentiment ontology with a tree hierarchy for the car domain, which ensures that the semantics and sentiment of the original opinion are not changed. The experimental results of the paper show that the proposed method gives better results than using no data preprocessing method for deep learning.
[ "Semantic Text Processing", "Information Retrieval", "Summarization", "Knowledge Representation", "Text Generation", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 30, 18, 47, 78, 36, 3 ]
SCOPUS_ID:85032840925
A Data Purpose Case Study of Privacy Policies
Privacy laws and international privacy standards require that companies collect only the data they have a stated purpose for, called collection limitation. Furthermore, these regimes prescribe that companies will not use data for purposes other than the purposes for which they were collected, called use limitation, except for legal purposes and when the user provides consent. To help companies write better privacy requirements that embody the use limitations and collection limitation principles, we conducted a case study to identify how purpose is expressed among five privacy policies from the shopping domain. Using content analysis, we discovered six exclusive data purpose categories. In addition, we observed natural language patterns to express purpose. Finally, we found that data purpose specificity varies with the specificity of information type descriptions. We believe this taxonomy and the patterns can help policy analysts discover missing or underspecified purposes to better comply with the collection and use limitation principles.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
SCOPUS_ID:85090095497
A Data Science Approach to Analysis of Tweets Based on Cyclone Fani
The advent of social media has contributed to faster as well as wider propagation of information and emotions of people. In the time of emergencies and natural disasters, social media becomes an important tool for communication, spreading of alerts and knowing the needs and feelings of people in crisis. In this paper, analysis of the tweets of people based on cyclone Fani, which struck the Eastern region of India and adjoining regions in the month of May 2019, has been presented. The study has been divided into three phases—onset of cyclone Fani, during cyclone Fani and aftermath of cyclone Fani. As part of the primary analysis, Word Cloud representations have been used to depict the most frequent words in the tweets during all three phases of cyclone Fani. After that, Word Embedding using Word2Vec has been carried out using both Skip-Gram and Continuous-Bag-of-Words approaches. Using Principal Component Analysis, the results have been presented as bubble plots. Then, Sentiment Analysis using Naive Bayes Classifier has been performed and the tweets were classified based on both polarity and subjectivity. The results have been presented using graphical plots, and the accuracy of the results has been analyzed. Finally, an analysis of tweet and retweet counts belonging to credible Twitter handles has been showcased.
[ "Sentiment Analysis" ]
[ 78 ]
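As a small illustration of the Word2Vec + PCA step described in the abstract above, here is a hedged sketch using the gensim 4.x and scikit-learn APIs. The tweet corpus is a toy stand-in, not the cyclone Fani data, and the hyperparameters are illustrative.

from gensim.models import Word2Vec
from sklearn.decomposition import PCA

# Toy tokenized tweets standing in for the cyclone corpus.
tweets = [["cyclone", "fani", "alert", "odisha"],
          ["stay", "safe", "evacuation", "underway"],
          ["relief", "work", "after", "cyclone"]]

# sg=1 selects skip-gram; sg=0 would select CBOW, the other setting mentioned above.
w2v = Word2Vec(tweets, vector_size=50, window=3, min_count=1, sg=1)

# Project word vectors to 2-D, e.g. for a bubble plot.
vectors = [w2v.wv[w] for w in w2v.wv.index_to_key]
coords = PCA(n_components=2).fit_transform(vectors)
print(coords.shape)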
http://arxiv.org/abs/1911.10130v1
A Data Set of Internet Claims and Comparison of their Sentiments with Credibility
In this modern era, communication has become faster and easier. This means fallacious information can spread as fast as reality. Considering the damage that fake news inflicts on the psychology of people, and the fact that such news proliferates faster than truth, we need to study the phenomena that help spread fake news. An unbiased data set that rates news according to reality is necessary to construct predictive models for its classification. This paper describes the methodology to create such a data set. We collect our data from snopes.com, which is a fact-checking organization. Furthermore, we intend to create this data set not only for classification of the news but also to find patterns that explain the intent behind misinformation. We also formally define an Internet Claim, its credibility, and the sentiment behind such a claim. We try to understand the relationship between the sentiment of a claim and its credibility. This relationship sheds light on the bigger picture behind the propagation of misinformation. We pave the way for further research based on the methodology described in this paper to create the data set and to use predictive modeling, along with research based on the psychology and mentality of people, to understand why fake news spreads much faster than reality.
[ "Information Extraction & Text Mining", "Information Retrieval", "Ethical NLP", "Sentiment Analysis", "Reasoning", "Fact & Claim Verification", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 3, 24, 17, 78, 8, 46, 36, 4 ]
https://aclanthology.org//2021.mtsummit-up.24/
A Data-Centric Approach to Real-World Custom NMT for Arabic
In this presentation, we will present our approach to taking Custom NMT to the next level by building tailor-made NMT to fit the needs of businesses seeking to scale in the Arabic-speaking world. In close collaboration with customers in the MENA region and with a deep understanding of their data, we work on building a variety of NMT models that accommodate to the unique challenges of the Arabic language. This session will provide insights into the challenges of acquiring, analyzing, and processing customer data in various sectors, as well as insights into how to best make use of this data to build high-quality Custom NMT models in English-Arabic. Feedback from usage of these models in production will be provided. Furthermore, we will show how to use our translation management system to make the most of the custom NMT, by leveraging the models, fine-tuning and continuing to improve them over time.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/1606.06274v1
A Data-Driven Approach for Semantic Role Labeling from Induced Grammar Structures in Language
Semantic roles play an important part in extracting knowledge from text. Current unsupervised approaches utilize features from grammar structures to induce semantic roles. The dependence on these grammars, however, makes it difficult to adapt to noisy and new languages. In this paper we develop a data-driven approach to identifying semantic roles; the approach is entirely unsupervised up to the point where rules need to be learned to identify the position where the semantic role occurs. Specifically, we develop a modified ADIOS algorithm based on ADIOS (Solan et al., 2005) to learn grammar structures, and use these grammar structures to learn the rules for identifying the semantic roles based on the context in which the grammar structures appeared. The results obtained are comparable with current state-of-the-art models that are inherently dependent on human-annotated data.
[ "Low-Resource NLP", "Semantic Parsing", "Semantic Text Processing", "Responsible & Trustworthy NLP" ]
[ 80, 40, 72, 4 ]
SCOPUS_ID:85147795187
A Data-Driven Investigation of Noise-Adaptive Utterance Generation with Linguistic Modification
In noisy environments, speech can be hard to understand for humans. Spoken dialog systems can help to enhance the intelligibility of their output, either by modifying the speech synthesis (e.g., imitate Lombard speech) or by optimizing the language generation. We here focus on the second type of approach, by which an intended message is realized with words that are more intelligible in a specific noisy environment. By conducting a speech perception experiment, we created a dataset of 900 paraphrases in babble noise, perceived by native English speakers with normal hearing. We find that careful selection of paraphrases can improve intelligibility by 33% at SNR -5 dB. Our analysis of the data shows that the intelligibility differences between paraphrases are mainly driven by noise-robust acoustic cues. Furthermore, we propose an intelligibility-aware paraphrase ranking model, which outperforms baseline models with a relative improvement of 31.37% at SNR -5 dB.
[ "Paraphrasing", "Speech & Audio in NLP", "Text Generation", "Multimodality" ]
[ 32, 70, 47, 74 ]
SCOPUS_ID:85096424104
A Data-Driven Method for Measuring the Negative Impact of Sentiment Towards China in the Context of COVID-19
Social media is a valuable source of information that makes it possible to study people's opinions about the many events that happen every day. Nowadays, social networks are one of the most important communication methods that people use. Feelings towards nations can now be measured thanks to advances in machine learning and big data. In this paper we present a method to identify whether there are negative sentiments towards China as a result of the COVID-19 virus. The method is based on sentiment analysis and extracts information from the Twitter social network. This analysis was done with the VADER library, a rule-based tool that provides classification algorithms. A dataset of 30,000 tweets was built for three time windows: December 2019 (before the pandemic), March 2020 (the month in which the pandemic was confirmed by the World Health Organization), and May 2020 (when some countries started the de-escalation phase). The results show that sentiments towards China became negative and that social network data allows this situation to be confirmed.
[ "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 3, 78 ]
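Since the abstract above names VADER explicitly, here is a minimal sketch of the rule-based scoring step using the vaderSentiment package; the example sentence is illustrative and not drawn from the study's tweet dataset.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Score a single piece of text; VADER returns negative, neutral, positive
# and compound scores that can be thresholded into polarity classes.
analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The response to the outbreak has been disappointing.")
print(scores)  # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}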
SCOPUS_ID:85030672673
A Data-Driven Model of Tonal Chord Sequence Complexity
We present a compound language model of tonal chord sequences, and evaluate its capability to estimate perceived harmonic complexity. In order to build the compound model, we trained three different models: prediction by partial matching, a hidden Markov model and a deep recurrent neural network on a novel large dataset containing half a million annotated chord sequences. We describe the training process and propose an interpretation of the harmonic patterns that are learned by the hidden states of these models. We use the compound model to generate new chord sequences and estimate their probability, which we then relate to perceived harmonic complexity. In order to collect subjective ratings of complexity, we devised a listening test comprising two different experiments. In the first, subjects choose the more complex chord sequence between two. In the second, subjects rate with a continuous scale the complexity of a single chord sequence. The results of both experiments show a strong relation between negative log probability, given by our language model, and the perceived complexity ratings. The relation is stronger for subjects with high musical sophistication index, acquired through the GoldMSI standard questionnaire. The analysis of the results also includes the preference ratings that have been collected along with the complexity ratings; a weak negative correlation emerged between preference and log probability.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85128898546
A Data-Driven Score Model to Assess Online News Articles in Event-Based Surveillance System
Online news sources are popular resources for learning about current health situations and developing event-based surveillance (EBS) systems. However, having access to diverse information originating from multiple sources can misinform stakeholders, eventually leading to false health risks. The existing literature contains several techniques for performing data quality evaluation to minimize the effects of misleading information. However, these methods only rely on the extraction of spatiotemporal information for representing health events. To address this research gap, a score-based technique is proposed to quantify the data quality of online news articles through three assessment measures: 1) news article metadata, 2) content analysis, and 3) epidemiological entity extraction with NLP to weight the contextual information. The results are calculated using classification metrics with two evaluation approaches: 1) a strict approach and 2) a flexible approach. The obtained results show significant enhancement in the data quality by filtering irrelevant news, which can potentially reduce false alert generation in EBS systems.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85123624882
A Data-Driven Semi-Automatic Framenet Development Methodology
FrameNet is a lexical semantic resource based on the linguistic theory of frame semantics. A number of framenet development strategies have been reported previously and all of them involve exploration of corpora and a fair amount of manual work. Despite previous efforts, there does not exist a well-thought-out automatic/semi-automatic methodology for frame construction. In this paper we propose a data-driven methodology for identification and semi-automatic construction of frames. As a proof of concept, we report on our initial attempts to build a wider-scale framenet for the legal domain (LawFN) using the proposed methodology. The constructed frames are stored in a lexical database and together with the annotated example sentences they have been made available through a web interface.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
http://arxiv.org/abs/2011.14084v2
A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI). Recent research in the Natural Language Processing (NLP) community has demonstrated significant progress in this problem setting. Despite this progress, which is mainly on multiple-choice question answering tasks in limited settings, there is still a lack of understanding (especially at scale) of the nature of commonsense knowledge itself. In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base. ConceptNet is a freely available knowledge base containing millions of commonsense assertions presented in natural language. Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations, allowing us to make data-driven and computational claims about the meaning of phenomena such as 'context' that are traditionally discussed only in qualitative terms. Furthermore, our methodology provides a case study in how to use data-science and computational methodologies for understanding the nature of an everyday (yet complex) psychological phenomenon that is an essential feature of human intelligence.
[ "Commonsense Reasoning", "Knowledge Representation", "Semantic Text Processing", "Reasoning" ]
[ 62, 18, 72, 8 ]
SCOPUS_ID:85134377701
A Data-Efficient Method for One-Shot Text Classification
In this paper, we propose BiGBERT (Binary Grouping BERT), a data-efficient training method for one-shot text classification. Following the idea of the One-vs-Rest method, we design an extensible output layer for BERT, which increases the usability of the training data. To evaluate our approach, we conducted extensive experiments on four celebrated text classification datasets, reforming these datasets into a one-shot training scenario that approximates the situation of our commercial datasets. The experimental results show that our approach achieves 54.9% on the 5AbstractsGroup dataset, 40.2% on the 20NewsGroup dataset, 57.0% on the IMDB dataset, and 33.6% on the TREC dataset. Overall, compared to the baseline BERT, our proposed method achieves a 2.3% to 28.6% improvement in accuracy. These results show that BiGBERT is stable and significantly improves one-shot text classification.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
http://arxiv.org/abs/cmp-lg/9606024v1
A Data-Oriented Approach to Semantic Interpretation
In Data-Oriented Parsing (DOP), an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new input sentence is constructed by combining sub-analyses from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Treebank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method and summarizes the results of a preliminary experiment. Semantic annotations were added to the syntactic annotations of most of the sentences of the ATIS corpus. A data-oriented semantic interpretation algorithm was successfully tested on this semantically enriched corpus.
[ "Explainability & Interpretability in NLP", "Syntactic Text Processing", "Responsible & Trustworthy NLP" ]
[ 81, 15, 4 ]
SCOPUS_ID:85125875546
A Data-driven Affective Text Classification Analysis
Affective texts play a key role in sentiment classification/prediction and decision making. They are increasingly used to form and/or share sentiments in financial, economic and/or political applications. However, processing time increases exponentially for large affective textual datasets. Moreover, casual expressions such as emoji, slang, abbreviations and misspelled words usually make data analysis (i.e., text classification) complicated. This paper proposes a pipeline model consisting of data pre-processing, feature extraction and classification model training to classify affective text datasets. It offers three contributions, emoji recovery, misspelled-word correction and abbreviation translation, that maximise classification accuracy. A rigorous experimental plan is designed to evaluate the performance of the proposed approach according to three factors: dataset size (small, medium and large), NLP feature extraction technique (TF-IDF, word2vec and BERT) and classification model (MLP, Logistic Regression, Naive Bayes and SVM). In addition, the proposed approach is compared with a well-known deep learning sentiment analysis approach, named sentimentDLmodel, which provides pre-trained sentiment analysis. According to the results, the proposed approach significantly outperforms the benchmarks in terms of classification accuracy in most cases.
[ "Visual Data in NLP", "Information Retrieval", "Multimodality", "Sentiment Analysis", "Emotion Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 20, 24, 74, 78, 61, 36, 3 ]
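For illustration, here is a hedged scikit-learn sketch of one configuration from the pipeline family described above (TF-IDF features with a Logistic Regression classifier). The emoji, slang and misspelling pre-processing is assumed to have already been applied to the texts, and the two training examples are toy data.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy, already pre-processed texts with binary sentiment labels.
texts = ["great product, loved it", "terrible service, very disappointed"]
labels = [1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # one of the three feature options above
    ("lr", LogisticRegression(max_iter=1000)),       # one of the four classifier options above
])
clf.fit(texts, labels)
print(clf.predict(["loved the service"]))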
http://arxiv.org/abs/2005.12565v1
A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction
Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.
[ "Relation Extraction", "Information Extraction & Text Mining" ]
[ 75, 3 ]
SCOPUS_ID:85147955698
A Data-driven Latent Semantic Analysis for Automatic Text Summarization using LDA Topic Modelling
With the advent and popularity of big data mining and large-scale text analysis in modern times, automated text summarization has become prominent for extracting and retrieving important information from documents. This research investigates automatic text summarization from the perspectives of single and multiple documents. Summarization is the task of condensing long text articles into short, summarized versions; the text is reduced in size while preserving key information and retaining the meaning of the original document. This study presents the Latent Dirichlet Allocation (LDA) approach used to perform topic modelling on summarised medical science journal articles with topics related to genes and diseases. The pyLDAvis web-based interactive visualization tool was used to visualise the selected topics. The visualisation provides an overarching view of the main topics while allowing deep inspection of the prevalence of individual topics. This study presents a novel approach to the summarization of single and multiple documents. The results suggest that the terms are ranked purely by their probability of topic prevalence within the processed document using the extractive summarization technique. The pyLDAvis visualization illustrates the flexibility of exploring how the terms of the topics relate to the fitted LDA model. The topic modelling results show prevalence within topics 1 and 2, revealing a similarity between the terms of these two topics in this study. The efficacy of the LDA and extractive summarization methods was measured using Latent Semantic Analysis (LSA) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics to evaluate the reliability and validity of the model.
[ "Summarization", "Topic Modeling", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 9, 47, 3 ]
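As a small, hedged sketch of the LDA topic-modelling step described above, here is a scikit-learn version with toy documents standing in for the medical-science abstracts; the study's own toolkit and settings may differ, and the printed top words are only a rough analogue of what pyLDAvis shows interactively.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents standing in for summarised medical-science articles.
docs = ["gene expression linked to disease risk",
        "mutation in the gene increases disease severity",
        "clinical trial evaluates new treatment for patients"]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words per topic.
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")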
https://aclanthology.org//W13-4062/
A Data-driven Model for Timing Feedback in a Map Task Dialogue System
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
http://arxiv.org/abs/2006.16642v1
A Data-driven Neural Network Architecture for Sentiment Analysis
The fabulous results of convolutional neural networks in image-related tasks attracted the attention of text mining, sentiment analysis and other text analysis researchers. It is, however, difficult to find enough data to feed such networks, optimize their parameters, and make the right design choices when constructing network architectures. In this paper we present the creation steps of two big datasets of song emotions. We also explore the usage of convolution and max-pooling neural layers on song lyrics, product and movie review text datasets. Three variants of a simple and flexible neural network architecture are also compared. Our intention was to spot any important patterns that can serve as guidelines for parameter optimization of similar models. We also wanted to identify architecture design choices that lead to high-performing sentiment analysis models. To this end, we conducted a series of experiments with neural architectures of various configurations. Our results indicate that parallel convolutions with filter lengths up to three are usually enough for capturing relevant text features. Also, the max-pooling region size should be adapted to the length of the text documents for producing the best feature maps. The top results we obtained use feature maps of lengths 6 to 18. An improvement for future neural network sentiment analysis models could be to generate sentiment polarity predictions for documents by aggregating predictions on smaller excerpts of the entire text.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85116169703
A Data-oriented Approach for Detecting offensive Language in Arabic Tweets
The growing popularity of social media (SM) platforms has made these platforms a crucial part of modern societies. Users from different cultures, backgrounds and demographics join in increasing numbers to express their views, stances, and opinions on a wide range of topics. Since users on SM can easily hide their real identity, a closer look at daily posts on social media platforms shows that users do not only reflect their stances and views; they also get an opportunity to reveal their behaviors, which can be negative towards others. Although only a small population of SM users shows negative behavior towards other individuals, groups, and society in general, the impact can be catastrophic. This has resulted in the emergence of terms like cyberbullying, online extremism/hatred/threatening, online trolling, and online political-polarity discourse. To ensure safe social networking, the domain of automatic detection of offensive/hateful language has lately grown notably. This work focuses on utilizing a publicly available dataset of Arabic tweets labeled for offensive/non-offensive language. Unlike previous work, which focuses merely on developing and tuning machine learning models to be as accurate as possible on the benchmark dataset used, we instead focus on the characteristics of the offensive language used on SM. The purpose is to take an in-depth look at the dataset to disclose what seem to be hidden patterns in the offensive language expressed daily online. Our findings reveal the benefit of using a larger training dataset that covers a wide range of offensive language patterns to build robust machine learning classifiers with a better ability to generalize well on the highly sparse data used on SM.
[ "Text Classification", "Ethical NLP", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 17, 4, 24, 3 ]
SCOPUS_ID:85144203040
A Data-to-Text Generation Model with Deduplicated Content Planning
Texts generated in data-to-text generation tasks often have repetitive parts. In order to obtain higher-quality generated texts, we choose a data-to-text generation model with content planning and add coverage mechanisms to both the content planning and text generation stages. In the content planning stage, a coverage mechanism is introduced to remove duplicate content templates, so as to remove sentences with the same semantics from the generated texts. In the text generation stage, the coverage mechanism is added to remove repeated words in the texts. In addition, in order to embed the positional association information in the data into the word vectors, we also add positional encoding to the word embedding. The word vectors are then fed to the pointer network to generate a content template. Finally, the content template is input into the text generator to generate the descriptive texts. Through experiments, the accuracy of the content planning and the BLEU of the generated texts are both improved, which verifies the effectiveness of our proposed data-to-text generation model.
[ "Data-to-Text Generation", "Semantic Text Processing", "Text Generation", "Representation Learning" ]
[ 16, 72, 47, 12 ]
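The coverage mechanism mentioned in the entry above is commonly implemented by feeding a running sum of past attention weights back into the attention scores and penalising overlap between the current attention and that sum. The sketch below is a generic illustration of one such decoding step in PyTorch, not the paper's exact model; all layer sizes are assumed.

```python
import torch
import torch.nn.functional as F

def attention_with_coverage(query, enc_states, coverage, W_q, W_h, W_c, v):
    """One illustrative decoding step of additive attention augmented with a
    coverage vector (the running sum of past attention weights).  Shapes:
      query:      (batch, hidden)        decoder state
      enc_states: (batch, src_len, hidden)
      coverage:   (batch, src_len)
    The coverage loss discourages attending again to positions that are
    already well covered, which is what suppresses repeated content."""
    scores = v(torch.tanh(
        W_q(query).unsqueeze(1)            # (batch, 1, hidden)
        + W_h(enc_states)                  # (batch, src_len, hidden)
        + W_c(coverage.unsqueeze(-1))      # (batch, src_len, hidden)
    )).squeeze(-1)                         # (batch, src_len)
    attn = F.softmax(scores, dim=-1)
    context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)
    coverage_loss = torch.minimum(attn, coverage).sum(dim=-1).mean()
    return context, attn, coverage + attn, coverage_loss

# Minimal usage with assumed dimensions.
B, L, H = 4, 30, 128
W_q, W_h = torch.nn.Linear(H, H), torch.nn.Linear(H, H)
W_c, v = torch.nn.Linear(1, H), torch.nn.Linear(H, 1)
ctx, attn, new_cov, cov_loss = attention_with_coverage(
    torch.randn(B, H), torch.randn(B, L, H), torch.zeros(B, L), W_q, W_h, W_c, v)
```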
SCOPUS_ID:85115252591
A Database and Visualization of the Similarity of Contemporary Lexicons
Lexical similarity data, quantifying the “proximity” of languages based on the similarity of their lexicons, has been increasingly used to estimate the cross-lingual reusability of language resources, for tasks such as bilingual lexicon induction or cross-lingual transfer. Existing similarity data, however, originates from the field of comparative linguistics, computed from very small expert-curated vocabularies that are not supposed to be representative of modern lexicons. We explore a different, fully automated approach to lexical similarity computation, based on an existing 8-million-entry cognate database created from online lexicons orders of magnitude larger than the word lists typically used in linguistics. We compare our results to earlier efforts, and automatically produce intuitive visualizations that have traditionally been hand-crafted. With a new, freely available database of over 27 thousand language pairs over 331 languages, we hope to provide more relevant data to cross-lingual NLP applications, as well as material for the synchronic study of contemporary lexicons.
[ "Cross-Lingual Transfer", "Multilinguality" ]
[ 19, 0 ]
https://aclanthology.org//2022.sigtyp-1.6/
A Database for Modal Semantic Typology
This paper introduces a database for crosslinguistic modal semantics. The purpose of this database is to (1) enable ongoing consolidation of modal semantic typological knowledge into a repository according to uniform data standards and to (2) provide data for investigations in crosslinguistic modal semantic theory and experiments explaining such theories. We describe the kind of semantic variation that the database aims to record, the format of the data, and a current snapshot of the database, emphasizing access and contribution to the database in light of the goals above. We release the database at https://clmbr.shane.st/modal-typology.
[ "Typology", "Syntactic Text Processing", "Multilinguality" ]
[ 45, 15, 0 ]
SCOPUS_ID:84942247366
A Database of On-Line Handwritten Mixed Objects Named 'Kondate'
This paper describes a database of on-line handwritten patterns that mix text, figures, tables, maps, diagrams and so on. Pen-based and touch-based interfaces are now spreading among people, and their surfaces are getting larger. People can write and draw mixed objects without paying attention to the differences between objects or to mode changes. Moreover, they may write text in any direction in combination with non-text objects on large surfaces. This is clearly one of the largest advantages of pen or touch interfaces, but it poses a challenging problem of object classification and recognition. The proposed database has been created and is now being enlarged to study such subjects more extensively. So far, 100 Japanese writers and approximately 25 English and 45 Thai writers have participated. The database stores on-line handwritten (digital ink) patterns with ground-truth tags in InkML.
[ "Structured Data in NLP", "Multimodality" ]
[ 50, 74 ]
SCOPUS_ID:85021624400
A Database of Paradigmatic Semantic Relation Pairs for German Nouns, Verbs, and Adjectives
A new collection of semantically related word pairs in German is presented, which was compiled via human judgement experiments and comprises (i) a representative selection of target lexical units balanced for semantic category, polysemy, and corpus frequency, (ii) a set of human-generated semantically related word pairs based on the target units, and (iii) a subset of the generated word pairs rated for their relation strength, including positive and negative relation evidence. We address the three paradigmatic relations antonymy, hypernymy and synonymy, and systematically work across the three word classes of adjectives, nouns, and verbs. A series of quantitative and qualitative analyses demonstrates that (i) antonyms are more canonical than hypernyms and synonyms, (ii) relations are more or less natural with regard to the specific word classes, (iii) antonymy is clearly distinguishable from hypernymy and synonymy, but hypernymy and synonymy are often confused. We anticipate that our new collection of semantic relation pairs will not only be of considerable use in computational areas in which semantic relations play a role, but also in studies in theoretical linguistics and psycholinguistics.
[ "Psycholinguistics", "Linguistics & Cognitive NLP" ]
[ 77, 48 ]
http://arxiv.org/abs/2003.04970v1
A Dataset Independent Set of Baselines for Relation Prediction in Argument Mining
Argument Mining is the research area which aims at extracting argument components and predicting argumentative relations (i.e., support and attack) from text. In particular, numerous approaches have been proposed in the literature to predict the relations holding between the arguments, and application-specific annotated resources were built for this purpose. Despite the fact that these resources have been created to experiment on the same task, the definition of a single relation prediction method that can be successfully applied to a significant portion of these datasets is an open research problem in Argument Mining. This means that none of the methods proposed in the literature can be easily ported from one resource to another. In this paper, we address this problem by proposing a set of dataset independent strong neural baselines which obtain homogeneous results on all the datasets proposed in the literature for the argumentative relation prediction task. Thus, our baselines can be employed by the Argument Mining community to compare more effectively how well a method performs on the argumentative relation prediction task.
[ "Argument Mining", "Reasoning" ]
[ 60, 8 ]
http://arxiv.org/abs/2205.04185v1
A Dataset and BERT-based Models for Targeted Sentiment Analysis on Turkish Texts
Targeted Sentiment Analysis aims to extract sentiment towards a particular target from a given text. It is a field that is attracting attention due to the increasing accessibility of the Internet, which leads people to generate an enormous amount of data. Sentiment analysis, which in general requires annotated data for training, is a well-researched area for widely studied languages such as English. For low-resource languages such as Turkish, there is a lack of such annotated data. We present an annotated Turkish dataset suitable for targeted sentiment analysis. We also propose BERT-based models with different architectures to accomplish the task of targeted sentiment analysis. The results demonstrate that the proposed models outperform the traditional sentiment analysis models for the targeted sentiment analysis task.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
http://arxiv.org/abs/2106.02017v1
A Dataset and Baselines for Multilingual Reply Suggestion
Reply suggestion models help users process emails and chats faster. Previous work only studies English reply suggestion. Instead, we present MRS, a multilingual reply suggestion dataset with ten languages. MRS can be used to compare two families of models: 1) retrieval models that select the reply from a fixed set and 2) generation models that produce the reply from scratch. Therefore, MRS complements existing cross-lingual generalization benchmarks that focus on classification and sequence labeling tasks. We build a generation model and a retrieval model as baselines for MRS. The two models have different strengths in the monolingual setting, and they require different strategies to generalize across languages. MRS is publicly available at https://github.com/zhangmozhi/mrs.
[ "Information Retrieval", "Multilinguality" ]
[ 24, 0 ]
SCOPUS_ID:85101760993
A Dataset and Baselines for Visual Question Answering on Art
Answering questions related to art pieces (paintings) is a difficult task, as it implies the understanding of not only the visual information that is shown in the picture, but also the contextual knowledge that is acquired through the study of the history of art. In this work, we introduce our first attempt towards building a new dataset, coined AQUA (Art QUestion Answering). The question-answer (QA) pairs are automatically generated using state-of-the-art question generation methods based on paintings and comments provided in an existing art understanding dataset. The QA pairs are cleansed by crowdsourcing workers with respect to their grammatical correctness, answerability, and answers’ correctness. Our dataset inherently consists of visual (painting-based) and knowledge (comment-based) questions. We also present a two-branch model as baseline, where the visual and knowledge questions are handled independently. We extensively compare our baseline model against the state-of-the-art models for question answering, and we provide a comprehensive study about the challenges and potential future directions for visual question answering on art.
[ "Visual Data in NLP", "Natural Language Interfaces", "Question Answering", "Multimodality" ]
[ 20, 11, 27, 74 ]
SCOPUS_ID:85147994474
A Dataset for Analysis of Quality Code and Toxic Comments
Software development has an important human aspect, and it is known that the feelings of developers have a significant impact on software development and could affect the quality, productivity and performance of developers. In this study, we have begun the process of finding, understanding and relating these affects to software quality. We propose a quality code and sentiments dataset: a clean set of commits, code quality and toxic sentiments of 19 projects obtained from GitHub. The dataset extracts messages from the commits present on GitHub along with quality metrics from SonarQube. Using this information, we run machine learning techniques with the ML.Net tool to identify toxic developer sentiments in commits that could affect code quality. We analyzed 218K commits from the 19 selected projects. The analysis of the projects took 120 days. We also describe the process of building the tool and retrieving the data. The dataset will be used to further investigate in depth the factors that affect developers’ emotions and whether these factors are related to code quality in the life cycle of a software project. In addition, code quality will be estimated as a function of developer sentiments.
[ "Sentiment Analysis" ]
[ 78 ]
http://arxiv.org/abs/2201.12888v1
A Dataset for Medical Instructional Video Classification and Question Answering
This paper introduces a new challenge and datasets to foster research toward designing systems that can understand medical videos and provide visual answers to natural language questions. We believe medical videos may provide the best possible answers to many first aids, medical emergency, and medical education questions. Toward this, we created the MedVidCL and MedVidQA datasets and introduce the tasks of Medical Video Classification (MVC) and Medical Visual Answer Localization (MVAL), two tasks that focus on cross-modal (medical language and medical video) understanding. The proposed tasks and datasets have the potential to support the development of sophisticated downstream applications that can benefit the public and medical practitioners. Our datasets consist of 6,117 annotated videos for the MVC task and 3,010 annotated questions and answers timestamps from 899 videos for the MVAL task. These datasets have been verified and corrected by medical informatics experts. We have also benchmarked each task with the created MedVidCL and MedVidQA datasets and proposed the multimodal learning methods that set competitive baselines for future research.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Information Retrieval", "Question Answering", "Natural Language Interfaces", "Text Classification", "Multimodality" ]
[ 20, 3, 24, 27, 11, 36, 74 ]
http://arxiv.org/abs/2205.02289v1
A Dataset for N-ary Relation Extraction of Drug Combinations
Combination therapies have become the standard of care for diseases such as cancer, tuberculosis, malaria and HIV. However, the combinatorial set of available multi-drug treatments creates a challenge in identifying effective combination therapies available in a situation. To assist medical professionals in identifying beneficial drug-combinations, we construct an expert-annotated dataset for extracting information about the efficacy of drug combinations from the scientific literature. Beyond its practical utility, the dataset also presents a unique NLP challenge, as the first relation extraction dataset consisting of variable-length relations. Furthermore, the relations in this dataset predominantly require language understanding beyond the sentence level, adding to the challenge of this task. We provide a promising baseline model and identify clear areas for further improvement. We release our dataset, code, and baseline models publicly to encourage the NLP community to participate in this task.
[ "Relation Extraction", "Information Extraction & Text Mining" ]
[ 75, 3 ]
http://arxiv.org/abs/2203.15568v1
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays
Machine learning methodologies can be adopted in cultural applications and propose new ways to distribute or even present the cultural content to the public. For instance, speech analytics can be adopted to automatically generate subtitles in theatrical plays, in order to (among other purposes) help people with hearing loss. Apart from a typical speech-to-text transcription with Automatic Speech Recognition (ASR), Speech Emotion Recognition (SER) can be used to automatically predict the underlying emotional content of speech dialogues in theatrical plays, and thus to provide a deeper understanding of how the actors utter their lines. However, real-world datasets from theatrical plays are not available in the literature. In this work we present GreThE, the Greek Theatrical Emotion dataset, a new publicly available data collection for speech emotion recognition in Greek theatrical plays. The dataset contains utterances from various actors and plays, along with respective valence and arousal annotations. Towards this end, multiple annotators have been asked to provide their input for each speech recording, and inter-annotator agreement is taken into account in the final ground truth generation. In addition, we discuss the results of some indicative experiments that have been conducted with machine and deep learning frameworks, using the dataset, along with some widely used databases in the field of speech emotion recognition.
[ "Emotion Analysis", "Multimodality", "Speech & Audio in NLP", "Sentiment Analysis" ]
[ 61, 74, 70, 78 ]
http://arxiv.org/abs/2005.05257v3
A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering
Legislation can be viewed as a body of prescriptive rules expressed in natural language. The application of legislation to facts of a case we refer to as statutory reasoning, where those facts are also expressed in natural language. Computational statutory reasoning is distinct from most existing work in machine reading, in that much of the information needed for deciding a case is declared exactly once (a law), while the information needed in much of machine reading tends to be learned through distributional language statistics. To investigate the performance of natural language understanding approaches on statutory reasoning, we introduce a dataset, together with a legal-domain text corpus. Straightforward application of machine reading models exhibits low out-of-the-box performance on our questions, whether or not they have been fine-tuned to the legal domain. We contrast this with a hand-constructed Prolog-based system, designed to fully solve the task. These experiments support a discussion of the challenges facing statutory reasoning moving forward, which we argue is an interesting real-world task that can motivate the development of models able to utilize prescriptive rules specified in natural language.
[ "Natural Language Interfaces", "Reasoning", "Question Answering", "Textual Inference" ]
[ 11, 8, 27, 22 ]
SCOPUS_ID:85146262806
A Dataset for Term Extraction in Hindi
Automatic Term Extraction (ATE) is one of the core problems in natural language processing and forms a key component of text mining pipelines of domain specific corpora. Complex low-level tasks such as machine translation and summarization for domain specific texts necessitate the use of term extraction systems. However, the development of these systems requires the use of large annotated datasets and thus there has been little progress made on this front for under-resourced languages. As a part of ongoing research, we present a dataset for term extraction from Hindi texts in this paper. To the best of our knowledge, this is the first dataset that provides term annotated documents for Hindi. Furthermore, we have evaluated this dataset on statistical term extraction methods and the results obtained indicate the problems associated with development of term extractors for under-resourced languages.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP", "Term Extraction", "Information Extraction & Text Mining" ]
[ 80, 4, 1, 3 ]
SCOPUS_ID:85079244203
A Dataset for the Sentiment Analysis of Indo-Pak Music Industry
The continuous increase in data creates a need for that data to be analysed and for useful hidden patterns to be found and explored. If the data is readily available, it can easily be analysed, but most of the time it needs to be dug out. A substantial increase in the use of social media and online services can be witnessed nowadays. People not only buy and sell things online but also give their remarks on those items/services. Many websites provide such services along with a dedicated section for reviews and comments. These items/services can be ranked and analysed based upon these reviews, which express the sentiments of the reviewers. These reviews number in the millions and will soon be in the billions. With this huge increase in reviews, there is a need to analyse them through a proper mechanism. This research targets the mining of sentiments from these reviews. Three songs from YouTube are selected and their reviews are scraped, pre-processed and analysed using a Decision Tree (ID3) and Naïve Bayes. Both achieved 75% accuracy on test data. This article presents a dataset for performing Sentiment Analysis on Roman Urdu/Hindi reviews. The dataset is a combination of Indo-Pak song reviews.
[ "Speech & Audio in NLP", "Sentiment Analysis", "Multimodality" ]
[ 70, 78, 74 ]
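The experiment described above (ID3 decision tree and Naïve Bayes on scraped song reviews) can be reproduced in spirit with a few lines of scikit-learn. The reviews and labels below are toy placeholders, and scikit-learn's DecisionTreeClassifier implements CART with an entropy criterion rather than classical ID3.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Placeholder Roman Urdu/Hindi style reviews and sentiment labels.
reviews = ["bohat acha gana hai", "ye song bakwas hai",
           "kya awesome music hai", "bilkul pasand nahi aya"]
labels = ["positive", "negative", "positive", "negative"]

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.5, random_state=0, stratify=labels)

for clf in (MultinomialNB(), DecisionTreeClassifier(criterion="entropy")):
    # Bag-of-words features followed by the classifier under test.
    pipeline = make_pipeline(CountVectorizer(), clf)
    pipeline.fit(X_train, y_train)
    print(type(clf).__name__, pipeline.score(X_test, y_test))
```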
http://arxiv.org/abs/2003.13016v1
A Dataset of German Legal Documents for Named Entity Recognition
We describe a dataset developed for Named Entity Recognition in German federal court decisions. It consists of approx. 67,000 sentences with over 2 million tokens. The resource contains 54,000 manually annotated entities, mapped to 19 fine-grained semantic classes: person, judge, lawyer, country, city, street, landscape, organization, company, institution, court, brand, law, ordinance, European legal norm, regulation, contract, court decision, and legal literature. The legal documents were, furthermore, automatically annotated with more than 35,000 TimeML-based time expressions. The dataset, which is available under a CC-BY 4.0 license in the CoNLL-2002 format, was developed for training an NER service for German legal documents in the EU project Lynx.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
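Because the dataset above is distributed in the CoNLL-2002 column format (one token and its tag per line, blank lines between sentences), a small reader such as the following is usually sufficient to load it; the file name in the usage comment is a placeholder.

```python
def read_conll2002(path):
    """Read a CoNLL-2002-style file: one token per line with its tag in the
    last whitespace-separated column, sentences separated by blank lines.
    Returns a list of (tokens, tags) pairs."""
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if not line:                      # sentence boundary
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            columns = line.split()
            tokens.append(columns[0])
            tags.append(columns[-1])
    if tokens:                                # file without trailing blank line
        sentences.append((tokens, tags))
    return sentences

# Hypothetical usage; "ler_train.conll" is a placeholder file name.
# for words, labels in read_conll2002("ler_train.conll"):
#     print(list(zip(words, labels)))
```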
https://aclanthology.org//W18-1105/
A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection
Hate speech detection in social media texts is an important Natural language Processing task, which has several crucial applications like sentiment analysis, investigating cyberbullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and hate speech detection, our work is the first attempt in detecting hate speech in Hindi-English code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted online on Twitter. The tweets are annotated with the language at word level and the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character level, word level, and lexicon based features.
[ "Responsible & Trustworthy NLP", "Ethical NLP", "Sentiment Analysis" ]
[ 4, 17, 78 ]
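The supervised system described above combines character-level, word-level and lexicon-based features. The scikit-learn sketch below illustrates only the first two of these feature families (character and word n-grams) with a linear classifier; the code-mixed tweets and labels are toy placeholders, not examples from the dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, make_pipeline

# Placeholder code-mixed tweets with hate / normal labels.
tweets = ["yeh log bilkul pagal hain I hate them",
          "aaj ka match was really fun yaar",
          "in logon ko country se nikal do",
          "great movie thi, sabko dekhni chahiye"]
labels = ["hate", "normal", "hate", "normal"]

# Word n-grams and character n-grams are extracted in parallel and stacked.
features = FeatureUnion([
    ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
])
classifier = make_pipeline(features, LogisticRegression(max_iter=1000))
classifier.fit(tweets, labels)
print(classifier.predict(["pagal log hate everywhere"]))
```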
https://aclanthology.org//2022.nlp4pi-1.5/
A Dataset of Sustainable Diet Arguments on Twitter
Sustainable development requires a significant change in our dietary habits. Argument mining can help achieve this goal by both affecting and helping understand people’s behavior. We design an annotation scheme for argument mining from online discourse around sustainable diets, including novel evidence types specific to this domain. Using Twitter as a source, we crowdsource a dataset of 597 tweets annotated in relation to 5 topics. We benchmark a variety of NLP models on this dataset, demonstrating strong performance in some sub-tasks, while highlighting remaining challenges.
[ "Green & Sustainable NLP", "Ethical NLP", "Argument Mining", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 68, 17, 60, 8, 4 ]
SCOPUS_ID:85144479030
A Day at Work (with Text)
Text mining, information extraction, and opinion analysis are rich research areas, which have gained greatly in accessibility over the last 10–15 years. Today, there are many powerful tools and frameworks available, meaning that anybody with sufficient interest and time can integrate computational methods of working with text into their research or application domain. This chapter discusses the processes of identifying a text mining activity, choosing a goal, identifying or constructing a data set, selecting appropriate tools, evaluating the performance of the tool set and selecting a framework in which to graphically visualise the results.
[ "Information Extraction & Text Mining" ]
[ 3 ]
http://arxiv.org/abs/2210.00105v1
A Decade of Knowledge Graphs in Natural Language Processing: A Survey
In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
[ "Knowledge Representation", "Structured Data in NLP", "Semantic Text Processing", "Multimodality" ]
[ 18, 50, 72, 74 ]
SCOPUS_ID:85133032625
A Decade of Legal Argumentation Mining: Datasets and Approaches
The growing research field of argumentation mining (AM) in the past ten years has made it a popular topic in Natural Language Processing. However, there are still limited studies focusing on AM in the context of legal text (Legal AM), despite the fact that legal text analysis more generally has received much attention as an interdisciplinary field of traditional humanities and data science. The goal of this work is to provide a critical data-driven analysis of the current situation in Legal AM. After outlining the background of this topic, we explore the availability of annotated datasets and the mechanisms by which these are created. This includes a discussion of how arguments and their relationships can be modelled, as well as a number of different approaches to divide the overall Legal AM task into constituent sub-tasks. Finally we review the dominant approaches that have been applied to this task in the past decade, and outline some future directions for Legal AM research.
[ "Argument Mining", "Reasoning" ]
[ 60, 8 ]
SCOPUS_ID:85105436117
A Decade of Sentic Computing: Topic Modeling and Bibliometric Analysis
Research on sentic computing has received intensive attention in recent years, as indicated by the increased availability of academic literature. However, despite the growth in literature and researchers’ interests, there are no reviews on this topic. This study comprehensively explores the current research progress and tendencies, particularly the thematic structure of sentic computing, to provide insights into the issues addressed during the past decade and the potential future of sentic computing. We combined bibliometric analysis and structural topic modeling to examine sentic computing literature in various aspects, including the tendency of annual article count, top journals, countries/regions, institutions, and authors, the scientific collaborations between major contributors, as well as the major topics and their tendencies. We obtained interesting and meaningful findings. For example, sentic computing has attracted growing interest in academia. In addition, Cognitive Computation and Nanyang Technological University were found to be the most productive journal and institution in publishing sentic computing studies, respectively. Moreover, important issues such as cyber issues and public opinion, deep neural networks and personality, financial applications and user profiles, and affective and emotional computing have been commonly addressed by authors focusing on sentic computing. Our study provides a thorough overview of sentic computing, reveals major concerns among scholars during the past decade, and offers insights into the future directions of sentic computing research.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
SCOPUS_ID:85084948113
A Decade on Script Identification from Natural Images/Videos: A Review
Text present in an image provides high-level, straightforward information about the image/video in which it is present. Nowadays, analysis of script identification in either natural images or document images benefits a number of important applications. Automatic script identification is a highly challenging task due to various complexities of the text as well as the image. This paper presents a survey on script identification for different scripts across the world. From the survey, it is noted that research in this area is limited to document images only. Natural images are yet to be explored for automatic script identification.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85079893139
A Decentralized Context-aware Cross-domain Authorization Scheme for Pervasive Computing
Context-aware access control is one of the most frequently used methods for making authorization decisions in pervasive computing environments. To the best of our knowledge, most previous relevant research has resorted to centralized schemes to preserve all the contextual information. As a result, it has neglected actual circumstances where the sources of contextual information are generally decentralized among multiple management domains with different security policies. For the sake of cross-domain access control, in this paper we present a distributed context-aware authorization mechanism for pervasive computing applications. With the help of logical language theory, we demonstrate how the proposed model can attain the goal of effective reliability assurance and privacy protection by way of constructing a decision tree dynamically, according to the current contextual information.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85112856591
A Decision Support System for Project Risk Management based on Ontology Learning
Project Risk Management (PRM) is one of the main concerns of project management executives and professionals. Although PRM frameworks and risk models are mature enough to provide a systematic approach for managing risks, these practices remain ad hoc and non-standardized. In addition, there has been no significant shift toward PRM recommendation systems based on inference rules and axioms. This study aims to bridge the cited gaps in PRM by developing a decision support framework based on an ontology that predicts personalized recommendations for managing PR processes effectively and thereby making the right decisions. To this end, the framework takes advantage of the semantic strengths of ontologies to model unified PRM knowledge relying on PMI’s framework. The idea is to parse PMI’s standard for PRM to enrich and exploit an existing PR Ontology. The enrichment process is driven by Ontology Learning (OL) tasks using Natural Language Processing (NLP) techniques to extract the main concepts and properties as well as OWL DL axioms and SWRL rules. Then, through the Jena rule engine, the decision system infers recommendations when a team member issues a specific, targeted risk-related request. Based on this approach, a decision system is developed to illustrate the assets of ontological reasoning and thereby the reliability of decision support. The potential benefits of the proposed framework are evaluated using a questionnaire survey, which shows an overall positive evaluation.
[ "Knowledge Representation", "Semantic Text Processing" ]
[ 18, 72 ]
SCOPUS_ID:85087158957
A Decision Tree Based Supervised Program Interpretation Technique for Gurmukhi Language
Deciphering the right context of a given word is one of the main challenges in Natural Language Processing. The study of Word Sense Disambiguation helps in deciphering the right context of the given word in use. The decision tree is a methodology discussed under the supervised techniques used in WSD. Gurmukhi is one of the regional languages of India, and much of the work done in this language is limited to knowledge-based mechanisms. The implementation of a decision tree to correctly decipher ambiguous words is new to this language, and it has shown promising results with an average F-measure of 73.1%. These results will further help in Gurmukhi Word Sense Disambiguation.
[ "Programming Languages in NLP", "Semantic Text Processing", "Word Sense Disambiguation", "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 55, 72, 65, 81, 4, 74 ]
SCOPUS_ID:85149967615
A Decision-Level Approach to Multimodal Sentiment Analysis
There has been a near-exponential increase in the use of images and video on various Social Media platforms in the last few years, in place of or in addition to the use of plain text. Automated sentiment analysis, at its core, is the capturing of human emotion by machine; the addition of images and video to social media output has made this already challenging task even harder. In this paper, we propose a multimodal, decision-level based approach to sentiment analysis (SA) of Twitter feeds. The solution proposed and outlined in this paper combines the sentiment analysis scoring of text-based output with SA scoring generated from the analysis of image captions. For our experiments, we focused on politics and on two political topics (Trump/Brexit) that are generating a lot of discussion and debate on Twitter. We chose the political domain given the power that Social Media has in possibly influencing voters (https://www.theguardian.com/technology/2016/jul/31/trash-talk-how-twitter-is-shaping-the-new-politics ) and the ‘strong’ opinions that are expressed in this area.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Sentiment Analysis", "Multimodality" ]
[ 20, 39, 47, 78, 74 ]
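Decision-level fusion of the kind described above amounts to combining the sentiment scores produced independently for the tweet text and for the image caption. The toy function below illustrates one such fusion rule; the 0.6/0.4 weighting and the neutral band are assumptions, not values from the paper.

```python
def fuse_sentiment(text_score, caption_score, text_weight=0.6):
    """Decision-level fusion: combine two independently produced sentiment
    scores (each in [-1, 1]) with a weighted average and map the result to
    a label.  The 0.6/0.4 weighting is an illustrative assumption."""
    fused = text_weight * text_score + (1.0 - text_weight) * caption_score
    if fused > 0.05:
        return "positive", fused
    if fused < -0.05:
        return "negative", fused
    return "neutral", fused

# Example: mildly positive tweet text, clearly negative image caption.
print(fuse_sentiment(0.4, -0.7))   # ('neutral', -0.04)
```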
SCOPUS_ID:85079810027
A Decision-Making Model Under Probabilistic Linguistic Circumstances with Unknown Criteria Weights for Online Customer Reviews
Online customer reviews (OCRs) provide much information about products or service, but the mass of information increases the difficulty for customers to make decisions. Thus, we establish a multi-criteria decision making (MCDM) model to evaluate products or service. To analyze OCRs, the sentiment analysis (SA) is introduced to identify the sentiment orientation of reviews. Considering that the textual information in OCRs is linguistic information, probabilistic linguistic term sets (PLTSs) are applied to present the results of the SA. A process of extracting probabilistic linguistic information based on SA from OCRs is also presented. Then, for the MCDM problems with unknown criteria weights, we combine the PP (projection pursuit) method and the MULTIMOORA (multiplicative multi-objective optimization by ratio analysis) method, and develop an extended method (named as the PP-MULTIMOORA method). The projection pursuit (PP) method is developed to derive objective criteria weights and the MULTIMOORA method is to derive final rankings of products or service. Finally, we apply the proposed model to a case of evaluating doctors’ service quality and further conduct a comparative analysis to illustrate the effectiveness of our work.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85146786510
A Decision-Support System to Analyse Customer Satisfaction Applied to a Tourism Transport Service
Due to the perishable nature of tourist products, which impacts supply and demand, the possibility of analysing the relationship between customers’ satisfaction and service quality can contribute to increased revenues. Machine learning techniques allow the analysis of how these services can be improved or developed and how to reach new markets, and look for the emergence of ideas to innovate and improve interaction with the customer. This paper presents a decision-support system for analysing consumer satisfaction, based on consumer feedback from the customer’s experience when transported by a transfer company, in the present case working in the Algarve region, Portugal. The results show how tourists perceive the service and which factors influence their level of satisfaction and sentiment. One of the results revealed that the first impression associated with good news is what creates the most value in the experience, i.e., “first impressions matter”.
[ "Sentiment Analysis" ]
[ 78 ]
http://arxiv.org/abs/1606.01933v2
A Decomposable Attention Model for Natural Language Inference
We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
[ "Reasoning", "Textual Inference" ]
[ 8, 22 ]
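The attend–compare–aggregate structure of the decomposable attention model can be sketched compactly in PyTorch, as below. The sketch omits the optional intra-sentence attention mentioned in the abstract, and the embedding and hidden dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(d_in, d_out):
    """Small feed-forward block used for the F, G components."""
    return nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(),
                         nn.Linear(d_out, d_out), nn.ReLU())

class DecomposableAttention(nn.Module):
    """Illustrative attend / compare / aggregate model for NLI over
    pre-embedded premise and hypothesis token vectors (dimensions assumed)."""

    def __init__(self, embed_dim=100, hidden=200, num_classes=3):
        super().__init__()
        self.attend = mlp(embed_dim, hidden)       # F
        self.compare = mlp(2 * embed_dim, hidden)  # G
        self.aggregate = nn.Sequential(            # H
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, a, b):                       # (batch, len_a, d), (batch, len_b, d)
        e = torch.bmm(self.attend(a), self.attend(b).transpose(1, 2))
        beta = torch.bmm(F.softmax(e, dim=2), b)   # soft-aligned b for each token of a
        alpha = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)
        v1 = self.compare(torch.cat([a, beta], dim=-1)).sum(dim=1)
        v2 = self.compare(torch.cat([b, alpha], dim=-1)).sum(dim=1)
        return self.aggregate(torch.cat([v1, v2], dim=-1))

model = DecomposableAttention()
logits = model(torch.randn(4, 12, 100), torch.randn(4, 9, 100))
print(logits.shape)                                # torch.Size([4, 3])
```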
http://arxiv.org/abs/cmp-lg/9404009v3
A Deductive Account of Quantification in LFG
The relationship between Lexical-Functional Grammar (LFG) functional structures (f-structures) for sentences and their semantic interpretations can be expressed directly in a fragment of linear logic in a way that explains correctly the constrained interactions between quantifier scope ambiguity and bound anaphora. The use of a deductive framework to account for the compositional properties of quantifying expressions in natural language obviates the need for additional mechanisms, such as Cooper storage, to represent the different scopes that a quantifier might take. Instead, the semantic contribution of a quantifier is recorded as an ordinary logical formula, one whose use in a proof will establish the scope of the quantifier. The properties of linear logic ensure that each quantifier is scoped exactly once. Our analysis of quantifier scope can be seen as a recasting of Pereira's analysis (Pereira, 1991), which was expressed in higher-order intuitionistic logic. But our use of LFG and linear logic provides a much more direct and computationally more flexible interpretation mechanism for at least the same range of phenomena. We have developed a preliminary Prolog implementation of the linear deductions described in this work.
[ "Reasoning" ]
[ 8 ]
SCOPUS_ID:0014749746
A Deductive Question-Answerer for Natural Language Inference
The question-answering aspects of the Protosynthex III prototype language processing system are described and exemplified in detail. The system is written in LISP 1.5 and operates on the Q-32 time-sharing system. The system's data structures and their semantic organization, the deductive question-answering formalism of relational properties and complex-relation-forming operators, and the question-answering procedures which employ these features in their operation are all described and illustrated. Examples of the system's performance and of the limitations of its question-answering capability are presented and discussed. It is shown that the use of semantic information in deductive question answering greatly facilitates the process, and that a top-down procedure which works from question to answer enables effective use to be made of this information. It is concluded that the development of Protosynthex III into a practically useful system to work with large data bases is possible but will require changes in both the data structures and the algorithms used for question answering. © 1970, ACM. All rights reserved.
[ "Natural Language Interfaces", "Reasoning", "Question Answering", "Textual Inference" ]
[ 11, 8, 27, 22 ]
http://arxiv.org/abs/1511.08277v1
A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations
Matching natural language sentences is central for many applications such as information retrieval and question answering. Existing deep models rely on a single sentence representation or multiple granularity representations for matching. However, such methods cannot well capture the contextualized local information in the matching process. To tackle this problem, we present a new deep architecture to match two sentences with multiple positional sentence representations. Specifically, each positional sentence representation is a sentence representation at this position, generated by a bidirectional long short term memory (Bi-LSTM). The matching score is finally produced by aggregating interactions between these different positional sentence representations, through $k$-Max pooling and a multi-layer perceptron. Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.
[ "Language Models", "Semantic Text Processing", "Question Answering", "Representation Learning", "Natural Language Interfaces" ]
[ 52, 72, 27, 12, 11 ]
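The matching scheme described above (Bi-LSTM positional sentence representations, pairwise interactions, k-max pooling, then an MLP) can be illustrated with the following PyTorch sketch. It uses cosine interactions and assumed dimensions, so it is a simplified stand-in rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PositionalMatcher(nn.Module):
    """Illustrative matcher: each sentence is encoded into positional
    representations by a Bi-LSTM, pairwise cosine interactions are computed,
    and the k strongest interactions are fed to an MLP to produce the
    matching score (dimensions and k are assumptions)."""

    def __init__(self, embed_dim=100, hidden=128, k=10):
        super().__init__()
        self.k = k
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.scorer = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x, y):                 # (batch, len_x, d), (batch, len_y, d)
        hx, _ = self.encoder(x)              # positional reps (batch, len_x, 2*hidden)
        hy, _ = self.encoder(y)
        hx = nn.functional.normalize(hx, dim=-1)
        hy = nn.functional.normalize(hy, dim=-1)
        interactions = torch.bmm(hx, hy.transpose(1, 2))      # cosine similarities
        flat = interactions.flatten(start_dim=1)
        top_k = flat.topk(self.k, dim=1).values               # k-max pooling
        return self.scorer(top_k).squeeze(-1)                 # matching score

model = PositionalMatcher()
score = model(torch.randn(4, 15, 100), torch.randn(4, 12, 100))
print(score.shape)                           # torch.Size([4])
```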
https://aclanthology.org//W14-2405/
A Deep Architecture for Semantic Parsing
Many successful approaches to semantic parsing build on top of the syntactic analysis of text, and make use of distributional representations or statistical models to match parses to ontology-specific queries. This paper presents a novel deep learning architecture which provides a semantic parsing system through the union of two neural models of language semantics. It allows for the generation of ontology-specific queries from natural language statements and questions without the need for parsing, which makes it especially suitable to grammatically malformed or syntactically atypical text, such as tweets, as well as permitting the development of semantic parsers for resource-poor languages.
[ "Knowledge Representation", "Semantic Parsing", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 18, 40, 72, 15 ]
SCOPUS_ID:85076628890
A Deep Attention based Framework for Image Caption Generation in Hindi Language
Image captioning refers to the process of generating a textual description for an image which defines the object and activity within the image. It is an intersection of computer vision and natural language processing, where computer vision is used to understand the content of an image and language modelling from natural language processing is used to convert an image into words in the right order. A large number of works exist for generating image captions in English, but no work exists for generating image captions in Hindi. Hindi is the official language of India, and it is the fourth most-spoken language in the world, after Mandarin, Spanish and English. The current paper attempts to bridge this gap. Here an attention-based novel architecture for generating image captions in Hindi is proposed. A convolutional neural network is used as an encoder to extract features from an input image, and a gated recurrent unit based neural network is used as a decoder to perform language modelling up to the word level. In between, we use an attention mechanism which helps the decoder to look at the important portions of the image. In order to show the efficacy of the proposed model, we first created a manually annotated image captioning training corpus in Hindi corresponding to the popular MS COCO English dataset, which contains around 80,000 images. Experimental results show that our proposed model attains a BLEU-1 score of 0.5706 on this dataset.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 47, 74 ]
SCOPUS_ID:85129612499
A Deep Attentive Multimodal Learning Approach for Disaster Identification From Social Media Posts
Microblogging platforms such as Twitter have become indispensable for disseminating valuable information, especially at times of natural and man-made disasters. Often people post multimedia contents with images and/or videos to report important information such as casualties, damages of infrastructure, and urgent needs of affected people. Such information can be very helpful for humanitarian organizations for planning adequate response in a time-critical manner. However, identifying disaster information from a vast amount of posts is an arduous task, which calls for an automatic system that can filter out the actionable and non-actionable disaster-related information from social media. While many studies have shown the effectiveness of combining text and image contents for disaster identification, most previous work focused on analyzing only the textual modality and/or applied traditional recurrent neural network (RNN) or convolutional neural network (CNN) which might lead to performance degradation in case of long input sequences. This paper presents a multimodal disaster identification system that utilizes both visual and textual data in a synergistic way by conjoining the influential word features with the visual features to classify tweets. Specifically, we utilize a pretrained convolutional neural network (e.g., ResNet50) to extract visual features and a bidirectional long short-term memory (BiLSTM) network with an attention mechanism to extract textual features. We then aggregate both visual and textual features by leveraging a feature fusion approach followed by applying the softmax classifier. The evaluations demonstrate that the proposed multimodal system enhances the performance over the existing baselines including both unimodal and multimodal models by attaining approximately 1% and 7% of performance improvement, respectively.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
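A minimal sketch of the feature-fusion idea above is given below: an attention-weighted BiLSTM text vector is concatenated with a precomputed visual feature vector (for example, 2048-d ResNet50 pooled features) and passed to a classifier. All dimensions are assumptions and the visual backbone is assumed to run separately.

```python
import torch
import torch.nn as nn

class MultimodalDisasterClassifier(nn.Module):
    """Illustrative feature-fusion model: attention-weighted BiLSTM text
    features are concatenated with precomputed 2048-d visual features (e.g.
    from a pretrained ResNet50) and passed to a softmax classifier.
    All dimensions are assumptions, not the paper's exact configuration."""

    def __init__(self, vocab_size=20000, embed_dim=100, hidden=128,
                 visual_dim=2048, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.attention = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden + visual_dim, num_classes)

    def forward(self, token_ids, visual_features):
        states, _ = self.bilstm(self.embedding(token_ids))     # (B, T, 2H)
        weights = torch.softmax(self.attention(states), dim=1) # (B, T, 1)
        text_vector = (weights * states).sum(dim=1)            # attention pooling
        fused = torch.cat([text_vector, visual_features], dim=-1)
        return self.classifier(fused)        # logits; softmax applied in the loss

model = MultimodalDisasterClassifier()
logits = model(torch.randint(1, 20000, (4, 40)), torch.randn(4, 2048))
print(logits.shape)                                            # torch.Size([4, 2])
```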
SCOPUS_ID:85073253859
A Deep Bidirectional Highway Long Short-Term Memory Network Approach to Chinese Semantic Role Labeling
Existing approaches to Chinese semantic role labeling (SRL) mainly adopt deep long short-term memory (LSTM) neural networks to address the long-term dependencies problem. However, deep LSTM networks cannot address the vanishing gradient problem properly. In addition, the complexity of the Chinese language, as a hieroglyphic language, decreases the performance of traditional SRL approaches to Chinese SRL. To address these problems, this paper proposes a new approach with a deep bidirectional highway LSTM network. The performance of the proposed approach is further improved by introducing the conditional random fields (CRFs) constraints and part-of-speech (POS) feature since POS tags are the classes of formal equivalents of words in linguistics. The experimental results on the commonly used Chinese Proposition Bank dataset show that the proposed approach outperforms existing approaches. With an easily acquired and reliable POS feature for practical applications, the proposed approach substantially improves Chinese SRL.
[ "Language Models", "Semantic Parsing", "Semantic Text Processing" ]
[ 52, 40, 72 ]
SCOPUS_ID:85067867363
A Deep CFS Model for Text Clustering
With the fast development of Internet technology, court text information is being collected from various sources, such as Weibo and WeChat, at an unprecedented speed. This high-volume court text information poses a great challenge for judges making reasonable decisions based on a vast number of cases. To cluster reasonable assistant cases from these vast case collections, in this paper we propose a deep CFS model for text clustering, which can cluster court text effectively. In the proposed model, a robust deep text feature extractor is designed to improve clustering accuracy, in which an ensemble of deep learning models is used to learn deep features of the text. Furthermore, the CFS algorithm is applied to the extracted deep text features to discover non-spherical clusters with automatic identification of the cluster centers. Finally, the proposed deep clustering model is evaluated on two typical datasets, and the results show that it performs better than the compared models in terms of clustering accuracy.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
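Assuming that CFS here refers to clustering by fast search and find of density peaks (Rodriguez and Laio, 2014), which matches the abstract's description of automatically found centers and non-spherical clusters, the NumPy sketch below illustrates that algorithm on a matrix of extracted deep text features. The cutoff quantile and the toy data are assumptions.

```python
import numpy as np

def cfs_cluster(features, num_clusters, cutoff_quantile=0.02):
    """Illustrative density-peaks clustering: points with high local density
    that are far from any denser point are chosen as centers, and every
    remaining point is assigned to the cluster of its nearest denser point."""
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    d_c = np.quantile(dist[dist > 0], cutoff_quantile)         # cutoff distance
    rho = np.exp(-(dist / d_c) ** 2).sum(axis=1) - 1.0         # local density
    order = np.argsort(-rho)                                   # by decreasing density
    n = len(features)
    delta = np.zeros(n)
    nearest_denser = np.zeros(n, dtype=int)
    delta[order[0]] = dist[order[0]].max()
    for i in range(1, n):
        denser = order[:i]                                     # all denser points
        j = denser[np.argmin(dist[order[i], denser])]
        delta[order[i]], nearest_denser[order[i]] = dist[order[i], j], j
    centers = np.argsort(-(rho * delta))[:num_clusters]        # automatic centers
    # The densest point has no denser neighbour; tie it to its nearest center.
    nearest_denser[order[0]] = centers[np.argmin(dist[order[0], centers])]
    labels = np.full(n, -1)
    labels[centers] = np.arange(num_clusters)
    for i in order:                                            # assign by density order
        if labels[i] == -1:
            labels[i] = labels[nearest_denser[i]]
    return labels, centers

# Toy usage on random stand-ins for deep text features.
labels, centers = cfs_cluster(np.random.rand(50, 16), num_clusters=3)
print(centers, np.bincount(labels))
```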
http://arxiv.org/abs/2201.12664v1
A Deep CNN Architecture with Novel Pooling Layer Applied to Two Sudanese Arabic Sentiment Datasets
Arabic sentiment analysis has become an important research field in recent years. Initially, work focused on Modern Standard Arabic (MSA), which is the most widely-used form. Since then, work has been carried out on several different dialects, including Egyptian, Levantine and Moroccan. Moreover, a number of datasets have been created to support such work. However, up until now, less work has been carried out on Sudanese Arabic, a dialect which has 32 million speakers. In this paper, two new publicly available datasets are introduced, the 2-Class Sudanese Sentiment Dataset (SudSenti2) and the 3-Class Sudanese Sentiment Dataset (SudSenti3). Furthermore, a CNN architecture, SCM, is proposed, comprising five CNN layers together with a novel pooling layer, MMA, to extract the best features. This SCM+MMA model is applied to SudSenti2 and SudSenti3 with accuracies of 92.75% and 84.39%. Next, the model is compared to other deep learning classifiers and shown to be superior on these new datasets. Finally, the proposed model is applied to the existing Saudi Sentiment Dataset and to the MSA Hotel Arabic Review Dataset with accuracies 85.55% and 90.01%.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85060016334
A Deep CNN Model for Student Learning Pedagogy Detection Data Collection Using OCR
Student learning pedagogy detection requires a huge amount of data from students, so an efficient process for collecting the data is a major concern. This paper proposes an approach based on a Convolutional Neural Network (CNN) for Optical Character Recognition (OCR) and mainly shows a method of using this OCR system to extract the information a student has filled into a specialized form. This form contains 170 cells. Some of these cells are to be filled with capital English alphabets and others are to be filled with English numerals. This paper discusses a method for feature extraction and the use of a CNN to identify each cell. Using this method we could predict 96.87% of numeric data and 94.36% of alphabetic data accurately.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/1811.11374v1
A Deep Cascade Model for Multi-Document Reading Comprehension
A fundamental trade-off between effectiveness and efficiency needs to be balanced when designing an online question answering system. Effectiveness comes from sophisticated functions such as extractive machine reading comprehension (MRC), while efficiency is obtained from improvements in preliminary retrieval components such as candidate document selection and paragraph ranking. Given the complexity of the real-world multi-document MRC scenario, it is difficult to jointly optimize both in an end-to-end system. To address this problem, we develop a novel deep cascade learning model, which progressively evolves from the document-level and paragraph-level ranking of candidate texts to more precise answer extraction with machine reading comprehension. Specifically, irrelevant documents and paragraphs are first filtered out with simple functions for efficiency consideration. Then we jointly train three modules on the remaining texts for better tracking the answer: the document extraction, the paragraph extraction and the answer extraction. Experiment results show that the proposed method outperforms the previous state-of-the-art methods on two large-scale multi-document benchmark datasets, i.e., TriviaQA and DuReader. In addition, our online system can stably serve typical scenarios with millions of daily requests in less than 50ms.
[ "Information Extraction & Text Mining", "Green & Sustainable NLP", "Machine Reading Comprehension", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 3, 68, 37, 8, 4 ]
SCOPUS_ID:85124280117
A Deep Content-Based Model for Persian Rumor Verification
During the development of social media, there has been a transformation in social communication. Despite their positive applications in social interactions and news spread, social media also provide an ideal platform for spreading rumors. Rumors can endanger the security of society in normal or critical situations. Therefore, it is important to detect and verify rumors in the early stage of their spreading. Many research works have focused on social attributes in the social network to solve the problem of rumor detection and verification, while less attention has been paid to content features. The social and structural features of rumors develop over time and are not available in the early stage of a rumor. Therefore, this study presents a content-based model to verify Persian rumors on Twitter and Telegram early. The proposed model demonstrates the important role of content in spreading rumors and generates a better-integrated representation for each source rumor document by fusing its semantic, pragmatic, and syntactic information. First, contextual word embeddings of the source rumor are generated by a hybrid model based on ParsBERT and parallel CapsNets. Then, pragmatic and syntactic features of the rumor are extracted and concatenated with the embeddings to capture rich information for rumor verification. Experimental results on real-world datasets demonstrate that the proposed model significantly outperforms the state-of-the-art models in the early rumor verification task. Also, it can enhance the performance of the classifier from 2% to 11% on Twitter and from 5% to 23% on Telegram. These results validate the model's effectiveness when limited content information is available.
[ "Semantic Text Processing", "Information Retrieval", "Syntactic Text Processing", "Representation Learning", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 15, 12, 36, 3 ]
SCOPUS_ID:85044472751
A Deep Convolution Neural Network Based Model for Enhancing Text Video Frames for Detection
The main cause of poor results in video text detection is the low quality of frames, which is affected by different factors; blurring, complex backgrounds, and illumination are a few of the challenges encountered in image enhancement. This paper proposes a technique for enhancing image quality for better human perception along with text detection for video frames. A set of smart and effective CNN denoisers is designed and trained to denoise an image by adopting a variable splitting technique; the robust denoisers are plugged into model-based optimization methods with an HQS framework to handle image deblurring and super-resolution problems. Further, for detecting text from the denoised frames, we use state-of-the-art methods such as MSER (Maximally Stable Extremal Regions) and SWT (Stroke Width Transform), and experiments are done on our database and the ICDAR and YVT databases to demonstrate our proposed work in terms of precision, recall and F-measure.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85080905257
A Deep Convolutional Deblurring and Detection Neural Network for Localizing Text in Videos
Scene text in the video is usually vulnerable to various blurs like those caused by camera or text motions, which brings additional difficulty to reliably extract them from the video for content-based video applications. In this paper, we propose a novel fully convolutional deep neural network for deblurring and detecting text in the video. Specifically, to cope with blur of video text, we propose an effective deblurring subnetwork that is composed of multi-level convolutional blocks with both cross-block (long) and within-block (short) skip connections for progressively learning residual deblurred image details as well as a spatial attention mechanism to pay more attention on blurred regions, which generates the sharper image for current frame by fusing multiple surrounding adjacent frames. To further localize text in the frames, we enhance the EAST text detection model by introducing deformable convolution layers and deconvolution layers, which better capture widely varied appearances of video text. Experiments on the public scene text video dataset demonstrate the state-of-the-art performance of the proposed video text deblurring and detection model.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/2201.06313v3
A Deep Convolutional Neural Networks Based Multi-Task Ensemble Model for Aspect and Polarity Classification in Persian Reviews
Aspect-based sentiment analysis is of great importance and application because of its ability to identify all aspects discussed in the text. However, aspect-based sentiment analysis will be most effective when, in addition to identifying all the aspects discussed in the text, it can also identify their polarity. Most previous methods use the pipeline approach, that is, they first identify the aspects and then identify the polarities. Such methods are unsuitable for practical applications since they can lead to model errors. Therefore, in this study, we propose a multi-task learning model based on Convolutional Neural Networks (CNNs), which can simultaneously detect the aspect category and its polarity. Creating a single model alone may not provide the best predictions and can lead to errors such as bias and high variance. To reduce these errors and improve the efficiency of model predictions, combining several models, known as ensemble learning, may provide better results. Therefore, the main purpose of this article is to create a model based on an ensemble of multi-task deep convolutional neural networks to enhance sentiment analysis in Persian reviews. We evaluated the proposed method using a Persian language dataset in the movie domain. The Jaccard index and Hamming loss measures were used to evaluate the performance of the developed models. The results indicate that this new approach increases the efficiency of the sentiment analysis model in the Persian language.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Polarity Analysis", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Responsible & Trustworthy NLP", "Text Classification", "Green & Sustainable NLP" ]
[ 52, 80, 72, 24, 3, 33, 23, 78, 4, 36, 68 ]
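As a hedged sketch of the multi-task setup described above, the following model shares a convolutional encoder between an aspect-category head and a per-aspect polarity head; the vocabulary size, filter widths, and number of aspects/polarities are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskAspectCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=128,
                 n_aspects=10, n_polarities=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in (2, 3, 4)]
        )
        hidden = n_filters * 3
        self.aspect_head = nn.Linear(hidden, n_aspects)                  # multi-label head
        self.polarity_head = nn.Linear(hidden, n_aspects * n_polarities)  # polarity per aspect
        self.n_aspects, self.n_polarities = n_aspects, n_polarities

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)                 # (B, emb, T)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        h = torch.cat(feats, dim=1)                             # shared representation
        aspect_logits = self.aspect_head(h)                     # sigmoid per aspect
        polarity_logits = self.polarity_head(h).view(
            -1, self.n_aspects, self.n_polarities)              # softmax per aspect
        return aspect_logits, polarity_logits
```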
http://arxiv.org/abs/1906.12188v1
A Deep Decoder Structure Based on WordEmbedding Regression for An Encoder-Decoder Based Model for Image Captioning
Generating textual descriptions for images has been an attractive problem for computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with an attention mechanism. These methods train decoders to minimize the log-likelihood of the next word in a sentence given the previous ones, which results in sparsity of the output space. In this work, we propose a new approach that trains decoders to regress the word embedding of the next word given the previous ones, instead of minimizing the log-likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach on the MS-COCO dataset. The proposed model outperformed state-of-the-art models, especially in generating longer captions: it achieved a CIDEr score of 125.0 and a BLEU-4 score of 50.5, while the best scores of the state-of-the-art models are 117.1 and 48.0, respectively.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Representation Learning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 12, 47, 74 ]
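The core change described in this abstract, regressing the embedding of the next word instead of minimizing a softmax log-likelihood, can be sketched as below. The exact loss form (one minus cosine similarity) and the nearest-neighbour decoding step are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def embedding_regression_loss(decoder_states, target_ids, embedding_matrix):
    """decoder_states: (B, T, D) decoder outputs projected to embedding size;
    target_ids: (B, T) gold next-word ids; embedding_matrix: (V, D)."""
    targets = embedding_matrix[target_ids]                        # (B, T, D)
    # 1 - cosine similarity pulls the predicted vector toward the gold embedding.
    return (1.0 - F.cosine_similarity(decoder_states, targets, dim=-1)).mean()

def predict_next_word(decoder_state, embedding_matrix):
    """Nearest-neighbour lookup in embedding space at inference time."""
    sims = F.cosine_similarity(decoder_state.unsqueeze(0), embedding_matrix, dim=-1)
    return int(sims.argmax())
```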
http://arxiv.org/abs/1811.04670v1
A Deep Ensemble Framework for Fake News Detection and Classification
The detection of fake news, rumors, incorrect information, and misinformation is nowadays a crucial issue, as such content can have serious consequences for our social fabric. The rate of such information is increasing rapidly due to the availability of enormous web information sources, including social media feeds, news blogs, and online newspapers. In this paper, we develop various deep learning models for detecting fake news and classifying it into pre-defined fine-grained categories. First, we develop models based on Convolutional Neural Network (CNN) and Bi-directional Long Short Term Memory (Bi-LSTM) networks. The representations obtained from these two models are fed into a Multi-layer Perceptron (MLP) for the final classification. Our experiments on a benchmark dataset show promising results, with an overall accuracy of 44.87%, which outperforms the current state of the art.
[ "Information Extraction & Text Mining", "Information Retrieval", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 3, 24, 17, 8, 46, 36, 4 ]
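A minimal sketch of the ensemble described above: CNN and Bi-LSTM representations are concatenated and passed to an MLP for fine-grained classification. All dimensions and the number of classes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CnnBiLstmEnsemble(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=100,
                 lstm_hidden=100, n_classes=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(emb_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(n_filters + 2 * lstm_hidden, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, token_ids):
        e = self.emb(token_ids)                                   # (B, T, D)
        cnn_feat = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values
        _, (h, _) = self.bilstm(e)                                # h: (2, B, H)
        lstm_feat = torch.cat([h[0], h[1]], dim=1)                # (B, 2H)
        return self.mlp(torch.cat([cnn_feat, lstm_feat], dim=1))  # class logits
```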
http://arxiv.org/abs/1805.06553v1
A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation
Natural language generation lies at the core of generative dialogue systems and conversational agents. We describe an ensemble neural language generator, and present several novel methods for data representation and augmentation that yield improved results in our model. We test the model on three datasets in the restaurant, TV and laptop domains, and report both objective and subjective evaluations of our best model. Using a range of automatic metrics, as well as human evaluators, we show that our approach achieves better results than state-of-the-art models on the same datasets.
[ "Language Models", "Semantic Text Processing", "Text Generation" ]
[ 52, 72, 47 ]
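The slot-alignment component named in the title can be illustrated, in a heavily simplified form, as reranking ensemble outputs by how many slot values from the input meaning representation they realize; the real system's alignment is more sophisticated than this string-matching stand-in.

```python
def rerank_by_slot_alignment(meaning_repr, candidates):
    """meaning_repr: dict of slot -> value, e.g. {"name": "Aromi", "food": "Italian"};
    candidates: list of generated sentences from the ensemble members."""
    def coverage(sentence):
        s = sentence.lower()
        return sum(1 for value in meaning_repr.values() if str(value).lower() in s)
    return max(candidates, key=coverage)

# rerank_by_slot_alignment({"name": "Aromi", "food": "Italian"},
#                          ["Aromi serves Italian food.", "It is a restaurant."])
```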
SCOPUS_ID:85127754176
A Deep Fusion Matching Network Semantic Reasoning Model
As a vital technology for natural language understanding, sentence representation reasoning mainly concerns sentence representation methods and reasoning models. Although performance has improved, problems remain, such as incomplete expression of sentence semantics, insufficient depth of the reasoning model, and a lack of interpretability in the reasoning process. To address the reasoning model's lack of depth and interpretability, this paper designs a deep fusion matching network consisting of an encoding layer, a matching layer, a dependency convolution layer, an information aggregation layer, and an inference prediction layer. Building on a deep matching network, the matching layer is improved: a heuristic matching algorithm replaces the bidirectional long short-term memory network to simplify the interactive fusion, which increases the reasoning depth and reduces model complexity. The dependency convolution layer uses a tree-structured convolutional network to extract sentence structure information along the dependency tree, which improves the interpretability of the reasoning process. Finally, the model is evaluated on several datasets. The results show that it reasons better than shallow reasoning models, reaching 89.0% accuracy on the SNLI test set. Semantic correlation analysis further shows that the dependency convolution layer helps improve the interpretability of the reasoning process.
[ "Semantic Text Processing", "Representation Learning", "Explainability & Interpretability in NLP", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 72, 12, 81, 8, 4 ]
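The heuristic matching step mentioned above (replacing an interactive BiLSTM fusion) is commonly realized by combining the two sentence encodings through concatenation, absolute difference, and element-wise product, as in the sketch below; whether the paper uses exactly these three operations is an assumption.

```python
import torch

def heuristic_match(premise_enc, hypothesis_enc):
    """premise_enc, hypothesis_enc: (B, D) sentence encodings."""
    return torch.cat(
        [premise_enc,
         hypothesis_enc,
         torch.abs(premise_enc - hypothesis_enc),
         premise_enc * hypothesis_enc],
        dim=-1,
    )  # (B, 4D) matching feature fed to the downstream layers
```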
http://arxiv.org/abs/1709.05074v1
A Deep Generative Framework for Paraphrase Generation
Paraphrase generation is an important problem in NLP, with applications in question answering, information retrieval, information extraction, and conversational systems, to name a few. In this paper, we address the problem of generating paraphrases automatically. Our proposed method combines deep generative models (VAE) with sequence-to-sequence models (LSTM) to generate paraphrases for a given input sentence. Traditional VAEs combined with recurrent neural networks can generate free text but are not suitable for paraphrasing a given sentence. We address this problem by conditioning both the encoder and decoder sides of the VAE on the original sentence, so that the model can generate paraphrases of that sentence. Unlike most existing models, our model is simple, modular, and can generate multiple paraphrases for a given sentence. Quantitative evaluation of the proposed method on a benchmark paraphrase dataset demonstrates its efficacy and a significant performance improvement over state-of-the-art methods, whereas qualitative human evaluation indicates that the generated paraphrases are well-formed, grammatically correct, and relevant to the input sentence. Furthermore, we evaluate our method on a newly released question paraphrase dataset and establish a new baseline for future research.
[ "Paraphrasing", "Text Generation" ]
[ 32, 47 ]
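A hedged sketch of the conditional-VAE objective described above: the decoder is conditioned on both the latent code and the original sentence, and training combines token-level reconstruction with a KL term. The component names and the unweighted KL term are assumptions.

```python
import torch
import torch.nn.functional as F

def cvae_paraphrase_loss(mu, logvar, decoder_logits, target_ids, pad_id=0):
    """mu, logvar: (B, Z) posterior parameters from the recognition network;
    decoder_logits: (B, T, V) decoder outputs conditioned on z and the original
    sentence encoding; target_ids: (B, T) gold paraphrase tokens."""
    recon = F.cross_entropy(decoder_logits.reshape(-1, decoder_logits.size(-1)),
                            target_ids.reshape(-1), ignore_index=pad_id)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def reparameterize(mu, logvar):
    # Standard reparameterization trick used to sample the latent code z.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
```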
http://arxiv.org/abs/1906.08972v1
A Deep Generative Model for Code-Switched Text
Code-switching, the interleaving of two or more languages within a sentence or discourse, is pervasive in multilingual societies. Accurate language models for code-switched text are critical for NLP tasks. State-of-the-art data-intensive neural language models are difficult to train well from scarce language-labeled code-switched text. A potential solution is to use deep generative models to synthesize large volumes of realistic code-switched text. Although generative adversarial networks and variational autoencoders can synthesize plausible monolingual text from a continuous latent space, they cannot adequately handle code-switched text, owing to its informal style and the complex interplay between the constituent languages. We introduce VACS, a novel variational autoencoder architecture specifically tailored to code-switching phenomena. VACS encodes to and decodes from a two-level hierarchical representation, which models syntactic contextual signals in the lower level and language-switching signals in the upper level. Sampling representations from the prior and decoding them produces well-formed, diverse code-switched sentences. Extensive experiments show that augmenting natural monolingual data with synthetic code-switched text results in a significant (33.06%) drop in perplexity.
[ "Code-Switching", "Language Models", "Semantic Text Processing", "Multilinguality" ]
[ 7, 52, 72, 0 ]
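The two-level hierarchical latent structure described above can be illustrated, very loosely, by sampling an upper-level switching latent and then a lower-level syntactic latent whose prior is conditioned on it; all module names and dimensions below are assumptions, not the VACS implementation.

```python
import torch
import torch.nn as nn

class TwoLevelSampler(nn.Module):
    def __init__(self, z_switch_dim=16, z_syntax_dim=32):
        super().__init__()
        # The lower-level prior is conditioned on the upper-level sample.
        self.prior_syntax = nn.Linear(z_switch_dim, 2 * z_syntax_dim)

    def sample(self, batch_size=1):
        z_switch = torch.randn(batch_size, self.prior_syntax.in_features)
        mu, logvar = self.prior_syntax(z_switch).chunk(2, dim=-1)
        z_syntax = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z_switch, z_syntax   # both would be fed to the decoder
```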
http://arxiv.org/abs/1807.02745v1
A Deep Generative Model of Vowel Formant Typology
What makes some types of languages more probable than others? For instance, we know that almost all spoken languages contain the vowel phoneme /i/; why should that be? The field of linguistic typology seeks to answer these questions and, thereby, divine the mechanisms that underlie human language. In our work, we tackle the problem of vowel system typology, i.e., we propose a generative probability model of which vowels a language contains. In contrast to previous work, we work directly with the acoustic information -- the first two formant values -- rather than modeling discrete sets of phonemic symbols (IPA). We develop a novel generative probability model and report results based on a corpus of 233 languages.
[ "Typology", "Syntactic Text Processing", "Multilinguality" ]
[ 45, 15, 0 ]
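As an illustration of modeling vowels directly in formant space, the NumPy sketch below scores an inventory of (F1, F2) points under a small mixture of Gaussian components; the component parameters are toy values, and the paper's actual generative model is richer than this.

```python
import numpy as np

def log_gaussian(x, mean, cov):
    # Log density of a 2-D Gaussian at point x.
    d = x - mean
    inv = np.linalg.inv(cov)
    return -0.5 * (d @ inv @ d + np.log(np.linalg.det(cov)) + 2 * np.log(2 * np.pi))

def inventory_log_prob(formants, components):
    """formants: list of (F1, F2) in Hz; components: list of (weight, mean, cov)."""
    total = 0.0
    for f in formants:
        f = np.asarray(f, dtype=float)
        probs = [w * np.exp(log_gaussian(f, np.asarray(m, float), np.asarray(c, float)))
                 for w, m, c in components]
        total += np.log(sum(probs))
    return total

# Example: /i/-like and /a/-like components (means in Hz, toy covariances).
components = [(0.5, [300, 2300], [[2000, 0], [0, 20000]]),
              (0.5, [750, 1300], [[2000, 0], [0, 20000]])]
print(inventory_log_prob([(310, 2250), (700, 1250)], components))
```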
SCOPUS_ID:85062279624
A Deep Hierarchical Neural Network Model for Aspect-based Sentiment Analysis
Aspect-based sentiment analysis has become one of the research hotspots in natural language processing (NLP) in recent years. Unlike ordinary sentiment analysis, aspect-based sentiment classification is a fine-grained task that must infer different sentiment polarities for different aspects in the same sentence, since a sentence usually mentions more than one aspect. Previous studies generally treat each sentence as an independent input to the neural network and focus only on the given aspect within that sentence during training. These approaches, however, ignore long-distance dependencies of the given aspect across the entire review and cannot take full advantage of the contextual relations between sentences in the same review, which hurts performance on ambiguous and short sentences during training. To address these problems, this paper proposes a hierarchical model combining a regional convolutional neural network and a hierarchical long short-term memory network (HRCNN-LSTM) for aspect-based sentiment classification of long customer reviews. The approach extracts both the feature information of individual sentences and the relations between sentences across the whole review by combining the regional CNN and the hierarchical LSTM, and it can infer the sentiment polarity of different aspects discriminatively without any external information such as semantic dependency parsing. We divide a long review into several regions according to the aspect targets mentioned in its sentences, and a regional CNN processes these independent regions to extract information across the entire review. The regional CNN captures long-distance dependencies of the concerned aspect across the whole review while preserving the order of the regions, and it also reduces training time compared with using an LSTM network alone. Dividing regions by target also helps the model discriminate the sentiment polarities of different aspects in the same sentence. In addition, we present a hierarchical LSTM, combined with the regional CNN, that attends to both word-level and sentence-level information through a hierarchical attention mechanism. The aspect embedding serves as word-level attention: it is combined with the word embeddings as a sequential input to a word-level LSTM, which focuses on the given aspect during training and generates sentence-level attention. The final output of the word-level LSTM is combined with the features extracted by the regional CNN and fed into a sentence-level LSTM. This hierarchical word-level and sentence-level attention between the regional CNN and the hierarchical LSTM captures more in-depth information from each sentence and the entire review, and considers both intra-sentence and inter-sentence relations in prediction, giving the model a good ability to discriminate the sentiment polarity of short and ambiguous sentences. Finally, experimental results on multi-domain datasets in two languages from SemEval-2016 show that our approach outperforms several competitor models on aspect-based sentiment classification using word vectors only.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Polarity Analysis", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 12, 33, 23, 78, 36, 3 ]
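A simplified sketch of the hierarchical architecture described above: a regional CNN encodes each aspect-defined region, a word-level LSTM with the aspect embedding as an attention signal encodes words, and a sentence-level LSTM combines the two streams. The dimensions and the exact fusion are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalAspectModel(nn.Module):
    def __init__(self, vocab_size, n_aspects, emb_dim=100, hidden=100, n_classes=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.aspect_emb = nn.Embedding(n_aspects, emb_dim)
        self.region_cnn = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.word_lstm = nn.LSTM(2 * emb_dim, hidden, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, regions, aspect_id):
        # regions: (B, R, T) token ids per region; aspect_id: (B,)
        B, R, T = regions.shape
        e = self.word_emb(regions)                                     # (B, R, T, D)
        a = self.aspect_emb(aspect_id)[:, None, None, :].expand_as(e)  # aspect as word-level signal
        cnn_feat = torch.relu(
            self.region_cnn(e.view(B * R, T, -1).transpose(1, 2))
        ).max(dim=2).values.view(B, R, -1)                             # (B, R, H)
        out, _ = self.word_lstm(torch.cat([e, a], dim=-1).view(B * R, T, -1))
        w = torch.softmax(self.attn(out), dim=1)                       # word-level attention
        sent_vec = (w * out).sum(dim=1).view(B, R, -1)                 # (B, R, H)
        seq_out, _ = self.sent_lstm(torch.cat([cnn_feat, sent_vec], dim=-1))
        return self.classifier(seq_out[:, -1])                         # polarity logits
```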
SCOPUS_ID:85118587974
A Deep Language Model for Symptom Extraction From Clinical Text and its Application to Extract COVID-19 Symptoms From Social Media
Patients experience various symptoms when they have either acute or chronic diseases or undergo treatment for them. Symptoms are often indicators of disease severity and of the need for hospitalization. They are typically described in free text written as clinical notes in Electronic Health Records (EHRs) and are not integrated with other clinical factors for disease prediction and healthcare outcome management. In this research, we propose a novel deep language model to extract patient-reported symptoms from clinical text. The model integrates syntactic and semantic analysis for symptom extraction and identifies symptoms actually reported by patients as well as conditional or negated symptoms. It can extract both complex and straightforward symptom expressions. We evaluated our model on a real-world clinical notes dataset and demonstrated that it achieves superior performance compared with three other state-of-the-art symptom extraction models. We extensively analyzed the model to illustrate its effectiveness by examining each component's contribution. Finally, we applied the model to a COVID-19 tweet dataset to extract COVID-19 symptoms. The results show that it can identify all the symptoms suggested by the Centers for Disease Control and Prevention (CDC) ahead of the CDC timeline, as well as many rare symptoms.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 72, 3 ]
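One common way to realize symptom extraction of the kind described above is token-level BIO tagging plus a negation check; the sketch below uses a BiLSTM tagger and a cue-word window as stand-ins, which are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

NEGATION_CUES = {"no", "denies", "without", "not"}

class SymptomTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_tags=3):  # O, B-SYM, I-SYM
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.emb(token_ids))
        return self.out(h)                       # (B, T, n_tags) BIO logits

def is_negated(tokens, symptom_start, window=4):
    """Flag a symptom span whose left context contains a negation cue."""
    left = tokens[max(0, symptom_start - window):symptom_start]
    return any(t.lower() in NEGATION_CUES for t in left)
```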
http://arxiv.org/abs/2011.10358v1
A Deep Language-independent Network to analyze the impact of COVID-19 on the World via Sentiment Analysis
Towards the end of 2019, Wuhan experienced an outbreak of a novel coronavirus, which soon spread all over the world, resulting in a deadly pandemic that infected millions of people around the globe. Governments and public health agencies followed many strategies to counter the fatal virus; however, it severely affected people's social and economic lives. In this paper, we extract and study the opinions of people from the five countries worst affected by the virus, namely the USA, Brazil, India, Russia, and South Africa. We propose a deep language-independent Multilevel Attention-based Conv-BiGRU network (MACBiG-Net), which includes an embedding layer and word-level and sentence-level encoded attention mechanisms to extract positive, negative, and neutral sentiments. The embedding layer encodes the sentence sequence into a real-valued vector. Word-level and sentence-level encoding is performed by a 1D Conv-BiGRU based mechanism, followed by word-level and sentence-level attention, respectively. We further develop a COVID-19 Sentiment Dataset by crawling tweets from Twitter. Extensive experiments on our proposed dataset demonstrate the effectiveness of the proposed MACBiG-Net, and attention-weight visualization together with in-depth result analysis shows that the network effectively captures people's sentiments.
[ "Semantic Text Processing", "Sentiment Analysis", "Representation Learning" ]
[ 72, 78, 12 ]
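The word-level encoder described above (1D convolution, BiGRU, then attention) can be sketched as follows; the layer sizes and stacking order are assumptions, not the released MACBiG-Net configuration.

```python
import torch
import torch.nn as nn

class ConvBiGRUAttention(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, conv_ch=128, gru_hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, conv_ch, kernel_size=3, padding=1)
        self.bigru = nn.GRU(conv_ch, gru_hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * gru_hidden, 1)

    def forward(self, token_ids):
        x = torch.relu(self.conv(self.emb(token_ids).transpose(1, 2)))  # (B, C, T)
        h, _ = self.bigru(x.transpose(1, 2))                            # (B, T, 2H)
        w = torch.softmax(self.attn(h), dim=1)                          # word-level attention weights
        return (w * h).sum(dim=1)                                       # (B, 2H) sentence vector
```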
SCOPUS_ID:85109142457
A Deep Learning Approach Combining CNN and Bi-LSTM with SVM Classifier for Arabic Sentiment Analysis
Deep learning models have recently been proven successful in various natural language processing tasks, including sentiment analysis. Conventionally, a deep learning model's architecture includes a feature extraction layer followed by a fully connected layer used to train the model parameters and perform the classification task. In this paper, we employ a deep learning model with a modified architecture that combines a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (Bi-LSTM) network for feature extraction, with a Support Vector Machine (SVM) for Arabic sentiment classification. In particular, we use a linear SVM classifier that utilizes the embedded vectors obtained from the CNN and Bi-LSTM for polarity classification of Arabic reviews. The proposed method was tested on three publicly available datasets. The results show that the method outperforms the two baseline algorithms, CNN and SVM, on all datasets.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 78, 36, 3 ]
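The two-stage pipeline described above, deep feature extraction followed by a linear SVM, can be sketched with PyTorch and scikit-learn as below; the feature extractor here is an untrained stand-in used only to show the plumbing, not the paper's trained network.

```python
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

class CnnBiLstmFeatures(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=100, lstm_hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(n_filters, lstm_hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        x = torch.relu(self.conv(self.emb(token_ids).transpose(1, 2)))
        h, _ = self.bilstm(x.transpose(1, 2))
        return h.mean(dim=1)                      # (B, 2 * lstm_hidden) embedded vectors

def train_svm(feature_model, token_ids, labels):
    # Extract fixed features with the deep model, then fit a linear SVM on them.
    with torch.no_grad():
        feats = feature_model(token_ids).numpy()
    svm = LinearSVC()
    svm.fit(feats, labels)
    return svm
```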