Dataset fields: id (string, 20–52 chars), title (string, 3–459 chars), abstract (string, 0–12.3k chars), classification_labels (list), numerical_classification_labels (list).
https://aclanthology.org//D19-5323/
A Constituency Parsing Tree based Method for Relation Extraction from Abstracts of Scholarly Publications
We present a simple, rule-based method for extracting entity networks from the abstracts of scientific literature. By taking advantage of selected syntactic features of constituency parse trees, our method automatically extracts and constructs graphs in which nodes represent text-based entities (in this case, noun phrases) and edges represent their relationships (in this case, verb phrases or prepositional phrases). We use two benchmark datasets for evaluation and compare with previously presented results for these data. Our evaluation results show that the proposed method achieves accuracy rates that are comparable to, and in several cases exceed, those of state-of-the-art learning-based methods.
[ "Knowledge Representation", "Relation Extraction", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 18, 75, 72, 3 ]
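Illustrative aside (not the authors' code): a minimal sketch of the kind of rule the abstract above describes, extracting (noun phrase, verb, noun phrase) triples from a bracketed constituency parse with nltk's Tree class. The parse string and the simple S -> NP VP pattern are simplifying assumptions.

```python
# Minimal sketch: mine (NP, verb, NP) triples from a constituency parse.
# The bracketed parse below is a hypothetical example.
from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (DT the) (NN method)) "
    "(VP (VBZ extracts) (NP (NN entity) (NNS networks))))"
)

def extract_triples(tree):
    """Yield (subject NP, verb, object NP) from simple S -> NP VP clauses."""
    for clause in tree.subtrees(lambda t: t.label() == "S"):
        nps = [c for c in clause if c.label() == "NP"]
        vps = [c for c in clause if c.label() == "VP"]
        if nps and vps:
            vp = vps[0]
            verb = " ".join(" ".join(c.leaves())
                            for c in vp if c.label().startswith("VB"))
            for obj in (c for c in vp if c.label() == "NP"):
                yield (" ".join(nps[0].leaves()), verb, " ".join(obj.leaves()))

print(list(extract_triples(parse)))
# [('the method', 'extracts', 'entity networks')]
```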
SCOPUS_ID:0027643622
A Constrained Approach to Multifont Chinese Character Recognition
Recognizing multifont, multiple-size Chinese characters is a difficult task in the area of optical character recognition (OCR). In this correspondence, we introduce the constraint graph as a general character representation framework. Each character class is described by a constraint graph model. Sampling points on a character skeleton are taken as nodes in the graph; connection constraints and position constraints are taken as arcs. For patterns of the same character class, this model captures both the topological invariance and the geometrical invariance in a general and uniform way. Character recognition is then formulated as a constraint-based optimization problem, and a cooperative relaxation matching algorithm that solves this optimization problem is developed. A practical OCR system able to recognize multifont, multiple-size Chinese characters with satisfactory performance was implemented.
[ "Visual Data in NLP", "Structured Data in NLP", "Multimodality" ]
[ 20, 50, 74 ]
http://arxiv.org/abs/1704.02312v1
A Constrained Sequence-to-Sequence Neural Model for Sentence Simplification
Sentence simplification reduces semantic complexity to benefit people with language impairments. Previous simplification studies at the sentence level and word level have achieved promising results but still face great challenges. For sentence-level studies, sentences after simplification are fluent but sometimes not really simplified. For word-level studies, words are simplified but may introduce grammatical errors, because a word's usage can differ before and after simplification. In this paper, we propose a two-step simplification framework that combines word-level and sentence-level simplification, making use of their respective advantages. Based on this two-step framework, we implement a novel constrained neural generation model to simplify sentences given simplified words. Final results on aligned Wikipedia and Simple Wikipedia datasets indicate that our method yields better performance than various baselines.
[ "Language Models", "Paraphrasing", "Semantic Text Processing", "Text Generation" ]
[ 52, 32, 72, 47 ]
https://aclanthology.org//W02-2118/
A Constraint-Based Approach for Cooperative Information-Seeking Dialogue
[ "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 11, 47, 38 ]
SCOPUS_ID:85129674017
A Construction Method of Electric Power Professional Domain Corpus Based on Multi-model Collaboration
This paper proposes a method for constructing a corpus in the electric power field based on multi-method collaboration. To avoid the drawback of the Jieba word segmentation method, whose overly fine-grained output can split words incorrectly, the TF-IDF method is used to extract keywords from the Jieba segmentation results. An improved information-entropy word combination method and the TextRank method are then applied to the Jieba output to combine associated words into new phrases. Because the information-entropy word segmentation method applies strict phrase-forming rules, which can reduce the number of words formed, the segmentation results of all the above methods are pooled to establish a relatively complete set of candidate words. An improved word2vec clustering algorithm is then presented to cluster electric power professional words and remove non-electric-power words. Through this multi-method collaborative algorithm, a more comprehensive electric power professional corpus is finally established. Compared with the Jieba word segmentation method, the information entropy word combination algorithm (IEWCA), and the information entropy word segmentation algorithm (IEWSA), the experimental results show that the electric power professional corpus constructed by the presented method is more accurate and has a richer vocabulary.
[ "Information Extraction & Text Mining", "Text Segmentation", "Syntactic Text Processing", "Text Clustering" ]
[ 3, 21, 15, 29 ]
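Illustrative aside (not the authors' code): a minimal sketch of the first two steps the abstract describes, Jieba segmentation followed by TF-IDF keyword extraction, using the real jieba package; the sample sentence is hypothetical.

```python
# Minimal sketch: segment with Jieba, then keep TF-IDF-ranked keywords.
import jieba
import jieba.analyse

text = "电力系统继电保护装置的整定计算与故障分析"  # hypothetical power-domain sentence

tokens = jieba.lcut(text)  # plain Jieba segmentation (fine-grained)
keywords = jieba.analyse.extract_tags(text, topK=5, withWeight=True)  # TF-IDF

print(tokens)
for word, weight in keywords:
    print(word, round(weight, 3))
```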
SCOPUS_ID:84958087550
A Constructionist Approach to Category Change: Constraining Factors in the Adjectivization of Participles
It is generally accepted in usage-based theories of language that language change is facilitated by contexts of use that allow for semantically and/or structurally ambiguous readings of a construction. While this facilitative effect is well attested in the literature, much less attention has been paid to factors that constrain language change. Furthermore, the idea that ambiguities may also promote stability and discourage change has not been explored in detail. In this article I discuss the development of three participle constructions that have gained increasingly adjective-like uses in recent history: ADJ-looking (e.g., modest-looking), N-Ving denoting a change in psychological state (e.g., awe-inspiring), and the adjectival -ed participle headed by a psych-verb (e.g., surprised). I argue that the development of these constructions has been significantly constrained by unresolved ambiguities as well as source structures that continue to support the earlier, verbal categorization instead of an adjectival one. As a consequence, the participles have acquired adjectival uses gradually and often at a slow rate. The data are analyzed from a constructionist perspective, where constructions are connected in a network, and categories like “verb” or “adjective” are regarded as emergent schemas that arise from actual patterns of use.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85057762007
A Constructive Machine Translation System for English to Odia Translation
India is a country of many languages: diverse regional languages are spoken in its different states, yet not all Indians are multilingual. There are 10 prominent scripts and 18 officially recognized languages. Many Indians, especially people in remote villages, can neither read nor write and do not understand English, so an effective linguistic interpreter is needed. Machine translation systems, which convert text from one language to another, can help inform the Indian public without a linguistic barrier. Odia is spoken by most people in Odisha and its outskirts, and English is the official language of India, so we propose a machine translation system from English to Odia. This paper also describes several machine translation methodologies.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85067485841
A Constructive Model for Sentiment Analysis of Speech using Deep Learning
Sentiment analysis is an engrossing and intriguing area of research because of its extensive significance in different domains. Gathering people's opinions about products, social and political events, and problems through the Web is becoming progressively popular. The views of users are helpful to the public and to stakeholders when making certain decisions. Automatic sentiment analysis of natural audio streams containing spontaneous speech is an ambitious area of research that has received little attention. Accordingly, efficient abstraction methods are required for mining and summarizing audio-derived text from corpora, which requires knowledge of the sentiment-bearing words in speech. Many computational techniques, models, and algorithms exist for mining opinion components from unstructured text. In this study, we use a lexicon-based approach, an MNB classifier, and a deep learning approach for automatic recognition of sentiment from natural speech and compare their results.
[ "Multimodality", "Speech & Audio in NLP", "Sentiment Analysis" ]
[ 74, 70, 78 ]
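Illustrative aside: a minimal sketch, with toy transcripts and hypothetical labels, of the Multinomial Naive Bayes classifier the abstract compares against the deep learning approach.

```python
# Minimal sketch: MNB sentiment classifier over speech transcripts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

transcripts = ["the product is great", "terrible service and very slow",
               "i love this phone", "worst purchase ever"]
labels = ["pos", "neg", "pos", "neg"]  # hypothetical sentiment labels

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(transcripts, labels)
print(clf.predict(["great phone, great service"]))  # e.g. ['pos']
```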
SCOPUS_ID:85017128653
A Constructive Solution to the Ranking Problem in Partial Order Optimality Theory
Partial order optimality theory (PoOT) (Anttila and Cho in Lingua 104:31–56, 1998) is a conservative generalization of classical optimality theory (COT) (Prince and Smolensky in Optimality theory: constraint interaction in generative grammar, Blackwell Publishers, Malden, 1993/2004) that makes possible the modeling of free variation and quantitative regularities without any numerical parameters. Solving the ranking problem for PoOT has so far remained an outstanding problem: allowing for free variation, given a finite set of input/output pairs (a dataset Δ) that a speaker S knows to be part of some language L, how can S learn the set of all grammars G under some constraint set C compatible with Δ? Here, allowing for free variation, given the set of all PoOT grammars G_PoOT over a constraint set C, for an arbitrary Δ, I provide set-theoretic means for constructing the actual set G compatible with Δ. Specifically, I determine the set of all strict orders of C that are compatible with Δ. As every strict total order is a strict order, the solution is applicable in both PoOT and COT, showing that the ranking problem in COT is a special instance of a more general one in PoOT.
[ "Linguistics & Cognitive NLP", "Phonology", "Syntactic Text Processing", "Linguistic Theories" ]
[ 48, 6, 15, 57 ]
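Illustrative aside (toy constraints and violation profiles, not from the paper): in the classical-OT special case mentioned at the end of the abstract, the ranking problem reduces to enumerating the strict total orders of a constraint set under which each attested winner beats its competitors, as sketched below.

```python
# Minimal sketch: find all total constraint rankings compatible with Δ.
from itertools import permutations

C = ["Onset", "NoCoda", "Max"]
# Dataset Δ: each item maps candidates to violation vectors over C;
# the first-listed candidate is the attested winner (toy values).
data = [
    {"ta":  [0, 0, 1],    # winner
     "tat": [0, 1, 0]},   # losing competitor
]

def beats(w, l, order):
    """w beats l if, at the highest-ranked constraint where they differ,
    w incurs fewer violations."""
    for c in order:
        i = C.index(c)
        if w[i] != l[i]:
            return w[i] < l[i]
    return False  # identical profiles: no strict win

compatible = []
for order in permutations(C):
    ok = all(beats(list(cands.values())[0], loser, order)
             for cands in data for loser in list(cands.values())[1:])
    if ok:
        compatible.append(order)

print(compatible)  # every ranking with NoCoda above Max survives
```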
https://aclanthology.org//W03-2714/
A Constructive View of Discourse Operators
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85107783737
A Contemporary Ensemble Aspect-based Opinion Mining Approach for Twitter Data
Aspect-based opinion mining is a thought-provoking research field that focuses on extracting salient aspects from opinionated texts together with the polarity values associated with them. The principal aim is to identify user sentiment about specific features of a product or service rather than overall polarity. This fine-grained polarity identification over the myriad aspects of an entity is highly beneficial for individuals and business organizations. Extracting these implicit or explicit aspects can be very challenging, and this paper elaborates numerous aspect extraction techniques, which are decisive for aspect-based sentiment analysis. This paper presents a novel idea of combining several approaches, namely part-of-speech tagging, dependency parsing, word embeddings, and deep learning, to enrich aspect-based sentiment analysis specially designed for Twitter data. The results show that combining deep learning with traditional techniques can produce better results than lexicon-based methods.
[ "Opinion Mining", "Syntactic Text Processing", "Syntactic Parsing", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Tagging", "Information Extraction & Text Mining" ]
[ 49, 15, 28, 23, 78, 63, 3 ]
SCOPUS_ID:85076522704
A Content Analysis System That Supports Sentiment Analysis for Subjectivity and Polarity Detection in Online Courses
Given the current and increasing relevance of research aimed at optimizing teaching and learning experiences in online education, a plethora of studies on the application of different technologies to this purpose have been developed. Specifically, Natural Language Processing (NLP) has been used to detect potential sentiments and opinions in texts, enabling a broader scope for making inferences. At Universidad Autónoma of Madrid, Spain, we have designed and developed a tool that applies NLP techniques to analyse the contents of online courses and the contributions of their learners (video transcriptions, readings, questions and answers of the evaluation activities, and learners' posts in forums, among others) in order to improve the teaching material and the teaching-learning processes of these courses. This tool is called edX-CAS ('Content Analyser System for edX MOOCs'). In this paper, we provide a detailed description of the tool, its functionalities, and its NLP processes that support sentiment analysis for subjectivity and polarity detection. Moreover, we present a review of current research on applying NLP to improve teaching and learning experiences in MOOCs.
[ "Polarity Analysis", "Sentiment Analysis" ]
[ 33, 78 ]
SCOPUS_ID:85073888825
A Content-Aware Image Retargeting Quality Assessment Method Using Foreground and Global Measurement
Image retargeting methods aim to minimize the perceptual loss while changing the sizes and aspect ratios of images. Since optimal retargeting methods for different images are generally not the same, image retargeting quality assessment (IRQA) becomes a meaningful task. This paper proposes a content-aware image retargeting quality assessment method using foreground and global measurement to achieve better performance. In our proposed method, images are first divided into two categories according to the foreground object detection result, and different corresponding measurements are designed for each. For images with an obvious foreground object, both foreground and global measurement are applied; for others, only global measurement is conducted. Foreground measurement includes two complementary features: a high-level semantic similarity feature and a low-level size ratio feature. Global measurement includes another two features: an improved aspect ratio similarity (ARS) feature and an edge group similarity (EGS) feature. Two public databases, RetargetMe and CUHK, have been evaluated, and experimental results demonstrate that our method is quite effective and provides state-of-the-art performance in IRQA. Our code is available at https://github.com/SCUT-ML-GUO/IRQA.
[ "Visual Data in NLP", "Semantic Text Processing", "Semantic Similarity", "Multimodality" ]
[ 20, 72, 53, 74 ]
SCOPUS_ID:85084802992
A Content-Based Multi Label Classification Model to Suggest Tags for Posts in Stack Overflow
This paper presents a prediction model that automatically suggests tags when developers make a post on the Stack Overflow social network. The proposed model fuses the content of a post (text and code snippets) to build a multimodal representation that is used to find the most suitable words to use as tags. The model was evaluated on a set of 20,000 posts extracted from Stack Overflow. Results show that the text modality reaches higher performance than the code modality, while combining both modalities does not produce the best performance.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
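Illustrative aside (toy posts and tags): a minimal text-only sketch of content-based multi-label tag suggestion; the paper's multimodal fusion of text and code snippets is reduced here to a single TF-IDF channel with a one-vs-rest classifier.

```python
# Minimal sketch: suggest tags for a post via multi-label classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

posts = ["how to parse json in python", "segfault in c pointer arithmetic",
         "python list comprehension syntax error"]
tags = [["python", "json"], ["c", "pointers"], ["python"]]  # toy labels

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)  # binary indicator matrix, one column per tag

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(posts, Y)

pred = model.predict(["python json decode error"])
print(mlb.inverse_transform(pred))  # e.g. [('json', 'python')]
```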
SCOPUS_ID:85101689888
A Content-based intelligent label classification system for warehouse management
With the rapid development of the e-commerce business, smart warehouse management has become a critical part of the supply chain for maintaining optimized costs in stock logistics. Managing inventory can require substantial labor and expense for order fulfillment and label classification. This paper proposes a warehouse management system that provides a real-time categorizing service for labels based on a fastText-based text classification algorithm. In this system, a label classifier is presented to facilitate the process of classifying massive numbers of labels based on their content. In this classifier, a Huffman tree data structure is developed and a layered softmax structure is utilized to reduce the computational complexity. Experimental results show that the proposed method is capable of processing label classification tasks efficiently, improving management efficiency and the economic benefits of warehouse management.
[ "Information Extraction & Text Mining", "Green & Sustainable NLP", "Text Classification", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 3, 68, 36, 24, 4 ]
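Illustrative aside, assuming the fasttext package and a training file in fastText's `__label__<tag> <text>` format (the file name and contents are hypothetical): `loss="hs"` selects the hierarchical softmax, which, like the abstract's layered softmax over a Huffman tree, cuts the per-example cost of the output layer.

```python
# Minimal sketch: fastText label classifier with hierarchical softmax.
import fasttext

model = fasttext.train_supervised(
    input="warehouse_labels.train",  # e.g. "__label__electronics usb cable 2m"
    loss="hs",                        # hierarchical (Huffman-tree) softmax
    epoch=10,
    wordNgrams=2,
)
labels, probs = model.predict("wireless mouse with usb receiver", k=3)
print(labels, probs)
```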
http://arxiv.org/abs/2012.13339v1
A Context Aware Approach for Generating Natural Language Attacks
We study an important task of attacking natural language processing models in a black box setting. We propose an attack strategy that crafts semantically similar adversarial examples on text classification and entailment tasks. Our proposed attack finds candidate words by considering the information of both the original word and its surrounding context. It jointly leverages masked language modelling and next sentence prediction for context understanding. In comparison to attacks proposed in prior literature, we are able to generate high quality adversarial examples that do significantly better both in terms of success rate and word perturbation percentage.
[ "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 58, 4 ]
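Illustrative aside (not the paper's implementation): the first ingredient the abstract describes, using masked language modelling to propose context-aware candidate words, can be sketched with the Hugging Face fill-mask pipeline; the model choice is an assumption.

```python
# Minimal sketch: context-aware substitution candidates via a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "the movie was [MASK] and i would watch it again"
for cand in fill(sentence, top_k=5):
    print(cand["token_str"], round(cand["score"], 3))
```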
SCOPUS_ID:85063277153
A Context Integrated Model for Multi-label Emotion Detection
This paper explores the impact of taking into account the environment within which a tweet is made on the task of analyzing the sentiment orientations of tweets produced by people in the same community. The paper proposes C-GRU (Context-aware Gated Recurrent Units), which extracts contextual information (topics) from tweets and uses it as an extra layer to determine the sentiments conveyed by a tweet. The proposed architecture learns direct correlations between such information and the task's prediction. The multi-modal model combines the outputs learnt from topics and sentences by learning the contribution weights of the two sub-models to the predictions. An evaluation of the proposed model on the SemEval-2018 dataset for Arabic multi-label emotion classification demonstrates that it outperforms the highest reported performer on the same dataset.
[ "Emotion Analysis", "Sentiment Analysis" ]
[ 61, 78 ]
SCOPUS_ID:85078266100
A Context based Coverage Model for Abstractive Document Summarization
Automatic abstractive summarization is a natural language processing task that generates a sequence of words representing the important information of an input document. The sequence-to-sequence model, widely used in abstractive summarization, suffers from a repetition problem in which the same sub-pattern is generated repeatedly during decoding. To solve this problem, various coverage models have been proposed in machine translation. In automatic summarization, unlike machine translation, the lengths of the input and summary documents are very different, because the summary is a compressed form of the important meaning of the input. Due to this nature of automatic summarization, it is difficult to directly apply a word-position-based coverage model from machine translation. We therefore propose a context-based coverage model that computes coverage over the compressed meaning of the input document. The context-based coverage is defined as the accumulated weighted average of the encoded word meanings, weighted by the attention scores; it considers the meaning of words rather than their positions in the input document. In experiments on the CNN/DailyMail dataset, the proposed model shows better performance than previous work.
[ "Machine Translation", "Information Extraction & Text Mining", "Summarization", "Text Generation", "Multilinguality" ]
[ 51, 3, 30, 47, 0 ]
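Illustrative aside: a hedged formalization, in our notation rather than the paper's, of "the accumulated weighted average of the encoded word meanings, weighted by the attention scores".

```latex
% h_i: encoder state of source word i; \alpha_{k,i}: attention weight on
% word i at decoding step k. Position-based coverage in MT accumulates
% \sum_{k<t} \alpha_{k,i} per position i; context-based coverage instead
% accumulates the attended meanings themselves:
c_t \;=\; \frac{1}{t-1} \sum_{k=1}^{t-1} \sum_{i=1}^{n} \alpha_{k,i}\, h_i
```

The design intuition is that `c_t` tracks which *content* has already been expressed, rather than which *positions* have been attended, which tolerates the large length mismatch between input and summary.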
SCOPUS_ID:85120572932
A Context-Aware Approach for Extracting Hard and Soft Skills
The continuous growth of the online recruitment industry has made the candidate screening process costly, labour-intensive, and time-consuming. Automating the screening process would expedite candidate selection. Recruiting is increasingly moving towards skill-based recruitment, where candidates are ranked according to the number of skills, skill competence level, and skill experience. Therefore, it is important to create a system that can accurately and automatically extract hard and soft skills from candidates' resumes and job descriptions. The task is less complex for hard skills, which in some cases are named entities, but much more challenging for soft skills, which may appear in different linguistic forms depending on the context. In this paper, we propose context-aware sequence classification and token classification models for extracting both hard and soft skills. We utilized recent state-of-the-art word embedding representations as textual features for various machine learning classifiers. The models have been validated on a publicly available job description dataset. Our results indicate that the best performing sequence classification model used BERT embeddings, together with POS and DEP tags, as input to a logistic regression classifier, while the best performing token classification model used fine-tuned BERT embeddings with a support vector machine classifier.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 12, 36, 3 ]
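Illustrative aside (toy data; the POS and DEP features from the paper are omitted for brevity): a minimal sketch of the best-performing sequence classifier described, mean-pooled BERT embeddings fed to a logistic regression. The model name and labels are assumptions.

```python
# Minimal sketch: BERT sentence embeddings + logistic regression.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Mean-pooled last-layer BERT vectors, one per sentence."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch).last_hidden_state
    return out.mean(dim=1).numpy()

sents = ["strong communication and teamwork skills",
         "maintained linux servers and wrote python scripts"]
y = ["soft", "hard"]  # hypothetical skill-type labels

clf = LogisticRegression(max_iter=1000).fit(embed(sents), y)
print(clf.predict(embed(["led a team of five engineers"])))
```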
http://arxiv.org/abs/2208.08029v1
A Context-Aware Approach for Textual Adversarial Attack through Probability Difference Guided Beam Search
Textual adversarial attacks expose the vulnerabilities of text classifiers and can be used to improve their robustness. Existing context-aware methods consider only the gold label probability and use greedy search when searching for an attack path, often limiting attack efficiency. To tackle these issues, we propose PDBS, a context-aware textual adversarial attack model using Probability Difference guided Beam Search. The probability difference is an overall consideration of all class label probabilities, and PDBS uses it to guide the selection of attack paths. In addition, PDBS uses beam search to find a successful attack path, thus avoiding the limitations of a restricted search space. Extensive experiments and human evaluation demonstrate that PDBS outperforms previous best models on a series of evaluation metrics, in particular yielding gains of up to +19.5% in attack success rate. Ablation studies and qualitative analyses further confirm the efficiency of PDBS.
[ "Language Models", "Semantic Text Processing", "Green & Sustainable NLP", "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 52, 72, 68, 58, 4 ]
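Illustrative aside: a model-agnostic sketch (not the authors' code) of the two ideas the abstract names, scoring candidates by a probability difference rather than the gold probability alone, and keeping a beam of partial attack paths instead of a single greedy one. `classifier` and `substitutes` are stand-in callables supplied by the caller.

```python
# Minimal sketch: probability-difference guided beam search attack.
import heapq

def prob_difference(probs, gold):
    """Gold-class probability minus the best rival class probability;
    negative means the classifier's prediction has flipped."""
    rival = max(p for label, p in probs.items() if label != gold)
    return probs[gold] - rival

def pdbs(text, gold, classifier, substitutes, beam_width=3, max_steps=5):
    """classifier(text) -> {label: probability}; substitutes(text) yields
    one-word perturbations of text. Both are stand-ins in this sketch."""
    beam = [(prob_difference(classifier(text), gold), text)]
    for _ in range(max_steps):
        score, sent = beam[0]
        if score < 0:                 # prediction flipped: attack succeeded
            return sent
        candidates = [
            (prob_difference(classifier(v), gold), v)
            for _, s in beam for v in substitutes(s)
        ]
        if not candidates:
            break
        beam = heapq.nsmallest(beam_width, candidates)  # most promising first
    score, sent = beam[0]
    return sent if score < 0 else None  # None: no adversarial example found
```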
SCOPUS_ID:84894165279
A Context-Aware Approach to Entity Linking
Entity linking refers to the task of assigning mentions in documents to their corresponding knowledge base entities. Entity linking is a central step in knowledge base population. Current entity linking systems do not explicitly model the discourse context in which the communication occurs. Nevertheless, the notion of shared context is central to the linguistic theory of pragmatics and plays a crucial role in Grice's cooperative communication principle. Furthermore, modeling context facilitates joint resolution of entities, an important problem in entity linking yet to be addressed satisfactorily. This paper describes an approach to context-aware entity linking.
[ "Knowledge Representation", "Semantic Text Processing", "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 18, 72, 48, 57 ]
http://arxiv.org/abs/1903.06464v1
A Context-Aware Citation Recommendation Model with BERT and Graph Convolutional Networks
With the tremendous growth in the number of scientific papers being published, searching for references while writing a scientific paper is a time-consuming process. A technique that could suggest a reference citation at the appropriate place in a sentence would be beneficial. From this perspective, context-aware citation recommendation has been researched for around two decades. Many researchers have utilized the text data called the context sentence, which surrounds the citation tag, together with the metadata of the target paper, to find the appropriate cited research. However, the lack of well-organized benchmark datasets and of models that attain high performance has made this research difficult. In this paper, we propose a deep learning based model and a well-organized dataset for context-aware paper citation recommendation. Our model comprises a document encoder and a context encoder, built on a Graph Convolutional Network (GCN) layer and Bidirectional Encoder Representations from Transformers (BERT), a pre-trained model for textual data. By modifying the related PeerRead dataset, we propose a new dataset called FullTextPeerRead, containing context sentences for cited references along with paper metadata. To the best of our knowledge, this is the first well-organized dataset for context-aware paper recommendation. The results indicate that the proposed model with the proposed datasets can attain state-of-the-art performance, achieving more than a 28% improvement in mean average precision (MAP) and recall@k.
[ "Language Models", "Structured Data in NLP", "Semantic Text Processing", "Multimodality" ]
[ 52, 50, 72, 74 ]
http://arxiv.org/abs/2109.01267v1
A Context-Aware Hierarchical BERT Fusion Network for Multi-turn Dialog Act Detection
The success of interactive dialog systems is usually associated with the quality of the spoken language understanding (SLU) task, which mainly identifies the corresponding dialog acts and slot values in each turn. By treating utterances in isolation, most SLU systems overlook the semantic context in which a dialog act is expected. The act dependency between turns is non-trivial yet critical to identifying the correct semantic representations. Previous works with limited context awareness have exposed the inadequacy of dealing with the complexity of multi-pronged user intents, which are subject to spontaneous change during turn transitions. In this work, we propose to enhance SLU in multi-turn dialogs, employing a context-aware hierarchical BERT fusion network (CaBERT-SLU) to not only discern context information within a dialog but also jointly identify multiple dialog acts and slots in each utterance. Experimental results show that our approach reaches new state-of-the-art (SOTA) performance on two complicated multi-turn dialogue datasets, with considerable improvements over previous methods that only consider single utterances for multiple intent detection and slot filling.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
SCOPUS_ID:85121918155
A Context-Aware Model with Flow Mechanism for Conversational Question Answering
Conversational Question Answering (ConvQA) requires a deep understanding of the conversation history to answer the current question. However, most existing works ignore the sequential dependencies among history turns and treat all history indiscriminately. We propose a Flow-based Context-Aware Question Answering model to alleviate these problems. Specifically, we first use a hierarchical history selector to filter out irrelevant history turns according to word-level, utterance-level, and dialogue-level features. Then we introduce a FlowRNN to model the sequential dependencies between history turns along the dialogue direction. Finally, we incorporate these hidden dependencies into BERT for answer prediction. Experiments on QuAC, a large-scale conversational question answering dataset, show that our proposed method uses conversation history effectively and outperforms most recent ConvQA models.
[ "Natural Language Interfaces", "Question Answering", "Dialogue Systems & Conversational Agents" ]
[ 11, 27, 38 ]
SCOPUS_ID:85072871959
A Context-Aware Recommender Method Based on Text Mining
A recommender system is an information filtering technology that can be used to recommend items that may be of interest to users. In their traditional form, recommender systems do not consider information that might enrich the recommendation process, such as contextual information. Context-aware recommender systems, in contrast, consider contextual information when generating recommendations. Reviews can provide relevant information usable by recommender systems, including contextual information. Thus, in this paper, we propose a context-aware recommender method based on text mining (CARM-TM) that includes two context extraction techniques: (1) CIET.5, a technique based on word embeddings; and (2) RulesContext, a technique based on association rules. CARM-TM makes use of context by running the CAMF algorithm, a context-aware recommender system based on matrix factorization. To evaluate our method, we compare it against the MF algorithm, a non-contextual recommender system based on matrix factorization. The evaluation showed that our method presented better results than the MF algorithm in most cases.
[ "Representation Learning", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 12, 72, 3 ]
SCOPUS_ID:85030323430
A Context-Aware Recurrent Encoder for Neural Machine Translation
Neural machine translation (NMT) heavily relies on its encoder to capture the underlying meaning of a source sentence so as to generate a faithful translation. However, most NMT encoders are built upon either unidirectional or bidirectional recurrent neural networks, which either do not deal with future context or simply concatenate the history and future context to form context-dependent word representations, implicitly assuming the independence of the two types of contextual information. In this paper, we propose a novel context-aware recurrent encoder (CAEncoder), as an alternative to the widely-used bidirectional encoder, such that the future and history contexts can be fully incorporated into the learned source representations. Our CAEncoder involves a two-level hierarchy: The bottom level summarizes the history information, whereas the upper level assembles the summarized history and future context into source representations. Additionally, CAEncoder is as efficient as the bidirectional RNN encoder in terms of both training and decoding. Experiments on both Chinese-English and English-German translation tasks show that CAEncoder achieves significant improvements over the bidirectional RNN encoder on a widely-used NMT system.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
SCOPUS_ID:85091099882
A Context-Based Disambiguation Model for Sentiment Concepts Using a Bag-of-Concepts Approach
With the widespread dissemination of user-generated content on different web sites, social networks, and online consumer systems such as Amazon, the quantity of opinionated information available on the Internet has increased. Sentiment analysis of user-generated content is one of the main branches of cognitive computing and has attracted the attention of many scholars in recent years. One of the main tasks of sentiment analysis is to detect polarity within a text. Existing polarity detection methods mainly focus on keywords and their naïve frequency counts and pay less regard to the meanings and implicit dimensions of natural concepts. Although background knowledge plays a critical role in determining the polarity of concepts, it has been disregarded in polarity detection methods. This study presents a context-based model for disambiguating sentiment concepts using commonsense knowledge. First, a model is presented to generate a source of ambiguous sentiment concepts based on SenticNet by computing a probability distribution. The model then uses a bag-of-concepts approach to remove ambiguities, with semantic augmentation from ConceptNet to compensate for lost knowledge. ConceptNet is a large-scale semantic network with a large number of commonsense concepts. In this paper, the pointwise mutual information (PMI) measure is used to select the contextual concepts having strong relationships with ambiguous concepts. The polarity of the ambiguous concepts is then precisely detected using positive/negative contextual concepts and the relationships between concepts in the semantic knowledge base. The text representation scheme is semantically enriched using Numberbatch, a word embedding model based on concepts from the ConceptNet semantic network. The cosine similarity metric is used to measure similarity and select concepts from the ConceptNet network for semantic augmentation; pre-trained concept vectors facilitate more effective computation of semantic similarity among the concerned concepts. The proposed model is evaluated on a corpus of product reviews from SemEval. The experimental results reveal an accuracy of 82.07%, demonstrating the effectiveness of the proposed model.
[ "Semantic Text Processing", "Commonsense Reasoning", "Representation Learning", "Polarity Analysis", "Sentiment Analysis", "Reasoning" ]
[ 72, 62, 12, 33, 78, 8 ]
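Illustrative aside (toy concept counts, not the paper's data): a minimal sketch of the PMI measure the abstract uses to pick context concepts that associate strongly with an ambiguous concept.

```python
# Minimal sketch: rank context concepts for an ambiguous concept by PMI.
import math
from collections import Counter

docs = [{"small", "cozy"}, {"small", "cozy", "room"},
        {"small", "bad"}, {"battery", "bad"}]  # toy bags of concepts
N = len(docs)
count = Counter(c for d in docs for c in d)

def pmi(x, y):
    """log2 of observed co-occurrence over co-occurrence expected by chance."""
    joint = sum(1 for d in docs if x in d and y in d) / N
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((count[x] / N) * (count[y] / N)))

print(pmi("small", "cozy"))  # > 0: strongly associated context
print(pmi("small", "bad"))   # < 0: weaker than chance
```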
http://arxiv.org/abs/2104.01697v1
A Context-Dependent Gated Module for Incorporating Symbolic Semantics into Event Coreference Resolution
Event coreference resolution is an important research problem with many applications. Despite the recent remarkable success of pretrained language models, we argue that it is still highly beneficial to utilize symbolic features for the task. However, as the input for coreference resolution typically comes from upstream components in the information extraction pipeline, the automatically extracted symbolic features can be noisy and contain errors. Also, depending on the specific context, some features can be more informative than others. Motivated by these observations, we propose a novel context-dependent gated module to adaptively control the information flows from the input symbolic features. Combined with a simple noisy training method, our best models achieve state-of-the-art results on two datasets: ACE 2005 and KBP 2016.
[ "Coreference Resolution", "Information Extraction & Text Mining" ]
[ 13, 3 ]
SCOPUS_ID:85124631360
A Context-Fusion Method for Entity Extraction Based on Residual Gated Convolution Neural Network
Due to the size of the convolutional receptive field, the current word has limited relevance to its context, which causes the semantics of entity words in the whole sentence to be under-considered. The Residual Gated Convolution Neural Network (RGCNN) uses dilated convolutions and residual gated linear units to consider the associations between words across different dimensions simultaneously, adjusting the amount of information flowing to the next layer of neurons; in this way, the vanishing gradient problem can be alleviated in cross-layer propagation. At the same time, RGCNN uses an attention mechanism to compute the semantics between words in the last layer. Results on benchmark datasets show that RGCNN has a competitive advantage in speed and accuracy, reflecting the superiority and robustness of the algorithm.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:84929505730
A Context-Sensitive Image Annotation Recommendation Engine for Radiology
In the typical radiology reading workflow, a radiologist would go through an imaging study and annotate specific regions of interest. The radiologist has the option to select a suitable description (e.g., 'calcification') from a list of predefined descriptions, or input the description directly as free-text. However, this process is time-consuming and the descriptions are not standardized over time, even for the same patient or the same general finding. In this paper, we describe an approach that presents finding descriptions based on textual information extracted from a patient's prior reports. Using 133 finding descriptions obtained in routine oncology workflow, we demonstrate how the system can be used to reduce keystrokes by up to 86% in about 38% of the instances. We have integrated our solution into a PACS and discuss how the system can be used in a clinical setting to improve the image annotation workflow efficiency and promote standardization of finding descriptions.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/1612.07411v2
A Context-aware Attention Network for Interactive Question Answering
Neural network based sequence-to-sequence models in an encoder-decoder framework have been successfully applied to solve Question Answering (QA) problems, predicting answers from statements and questions. However, almost all previous models have failed to consider detailed context information and unknown states under which systems do not have enough information to answer given questions. These scenarios with incomplete or ambiguous information are very common in the setting of Interactive Question Answering (IQA). To address this challenge, we develop a novel model, employing context-dependent word-level attention for more accurate statement representations and question-guided sentence-level attention for better context modeling. We also generate unique IQA datasets to test our model, which will be made publicly available. Employing these attention mechanisms, our model accurately understands when it can output an answer or when it requires generating a supplementary question for additional input depending on different contexts. When available, user's feedback is encoded and directly applied to update sentence-level attention to infer an answer. Extensive experiments on QA and IQA datasets quantitatively demonstrate the effectiveness of our model with significant improvement over state-of-the-art conventional QA models.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
https://aclanthology.org//W18-5020/
A Context-aware Convolutional Natural Language Generation model for Dialogue Systems
Natural language generation (NLG) is an important component in spoken dialog systems (SDSs). A model for NLG involves sequence-to-sequence learning. State-of-the-art NLG models are built using recurrent neural network (RNN) based sequence-to-sequence models (Ondřej Dušek and Filip Jurčíček, 2016a). Convolutional sequence-to-sequence models have been used in machine translation, but their application as natural language generators in dialogue systems is still unexplored. In this work, we propose a novel approach to NLG using convolutional neural network (CNN) based sequence-to-sequence learning. The CNN-based approach allows building a hierarchical model that encapsulates dependencies between words via shorter paths than RNNs. In contrast to recurrent models, the convolutional approach allows efficient utilization of computational resources by parallelizing computations over all elements, and eases the learning process by applying a constant number of nonlinearities. We also propose a CNN-based reranker for obtaining responses that semantically correspond to the input dialogue acts. The proposed model is capable of entrainment. Studies using a standard dataset show the effectiveness of the proposed CNN-based approach to NLG.
[ "Language Models", "Semantic Text Processing", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 11, 47, 38 ]
http://arxiv.org/abs/1608.07076v1
A Context-aware Natural Language Generator for Dialogue Systems
We present a novel natural language generation system for spoken dialogue systems capable of entraining (adapting) to users' way of speaking, providing contextually appropriate responses. The generator is based on recurrent neural networks and the sequence-to-sequence approach. It is fully trainable from data which include preceding context along with responses to be generated. We show that the context-aware generator yields significant improvements over the baseline in both automatic metrics and a human pairwise preference test.
[ "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 11, 47, 38 ]
https://aclanthology.org//W05-1607/
A Context-dependent Algorithm for Generating Locative Expressions in Physically Situated Environments
[ "Text Generation" ]
[ 47 ]
https://aclanthology.org//2000.iwpt-1.15/
A Context-free Approximation of Head-driven Phrase Structure Grammar
We present a context-free approximation of unification-based grammars, such as HPSG or PATR-II. The theoretical underpinning is established through a least fixpoint construction over a certain monotonic function. In order to reach a finite fixpoint, the concrete implementation can be parameterized in several ways: by specifying a finite iteration depth, by using different restrictors, or by making the symbols of the CFG more complex through annotations à la GPSG. We also present several methods that speed up the approximation process and help to limit the size of the resulting CF grammar.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
SCOPUS_ID:85145882963
A Context-free Arabic Emoji Sentiment Lexicon (CF-Arab-ESL)
Emoji can be valuable features in textual sentiment analysis. One of the key elements of the use of emoji in sentiment analysis is the emoji sentiment lexicon. However, constructing such a lexicon is a challenging task. This is because interpreting the sentiment conveyed by these pictographic symbols is highly subjective, and differs depending upon how each person perceives them. Cultural background is considered to be one of the main factors that affects emoji sentiment interpretation. Thus, we focus in this work on targeting people from Arab cultures. This is done by constructing a context-free Arabic emoji sentiment lexicon annotated by native Arabic speakers from seven different regions (Gulf, Egypt, Levant, Sudan, North Africa, Iraq, and Yemen) to see how these Arabic users label the sentiment of these symbols without a textual context. We recruited 53 annotators (males and females) to annotate 1,069 unique emoji. Then we evaluated the reliability of the annotation for each participant by applying sensitivity (Recall) and consistency (Krippendorff’s Alpha) tests. For the analysis, we investigated the resulting emoji sentiment annotations to explore the impact of the Arabic cultural context. We analyzed this cultural reflection from different perspectives, including national affiliation, use of colour indications, animal indications, weather indications and religious impact.
[ "Visual Data in NLP", "Multimodality", "Sentiment Analysis" ]
[ 20, 74, 78 ]
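Illustrative aside, assuming the `krippendorff` PyPI package and toy ratings: a minimal sketch of the consistency check the abstract reports (Krippendorff's alpha over annotators' sentiment codes, with missing ratings allowed).

```python
# Minimal sketch: inter-annotator consistency via Krippendorff's alpha.
import numpy as np
import krippendorff

# Rows = annotators, columns = emoji; values are toy sentiment codes
# (1 positive, 0 neutral, -1 negative); np.nan marks a missing rating.
reliability_data = [
    [1, 1, 0, -1, np.nan],
    [1, 1, 0, -1, -1],
    [1, 0, 0, -1, -1],
]
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(round(alpha, 3))
```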
SCOPUS_ID:85107384044
A Contextual Model for Information Extraction in Resume Analytics Using NLP’s Spacy
Unstructured documents like resumes come in different file formats (pdf, txt, doc, etc.), and there is much ambiguity and variability in the language used in a resume. Such heterogeneity makes extracting useful information a challenging task and creates an urgent need to understand the context in which words occur. This article proposes a machine learning approach to phrase matching in resumes, focusing on the extraction of special skills using spaCy, an advanced natural language processing (NLP) library. It can analyze and extract detailed information from resumes like a human recruiter. It counts matched phrases while parsing in order to categorize candidates by their expertise. The decision-making process can be accelerated through data visualization using matplotlib, and relative comparison of candidates can be made to filter them.
[ "Information Extraction & Text Mining" ]
[ 3 ]
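Illustrative aside (toy skill list and resume text; the spaCy model name is an assumption): a minimal sketch of the phrase matching and counting the abstract describes, using spaCy's real PhraseMatcher.

```python
# Minimal sketch: count skill phrases in a resume with spaCy.
from collections import Counter

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
skills = ["machine learning", "data visualization", "python"]  # toy skill list

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive matching
matcher.add("SKILL", [nlp.make_doc(s) for s in skills])

doc = nlp("Experienced in Python and machine learning; built "
          "data visualization dashboards in Python.")
counts = Counter(doc[start:end].text.lower()
                 for _, start, end in matcher(doc))
print(counts)  # e.g. Counter({'python': 2, 'machine learning': 1, ...})
```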
SCOPUS_ID:85133742619
A Contextual Relationship Model for Deceptive Opinion Spam Detection
The promotion of e-commerce platforms has shifted many people's lifestyle from traditional to digital marketing, where business is conducted online and competition has reached high levels. These platforms have made purchasing easier while providing customers with advantages such as a wide range of high-quality products, low prices, buying at any time and, more importantly, information and reviews about products. Unfortunately, a plethora of companies mislead customers into buying their products, or demote competitors' products, by using deceptive opinion spam, which negatively impacts purchasers' decisions and behavior. Deceptive opinion spam is written deliberately to seem legitimate and authentic so as to misguide or delude customers' purchases. Consequently, detecting these opinions is, by their nature, a hard task for both humans and machines. Most studies are based on traditional machine learning and sparse feature engineering. However, these models do not capture the semantic aspect of reviews, which many researchers consider the key to detecting deceptive opinion spam. Besides, only a few studies consider contextual information by adopting neural networks, in contrast to the many traditional machine learning classifiers. These models face numerous shortcomings because their representations are obtained by mining each review considering only words, sentences, reviews, or a combination of them, and classifying reviews based on those representations alone. In fact, deceptive opinions are written by the same deceivers, belonging to the same companies, with similar aims to promote or demolish a product; in other words, deceptive opinion spams tend to be semantically coherent with each other. To the best of our knowledge, no model tries to obtain a representation based on the contextual relationships between opinions. This article proposes using a capsule neural network, bidirectional long short-term memory, an attention mechanism, and the paragraph vector distributed bag of words to detect deceptive opinion spam. Our model provides a powerful representation of opinions since it centers on preserving their contexts and the relationships between them. The results show that our model significantly outperforms existing state-of-the-art models.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:85139704017
A Contextual Theory of Language
Writing That Counts. In June 1989 the Canadian Wildlife Federation (CWF) met for their annual meeting in Halifax, Nova Scotia. On the last day of the meetings their program focused on innovative fisheries management, an important theme for an organisation concerned with promoting the sustained harvest of renewable resources such as fish and game. One of the Federation's retiring directors began the discussion with a short paper on 'Innovative Fisheries Management: International Whaling' (W.R. Martin, 1989, 1-4), the beginning sections of which are reproduced below (along with the headings scaffolding the remaining sections of the paper, which have not been reproduced). The paper is one of four presented, which were later published together as Innovative Fisheries Management Initiatives (Bielak, 1989c).
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85140918614
A Continual Relation Extraction Approach for Knowledge Graph Completeness
Representing unstructured data in a structured form is most significant for information system management to analyze and interpret it. To this end, unstructured data can be converted into knowledge graphs by leveraging an information extraction pipeline whose main tasks are named entity recognition and relation extraction. This thesis aims to develop a novel continual relation extraction method to identify relations (interconnections) between entities in a data stream coming from the real world. The domain-specific data of this thesis are corona news from German and Austrian newspapers.
[ "Semantic Text Processing", "Relation Extraction", "Structured Data in NLP", "Knowledge Representation", "Multimodality", "Information Extraction & Text Mining" ]
[ 72, 75, 50, 18, 74, 3 ]
https://aclanthology.org//2020.dmr-1.1/
A Continuation Semantics for Abstract Meaning Representation
Abstract Meaning Representation (AMR) is a simple, expressive semantic framework whose emphasis on predicate-argument structure is effective for many tasks. Nevertheless, AMR lacks a systematic treatment of projection phenomena, making its translation into logical form problematic. We present a translation function from AMR to first order logic using continuation semantics, which allows us to capture the semantic context of an expression in the form of an argument. This is a natural extension of AMR’s original design principles, allowing us to easily model basic projection phenomena such as quantification and negation as well as complex phenomena such as bound variables and donkey anaphora.
[ "Machine Translation", "Semantic Text Processing", "Representation Learning", "Knowledge Representation", "Text Generation", "Multilinguality" ]
[ 51, 72, 12, 18, 47, 0 ]
https://aclanthology.org//W19-6804/
A Continuous Improvement Framework of Machine Translation for Shipibo-Konibo
[ "Low-Resource NLP", "Machine Translation", "Text Generation", "Responsible & Trustworthy NLP", "Multilinguality" ]
[ 80, 51, 47, 4, 0 ]
http://arxiv.org/abs/2001.05315v1
A Continuous Space Neural Language Model for Bengali Language
Language models are generally employed to estimate the probability distribution of various linguistic units, making them one of the fundamental parts of natural language processing. Applications of language models include a wide spectrum of tasks such as text summarization, translation and classification. For a low resource language like Bengali, the research in this area so far can be considered to be narrow at the very least, with some traditional count based models being proposed. This paper attempts to address the issue and proposes a continuous-space neural language model, or more specifically an ASGD weight dropped LSTM language model, along with techniques to efficiently train it for Bengali Language. The performance analysis with some currently existing count based models illustrated in this paper also shows that the proposed architecture outperforms its counterparts by achieving an inference perplexity as low as 51.2 on the held out data set for Bengali.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85097643141
A Continuous Word Segmentation of Bengali Noisy Speech
The human voice is central to efficient, modern communication in the era of Alexa, Siri, and Google Assistant. Working with voice or speech becomes easier when unwanted entities are removed in preprocessing, since real speech data contain a lot of noise and speech is delivered continuously. Working with the Bangla language is likewise a way of enriching the scope of efficient communication in Bangla. This paper presents a method to reduce noise in speech data collected from random noisy places and to segment words from continuous Bangla speech. Noise is reduced by thresholding the fast Fourier transform (FFT) of the audio frequency signal, and each chunk of the audio signal is compared with a minimum dBFS value to separate silent from non-silent periods; the signal is then segmented at each silent period for word segmentation.
[ "Green & Sustainable NLP", "Speech & Audio in NLP", "Syntactic Text Processing", "Multimodality", "Text Segmentation", "Responsible & Trustworthy NLP" ]
[ 68, 70, 15, 74, 21, 4 ]
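Illustrative aside, assuming the pydub package and a hypothetical recording.wav: a minimal sketch of the dBFS-based silence segmentation step, where chunks quieter than a threshold relative to the clip's average loudness are treated as pauses between words.

```python
# Minimal sketch: split continuous speech into word chunks at silences.
from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_wav("recording.wav")  # hypothetical input file

words = split_on_silence(
    audio,
    min_silence_len=200,             # ms of quiet that counts as a pause
    silence_thresh=audio.dBFS - 14,  # threshold relative to average loudness
    keep_silence=50,                 # pad each chunk slightly (ms)
)
for i, chunk in enumerate(words):
    chunk.export(f"word_{i}.wav", format="wav")
```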
http://arxiv.org/abs/1708.00391v1
A Continuously Growing Dataset of Sentential Paraphrases
A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at ~70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available.
[ "Paraphrasing", "Text Generation" ]
[ 32, 47 ]
https://aclanthology.org//2022.blackboxnlp-1.36/
A Continuum of Generation Tasks for Investigating Length Bias and Degenerate Repetition
Language models suffer from various degenerate behaviors. These differ between tasks: machine translation (MT) exhibits length bias, while tasks like story generation exhibit excessive repetition. Recent work has attributed the difference to task constrainedness, but evidence for this claim has always involved many confounding variables. To study this question directly, we introduce a new experimental framework that allows us to smoothly vary task constrainedness, from MT at one end to fully open-ended generation at the other, while keeping all other aspects fixed. We find that: (1) repetition decreases smoothly with constrainedness, explaining the difference in repetition across tasks; (2) length bias surprisingly also decreases with constrainedness, suggesting some other cause for the difference in length bias; (3) across the board, these problems affect the mode, not the whole distribution; (4) the differences cannot be attributed to a change in the entropy of the distribution, since another method of changing the entropy, label smoothing, does not produce the same effect.
[ "Machine Translation", "Explainability & Interpretability in NLP", "Text Generation", "Responsible & Trustworthy NLP", "Multilinguality" ]
[ 51, 81, 47, 4, 0 ]
http://arxiv.org/abs/2204.07832v2
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task, which focuses on detecting the sentiment polarity towards the aspect in a sentence. However, it is always sensitive to the multi-aspect challenge, where features of multiple aspects in a sentence will affect each other. To mitigate this issue, we design a novel training framework, called Contrastive Cross-Channel Data Augmentation (C3DA), which leverages an in-domain generator to construct more multi-aspect samples and then boosts the robustness of ABSA models via contrastive learning on these generated data. In practice, given a generative pretrained language model and some limited ABSA labeled data, we first employ some parameter-efficient approaches to perform the in-domain fine-tuning. Then, the obtained in-domain generator is used to generate synthetic sentences from two channels, i.e., the Aspect Augmentation Channel and the Polarity Augmentation Channel, which generate sentences conditioned on a given aspect and polarity, respectively. Specifically, our C3DA performs the sentence generation in a cross-channel manner to obtain more sentences, and proposes an Entropy-Minimization Filter to filter low-quality generated samples. Extensive experiments show that our C3DA can outperform those baselines without any augmentations by about 1% on accuracy and Macro-F1. Code and data are released at https://github.com/wangbing1416/C3DA.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP", "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 80, 4, 23, 78 ]
http://arxiv.org/abs/2202.06417v3
A Contrastive Framework for Neural Text Generation
Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g. beam search) of neural language models often lead to degenerate solutions -- the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify training objectives to decrease probabilities of certain tokens (e.g., unlikelihood training). However, they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method -- contrastive search -- to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks from two languages demonstrate that our proposed approach significantly outperforms current state-of-the-art text generation methods as evaluated by both human and automatic metrics.
[ "Text Generation" ]
[ 47 ]
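The contrastive search decoding rule described above admits a compact sketch: candidates are ranked by model confidence minus a degeneration penalty. The following is a minimal, hedged PyTorch rendering of one decoding step; the inputs (`probs`, `cand_hidden`, `ctx_hidden`) stand in for quantities a real language model would supply, and `k` and `alpha` are illustrative values.

```python
# A minimal sketch of the contrastive search scoring rule: select the
# candidate token maximizing (1 - alpha) * model confidence minus
# alpha * max cosine similarity to the representations of tokens
# generated so far (the degeneration penalty).
import torch
import torch.nn.functional as F

def contrastive_step(probs, cand_hidden, ctx_hidden, k=5, alpha=0.6):
    """probs: (vocab,) next-token distribution;
    cand_hidden: (vocab, d) representation of each candidate continuation;
    ctx_hidden: (t, d) representations of tokens generated so far."""
    top_p, top_ids = probs.topk(k)                    # model confidence
    cand = F.normalize(cand_hidden[top_ids], dim=-1)  # (k, d)
    ctx = F.normalize(ctx_hidden, dim=-1)             # (t, d)
    penalty = (cand @ ctx.T).max(dim=-1).values       # max cosine similarity
    score = (1 - alpha) * top_p - alpha * penalty
    return top_ids[score.argmax()]
```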
SCOPUS_ID:85148363403
A Contrastive Learning Framework with Tree-LSTMs for Aspect-Based Sentiment Analysis
Different from sentence-level sentiment analysis, aspect-based sentiment analysis (ABSA) is a fine-grained task that aims to identify the sentiment polarity towards specific aspect terms in a sentence. However, the lack of fine-grained labeled data and the fact that a sentence may contain multiple aspects or complex implicit sentiment relations mean that ABSA still faces challenges. Specifically, the key concerns of this paper are to effectively exploit syntactic dependencies to construct contextual information in a sentence that captures implicit sentiment polarities, and to construct a data augmentation paradigm that obtains fine-grained aspect-specific information. To mitigate the above issues, we propose a Contrastive Learning Framework with Tree-Structured LSTM (CLF-TrLSTM), which applies a concatenated form of Tree-LSTMs and self-attention with a window mechanism, utilizing the dependency tree to capture syntactic and contextual information of the sentence. Meanwhile, to alleviate the data scarcity problem, we use a mask generation operation and contrastive learning to generate in-domain, high-quality positive and negative samples, then encourage anchor sentences and positive samples to be more similar than negative pairs, achieving alignment at different granularities. Finally, experimental results on three public datasets demonstrate that our proposed framework achieves state-of-the-art performance, and comprehensive analysis verifies the effectiveness of each component.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning", "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 52, 72, 15, 12, 23, 78 ]
SCOPUS_ID:85127254876
A Contrastive Study on Linguistic Features between HT and MT based on NLPIR-ICTCLAS: A Case Study of Philosophical Text
This paper, with the aid of NLPIR-ICTCLAS, analyzes and compares original English texts and different translation versions of a philosophical text. A 1:6 English-Chinese translation corpus is used to study the linguistic structural features of human translation (HT) and machine translation (MT). The study shows that HT is characterized by more complicated language and complex sentences. At the same time, in the process of translation, human translators, compared with MT engines, intentionally avoid using too many functional words, conveying the grammatical structures and logical relations of sentences mainly through the meanings of words or clauses. The five MT versions share similarities in their use of notional and functional words.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
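The kind of corpus comparison described above can be illustrated with a small sketch: computing the ratio of functional words to all tokens in HT versus MT output. The token lists and the functional-word set below are toy stand-ins; the actual study segments Chinese text with NLPIR-ICTCLAS.

```python
# A minimal sketch of a functional-word-ratio comparison between human
# translation (HT) and machine translation (MT). The word set and the
# pre-segmented token lists are illustrative assumptions.
FUNCTION_WORDS = {"的", "了", "在", "和", "是"}  # illustrative subset

def function_word_ratio(tokens):
    return sum(t in FUNCTION_WORDS for t in tokens) / max(len(tokens), 1)

ht_tokens = ["译者", "在", "翻译", "中", "避免", "了", "冗余"]
mt_tokens = ["机器", "在", "翻译", "的", "过程", "中", "是", "直译", "的"]
print("HT:", function_word_ratio(ht_tokens), "MT:", function_word_ratio(mt_tokens))
```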
SCOPUS_ID:85081996078
A Control Unit for Emotional Conversation Generation
An emotional conversation generation model predicts the response according to both the current words and emotional words. However, researchers have so far been dedicated to adding more emotional words to conversation generation models to suit chat users' tastes, without considering whether the emotion of a response is suitable for human conversations. In this paper, we address the issue of emotion drift, in which the emotion of a response does not fall in the same category as that of its post in human conversations. We propose a control unit framework, consisting of emotional channels and a word-level attention mechanism, to incorporate natural and smooth emotional words into conversation generation. The emotional channel component consists of six channels, namely like, sadness, disgust, anger, happiness, and other, which provide the control unit with strategy choices for generating emotional words. To increase the importance of emotional content, we use the word-level attention mechanism within the emotional channels to obtain better emotional decoding of responses. Experimental results suggest that the proposed model is effective not only in content generation but also in emotion expression.
[ "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 14, 11, 47, 38 ]
http://arxiv.org/abs/2005.00613v2
A Controllable Model of Grounded Response Generation
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses. Attempts to boost informativeness alone come at the expense of factual accuracy, as attested by pretrained language models' propensity to "hallucinate" facts. While this may be mitigated by access to background knowledge, there is scant guarantee of relevance and informativeness in generated responses. We propose a framework that we call controllable grounded response generation (CGRG), in which lexical control phrases are either provided by a user or automatically extracted by a control phrase predictor from dialogue context and grounding knowledge. Quantitative and qualitative results show that, using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
[ "Dialogue Response Generation", "Text Generation" ]
[ 14, 47 ]
SCOPUS_ID:85102635320
A Controllable Text Simplification System for the Italian Language
Text simplification is a non-trivial task that aims at reducing the linguistic complexity of written texts. Researchers have studied the problem by proposing new methodologies for the English language, but other languages, such as Italian, remain almost unexplored. In this paper, we contribute to the advancement of Automated Text Simplification research by presenting a deep learning-based system, inspired by a state-of-the-art system for English, capable of simplifying Italian texts. The system has been trained and tested on the Italian version of Newsela; it has shown promising results, achieving a SARI score of 30.17.
[ "Paraphrasing", "Text Generation" ]
[ 32, 47 ]
SCOPUS_ID:85102629758
A ConvBiLSTM Deep Learning Model-Based Approach for Twitter Sentiment Classification
As one of the most widely used social media tools, Twitter is seen as an important source of information for acquiring people's attitudes, emotions, views, and feedback. Within this context, Twitter sentiment analysis techniques have been developed to decide whether textual tweets express a positive or negative opinion. In contrast to the lower classification performance of traditional algorithms, deep learning models, including the Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM), have achieved significant results in sentiment analysis. Although a CNN can extract high-level local features efficiently using convolutional and max-pooling layers, it cannot effectively learn sequential correlations. On the other hand, a Bi-LSTM uses two LSTM directions to enrich the context available to the learning algorithm, but it cannot extract local features in a parallel way. Therefore, applying a single CNN or a single Bi-LSTM to sentiment analysis cannot achieve the optimal classification result. An integrated structure of the CNN and Bi-LSTM models is proposed in this study. In the implemented ConvBiLSTM, a word embedding model converts tweets into numerical vectors, a CNN layer receives the feature embeddings as input and produces features of smaller dimension, and the Bi-LSTM model takes the input from the CNN layer and produces the classification result. Word2Vec and GloVe were applied separately to observe the impact of the word embedding choice on the proposed model. ConvBiLSTM was applied to the retrieved Tweets and SST-2 datasets. The ConvBiLSTM model with Word2Vec on the retrieved Tweets dataset outperformed the other models with 91.13% accuracy.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 12, 78, 36, 3 ]
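The embedding-CNN-BiLSTM pipeline described above can be sketched directly. Below is a minimal PyTorch rendering under assumed layer sizes; it illustrates the architecture, not the authors' implementation.

```python
# A minimal sketch of a ConvBiLSTM-style classifier: embedding layer,
# 1-D convolution + max-pooling for local features, a BiLSTM over the
# pooled sequence, and a final binary classifier. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class ConvBiLSTM(nn.Module):
    def __init__(self, vocab=20000, emb=128, channels=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(2)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)   # positive / negative

    def forward(self, x):                    # x: (batch, seq)
        h = self.emb(x).transpose(1, 2)      # (batch, emb, seq)
        h = self.pool(torch.relu(self.conv(h)))
        h = h.transpose(1, 2)                # (batch, seq/2, channels)
        _, (hn, _) = self.lstm(h)
        return self.fc(torch.cat([hn[0], hn[1]], dim=-1))

logits = ConvBiLSTM()(torch.randint(0, 20000, (4, 40)))  # (4, 2)
```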
SCOPUS_ID:85131805277
A Convenient and Efficient Scene Shop Recognition Assistant Tool for the Visually Impaired
Obtaining information about outdoor scenes has always been a great difficulty for visually impaired people. One urgent need is to obtain the names of all the stores in their surroundings when they want to go shopping in an unfamiliar environment without asking others. However, no mature assistant tools exist so far to help them get this information, which brings them great inconvenience. To address this problem, we employ the latest OCR technology and mature speech recognition technology to create a scene shop recognition application for Android that can conveniently and efficiently recognize the names of all the shops in front of a visually impaired user from a single picture taken with a mobile phone. With the assistance of this application, the travel experience of the visually impaired can be greatly improved.
[ "Visual Data in NLP", "Multimodality", "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 20, 74, 4, 68 ]
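The photograph-to-speech pipeline described above can be sketched in a few lines. Here pytesseract and pyttsx3 are assumed substitutes for the Android OCR and speech stacks the paper actually uses, and the language code and file path are illustrative.

```python
# A minimal sketch of the described pipeline: photograph -> OCR of the
# shop signs -> spoken output of the recognized names. Library choices
# are assumptions standing in for the paper's own components.
from PIL import Image
import pytesseract
import pyttsx3

def read_shop_signs(image_path, lang="chi_sim"):
    text = pytesseract.image_to_string(Image.open(image_path), lang=lang)
    names = [line.strip() for line in text.splitlines() if line.strip()]
    engine = pyttsx3.init()
    for name in names:
        engine.say(name)      # read each recognized shop name aloud
    engine.runAndWait()
    return names
```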
SCOPUS_ID:85091277172
A Conversation Analysis of Non-Progress and Coping Strategies with a Banking Task-Oriented Chatbot
Task-oriented chatbots are becoming popular alternatives for fulfilling users' needs, but few studies have investigated how users cope with conversational 'non-progress' (NP) in their daily lives. Accordingly, we analyzed a three-month conversation log between 1,685 users and a task-oriented banking chatbot. In this data, we observed 12 types of conversational NP; five types of content that was unexpected and challenging for the chatbot to recognize; and 10 types of coping strategies. Moreover, we identified specific relationships between NP types and strategies, as well as signs that users were about to abandon the chatbot, including 1) three consecutive incidences of NP, 2) consecutive use of message reformulation or switching subjects, and 3) using message reformulation as the final strategy. Based on these findings, we provide design recommendations for task-oriented chatbots, aimed at reducing NP, guiding users through such NP, and improving user experiences to reduce the cessation of chatbot use.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85069655439
A Conversation-Based Intelligent Tutoring System Benefits Adult Readers with Low Literacy Skills
This article introduces three distinctive features of a conversation-based intelligent tutoring system called AutoTutor. AutoTutor was designed to teach low-literacy adult learners comprehension strategies across different levels of discourse processing. In AutoTutor, three-way conversations take place between two computer agents (a teacher agent and a peer agent) and a human learner. The computer agents scaffold learning by asking questions and providing feedback. The interface of AutoTutor is simple and easy to use and addresses the special technology needs of adult learners. One of AutoTutor's strengths is that it is adaptive and as such can provide individualized instruction for the diverse population of adult literacy students. The adaptivity of AutoTutor is achieved by assessing learners' performance and branching them into conditions with different difficulty levels. Data from a reading comprehension intervention suggest that adult literacy students benefit from using AutoTutor. Such learning benefits may be increased by enhancing the adaptivity of AutoTutor, which may be accomplished by tailoring instruction and materials to meet the various needs of individuals with low literacy skills.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85099223537
A Conversational Agent Framework with Multi-modal Personality Expression
Consistently exhibited personalities are crucial elements of realistic, engaging, and behavior-rich conversational virtual agents. Both nonverbal and verbal cues help convey these agents' unseen psychological states, contributing to our effective communication with them. We introduce a comprehensive framework to design conversational agents that express personality through non-verbal behaviors like body movement and facial expressions, as well as verbal behaviors like dialogue selection and voice transformation. We use the OCEAN personality model, which defines personality as a combination of five orthogonal factors of openness, conscientiousness, extraversion, agreeableness, and neuroticism. The framework combines existing personality expression methods with novel ones such as new algorithms to convey Laban Shape and Effort qualities. We perform Amazon Mechanical Turk studies to analyze how different communication modalities influence our perception of virtual agent personalities and compare their individual and combined effects on each personality dimension. The results indicate that our personality-based modifications are perceived as natural, and each additional modality improves perception accuracy, with the best performance achieved when all the modalities are present. We also report some correlations for the perception of conscientiousness with neuroticism and openness with extraversion.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Multimodality" ]
[ 11, 38, 74 ]
http://arxiv.org/abs/2104.01543v2
A Conversational Agent System for Dietary Supplements Use
Dietary supplements (DS) have been widely used by consumers, but the information on their efficacy and safety is disparate or incomplete, creating barriers for consumers to find information effectively. Conversational agent (CA) systems have been applied to the healthcare domain, but despite the widespread use of DS, no such system exists to answer consumers' questions regarding DS use. In this study, we develop the first CA system for DS use.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85139061558
A Conversational Agent for Argument-driven E-participation
The majority of current e-participation tools are based on online web forums where citizens make proposals and provide comments and opinions, forming large conversation threads. Motivated by the huge popularity of instant messaging applications and the impressive, recent advances in natural language processing and artificial intelligence, in this paper we propose and investigate the use of conversational agents or chatbots as a new form of citizen-to-government communication. Specifically, we present and evaluate a novel chatbot that assists a user on the overwhelming task of exploring the citizen-generated content of Decide Madrid, a forum-based e-participatory budgeting platform. Among other things, the proposed chatbot is capable of automatically extracting, categorizing and summarizing the arguments underlying the citizen proposals and debates in the platform. Through a user study, we show promising results about the potential benefits of the chatbot in terms of several citizen participation, decision making and public value criteria.
[ "Natural Language Interfaces", "Argument Mining", "Reasoning", "Dialogue Systems & Conversational Agents" ]
[ 11, 60, 8, 38 ]
SCOPUS_ID:85132254902
A Conversational Agent for Creating Flexible Daily Automation
The spread of sensors and intelligent devices of the Internet of Things and their integration in daily environments are changing the way we interact with some of the most common objects in everyday life. Therefore, there is an evident need to provide non-expert users with the ability to customize in a simple but effective way the behaviour of these devices based on their preferences and habits. This paper presents RuleBot, a conversational agent that uses machine learning and natural language processing techniques to allow end users to create automations according to a flexible implementation of the trigger-action paradigm, and thereby customize the behaviour of devices and sensors using natural language. In particular, the paper describes the design and implementation of RuleBot, and reports on a user test and lessons learnt.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85103739785
A Conversational Agent for Database Query: A Use Case for Thai People Map and Analytics Platform
Since 2018, the Thai People Map and Analytics Platform (TPMAP) has been developed with the aim of supporting government officials and policy makers with integrated household and community data to analyze strategic plans and implement policies and decisions to alleviate poverty. However, to acquire complex information from the platform, non-technical users with no database background have to ask a programmer or a data scientist to query data for them. Such a process is time-consuming and might result in inaccurate information being retrieved due to miscommunication between non-technical and technical users. In this paper, we develop a Thai conversational agent on top of TPMAP to support self-service data analytics for complex queries. Users can simply use natural language to fetch information from our chatbot, and the query results are presented in easy-to-use formats such as statistics and charts. The proposed conversational agent retrieves and transforms natural language queries into query representations with relevant entities, query intentions, and output formats. We employ Rasa, an open-source conversational AI engine, for agent development. The results show that our system yields an F1-score of 0.9747 for intent classification and 0.7163 for entity extraction. The obtained intents and entities are then used to query target information from a graph database. Finally, our system achieves end-to-end performance with accuracies ranging from 57.5% to 80.0%, depending on query message complexity. The generated answers are then returned to users through a messaging channel.
[ "Natural Language Interfaces", "Information Retrieval", "Dialogue Systems & Conversational Agents" ]
[ 11, 24, 38 ]
SCOPUS_ID:85130246743
A Conversational Agent for Promoting Physical Activity Among COPD Patients
Chronic Obstructive Pulmonary Disease (COPD) is one of the most prevalent diseases in the world, affecting the respiratory performance of many people by limiting airflow in a way that is not fully reversible. It is a clinical syndrome characterized by chronic respiratory symptoms, structural pulmonary abnormalities, or impairment of lung function. To help people with this disease, we propose an innovative personalized mHealth coaching platform that addresses patient preferences and contextual factors: the OnTRACK platform. This platform is composed of a mobile application for patients, a web platform for healthcare professionals, and a conversational agent (or chatbot), named "Hígia", which acts as an alternative interface between patients and the platform. This conversational agent includes several of the main functionalities already available in OnTRACK's smartphone app, complementing and extending it. It allows consulting prescription information in a multitude of ways, getting and setting all personal data, inserting physical activity measurements, and obtaining historical data on physical activity and prescriptions, among others. The evaluation of the conversational agent yielded encouraging results, with users reporting being happier, more motivated, dedicated, and confident when interacting with the system using their voice, while allowing the development team to identify topics for improvement.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85139070008
A Conversational Agent for Medical Disclosure of Sexually Transmitted Infections
Sexually transmitted infections (STIs) are serious health problems worldwide, increasing the risk of infection by Human Immunodeficiency Virus (HIV)/Acquired Immune Deficiency Syndrome (AIDS). Despite significant efforts to address the pandemic, especially through sex education programs, STIs and HIV remain a significant concern. Meanwhile, conversational agents are becoming popular in healthcare for interacting with users and improving their health. This paper reports the design, implementation, and evaluation of VIHrtual-App: an online conversational agent that uses user-centered development and supervised machine learning to offer an engaging sex education tool promoting awareness and prevention. VIHrtual-App can identify more than 250 STI/HIV-related questions and respond accordingly, providing reliable information in an attractive way.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85077798537
A Conversational Agent in Support of Productivity and Wellbeing at Work
Conversational agents have the potential to support users in many tasks. However, support for productivity and well-being in the workplace has received little attention. We present the first design of a conversational system that supports information workers with multiple work-related goals, informed by a survey of the current and potential use of conversational agents in the workplace. The goals of this research include the evaluation of using an agent for scheduling and prioritizing tasks, switching tasks, providing break reminders, dealing with social media distractions and for end of the day reflection on tasks accomplished. We deployed a chat-based intelligent agent, named Amber, in a field study with 24 information workers over the course of 6 days. We present our preliminary findings from the field study and discuss implications for the design of future workplace conversational agents.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85119405756
A Conversational Agent to Support Hospital Bed Allocation
Bed allocation in hospitals is a critical and important problem, and it has become even more important since last year because of the COVID-19 pandemic. In this paper, we present an approach based on intelligent-agent technologies to assist hospital staff in charge of bed allocation. As part of this work, we developed a web-based simulation of a hospital bed allocation system integrated with a chatbot for interaction with the user. As a core component in our approach, an intelligent agent uses the feedback of a plan validator to check if there are any flaws in a user-made allocation, communicating any detected problems to the user in natural language through the chatbot. Thus, our resulting application not only validates bed allocation plans but also interacts with hospital professionals using natural language communication, including giving explainable suggestions of better alternative allocations. We evaluated our approach with professionals responsible for bed allocation in two local hospitals and a doctor who provides consultancy to another local hospital. The version of the system reported in this paper addresses all the suggestions made by the specialists who evaluated its previous version.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
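The plan-validator feedback loop described above can be sketched as a function that checks a proposed allocation against constraints and returns human-readable flaws a chatbot could verbalize. The constraints shown (no double-booking, isolation wards for infectious patients) are illustrative assumptions, not the paper's actual rule set.

```python
# A minimal sketch of a bed-allocation plan validator whose messages a
# conversational agent could relay to the user. Data shapes and the two
# constraints are hypothetical.
def validate_allocation(assignments, beds):
    """assignments: list of (patient_dict, bed_id);
    beds: {bed_id: {"isolation": bool}}."""
    flaws, occupied = [], {}
    for patient, bed in assignments:
        name = patient["name"]
        if bed in occupied:
            flaws.append(f"Bed {bed} is double-booked ({occupied[bed]} and {name}).")
        occupied[bed] = name
        if patient.get("infectious") and not beds[bed]["isolation"]:
            flaws.append(f"{name} is infectious but bed {bed} is not in isolation.")
    return flaws

plan = [({"name": "A", "infectious": True}, "B1"),
        ({"name": "B"}, "B1")]
print(validate_allocation(plan, {"B1": {"isolation": False}}))
```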
SCOPUS_ID:85130514745
A Conversational Approach for Modifying Service Mashups in IoT Environments
Existing conversational approaches for Internet of Things (IoT) service mashup do not support modification because of the usability challenge, although it is common for users to modify the service mashups in IoT environments. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups using CoMMA through natural language conversations. CoMMA has a two-step mashup modification interaction, an implicature-based localization step, and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step supports users to modify mashups by speaking simple modification commands. We conducted a user study and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85096528018
A Conversational Assistant on Mobile Devices for Primitive Learners of Computer Programming
According to eMarketer's survey, the smartphone penetration rate in Taiwan is among the top 5 in the world, at around 77.6% of the population in 2018. Among youths especially, almost everyone has a smartphone. They use mobile devices ubiquitously, which causes problems for learning in the classroom. Instead of restricting the use of smartphones, we encourage students to utilize the devices for learning as much as possible. Therefore, a conversational assistant (chatbot) based on IBM Watson Assistant and Facebook Messenger was developed to help non-technical students who are new to computer programming. At the development stage, students appreciated the assistant as a 'first aid' for the course materials and suggested that it should provide more help with programming logic. As we progress to the testing stage, we will cyclically analyze the conversations between the assistant and users and identify improvements for better instructional design. A controlled experiment will be used to determine whether the first version of the chatbot can improve students' engagement with and interest in programming.
[ "Natural Language Interfaces", "Programming Languages in NLP", "Multimodality", "Dialogue Systems & Conversational Agents" ]
[ 11, 55, 74, 38 ]
SCOPUS_ID:85091340655
A Conversational Digital Assistant for Intelligent Process Automation
Robotic process automation (RPA) has emerged as the leading approach to automating tasks in business processes. Moving away from back-end automation, RPA automates mouse clicks on user interfaces; this outside-in approach reduces the overhead of updating legacy software. However, its many shortcomings, namely its lack of accessibility to business users, have prevented its widespread adoption in highly regulated industries. In this work, we explore interactive automation in the form of a conversational digital assistant that allows business users to interact with and customize their automation solutions through natural language. The framework, which creates such assistants, relies on a multi-agent orchestration model and on conversational wrappers for autonomous agents, including RPAs. We demonstrate the effectiveness of our proposed approach on a loan approval business process and a travel preapproval business process.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85127198576
A Conversational Framework for Semantic Question Answering in Customer Services with Machine Learning on Knowledge Graph
Despite the recent advances in Natural Language Processing (NLP) techniques, many issues and inefficiencies arise when it comes to creating a system capable of interacting with users by means of text conversations. Current techniques rely on the development of chatbots that, however, require designing the conversation flow, defining training questions, and associating the expected responses. Even though this process allows the creation of effective question-answering systems, the methodology is not scalable, especially when the answers are to be found in documents. Other approaches, instead, rely on graph embedding techniques and graph neural networks to determine the best answer to a given question. These methods, however, require setting up training routines and having ground truth available, which, in general, is difficult to retrieve or create for real industrial applications. In this paper we introduce a conversational framework for semantic question answering. Our work relies on knowledge graphs and the use of machine learning to determine the best answer to a question based on the content of the knowledge graph. In addition, by leveraging text mining techniques we are able to identify the best set of answers for the question, which are further filtered by means of deep learning algorithms.
[ "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Multimodality" ]
[ 72, 50, 27, 18, 11, 38, 74 ]
SCOPUS_ID:85054217342
A Conversational Neural Language Model for Speech Recognition in Digital Assistants
Speech recognition in digital assistants such as Google Assistant can potentially benefit from the use of conversational context consisting of user queries and responses from the agent. We explore the use of recurrent Long Short-Term Memory (LSTM) neural language models (LMs) to model the conversations in a digital assistant. Our proposed methods effectively capture the context of previous utterances in a conversation without modifying the underlying LSTM architecture. We demonstrate a 4% relative improvement in recognition performance on Google Assistant queries when using the LSTM LMs to rescore recognition lattices.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 11, 47, 38, 10, 74 ]
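The rescoring step described above can be sketched as n-best rescoring, a common simplification of the lattice rescoring the paper performs: first-pass hypotheses are re-ranked by interpolating their recognizer score with a context-aware LM score. The `lm_logprob` callable is a stand-in for the paper's conversational LSTM LM, and the interpolation weight is illustrative.

```python
# A minimal sketch of LM rescoring over an n-best list. The toy LM below
# simply prefers shorter hypotheses; a real system would plug in an LSTM
# LM conditioned on the conversation context.
def rescore(nbest, context, lm_logprob, lam=0.5):
    """nbest: list of (hypothesis_text, first_pass_score)."""
    def total(hyp, first_pass):
        return (1 - lam) * first_pass + lam * lm_logprob(context, hyp)
    return max(nbest, key=lambda h: total(*h))

best = rescore([("play some music", -12.0), ("play sum music", -11.5)],
               context="previous user query",
               lm_logprob=lambda ctx, hyp: -len(hyp.split()))
print(best)
```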
SCOPUS_ID:85147250090
A Conversational Recommendation System Has Better Usability? a Case-Study of TravelMate
TravelMate is an integrated recommendation system to recommend proper tours or travel companions for travelers. Recommendation systems are varied based on different filtering methodologies. Content-based filtering is a good idea for some users who know what to find. However, for users still thinking about the requirements, the conversational recommendation system (CRS) helps clarify the needs in the chatting process in a harmless and immersive process. TravelMate applies both for different users and objectives. In order to evaluate the usefulness of CRS, NASA-TLX will be employed to test if the workload of users is reduced.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85047221019
A Conversational User Interface for Software Visualization
Software visualizations provide many different complex views with different filters and metrics. But often users have a specific question to which they want an answer, or they need to find the best visualization by themselves without being aware of other metrics and possibilities of the visualization tool. We propose an interaction with software visualizations based on a conversational interface. The developed tool extracts meta information from natural language sentences and displays the best-fitting software visualization by adjusting metrics and filter settings.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85128857803
A Convolution Neural Network-Based Representative Spatio-Temporal Documents Classification for Big Text Data
With the proliferation of mobile devices, the number of social media users and online news articles is rapidly increasing, and online text information is accumulating as big data. As spatio-temporal information becomes more important, research on extracting spatio-temporal information from online text data and utilizing it for event analysis is being actively conducted. However, if spatio-temporal information that does not describe the core subject of a document is extracted, it is difficult to guarantee the accuracy of core event analysis. Therefore, it is important to extract spatio-temporal information that describes the core topic of a document. In this study, spatio-temporal information describing the core topic of a document is defined as 'representative spatio-temporal information', and documents containing such information are defined as 'representative spatio-temporal documents'. We propose a character-level Convolutional Neural Network (CNN)-based document classifier to classify representative spatio-temporal documents. To train the proposed CNN model, 7400 training examples were constructed for representative spatio-temporal documents. The experimental results show that the proposed CNN model outperforms traditional machine learning classifiers and existing CNN-based classifiers.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85101357587
A Convolution-Self Attention Abstractive Summarization Method Fusing Sequential Grammar Knowledge
Abstractive summarization analyzes the core ideas of a document and rephrases or uses new words to generate a summary that summarizes the whole document. However, the encoder-decoder model cannot fully extract syntax, which causes generated summaries to violate grammar rules. Recurrent neural networks easily forget historical information and cannot perform parallel computation during training, which makes the main idea of the summary less salient and encoding slow. In view of the above problems, a new abstractive summarization method fusing sequential syntax into the convolution-self-attention model is proposed. First, a phrase structure tree is constructed for the document and sequential syntax is embedded into the encoder, so the method can make better use of syntax when encoding. Then, the convolution-self-attention model is used in place of the recurrent neural network for encoding, learning global and local information from the document sufficiently. Experimental results on the CNN/Daily Mail dataset show that the proposed method is superior to state-of-the-art methods. At the same time, the generated summaries are more grammatical, their main ideas are more salient, and the encoding speed of the model is faster.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 52, 72, 15, 30, 47, 3 ]
http://arxiv.org/abs/1602.03001v2
A Convolutional Attention Network for Extreme Summarization of Source Code
Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension. Often there exist features that are locally translation invariant and would be valuable for directing the model's attention, but previous attentional architectures are not constructed to learn such features specifically. We introduce an attentional neural network that employs convolution on the input tokens to detect local time-invariant and long-range topical attention features in a context-dependent way. We apply this architecture to the problem of extreme summarization of source code snippets into short, descriptive function name-like summaries. Using those features, the model sequentially generates a summary by marginalizing over two attention mechanisms: one that predicts the next summary token based on the attention weights of the input tokens and another that is able to copy a code token as-is directly into the summary. We demonstrate our convolutional attention neural network's performance on 10 popular Java projects showing that it achieves better performance compared to previous attentional mechanisms.
[ "Programming Languages in NLP", "Information Extraction & Text Mining", "Summarization", "Text Generation", "Multimodality" ]
[ 55, 3, 30, 47, 74 ]
http://arxiv.org/abs/2006.02547v2
A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning
Probabilistic Latent Variable Models (LVMs) provide an alternative to self-supervised learning approaches for linguistic representation learning from speech. LVMs admit an intuitive probabilistic interpretation where the latent structure shapes the information extracted from the signal. Even though LVMs have recently seen a renewed interest due to the introduction of Variational Autoencoders (VAEs), their use for speech representation learning remains largely unexplored. In this work, we propose Convolutional Deep Markov Model (ConvDMM), a Gaussian state-space model with non-linear emission and transition functions modelled by deep neural networks. This unsupervised model is trained using black box variational inference. A deep convolutional neural network is used as an inference network for structured variational approximation. When trained on a large scale speech dataset (LibriSpeech), ConvDMM produces features that significantly outperform multiple self-supervised feature extracting methods on linear phone classification and recognition on the Wall Street Journal dataset. Furthermore, we found that ConvDMM complements self-supervised methods like Wav2Vec and PASE, improving on the results achieved with any of the methods alone. Lastly, we find that ConvDMM features enable learning better phone recognizers than any other features in an extreme low-resource regime with few labeled training examples.
[ "Low-Resource NLP", "Semantic Text Processing", "Speech & Audio in NLP", "Representation Learning", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 80, 72, 70, 12, 4, 74 ]
http://arxiv.org/abs/1611.02344v3
A Convolutional Encoder Model for Neural Machine Translation
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows the entire source sentence to be encoded simultaneously, whereas in recurrent networks computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than two times at the same or higher accuracy as a strong bi-directional LSTM baseline.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
http://arxiv.org/abs/2111.06625v1
A Convolutional Neural Network Based Approach to Recognize Bangla Spoken Digits from Speech Signal
Speech recognition is a technique that converts human speech signals into text, words, or any form that can be easily understood by computers or other machines. There have been a few studies on Bangla digit recognition systems, the majority of which used small datasets with few variations in gender, age, dialect, and other variables. In this study, audio recordings of Bangladeshi people of various genders, ages, and dialects were used to create a large speech dataset of the spoken Bangla digits '0-9'. Here, 400 noisy and noise-free samples per digit were recorded to create the dataset. Mel Frequency Cepstrum Coefficients (MFCCs) were utilized to extract meaningful features from the raw speech data. Then, Convolutional Neural Networks (CNNs) were utilized to detect the Bangla numeral digits. The suggested technique recognizes the spoken Bangla digits '0-9' with 97.1% accuracy over the whole dataset. The efficiency of the model was also assessed using 10-fold cross-validation, which yielded 96.7% accuracy.
[ "Responsible & Trustworthy NLP", "Multimodality", "Speech & Audio in NLP", "Green & Sustainable NLP" ]
[ 4, 74, 70, 68 ]
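The MFCC-plus-CNN front end described above can be sketched compactly. librosa is an assumed choice for feature extraction, and the file path, feature count, and network sizes are illustrative.

```python
# A minimal sketch of the described pipeline: MFCC features extracted
# from a waveform, fed to a small CNN over ten digit classes. Everything
# below is an illustrative stand-in for the paper's actual setup.
import librosa
import torch
import torch.nn as nn

y, sr = librosa.load("digit_sample.wav", sr=16000)        # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)        # (20, frames)
x = torch.tensor(mfcc, dtype=torch.float32)[None, None]   # (1, 1, 20, T)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),   # ten Bangla digit classes
)
logits = model(x)
```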
SCOPUS_ID:85053541812
A Convolutional Neural Network Model for Emotion Detection from Tweets
Sentiment analysis and emotion recognition are major indicators of societal trends toward certain topics. Analyzing opinions and feelings helps improve human-computer interaction in several fields, ranging from opinion mining to psychological concerns. This paper proposes a deep learning model for emotion detection from short informal sentences. The model consists of three Convolutional Neural Networks (CNNs). Each CNN contains a convolutional layer and a max-pooling layer, followed by a fully-connected layer for classifying the sentences as positive or negative. The model employs word vector representations as textual features, initialized randomly and set to be trainable and updated during the model training phase. Eventually, task-specific vectors are generated as the model learns to distinguish the meaning of words in the dataset. The model has been tested on the Stanford Twitter Sentiment dataset for classifying sentiment into two classes, positive and negative. The presented model achieved 80.6% accuracy, proving that even with randomly initialized word vectors it can work very well on text classification tasks when trained with CNNs.
[ "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Sentiment Analysis", "Emotion Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 12, 78, 61, 36, 3 ]
http://arxiv.org/abs/1807.01704v1
A Convolutional Neural Network for Aspect Sentiment Classification
With the development of the Internet, natural language processing (NLP), in which sentiment analysis is an important task, became vital to information processing. Sentiment analysis includes aspect sentiment classification. Aspect sentiment can provide complete and in-depth results, with increased attention on the aspect level. Different context words in a sentence influence the sentiment polarity of the sentence variably, and polarity varies with the different aspects in a sentence. Take the sentence 'I bought a new camera. The picture quality is amazing but the battery life is too short.' as an example. If the aspect is picture quality, then the expected sentiment polarity is 'positive'; if the battery life aspect is considered, then the sentiment polarity should be 'negative'. Therefore, the aspect is important to consider when exploring aspect sentiment in a sentence. Recurrent neural networks (RNNs) are regarded as good models for natural language processing, and RNNs have achieved good performance on aspect sentiment classification, including Target-Dependent LSTM (TD-LSTM) and Target-Connection LSTM (TC-LSTM) (Tang, 2015a, b), as well as AE-LSTM, AT-LSTM, and AEAT-LSTM (Wang et al., 2016). There is also an extensive literature on sentiment classification utilizing convolutional neural networks, but little literature on aspect sentiment classification using convolutional neural networks. In this paper, we develop attention-based input layers in which aspect information is considered by the input layer. We then incorporate the attention-based input layers into a convolutional neural network (CNN) to introduce context word information. In our experiments, incorporating aspect information into the CNN improves its aspect sentiment classification performance without using a syntactic parser or external sentiment lexicons on a benchmark dataset from Twitter, achieving better performance compared with other models.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Polarity Analysis", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 33, 23, 78, 36, 3 ]
http://arxiv.org/abs/1904.00805v1
A Convolutional Neural Network for Language-Agnostic Source Code Summarization
Descriptive comments play a crucial role in the software engineering process. They decrease development time, enable better bug detection, and facilitate the reuse of previously written code. However, comments are commonly the last of a software developer's priorities and are thus either insufficient or missing entirely. Automatic source code summarization may therefore have the ability to significantly improve the software development process. We introduce a novel encoder-decoder model that summarizes source code, effectively writing a comment to describe the code's functionality. We make two primary innovations beyond current source code summarization models. First, our encoder is fully language-agnostic and requires no complex input preprocessing. Second, our decoder has an open vocabulary, enabling it to predict any word, even ones not seen in training. We demonstrate results comparable to state-of-the-art methods on a single-language data set and provide the first results on a data set consisting of multiple programming languages.
[ "Language Models", "Programming Languages in NLP", "Semantic Text Processing", "Summarization", "Multimodality", "Text Generation", "Code Generation", "Information Extraction & Text Mining" ]
[ 52, 55, 72, 30, 74, 47, 44, 3 ]
SCOPUS_ID:85091984116
A Convolutional Neural Network with Word-level Attention for Text Classification
Text classification is a classic task in the NLP area which aims to predict the categories of given texts. Many neural network models have been applied to this task with the development of neural network technology. One typical neural network structure for the task is the convolutional neural network (CNN). However, the existing traditional CNN model used for classification is not sensitive to the key words in texts, so it cannot capture the important information in them. Therefore, in this paper, we propose a new attention-based convolutional neural network (ACNN) which is trained to pay more attention to important words to assist in classifying texts. We evaluate our model and the classic CNN model on several public datasets, and our experimental results show that the proposed ACNN model outperforms the basic CNN model.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85118261543
A Convolutional Stacked Bidirectional LSTM with a Multiplicative Attention Mechanism for Aspect Category and Sentiment Detection
Traditionally, sentiment analysis is a binary classification task that aims to categorize a piece of text as positive or negative. This approach, however, can be too simplistic when the text under scrutiny contains more than one opinion target. Hence, aspect-based sentiment analysis provides fine-grained sentiment understanding of a product, service, or policy. Machine learning and deep learning algorithms play an important role in this kind of task, and attention mechanisms have produced breakthroughs in the field of natural language processing. Therefore, we propose a convolutional stacked bidirectional long short-term memory model with a multiplicative attention mechanism for aspect category and sentiment polarity detection. More specifically, we treat the task as a multiclass classification problem. The proposed model is evaluated on the SemEval-2015 and SemEval-2016 datasets and outperforms state-of-the-art results in aspect-based sentiment analysis.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 23, 78, 36, 3 ]
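The multiplicative (bilinear) attention named above can be sketched as a learned matrix scoring each hidden state against a query, with the softmax-weighted sum forming the final representation. The dimensions and the source of the query vector are illustrative assumptions.

```python
# A minimal sketch of multiplicative attention over BiLSTM outputs:
# scores = states W query, weights = softmax(scores), output = weighted
# sum of states. Sizes are illustrative.
import torch
import torch.nn as nn

class MultiplicativeAttention(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.W = nn.Linear(hidden, hidden, bias=False)

    def forward(self, states, query):
        # states: (batch, seq, hidden); query: (batch, hidden)
        scores = torch.bmm(self.W(states), query.unsqueeze(-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)            # (batch, seq)
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)

attn = MultiplicativeAttention(128)
ctx = attn(torch.randn(4, 30, 128), torch.randn(4, 128))   # (4, 128)
```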
SCOPUS_ID:85084111546
A Cooperative Binary-Clustering Framework Based on Majority Voting for Twitter Sentiment Analysis
Twitter sentiment analysis is a challenging problem in natural language processing. For this purpose, supervised learning techniques have mostly been employed, which require labeled data for training. However, labeling datasets of large size is very time consuming. To address this issue, unsupervised learning techniques such as clustering can be used. In this study, we explore the possibility of using hierarchical clustering for Twitter sentiment analysis. Three hierarchical clustering techniques, namely single linkage (SL), complete linkage (CL), and average linkage (AL), are examined. A cooperative framework of SL, CL, and AL is built to select the optimal cluster for tweets, wherein optimal-cluster selection is operationalized using majority voting. The hierarchical clustering techniques are also compared with k-means and two state-of-the-art classifiers (SVM and Naïve Bayes). The performance of clustering and classification is measured in terms of accuracy and time efficiency. The experimental results indicate that the majority-voting-based cooperative clustering approach is robust, producing good-quality clusters at the cost of poor time efficiency. The results also suggest that the accuracy of the proposed clustering framework is comparable to that of the classifiers, which is encouraging.
[ "Information Extraction & Text Mining", "Information Retrieval", "Green & Sustainable NLP", "Sentiment Analysis", "Text Clustering", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 3, 24, 68, 78, 29, 36, 4 ]
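The cooperative framework described above can be sketched with scikit-learn: cluster the same feature matrix with single, complete, and average linkage, then take the majority label across the three clusterings. Aligning label conventions across clusterings is simplified away here, and the feature matrix is a stand-in for real tweet features.

```python
# A minimal sketch of majority voting over three hierarchical
# clusterings (single, complete, average linkage). Real usage would
# first align cluster labels across the three runs, which this
# simplification omits.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.random.rand(100, 50)   # stand-in tweet feature vectors
labels = [AgglomerativeClustering(n_clusters=2, linkage=l).fit_predict(X)
          for l in ("single", "complete", "average")]
votes = np.stack(labels)                          # (3, n_samples)
majority = (votes.sum(axis=0) >= 2).astype(int)   # majority of three voters
```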
http://arxiv.org/abs/2102.08322v1
A Cooperative Memory Network for Personalized Task-oriented Dialogue Systems with Incomplete User Profiles
There is increasing interest in developing personalized Task-oriented Dialogue Systems (TDSs). Previous work on personalized TDSs often assumes that complete user profiles are available for most or even all users. This is unrealistic because (1) not everyone is willing to expose their profiles due to privacy concerns; and (2) rich user profiles may involve a large number of attributes (e.g., gender, age, tastes, . . .). In this paper, we study personalized TDSs without assuming that user profiles are complete. We propose a Cooperative Memory Network (CoMemNN) that has a novel mechanism to gradually enrich user profiles as dialogues progress and to simultaneously improve response selection based on the enriched profiles. CoMemNN consists of two core modules: User Profile Enrichment (UPE) and Dialogue Response Selection (DRS). The former enriches incomplete user profiles by utilizing collaborative information from neighbor users as well as current dialogues. The latter uses the enriched profiles to update the current user query so as to encode more useful information, based on which a personalized response to a user request is selected. We conduct extensive experiments on the personalized bAbI dialogue benchmark datasets. We find that CoMemNN is able to enrich user profiles effectively, which results in an improvement of 3.06% in terms of response selection accuracy compared to state-of-the-art methods. We also test the robustness of CoMemNN against incompleteness of user profiles by randomly discarding attribute values from user profiles. Even when discarding 50% of the attribute values, CoMemNN is able to match the performance of the best performing baseline without discarding user profiles, showing the robustness of CoMemNN.
[ "Responsible & Trustworthy NLP", "Natural Language Interfaces", "Robustness in NLP", "Dialogue Systems & Conversational Agents" ]
[ 4, 11, 58, 38 ]
http://arxiv.org/abs/2211.10271v1
A Copy Mechanism for Handling Knowledge Base Elements in SPARQL Neural Machine Translation
Neural Machine Translation (NMT) models from English to SPARQL are a promising development for SPARQL query generation. However, current architectures are unable to integrate the knowledge base (KB) schema and handle questions on knowledge resources, classes, and properties unseen during training, rendering them unusable outside the scope of topics covered in the training set. Inspired by the performance gains in natural language processing tasks, we propose to integrate a copy mechanism for neural SPARQL query generation as a way to tackle this issue. We illustrate our proposal by adding a copy layer and a dynamic knowledge base vocabulary to two Seq2Seq architectures (CNNs and Transformers). This layer makes the models copy KB elements directly from the questions, instead of generating them. We evaluate our approach on state-of-the-art datasets, including datasets referencing unknown KB elements and measure the accuracy of the copy-augmented architectures. Our results show a considerable increase in performance on all datasets compared to non-copy architectures.
[ "Machine Translation", "Semantic Text Processing", "Knowledge Representation", "Text Generation", "Multilinguality" ]
[ 51, 72, 18, 47, 0 ]
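The copy layer described above has a standard core computation: the final distribution mixes generation from the decoder vocabulary with copying source tokens, weighted by a learned gate. Below is a minimal, hedged PyTorch sketch of that mixing step; the tensors are assumed decoder outputs, and the extended vocabulary (decoder vocabulary plus KB elements in the question) is represented only by its token ids.

```python
# A minimal sketch of a pointer/copy distribution: p_final =
# p_gen * p_vocab + (1 - p_gen) * attention mass scattered onto the
# extended-vocabulary ids of the source question tokens.
import torch

def copy_distribution(p_vocab, attn, src_ids, p_gen):
    """p_vocab: (batch, ext_vocab); attn: (batch, src_len);
    src_ids: (batch, src_len) ids in the extended vocabulary;
    p_gen: (batch, 1) generate-vs-copy gate."""
    gen = p_gen * p_vocab
    copy = torch.zeros_like(p_vocab)
    copy.scatter_add_(1, src_ids, (1 - p_gen) * attn)
    return gen + copy
```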
SCOPUS_ID:85126275378
A Corpus Approach to Roman Law Based on Justinian’s Digest
Traditional philological methods in Roman legal scholarship such as close reading and strict juristic reasoning have analysed law in extraordinary detail. Such methods, however, have paid less attention to the empirical characteristics of legal texts and occasionally projected an abstract framework onto the sources. The paper presents a series of computer-assisted methods to open new frontiers of inquiry. Using a Python coding environment, we have built a relational database of the Latin text of the Digest, a historical sourcebook of Roman law compiled under the order of Emperor Justinian in 533 CE. Subsequently, we investigated the structure of Roman law by automatically clustering the sections of the Digest according to their linguistic profile. Finally, we explored the characteristics of Roman legal language according to the principles and methods of computational distributional semantics. Our research has discovered an empirical structure of Roman law which arises from the sources themselves and complements the dominant scholarly assumption that Roman law rests on abstract structures. By building and comparing Latin word embeddings models, we were also able to detect a semantic split in words with general and legal sense. These investigations point to a practical focus in Roman law which is consistent with the view that ancient law schools were more interested in training lawyers for practice rather than in philosophical neatness.
[ "Representation Learning", "Information Extraction & Text Mining", "Semantic Text Processing", "Text Clustering" ]
[ 12, 3, 72, 29 ]
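The embedding step described above can be sketched with gensim: train Word2Vec on tokenized Digest sections and query the neighbours of a Latin term to probe its general versus legal sense. The two-sentence corpus below is a toy stand-in for the full Digest text, and all hyperparameters are illustrative.

```python
# A minimal sketch of training Latin word embeddings and inspecting
# nearest neighbours, in the spirit of the study's semantic analysis.
from gensim.models import Word2Vec

sections = [["seruitus", "est", "constitutio", "iuris", "gentium"],
            ["dominium", "rei", "suae", "alienare", "potest"]]
model = Word2Vec(sentences=sections, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("dominium", topn=5))
```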
SCOPUS_ID:85062859647
A Corpus Based N-gram Hybrid Approach of Bengali to English Machine Translation
Machine translation means automatic translation performed using computer software. There are several approaches to machine translation; some need extensive linguistic knowledge while others require enormous statistical computation. This paper presents a hybrid method, integrating a corpus-based approach and a statistical approach, for translating Bengali sentences into English with the help of an N-gram language model. The corpus-based method finds the corresponding target-language translation of sentence fragments, selecting the best-matching text from the bilingual corpus, while the N-gram model rearranges the sentence constituents to get an accurate translation without employing external linguistic rules. A variety of Bengali sentences, covering various structures and verb tenses, are translated through the new system. The performance of the proposed system is evaluated in terms of adequacy, fluency, WER, and BLEU score. The assessment scores are compared with other conventional approaches as well as with Google Translate, a well-known free machine translation service by Google. Experimental results show that the proposed system achieves higher scores than Google Translate and the other methods, at lower computational cost.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
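The reordering step described above can be illustrated with a small sketch: candidate orderings of the matched target-language fragments are scored with an N-gram (here bigram) model, and the highest-scoring arrangement is kept. The probability table is a toy stand-in for a model estimated from the bilingual corpus, and exhaustive permutation search is only feasible for short fragment lists.

```python
# A minimal sketch of N-gram-based constituent reordering: score each
# permutation of the translated fragments under a bigram LM and keep
# the best one. The bigram table and back-off are illustrative.
from itertools import permutations
from math import log

BIGRAM_LOGPROB = {("i", "am"): log(0.5), ("am", "going"): log(0.4),
                  ("going", "home"): log(0.3)}
FLOOR = log(1e-6)   # back-off penalty for unseen bigrams

def score(words):
    return sum(BIGRAM_LOGPROB.get(pair, FLOOR)
               for pair in zip(words, words[1:]))

fragments = ["home", "i", "am", "going"]
best = max(permutations(fragments), key=score)
print(best)   # ('i', 'am', 'going', 'home')
```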
https://aclanthology.org//2020.nlptea-1.18/
A Corpus Linguistic Perspective on the Appropriateness of Pop Songs for Teaching Chinese as a Second Language
Language and music are closely related. Given their rich linguistic features, pop songs are likely suitable as extracurricular materials in language teaching. To examine this point, this paper presents the Contemporary Chinese Pop Lyrics (CCPL) corpus. Based on it, we investigated and evaluated the appropriateness of pop songs for Teaching Chinese as a Second Language (TCSL) with the assistance of Natural Language Processing methods from the perspectives of Chinese character coverage, lexical coverage, and addressed topic similarity. Some suggestions for Chinese teaching with the aid of pop lyrics are provided.
[ "Text Error Correction", "Syntactic Text Processing" ]
[ 26, 15 ]
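The character-coverage perspective mentioned above reduces to a simple metric: the share of a lyric's Chinese characters that fall inside a target syllabus vocabulary. The syllabus set below is a toy stand-in (a real study would use, e.g., an HSK-style character list).

```python
# A minimal sketch of character coverage for judging the appropriateness
# of a lyric for language teaching. The syllabus character set is an
# illustrative assumption.
SYLLABUS_CHARS = set("我你他的了在是爱不人")

def char_coverage(lyric):
    chars = [c for c in lyric if "\u4e00" <= c <= "\u9fff"]   # CJK only
    return sum(c in SYLLABUS_CHARS for c in chars) / max(len(chars), 1)

print(char_coverage("我爱你就像爱生命"))   # 0.5 under the toy syllabus
```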
SCOPUS_ID:85124093014
A Corpus Linguistics Approach to the Representation of Western Religious Beliefs in Ten Series of Chinese University English Language Teaching Textbooks
Early Sino-Western contact took place through the interaction of religion and language, producing language contact. Research on this contact remains relatively limited to date, however, particularly in the realm of English-language materials; in fact, there is a paucity of research on Western religions in English Language Teaching (ELT) textbooks. Applying corpus linguistics as a tool and Critical Discourse Analysis as the theoretical framework, this manuscript critically investigates the significant semantic domains in ten English-language textbook series that are officially approved and widely used in Chinese universities. The findings suggest that various Western religious beliefs, highly unusual topics in earlier Chinese ELT textbooks, are represented in the textbook corpus. The results also show that, when presenting views and attitudes toward Western religious beliefs, these textbooks adopt an eclectic approach to material selection. Surprisingly, positive semantic prosody surrounding the concept of religion is evident, and no consistent negative authorial stance toward religion is captured. Atheism has been assumed to lie at the center of Chinese intellectual traditions and to be the essence of the Constitution of the Chinese Communist Party. The findings from this study therefore provide a new understanding of Chinese foreign-language textbooks in the new era and add to the literature on ELT textbooks and their development worldwide.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Representation Learning" ]
[ 71, 72, 12 ]
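For corpus-linguistic analyses like the one above, a standard way to surface significant items or semantic domains is Dunning's log-likelihood keyness statistic, comparing an item's frequency in the study corpus against a reference corpus. The sketch below shows the computation with invented counts, not the study's actual figures.

```python
# Minimal sketch: Dunning log-likelihood keyness with invented counts.
import math

def log_likelihood(a, b, c, d):
    """a,b: item frequency in study/reference corpus; c,d: corpus sizes."""
    e1 = c * (a + b) / (c + d)
    e2 = d * (a + b) / (c + d)
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

# e.g. an item occurring 120 times in a 1M-token textbook corpus
# versus 40 times in a 2M-token reference corpus:
print(round(log_likelihood(120, 40, 1_000_000, 2_000_000), 2))
```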
SCOPUS_ID:85116114326
A Corpus Preprocessing Method for Syllable-Level Tibetan Text Classification
Text classification is one of the most common and important tasks in applied natural language processing. With the rapid development of the machine learning landscape, deep learning has become the mainstream approach for implementing text classification applications. However, deep learning places high demands on the scale and quality of corpora, so building large-scale, high-quality corpora is particularly important. To improve the quality of Tibetan text classification corpora, and based on an analysis of the state of research on corpus preprocessing, this paper proposes a syllable-level Tibetan text classification corpus preprocessing model and presents its core module, a text normalization algorithm we refer to as TC_TCCNL. The proposed method lays a foundation for the construction of Tibetan text classification corpora.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
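A minimal sketch of syllable-level Tibetan preprocessing: Tibetan syllables are delimited by the tsheg mark (U+0F0B), so segmentation can start from Unicode normalization and a tsheg split. The abstract does not publish the TC_TCCNL algorithm itself, so this is only a plausible first step, not the authors' method.

```python
# Minimal sketch: normalize and split Tibetan text into syllables.
import unicodedata

TSHEG = "\u0f0b"   # ་  syllable delimiter
SHAD = "\u0f0d"    # །  sentence/clause delimiter

def syllables(text: str) -> list[str]:
    """Normalize Unicode, drop shads, and split on the tsheg."""
    text = unicodedata.normalize("NFC", text).replace(SHAD, " ")
    return [s for chunk in text.split() for s in chunk.split(TSHEG) if s]

print(syllables("བོད་སྐད།"))   # -> ['བོད', 'སྐད']
```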
http://arxiv.org/abs/2004.03287v1
A Corpus Study and Annotation Schema for Named Entity Recognition and Relation Extraction of Business Products
Recognizing non-standard entity types and relations, such as B2B products, product classes, and their producers, in news and forum texts is important in application areas such as supply chain monitoring and market research. However, there is a decided lack of annotated corpora and annotation guidelines in this domain. In this work, we present a corpus study, an annotation schema, and associated guidelines for the annotation of product-entity and company-product relation mentions. We find that although product mentions are often realized as noun phrases, defining their exact extent is difficult due to high boundary ambiguity and the broad syntactic and semantic variety of their surface realizations. We also describe our ongoing annotation effort and present a preliminary corpus of English web and social media documents annotated according to the proposed guidelines.
[ "Relation Extraction", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 75, 34, 3 ]
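Since the study above finds that product mentions are usually realized as noun phrases, spaCy noun chunks make a simple candidate extractor. This is a baseline illustration with an invented sentence, not the authors' annotation tooling.

```python
# Minimal sketch: noun chunks as candidate product-entity spans.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed
doc = nlp("Acme Corp announced the TurboWidget 3000 router last week.")

for chunk in doc.noun_chunks:
    print(chunk.text)  # candidate spans; boundaries still need review
```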
https://aclanthology.org//W01-1626/
A Corpus Study of Evaluative and Speculative Language
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85101068033
A Corpus Study of Ideology-Driven Discourse Practice: The University Language Learner as Researcher. The Case of Prepositions
It is widely acknowledged that both language learners and teachers have benefitted in many ways from the implementation of ICTs in and outside the classroom, and continue to do so. The range of skills that students, as future professionals, are required to master has risen exponentially in recent times, partly as a result of these technological developments. The implementation of ICTs has been coupled with a focus in higher education on exploring new ways of encouraging creativity in teaching and learner autonomy, both areas having gained ground recently in teaching practices (Weimer 2002). In this context, open-minded instructors should accept ICTs not just as simple gadgets, blindly adapting older practices to them, but as a means of achieving more rewarding learning and teaching opportunities and outcomes.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
http://arxiv.org/abs/1909.09067v1
A Corpus for Automatic Readability Assessment and Text Simplification of German
In this paper, we present a corpus for use in automatic readability assessment and automatic text simplification of German. The corpus is compiled from web sources and consists of approximately 211,000 sentences. As a novel contribution, it contains information on text structure, typography, and images, which can be exploited as part of machine learning approaches to readability assessment and text simplification. The focus of this publication is on representing such information as an extension to an existing corpus standard.
[ "Text Generation", "Paraphrasing", "Semantic Text Processing", "Text Complexity" ]
[ 47, 32, 72, 42 ]
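One readability feature such a German corpus supports is the Amstad adaptation of Flesch Reading Ease, 180 − ASL − 58.5 × ASW, where ASL is average sentence length in words and ASW is average syllables per word. The sketch below uses a rough vowel-group syllable counter and is an illustration, not the corpus authors' pipeline.

```python
# Minimal sketch: Amstad Flesch Reading Ease for German text.
import re

def count_syllables(word: str) -> int:
    # Approximate German syllables as groups of vowels/umlauts.
    return max(1, len(re.findall(r"[aeiouyäöü]+", word.lower())))

def amstad_fre(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[\wäöüß]+", text.lower())
    asl = len(words) / len(sentences)                    # avg sentence length
    asw = sum(map(count_syllables, words)) / len(words)  # avg syllables/word
    return 180 - asl - 58.5 * asw

print(round(amstad_fre("Das ist ein einfacher Satz. Er ist kurz."), 1))
```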
SCOPUS_ID:85144453129
A Corpus for Commonsense Inference in the Story Cloze Test
The Story Cloze Test (SCT) is designed for training and evaluating machine learning algorithms for narrative understanding and inference. State-of-the-art (SOTA) models can achieve over 90% accuracy at predicting the last sentence. However, it has been shown that high accuracy can be achieved merely by using surface-level features, so we suspect these models may not truly understand the story. Based on the SCT dataset, we constructed a human-labeled and human-verified commonsense knowledge inference dataset. Given the first four sentences of a story, we asked crowdsourced workers to choose among four types of narrative inference for deciding the ending sentence, and to identify which sentence contributes most to the inference. We collected data on 1,871 stories, with three human workers labeling each story. Analysis of the intra-category and inter-category agreement shows a high level of consensus. We present two new tasks: predicting the narrative inference category and the contributing sentence. Our results show that transformer-based models can reach SOTA performance on the original SCT task using transfer learning but do not perform well on these new and more challenging tasks.
[ "Commonsense Reasoning", "Reasoning" ]
[ 62, 8 ]
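A minimal sketch of the transfer-learning setup the entry above alludes to: a sequence-pair transformer scores each candidate ending given the story context. The checkpoint name and the story are placeholders; a model actually fine-tuned on SCT-style pairs would be needed for meaningful scores.

```python
# Minimal sketch: rank candidate story endings with a pair classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"  # assumed stand-in; fine-tune before real use
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

context = "Tom forgot his umbrella. Dark clouds gathered over the city."
endings = ["He got soaked on the way home.", "He won the chess tournament."]

scores = []
for ending in endings:
    inputs = tokenizer(context, ending, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    scores.append(logits.softmax(-1)[0, 1].item())  # P(plausible ending)

print(endings[max(range(len(endings)), key=scores.__getitem__)])
```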
SCOPUS_ID:85127416935
A Corpus for Dimensional Sentiment Classification on YouTube Streaming Service
Streaming service platforms such as YouTube provide a discussion function that lets audiences worldwide share comments. YouTubers who upload videos to the platform want to track how these videos perform. However, YouTube's current analysis functions provide only a few performance indicators, such as average view duration, browsing history, and variance in audience demographics, and lack sentiment analysis of audience comments. This paper therefore proposes multi-dimensional sentiment indicators, namely YouTuber preference, video preference, and excitement level, to capture comprehensive sentiment in audience comments on videos and YouTubers. To compare classifiers, we experiment with deep learning-based, machine learning-based, and BERT-based classifiers for automatically detecting the three sentiment indicators in audience comments. Experimental results indicate that the BERT-based classifier outperforms the other classification models on F1-score, with the excitement-level indicator showing a marked improvement. The multiple sentiment detection tasks on a video streaming platform can thus be addressed by the proposed multi-dimensional sentiment indicators paired with a BERT classifier to obtain the best results.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Sentiment Analysis", "Text Classification", "Multimodality" ]
[ 20, 52, 72, 24, 3, 78, 36, 74 ]
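A minimal sketch of BERT-based comment classification, using a widely available public sentiment checkpoint as a stand-in for the paper's fine-tuned three-indicator classifiers; the comments are invented examples.

```python
# Minimal sketch: off-the-shelf BERT sentiment scoring of comments.
from transformers import pipeline

clf = pipeline("text-classification",
               model="nlptown/bert-base-multilingual-uncased-sentiment")

comments = ["This video made my day!", "Boring intro, almost clicked away."]
for c in comments:
    print(c, "->", clf(c)[0])  # e.g. {'label': '5 stars', 'score': ...}
```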
http://arxiv.org/abs/2010.08725v1
A Corpus for English-Japanese Multimodal Neural Machine Translation with Comparable Sentences
Multimodal neural machine translation (NMT) has become an increasingly important area of research over the years because additional modalities, such as image data, can provide more context to textual data. Furthermore, the viability of training multimodal NMT models without a large parallel corpus continues to be investigated due to low availability of parallel sentences with images, particularly for English-Japanese data. However, this void can be filled with comparable sentences that contain bilingual terms and parallel phrases, which are naturally created through media such as social network posts and e-commerce product descriptions. In this paper, we propose a new multimodal English-Japanese corpus with comparable sentences that are compiled from existing image captioning datasets. In addition, we supplement our comparable sentences with a smaller parallel corpus for validation and test purposes. To test the performance of this comparable sentence translation scenario, we train several baseline NMT models with our comparable corpus and evaluate their English-Japanese translation performance. Due to low translation scores in our baseline experiments, we believe that current multimodal NMT models are not designed to effectively utilize comparable sentence data. Despite this, we hope for our corpus to be used to further research into multimodal NMT with comparable sentences.
[ "Visual Data in NLP", "Machine Translation", "Multimodality", "Text Generation", "Multilinguality" ]
[ 20, 51, 74, 47, 0 ]
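One common fusion strategy in multimodal NMT, which baselines like those in the entry above often use, is to project a precomputed image feature vector into the embedding space and prepend it as a pseudo-token to the source sequence. The sketch below uses placeholder dimensions and a random "image"; it is an illustration of the idea, not the authors' models.

```python
# Minimal sketch: prepend a projected image feature as a pseudo-token.
import torch
import torch.nn as nn

d_model, vocab = 256, 8000
embed = nn.Embedding(vocab, d_model)
img_proj = nn.Linear(2048, d_model)   # e.g. from a CNN feature extractor

src_ids = torch.randint(0, vocab, (1, 12))   # tokenized English source
img_feat = torch.randn(1, 2048)              # placeholder image feature

tokens = embed(src_ids)                      # (1, 12, d_model)
img_token = img_proj(img_feat).unsqueeze(1)  # (1, 1, d_model)
encoder_input = torch.cat([img_token, tokens], dim=1)  # (1, 13, d_model)
print(encoder_input.shape)  # feed into a standard NMT encoder
```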
SCOPUS_ID:85135779684
A Corpus for Evaluation of Cross Language Text Re-use Detection Systems
In recent years, the availability of documents on the Internet, together with automatic translation systems, has increased plagiarism, especially across languages. Cross-lingual plagiarism occurs when the source or original text is in one language and the plagiarized or re-used text is in another. Various methods for automatic cross-language text re-use detection have been developed, with the objective of assisting human experts in analyzing documents for plagiarism. Evaluating the performance of these systems and algorithms requires standard evaluation resources. In constructing cross-lingual plagiarism detection corpora, most earlier studies have focused on English and other European language pairs and have paid less attention to low-resource languages. In this paper, we investigate a method for constructing an English-Persian cross-language plagiarism detection corpus based on parallel bilingual sentences, from which passages with various degrees of paraphrasing are artificially generated. The plagiarized passages are inserted into topically related English and Persian Wikipedia articles to yield more realistic text documents. The proposed approach can be applied to other less-resourced languages. To evaluate the compiled corpus, both intrinsic and extrinsic evaluation methods were employed, so the corpus can be suitably included in an evaluation framework for assessing cross-language plagiarism detection systems. Our corpus is free and publicly available for research purposes.
[ "Cross-Lingual Transfer", "Multilinguality" ]
[ 19, 0 ]
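A minimal sketch of cross-language re-use scoring with multilingual sentence embeddings. The checkpoint is a widely used public model, not the corpus authors' detection system, and the decision threshold would need tuning on a corpus such as the one above.

```python
# Minimal sketch: flag candidate cross-language re-use by cosine
# similarity of multilingual sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

english = "The treaty was signed after lengthy negotiations."
persian = "این معاهده پس از مذاکرات طولانی امضا شد."

emb = model.encode([english, persian], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()
print(f"similarity: {score:.2f}  (flag if above a tuned threshold)")
```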