Columns: id (string, length 20–52), title (string, length 3–459), abstract (string, length 0–12.3k), classification_labels (list), numerical_classification_labels (list)
SCOPUS_ID:84905269500
A Khmer named entity recognition method by fusing language characteristics
To address Khmer named entity recognition, we propose a method that fuses Khmer entity characteristics with universal feature templates. Relatively stable entities formed from time expressions and digit expressions are recognized with handcrafted rules; complex entities formed from person names, locations, and organizations are recognized with a Conditional Random Fields model that takes words, parts of speech, contextual information, and Khmer entity characteristics into account. Experimental results show that the named entity recognition method fusing Khmer entity characteristics achieves better performance. © 2014 IEEE.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
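The abstract above splits recognition into handcrafted rules for stable entities (time and digit expressions) plus a CRF model for complex entities. Below is a minimal sketch of the rule-based half; the regex patterns and labels are invented placeholders, since the paper's actual Khmer-specific rules are not given here.

```python
import re

# Hypothetical rule patterns for relatively stable entities (digit and time
# expressions); overlapping matches from different rules would be resolved
# downstream (e.g., by rule priority or by the CRF stage).
RULES = {
    "DIGIT": re.compile(r"\d+(?:[.,]\d+)?%?"),
    "TIME":  re.compile(r"\d{1,2}:\d{2}(?::\d{2})?|\d{4}-\d{2}-\d{2}"),
}

def tag_stable_entities(text):
    """Return (start, end, label) spans found by the handcrafted rules."""
    spans = []
    for label, pattern in RULES.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)

if __name__ == "__main__":
    print(tag_stable_entities("Meeting at 14:30 on 2014-07-01, budget up 1,200%"))
```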
SCOPUS_ID:85081180396
A Kind of Chinese Word Segmentation Algorithm Based on Finite Automaton
Finite automata can be used in Chinese word segmentation. We construct finite automata that recognize each class of words, together with a Chinese word segmentation algorithm based on these automata, so that segmentation is completed by running the automaton-based algorithm. A worked example shows that the algorithm improves segmentation efficiency and that part-of-speech assignment can be completed while the words are being segmented.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
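The abstract above describes building finite automata that recognize words and drive segmentation. A minimal sketch follows, assuming a dictionary-driven automaton (a character trie with longest-match acceptance); the tiny lexicon and POS tags are illustrative placeholders, not the automata constructed in the paper.

```python
# Dictionary-driven segmentation with a trie acting as a simple finite
# automaton over characters; accepting states carry a part-of-speech tag.

def build_trie(lexicon):
    root = {}
    for word, pos in lexicon:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["#"] = pos          # accepting state stores the part of speech
    return root

def segment(text, trie):
    i, out = 0, []
    while i < len(text):
        node, j, last = trie, i, None
        while j < len(text) and text[j] in node:
            node = node[text[j]]
            j += 1
            if "#" in node:
                last = (j, node["#"])   # longest accepted prefix so far
        if last:
            out.append((text[i:last[0]], last[1]))
            i = last[0]
        else:
            out.append((text[i], "UNK"))
            i += 1
    return out

if __name__ == "__main__":
    lexicon = [("中国", "n"), ("人民", "n"), ("银行", "n"), ("中国人民银行", "nt")]
    trie = build_trie(lexicon)
    print(segment("中国人民银行成立", trie))
```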
SCOPUS_ID:85147678200
A Kind of Syntax Parsing Algorithm Based on the Predictive Parsing Table
Syntax parsing algorithms are important algorithms in the field of natural language processing. If a context-free grammar belongs to the class of LL(1) grammars, a predictive parsing table can be constructed from the grammar, and a syntax parsing algorithm can be built on top of this table. Parsing is then completed by running the table-driven algorithm over the predictive parsing table.
[ "Structured Data in NLP", "Syntactic Text Processing", "Multimodality" ]
[ 50, 15, 74 ]
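The abstract above relies on the standard LL(1) table-driven parsing procedure. Below is a minimal sketch of a predictive parser for the toy grammar S -> a S b | ε; the grammar and its table are assumptions for illustration, not the grammars treated in the paper.

```python
# Table-driven LL(1) predictive parser for S -> a S b | epsilon.
TABLE = {
    ("S", "a"): ["a", "S", "b"],   # S -> a S b
    ("S", "b"): [],                # S -> epsilon (b is in Follow(S))
    ("S", "$"): [],                # S -> epsilon ($ is in Follow(S))
}
NONTERMINALS = {"S"}

def parse(tokens):
    tokens = list(tokens) + ["$"]
    stack = ["$", "S"]
    i = 0
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top == look:                       # terminal (or $) matches input
            i += 1
        elif top in NONTERMINALS:
            prod = TABLE.get((top, look))
            if prod is None:
                return False                  # no table entry: reject
            stack.extend(reversed(prod))      # expand by the predicted rule
        else:
            return False                      # terminal mismatch
    return i == len(tokens)

if __name__ == "__main__":
    print(parse("aabb"))   # True
    print(parse("aab"))    # False
```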
SCOPUS_ID:85116127931
A Kind of Syntax Parsing Algorithm Based on the Push-down Automaton
Natural language processing is an important branch of artificial intelligence, and syntax parsing algorithms are among its basic algorithms. A context-free grammar can be transformed into Greibach normal form, and from the grammar in Greibach normal form a push-down automaton can be constructed. A syntax parsing algorithm is then built on the push-down automaton, and parsing is completed by running this automaton-based algorithm.
[ "Syntactic Text Processing" ]
[ 15 ]
SCOPUS_ID:85075629612
A Kind of personalized advertising recommendation method based on user-interest-behavior model
With the rapid development of the internet and the mobile internet, delivering ads online has become the main channel for major advertisers. However, accurate recommendation of online ads remains a major problem for advertisers and agencies. By analyzing the unstructured characteristics of online ads and search-engine user behavior data, we propose a personalized ad recommendation method based on a User-Interest-Behavior model, which extracts users' interest preferences with a topic model and generates a recommended list of ads based on nearest neighbors and user behavior. The experimental results demonstrate that the personalized ad recommendation method based on nearest neighbors and user behavior can recommend personalized ads and performs better than the content-based recommendation method.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
SCOPUS_ID:84942794601
A Kinect-based system for Arabic sign language to speech translation
Speech-based smart systems have come to play an increasingly diverse role in today's pervasive technology. Moreover, it is quite common to experience all kinds of innovations on a daily basis, ranging from retina identifiers at banks to electronic fingerprint readers. Such proliferation presents an opportunity and a challenge to integrate speech- and hearing-challenged individuals into society by designing sign language to speech translation systems. In this paper, we tackle the problem of Arabic Sign Language to Speech transformation. We make use of commercial off-the-shelf components to capture the sign language gestures. Graphical gestures are transformed into Arabic text, which in turn can be translated into any spoken language. Web services are used to generate the spoken sounds. The majority of this paper is dedicated to explaining hand and finger identification; motion recognition is also detailed. The accuracy in identifying the implemented characters was shown to exceed 80%.
[ "Machine Translation", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Multilinguality" ]
[ 51, 70, 74, 47, 0 ]
SCOPUS_ID:85140413338
A Knowledge Distillation Method based on IQE Attention Mechanism for Target Recognition in Sar Imagery
The huge computing and storage requirements of deep convolutional neural networks (DCNNs) limit their application on edge computing devices. In this article, we propose an attention mechanism based on a feature map quality evaluation algorithm (IQE). The knowledge distillation method based on the IQE attention mechanism uses IQE to identify the important knowledge in a pre-trained SAR target recognition deep neural network; during distillation, the lightweight network is then forced to focus on learning this important knowledge. Through this mechanism, the proposed method can efficiently transfer the knowledge of the pre-trained SAR target recognition network to the lightweight network, which makes it possible to deploy the SAR target recognition algorithm on an edge computing platform. Comparison experiments with several commonly used knowledge distillation methods demonstrate the effectiveness of the proposed method. In addition, we also verify the performance of the lightweight network obtained by our method on an edge platform based on the K210 processor.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Green & Sustainable NLP", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 20, 52, 72, 68, 4, 74 ]
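The abstract above combines knowledge distillation with an IQE-based attention weighting. The sketch below shows only generic knowledge distillation (softened teacher targets plus hard-label cross entropy), with per-sample weights standing in for the IQE attention scores, which are not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, weights, T=4.0, alpha=0.7):
    # Soft term: weighted cross entropy against temperature-softened teacher targets.
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(weights[:, None] * p_t * log_p_s).sum(axis=1).mean() * T * T
    # Hard term: standard cross entropy against the ground-truth labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s, t = rng.normal(size=(8, 10)), rng.normal(size=(8, 10))
    y = rng.integers(0, 10, size=8)
    w = np.ones(8)   # stand-in for importance weights from an attention mechanism
    print(distillation_loss(s, t, y, w))
```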
SCOPUS_ID:85083072791
A Knowledge Graph Based Approach for Automatic Speech and Essay Summarization
Every day, large amounts of unstructured data are generated in the form of essays, research papers, speeches, patents, scholarly articles, book chapters, etc. In today's world, it is very important to extract key patterns from huge text passages or verbal speeches. This paper proposes a novel method for summarizing multilingual spoken as well as written paragraphs and speeches using semantic Knowledge Graphs. With the proposed model, large text extracts or speeches can be summarized for better understanding and analysis. The method uses speech recognition as well as Named Entity Recognition to identify entities from spoken content and create optimized Knowledge Graphs in the English language.
[ "Information Extraction & Text Mining", "Semantic Text Processing", "Structured Data in NLP", "Speech & Audio in NLP", "Summarization", "Knowledge Representation", "Named Entity Recognition", "Text Generation", "Multimodality" ]
[ 3, 72, 50, 70, 30, 18, 34, 47, 74 ]
SCOPUS_ID:85118952388
A Knowledge Graph Based Medical Intelligent Question Answering System
Knowledge graphs play a crucial role in the medical field. However, knowledge graphs built for helping patients, especially Chinese patients, are rare. This paper presents a user-friendly automatic question answering system based on a medical knowledge graph. Firstly, a knowledge graph containing five entity types, namely Disease, Drug, Symptom, Department, and Check, is built; over 20,000 nodes and 160,000 relationships are imported from open-source datasets. Secondly, a question answering system is established on top of the knowledge graph: the user's input sentence is segmented, the Term Frequency-Inverse Document Frequency method is used to extract features, and the features are classified by a Naive Bayes model to query results from the knowledge graph. The verification results indicate that the question answering system recognizes the input sentence well and returns good results. In addition, we create a user-friendly Chinese interface to display results for users.
[ "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Natural Language Interfaces", "Multimodality" ]
[ 72, 50, 27, 18, 11, 74 ]
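The pipeline above (segment the question, extract TF-IDF features, classify with Naive Bayes, then query the knowledge graph) can be illustrated with scikit-learn. The example questions and intent labels below are invented placeholders; the real system works on segmented Chinese input and then queries the graph with the predicted intent.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy question-intent training data (placeholders for the paper's Chinese questions).
questions = [
    "what are the symptoms of diabetes",
    "which drug treats hypertension",
    "which department handles skin rash",
    "what checks are needed for chest pain",
]
intents = ["disease_symptom", "disease_drug", "symptom_department", "symptom_check"]

# TF-IDF features fed into a Naive Bayes classifier, as in the abstract.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(questions, intents)

query = "what symptoms does flu have"
intent = clf.predict([query])[0]
print(intent)   # the predicted intent would then select a graph query template
```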
http://arxiv.org/abs/1812.01889v1
A Knowledge Graph Based Solution for Entity Discovery and Linking in Open-Domain Questions
Named entity discovery and linking is the fundamental and core component of question answering. In the Question Entity Discovery and Linking (QEDL) problem, traditional methods are challenged because multiple entities in one short question are difficult to discover entirely, and the incomplete information in short text makes entity linking hard to implement. To overcome these difficulties, we propose a knowledge graph based solution for QEDL and develop a system consisting of a Question Entity Discovery (QED) module and an Entity Linking (EL) module. The method of the QED module is a tradeoff and ensemble of two methods: one is based on knowledge graph retrieval, which extracts more entities in questions and guarantees the recall rate, and the other is based on Conditional Random Fields (CRF), which improves the precision rate. The EL module is treated as a ranking problem, and a Learning to Rank (LTR) method with features such as semantic similarity, text similarity, and entity popularity is utilized to extract and make full use of the information in short texts. On the official dataset of a shared QEDL evaluation task, our approach obtains a 64.44% F1 score for QED and 64.86% accuracy for EL, which ranks 2nd place and indicates its practical use for the QEDL problem.
[ "Semantic Text Processing", "Information Extraction & Text Mining", "Structured Data in NLP", "Knowledge Representation", "Named Entity Recognition", "Multimodality" ]
[ 72, 3, 50, 18, 34, 74 ]
http://arxiv.org/abs/2201.09555v3
A Knowledge Graph Embeddings based Approach for Author Name Disambiguation using Literals
Scholarly data is growing continuously, containing information about articles from a plethora of venues including conferences, journals, etc. Many initiatives have been taken to make scholarly data available as Knowledge Graphs (KGs). These efforts to standardize these data and make them accessible have also led to many challenges such as exploration of scholarly articles, ambiguous authors, etc. This study more specifically targets the problem of Author Name Disambiguation (AND) on Scholarly KGs and presents a novel framework, Literally Author Name Disambiguation (LAND), which utilizes Knowledge Graph Embeddings (KGEs) using multimodal literal information generated from these KGs. This framework is based on three components: 1) multimodal KGEs, 2) a blocking procedure, and finally, 3) hierarchical agglomerative clustering. Extensive experiments have been conducted on two newly created KGs: (i) a KG containing information from the Scientometrics journal from 1978 onwards (OC-782K), and (ii) a KG extracted from a well-known benchmark for AND provided by AMiner (AMiner-534K). The results show that our proposed architecture outperforms our baselines by 8-14% in terms of F1 score and shows competitive performance on a challenging benchmark such as AMiner. The code and the datasets are publicly available through GitHub: https://github.com/sntcristian/and-kge and Zenodo: https://doi.org/10.5281/zenodo.6309855 respectively.
[ "Semantic Text Processing", "Structured Data in NLP", "Representation Learning", "Knowledge Representation", "Multimodality" ]
[ 72, 50, 12, 18, 74 ]
SCOPUS_ID:85137106169
A Knowledge Graph for Automated Construction Workers' Safety Violation Identification
Identifying workers' safety violations on construction job sites is critical for improving construction safety performance. Advances in sensing technologies make automatic safety violation detection possible by encoding safety knowledge into computer programs. However, this requires intensive human effort to turn safety knowledge into computer rules, and the hard-coded rules limit the expandability of the developed applications. This study proposes a condition-based knowledge graph for safety knowledge representation to support reasoning about safety violations. The improved knowledge graph structure addresses this limitation by representing the public knowledge and the safety rules, respectively, in a condition structure. A natural language processing supported approach for automatic knowledge graph development is presented to extract safety knowledge from safety texts automatically and construct the knowledge graph. To validate this construction framework, an initial knowledge graph containing 1,200 rules is developed based on construction safety regulations. The proposed automatic safety knowledge extraction model achieves an F1 value of 67%.
[ "Knowledge Representation", "Structured Data in NLP", "Semantic Text Processing", "Multimodality" ]
[ 18, 50, 72, 74 ]
SCOPUS_ID:85130830428
A Knowledge Graph-Based Abstractive Model Integrating Semantic and Structural Information for Summarizing Chinese Meetings
With the rapid increase of users, online meeting platforms have accumulated massive meeting transcripts. However, it remains a challenge for users to quickly grasp the key information and manage their meetings, even though some useful text summarization models already exist. In this paper, a Knowledge Graph-based Meeting Summarization Framework is proposed to tackle this challenge. First, a two-layer meeting-domain Knowledge Graph is developed to integrate more information about meetings. Based on this graph, an encoder-decoder architecture is used to summarize meetings. For encoding meetings, a structural-level and semantic-level embedding strategy is adopted: concretely, the Knowledge Graph is embedded to obtain structural information, while an interaction intention recognition model and a two-level transformer mechanism are devised to obtain semantic information. Finally, the structural and semantic information are combined and fed into the decoding network to generate meeting summaries. Extensive experiments on a Chinese meeting dataset show that our summarization framework outperforms other state-of-the-art models.
[ "Semantic Text Processing", "Structured Data in NLP", "Summarization", "Knowledge Representation", "Multimodality", "Text Generation", "Information Extraction & Text Mining" ]
[ 72, 50, 30, 18, 74, 47, 3 ]
http://arxiv.org/abs/1810.01375v1
A Knowledge Hunting Framework for Common Sense Reasoning
We introduce an automatic system that achieves state-of-the-art results on the Winograd Schema Challenge (WSC), a common sense reasoning task that requires diverse, complex forms of inference and knowledge. Our method uses a knowledge hunting module to gather text from the web, which serves as evidence for candidate problem resolutions. Given an input problem, our system generates relevant queries to send to a search engine, then extracts and classifies knowledge from the returned results and weighs them to make a resolution. Our approach improves F1 performance on the full WSC by 0.21 over the previous best and represents the first system to exceed 0.5 F1. We further demonstrate that the approach is competitive on the Choice of Plausible Alternatives (COPA) task, which suggests that it is generally applicable.
[ "Commonsense Reasoning", "Reasoning" ]
[ 62, 8 ]
SCOPUS_ID:85104823303
A Knowledge Representation Model for Studying Knowledge Creation, Usage, and Evolution
A knowledge representation model is proposed to facilitate studies on knowledge creation, usage, and evolution. The model uses a three-layer network structure to capture citation relationships among papers, the internal concept structure within individual papers, and the knowledge landscape in a domain. The resulting model can not only reveal the path and direction of knowledge diffusion, but also detail the content of knowledge transferred between papers, new knowledge added, and changing knowledge landscape in a domain. A pilot experiment is carried out using the PMC-OA dataset in the biomedical field. A case study on one knowledge evolution chain of Alzheimer’s Disease demonstrates the use of the model in revealing knowledge creation, usage, and evolution. Initial findings confirm the feasibility of the model for its purpose. Limitations of the study are discussed. Future work will try to address the recognized limitations and apply the model to large scale automated analysis to understand the knowledge production process.
[ "Knowledge Representation", "Semantic Text Processing", "Representation Learning" ]
[ 18, 72, 12 ]
SCOPUS_ID:85099581671
A Knowledge Search Algorithm Based on Multidimensional Semantic Similarity Analysis in Knowledge Graph Systems of Power Grid Networks
In the face of the demand for intelligent information search, knowledge search is now the most promising direction and has been widely used in current knowledge graph research. However, traditional search methods based only on node labels have difficulty capturing the semantic relationships between multidimensional nodes, because structural information and information in other dimensions are neglected; this results in low semantic relevance and low search efficiency for query results. In this paper, in order to improve the semantic relevance and quality of search results, we propose a knowledge search algorithm based on multidimensional semantic similarity analysis for knowledge graph systems, which combines ontology information and multi-hop neighborhood information during the search process. The algorithm is designed for a knowledge graph system developed by the State Grid Anhui power distribution network, China. Several experiments are performed, and the results show that the proposed algorithm outperforms recent knowledge search methods.
[ "Semantic Text Processing", "Structured Data in NLP", "Semantic Similarity", "Knowledge Representation", "Information Retrieval", "Multimodality" ]
[ 72, 50, 53, 18, 24, 74 ]
SCOPUS_ID:85076837552
A Knowledge Selection Model in Pointer-Generator Dialogue Systems
Conversation generation is one of the core problems in natural language processing. In this paper, we present a new model for knowledge-grounded conversations. Our model is enhanced by a copying mechanism, which can produce a response with tokens either copied from the conversation history or generated from decoder states. Besides, to utilize related knowledge precisely, we add an attention mechanism to select the most relevant knowledge at each decoding step. Furthermore, we use beam search to reduce the generation of meaningless responses. Evaluations show that our model achieves better performance than other baselines.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85137701955
A Knowledge Storage and Semantic Space Alignment Method for Multi-documents Dialogue Generation
Question Answering (QA) is a Natural Language Processing (NLP) task that measures language and semantic understanding ability; it requires a system not only to retrieve relevant documents from a large number of articles but also to answer the corresponding questions according to those documents. However, the varied language styles and sources of human questions and evidence documents form different embedding semantic spaces, which may introduce errors into the downstream QA task. To alleviate these problems, we propose a framework for enhancing downstream evidence retrieval by generating evidence, aiming at improving the performance of response generation. Specifically, we take a pre-trained language model as a knowledge base, storing documents' information and knowledge in the model parameters. With the Child-Tuning approach, knowledge storage and evidence generation avoid catastrophic forgetting for response generation. Extensive experiments carried out on a multi-document dataset show that the proposed method improves the final performance, which demonstrates the effectiveness of the proposed framework.
[ "Dialogue Response Generation", "Question Answering", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Retrieval" ]
[ 14, 27, 11, 47, 38, 24 ]
SCOPUS_ID:85081947640
A Knowledge based Approach to Analyze the Sentiment of Online Reviews
In this paper, we present a work that resolves the sentiment of online reviews. First, online reviews about several entities are collected from the internet using a crawler program. Next, the sentiment of each individual review is resolved using two algorithms: the first is a lexical-similarity-based approach, and the second is based on a context-expansion strategy in which the contexts of the reviews are expanded with the help of the English WordNet. The algorithms are tested on online reviews of 15 entities from 5 different domains. The lexical-similarity-based algorithm produced 62% accuracy w.r.t. human judgment, and the context-expansion-based algorithm produced 82% accuracy w.r.t. human judgment. The challenges faced during the experiments are discussed in the pre-conclusion section.
[ "Knowledge Representation", "Semantic Text Processing", "Sentiment Analysis" ]
[ 18, 72, 78 ]
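The first algorithm above scores reviews by lexical similarity, with contexts expanded via the English WordNet. A minimal sketch of WordNet-based polarity scoring follows; the seed words and the sum-of-similarities rule are assumptions, not the exact algorithm evaluated in the paper. Requires the NLTK WordNet corpus to be downloaded.

```python
from nltk.corpus import wordnet as wn

POS_SEEDS = ["good", "excellent"]
NEG_SEEDS = ["bad", "poor"]

def max_similarity(word, seeds):
    """Best WordNet path similarity between any sense of word and any seed sense."""
    best = 0.0
    for syn in wn.synsets(word):
        for seed in seeds:
            for seed_syn in wn.synsets(seed):
                sim = syn.path_similarity(seed_syn)
                if sim is not None:
                    best = max(best, sim)
    return best

def review_polarity(tokens):
    score = sum(max_similarity(t, POS_SEEDS) - max_similarity(t, NEG_SEEDS)
                for t in tokens)
    return "positive" if score > 0 else "negative"

if __name__ == "__main__":
    print(review_polarity("the battery life is excellent".split()))
```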
https://aclanthology.org//2022.dialdoc-1.14/
A Knowledge storage and semantic space alignment Method for Multi-documents dialogue generation
Question Answering (QA) is a Natural Language Processing (NLP) task that measures language and semantic understanding ability; it requires a system not only to retrieve relevant documents from a large number of articles but also to answer the corresponding questions according to those documents. However, the varied language styles and sources of human questions and evidence documents form different embedding semantic spaces, which may introduce errors into the downstream QA task. To alleviate these problems, we propose a framework for enhancing downstream evidence retrieval by generating evidence, aiming at improving the performance of response generation. Specifically, we take a pre-trained language model as a knowledge base, storing documents' information and knowledge in the model parameters. With the Child-Tuning approach, knowledge storage and evidence generation avoid catastrophic forgetting for response generation. Extensive experiments carried out on a multi-document dataset show that the proposed method improves the final performance, which demonstrates the effectiveness of the proposed framework.
[ "Dialogue Response Generation", "Question Answering", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Retrieval" ]
[ 14, 27, 11, 47, 38, 24 ]
http://arxiv.org/abs/1702.08450v1
A Knowledge-Based Approach to Word Sense Disambiguation by distributional selection and semantic features
Word sense disambiguation improves many Natural Language Processing (NLP) applications such as Information Retrieval, Information Extraction, Machine Translation, or Lexical Simplification. Roughly speaking, the aim is to choose for each word in a text its best sense. One of the most popular methods estimates local semantic relatedness between two word senses and then extends it to all words in the text. The most direct method computes a rough score for every pair of word senses and chooses the lexical chain that has the best score (one can imagine the exponential complexity of this exhaustive approach). In this paper, we propose to use a combinatorial optimization metaheuristic for choosing the nearest neighbors obtained by distributional selection around the word to disambiguate. The test and the evaluation of our method concern a corpus written in French, by means of the semantic network BabelNet. The obtained accuracy rate is 78% on all the nouns and verbs chosen for the evaluation.
[ "Knowledge Representation", "Semantic Text Processing", "Word Sense Disambiguation" ]
[ 18, 72, 65 ]
SCOPUS_ID:85113793740
A Knowledge-Based Deep Learning Architecture for Aspect-Based Sentiment Analysis
The task of sentiment analysis tries to predict the affective state of a document by examining its content and metadata through the application of machine learning techniques. Recent advances in the field consider sentiment to be a multi-dimensional quantity that pertains to different interpretations (or aspects), rather than a single one. Based on earlier research, the current work examines the said task in the framework of a larger architecture that crawls documents from various online sources. Subsequently, the collected data are pre-processed, in order to extract useful features that assist the machine learning algorithms in the sentiment analysis task. More specifically, the words that comprise each text are mapped to a neural embedding space and are provided to a hybrid, bi-directional long short-term memory network, coupled with convolutional layers and an attention mechanism that outputs the final textual features. Additionally, a number of document metadata are extracted, including the number of a document's repetitions in the collected corpus (i.e. number of reposts/retweets), the frequency and type of emoji ideograms and the presence of keywords, either extracted automatically or assigned manually, in the form of hashtags. The novelty of the proposed approach lies in the semantic annotation of the retrieved keywords, since an ontology-based knowledge management system is queried, with the purpose of retrieving the classes the aforementioned keywords belong to. Finally, all features are provided to a fully connected, multi-layered, feed-forward artificial neural network that performs the analysis task. The overall architecture is compared, on a manually collected corpus of documents, with two other state-of-the-art approaches, achieving optimal results in identifying negative sentiment, which is of particular interest to certain parties (like for example, companies) that are interested in measuring their online reputation.
[ "Aspect-based Sentiment Analysis", "Information Retrieval", "Sentiment Analysis" ]
[ 23, 24, 78 ]
SCOPUS_ID:85052695921
A Knowledge-Based Recommendation System That Includes Sentiment Analysis and Deep Learning
Online social networks provide relevant information on users' opinions about different themes. Thus, applications such as monitoring and recommendation systems (RS) can collect and analyze this data. This paper presents a knowledge-based recommendation system (KBRS), which includes an emotional health monitoring system to detect users with potential psychological disturbances, specifically depression and stress. Depending on the monitoring results, the KBRS, based on ontologies and sentiment analysis, is activated to send happy, calm, relaxing, or motivational messages to users with psychological disturbances. Also, the solution includes a mechanism to send warning messages to authorized persons in case a depression disturbance is detected by the monitoring system. The detection of sentences with depressive and stressful content is performed through a convolutional neural network and a bidirectional long short-term memory recurrent neural network (RNN); the proposed method reached an accuracy of 0.89 and 0.90 in detecting depressed and stressed users, respectively. Experimental results show that the proposed KBRS reached a rating of 94% of very satisfied users, as opposed to 69% reached by an RS that uses neither a sentiment metric nor ontologies. Additionally, subjective test results demonstrated that the proposed solution consumes little memory, processing, and energy on current mobile electronic devices.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85048858707
A Knowledge-Based Semisupervised Hierarchical Online Topic Detection Framework
Topic models have achieved big success in recent years. To detect topics in a text stream, various online topic models have been proposed in the literature. The limitations of these works include that: 1) most of them run with fixed topic numbers and 2) the overlaps between the topics may enlarge in the evolving process. Hierarchical topic model is a candidate solution to these problems since it can reveal many useful relationships between the topics. These relationships can help to find high quality topics and reduce topic overlaps. In this paper, a knowledge-based semisupervised hierarchical online topic detection framework is proposed. The proposed framework can detect topics in an online hierarchical way. In addition, it has been proven that introducing external knowledge can improve the performance of text mining. Therefore, the knowledge from external knowledge sources and human experts are also integrated in the proposed framework. Experiments are conducted to evaluate the proposed framework with different metrics. The results show that compared with the baseline methods, our framework can achieve better performance with competitive time efficiency.
[ "Topic Modeling", "Knowledge Representation", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 9, 18, 72, 3 ]
http://arxiv.org/abs/2010.05357v2
A Knowledge-Driven Approach to Classifying Object and Attribute Coreferences in Opinion Mining
Classifying and resolving coreferences of objects (e.g., product names) and attributes (e.g., product aspects) in opinionated reviews is crucial for improving opinion mining performance. However, the task is challenging as one often needs to consider domain-specific knowledge (e.g., iPad is a tablet and has aspect resolution) to identify coreferences in opinionated reviews. Also, compiling a handcrafted and curated domain-specific knowledge base for each domain is very time consuming and arduous. This paper proposes an approach to automatically mine and leverage domain-specific knowledge for classifying object and attribute coreferences. The approach extracts domain-specific knowledge from unlabeled review data and trains a knowledge-aware neural coreference classification model to leverage (useful) domain knowledge together with general commonsense knowledge for the task. Experimental evaluation on real-world datasets involving five domains (product types) shows the effectiveness of the approach.
[ "Information Retrieval", "Opinion Mining", "Sentiment Analysis", "Coreference Resolution", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 49, 78, 13, 36, 3 ]
SCOPUS_ID:85140721426
A Knowledge-Enhanced Adversarial Model for Cross-lingual Structured Sentiment Analysis
Structured sentiment analysis, which aims to extract the complex semantic structures such as holders, expressions, targets, and polarities, has obtained widespread attention from both industry and academia. Unfortunately, the existing structured sentiment analysis datasets refer to a few languages and are relatively small, limiting neural network models' performance. In this paper, we focus on the cross-lingual structured sentiment analysis task, which aims to transfer the knowledge from the source language to the target one. Notably, we propose a Knowledge-Enhanced Adversarial Model (KEAM) with both implicit distributed and explicit structural knowledge to enhance the cross-lingual transfer. First, we design an adversarial embedding adapter for learning an informative and robust representation by capturing implicit semantic information from diverse multi-lingual embeddings adaptively. Then, we propose a syntax GCN encoder to transfer the explicit semantic information (e.g., universal dependency tree) among multiple languages. We conduct experiments on five datasets and compare KEAM with both the supervised and unsupervised methods. The extensive experimental results show that our KEAM model outperforms all the unsupervised baselines in various metrics.
[ "Multilinguality", "Low-Resource NLP", "Semantic Text Processing", "Robustness in NLP", "Representation Learning", "Knowledge Representation", "Sentiment Analysis", "Cross-Lingual Transfer", "Responsible & Trustworthy NLP" ]
[ 0, 80, 72, 58, 12, 18, 78, 19, 4 ]
SCOPUS_ID:85146682230
A Knowledge-Enhanced Model with Dual-Channel Encoder for Joint Entity and Relation Extraction from Biomedical Literature
Biomedical entity and relation extraction has attracted increasing attention recently, but it remains challenging due to the domain-specific features of the biomedical corpus. Hence, many researchers consider utilizing external knowledge from large-scale databases to enhance the semantic understanding of models. However, these knowledge-enhanced methods usually enrich context information by incorporating context-independent knowledge into entity representations and lack effective interaction. Inspired by pre-trained language models, we argue that knowledge representations need to be trainable and adapted to different contexts. Therefore, we propose the Knowledge-enhanced Dual-channel Iterative Model (KeDcIM), a novel end-to-end joint model for biomedical entity and relation extraction. Experiments show that KeDcIM achieves new state-of-the-art results on two benchmark datasets.
[ "Language Models", "Semantic Text Processing", "Relation Extraction", "Knowledge Representation", "Information Extraction & Text Mining" ]
[ 52, 72, 75, 18, 3 ]
http://arxiv.org/abs/2001.05139v1
A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation
Story generation, namely generating a reasonable story from a leading context, is an important but challenging task. In spite of the success in modeling fluency and local coherence, existing neural language generation models (e.g., GPT-2) still suffer from repetition, logic conflicts, and lack of long-range coherence in generated stories. We conjecture that this is because of the difficulty of associating relevant commonsense knowledge, understanding the causal relationships, and planning entities and events with proper temporal order. In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation. We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories. To further capture the causal and temporal dependencies between the sentences in a reasonable story, we employ multi-task learning which combines a discriminative objective to distinguish true and fake stories during fine-tuning. Automatic and manual evaluation shows that our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
[ "Language Models", "Semantic Text Processing", "Commonsense Reasoning", "Knowledge Representation", "Text Generation", "Reasoning" ]
[ 52, 72, 62, 18, 47, 8 ]
SCOPUS_ID:85091705622
A Knowledge-Enriched Model for Emotional Conversation Generation
In this poster, we propose a knowledge-enriched emotional conversation generation model (KE-EGM) that can ensure high quality content and focus on the impact of emotional factors during the conversation. First, we apply a multi-embedding fusion layer to provide this model with the token-level and sentence-level understanding. Then, the emotion flow attention mechanism combines flow emotion state and attention mechanism to learn and capture emotional information during the conversation dynamically. Finally, the multi-objective optimization mechanism is introduced to detect and generate fine-grained emotional responses. The experimental results show that KE-EGM outperforms several baselines not only in the content aspect but also in the emotional aspect.
[ "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 14, 11, 47, 38 ]
SCOPUS_ID:85103620101
A Knowledge-Enriched and Span-Based Network for Joint Entity and Relation Extraction
The joint extraction of entities and their relations from certain texts plays a significant role in most natural language processing tasks. For entity and relation extraction in a specific domain, we propose a hybrid neural framework consisting of two parts: a span-based model and a graph-based model. The span-based model can tackle overlapping problems compared with BILOU methods, whereas the graph-based model treats relation prediction as graph classification. Our main contribution is to incorporate external lexical and syntactic knowledge of a specific domain, such as domain dictionaries and dependency structures from texts, into end-to-end neural models. We conducted extensive experiments on a Chinese military entity and relation extraction corpus. The results show that the proposed framework outperforms the baselines with better performance in terms of entity and relation prediction. The proposed method provides insight into problems with the joint extraction of entities and their relations.
[ "Multimodality", "Relation Extraction", "Structured Data in NLP", "Information Extraction & Text Mining" ]
[ 74, 75, 50, 3 ]
http://arxiv.org/abs/2106.14444v1
A Knowledge-Grounded Dialog System Based on Pre-Trained Language Models
We present a knowledge-grounded dialog system developed for the ninth Dialog System Technology Challenge (DSTC9) Track 1 - Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access. We leverage transfer learning with existing language models to accomplish the tasks in this challenge track. Specifically, we divided the task into four sub-tasks and fine-tuned several Transformer models on each of the sub-tasks. We made additional changes that yielded gains in both performance and efficiency, including the combination of the model with traditional entity-matching techniques, and the addition of a pointer network to the output layer of the language model.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
https://aclanthology.org//W18-5709/
A Knowledge-Grounded Multimodal Search-Based Conversational Agent
Multimodal search-based dialogue is a challenging new task: It extends visually grounded question answering systems into multi-turn conversations with access to an external database. We address this new challenge by learning a neural response generation system from the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded multimodal conversational model where an encoded knowledge base (KB) representation is appended to the decoder input. Our model substantially outperforms strong baselines in terms of text-based similarity measures (over 9 BLEU points, 3 of which are solely due to the use of additional information from the KB).
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Information Retrieval", "Multimodality" ]
[ 11, 38, 24, 74 ]
SCOPUS_ID:85146699821
A Knowledge-Grounded Task-Oriented Dialogue System with Hierarchical Structure for Enhancing Knowledge Selection
For a task-oriented dialogue system to provide appropriate answers to and services for users' questions, it must be able to utilize knowledge related to the topic of the conversation. The system should therefore be able to select the most appropriate knowledge snippet from the knowledge base, where external unstructured knowledge is used to respond to user requests that cannot be solved by the internal knowledge addressed by the database or application programming interface. To this end, this paper constructs a three-step knowledge-grounded task-oriented dialogue system with knowledge-seeking-turn detection, knowledge selection, and knowledge-grounded generation. In particular, we propose a hierarchical structure of domain-classification, entity-extraction, and snippet-ranking tasks by subdividing the knowledge selection step. Each task is performed through a pre-trained language model with advanced techniques to finally determine the knowledge snippet used to generate a response. Furthermore, the domain and entity information obtained from the previous task is used as knowledge to reduce the search range of candidates, thereby improving the performance and efficiency of knowledge selection, as demonstrated through experiments.
[ "Natural Language Interfaces", "Named Entity Recognition", "Information Extraction & Text Mining", "Dialogue Systems & Conversational Agents" ]
[ 11, 34, 3, 38 ]
https://aclanthology.org//W09-4006/
A Knowledge-Rich Approach to Measuring the Similarity between Bulgarian and Russian Words
[ "Multilinguality" ]
[ 0 ]
SCOPUS_ID:85125166307
A Knowledge-aware Machine Reading Comprehension Framework for Dialogue Symptom Diagnosis
Symptom diagnosis in dialogue remains a challenging task because the symptom entities and their statuses need to be extracted correctly at the same time. Most previous studies treat symptom diagnosis as a classification or sequence labeling task and focus on using single-sentence dialogue as input. Unlike past studies, in this paper we propose a new framework for dialogue symptom diagnosis that formulates it as a machine reading comprehension (MRC) task. We first use window-level multi-turn dialogue as input and extract the symptom entities. Then, we generate a question for each entity to infer the symptom status in the form of question answering (QA). Benefiting from the MRC formalization, our proposed framework can encode more informative prior knowledge, which effectively improves the performance of symptom status inference. Experiments on a Chinese medical dialogue dataset show that the proposed framework outperforms the previous best model and several competitive baselines, which indicates that our framework provides a useful direction for dialogue symptom diagnosis. The code and data are publicly available at https://github.com/zhaoxiongjun/DSD.
[ "Machine Reading Comprehension", "Natural Language Interfaces", "Reasoning", "Dialogue Systems & Conversational Agents" ]
[ 37, 11, 8, 38 ]
http://arxiv.org/abs/2107.02040v1
A Knowledge-based Approach for Answering Complex Questions in Persian
Research on open-domain question answering (QA) has a long tradition. A challenge in this domain is answering complex questions (CQA), which requires complex inference methods and large amounts of knowledge. In low-resource languages such as Persian, there are not many datasets for open-domain complex questions, and the language processing toolkits are not very accurate. In this paper, we propose a knowledge-based approach for answering Persian complex questions using Farsbase, the Persian knowledge graph, exploiting PeCoQ, the newly created complex Persian question dataset. In this work, we handle multi-constraint and multi-hop questions by building their set of possible corresponding logical forms. Multilingual BERT is then used to select the logical form that best describes the input complex question syntactically and semantically. The answer to the question is built from the answer to the logical form, extracted from the knowledge graph. Experiments show that our approach outperforms other approaches on Persian CQA.
[ "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Natural Language Interfaces", "Multimodality" ]
[ 72, 50, 27, 18, 11, 74 ]
SCOPUS_ID:85089242706
A Knowledge-based Method for Filtering Geo-entity Relations
Knowledge Graphs (KGs) are crucial resources for supporting geographical knowledge services. Given the vast amount of geographical knowledge in web text, extraction of geo-entity relations from web text has become the core technology for constructing geographical KGs, and it directly affects the quality of geographical knowledge services. However, web text inevitably contains noise and geographical knowledge can be sparsely distributed, both of which greatly restrict the quality of geo-entity relation extraction. Here, we propose a method for filtering geo-entity relations based on existing Knowledge Bases (KBs). Specifically, ontology knowledge, fact knowledge, and synonym knowledge are integrated to generate geo-related knowledge. Then, the extracted geo-entity relations and the geo-related knowledge are converted into vectors, and the maximum similarity between vectors is taken as the confidence value of an extracted geo-entity relation triple. Our method takes full advantage of existing KBs to assess the quality of geographical information in web text, which helps improve the richness and freshness of geographical KGs. Compared with the Stanford OpenIE method, our method decreased the Mean Square Error (MSE) from 0.62 to 0.06 in the confidence interval [0.7, 1] and improved the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) from 0.51 to 0.89.
[ "Semantic Text Processing", "Relation Extraction", "Structured Data in NLP", "Knowledge Representation", "Multimodality", "Information Extraction & Text Mining" ]
[ 72, 75, 50, 18, 74, 3 ]
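The confidence step above takes the maximum similarity between the vector of an extracted triple and the vectors of KB-derived knowledge. Below is a minimal sketch using cosine similarity over placeholder random vectors; the paper's actual vectors are built from ontology, fact, and synonym knowledge.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def confidence(triple_vec, kb_vecs):
    # Confidence of an extracted triple = maximum similarity to any KB knowledge vector.
    return max(cosine(triple_vec, kb) for kb in kb_vecs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    extracted = rng.normal(size=64)        # placeholder vector of an extracted triple
    kb = rng.normal(size=(100, 64))        # placeholder KB knowledge vectors
    print(round(confidence(extracted, kb), 3))
```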
SCOPUS_ID:85133556150
A Korean menu-ordering sentence text-to-speech system using conformer-based FastSpeech2
In this paper, we present a Korean menu-ordering sentence Text-to-Speech (TTS) system using Conformer-based FastSpeech2. The Conformer is the convolution-augmented Transformer, originally proposed for speech recognition. By combining the two structures, the Conformer extracts better local and global features: it comprises two half feed-forward modules at the front and the end, sandwiching the multi-head self-attention module and the convolution module. We introduce the Conformer into Korean TTS, as it is known to work well in Korean speech recognition. To compare a Transformer-based TTS model with a Conformer-based one, we train FastSpeech2 and Conformer-based FastSpeech2. We collected a phoneme-balanced data set and used it for training our models. This corpus comprises not only general conversation but also menu-ordering conversation consisting mainly of loanwords, addressing the degradation of current Korean TTS models on loanwords. When generating synthesized speech with a Parallel WaveGAN vocoder, the Conformer-based FastSpeech2 achieved a superior MOS of 4.04. We confirm that model performance improves when the same structure is changed from Transformer to Conformer in Korean TTS.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
SCOPUS_ID:85088389879
A Korean named entity recognition method using Bi-LSTM-CRF and masked self-attention
Named entity recognition (NER) is a fundamental task in natural language processing. Existing Korean NER methods use the Korean morpheme, syllable sequence, and part of speech as features, and use a sequence labeling model to tackle this problem. In Korean, on the one hand, the morpheme itself contains strong indicative information about the named entity (especially for time and person). On the other hand, the context of the target morpheme plays an important role in recognizing the named entity (NE) tag of the target morpheme. To make full use of these two features, we propose two auxiliary tasks. One is the morpheme-level NE tagging task, which captures the NE feature of the syllable sequence composing the morpheme. The other is the context-based NE tagging task, which aims to capture the context feature of the target morpheme through a masked self-attention network. These two tasks are jointly trained with a Bi-LSTM-CRF NER tagger. Experimental results on the Klpexpo 2016 corpus and the Naver NLP Challenge 2018 corpus show that our model outperforms strong baseline systems and achieves the state of the art.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Named Entity Recognition", "Tagging", "Information Extraction & Text Mining" ]
[ 52, 72, 15, 34, 63, 3 ]
SCOPUS_ID:0012539717
A LANGUAGE MODEL COMBINING N-GRAMS AND STOCHASTIC FINITE STATE AUTOMATA
This paper describes a new kind of language model composed of several local models and a general model linking the local models together. Local models describe subparts of the textual data more finely than a conventional n-gram model trained on the complete corpus; they are built on lexical and syntactic criteria. Both the local and the global models are integrated into a single hidden Markov model. Experiments showed a 14% decrease in perplexity compared to a bigram model on a small corpus of telephone communications.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
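Perplexity, the evaluation measure quoted above, can be illustrated with a plain bigram model. The sketch below computes add-one-smoothed bigram perplexity on a toy corpus; the corpus and the smoothing choice are assumptions, not the paper's setup.

```python
import math
from collections import Counter

def train_bigram(sentences):
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def perplexity(sentences, uni, bi):
    V = len(uni)
    log_sum, n = 0.0, 0
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        for w1, w2 in zip(toks, toks[1:]):
            p = (bi[(w1, w2)] + 1) / (uni[w1] + V)   # add-one smoothing
            log_sum += math.log(p)
            n += 1
    return math.exp(-log_sum / n)

if __name__ == "__main__":
    train = ["hello how are you", "how are you today"]
    uni, bi = train_bigram(train)
    print(perplexity(["how are you"], uni, bi))
```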
SCOPUS_ID:85128390561
A LANGUAGE MODEL COMBINING TRIGRAMS AND STOCHASTIC CONTEXT-FREE GRAMMARS
We propose a class trigram language model in which each class is specified by a stochastic context-free grammar. We show how to estimate the parameters of the model, and how to smooth these estimates. We present experimental perplexity and speech recognition results.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:0012357239
A LANGUAGE MODEL FOR COMPOUND WORDS IN SPEECH RECOGNITION
In several languages, words can be aggregated into compound words. In present speech recognition systems, compound words are treated as additional single words. This creates redundancies in the phonetic word models that have to be stored and searched during recognition. Moreover, it leads to weaknesses in word or n-gram frequency estimates in language models. This paper describes a novel approach to speech recognition with vocabularies that contain only the constituent words of compounds. The recognition of a compound word is performed via a dedicated accessory language model that evaluates compound word hypotheses only. In this way, very large vocabularies (> 100,000 words) can be handled efficiently. In preliminary recognition tests, the model performed well.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
SCOPUS_ID:85119348114
A LANGUAGE PRIOR BASED FOCAL LOSS FOR VISUAL QUESTION ANSWERING
According to current research, one of the major challenges for Visual Question Answering (VQA) models is overdependence on language priors (and neglect of the visual modality). VQA models tend to predict answers based only on superficial correlations between the first few words in the question and the frequency of related answer candidates. To address this issue, we propose a novel Language Prior based Focal Loss (LP-Focal Loss) obtained by rescaling the standard cross-entropy loss. Specifically, we employ a question-only branch to capture the language bias for each answer candidate based on the corresponding question input. Then, the LP-Focal Loss dynamically assigns lower weights to biased answers when computing the training loss, thereby reducing the contribution of more-biased instances in the train split. Extensive experiments show that the LP-Focal Loss can be generally applied to common baseline VQA models and achieves significantly better performance on the VQA-CP v2 dataset, with an overall 18% accuracy boost over benchmark models.
[ "Visual Data in NLP", "Natural Language Interfaces", "Question Answering", "Multimodality" ]
[ 20, 11, 27, 74 ]
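The loss above rescales cross entropy using the bias captured by a question-only branch. The sketch below implements a focal-style weighting in that spirit: the stronger the question-only probability of the gold answer, the smaller its weight. This is an assumed variant for illustration, not the exact LP-Focal formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def lp_weighted_ce(vqa_logits, question_only_probs, answers, gamma=2.0):
    p = softmax(vqa_logits)
    idx = np.arange(len(answers))
    bias = question_only_probs[idx, answers]   # language-prior strength of the gold answer
    weight = (1.0 - bias) ** gamma             # down-weight biased instances
    ce = -np.log(p[idx, answers] + 1e-12)
    return float((weight * ce).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 5))
    q_probs = softmax(rng.normal(size=(4, 5)))  # stand-in for question-only branch output
    gold = np.array([0, 2, 1, 4])
    print(lp_weighted_ce(logits, q_probs, gold))
```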
SCOPUS_ID:84901676539
A LDA feature grouping method for subspace clustering of text data
This paper proposes a feature grouping method for clustering of text data. In this new method, the vector space model is used to represent a set of documents. The LDA algorithm is applied to the text data to generate groups of features as topics. The topics are treated as group features, which enables the recently published subspace clustering algorithm FG-k-means to be used to cluster high-dimensional text data with two-level features: the word level and the group level. In generating the group-level features with LDA, an entropy-based word filtering method is proposed to remove the words with low probabilities in the word distribution of the corresponding topics. Experiments were conducted on three real-life text data sets to compare the new method with three existing clustering algorithms. The experimental results show that the new method improved the clustering performance in comparison with other methods. © 2014 Springer International Publishing.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
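The first stage above uses LDA topics as word groups and filters low-probability words from each topic's distribution. A minimal scikit-learn sketch follows; the toy documents and the uniform-probability threshold are placeholders for the paper's entropy-based filtering rule.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "machine learning models learn patterns from data",
    "deep learning uses neural networks and large data",
    "football players score goals in the match",
    "the match ended after the players scored twice",
]

# Fit LDA on the bag-of-words representation; each topic becomes a word group.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = np.array(vec.get_feature_names_out())
for k, row in enumerate(lda.components_):
    p = row / row.sum()              # word distribution of topic k
    keep = p > 1.0 / len(p)          # drop words below the uniform level (placeholder rule)
    print(f"group {k}:", sorted(vocab[keep], key=lambda w: -p[vocab == w][0]))
```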
SCOPUS_ID:84986550410
A LDA model based text-mining method to recommend reviewer for proposal of research project selection
Reviewer recommendation for research project proposals plays an indispensable role in funding agencies, because reviewers' opinions and feedback have a direct impact on the outcome of project selection. Current methods mainly focus on grouping proposals by declared disciplines or evaluating reviewers based on their individual profiles; however, both methods ignore the rich information of different types and formats in proposals and expert profiles, such as subjective information (e.g., colleague evaluations) and objective information (e.g., number of publications). Besides, prior studies mostly apply to English documents, which is a limitation when dealing with project proposals in Chinese. To address this gap of ignoring different information forms and the Chinese context, this paper first extracts topic words from proposals with LDA and from experts' profiles with text mining; second, it automatically classifies the information of proposals and profiles and integrates it into several categories according to its type, where each category represents a different dimension of information about the proposal and the expert. Third, we calculate the similarity of the information in each category and rank by similarity to select the top 8 experts as candidate reviewers. Finally, we establish an evaluation model for the candidate reviewers to decide which reviewers will review the proposal. A recommendation approach is thus proposed by integrating these categories of information. In future research, we will evaluate the proposed approach using real data.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:84958529165
A LDA-based algorithm for length-aware text clustering
The proliferation of texts on the Web presents great challenges for knowledge discovery in text collections. Clustering provides a powerful tool to organize information and recognize its structure. Most text clustering techniques are designed to deal with either long or short texts. However, many real-life collections are made up of both long and short texts, namely mixed-length texts. Current text clustering techniques are unsatisfactory for such collections, because they do not account for the sparseness and high dimensionality of mixed-length texts. In this paper, we propose a novel approach, Length-Aware Dual Latent Dirichlet Allocation (ADLDA), which clusters mixed-length texts by obtaining auxiliary knowledge from long (short) texts for short (long) texts in the collection. The degree of mutual assistance is based on the ratio of long texts to short texts in a corpus. Experimental results on real datasets show that our approach achieves superior performance over other state-of-the-art text clustering approaches for mixed-length texts. © 2014 Springer International Publishing Switzerland.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
SCOPUS_ID:84941011793
A LDA-based approach to keyphrase extraction
Because existing methods lack a comprehensive analysis of document-topic coverage and of the readability and distinctiveness of keyphrases, a new keyphrase extraction algorithm, TFITF, based on a latent topic model is put forward. The algorithm uses a large-scale corpus to produce a latent topic model, calculates the TFITF weight of each word with respect to the topics, and further derives the word's weight with respect to the document. Adjacent words are then ranked and selected as candidate keyphrases based on co-occurrence information, and redundant phrases are eliminated according to the topical similarity of the words. In addition, comparative experiments on candidate keyphrases were carried out using document statistics, lexical chains, and topic information. The experimental results, obtained on an evaluation dataset of 1,040 Chinese documents and 5,408 standard keyphrases, demonstrate that the method effectively improves the precision and recall of keyphrase extraction.
[ "Topic Modeling", "Term Extraction", "Information Extraction & Text Mining" ]
[ 9, 1, 3 ]
SCOPUS_ID:84871810316
A LDA-based approach to promoting ranking diversity for genomics information retrieval
Background: In the biomedical domain, there are immense data and a tremendous increase in genomics and biomedically relevant publications. This wealth of information has led to an increasing amount of interest in and need for applying information retrieval techniques to access the scientific literature in genomics and related biomedical disciplines. In many cases, the desired answer to a query asked by biologists is a list of a certain type of entities covering different aspects related to the question, such as cells, genes, diseases, proteins, mutations, etc. Hence, it is important for a biomedical IR system to be able to provide relevant and diverse answers that fulfill biologists' information needs. However, the traditional IR model is only concerned with the relevance between retrieved documents and the user query, and does not take redundancy between retrieved documents into account. This leads to high redundancy and low diversity in the retrieved ranked lists. Results: In this paper, we propose an approach that employs a generative topic model, Latent Dirichlet Allocation (LDA), to promote ranking diversity for biomedical information retrieval. Unlike other approaches or models that consider aspects at the word level, our approach assumes that aspects should be identified from the topics of retrieved documents. We use the LDA model to discover the topic distribution of retrieved passages and the word distribution of each topic dimension, and then re-rank the retrieval results by the topic-distribution similarity between passages based on an N-size sliding window. We apply our approach to the TREC 2007 Genomics collection and two distinct IR baseline runs, achieving an 8% improvement over the highest Aspect MAP reported in the TREC 2007 Genomics track. Conclusions: The proposed method is the first study to adopt a topic model for genomics information retrieval, and it demonstrates its effectiveness in promoting ranking diversity as well as in improving the relevance of ranked lists in genomics search. Moreover, we propose a distance measure, a modified Euclidean distance, to quantify how much a passage can increase topical diversity by considering both the topical importance and the topical coefficients given by LDA.
[ "Passage Retrieval", "Information Retrieval" ]
[ 66, 24 ]
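The re-ranking idea in the abstract above can be approximated with a greedy, MMR-style selection over LDA topic vectors. The sketch below is only an illustration: it uses plain Euclidean distance and a relevance/diversity trade-off parameter, and omits the paper's modified distance measure and sliding-window details.

```python
# Illustrative greedy re-ranking that promotes topical diversity: at each step pick
# the passage with the best trade-off between its retrieval score and its distance
# to already-selected passages in LDA topic space.
import numpy as np

def diversify(retrieval_scores, topic_dists, k=10, lam=0.7):
    """retrieval_scores: (n,) relevance scores; topic_dists: (n, T) doc-topic rows."""
    n = len(retrieval_scores)
    selected, remaining = [], list(range(n))
    while remaining and len(selected) < k:
        best, best_val = None, -np.inf
        for i in remaining:
            if selected:
                # distance to the closest already-selected passage
                div = min(np.linalg.norm(topic_dists[i] - topic_dists[j]) for j in selected)
            else:
                div = 0.0
            val = lam * retrieval_scores[i] + (1 - lam) * div
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
        remaining.remove(best)
    return selected

scores = np.array([0.9, 0.85, 0.84, 0.5])
topics = np.array([[0.8, 0.1, 0.1], [0.79, 0.11, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]])
print(diversify(scores, topics, k=3))   # prefers topically distinct passages
```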
SCOPUS_ID:85126699517
A LEBERT-Based Model for Named Entity Recognition
Recently, many works have tried to improve Chinese named entity recognition (NER) using word lexicons. Because traditional named entity recognition models cannot integrate lexical information into their embeddings, the LEBERT-BiLSTM-CRF model is proposed for NER on elementary mathematics texts. Lexicon Enhanced BERT (LEBERT) integrates external lexicon knowledge directly into the BERT layers through a lexicon adapter layer. The embeddings obtained after training the LEBERT model are fed into a BiLSTM for feature extraction, and a CRF layer is finally used for label correction. Experiments on the datasets show that LEBERT-BiLSTM-CRF outperforms BiLSTM-CRF baselines, reaching an F1 score of 95.02%. Compared with other NER models, the LEBERT-BiLSTM-CRF model also performs better.
[ "Language Models", "Semantic Text Processing", "Representation Learning", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 52, 72, 12, 34, 3 ]
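The lexicon adapter is specific to LEBERT and is not reproduced here, but the downstream BiLSTM-CRF tagging head that the abstract describes can be sketched. The snippet below is a hedged illustration assuming the third-party `pytorch-crf` package (import name `torchcrf`) and random tensors standing in for contextual embeddings.

```python
# Sketch of a BiLSTM-CRF tagging head of the kind used on top of (LE)BERT embeddings.
# Assumes the third-party `pytorch-crf` package; the LEBERT lexicon adapter itself
# is not reproduced here.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRFHead(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, num_tags=9):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)    # emission scores per tag
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, embeddings, tags=None, mask=None):
        feats, _ = self.lstm(embeddings)
        emissions = self.proj(feats)
        if tags is not None:                            # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)    # inference: best tag sequences

# Toy usage with random "contextual embeddings" standing in for encoder output.
model = BiLSTMCRFHead()
emb = torch.randn(2, 12, 768)                           # (batch, seq_len, emb_dim)
tags = torch.randint(0, 9, (2, 12))
mask = torch.ones(2, 12, dtype=torch.bool)
loss = model(emb, tags, mask)
pred = model(emb, mask=mask)
print(loss.item(), pred[0][:5])
```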
SCOPUS_ID:85005207699
A LOOK AT THE SEMANTIC DIFFERENTIAL AS A TOOL TO ASSIST FACULTY TEACHING EVALUATIONS
The evaluation of the effectiveness of faculty teaching is a difficult process. This paper presents the results of an experimental empirical investigation into the possibility of using a psycho‐linguistic measurement technique, the Semantic Differential, to measure faculty communication of terminology to students. This measurement could be used to supplement other teaching evaluation devices. The paper determines a set of “key concepts,” derives a set of S.D. meanings from a Faculty to act as a standard, and measures student meanings at the beginning and end of an introductory course. These sets of meanings are then compared. Copyright © 1973, Wiley Blackwell. All rights reserved
[ "Psycholinguistics", "Linguistics & Cognitive NLP" ]
[ 77, 48 ]
SCOPUS_ID:85064573683
A LSTM Approach for Sales Forecasting of Goods with Short-Term Demands in E-Commerce
This study proposes a model to forecast short-term goods demand in an e-commerce context. The model integrates an LSTM approach with sentiment analysis of consumers' comments. In the training stage, the sales figures and comments crawled from "taobao.com" were preprocessed, and each comment was analyzed for "positive" and "negative" sentiment ratings together with a confidence score. The LSTM model was trained to predict future values from the time-series sequence of sales and comment sentiment ratings. Because short-term goods lack sufficient historical data to estimate cyclic and periodic variation, decision makers have to react to market conditions and take appropriate actions as soon as possible. The results also suggest that appropriately adjusting the weight of the sentiment rating can further improve forecasting accuracy. The study thus fulfils the goal of supporting decision makers in achieving maximal predictive accuracy from minimal trading data. The results demonstrate that the proposed LSTM approach delivers highly accurate sales forecasts for goods with short-term demand.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
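A compact PyTorch regressor in the spirit of the model described above might look as follows. This is a hedged sketch: the window length, feature layout (sales plus sentiment rating per day), hidden size, and the random training data are all assumptions for illustration only.

```python
# Illustrative LSTM regressor over joint (sales, sentiment) sequences.
import torch
import torch.nn as nn

class SalesLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict next-step sales from last state

# Toy training loop on random data: each sample is a 7-day window of
# [sales, sentiment_rating] and the target is day-8 sales.
model = SalesLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(64, 7, 2)
y = torch.randn(64, 1)
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())
```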
SCOPUS_ID:85125282279
A LSTM Recurrent Neural Network Implementation for Classifying Entities on Brazilian Legal Documents
Although the use of Natural Language Processing and Named Entity Recognition methods to deal with classification problems in the most diverse areas is well established, applications in Law pose challenges due to the specific terminology, broader vocabulary and the presence of more complex semantic and syntactic structures compared to spoken Portuguese. In this short paper, we present a Recurrent Neural Network implementation using LSTM to classify entities such as class, subject, value and individuals, among other items, in lawsuits. The initial dataset focused on a sample of 100 thousand lawsuits from the São Paulo state court, in Brazil. The proposed method achieved an accuracy and F1-score of approximately 90% on the test data. Such preliminary results indicate that it is possible to create a model capable of generalizing such classifications on a large scale, even considering the specifics of Brazilian legal terminology.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Named Entity Recognition", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 34, 36, 3 ]
SCOPUS_ID:85147492270
A LSTM based Deep Learning Model for Text Summarization
As different users provide different reviews for a product or service, it has become increasingly difficult for people to digest the customer reviews found on various apps and websites. Reading reviews takes time, and people often do not read them all the way through before making a judgement; even if they wanted to, they could not read every line of every review. A text summarization model would greatly simplify this process. The purpose of text summarization is to draw out the most significant information from a long document and leave out anything superfluous or uninteresting. The proposed summarizer automatically produces a useful summary from reviews using an LSTM: sentences from the input text are separated and converted into vectors, and the text is condensed while preserving its original context, so that the resulting summary reads easily. In this project, our goal is to create a model that accepts food reviews as input and outputs a summary of each review, which helps people who are ordering food learn about the dishes they are considering.
[ "Language Models", "Semantic Text Processing", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 52, 72, 30, 47, 3 ]
http://arxiv.org/abs/1805.05388v1
A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
Motivations like domain adaptation, transfer learning, and feature learning have fueled interest in inducing embeddings for rare or unseen words, n-grams, synsets, and other textual features. This paper introduces a la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building such representations that is based upon recent theoretical results for GloVe-like embeddings. Our method relies mainly on a linear transformation that is efficiently learnable using pretrained word vectors and linear regression. This transform is applicable on the fly in the future when a new text feature or rare word is encountered, even if only a single usage example is available. We introduce a new dataset showing how the a la carte method requires fewer examples of words in context to learn high-quality embeddings and we obtain state-of-the-art results on a nonce task and some unsupervised document classification tasks.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
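The core induction step described in the abstract above, learning a linear transform by regressing pretrained word vectors on averaged context vectors and then applying it to a new word's context, fits in a few lines of numpy. The sketch below uses random stand-in vectors rather than real pretrained embeddings and a co-occurrence pipeline.

```python
# Minimal numpy sketch of the a la carte idea: fit a linear map A that sends the
# average context vector of a word to its pretrained embedding, then apply A to the
# averaged context of a rare/unseen word. Vectors here are random stand-ins.
import numpy as np

d, n_words = 50, 1000
pretrained = np.random.randn(n_words, d)                       # v_w for frequent words
context_avg = pretrained + 0.1 * np.random.randn(n_words, d)   # u_w (stand-in averages)

# Solve A in  u_w @ A ~= v_w  (row-vector convention) by least squares
# over the frequent vocabulary.
A, *_ = np.linalg.lstsq(context_avg, pretrained, rcond=None)   # (d, d)

# Induce a vector for a new word from a single usage example's context words.
new_context = np.random.randn(5, d)        # vectors of the 5 surrounding context words
u_new = new_context.mean(axis=0)
v_new = u_new @ A
print(v_new.shape)
```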
SCOPUS_ID:85080926514
A Lab Experiment Using a Natural Language Interface to Extract Information from Data: The NLIDB Game
This paper makes a case for the challenge of an inductive approach to research in economics and management science focused on the use of a natural language interface for action-based applications tailored to business-specific functions. Natural language is a highly dynamic and dialectical process drawing on human cognition and, reflexively, on economic behaviour. The use of natural language is ubiquitous to human interaction and, among others, permeates every facet of companies' decision-making. Therefore, we take up this challenge by designing and conducting a lab experiment – conceived and named by us the NLIDB game – based on an inductive method using a novel natural language user interface to database (NLIDB) query application system. This interface has been designed and developed by us in order both (i) to enable managers or practitioners to make complex queries as well as ease their decision-making process in certain business areas, and thus (ii) to be used by experimental economists exploring the role of managers and business professionals. The long-term goal is to look for patterns in the experimental data, working to develop a possible research hypothesis that might explain them. Our preliminary findings suggest that experimental subjects are able to use this novel interface more effectively than the more common graphical interfaces used company-wide. Most importantly, subjects make use of cognitive heuristics during the treatments, achieving pragmatic and satisficing rather than theoretically oriented optimal solutions, especially with incomplete or imperfect information or limited computation capabilities. Furthermore, the implementation of our NLIDB roughly translates into savings of transaction costs, because managers can make queries without resorting to technical support, thus reducing both the time needed to obtain effective results from business decisions and operating practices, and the costs associated with each outcome.
[ "Natural Language Interfaces" ]
[ 11 ]
http://arxiv.org/abs/2112.11740v1
A Label Dependence-aware Sequence Generation Model for Multi-level Implicit Discourse Relation Recognition
Implicit discourse relation recognition (IDRR) is a challenging but crucial task in discourse analysis. Most existing methods train multiple models to predict multi-level labels independently, while ignoring the dependence between hierarchically structured labels. In this paper, we consider multi-level IDRR as a conditional label sequence generation task and propose a Label Dependence-aware Sequence Generation Model (LDSGM) for it. Specifically, we first design a label attentive encoder to learn the global representation of an input instance and its level-specific contexts, where the label dependence is integrated to obtain better label embeddings. Then, we employ a label sequence decoder to output the predicted labels in a top-down manner, where the predicted higher-level labels are directly used to guide the label prediction at the current level. We further develop a mutual learning enhanced training method to exploit the label dependence in a bottom-up direction, which is captured by an auxiliary decoder introduced during training. Experimental results on the PDTB dataset show that our model achieves state-of-the-art performance on multi-level IDRR. We will release our code at https://github.com/nlpersECJTU/LDSGM.
[ "Discourse & Pragmatics", "Language Models", "Semantic Text Processing", "Text Generation" ]
[ 71, 52, 72, 47 ]
SCOPUS_ID:85083312440
A Label Distribution Topic Model for Multi-label Classification
At present, multi-label supervised topic models are effective multi-label classification models applied in various domains. However, due to the limitations of the traditional label-topic correspondence in existing multi-label supervised topic models, there are still aspects that need to be improved. This paper proposes a label-distribution LDA model (LD-LDA) that provides a more complete label description: it overcomes the restriction that a label can only be associated with a fixed hidden-topic set or a set of non-overlapping hidden topics, and instead describes each label as a probability distribution over all hidden topics. Experimental results show that the LD-LDA model predicts protein function better than the comparison models. Although the observable variables, parameters, and hidden variables in LD-LDA are described in terms of the protein function prediction problem, LD-LDA is essentially a multi-label topic model and is also applicable to a variety of multi-label application scenarios.
[ "Topic Modeling", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 9, 24, 36, 3 ]
SCOPUS_ID:85128611361
A Label Extension Schema for Improved Text Emotion Classification
Due to the subjectiveness and fuzziness of emotions in texts, researchers have been aware that it is ubiquitous to observe multiple emotions in a sentence, and the one-hot label approach is not informative enough in emotion-relevant text classification tasks. Therefore, to facilitate the classification task, recent works focus on generating and employing a coarse-grained emotion distribution, which is based on coarse-grained labels provided by the underlying dataset. Although such methods can alleviate the problem of overfitting and improve robustness, they may cause inter-class confusion between similar emotion categories and introduce undesirable noise during training. Meanwhile, current studies neglect the fine-grained emotions associated with these coarse-grained labels. To address the issue caused by utilizing a coarse-grained distribution, we propose in this paper a general and novel emotion label extension method based on fine-grained emotions. Specifically, we first identify a mapping function between coarse-grained emotions and fine-grained emotion concepts, and extend the original label space with specific fine-grained emotions. Then, we generate a fine-grained emotion distribution by employing a rule-based method, and utilize it as a model constraint to incorporate the dependencies among fine-grained emotions to predict the original coarse-grained emotion labels. We conduct extensive experiments to demonstrate the effectiveness of our proposed label extension method. The results indicate that our proposed method can produce notable improvements over baseline models on the applied datasets.
[ "Text Classification", "Sentiment Analysis", "Emotion Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 78, 61, 24, 3 ]
http://arxiv.org/abs/2003.07444v3
A Label Proportions Estimation Technique for Adversarial Domain Adaptation in Text Classification
Many text classification tasks are domain-dependent, and various domain adaptation approaches have been proposed to predict unlabeled data in a new domain. Domain-adversarial neural networks (DANN) and their variants have been used widely recently and have achieved promising results for this problem. However, most of these approaches assume that the label proportions of the source and target domains are similar, which rarely holds in most real-world scenarios. Sometimes the label shift can be large and the DANN fails to learn domain-invariant features. In this study, we focus on unsupervised domain adaptation of text classification with label shift and introduce a domain adversarial network with label proportions estimation (DAN-LPE) framework. The DAN-LPE simultaneously trains a domain adversarial net and processes label proportions estimation by the confusion of the source domain and the predictions of the target domain. Experiments show the DAN-LPE achieves a good estimate of the target label distributions and reduces the label shift to improve the classification performance.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Robustness in NLP", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 24, 3, 58, 36, 4 ]
http://arxiv.org/abs/1302.4874v1
A Labeled Graph Kernel for Relationship Extraction
In this paper, we propose an approach for Relationship Extraction (RE) based on labeled graph kernels. The kernel we propose is a particularization of a random walk kernel that exploits two properties previously studied in the RE literature: (i) the words between the candidate entities or connecting them in a syntactic representation are particularly likely to carry information regarding the relationship; and (ii) combining information from distinct sources in a kernel may help the RE system make better decisions. We performed experiments on a dataset of protein-protein interactions and the results show that our approach obtains effectiveness values that are comparable with the state-of-the art kernel methods. Moreover, our approach is able to outperform the state-of-the-art kernels when combined with other kernel methods.
[ "Multimodality", "Relation Extraction", "Structured Data in NLP", "Information Extraction & Text Mining" ]
[ 74, 75, 50, 3 ]
https://aclanthology.org//1995.iwpt-1.20/
A Labelled Analytic Theorem Proving Environment for Categorial Grammar
We present a system for the investigation of computational properties of categorial grammar parsing based on a labelled analytic tableaux theorem prover. This proof method allows us to take a modular approach, in which the basic grammar can be kept constant, while a range of categorial calculi can be captured by assigning different properties to the labelling algebra. The theorem proving strategy is particularly well suited to the treatment of categorial grammar, because it allows us to distribute the computational cost between the algorithm which deals with the grammatical types and the algebraic checker which constrains the derivation.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
SCOPUS_ID:85070091578
A Land-Cover Classification Method Using Point of Interest
Traditional land cover classification is a very complicated, time-consuming and labor-intensive process, which requires a huge amount of imagery data and involves many people. Recently, crowd-sourced data have been used for land cover classification at lower cost, but the process of interpreting the data is still time-consuming. We examine the potential of the textual information in points of interest (POI) as a new reference source. Firstly, the POI textual data are analyzed to calculate the word distributions and topic distributions of POIs using the latent Dirichlet allocation (LDA) topic model. Secondly, a support vector machine (SVM) is applied to the topic distributions of POIs to build a land cover classification model. Finally, we evaluate the land cover classification result on a random sample of remote sensing images. In the experiments, 1.9 million POIs from Weibo, Baidu and Gaode are used to test the proposed method, and the results show that a classification accuracy of over 80% is achieved.
[ "Visual Data in NLP", "Topic Modeling", "Information Retrieval", "Multimodality", "Text Classification", "Information Extraction & Text Mining" ]
[ 20, 9, 24, 74, 36, 3 ]
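The two-stage pipeline described above (LDA topic distributions of POI texts as features, then an SVM classifier) can be mocked up directly with scikit-learn. The texts, labels, and topic count below are illustrative placeholders, not the paper's data.

```python
# Sketch of the LDA-features-plus-SVM pipeline: topic distributions of the POI
# texts associated with each land parcel are used as features for an SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

parcel_texts = [
    "restaurant cafe bar hotel shopping mall",
    "primary school kindergarten library museum",
    "factory warehouse logistics park freight",
    "apartment residential community grocery",
]
land_cover_labels = ["commercial", "public", "industrial", "residential"]

clf = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=5, random_state=0),  # topic features
    SVC(kernel="rbf"),
)
clf.fit(parcel_texts, land_cover_labels)
print(clf.predict(["night market food court cinema hotel"]))
```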
SCOPUS_ID:85055657264
A Language Adaptive Method for Question Answering on French and English
The LAMA (Language Adaptive Method for question Answering) system focuses on answering natural language questions using an RDF knowledge base within a reasonable time. Originally designed to process queries written in French, the system has been redesigned to also function on the English language. Overall, we propose a set of lexico-syntactic patterns for entity and property extraction to create a semantic representation of natural language requests. This semantic representation is then used to generate SPARQL queries able to answer users’ requests. The paper also describes a method for decomposing complex queries into a series of simpler queries. The use of preprocessed data and parallelization methods helps improve individual answer times.
[ "Natural Language Interfaces", "Semantic Text Processing", "Question Answering", "Representation Learning" ]
[ 11, 72, 27, 12 ]
SCOPUS_ID:85140083809
A Language Agnostic Multilingual Streaming On-Device ASR System
On-device end-to-end (E2E) models have shown improvements over a conventional model on English Voice Search tasks in both quality and latency. E2E models have also shown promising results for multilingual automatic speech recognition (ASR). In this paper, we extend our previous capacity solution to streaming applications and present a streaming multilingual E2E ASR system that runs fully on device with comparable quality and latency to individual monolingual models. To achieve that, we propose an Encoder Endpointer model and an End-of-Utterance (EOU) Joint Layer for a better quality and latency trade-off. Our system is built in a language agnostic manner allowing it to natively support intersentential code switching in real time. To address the feasibility concerns on large models, we conducted on-device profiling and replaced the time consuming LSTM decoder with the recently developed Embedding decoder. With these changes, we managed to run such a system on a mobile device in less than real time.
[ "Language Models", "Semantic Text Processing", "Code-Switching", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Speech Recognition", "Multilinguality" ]
[ 52, 72, 7, 70, 74, 47, 10, 0 ]
SCOPUS_ID:85116432618
A Language Model Based Pseudo-Sample Deliberation for Semi-supervised Speech Recognition
End-to-end modeling requires tremendous amounts of transcribed speech to achieve an automatic speech recognition (ASR) model with high performance. For low-resource ASR tasks, it is a promising approach to utilize the highly accessible unlabeled speech and text corpora. Previous works have shown that training with pseudo samples, i.e., the inference results on the unlabeled speech, can substantially improve the accuracy of a baseline ASR model. Besides the common data filtering used to improve pseudo-label quality, we propose an alternative pseudo-sample deliberation method that operates on the output of the ASR model through a pre-trained bidirectional language model (BERT). It fixes unreasonable tokens in the inference by substitution, which distills knowledge from the large text corpus. Experiments on Librispeech show that, assisted by our fixing operation, self-training on additional unlabeled samples can bridge up to 82.3% of the gap with supervised training.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Speech Recognition", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 70, 74, 47, 10, 4 ]
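A rough picture of "deliberating" an ASR pseudo-transcript with a masked language model is given below. This is only a sketch of the general idea: the model name, the source of per-token ASR confidences, and the substitution thresholds are assumptions, not the paper's actual criteria.

```python
# Mask the token the acoustic model was least confident about and let a masked LM
# propose a replacement. Thresholds and confidence values are illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

hypothesis = ["the", "whether", "today", "is", "sunny"]   # ASR 1-best tokens
confidences = [0.99, 0.41, 0.95, 0.97, 0.96]              # per-token ASR confidence

i = min(range(len(hypothesis)), key=lambda k: confidences[k])
masked = " ".join(fill_mask.tokenizer.mask_token if k == i else t
                  for k, t in enumerate(hypothesis))
best = fill_mask(masked)[0]                               # top masked-LM candidate
if best["score"] > 0.5 and confidences[i] < 0.5:          # only fix low-confidence tokens
    hypothesis[i] = best["token_str"]
print(" ".join(hypothesis))
```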
SCOPUS_ID:85117418840
A Language Model for Intelligent Speech Recognition of Power Dispatching
The accuracy of a power dispatching speech recognition system depends on the quality of its language model. To improve the accuracy of power dispatching speech recognition, this paper proposes a class-label language model based on double dictionaries (a general dictionary and a dictionary of professional power dispatching terms). The model improves the n-gram language model by adding class-label information, thereby improving the accuracy of power dispatching speech recognition. In addition, a joint word segmentation and part-of-speech tagging system based on the double dictionaries is used to preprocess the corpus, which improves the adaptability of the class-label language model to the power dispatching language. Finally, the class-label language model is trained on the collected training corpus of power dispatching instructions. On the test set, the word error rate of the power dispatching speech recognition system using the double-dictionary class-label language model is only 4.14%.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
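The idea of a class-label n-gram model over two dictionaries can be shown with a toy bigram counter: tokens found in the professional dictionary are mapped to class labels before counting, so rare domain terms share statistics within their class. Terms, class labels, and the simple add-alpha smoothing below are illustrative assumptions, not the paper's setup.

```python
# Toy class-label bigram model with a general vocabulary and a professional
# dictionary that maps domain terms to class labels before counting.
from collections import Counter

domain_dict = {            # professional dictionary: term -> class label
    "breaker": "<DEVICE>", "transformer": "<DEVICE>",
    "substation-a": "<STATION>", "substation-b": "<STATION>",
}

def to_classes(tokens):
    return [domain_dict.get(t, t) for t in tokens]

corpus = [
    "open breaker at substation-a".split(),
    "close transformer at substation-b".split(),
]
bigrams, unigrams = Counter(), Counter()
for sent in corpus:
    sent = ["<s>"] + to_classes(sent) + ["</s>"]
    unigrams.update(sent)
    bigrams.update(zip(sent, sent[1:]))

def bigram_prob(prev, word, alpha=1.0):
    vocab_size = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

# Any device term maps to <DEVICE>, so "open transformer" shares statistics
# with the observed "open breaker".
print(bigram_prob("<s>", "open"), bigram_prob("open", "<DEVICE>"))
```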
SCOPUS_ID:85146235324
A Language Model for Spell Checking of Educational Texts in Kurdish (Sorani)
Spell checkers have become regular features of most word processing applications. They assist us in writing more correctly in various digital environments. However, this assistance does not exist equally for all languages. The Kurdish language, which is still considered a less-resourced language, currently lacks well-known and well-tested spell checkers. We present a language model for Kurdish (Sorani) based on educational texts written in the Persian/Arabic script. We also showcase a spell checker as a testing environment for the language model. Primarily, we use a probabilistic method and our language model with Stupid Backoff smoothing for the spell-checking algorithm. We test for spelling errors on a word and context basis. The spell checker suggests a list of corrections for misspelled words. The results show 88.54% accuracy on texts in the related context, an F1 score of 43.33%, and correct suggestions with an 85% chance of appearing in the top three positions of the corrections.
[ "Language Models", "Text Error Correction", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 52, 26, 72, 15 ]
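The combination of candidate generation and Stupid Backoff scoring described above can be sketched in plain Python. The corpus, alphabet, and edit-distance-1 candidate generation below are English placeholders for illustration, not the Kurdish resources or the exact algorithm used in the paper.

```python
# Minimal context-aware spell check: generate edit-distance-1 candidates and rank
# them with a bigram score that backs off to the unigram count scaled by 0.4
# (Stupid Backoff). Corpus and alphabet are toy placeholders.
from collections import Counter

corpus = "the spell checker suggests a list of corrections for misspelled words".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())

def stupid_backoff(prev, word, alpha=0.4):
    if bigrams[(prev, word)] > 0:
        return bigrams[(prev, word)] / unigrams[prev]
    return alpha * unigrams[word] / total            # back off to the unigram score

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + inserts + replaces + swaps)

def suggest(prev, word, k=3):
    candidates = [c for c in edits1(word) if c in unigrams] or [word]
    return sorted(candidates, key=lambda c: stupid_backoff(prev, c), reverse=True)[:k]

print(suggest("of", "corections"))   # -> ['corrections']
```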
SCOPUS_ID:85143421731
A Language Model identifies population-level features of the T cell Receptor via self-supervised learning
T cells are at the core of human health. Their unique ability to produce an overwhelming repertoire of receptors, which they use to interact with threats and to assist with healthy functions, has made them an interesting subject for Data Science. One such function, involved in multiple conditions including cancer, response to viral threats, autoimmune disease and more, is the ability to produce what are termed Public clones. These Public clones are T cells that are shared between individuals; some of them are even shared by a high percentage of all observed samples. Yet the reason for this sharing, as well as the DNA sequence that might characterize them, is still unknown. Here, using a BERT-based language model, we show that a latent space built by self-supervised learning provides distinct areas for Public and Private sequences. We further show that these embeddings can be successfully used for binary classification to tell apart Public and Private sequences.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 17, 4 ]
SCOPUS_ID:85127420336
A Language Model-based Generative Classifier for Sentence-level Discourse Parsing
Discourse segmentation and sentence-level discourse parsing play important roles in various NLP tasks that consider textual coherence. Despite recent achievements in both tasks, there is still room for improvement due to the scarcity of labeled data. To solve this problem, we propose a language model-based generative classifier (LMGC) that uses more information from labels by treating the labels as input while enhancing label representations through embedded descriptions of each label. Moreover, since this enables LMGC to prepare representations even for labels unseen in the pre-training step, we can effectively use a pre-trained language model in LMGC. Experimental results on the RST-DT dataset show that our LMGC achieved a state-of-the-art F1 score of 96.72 in discourse segmentation. It further achieved state-of-the-art relation F1 scores of 84.69 with gold EDU boundaries and 81.18 with automatically segmented boundaries, respectively, in sentence-level discourse parsing.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Semantic Parsing", "Discourse & Pragmatics", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 40, 71, 36, 3 ]
SCOPUS_ID:85144400171
A Language Modelling Approach to Quality Assessment of OCR'ed Historical Text
We hypothesise and evaluate a language model-based approach for scoring the quality of OCR transcriptions in the British Library Newspapers (BLN) corpus parts 1 and 2, to identify the best quality OCR for use in further natural language processing tasks, with a wider view to link individual newspaper reports of crime in nineteenth-century London to the Digital Panopticon-a structured repository of criminal lives. We mitigate the absence of gold standard transcriptions of the BLN corpus by utilising a corpus of genre-adjacent texts that capture the common and legal parlance of nineteenth-century London-the Proceedings of the Old Bailey Online-with a view to rank the BLN transcriptions by their OCR quality.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Multimodality" ]
[ 20, 52, 72, 74 ]
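The scoring idea in the abstract above can be illustrated by ranking candidate transcriptions by the perplexity a language model assigns to them. The sketch below is a hedged stand-in: it uses off-the-shelf GPT-2 rather than a model trained on the genre-adjacent Old Bailey texts, and the transcripts are invented examples.

```python
# Rank OCR transcription candidates by causal-LM perplexity (lower = cleaner text).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss        # mean token cross-entropy
    return torch.exp(loss).item()

transcripts = [
    "the prisoner was indicted for stealing a silver watch",
    "the pris0ner wa5 indicted f0r stealinq a si1ver watch",   # noisier OCR output
]
ranked = sorted(transcripts, key=perplexity)
print(ranked[0])   # the cleaner transcription scores lower perplexity
```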
SCOPUS_ID:84857419833
A Language Theory for Educational Practice
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85122579942
A Language Tutoring Tool based on AI and Paraphrase Detection
A language tutoring tool (LTT) helps users learn a language through casual, human-like conversations. Natural language understanding (NLU) and natural language generation (NLG) are two key components of an LTT. In this paper, we propose a paraphrase detection algorithm that is used as the building block of the NLU. Our proposed tree-LSTM with self-attention reaches an accuracy of 87% with only 6.5M parameters, making it more robust and lighter than existing paraphrase detection algorithms. Furthermore, we describe an LTT prototype using the proposed algorithm, featuring components for message analysis, grammar detection, dialogue management, and response generation. Each component is discussed in detail in the methodology section of this paper.
[ "Paraphrasing", "Text Generation" ]
[ 32, 47 ]
http://arxiv.org/abs/1804.00987v2
A Language for Function Signature Representations
Recent work by (Richardson and Kuhn, 2017a,b; Richardson et al., 2018) looks at semantic parser induction and question answering in the domain of source code libraries and APIs. In this brief note, we formalize the representations being learned in these studies and introduce a simple domain specific language and a systematic translation from this language to first-order logic. By recasting the target representations in terms of classical logic, we aim to broaden the applicability of existing code datasets for investigating more complex natural language understanding and reasoning problems in the software domain.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:33646379849
A Language for Verification and Manipulation of Web Documents. (Extended Abstract)
In this paper we develop the language theory underpinning the logical framework PLF. This language features lambda abstraction with patterns and application via pattern matching. Reductions are allowed in patterns. The framework is particularly suited as a metalanguage for encoding rewriting logics and logical systems where proof terms have special syntactic constraints, as in term rewriting systems and rule-based languages. PLF is a conservative extension of the well-known Edinburgh Logical Framework LF. Because of its sophisticated pattern-matching facilities, PLF is suitable for the verification and manipulation of HXML documents. © 2006 Elsevier B.V. All rights reserved.
[ "Linguistics & Cognitive NLP", "Paraphrasing", "Text Generation", "Linguistic Theories" ]
[ 48, 32, 47, 57 ]
SCOPUS_ID:85148116726
A Language of Concrete Things: Hulme, Imagism and Modernist Theories of Language
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
http://arxiv.org/abs/1906.01032v1
A Language-Agnostic Model for Semantic Source Code Labeling
Code search and comprehension have become more difficult in recent years due to the rapid expansion of available source code. Current tools lack a way to label arbitrary code at scale while maintaining up-to-date representations of new programming languages, libraries, and functionalities. Comprehensive labeling of source code enables users to search for documents of interest and obtain a high-level understanding of their contents. We use Stack Overflow code snippets and their tags to train a language-agnostic, deep convolutional neural network to automatically predict semantic labels for source code documents. On Stack Overflow code snippets, we demonstrate a mean area under ROC of 0.957 over a long-tailed list of 4,508 tags. We also manually validate the model outputs on a diverse set of unlabeled source code documents retrieved from Github, and we obtain a top-1 accuracy of 86.6%. This strongly indicates that the model successfully transfers its knowledge from Stack Overflow snippets to arbitrary source code documents.
[ "Programming Languages in NLP", "Multimodality" ]
[ 55, 74 ]
https://aclanthology.org//2020.rdsm-1.2/
A Language-Based Approach to Fake News Detection Through Interpretable Features and BRNN
‘Fake news’ – succinctly defined as false or misleading information masquerading as legitimate news – is a ubiquitous phenomenon and its dissemination weakens the fact-based reporting of the established news industry, making it harder for political actors, authorities, media and citizens to obtain a reliable picture. State-of-the art language-based approaches to fake news detection that reach high classification accuracy typically rely on black box models based on word embeddings. At the same time, there are increasing calls for moving away from black-box models towards white-box (explainable) models for critical industries such as healthcare, finances, military and news industry. In this paper we performed a series of experiments where bi-directional recurrent neural network classification models were trained on interpretable features derived from multi-disciplinary integrated approaches to language. We apply our approach to two benchmark datasets. We demonstrate that our approach is promising as it achieves similar results on these two datasets as the best performing black box models reported in the literature. In a second step we report on ablation experiments geared towards assessing the relative importance of the human-interpretable features in distinguishing fake news from real news.
[ "Information Extraction & Text Mining", "Information Retrieval", "Explainability & Interpretability in NLP", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 3, 24, 81, 17, 8, 46, 36, 4 ]
https://aclanthology.org//W98-1420/
A Language-Independent System for Generating Feature Structures from Interlingua Representations
[ "Semantic Text Processing", "Text Generation", "Representation Learning" ]
[ 72, 47, 12 ]
SCOPUS_ID:33749082319
A Language-Oriented Data Modeling Approach
This paper presents a fresh data modeling tool: Language-Oriented Data Modeling (LODM). The language used is the English language. After a brief summary of previous research in this field, the paper discusses the theoretical underpinnings of the LODM approach from the perspectives of both relational theory and linguistic theory. The paper then illustrates the implementation of LODM with a research prototype tool, "Database Designer." Through a step-by-step description of the stages involved in using this new tool, the paper addresses the linguistic and database issues encountered in the process of mapping English sentences into a relational schema. In discussing the robustness of the LODM tool, the paper presents both its strengths and its weaknesses. © 1996, Authors. All rights reserved.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
https://aclanthology.org//2021.calcs-1.10/
A Language-aware Approach to Code-switched Morphological Tagging
Morphological tagging of code-switching (CS) data becomes more challenging especially when language pairs composing the CS data have different morphological representations. In this paper, we explore a number of ways of implementing a language-aware morphological tagging method and present our approach for integrating language IDs into a transformer-based framework for CS morphological tagging. We perform our set of experiments on the Turkish-German SAGT Treebank. Experimental results show that including language IDs to the learning model significantly improves accuracy over other approaches.
[ "Morphology", "Code-Switching", "Syntactic Text Processing", "Tagging", "Multilinguality" ]
[ 73, 7, 15, 63, 0 ]
SCOPUS_ID:85076731547
A Large Scale Study for Identification of Sarcasm in Textual Data
With the increase in the penetration of the Internet and the widespread acceptance of social networking sites, more people are coming forward to express their views and opinions about various topics. This has given a huge boost to the textual and multimedia content generated by these websites, giving researchers and analysts opportunities to find and generate patterns from this data. The problem of identifying sarcasm in textual data is quite challenging due to the lack of annotation, intonation and facial expression. Big companies spend millions to find out whether people are praising or mocking their products, which gives them an idea of market trends and needs. Law enforcement agencies may also benefit, as they would be able to distinguish legitimate threats from exaggerations on online social networks. A data-driven approach based on neural networks and deep learning is evaluated using a blend of deep convolutional networks (CNN) and long short-term memory (LSTM). The technique is applied to the Self-Annotated Reddit Corpus (SARC) (http://nlp.cs.princeton.edu/SARC/), a large corpus for sarcasm research. The technique is also probed on the domain-specific and general data provided in the dataset to check the accuracy of the proposed method. We observe that blending the models further improves the accuracy of the simple CNN model and yields a more computationally efficient model compared to the standalone models. Our method achieves an overall average precision of 73%.
[ "Stylistic Analysis", "Sentiment Analysis" ]
[ 67, 78 ]
http://arxiv.org/abs/1704.05579v4
A Large Self-Annotated Corpus for Sarcasm
We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for sarcasm research and for training and evaluating systems for sarcasm detection. The corpus has 1.3 million sarcastic statements -- 10 times more than any previous dataset -- and many times more instances of non-sarcastic statements, allowing for learning in both balanced and unbalanced label regimes. Each statement is furthermore self-annotated -- sarcasm is labeled by the author, not an independent annotator -- and provided with user, topic, and conversation context. We evaluate the corpus for accuracy, construct benchmarks for sarcasm detection, and evaluate baseline methods.
[ "Stylistic Analysis", "Sentiment Analysis" ]
[ 67, 78 ]
SCOPUS_ID:85125474466
A Large Visual Question Answering Dataset for Cultural Heritage
Visual Question Answering (VQA) is gaining momentum for its ability of bridging Computer Vision and Natural Language Processing. VQA approaches mainly rely on Machine Learning algorithms that need to be trained on large annotated datasets. Once trained, a machine learning model is barely portable on a different domain. This calls for agile methodologies for building large annotated datasets from existing resources. The cultural heritage domain represents both a natural application of this task and an extensive source of data for training and validating VQA models. To this end, by using data and models from ArCo, the knowledge graph of the Italian cultural heritage, we generated a large dataset for VQA in Italian and English. We describe the results and the lessons learned by our semi-automatic process for the dataset generation and discuss the employed tools for data extraction and transformation.
[ "Visual Data in NLP", "Natural Language Interfaces", "Question Answering", "Multimodality" ]
[ 20, 11, 27, 74 ]
http://arxiv.org/abs/2201.09227v2
A Large and Diverse Arabic Corpus for Language Modeling
Language models (LMs) have introduced a major paradigm shift in Natural Language Processing (NLP), where large pre-trained LMs have become integral to most NLP tasks. These LMs are intelligent enough to find useful and relevant representations of the language without any supervision, and they can be fine-tuned for typical NLP tasks with significantly higher accuracy than traditional approaches. Conversely, the training of these models requires a massively large corpus that is a good representation of the language. English LMs generally perform better than their counterparts in other languages, due to the availability of massive English corpora. This work elaborates on the design and development of a large Arabic corpus. It consists of over 500 GB of cleaned Arabic text aimed at improving the cross-domain knowledge and downstream generalization capability of large-scale language models. Moreover, the corpus is used to train a large Arabic LM. To evaluate the effectiveness of the LM, a number of typical NLP tasks are fine-tuned, demonstrating a significant boost of 4.5 to 8.5% compared to tasks fine-tuned on multilingual BERT (mBERT). To the best of my knowledge, this is currently the largest clean and diverse Arabic corpus ever collected.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85102546745
A Large, Crowdsourced Evaluation of Gesture Generation Systems on Common Data: The GENEA Challenge 2020
Co-speech gestures, gestures that accompany speech, play an important role in human communication. Automatic co-speech gesture generation is thus a key enabling technology for embodied conversational agents (ECAs), since humans expect ECAs to be capable of multi-modal communication. Research into gesture generation is rapidly gravitating towards data-driven methods. Unfortunately, individual research efforts in the field are difficult to compare: There are no established benchmarks, and each study tends to use its own dataset, motion visualisation, and evaluation methodology. To address this situation, we launched the GENEA Challenge, a gesture-generation challenge wherein participating teams built automatic gesture-generation systems on a common dataset, and the resulting systems were evaluated in parallel in a large, crowdsourced user study using the same motion-rendering pipeline. Since differences in evaluation outcomes between systems now are solely attributable to differences between the motion-generation methods, this enables benchmarking recent approaches against one another in order to get a better impression of the state of the art in the field. This paper reports on the purpose, design, results, and implications of our challenge.
[ "Natural Language Interfaces", "Multimodality", "Speech & Audio in NLP", "Dialogue Systems & Conversational Agents" ]
[ 11, 74, 70, 38 ]
SCOPUS_ID:85136220279
A Large-Scale Analysis of COVID-19 Twitter Dataset in a New Phase of the Pandemic
Social media have been awash with news and discussions of the COVID-19 pandemic. It is phenomenal to observe that social media has been the focal venue for people to express their reactions, opinions, and interpretations of the pandemic, given the presence of mixed sources of real information and misinformation. Thus, it is essential to conduct professional assessments of the public views and their evolving nature. Our study aims to extract and assess insights into the reflections of sentiments and topics of the public on Twitter and their dynamics along the timeline of the Delta variant. It highlights the extraordinary influence Twitter, or similar major social media, would have on people to comprehend and decide how to cope with the pandemic. We present findings of extracted sentiments and topics from a large-scale dataset of COVID-related tweets collected for the recent phase of the Delta variant of the pandemic (July-September 2021). We utilized a variety of machine learning algorithms for topic modeling and testing the accuracy of sentiment analysis. Our study shows the dramatic dominance of a positive and objective sentiment rather than a negative and subjective sentiment as well as the shift of prevalent topics during the period of study. The findings indicate the importance of conveying real, rational, and accurate information instead of misinformation on social media to foster the public's awareness and preparedness for a major public emergency incident such as the pandemic.
[ "Topic Modeling", "Ethical NLP", "Sentiment Analysis", "Responsible & Trustworthy NLP", "Reasoning", "Fact & Claim Verification", "Information Extraction & Text Mining" ]
[ 9, 17, 78, 4, 8, 46, 3 ]
SCOPUS_ID:85093110494
A Large-Scale Chinese Short-Text Conversation Dataset
The advancements of neural dialogue generation models show promising results on modeling short-text conversations. However, training such models usually needs a large-scale high-quality dialogue corpus, which is hard to access. In this paper, we present a large-scale cleaned Chinese conversation dataset LCCC, which contains a base version (6.8 million dialogues) and a large version (12.0 million dialogues). The quality of our dataset is ensured by a rigorous data cleaning pipeline, which is built based on a set of rules and a classifier that is trained on manually annotated 110K dialogue pairs. We also release pre-training dialogue models which are trained on LCCC-base and LCCC-large respectively. The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling. All the models and datasets are available at https://github.com/thu-coai/CDial-GPT.
[ "Language Models", "Semantic Text Processing", "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 14, 11, 47, 38 ]
SCOPUS_ID:85093666070
A Large-Scale Comparative Evaluation of IR-Based Tools for Bug Localization
This paper reports on a large-scale comparative evaluation of IR-based tools for automatic bug localization. We have divided the tools in our evaluation into the following three generations: (1) The first-generation tools, now over a decade old, that are based purely on the Bag-of-Words (BoW) modeling of software libraries. (2) The somewhat more recent second-generation tools that augment BoW-based modeling with two additional pieces of information: historical data, such as change history, and structured information such as class names, method names, etc. And, finally, (3) The third-generation tools that are currently the focus of much research and that also exploit proximity, order, and semantic relationships between the terms. It is important to realize that the original authors of all these three generations of tools have mostly tested them on relatively small-sized datasets that typically consisted no more than a few thousand bug reports. Additionally, those evaluations only involved Java code libraries. The goal of the present paper is to present a comprehensive large-scale evaluation of all three generations of bug-localization tools with code libraries in multiple languages. Our study involves over 20,000 bug reports drawn from a diverse collection of Java, C/C++, and Python projects. Our results show that the third-generation tools are significantly superior to the older tools. We also show that the word embeddings generated using code files written in one language are effective for retrieval from code libraries in other languages.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
http://arxiv.org/abs/1904.02036v1
A Large-Scale Comparison of Historical Text Normalization Systems
There is no consensus on the state-of-the-art approach to historical text normalization. Many techniques have been proposed, including rule-based methods, distance metrics, character-based statistical machine translation, and neural encoder--decoder models, but studies have used different datasets, different evaluation methods, and have come to different conclusions. This paper presents the largest study of historical text normalization done so far. We critically survey the existing literature and report experiments on eight languages, comparing systems spanning all categories of proposed normalization techniques, analysing the effect of training data quantity, and using different evaluation methods. The datasets and scripts are made publicly available.
[ "Text Normalization", "Syntactic Text Processing" ]
[ 59, 15 ]
http://arxiv.org/abs/2211.12124v1
A Large-Scale Dataset for Biomedical Keyphrase Generation
Keyphrase generation is the task of generating a set of words or phrases that highlight the main topics of a document. There are few datasets for keyphrase generation in the biomedical domain, and they do not meet expectations in terms of size for training generative models. In this paper, we introduce kp-biomed, the first large-scale biomedical keyphrase generation dataset, with more than 5M documents collected from PubMed abstracts. We train and release several generative models and conduct a series of experiments showing that using large-scale datasets significantly improves performance for present and absent keyphrase generation. The dataset is available under a CC-BY-NC v4.0 license at https://huggingface.co/datasets/taln-ls2n/kpbiomed.
[ "Text Generation" ]
[ 47 ]
SCOPUS_ID:85127439144
A Large-Scale Dataset for Empathetic Response Generation
Recent development in NLP shows a strong trend towards refining pre-trained models with a domain-specific dataset. This is especially the case for response generation where emotion plays an important role. However, existing empathetic datasets remain small, delaying research efforts in this area, for example, the development of emotion-aware chatbots. One main technical challenge has been the cost of manually annotating dialogues with the right emotion labels. In this paper, we describe a large-scale silver dataset consisting of 1M dialogues annotated with 32 fine-grained emotions, eight empathetic response intents, and the Neutral category. To achieve this goal, we have developed a novel data curation pipeline starting with a small seed of manually annotated data and eventually scaling it to a satisfactory size. We compare its quality against a state-of-the-art gold dataset using offline experiments and visual validation methods. The resultant procedure can be used to create similar datasets in the same domain as well as in other domains.
[ "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 14, 11, 47, 38 ]
https://aclanthology.org//2021.woah-1.16/
A Large-Scale English Multi-Label Twitter Dataset for Cyberbullying and Online Abuse Detection
In this paper, we introduce a new English Twitter-based dataset for cyberbullying detection and online abuse. Comprising 62,587 tweets, this dataset was sourced from Twitter using specific query terms designed to retrieve tweets with high probabilities of various forms of bullying and offensive content, including insult, trolling, profanity, sarcasm, threat, porn and exclusion. We recruited a pool of 17 annotators to perform fine-grained annotation on the dataset with each tweet annotated by three annotators. All our annotators are high school educated and frequent users of social media. Inter-rater agreement for the dataset as measured by Krippendorff’s Alpha is 0.67. Analysis performed on the dataset confirmed common cyberbullying themes reported by other studies and revealed interesting relationships between the classes. The dataset was used to train a number of transformer-based deep learning models returning impressive results.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
SCOPUS_ID:85147793524
A Large-Scale Group Decision-Making Method based on Sentiment Analysis for the Detection of Cooperative Group
Group Decision-Making is a process by which a set of experts sort a set of alternatives. When there are a large number of experts, the process is called a Large-Scale Group Decision-Making process. In this kind of environment, there is a great variety of attitudes when discussing the set of alternatives; for instance, some experts may be more aggressive and others more peaceful. Aggressiveness is directly related to the degree of cooperativeness of the experts, because an aggressive expert will be less cooperative. Consequently, this paper develops a Large-Scale Group Decision-Making method that classifies experts according to their cooperativeness. This classification is based on using sentiment analysis to detect the degree of aggressiveness of each expert. Thus, it is possible to determine whether a specific behavior implies similar valuations and also to detect which experts are more cooperative and able to reach a consensual ranking of alternatives, and which ones would be less willing to cooperate to reach a consensual decision.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85081297899
A Large-Scale Implementation Using MapReduce-Based SVM for Tweets Sentiment Analysis
Sentiment analysis is an interesting area of research due to the availability of sentiment data and opinion-oriented services. The efficiency and scalability of sentiment analysis applications are important concerns, as they are expected to produce accurate results in a short period of time while processing a large amount of data. An efficient and scalable polarity detection method is proposed in this paper. Sequential minimal optimization with MapReduce (SMOMR) is used to achieve enhanced efficiency as well as scalability. The experimental results reveal that this method outperforms many existing methods.
[ "Responsible & Trustworthy NLP", "Polarity Analysis", "Sentiment Analysis", "Green & Sustainable NLP" ]
[ 4, 33, 78, 68 ]
SCOPUS_ID:85144468798
A Large-Scale Japanese Dataset for Aspect-based Sentiment Analysis
There has been significant progress in the field of sentiment analysis. However, aspect-based sentiment analysis (ABSA) has not been explored in the Japanese language, even though it has huge scope in many natural language processing applications, such as 1) tracking sentiment towards products, movies, politicians, etc., and 2) improving customer relation models. The main reason is that no standard Japanese dataset is available for the ABSA task. In this paper, we present the first standard Japanese dataset for the hotel reviews domain. The proposed dataset contains 53,192 review sentences with seven aspect categories and two polarity labels. We perform experiments on this dataset using popular ABSA approaches and report an error analysis. Our experiments show that contextual models such as BERT work very well for the ABSA task in the Japanese language, and the error analysis also shows the need to focus on other NLP tasks for better performance.
[ "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 23, 78 ]
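A common way to run the BERT baseline mentioned above is to cast ABSA as sentence-pair classification, pairing a review sentence with an aspect category and predicting its polarity. The sketch below assumes the Hugging Face transformers library, uses a multilingual checkpoint as a stand-in for a Japanese-specific one, and is not fine-tuned, so its output is only a placeholder for the real pipeline; downloading the weights also requires network access.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"   # assumed stand-in checkpoint
POLARITIES = ["negative", "positive"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(POLARITIES)
)
model.eval()

sentence = "部屋はとても清潔で、スタッフも親切でした。"   # toy hotel-review sentence
aspect = "room"                                            # assumed aspect category

# Encode the (sentence, aspect) pair and predict a polarity label.
inputs = tokenizer(sentence, aspect, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(POLARITIES[int(logits.argmax(dim=-1))])
```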
http://arxiv.org/abs/2005.10070v1
A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal
Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation. However, there is a lack of datasets that realistically address such use cases at a scale large enough for training supervised models for this task. This work presents a new dataset for MDS that is large both in the total number of document clusters and in the size of individual clusters. We build this dataset by leveraging the Wikipedia Current Events Portal (WCEP), which provides concise and neutral human-written summaries of news events, with links to external source articles. We also automatically extend these source articles by looking for related articles in the Common Crawl archive. We provide a quantitative analysis of the dataset and empirical results for several state-of-the-art MDS techniques.
[ "Summarization", "Information Extraction & Text Mining", "Text Generation", "Text Clustering" ]
[ 30, 3, 47, 29 ]
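As a point of reference for the kind of MDS baselines evaluated on such datasets, the sketch below implements a simple centroid-based extractive summarizer: all sentences in a document cluster are ranked by TF-IDF cosine similarity to the cluster centroid and the top few are kept. The documents and summary length are toy assumptions.

```python
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cluster = [
    "A strong earthquake struck the coastal region early on Monday.",
    "Officials said the earthquake damaged several buildings. Rescue teams were deployed.",
    "The quake was followed by smaller aftershocks. No tsunami warning was issued.",
]

# Naive sentence split on sentence-final punctuation.
sentences = [s.strip() for doc in cluster for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
centroid = np.asarray(tfidf.mean(axis=0))            # cluster centroid in TF-IDF space
scores = cosine_similarity(tfidf, centroid).ravel()

top_k = 2
keep = sorted(np.argsort(-scores)[:top_k])           # best sentences, kept in document order
print(" ".join(sentences[i] for i in keep))
```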
https://aclanthology.org//W19-8641/
A Large-Scale Multi-Length Headline Corpus for Analyzing Length-Constrained Headline Generation Model Evaluation
Browsing news articles on multiple devices is now possible. The lengths of news article headlines have precise upper bounds, dictated by the size of the display of the relevant device or interface. Therefore, controlling the length of headlines is essential when applying headline generation to news production. However, because there is no corpus of headlines of multiple lengths for a given article, previous research on controlling output length in headline generation has not discussed whether system outputs can be adequately evaluated without multiple references of different lengths. In this paper, we introduce two corpora, the Japanese News Corpus (JNC) and the JApanese MUlti-Length Headline Corpus (JAMUL), to examine the validity of previous evaluation settings. The JNC provides common supervision data for headline generation. The JAMUL is a large-scale evaluation dataset with headlines of three different lengths composed by professional editors. We report new findings on these corpora; for example, although the longest reference summary can appropriately evaluate existing length-controlled methods, this evaluation setting has several problems.
[ "Text Generation" ]
[ 47 ]
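The evaluation question raised above can be illustrated with a small experiment: score a short, length-constrained system headline against a single long reference versus the reference written for the matching length class. Unigram F1 stands in for ROUGE here, and the headlines and length classes are invented examples, not taken from JNC or JAMUL.

```python
from collections import Counter

def unigram_f1(candidate, reference):
    # Harmonic mean of unigram precision and recall, after lowercasing.
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

references = {                      # one human headline per length class
    "short":  "Quake hits coast",
    "medium": "Strong quake hits coastal region early Monday",
    "long":   "Strong earthquake hits coastal region early Monday damaging several buildings",
}
system_short = "Earthquake hits coast"   # output generated under a short length budget

for length_class, ref in references.items():
    print(length_class, round(unigram_f1(system_short, ref), 3))
```

Scoring the short output only against the long reference penalizes it for content it was never allowed to include, which is exactly why multiple-length references matter.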
http://arxiv.org/abs/1608.06718v1
A Large-Scale Multilingual Disambiguation of Glosses
Linking concepts and named entities to knowledge bases has become a crucial Natural Language Understanding task. In this respect, recent works have shown the key advantage of exploiting textual definitions in various Natural Language Processing applications. However, to date there are no reliable large-scale corpora of sense-annotated textual definitions available to the research community. In this paper we present a large-scale high-quality corpus of disambiguated glosses in multiple languages, comprising sense annotations of both concepts and named entities from a unified sense inventory. Our approach for the construction and disambiguation of the corpus builds upon the structure of a large multilingual semantic network and a state-of-the-art disambiguation system; first, we gather complementary information of equivalent definitions across different languages to provide context for disambiguation, and then we combine it with a semantic similarity-based refinement. As a result we obtain a multilingual corpus of textual definitions featuring over 38 million definitions in 263 languages, and we make it freely available at http://lcl.uniroma1.it/disambiguated-glosses. Experiments on Open Information Extraction and Sense Clustering show how two state-of-the-art approaches improve their performance by integrating our disambiguated corpus into their pipeline.
[ "Multilinguality" ]
[ 0 ]
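The abstract above combines a large semantic network with similarity-based refinement; as a much simpler stand-in for that refinement step, the sketch below disambiguates a word inside a definition by Lesk-style word overlap between the definition and each candidate sense's gloss. The senses and glosses are toy examples, not entries from the released corpus.

```python
definition = "A bank is a financial institution that accepts deposits and makes loans."
candidate_senses = {
    "bank%finance": "an institution that accepts deposits and channels money into lending",
    "bank%river": "the land alongside a river or a body of water",
}

STOPWORDS = {"a", "an", "the", "of", "or", "and", "for", "that", "is", "into"}

def content_words(text):
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def lesk_overlap(context, gloss):
    # Number of shared content words between the context and a sense gloss.
    return len(content_words(context) & content_words(gloss))

scores = {sense: lesk_overlap(definition, gloss) for sense, gloss in candidate_senses.items()}
print(max(scores, key=scores.get), scores)
```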
http://arxiv.org/abs/2302.04811v1
A Large-Scale Multilingual Study of Visual Constraints on Linguistic Selection of Descriptions
We present a large, multilingual study into how vision constrains linguistic choice, covering four languages and five linguistic properties, such as verb transitivity or use of numerals. We propose a novel method that leverages existing corpora of images with captions written by native speakers, and apply it to nine corpora, comprising 600k images and 3M captions. We study the relation between visual input and linguistic choices by training classifiers to predict the probability of expressing a property from raw images, and find evidence supporting the claim that linguistic properties are constrained by visual context across languages. We complement this investigation with a corpus study, taking the test case of numerals. Specifically, we use existing annotations (number or type of objects) to investigate the effect of different visual conditions on the use of numeral expressions in captions, and show that similar patterns emerge across languages. Our methods and findings both confirm and extend existing research in the cognitive literature. We additionally discuss possible applications for language generation.
[ "Multilinguality", "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 0, 20, 39, 47, 74 ]
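The classifier setup described above — predicting the probability of expressing a linguistic property from the image — can be sketched as a plain supervised problem over image representations. The snippet below assumes precomputed image feature vectors (random placeholders here, not real image encodings) and a binary property such as "the caption contains a numeral".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, dim = 1000, 512
image_features = rng.normal(size=(n_images, dim))    # stand-in for visual encodings

# Synthetic property labels, correlated with the first feature dimension so the
# classifier has something to learn; real labels would come from the captions.
uses_numeral = (image_features[:, 0] + 0.5 * rng.normal(size=n_images) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(image_features, uses_numeral, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```

Above-chance accuracy on held-out images is the kind of evidence the study uses to argue that visual context constrains linguistic choice.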
SCOPUS_ID:85054263247
A Large-Scale Study of Language Models for Chord Prediction
We conduct a large-scale study of language models for chord prediction. Specifically, we compare N-gram models to various flavours of recurrent neural networks on a comprehensive dataset comprising all publicly available datasets of annotated chords known to us. This large amount of data allows us to systematically explore hyperparameter settings for the recurrent neural networks - a crucial step in achieving good results with this model class. Our results show not only a quantitative difference between the models, but also a qualitative one: in contrast to static N-gram models, certain RNN configurations adapt to the songs at test time. This finding constitutes a further step towards the development of chord recognition systems that are more aware of local musical context than what was previously possible.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
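As a concrete example of the static N-gram baseline compared above, the sketch below builds an add-one-smoothed bigram chord model and evaluates it by perplexity on a held-out progression. The chord sequences are toy progressions, not the study's corpus, and a trigram or RNN model would follow the same evaluation pattern.

```python
import math
from collections import Counter

train_songs = [
    ["C", "G", "Am", "F", "C", "G", "F", "C"],
    ["C", "Am", "F", "G", "C", "Am", "F", "G"],
]
test_song = ["C", "G", "Am", "F"]

vocab = {c for song in train_songs for c in song}
bigrams, unigrams = Counter(), Counter()
for song in train_songs:
    padded = ["<s>"] + song                       # start-of-song context symbol
    unigrams.update(padded[:-1])
    bigrams.update(zip(padded[:-1], padded[1:]))

def prob(prev, chord):
    # Add-one smoothed bigram probability of the next chord.
    return (bigrams[(prev, chord)] + 1) / (unigrams[prev] + len(vocab))

contexts = ["<s>"] + test_song[:-1]
log_prob = sum(math.log(prob(p, c)) for p, c in zip(contexts, test_song))
print("perplexity:", round(math.exp(-log_prob / len(test_song)), 3))
```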