id (string, 20–52 chars) | title (string, 3–459 chars) | abstract (string, 0–12.3k chars) | classification_labels (list) | numerical_classification_labels (list)
---|---|---|---|---
SCOPUS_ID:85146955533
|
A Generative-Based Chatbot for Daily Conversation: A Preliminary Study
|
AI is now integrated into our daily lives, and one basic example is the chatbot. Most chatbots in use are rule-based, which has several drawbacks: responses are easy to predict and repetitive, and unnatural conversations can occur. In addition, rule-based chatbots rely on fixed question-answer pairs, operate in closed-domain conversation, and have limited self-learning ability. These problems motivate the development of a chatbot that can respond to questions dynamically. The chatbot model in this study is based on the Simple Dialog and Daily Dialog datasets, which are merged into a single dataset. In this research, we propose a Seq2seq model architecture with LSTM and GRU cells to create a generative-based chatbot. The Seq2seq model consists of two parts, an encoder and a decoder. The proposed models are evaluated using cross-entropy loss and BLEU score. The results show that the LSTM model performs better than the GRU model: the LSTM model achieved 0.7064 training loss, 8.9740 validation loss, and 0.0588 BLEU, while the GRU model achieved 2.1125 training loss, 7.1840 validation loss, and 0.0028 BLEU. Compared to the GRU model, the LSTM model generates more acceptable responses to the given questions.
|
[
"Language Models",
"Natural Language Interfaces",
"Semantic Text Processing",
"Dialogue Systems & Conversational Agents"
] |
[
52,
11,
72,
38
] |
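The row above describes a Seq2seq encoder–decoder trained with cross-entropy loss and compared across LSTM and GRU cells. As a hedged illustration only (not the authors' code; the vocabulary size, embedding and hidden dimensions are invented), a minimal PyTorch sketch of such a model might look like:

```python
# Minimal Seq2seq sketch, assuming PyTorch; all sizes are illustrative.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, cell="LSTM"):
        super().__init__()
        rnn = nn.LSTM if cell == "LSTM" else nn.GRU  # the study compares both cells
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = rnn(emb_dim, hid_dim, batch_first=True)
        self.decoder = rnn(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))           # encode the question
        dec_out, _ = self.decoder(self.embed(tgt), state)  # teacher forcing (gold reply, shifted right in practice)
        return self.out(dec_out)                           # logits for cross-entropy loss

model = Seq2Seq(vocab_size=8000)
src = torch.randint(0, 8000, (4, 12))  # toy batch of question token ids
tgt = torch.randint(0, 8000, (4, 10))  # toy batch of reply token ids
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8000), tgt.reshape(-1))
```

Swapping `cell="GRU"` gives the other configuration the study compares; BLEU would then be computed on the decoded outputs, e.g. with `nltk.translate.bleu_score`.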
SCOPUS_ID:85147330066
|
A Generator-based Method for Attacking Embedding-based Text Classifiers
|
This paper proposes a method to attack embedding-based text classifiers, named GenEmC, to reduce the high computational cost of rule-based replacement. GenEmC comprises three main steps: data preparation, embedding decoder training, and generator training. The embedding decoder learns to convert embedding vectors to texts. The generator is trained with a two-term loss to generate adversarial embedding vectors from original texts. After these three steps, the combination of the two trained models can generate adversarial texts instantly. To demonstrate the effectiveness of GenEmC, experiments are conducted on the IMDB dataset. The target classifiers are well-trained GRU, LSTM, and BERT models. GenEmC is compared with two well-known rule-based replacement methods, PWWS and TextBugger. The experiments demonstrate that for 1,000 texts in the test set, the computational cost of GenEmC averages about 6 minutes, while the others require at least 10 hours. Moreover, the average success rate and semantic-preserving output score of GenEmC are moderately higher than those of PWWS and TextBugger.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Information Extraction & Text Mining",
"Robustness in NLP",
"Representation Learning",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
24,
3,
58,
12,
36,
4
] |
https://aclanthology.org//W04-2210/
|
A Generic Collaborative Platform for Multilingual Lexical Database Development
|
[
"Multilinguality"
] |
[
0
] |
|
SCOPUS_ID:85126572142
|
A Generic Graph-Based Method for Flexible Aspect-Opinion Analysis of Complex Product Customer Feedback
|
Product design experts depend on online customer reviews as a source of insight to improve product design. Previous works used aspect-based sentiment analysis to extract insight from product reviews. However, their approaches for requirements elicitation are less flexible than traditional tools such as interviews and surveys. They require costly data labeling or pre-labeled datasets, lack domain knowledge integration, and focus more on sentiment classification than flexible aspect-opinion analysis. Related works lack effective mechanisms for probing the customer feedback of complex configurable products. This study proposes a generic graph-based opinion mining and analysis method for product design improvement. First, a customer feedback data preprocessing and annotation pipeline that can incorporate designer-specified domain knowledge is proposed. Second, an intuitive opinion-aware labeled property graph data model is designed to ingest preprocessed feedback data and perform ad hoc opinion analysis. Applying the generic model to a real-world dataset demonstrates superior functionality and flexibility compared to related works. A wider range of analyses is supported in a single model without repeating data preprocessing and modeling. Specifically, the proposed method supports regular and comparative aspect-opinion analysis, aspect satisfaction/influence ranking, opinion trend extraction, and targeted aspect-opinion summarization.
|
[
"Opinion Mining",
"Multimodality",
"Structured Data in NLP",
"Sentiment Analysis"
] |
[
49,
74,
50,
78
] |
SCOPUS_ID:85062076164
|
A Generic OCR Using Deep Siamese Convolution Neural Networks
|
This paper presents a generic optical character recognition (OCR) system based on deep Siamese convolutional neural networks (CNNs) and support vector machines (SVMs). Supervised deep CNNs achieve a high level of accuracy in classification tasks. However, fine-tuning a trained model for a new set of classes requires a large amount of data to overcome the problem of dataset bias. The classification accuracy of deep neural networks (DNNs) degrades when the available dataset is insufficient. Moreover, using a trained deep neural network to classify a new class requires tuning the network architecture and retraining the model. All these limitations are handled by our proposed system. The deep Siamese CNN is trained to extract discriminative features. The training is performed once using a group of classes. The OCR system is then used to recognize different classes without retraining or fine-tuning the deep Siamese CNN model. Only a few samples are needed from any target class for classification. The proposed OCR system is evaluated on different domains: Arabic letters, Eastern-Arabic numerals, Hindu-Arabic numerals, and Farsi numerals, using test sets that contain printed and handwritten letters and numerals. The proposed system achieves a very promising recognition accuracy, close to the results achieved by CNNs and recognition systems trained for specific target classes, without the need for retraining. The system outperforms the state-of-the-art method that uses a Siamese CNN for one-shot classification by around 12%.
|
[
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Information Extraction & Text Mining",
"Text Classification",
"Multimodality"
] |
[
20,
52,
72,
24,
3,
36,
74
] |
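As a rough, hypothetical sketch of the mechanism described in the row above (a Siamese feature extractor trained once with a contrastive objective, after which new classes need only a few reference samples), assuming PyTorch and 28×28 glyph images; none of the layer sizes come from the paper:

```python
# Hedged sketch: Siamese CNN embedding with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 7 * 7, emb_dim),  # assumes 28x28 inputs
        )

    def forward(self, a, b):
        return self.features(a), self.features(b)

def contrastive_loss(za, zb, same, margin=1.0):
    d = F.pairwise_distance(za, zb)
    # pull same-class pairs together, push different-class pairs past the margin
    return (same * d.pow(2) + (1 - same) * (margin - d).clamp(min=0).pow(2)).mean()

net = SiameseCNN()
a, b = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
same = torch.randint(0, 2, (8,)).float()   # 1 = same class, 0 = different
loss = contrastive_loss(*net(a, b), same)
```

At recognition time, the learned embeddings of a few target-class samples would feed a simple classifier such as the SVM the abstract mentions.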
SCOPUS_ID:85086598935
|
A Generic Solver Combining Unsupervised Learning and Representation Learning for Breaking Text-Based Captchas
|
Although many alternative captcha schemes are available, text-based captchas are still one of the most popular security mechanisms for maintaining Internet security and preventing malicious attacks, due to user preferences and ease of design. Over the past decade, different methods of breaking captchas have been proposed, which has helped captchas keep evolving and become more robust. However, these previous works generally require heavy expert involvement and gradually become ineffective with the introduction of new security features. This paper proposes a generic solver combining unsupervised learning and representation learning to automatically remove the noisy background of captchas and solve text-based captchas. We introduce a new training scheme for constructing mini-batches, which contain a large number of unlabeled hard examples, to improve the efficiency of representation learning. Unlike existing deep learning algorithms, our method requires significantly fewer labeled samples and surpasses the recognition performance of a fully supervised model with the same network architecture. Moreover, extensive experiments show that the proposed method outperforms the state of the art by delivering higher accuracy on various captcha schemes. We provide further discussion of potential applications of the proposed unified framework. We hope that our work can inspire the community to enhance the security of text-based captchas.
|
[
"Low-Resource NLP",
"Responsible & Trustworthy NLP",
"Semantic Text Processing",
"Representation Learning"
] |
[
80,
4,
72,
12
] |
https://aclanthology.org//W97-0602/
|
A Generic Template to evaluate integrated components in spoken dialogue systems
|
[
"Natural Language Interfaces",
"Multimodality",
"Speech & Audio in NLP",
"Dialogue Systems & Conversational Agents"
] |
[
11,
74,
70,
38
] |
|
SCOPUS_ID:85093359472
|
A Generic approach for Pronominal Anaphora and Zero Anaphora resolution in Arabic language
|
This paper deals with the resolution of pronominal anaphora and zero anaphora in the Arabic language. While researchers have treated the two phenomena separately, we propose a generic approach for both of them. Our resolution system combines a Q-learning reinforcement method and Word Embedding models. The Q-learning method uses syntactic criteria as preference factors to select candidate antecedents, reinforcing the best combination of criteria for evaluating them. The Word Embedding models provide semantic similarity measures that help validate the best antecedent. Our approach is evaluated on different types of Arabic texts, and the obtained precision reaches 79.37%.
|
[
"Semantic Text Processing",
"Semantic Similarity",
"Representation Learning",
"Coreference Resolution",
"Information Extraction & Text Mining"
] |
[
72,
53,
12,
13,
3
] |
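The row above pairs Q-learning with embedding similarity for antecedent selection. A schematic temporal-difference update, with an invented state/action encoding (the paper's syntactic preference criteria are abstracted into integer state indices), could look like:

```python
# Hypothetical sketch: tabular Q-learning for ranking antecedent candidates.
# States and actions are invented stand-ins for the paper's syntactic criteria.
import numpy as np

n_states, n_candidates = 16, 8      # assumed sizes, not from the paper
Q = np.zeros((n_states, n_candidates))
alpha, gamma = 0.1, 0.9             # learning rate and discount factor

def td_update(state, action, reward, next_state):
    # move Q(s, a) toward reward + discounted best value of the next state
    best_next = Q[next_state].max()
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])

# reward 1.0 when the chosen candidate is validated (e.g., by embedding
# similarity to the anaphor's context), 0.0 otherwise
td_update(state=3, action=2, reward=1.0, next_state=5)
```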
SCOPUS_ID:85060635433
|
A Genetic Algorithm Based Approach for Data Fusion at Grammar Level
|
Multimodal interaction is a type of human-computer interaction which involves a combination of multifarious modalities to accomplish a task. Human-to-human interactions necessitate the use of all modalities, such as speech, gestures, facial expressions and assorted media, available for intelligent communication. Single modalities often introduce ambivalent interpretations of the ideas being conveyed. Multimodal interaction plays a major role in resolving such ambiguities. The various modalities used can be combined as a way to recover the semantics of the messages involved. This strategy, known as data fusion, is the main focus of the research presented. This paper proposes a new genetic algorithm based approach for achieving intelligent data fusion for inferred context-free grammars. Grammar inference methods are applied to generate the grammars, and the resulting production rules are fused to generate a correct grammar. Related results, their implications, and future work are presented.
|
[
"Text Error Correction",
"Syntactic Text Processing",
"Multimodality"
] |
[
26,
15,
74
] |
SCOPUS_ID:85067455174
|
A Genetic Algorithm Based Approach for Hindi Word Sense Disambiguation
|
Word Sense Disambiguation (WSD) is the procedure of selecting the precise sense or meaning of a word in a given context. Word sense disambiguation acts as a foundation for various AI applications such as data mining, information retrieval, and machine translation. The task is to figure out which sense of a polysemous word is appropriate in a given context. Several methodologies have previously been proposed for WSD in English, German and other languages; however, work on WSD in Hindi is limited. In this work, a genetic algorithm based approach is presented for Hindi WSD. A dynamic context window feature is used, which contains the expressions to the left and right of an ambiguous word. The cardinal assumption of this approach is that the target word and its neighborhood must share a common topic. For finding all possible senses of an ambiguous word, the WordNet created by IIT Bombay is used.
|
[
"Semantic Text Processing",
"Word Sense Disambiguation"
] |
[
72,
65
] |
SCOPUS_ID:85080962797
|
A Genetic Algorithm Based Approach for Word Sense Disambiguation Using Fuzzy WordNet Graphs
|
Due to the ever-evolving nature of human languages, the ambiguity in them needs to be dealt with by researchers. Word sense disambiguation (WSD) is a classical problem of natural language processing which refers to identifying the most appropriate sense of a given word in the concerned context. WordNet graph based approaches are used by several state-of-the-art methods for performing WSD. This paper highlights a novel genetic algorithm based approach for performing WSD using a fuzzy WordNet graph based approach. The fitness function is calculated using fuzzy global measures of graph connectivity. For proposing this fitness function, a comparative study is performed of the global measures edge density, entropy and compactness. Also, an analytical insight is provided by presenting a visualization of the control terms for word sense disambiguation in the research papers from 2013 to 2018 present in the Web of Science.
|
[
"Structured Data in NLP",
"Semantic Text Processing",
"Word Sense Disambiguation",
"Multimodality"
] |
[
50,
72,
65,
74
] |
SCOPUS_ID:85127477443
|
A Genetic-based Fusion Approach of Persian and Universal Phonetic Results for Spoken Language Identification
|
Automatic spoken language identification (LID) refers to the process of automatically identifying the language spoken in audio files. Pure acoustic approaches have shown great potential in LID. As acoustic approaches have become more and more popular, phonetic information has been largely overlooked. In this paper, we present a genetic-based fusion approach based on the score probabilities of two phonetic LID systems. Two SVM classifiers are trained on perplexity feature vectors obtained from the phone language models of different phone recognizers. Two phone recognizers are utilized here: one decodes the speech file into a sequence of IPA symbols, acting as a universal phone recognizer, and the other is a Farsi phone recognizer trained on the FARSDAT databases. With the help of the genetic-based fusion approach, we extract 54 weights: we have 27 languages in our database and 2 individual phonetic LID systems, so 54 weights are obtained for the fusion. The first 27 weights correspond to the system using the universal phone recognizer, and the second 27 weights relate to the system with the Farsi phone recognizer. In the end, we use these weights to combine the results of each individual phonetic LID system. Experimental results on the 27 languages within the NIST-LRE09 corpus demonstrate that the proposed fusion approach can greatly increase the classification accuracy on the target languages. It should also be noted that we separate the files of each speaker and place them in only one set (train, development, or test) to prevent speaker-related biases.
|
[
"Text Classification",
"Syntactic Text Processing",
"Phonetics",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
15,
64,
24,
3
] |
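To make the fusion step in the row above concrete, here is an illustrative sketch (not the authors' code) of a plain genetic algorithm searching for the 54 per-language fusion weights, with synthetic score matrices standing in for the two LID systems' outputs:

```python
# Illustrative GA for fusion weights; the data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_lang, n_samples = 27, 500
# synthetic stand-ins for the two systems' per-language score matrices
s1, s2 = rng.random((n_samples, n_lang)), rng.random((n_samples, n_lang))
labels = rng.integers(0, n_lang, n_samples)

def fitness(w):
    w1, w2 = w[:n_lang], w[n_lang:]           # 54 weights, as in the paper
    fused = s1 * w1 + s2 * w2
    return (fused.argmax(axis=1) == labels).mean()

pop = rng.random((40, 2 * n_lang))
for gen in range(100):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-20:]]      # truncation selection
    cut = rng.integers(1, 2 * n_lang, 20)     # one-point crossover positions
    children = np.where(np.arange(2 * n_lang) < cut[:, None],
                        parents, parents[rng.permutation(20)])
    # sparse Gaussian mutation
    children = children + rng.normal(0, 0.05, children.shape) * (rng.random(children.shape) < 0.1)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
```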
SCOPUS_ID:85125672569
|
A Genre-Based Approach to Teaching Argument Writing
|
This chapter provides an authentic classroom example of a research-based approach that secondary ESOL/ELA teachers can apply to teach ELLs from diverse cultural, linguistic, and educational backgrounds to write an academic-style, authoritative argument. Using the teaching and learning cycle (TLC) of genre pedagogy, teachers can make visible and tangible the language tools, or academic language resources, that ELLs can employ to write well in this critical genre. Grounded in theories of language and learning, teachers can use the TLC to design and implement instruction that strengthens ELLs’ academic language and literacy development while supporting learning of grade-level disciplinary content.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
http://arxiv.org/abs/2010.01345v1
|
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples
|
Generating adversarial examples for natural language is hard, as natural language consists of discrete symbols, and examples are often of variable lengths. In this paper, we propose a geometry-inspired attack for generating natural language adversarial examples. Our attack generates adversarial examples by iteratively approximating the decision boundary of Deep Neural Networks (DNNs). Experiments on two datasets with two different models show that our attack fools natural language models with high success rates, while only replacing a few words. Human evaluation shows that adversarial examples generated by our attack are hard for humans to recognize. Further experiments show that adversarial training can improve model robustness against our attack.
|
[
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
58,
4
] |
http://arxiv.org/abs/2004.03283v1
|
A German Corpus for Fine-Grained Named Entity Recognition and Relation Extraction of Traffic and Industry Events
|
Monitoring mobility- and industry-relevant events is important in areas such as personal travel planning and supply chain management, but extracting events pertaining to specific companies, transit routes and locations from heterogeneous, high-volume text streams remains a significant challenge. This work describes a corpus of German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. It has also been annotated with a set of 15 traffic- and industry-related n-ary relations and events, such as accidents, traffic jams, acquisitions, and strikes. The corpus consists of newswire texts, Twitter messages, and traffic reports from radio stations, police and railway companies. It allows for training and evaluating both named entity recognition algorithms that aim for fine-grained typing of geo-entities, as well as n-ary relation extraction systems.
|
[
"Relation Extraction",
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
75,
34,
3
] |
SCOPUS_ID:85059918318
|
A German corpus for fine-grained named entity recognition and relation extraction of traffic and industry events
|
Monitoring mobility- and industry-relevant events is important in areas such as personal travel planning and supply chain management, but extracting events pertaining to specific companies, transit routes and locations from heterogeneous, high-volume text streams remains a significant challenge. This work describes a corpus of German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. It has also been annotated with a set of 15 traffic- and industry-related n-ary relations and events, such as accidents, traffic jams, acquisitions, and strikes. The corpus consists of newswire texts, Twitter messages, and traffic reports from radio stations, police and railway companies. It allows for training and evaluating both named entity recognition algorithms that aim for fine-grained typing of geo-entities, as well as n-ary relation extraction systems.
|
[
"Relation Extraction",
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
75,
34,
3
] |
http://arxiv.org/abs/2203.11849v1
|
A Girl Has A Name, And It's ... Adversarial Authorship Attribution for Deobfuscation
|
Recent advances in natural language processing have enabled powerful privacy-invasive authorship attribution. To counter authorship attribution, researchers have proposed a variety of rule-based and learning-based text obfuscation approaches. However, existing authorship obfuscation approaches do not consider the adversarial threat model. Specifically, they are not evaluated against adversarially trained authorship attributors that are aware of potential obfuscation. To fill this gap, we investigate the problem of adversarial authorship attribution for deobfuscation. We show that adversarially trained authorship attributors are able to degrade the effectiveness of existing obfuscators from 20-30% to 5-10%. We also evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator was used. While there is a clear degradation in attribution accuracy, it is noteworthy that this degradation is still at or above the attribution accuracy of an attributor that is not adversarially trained at all. Our results underline the need for stronger obfuscation approaches that are resistant to deobfuscation.
|
[
"Ethical NLP",
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
17,
58,
4
] |
http://arxiv.org/abs/2005.00702v1
|
A Girl Has A Name: Detecting Authorship Obfuscation
|
Authorship attribution aims to identify the author of a text based on the stylometric analysis. Authorship obfuscation, on the other hand, aims to protect against authorship attribution by modifying a text's style. In this paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an adversarial threat model. An obfuscator is stealthy to the extent an adversary finds it challenging to detect whether or not a text modified by the obfuscator is obfuscated - a decision that is key to the adversary interested in authorship attribution. We show that the existing authorship obfuscation methods are not stealthy as their obfuscated texts can be identified with an average F1 score of 0.87. The reason for the lack of stealthiness is that these obfuscators degrade text smoothness, as ascertained by neural language models, in a detectable manner. Our results highlight the need to develop stealthy authorship obfuscation methods that can better protect the identity of an author seeking anonymity.
|
[
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
17,
4
] |
SCOPUS_ID:85111351604
|
A Gist Information Guided Neural Network for Abstractive Summarization
|
Abstractive summarization aims to condense the given documents and generate fluent summaries with important information. It is challenging to select the salient information and maintain semantic consistency between documents and summaries. To tackle these problems, we propose a novel framework, the Gist Information Guided Neural Network (GIGN), inspired by the way people usually summarize a document around its gist. First, we incorporate a multi-head attention mechanism with a self-adjusting query to extract the global gist of the input document, which acts as a question vector asking the model “What is the document gist?”. Through the interaction of the query and the input representations, the gist contains all salient semantics. Second, we propose a remaining-gist guided module to dynamically guide the generation process, which can effectively reduce redundancy by attending to different contents of the gist. Finally, we introduce a gist consistency loss to improve the consistency between inputs and outputs. We conduct experiments on the benchmark CNN/Daily Mail dataset to validate the effectiveness of our methods. The results indicate that our GIGN significantly outperforms all baseline models and achieves state-of-the-art performance.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85115015425
|
A Global Assessment of Climate Change Adaptation in Marine Protected Area Management Plans
|
Marine protected area (MPA) efficacy is increasingly challenged by climate change. Experts have identified clear climate change adaptation principles that MPA practitioners can incorporate into MPA management; however, adoption of these principles in MPA management remains largely unquantified. We conducted a text analysis of 647 English-language MPA management plans to assess the frequency with which they included climate change-related terms and terms pertaining to ecological, physical, and sociological components of an MPA system that may be impacted by climate change. Next, we manually searched 223 management plans to quantify the plans’ climate change robustness, which we defined as the degree of incorporation of common climate change adaptation principles. We found that climate change is inadequately considered in MPA management plans. Of all plans published since 2010, only 57% contained at least one of the climate change-related terms, “climate change,” “global warming,” “extreme events,” “natural variability,” or “climate variability.” The mean climate change robustness index of climate-considering management plans was 10.9 or 39% of a total possible score of 28. The United States was the only region that had plans with climate robustness indices of 20 or greater. By contrast, Canada lags behind other temperate jurisdictions in incorporating climate change adaptation analysis, planning, and monitoring into MPA management, with a mean climate change robustness index of 6.8. Climate change robustness scores have generally improved over time within the most common MPA designations in Oceania, the United Kingdom, and the United States, though the opposite is true in Canada. Our results highlight the urgent need for practitioners to incorporate climate change adaptation into MPA management in accordance with well-researched frameworks.
|
[
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
58,
4
] |
SCOPUS_ID:85110376349
|
A Global Past-Future Early Exit Method for Accelerating Inference of Pre-trained Language Models
|
The early exit mechanism aims to accelerate the inference speed of large-scale pre-trained language models. The essential idea is to exit early without passing through all the inference layers at the inference stage. To make accurate predictions for downstream tasks, the hierarchical linguistic information embedded in all layers should be jointly considered. However, much of the research up to now has been limited to using local representations of the exit layer. Such treatment inevitably loses the information of the unused past layers as well as the high-level features embedded in future layers, leading to sub-optimal performance. To address this issue, we propose a novel Past-Future method to make comprehensive predictions from a global perspective. We first take into consideration all the linguistic information embedded in the past layers and further engage the future information which is originally inaccessible for predictions. Extensive experiments demonstrate that our method outperforms previous early exit methods by a large margin, yielding better and more robust performance.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
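For background on the row above: the generic early-exit mechanism it improves on attaches a small classifier to every layer and stops as soon as the prediction is confident enough. A hedged sketch of that baseline mechanism (not the proposed Past-Future method), assuming PyTorch and a batch size of one:

```python
# Generic entropy-threshold early exit (baseline mechanism, not the paper's
# Past-Future method). Assumes a batch size of 1 for the .item() call.
import torch
import torch.nn as nn
import torch.nn.functional as F

def early_exit(layers, exit_heads, x, threshold=0.3):
    hidden = x
    for i, (layer, head) in enumerate(zip(layers, exit_heads)):
        hidden = layer(hidden)
        probs = F.softmax(head(hidden), dim=-1)
        entropy = -(probs * probs.clamp(min=1e-9).log()).sum(-1)
        if entropy.item() < threshold:        # confident enough: stop here
            return probs, i + 1
    return probs, len(layers)                 # fell through all layers

layers = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(6)])
heads = nn.ModuleList([nn.Linear(32, 4) for _ in range(6)])  # 4 toy classes
probs, used = early_exit(layers, heads, torch.randn(1, 32))
print(f"exited after {used} of {len(layers)} layers")
```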
SCOPUS_ID:85147548000
|
A Global Pointer based Entity Relation Extraction Method for Chinese Pulmonary Nodule Medical Records
|
To facilitate physicians' research on pulmonary nodule medical records, this paper proposes an entity relation extraction model based on Global Pointer, using the pre-trained language model RoFormer as the upstream encoder, the Exponential Moving Average optimization method, and the Fast Gradient Method for adversarial training. The proposed model can also analyze parent-child relations based on contextual semantics and then convert them into structured data. The experimental results show that this model improves the extraction effect significantly compared with traditional methods, and the F1 value reaches 86.2% on the Chinese pulmonary nodule medical records dataset.
|
[
"Relation Extraction",
"Information Extraction & Text Mining"
] |
[
75,
3
] |
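Both this row and the next rely on GlobalPointer, which scores every candidate (head, tail) token pair per entity or relation type. A simplified sketch of that scoring idea (hedged: it omits the rotary position encoding of the real GlobalPointer, and the dimensions are invented):

```python
# Simplified GlobalPointer-style span scorer; omits rotary position embeddings.
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    def __init__(self, hid_dim, n_types, head_dim=64):
        super().__init__()
        self.q = nn.Linear(hid_dim, head_dim * n_types)
        self.k = nn.Linear(hid_dim, head_dim * n_types)
        self.n_types, self.head_dim = n_types, head_dim

    def forward(self, h):                      # h: (batch, seq, hid_dim)
        b, s, _ = h.shape
        q = self.q(h).view(b, s, self.n_types, self.head_dim)
        k = self.k(h).view(b, s, self.n_types, self.head_dim)
        # scores[b, t, i, j]: span starting at token i, ending at token j, type t
        return torch.einsum("bimd,bjmd->bmij", q, k) / self.head_dim ** 0.5

scores = SpanScorer(hid_dim=768, n_types=5)(torch.randn(2, 20, 768))
print(scores.shape)   # torch.Size([2, 5, 20, 20])
```

In practice the encoder hidden states `h` would come from the pre-trained model (RoFormer in the row above), and spans whose score exceeds a threshold are emitted as entities.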
https://aclanthology.org//2022.seretod-1.2/
|
A GlobalPointer based Robust Approach for Information Extraction from Dialog Transcripts
|
With the widespread popularisation of intelligent technology, task-based dialogue systems (TOD) are increasingly being applied to a wide variety of practical scenarios. As key tasks in dialogue systems, named entity recognition and slot filling play a crucial role in the completeness and accuracy of information extraction. This is an evaluation paper for the SereTOD 2022 Workshop challenge (Track 1: Information extraction from dialog transcripts). We proposed a multi-model fusion approach based on GlobalPointer, combined with some optimisation tricks, and finally achieved an entity F1 of 60.73, an entity-slot-value triple F1 of 56, and an average F1 of 58.37, the highest score in the SereTOD 2022 Workshop challenge.
|
[
"Information Extraction & Text Mining",
"Robustness in NLP",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Responsible & Trustworthy NLP"
] |
[
3,
58,
11,
38,
4
] |
http://arxiv.org/abs/2106.03376v1
|
A Globally Normalized Neural Model for Semantic Parsing
|
In this paper, we propose a globally normalized model for context-free grammar (CFG)-based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.
|
[
"Semantic Parsing",
"Semantic Text Processing"
] |
[
40,
72
] |
SCOPUS_ID:85127519012
|
A Global–Local Attentive Relation Detection Model for Knowledge-Based Question Answering
|
Knowledge-based question answering (KBQA) is an essential but challenging task for artificial intelligence and natural language processing. A key challenge pertains to the design of effective algorithms for relation detection. Conventional methods model questions and candidate relations separately through the knowledge bases (KBs) without considering the rich word-level interactions between them. This approach may result in local optimal results. This article presents a global–local attentive relation detection model (GLAR) that utilizes the local module to learn the features of word-level interactions and employs the global module to acquire nonlinear relationships between questions and their candidate relations located in KBs. This article also reports on the application of an end-to-end retrieval-based KBQA system incorporating the proposed relation detection model. Experimental results obtained on two datasets demonstrated GLAR’s remarkable performance in the relation detection task. Furthermore, the functioning of end-to-end KBQA systems was significantly improved through the relation detection model, whose results on both datasets outperformed even state-of-the-art methods. Impact Statement—Knowledge-based question answering (KBQA) aims at answering user questions posed over the knowledge bases (KBs). KBQA helps users access knowledge in the KBs more easily, and it works on two subtasks: entity mention detection and relation detection. While existing relation detection algorithms perform well on the global representation of questions and relations sequences, they ignore some local semantic information on interaction cases between them. The technology proposed in this article takes both global and local interactions into account. With superior improvement on two relation detection tasks and two KBQA end tasks, the technology provides more precise answers. It could be used in more applications, including intelligent customer service, intelligent finance, and others.
|
[
"Natural Language Interfaces",
"Knowledge Representation",
"Semantic Text Processing",
"Question Answering"
] |
[
11,
18,
72,
27
] |
SCOPUS_ID:85126014945
|
A Glove CNN-Bilstm Sentiment Classification
|
Reviewing products online has become an increasingly popular way for consumers to voice their opinions and feelings about a product or service. Analyzing this big data of online reviews helps discern and extract useful facts and information that can provide a competitive and economic advantage to merchants and other interested organizations. Text classification organizes documents according to a variety of predefined categories. To address this task, we employ GloVe embeddings for review sentiment analysis and integrate this embedding layer into a deep convolutional neural network (CNN)-bidirectional LSTM model. We train our model on the IMDB and movie review datasets to extract the polarity as positive or negative and subsequently compare our model with other state-of-the-art models. These experiments validate the efficacy and superiority of our proposed approach.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning",
"Sentiment Analysis",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
12,
78,
36,
3
] |
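A rough sketch of the pipeline described in the row above, assuming TensorFlow/Keras; all layer sizes and sequence lengths are illustrative, and the embedding matrix would in practice be initialised from pre-trained GloVe vectors:

```python
# Illustrative GloVe + CNN + BiLSTM sentiment classifier (assumed shapes).
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, emb_dim = 20000, 100
model = tf.keras.Sequential([
    layers.Input(shape=(400,), dtype="int32"),  # padded review length (assumed)
    layers.Embedding(vocab_size, emb_dim, trainable=False),  # load GloVe weights here
    layers.Conv1D(128, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),      # sequence context in both directions
    layers.Dense(1, activation="sigmoid"),      # positive vs. negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```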
https://aclanthology.org//W93-0205/
|
A Goal-Based Grammar of Rhetoric
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
|
https://aclanthology.org//2020.inlg-1.22/
|
A Gold Standard Methodology for Evaluating Accuracy in Data-To-Text Systems
|
Most Natural Language Generation systems need to produce accurate texts. We propose a methodology for high-quality human evaluation of the accuracy of generated texts, which is intended to serve as a gold-standard for accuracy evaluations of data-to-text systems. We use our methodology to evaluate the accuracy of computer generated basketball summaries. We then show how our gold standard evaluation can be used to validate automated metrics.
|
[
"Data-to-Text Generation",
"Text Generation"
] |
[
16,
47
] |
https://aclanthology.org//W18-4601/
|
A Gold Standard to Measure Relative Linguistic Complexity with a Grounded Language Learning Model
|
This paper focuses on linguistic complexity from a relative perspective. It presents a grounded language learning system that can be used to study linguistic complexity from a developmental point of view and introduces a tool for generating a gold standard in order to evaluate the performance of the learning system. In general, researchers agree that it is more feasible to approach complexity from an objective or theory-oriented viewpoint than from a subjective or user-related point of view. Studies that have adopted a relative complexity approach have shown some preference for L2 learners. In this paper, we try to show that computational models of the process of language acquisition may be an important tool for considering children and the process of first language acquisition as suitable candidates for evaluating the complexity of languages.
|
[
"Semantic Text Processing",
"Text Complexity"
] |
[
72,
42
] |
SCOPUS_ID:85135878371
|
A Good Classifier is Not Enough: A XAI Approach for Urgent Instructor-Intervention Models in MOOCs
|
Deciding upon instructor intervention based on learners’ comments that need an urgent response in MOOC environments is a known challenge. The best solutions proposed use automatic machine learning (ML) models to predict the urgency. These are ‘black boxes’, with results opaque to humans. EXplainable artificial intelligence (XAI) aims to understand these, to enhance trust in artificial intelligence (AI)-based decision-making. We propose to apply XAI techniques to interpret a MOOC intervention model by analysing learner comments. We show how pairing a good predictor with XAI results, and especially colour-coded visualisation, could be used to support instructors making decisions on urgent intervention.
|
[
"Text Classification",
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
81,
4,
24,
3
] |
http://arxiv.org/abs/2110.08484v2
|
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models
|
Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FewVLM, relatively smaller than recent few-shot learners. For FewVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FewVLM with prompt-based learning outperforms Frozen which is 31x larger than FewVLM by 18.2% point and achieves comparable results to a 246x larger model, PICa. In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at \url{https://github.com/woojeongjin/FewVLM}
|
[
"Language Models",
"Low-Resource NLP",
"Visual Data in NLP",
"Semantic Text Processing",
"Multimodality",
"Responsible & Trustworthy NLP"
] |
[
52,
80,
20,
72,
74,
4
] |
https://aclanthology.org//W19-8672/
|
A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models
|
Deep neural networks (DNN) are quickly becoming the de facto standard modeling method for many natural language generation (NLG) tasks. In order for such models to truly be useful, they must be capable of correctly generating utterances for novel meaning representations (MRs) at test time. In practice, even sophisticated DNNs with various forms of semantic control frequently fail to generate utterances faithful to the input MR. In this paper, we propose an architecture agnostic self-training method to sample novel MR/text utterance pairs to augment the original training data. Remarkably, after training on the augmented data, even simple encoder-decoder models with greedy decoding are capable of generating semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.
|
[
"Text Generation"
] |
[
47
] |
SCOPUS_ID:85086457748
|
A Google Trends spatial clustering approach for a worldwide Twitter user geolocation
|
User location data is valuable for diverse social media analytics. In this paper, we address the non-trivial task of estimating a worldwide city-level Twitter user location considering only historical tweets. We propose a purely unsupervised approach that is based on a synthetic geographic sampling of Google Trends (GT) city-level frequencies of tweet nouns and three clustering algorithms. The approach was validated empirically by using a recently collected dataset, with 3,268 worldwide city-level locations of Twitter users, obtaining competitive results when compared with a state-of-the-art Word Distribution (WD) user location estimation method. The best overall results were achieved by the GT noun DBSCAN (GTN-DB) method, which is computationally fast, and correctly predicts the ground truth locations of 15%, 23%, 39% and 58% of the users for tolerance distances of 250 km, 500 km, 1,000 km and 2,000 km.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
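To make the clustering step in the row above concrete, here is a hedged sketch (invented sample points; not the authors' code) of density-based clustering over latitude/longitude with a haversine metric, in the spirit of the GTN-DB method:

```python
# Hypothetical illustration: DBSCAN over (lat, lon) points with a haversine
# metric, so eps can be expressed as a distance on the Earth's surface.
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_KM = 6371.0
rng = np.random.default_rng(1)
points_deg = rng.uniform([-60, -180], [70, 180], size=(300, 2))  # lat, lon
points_rad = np.radians(points_deg)        # haversine expects radians

db = DBSCAN(eps=250.0 / EARTH_RADIUS_KM,   # 250 km tolerance as an angle
            min_samples=5, metric="haversine", algorithm="ball_tree")
labels = db.fit_predict(points_rad)
# the centroid of the densest cluster would serve as the estimated location
print(f"{(labels >= 0).sum()} points clustered, {(labels == -1).sum()} noise")
```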
SCOPUS_ID:85042129795
|
A Gossamer Consensus: Discourses of Vulnerability in the Westminster Prostitution Policy Subsystem
|
Within much feminist scholarship, the concept of vulnerability is understood to possess progressive potential. Troubling Liberalism’s individualism, vulnerability theorists conceive of the subject as situated and formed through her various relational dependencies. Concurrently, the term vulnerability appears in much contemporary social policy. An emergent literature suggests, however, that policy and academic representations of vulnerability diverge in ideologically significant ways. In this article, I make a significant contribution to this body of work. I explore how 21 prostitution policy actors and 4 prostitution policy documents represent vulnerability, as they understand it to pertain to the sale and purchase of sex. I trace the many narratives strands which contribute to policy conversations regarding vulnerability and conclude by suggesting ‘vulnerability’ has become a ‘floating signifier’ – a surface of inscription encompassing contradictory political projects. Despite this, I suggest that the feminist ‘lens’ of vulnerability may provide us with a new way to understand prostitution debates.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85129243588
|
A Gradient Harmonic Grammar Account of Nasals in Extended Phonological Words
|
The article aims at contributing to the long-standing research on the prosodic organization of linguistic elements and the criteria used for identifying prosodic structures. Our focus is on final coronal nasals in function words in Greek and the variability in their patterns of realization before lexical words. Certain nasals coalesce before stops and delete before fricatives, whereas others do not. We propose that this split in the behavior of nasals does not pertain to item-specific prosody because the relevant strings are uniformly prosodified into an extended phonological word (Itô & Mester 2007, 2009). It rather stems from the contrastive activity level of nasals in underlying forms in the spirit of Smolensky & Goldrick's (2016) Gradient Symbolic Representations; nasals with lower activity coalesce and delete in the respective phonological environments, whereas those with higher activity do not. We show that the proposed analysis captures certain gradient effects that alternative analyses cannot account for.
|
[
"Language Models",
"Semantic Text Processing",
"Phonology",
"Syntactic Text Processing",
"Representation Learning"
] |
[
52,
72,
6,
15,
12
] |
SCOPUS_ID:85064845078
|
A Gradually Distilled CNN for SAR Target Recognition
|
Convolutional neural networks (CNNs) have been widely used in synthetic aperture radar (SAR) target recognition. Traditional CNNs suffer from expensive computation and high memory consumption, impeding their deployment in the real-time recognition systems of SAR sensors, as these systems have limited memory and low computation speed. In this paper, a micro CNN (MCNN) for real-time SAR recognition systems is proposed. The proposed MCNN has only two layers, and it is compressed from a deep convolutional neural network (DCNN) with 18 layers by a novel knowledge distillation algorithm called gradual distillation. MCNN is a ternary network: all its weights are either -1, 1, or 0. Following a student-teacher paradigm, the DCNN is the teacher network and MCNN is its student network. Gradual distillation gives MCNN a better learning route than traditional knowledge distillation. Experiments on the MSTAR dataset show that the proposed MCNN can obtain a high recognition rate that is almost the same as the DCNN's. Meanwhile, compared with the DCNN, the memory footprint of the proposed MCNN is 177 times smaller and its computational cost is 12.8 times lower, which means that the proposed MCNN can obtain better performance with a much smaller network.
|
[
"Language Models",
"Responsible & Trustworthy NLP",
"Semantic Text Processing",
"Green & Sustainable NLP"
] |
[
52,
4,
72,
68
] |
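For the row above: a standard Hinton-style distillation loss, given as a hedged stand-in for a single step of the paper's gradual distillation (the ternary quantisation of the student's weights is omitted); the temperature and mixing weight are illustrative:

```python
# Standard knowledge-distillation loss (teacher-student), as a stand-in for
# one step of gradual distillation; student weight ternarisation is omitted.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    # soft term: match the teacher's temperature-softened distribution
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # hard term: the usual cross-entropy against ground-truth labels
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                         torch.randint(0, 10, (8,)))
```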
https://aclanthology.org//W11-2101/
|
A Grain of Salt for the WMT Manual Evaluation
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:85109190317
|
A Grammar of Kunbarlang
|
This is a comprehensive linguistic description of Kunbarlang (Gunbalang), a highly endangered polysynthetic language of northern Australia. Kunbarlang belongs to the non-Pama-Nyungan Gunwinyguan language family and is currently spoken by nearly 40 people. This work draws on elicitations and analysis of narratives from the author's original fieldwork (2015–2018), as well as from previous recordings. The main areas covered are the sound system, morphology, syntax, and aspects of lexical and constructional semantics. Dictated by the polysynthetic structure of the language and the patterns of its use, the principal focus of the work is the analysis of the verbal complex and the interaction between the verb and other constituents of the clause. The analysis strikes a balance between taking into consideration the areal and genetic context, being informed by linguistic typology and theory, and at the same time remaining data-driven and theory-neutral in the way generalisations are stated. Against the Australian and a broader cross-linguistic background, Kunbarlang possesses remarkable features at all levels of its organisation.
|
[
"Linguistic Theories",
"Syntactic Text Processing",
"Linguistics & Cognitive NLP",
"Typology",
"Multilinguality"
] |
[
57,
15,
48,
45,
0
] |
SCOPUS_ID:84949989618
|
A Grammar of Old English
|
First published in 1992, A Grammar of Old English, Volume 1: Phonology was a landmark publication that in the intervening years has not been surpassed in its depth of scholarship and usefulness to the field. With the 2011 posthumous publication of Richard M. Hogg's Volume 2: Morphology, Volume 1 is again in print, now in paperback, so that scholars can own this complete work. • Takes account of major developments both in the field of Old English studies and in linguistic theory • Takes full advantage of the Dictionary of OldEnglish project at Toronto, and includes full cross-references to the DOE data • Fully utilizes work in phonemic and generative theory and related topics • Provides material crucial for future research both in diachronic and synchronic phonology and in historical sociolinguistics.
|
[
"Linguistics & Cognitive NLP",
"Phonology",
"Syntactic Text Processing",
"Linguistic Theories"
] |
[
48,
6,
15,
57
] |
SCOPUS_ID:85107389152
|
A Grammar-Aware Pointer Network for Abstractive Summarization
|
The pointer network (PN) has achieved breakthroughs in recent years of text summarization research. But it only focuses on the semantic relevance of the source sequence; in fact, the text should also comply with explicit grammar rules. Semantics and syntax are the two main granularities researched in this paper. In more detail, we propose a grammar-aware pointer network (GPAN) for abstractive summarization, which not only tracks the key semantics of the original text but also observes the syntax rules. To enforce the syntactic constraints, each word is attached with its part-of-speech (POS) tag and syntactic dependency (DEP) tag, which are input into the recurrent network during training. Then, we predict the POS and DEP tags at each decoder time step; in this way, the model learns to track the grammatical information of the ground truth. We evaluate our model on the benchmark CNN/Daily Mail and Gigaword datasets. The experimental results show that our model leads to significant improvements.
|
[
"Text Generation",
"Summarization",
"Syntactic Text Processing",
"Information Extraction & Text Mining"
] |
[
47,
30,
15,
3
] |
SCOPUS_ID:85136076234
|
A Grammar-Based Approach for Applying Visualization Taxonomies to Interaction Logs
|
Researchers collect large amounts of user interaction data with the goal of mapping user's workflows and behaviors to their high-level motivations, intuitions, and goals. Although the visual analytics community has proposed numerous taxonomies to facilitate this mapping process, no formal methods exist for systematically applying these existing theories to user interaction logs. This paper seeks to bridge the gap between visualization task taxonomies and interaction log data by making the taxonomies more actionable for interaction log analysis. To achieve this, we leverage structural parallels between how people express themselves through interactions and language by reformulating existing theories as regular grammars. We represent interactions as terminals within a regular grammar, similar to the role of individual words in a language, and patterns of interactions or non-terminals as regular expressions over these terminals to capture common language patterns. To demonstrate our approach, we generate regular grammars for seven existing visualization taxonomies and develop code to apply them to three public interaction log datasets. In analyzing these regular grammars, we find that the taxonomies at the low-level (i.e., terminals) show mixed results in expressing multiple interaction log datasets, and taxonomies at the high-level (i.e., regular expressions) have limited expressiveness, due to primarily two challenges: inconsistencies in interaction log dataset granularity and structure, and under-expressiveness of certain terminals. Based on our findings, we suggest new research directions for the visualization community to augment existing taxonomies, develop new ones, and build better interaction log recording processes to facilitate the data-driven development of user behavior taxonomies.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
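The row above reformulates visualization taxonomies as regular grammars over interaction terminals. A toy illustration of that idea (the tokens and the pattern are invented, not drawn from the paper's taxonomies):

```python
# Toy sketch: interactions as terminals, a taxonomy-level task as a regex.
import re

# encode each logged interaction as a single terminal symbol
ALPHABET = {"hover": "h", "click": "c", "filter": "f", "zoom": "z"}
log = ["hover", "click", "filter", "filter", "zoom", "click"]
string = "".join(ALPHABET[e] for e in log)

# hypothetical high-level pattern: an "explore" episode is one or more hovers
# or zooms followed by a click
EXPLORE = re.compile(r"[hz]+c")
print([m.span() for m in EXPLORE.finditer(string)])   # [(0, 2), (4, 6)]
```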
SCOPUS_ID:85135032670
|
A Grammatical Error Correction Model for English Essay Words in Colleges Using Natural Language Processing
|
Natural language processing technology comprises theories and approaches for exploring and developing effective human-computer communication. With the rapid growth of computer science and technology, statistical learning methods have become an important research area in artificial intelligence and semantic search. If there are errors in the semantic units (words and sentences), they will affect subsequent text analysis and semantic understanding, eventually affecting the performance of the whole application system. As a result, intelligent detection and correction of word and grammatical errors in English text is a significant and difficult aspect of natural language processing. This paper therefore examines word spelling and grammatical errors in undergraduate English essays and balances the mathematical-statistical models and technology solutions involved in intelligent error correction. The findings of this study fall into two aspects. (1) For nonword errors, four sorts of mistakes are studied: insertion, loss, replacement, and exchange between letters. The focus is on nonword mistakes and varied word forms (such as English abbreviations, hyphenated compound terms, and proper nouns) produced by word pronunciation difficulties. For real-word errors, this paper uses the nonword check information to build an optimal combination prediction method based on the suggested candidate list, and a real-word repair model is trained. This approach is 83.78% accurate when applied to real-word spelling errors in context. (2) The system verifies and corrects sentence grammar using context information from the text training set, as well as grammatical rules and statistical models. It also investigates singular/plural inconsistency, word confusion, subject-predicate inconsistency, and modal (auxiliary) verb errors, and includes sentence boundary disambiguation, part-of-speech tagging, named entity identification, and context information extraction. The software for checking and fixing grammatical mistakes presented in this article works on English texts at difficulty levels 4 and 6. Furthermore, this work achieves a clause correctness rate of 99.70%, and the system's average correction accuracy for level-4 and level-6 essays is more than 80%.
|
[
"Text Error Correction",
"Syntactic Text Processing"
] |
[
26,
15
] |
SCOPUS_ID:85103603573
|
A Granular Computing Approach to Provide Transparency of Intelligent Systems for Criminal Investigations
|
Criminal investigations involve repetitive information retrieval requests in high risk, high consequence, and time pressing situations. Artificial Intelligence (AI) systems can provide significant benefits to analysts, by sharing the burden of reasoning and speeding up information processing. However, for intelligent systems to be used in critical domains, transparency is crucial. We draw from human factors analysis and a granular computing perspective to develop Human-Centered AI (HCAI). Working closely with experts in the domain of criminal investigations we have developed an algorithmic transparency framework for designing AI systems. We demonstrate how our framework has been implemented to model the necessary information granules for contextual interpretability, at different levels of abstraction, in the design of an AI system. The system supports an analyst when they are conducting a criminal investigation, providing (i) a conversational interface to retrieve information through natural language interactions, and (ii) a recommender component for exploring, recommending, and pursuing lines of inquiry. We reflect on studies with operational intelligence analysts, to evaluate our prototype system and our approach to develop HCAI through granular computing.
|
[
"Natural Language Interfaces",
"Information Retrieval",
"Dialogue Systems & Conversational Agents"
] |
[
11,
24,
38
] |
SCOPUS_ID:85111062512
|
A Granular Computing-Driving Hesitant Fuzzy Linguistic Method for Supporting Large-Scale Group Decision Making
|
Considering the conditions that 1) the same linguistic term means different things to different people, 2) flexible semantics cannot be represented by original linguistic terms, and 3) some semantics given by decision makers may change during the consistency improving process, we bring flexibility and personality into hesitant fuzzy linguistic preference matrix structures by allowing the linguistic preference matrices to be granular rather than numeric, providing a new characterization of linguistic preference matrices. Inspired by granular computing, this article proposes a new hesitant fuzzy linguistic method to deal with situations where many decision makers provide hesitant and uncertain preference information in the decision-making process. First, we design a multiplicative consistency index and calculate its thresholds for different dimensions of the preference matrix via a Monte Carlo experiment. Then, we construct a hesitant fuzzy linguistic model with a granularity level, so as to recharacterize the original assessment information and improve the consistency of preference matrices as far as possible. Considering the features of some large-scale group decision-making situations, where the decision makers have little opportunity to take part in multiple consensus reaching processes, a hesitant fuzzy linguistic fuzzy C-means clustering algorithm is developed to integrate the assessment information given by decision makers, from which the final decision-making results are derived. An illustrative example of assessing the psychological situation of some COVID-19 infected persons clarifies the reasonability of the proposed method, and comparative studies and simulation experiments demonstrate the method's validity and advantages.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
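The aggregation step in the row above relies on a hesitant fuzzy linguistic variant of fuzzy C-means. A textbook fuzzy C-means loop, given only to illustrate the underlying clustering machinery (invented data, not the paper's variant):

```python
# Minimal textbook fuzzy C-means: soft memberships instead of hard assignments.
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)           # membership degrees sum to 1
    for _ in range(iters):
        um = u ** m                              # fuzzified memberships
        centers = um.T @ x / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))           # closer center -> larger membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

x = np.random.default_rng(1).random((60, 2))     # toy 2-D assessment vectors
centers, memberships = fuzzy_c_means(x)
```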
SCOPUS_ID:85084270545
|
A Graph Attention Model for Dictionary-Guided Named Entity Recognition
|
The lack of human annotations has been one of the main obstacles for neural named entity recognition in low-resource domains. To address this problem, there have been many efforts on automatically generating silver annotations according to domain-specific dictionaries. However, the information of domain dictionaries is usually limited, and the generated annotations may be noisy which poses significant challenges on learning effective models. In this work, we try to alleviate these issues by introducing a dictionary-guided graph attention model. First, domain-specific dictionaries are utilized to extract entity mention candidates by a graph matching algorithm, which can capture word patterns of domain entities. Furthermore, a word-mention interactive graph is leveraged to integrate the semantic and boundary information of entities into their context. We evaluated our model on the biomedical-domain datasets of recognizing chemical and disease entities, namely BC5CDR and NCBI disease corpora. The results show that our model outperforms several state-of-the-art models with different methodologies, such as feature-based models (e.g., BANNER), ensemble models (e.g., CollaboNet), multi-task learning models (e.g., MTM-CW), dictionary-based models (e.g., AutoNER). Moreover, the performance of our model is also comparable with BioBERT that owns huge parameters and needs large-scale pre-training.
|
[
"Multimodality",
"Named Entity Recognition",
"Structured Data in NLP",
"Information Extraction & Text Mining"
] |
[
74,
34,
50,
3
] |
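As a rough illustration of how a domain dictionary can yield silver annotations (the first stage the abstract above describes), here is a toy longest-match tagger in Python. The paper's graph matching algorithm and attention model are not reproduced, and all names and the dictionary contents are invented for the example.

```python
# Minimal sketch: longest-match dictionary lookup that turns a domain
# dictionary into silver BIO annotations. This approximates only the
# candidate-extraction step, not the graph attention model itself.

def silver_annotate(tokens, dictionary, max_len=5):
    """Greedy longest-match tagging against a set of entity phrases."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        match_len = 0
        # Try the longest span first so "breast cancer" beats "cancer".
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            if " ".join(tokens[i:i + n]).lower() in dictionary:
                match_len = n
                break
        if match_len:
            tags[i] = "B-ENT"
            for j in range(i + 1, i + match_len):
                tags[j] = "I-ENT"
            i += match_len
        else:
            i += 1
    return tags

chem_dict = {"aspirin", "acetylsalicylic acid"}  # toy dictionary
tokens = "patients received acetylsalicylic acid daily".split()
print(list(zip(tokens, silver_annotate(tokens, chem_dict))))
```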
SCOPUS_ID:85107380640
|
A Graph Convolutional Network with Multiple Dependency Representations for Relation Extraction
|
Dependency analysis can assist neural networks to capture semantic features within a sentence for entity relation extraction (RE). Both hard and soft strategies of encoding dependency tree structure have been developed to balance the beneficial extra information against the unfavorable interference in the task of RE. A wide application of graph convolutional network (GCN) in the field of natural language processing (NLP) has demonstrated its effectiveness in encoding the input sentence with the dependency tree structure, as well as its efficiency in parallel computation. This study proposes a novel GCN-based model using multiple representations to depict the dependency tree from various perspectives, and combines those dependency representations afterward to obtain a better sentence representation for relation classification. This model can maximally draw from the sentence the semantic features relevant to the relationship between entities. Results show that our model achieves state-of-the-art performance in terms of the F1 score (68.0) on the Text Analysis Conference relation extraction dataset (TACRED). In addition, we verify that the renormalization parameter in the GCN operation should be carefully chosen to help GCN-based models achieve its best performance.
|
[
"Semantic Text Processing",
"Relation Extraction",
"Structured Data in NLP",
"Syntactic Text Processing",
"Representation Learning",
"Multimodality",
"Syntactic Parsing",
"Information Extraction & Text Mining"
] |
[
72,
75,
50,
15,
12,
74,
28,
3
] |
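The renormalization the abstract refers to can be made concrete. Below is a minimal numpy sketch of one renormalized GCN layer, H' = ReLU(D̂^{-1/2} Â D̂^{-1/2} H W) with Â = A + λI; treating λ as the tunable renormalization parameter is our reading of the abstract, and all names and sizes are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W, lam=1.0):
    """One GCN layer with the renormalization trick.

    A  : (n, n) adjacency (e.g., from a dependency tree)
    H  : (n, d_in) node features
    W  : (d_in, d_out) weights
    lam: weight of the added self-loops (lam=1 is the standard trick)
    """
    A_hat = A + lam * np.eye(A.shape[0])      # add (weighted) self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU

# Toy dependency graph over 3 tokens.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.random.randn(3, 4)
W = np.random.randn(4, 2)
print(gcn_layer(A, H, W, lam=1.0).shape)  # (3, 2)
```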
http://arxiv.org/abs/2205.10822v1
|
A Graph Enhanced BERT Model for Event Prediction
|
Predicting the subsequent event for an existing event context is an important but challenging task, as it requires understanding the underlying relationship between events. Previous methods propose to retrieve relational features from an event graph to enhance the modeling of event correlation. However, the sparsity of the event graph may restrict the acquisition of relevant graph information and hence hurt model performance. To address this issue, we consider automatically building an event graph using a BERT model. To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections during training. Hence, at test time, the connection relationship for unseen events can be predicted by the structured variable. Results on two event prediction tasks, script event prediction and story ending prediction, show that our approach can outperform state-of-the-art baseline methods.
|
[
"Language Models",
"Structured Data in NLP",
"Semantic Text Processing",
"Multimodality"
] |
[
52,
50,
72,
74
] |
SCOPUS_ID:85081589654
|
A Graph Grammar Approach to the Design and Validation of Floor Plans
|
Researchers have proposed many approaches to generate floor plans using shape grammars. None of them, however, verifies the semantic relations among rooms. This paper presents a generic approach for grammar specification, grammar induction, validation, and design generation of house floor plans using their path graphs based on the reserved graph grammar (RGG) formalism. In our approach, the connectivity of a floor plan is analyzed by user-specified graph grammar transformation rules, also known as productions. Floor plans of houses in different styles share common attributes while retaining specific features. By identifying these features, our approach validates floor plans in different styles with user-specified graph productions. A graph grammar induction engine is also introduced to assist designers by automatically inferring graph productions from an input graph set. In addition, the derivation process in RGG offers the capability of generating floor plan designs. Two types of constraints, specified as attribute sets, are introduced to generate floor plans meeting a wide range of requirements. To evaluate this generic approach, we design a set of productions to validate and generate floor plans in the style of Frank Lloyd Wright's prairie houses. The results are discussed, and further research is suggested.
|
[
"Text Error Correction",
"Structured Data in NLP",
"Syntactic Text Processing",
"Multimodality"
] |
[
26,
50,
15,
74
] |
SCOPUS_ID:85135077584
|
A Graph Neural Network-Based Approach for Predicting Second Rise of Information Diffusion on Social Networks
|
Recently, with the rise of social media, research on information diffusion prediction has drawn much attention from scholars. The problem has important applications in public opinion monitoring, social advertising, etc. Through an in-depth diffusion analysis, we observed an interesting phenomenon in which a message may undergo a “second rise” after its popularity has peaked for a while. However, this phenomenon has not yet been investigated by existing works. Moreover, the valuable information contained in repost comments was not fully utilized. To fill this gap, this paper proposes a graph neural network-based model for predicting the second rise of information diffusion. Specifically, we first design a simple but efficient algorithm to determine whether a message has a second rise. Then, text analysis is carried out on the repost comments, and different message-text bipartite graphs are constructed according to different types of textual information (topics, comments, @users, emojis). After that, we use random walks to generate node sequences and apply the skip-gram model to learn node representations, followed by a PCA-based dimension reduction. The compressed textual features are combined with embeddings learned from the repost network before they are finally fed to downstream machine learning models to generate predictions. Experimental results on the Weibo dataset show that the overall prediction accuracy can be improved significantly by incorporating the textual features.
|
[
"Multimodality",
"Structured Data in NLP",
"Semantic Text Processing",
"Representation Learning"
] |
[
74,
50,
72,
12
] |
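The pipeline step "random walks + skip-gram over a message-text bipartite graph" can be sketched in a few lines with networkx and gensim. This is a generic DeepWalk-style approximation, not the authors' exact configuration; the graph contents and hyperparameters below are illustrative.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, num_walks=10, walk_len=8):
    """Uniform random walks over the graph, emitted as node-ID 'sentences'."""
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

# Toy message-text bipartite graph: messages m*, textual features t*.
G = nx.Graph([("m1", "t_topic:covid"), ("m1", "t_emoji:fire"),
              ("m2", "t_topic:covid"), ("m2", "t_user:@news")])
walks = random_walks(G)
model = Word2Vec(walks, vector_size=16, window=3, min_count=0, sg=1, epochs=20)
print(model.wv["m1"].shape)  # 16-dim node embedding for message m1
```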
http://arxiv.org/abs/2012.11099v2
|
A Graph Reasoning Network for Multi-turn Response Selection via Customized Pre-training
|
We investigate response selection for multi-turn conversation in retrieval-based chatbots. Existing studies pay more attention to the matching between utterances and responses by calculating the matching score based on learned features, leading to insufficient model reasoning ability. In this paper, we propose a graph-reasoning network (GRN) to address the problem. GRN first conducts pre-training based on ALBERT using next utterance prediction and utterance order prediction tasks specifically devised for response selection. These two customized pre-training tasks can endow our model with the ability to capture semantic and chronological dependencies between utterances. We then fine-tune the model on an integrated network with sequence reasoning and graph reasoning structures. The sequence reasoning module conducts inference based on the highly summarized context vector of utterance-response pairs from the global perspective. The graph reasoning module conducts reasoning on the utterance-level graph neural network from the local perspective. Experiments on two conversational reasoning datasets show that our model can dramatically outperform strong baseline methods and achieve performance close to human level.
|
[
"Language Models",
"Semantic Text Processing",
"Structured Data in NLP",
"Knowledge Graph Reasoning",
"Reasoning",
"Multimodality"
] |
[
52,
72,
50,
54,
8,
74
] |
http://arxiv.org/abs/2010.06801v1
|
A Graph Representation of Semi-structured Data for Web Question Answering
|
The abundant semi-structured data on the Web, such as HTML-based tables and lists, provide commercial search engines a rich information source for question answering (QA). Different from plain text passages in Web documents, Web tables and lists have inherent structures, which carry semantic correlations among various elements in tables and lists. Many existing studies treat tables and lists as flat documents with pieces of text and do not make good use of semantic information hidden in structures. In this paper, we propose a novel graph representation of Web tables and lists based on a systematic categorization of the components in semi-structured data as well as their relations. We also develop pre-training and reasoning techniques on the graph model for the QA task. Extensive experiments on several real datasets collected from a commercial engine verify the effectiveness of our approach. Our method improves F1 score by 3.90 points over the state-of-the-art baselines.
|
[
"Semantic Text Processing",
"Structured Data in NLP",
"Question Answering",
"Representation Learning",
"Natural Language Interfaces",
"Multimodality"
] |
[
72,
50,
27,
12,
11,
74
] |
http://arxiv.org/abs/2101.00153v1
|
A Graph Total Variation Regularized Softmax for Text Generation
|
The softmax operator is one of the most important functions in machine learning models. When applying neural networks to multi-category classification, the correlations among different categories are often ignored. For example, in text generation, a language model chooses each new word based only on the previously selected context. In this scenario, the link statistics of concurrent words in a corpus (an analogy to the natural way of expression) are also valuable in choosing the next word, which can help improve a sentence's fluency and smoothness. To fully exploit such important information, we propose a graph softmax function for text generation. It is expected that the final classification result would be dominated by both the language model and the graphical text relationships among words. We use a graph total variation term to regularize softmax so as to incorporate the concurrence relationship into the language model; the total variation of the generated words should be locally small. We apply the proposed graph softmax to GPT-2 for the text generation task. Experimental results demonstrate that the proposed graph softmax achieves better BLEU and perplexity than softmax. Human testers can also easily distinguish between text generated by the graph softmax and by softmax.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Structured Data in NLP",
"Multimodality",
"Text Generation",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
50,
74,
47,
36,
3
] |
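One plausible reading of the regularizer is a cross-entropy loss plus a total-variation penalty over word-concurrence edges on the predicted distribution. The numpy sketch below encodes that reading; the paper's exact formulation may differ, and the λ value and edge list are made up.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def graph_tv_loss(logits, target, edges, lam=0.1):
    """Cross-entropy plus a graph total-variation penalty on the
    predicted distribution: TV = sum over edges (i, j) of |p_i - p_j|.
    `edges` are word-concurrence links over the vocabulary."""
    p = softmax(logits)
    ce = -np.log(p[target] + 1e-12)
    tv = sum(abs(p[i] - p[j]) for i, j in edges)
    return ce + lam * tv

logits = np.array([2.0, 1.0, 0.5, -1.0])
edges = [(0, 1), (1, 2)]          # concurrent word pairs from the corpus
print(graph_tv_loss(logits, target=0, edges=edges))
```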
SCOPUS_ID:85089072760
|
A Graph-Based Indexing Technique to Enhance the Performance of Boolean and Queries in Big Data Systems
|
This paper introduces a new graph-based indexing (GBI) technique for big data systems. It uses a directed graph structure that effectively captures the simultaneous occurrence of multiple keywords in the same document. The objective is to use the relationship between the search keywords captured in the graph structure to effectively retrieve all results of Boolean AND queries at once. The performance of the proposed technique is compared with the conventional inverted index-based technique. This paper highlights that, irrespective of the intersection algorithm used to evaluate Boolean AND queries, GBI always returns Boolean AND search results faster than the inverted index. This is due to the fact that GBI always performs a smaller number of intersection operations and avoids intersection if search keywords do not have a common document. A preliminary performance analysis is performed through prototyping and measurement on a system subjected to a synthetic workload. The analysis shows that GBI improves search latency when executing Boolean AND queries by an average of 69% to 99.9% in comparison to the inverted index.
|
[
"Indexing",
"Structured Data in NLP",
"Information Retrieval",
"Multimodality"
] |
[
69,
50,
24,
74
] |
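A toy version of the idea, with keyword nodes plus edges that cache the documents shared by two keywords, shows why a two-term AND query needs no posting-list intersection. This is a deliberate simplification of the paper's directed-graph design.

```python
from collections import defaultdict

class GraphIndex:
    """Toy graph-based index: nodes are keywords; an edge (u, v) stores
    the set of documents containing both u and v, so a two-term AND
    query is a single edge lookup instead of an intersection."""
    def __init__(self):
        self.node_docs = defaultdict(set)    # keyword -> documents
        self.edge_docs = defaultdict(set)    # (kw1, kw2) -> documents

    def add(self, doc_id, keywords):
        kws = sorted(set(keywords))
        for k in kws:
            self.node_docs[k].add(doc_id)
        for i in range(len(kws)):
            for j in range(i + 1, len(kws)):
                self.edge_docs[(kws[i], kws[j])].add(doc_id)

    def and_query(self, k1, k2):
        # Empty result if the keywords share no document: no work at all.
        return self.edge_docs.get(tuple(sorted((k1, k2))), set())

idx = GraphIndex()
idx.add("d1", ["big", "data", "index"])
idx.add("d2", ["big", "graph"])
print(idx.and_query("big", "data"))   # {'d1'} -- no intersection needed
```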
SCOPUS_ID:85088571437
|
A Graph-Based Keyphrase Extraction Model with Three-Way Decision
|
Keyphrase extraction has been a popular research topic in the field of natural language processing in recent years, but how to extract keyphrases precisely and effectively is still a challenge. The mainstream methods are supervised learning methods and graph-based methods. Generally, supervised methods perform better than unsupervised ones. However, supervised methods have many problems, such as the difficulty of obtaining training data, the cost of labeling, and the limitations of the classification function trained on the training data. In recent years, the graph-based method has made great progress and its extraction performance is getting closer to that of supervised methods, so graph-based keyphrase extraction has received wide attention from researchers. In this paper, we propose a new model that applies three-way decision theory to a graph-based keyphrase extraction model. In our model, we propose algorithms that divide the set of candidate phrases into the positive domain, the boundary domain, and the negative domain depending on graph-based attributes, and then combine candidate phrases in the positive domain with those in the boundary domain, qualified by graph-based and non-graph-based attributes, to obtain keyphrases. Experimental results show that our model can effectively improve extraction precision compared with baseline methods.
|
[
"Multimodality",
"Structured Data in NLP",
"Term Extraction",
"Information Extraction & Text Mining"
] |
[
74,
50,
1,
3
] |
http://arxiv.org/abs/2109.12319v1
|
A Graph-Based Neural Model for End-to-End Frame Semantic Parsing
|
Frame semantic parsing is a semantic analysis task based on FrameNet which has received great attention recently. The task usually involves three subtasks performed sequentially: (1) target identification, (2) frame classification and (3) semantic role labeling. The three subtasks are closely related, yet previous studies model them individually, which ignores their internal connections and meanwhile induces an error propagation problem. In this work, we propose an end-to-end neural model to tackle the task jointly. Concretely, we exploit a graph-based method, regarding frame semantic parsing as a graph construction problem. All predicates and roles are treated as graph nodes, and their relations are taken as graph edges. Experiment results on two benchmark datasets of frame semantic parsing show that our method is highly competitive, resulting in better performance than pipeline models.
|
[
"Semantic Parsing",
"Structured Data in NLP",
"Semantic Text Processing",
"Multimodality"
] |
[
40,
50,
72,
74
] |
SCOPUS_ID:85075573577
|
A Graph-Based Node Identification Model in Social Networks
|
An enormous amount of data is being generated on many social networking sites. Among them, Twitter is the platform that generates the most data, including commercial data, business data, events, and polls. Various tasks can be performed with the generated data; one of them is finding the trending keywords in this huge volume of data. To obtain the trending keywords, we have to consider certain factors that improve model performance, including the centrality measure, term frequency, and the position of the nodes in the network. Therefore, we propose a graph-based keyword extraction approach referred to as Multi-Attribute Keyword Extraction (MAKE), which determines the significance of a keyword by collectively taking various influencing parameters into account. The proposed graph-based approach is more accurate in finding the importance of a node in the network when the above-mentioned factors are included.
|
[
"Multimodality",
"Structured Data in NLP",
"Term Extraction",
"Information Extraction & Text Mining"
] |
[
74,
50,
1,
3
] |
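A minimal sketch of combining the three factors the abstract lists (centrality, term frequency, node position) into one keyword score, using networkx. The mixing weights and the co-occurrence window are illustrative guesses, not the paper's MAKE formula.

```python
import networkx as nx
from collections import Counter

def make_scores(tokens, window=2, alpha=0.5, beta=0.3, gamma=0.2):
    """Toy multi-attribute score: degree centrality + term frequency
    + an early-position bonus, with illustrative mixing weights."""
    G = nx.Graph()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            G.add_edge(w, tokens[j])              # co-occurrence edge
    cent = nx.degree_centrality(G)
    tf = Counter(tokens)
    n = len(tokens)
    first = {}
    for i, w in enumerate(tokens):
        first.setdefault(w, i)                    # first position of each word
    return {w: alpha * cent.get(w, 0.0)
               + beta * tf[w] / n
               + gamma * (1 - first[w] / n) for w in set(tokens)}

toks = "twitter data keyword trend keyword graph data data".split()
print(sorted(make_scores(toks).items(), key=lambda kv: -kv[1])[:3])
```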
SCOPUS_ID:85112410598
|
A Graph-Based Opinion Mining Approach for Reducing Information Loss and Overload in Product Reviews Analysis
|
Information overload is a real challenge that product designers face while trying to glean insight from online product reviews. Opinion summaries are limited in the richness of insight, and text summarizations pose the risk of information loss for aspect-based analysis. Although much effort has been spent yearly to advance research in search, opinion analysis, and text summarization, the same cannot be said for the provision of practical models and tools to leverage these advancements. In this work, we propose a graph-based method centred on the Labelled Property Graph, sentiment analysis, and text summarization. It indexes all tokens and opinions and allows for an explorative approach to aspect sentiment analysis while providing targeted sentence extracts for text summarization through opinion-aware search. We show the limitations of text summarization for designers and show how our model can avoid them with expressive pattern matching and property filtering without reprocessing aspect sentiments.
|
[
"Information Extraction & Text Mining",
"Opinion Mining",
"Structured Data in NLP",
"Summarization",
"Aspect-based Sentiment Analysis",
"Sentiment Analysis",
"Text Generation",
"Multimodality"
] |
[
3,
49,
50,
30,
23,
78,
47,
74
] |
https://aclanthology.org//W10-1204/
|
A Graph-Based Semi-Supervised Learning for Question Semantic Labeling
|
[
"Low-Resource NLP",
"Semantic Text Processing",
"Structured Data in NLP",
"Semantic Search",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Multimodality"
] |
[
80,
72,
50,
41,
4,
24,
74
] |
|
http://arxiv.org/abs/2303.10395v1
|
A Graph-Guided Reasoning Approach for Open-ended Commonsense Question Answering
|
Recently, end-to-end trained models for multiple-choice commonsense question answering (QA) have delivered promising results. However, such question-answering systems cannot be directly applied in real-world scenarios where answer candidates are not provided. Hence, a new benchmark challenge set for open-ended commonsense reasoning (OpenCSR) has been recently released, which contains natural science questions without any predefined choices. On the OpenCSR challenge set, many questions require implicit multi-hop reasoning and have a large decision space, reflecting the difficult nature of this task. Existing work on OpenCSR solely focuses on improving the retrieval process, which extracts relevant factual sentences from a textual knowledge base, leaving the important and non-trivial reasoning task outside its scope. In this work, we extend the scope to include a reasoner that constructs a question-dependent open knowledge graph based on retrieved supporting facts and employs a sequential subgraph reasoning process to predict the answer. The subgraph can be seen as a concise and compact graphical explanation of the prediction. Experiments on two OpenCSR datasets show that the proposed model achieves strong performance on the benchmarks.
|
[
"Commonsense Reasoning",
"Structured Data in NLP",
"Question Answering",
"Natural Language Interfaces",
"Reasoning",
"Information Retrieval",
"Multimodality"
] |
[
62,
50,
27,
11,
8,
24,
74
] |
SCOPUS_ID:85029188969
|
A Graph-based Approach of Automatic Keyphrase Extraction
|
Existing graph-based ranking techniques for keyphrase extraction only consider the connections between words in a document, ignoring the impact of the sentence. Motivated by the fact that a word must be important if it appears in many important sentences, we propose to take full advantage of the reinforcement between words and sentences by merging three kinds of relationships between them. Moreover, a document typically covers several topics, and the extracted keyphrases should be synthetic in the sense that they deal with all the main topics in the document. Inspired by this, we take a topic model into consideration. Experimental results show that our approach performs better than a state-of-the-art keyphrase extraction method on two datasets under three evaluation metrics.
|
[
"Multimodality",
"Structured Data in NLP",
"Term Extraction",
"Information Extraction & Text Mining"
] |
[
74,
50,
1,
3
] |
http://arxiv.org/abs/1904.04697v2
|
A Graph-based Model for Joint Chinese Word Segmentation and Dependency Parsing
|
Chinese word segmentation and dependency parsing are two fundamental tasks for Chinese natural language processing. The dependency parsing is defined on word-level. Therefore word segmentation is the precondition of dependency parsing, which makes dependency parsing suffer from error propagation and unable to directly make use of the character-level pre-trained language model (such as BERT). In this paper, we propose a graph-based model to integrate Chinese word segmentation and dependency parsing. Different from previous transition-based joint models, our proposed model is more concise, which results in fewer efforts of feature engineering. Our graph-based joint model achieves better performance than previous joint models and state-of-the-art results in both Chinese word segmentation and dependency parsing. Besides, when BERT is combined, our model can substantially reduce the performance gap of dependency parsing between joint models and gold-segmented word-based models. Our code is publicly available at https://github.com/fastnlp/JointCwsParser.
|
[
"Language Models",
"Semantic Text Processing",
"Structured Data in NLP",
"Syntactic Text Processing",
"Syntactic Parsing",
"Text Segmentation",
"Multimodality"
] |
[
52,
72,
50,
15,
28,
21,
74
] |
SCOPUS_ID:85140761769
|
A Graph-based Topic Modeling Approach to Detection of Irrelevant Citations
|
In recent years, academic paper influence analysis has been widely studied due to its potential applications in multiple areas of science information metrics and retrieval. By identifying the academic influence of papers, authors, etc., we can directly help researchers reach relevant academic papers. These recommended candidate papers are not only highly relevant to their desired research topics but also highly attended by the research community within these topics. The rapid development of academic networks such as Google Scholar, ResearchGate, CiteSeerX, etc., has significantly boosted the number of new papers published annually. It also helps to strengthen borderless cooperation between researchers who are interested in the same research topics. However, current academic networks still lack the capability of guiding researchers deeper into the most influential papers. They also largely ignore partially relevant or irrelevant papers, which are not fully related to researchers' current topics of interest. Moreover, the distribution of topics within academic papers varies, and it is difficult to extract the main concentrated topics in these papers. This makes it challenging for researchers to find appropriate, high-quality reference resources while doing research. To overcome this limitation, in this paper we propose a novel approach to paper influence analysis through content-based and citation relationship-based analyses within the bibliographic network. In order to effectively extract topic-based relevance from papers, we apply an integrated graph-based citation relationship analysis with a topic modeling approach, named TopCite, to automatically learn the distributions of keyword-labeled topics in an unsupervised fashion. Then, we use the constructed graph-based paper-topic structure to identify the papers' relevancy levels. The identified relevancy levels between papers can support improving the accuracy of other bibliographic network mining tasks, such as paper similarity measurement, recommendation, etc. Extensive experiments on the real-world AMiner bibliographic dataset demonstrate the effectiveness of the ideas proposed in this paper.
|
[
"Topic Modeling",
"Multimodality",
"Structured Data in NLP",
"Information Extraction & Text Mining"
] |
[
9,
74,
50,
3
] |
SCOPUS_ID:85100357000
|
A Graph-boosted Framework for Adverse Drug Event Detection on Twitter
|
Detecting adverse drug events from Twitter is expected to reveal unreported side effects, thereby complementing current spontaneous reporting systems. However, existing studies usually use only word embeddings as the input for deep learning models, which ignores the structural information of sentences. In addition, deep learning models usually require a large number of cases for training, but the scale of annotated corpora available for this task is limited. To solve the above problems, we propose a graph-boosted framework that converts the text into a graph structure. By using pre-trained graph embeddings and word embeddings for model training, our proposed framework provides richer semantic and structural information for prediction. The experimental results show that the proposed method can be used with different deep learning models and brings improvements when using the TwiMed corpus at different scales.
|
[
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Structured Data in NLP",
"Representation Learning",
"Event Extraction",
"Multimodality"
] |
[
3,
72,
50,
12,
31,
74
] |
http://arxiv.org/abs/2104.08443v1
|
A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering
|
In recent years, conversational agents have provided a natural and convenient access to useful information in people's daily life, along with a broad and new research topic, conversational question answering (QA). Among the popular conversational QA tasks, conversational open-domain QA, which requires to retrieve relevant passages from the Web to extract exact answers, is more practical but less studied. The main challenge is how to well capture and fully explore the historical context in conversation to facilitate effective large-scale retrieval. The current work mainly utilizes history questions to refine the current question or to enhance its representation, yet the relations between history answers and the current answer in a conversation, which is also critical to the task, are totally neglected. To address this problem, we propose a novel graph-guided retrieval method to model the relations among answers across conversation turns. In particular, it utilizes a passage graph derived from the hyperlink-connected passages that contains history answers and potential current answers, to retrieve more relevant passages for subsequent answer extraction. Moreover, in order to collect more complementary information in the historical context, we also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding. Experimental results on the public dataset verify the effectiveness of our proposed method. Notably, the F1 score is improved by 5% and 11% with predicted history answers and true history answers, respectively.
|
[
"Structured Data in NLP",
"Question Answering",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Information Retrieval",
"Multimodality"
] |
[
50,
27,
11,
38,
24,
74
] |
http://arxiv.org/abs/1805.02473v3
|
A Graph-to-Sequence Model for AMR-to-Text Generation
|
The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging LSTM for encoding a linearized AMR structure. Although being able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which result in long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature.
|
[
"Language Models",
"Semantic Text Processing",
"Structured Data in NLP",
"Representation Learning",
"Text Generation",
"Multimodality"
] |
[
52,
72,
50,
12,
47,
74
] |
SCOPUS_ID:85136090771
|
A Graphical User Interface (GUI) Based Speech Recognition System Using Deep Learning Models
|
Presently, TSPI (Time Space Position Information), surveillance, and search-and-track systems increasingly depend on Artificial Intelligence technologies such as fingerprint-, iris- (eye-scanning), facial recognition-, and voice-based identification. Such systems often demand remote operation, which needs to be secured for authorized operators/users. To incorporate this technology, a few human features play an important role in establishing unique identity. In this paper, we have tried to build an intelligent system that can recognize authorized voice input and convert it into commands in the correct sequence for real-time control and operation. A deep learning model is implemented in Python to evaluate the accuracy of voice-command recognition. The same process has also been linked in parallel to the existing Graphical User Interface (GUI) of the management console for monitoring. The main emphasis is on building a reference dataset to correlate against for recognition, and then processing the voice input (for real-time operation) through an efficient and fast method. Intelligent modules are incorporated for better accuracy and to generate the correct sequencing of commands on the basis of the weight/relevance factor of the voice input through various layers.
|
[
"Text Generation",
"Speech Recognition",
"Speech & Audio in NLP",
"Multimodality"
] |
[
47,
10,
70,
74
] |
SCOPUS_ID:85145007420
|
A Gravitational Search Algorithm Study on Text Summarization Using NLP
|
Over the last decade, the amount of data available on the internet has grown exponentially. As a result, the need arises for a solution that converts this massive raw data into valuable information that a human can comprehend. Text Summarization (TS) is the procedure of generating a synopsis of a particular document that comprises only the most critical information from the original; the objective is to obtain a concise summary of the important points in the document. Text summarization is one technique frequently used in research to aid in the management of massive amounts of data. Automated summarization is a well-known technique for distilling the main points of a document: it preserves the significant information in the text by creating a condensed version of it. Both extractive and abstractive methods of text summarization are available. Extractive summarization methods alleviate some of the burden of summarization by extracting a subset of relevant sentences from the original text. The objective of the abstractive method over multiple records is to create a condensed form of the records while retaining essential information. While numerous methods are available, investigators studying NLP with the Gravitational Search Algorithm predominantly focus on extractive methods, in which the importance of sentences is determined in terms of linguistic and statistical characteristics. Finally, this paper compiles the recent and pertinent studies in the area of TS for further research and examination. It will be important because it provides a new path for forthcoming scholars interested in this domain.
|
[
"Text Generation",
"Summarization",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
47,
30,
24,
3
] |
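Since the survey centers on extractive methods that score sentences by statistical characteristics, a bare-bones frequency-plus-position extractive scorer may help fix ideas. This is a generic illustration, not tied to any specific GSA-based system the survey reviews.

```python
from collections import Counter
import re

def extractive_summary(text, k=2):
    """Score sentences by normalized word frequency plus a small bonus
    for early position, then keep the top-k in original order."""
    sents = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    top = max(freq.values())

    def score(i, s):
        toks = re.findall(r"[a-z']+", s.lower())
        if not toks:
            return 0.0
        f = sum(freq[w] for w in toks) / (len(toks) * top)
        return f + 0.1 * (1 - i / len(sents))     # slight lead bias

    ranked = sorted(range(len(sents)), key=lambda i: -score(i, sents[i]))[:k]
    return " ".join(sents[i] for i in sorted(ranked))

doc = ("Text summarization condenses documents. Extractive methods pick "
       "key sentences. Abstractive methods rewrite content. Summarization "
       "helps manage large text collections.")
print(extractive_summary(doc, k=2))
```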
SCOPUS_ID:85125230946
|
A Green Pipeline for Out-of-Domain Public Sentiment Analysis
|
In the changing social and economic environment, organisations are keen to respond promptly and appropriately to changes. Sentiment analysis can be applied to social media data to capture timely information about new events and the corresponding public opinions. However, social topics and trending words are currently changing just as rapidly as the target topics and domains that organisations are interested in investigating. Therefore, there is a need for a well-trained sentiment analysis model able to handle out-of-domain input. Current solutions mainly focus on using domain adaptation techniques, but these solutions require domain-specific data and inevitably introduce extra overheads. To tackle this challenge, we propose a green Artificial Intelligence (AI) solution for a sentiment analysis pipeline (GreenSAP) to gain a better understanding of changing public opinions on social media. Specifically, we propose to leverage the expressive power of the pre-trained Transformer encoder, and make use of several publicly-available sentiment analysis datasets from various domains and scenarios to develop a pipeline model. A sarcasm detection model is also included to eliminate false positive predictions. In experiments, this model significantly outperforms its competitors on three public benchmark datasets and on two of our labelled out-of-domain datasets for real-world applications.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85136196574
|
A Green(er) World for A.I.
|
As research and practice in artificial intelligence (A.I.) grow in leaps and bounds, the resources necessary to sustain and support their operations also grow at an increasing pace. While innovations and applications from A.I. have brought significant advances, from applications to vision and natural language to improvements to fields like medical imaging and materials engineering, their costs should not be neglected. As we embrace a world with ever-increasing amounts of data as well as research & development of A.I. applications, we are sure to face an ever-mounting energy footprint to sustain these computational budgets, data storage needs, and more. But, is this sustainable and, more importantly, what kind of setting is best positioned to nurture such sustainable A.I. in both research and practice? In this paper, we outline our outlook for Green A.I. - a more sustainable, energy-efficient and energy-aware ecosystem for developing A.I. across the research, computing, and practitioner communities alike - and the steps required to arrive there. We present a bird's eye view of various areas for potential changes and improvements from the ground floor of AI's operational and hardware optimizations for datacenter/HPCs to the current incentive structures in the world of A.I. research and practice, and more. We hope these points will spur further discussion, and action, on some of these issues and their potential solutions.
|
[
"Responsible & Trustworthy NLP",
"Green & Sustainable NLP"
] |
[
4,
68
] |
SCOPUS_ID:85050942695
|
A Grey Wolf Optimizer for Text Document Clustering
|
Text clustering problem (TCP) is a leading process in many key areas such as information retrieval, text mining, and natural language processing. This presents the need for a potent document clustering algorithm that can be used effectively to navigate, summarize, and arrange information across large data sets. This paper presents an adaptation of the grey wolf optimizer (GWO) for the TCP, referred to as TCP-GWO. The TCP demands a degree of accuracy beyond that which is possible with generic metaheuristic swarm-based algorithms. The main issue to be addressed is how to split text documents, on the basis of GWO, into homogeneous clusters that are sufficiently precise and functional. Specifically, TCP-GWO, the document clustering algorithm, uses the average distance of documents to the cluster centroid (ADDC) as an objective function to repeatedly optimize the distance between the clusters of documents. The accuracy and efficiency of the proposed TCP-GWO were demonstrated on a sufficiently large number of documents of variable sizes, randomly selected from six publicly available data sets. Documents of high complexity were also included in the evaluation process to assess the recall detection rate of the clustering algorithm. The experimental results for a test set of over 1300 documents showed that failure to correctly cluster a document occurred in less than 20% of cases, with a recall rate of more than 65% for a highly complex data set. The high F-measure rate and the ability to cluster documents effectively are important advances resulting from this research. The proposed TCP-GWO method was compared to other well-established text clustering methods using randomly selected data sets. Interestingly, TCP-GWO outperforms the comparative methods in terms of precision, recall, and F-measure rates. In a nutshell, the results illustrate that the proposed TCP-GWO excels compared to the other clustering methods in terms of the measurement criteria, whereby more than 55% of the documents were correctly clustered with a high level of accuracy.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
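The ADDC objective that TCP-GWO optimizes is easy to state in code: the mean distance from each document vector to its cluster centroid. A numpy sketch with toy vectors follows; the GWO search loop itself is omitted.

```python
import numpy as np

def addc(X, labels, centroids):
    """Average distance of documents to their cluster centroid (ADDC),
    the objective a candidate clustering is scored by."""
    d = np.linalg.norm(X - centroids[labels], axis=1)
    return d.mean()

# Toy TF-IDF-like vectors for 4 documents in 2 clusters.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
print(addc(X, labels, centroids))   # lower is better
```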
SCOPUS_ID:34250369499
|
A Gricean analysis of understanding in economic experiments
|
Understanding is an issue of crucial importance in economic experiments. Different ideas of how to achieve full understanding have resulted in disparate and contradictory recommendations on the correct methods for economic experiments. It is argued that a more systematic approach is necessary based on the linguistic theories of pragmatics put forward by Grice. This provides resources for assessing understanding in practical experiments.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
http://arxiv.org/abs/1904.05426v1
|
A Grounded Unsupervised Universal Part-of-Speech Tagger for Low-Resource Languages
|
Unsupervised part of speech (POS) tagging is often framed as a clustering problem, but practical taggers need to 'ground' their clusters as well. Grounding generally requires reference labeled data, a luxury a low-resource language might not have. In this work, we describe an approach for low-resource unsupervised POS tagging that yields fully grounded output and requires no labeled training data. We find the classic method of Brown et al. (1992) clusters well in our use case and employ a decipherment-based approach to grounding. This approach presumes a sequence of cluster IDs is a 'ciphertext' and seeks a POS tag-to-cluster ID mapping that will reveal the POS sequence. We show intrinsically that, despite the difficulty of the task, we obtain reasonable performance across a variety of languages. We also show extrinsically that incorporating our POS tagger into a name tagger leads to state-of-the-art tagging performance in Sinhalese and Kinyarwanda, two languages with nearly no labeled POS data available. We further demonstrate our tagger's utility by incorporating it into a true 'zero-resource' variant of the Malopa (Ammar et al., 2016) dependency parser model that removes the current reliance on multilingual resources and gold POS tags for new languages. Experiments show that including our tagger makes up much of the accuracy lost when gold POS tags are unavailable.
|
[
"Low-Resource NLP",
"Information Extraction & Text Mining",
"Syntactic Text Processing",
"Text Clustering",
"Tagging",
"Responsible & Trustworthy NLP"
] |
[
80,
3,
15,
29,
63,
4
] |
https://aclanthology.org//2021.nlp4posimpact-1.16/
|
A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results
|
Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is the loss of human interaction that is central to doctor-patient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: a human avatar to facilitate medically grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes that may give the patient more options for using the agent, not just for medical question answering but also to engage in conversations about general topics and current events. Both the avatar and the multiple interaction modes could help improve adherence. We present a high-level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype, and present preliminary results.
|
[
"Question Answering",
"Natural Language Interfaces",
"Ethical NLP",
"Dialogue Systems & Conversational Agents",
"Responsible & Trustworthy NLP"
] |
[
27,
11,
17,
38,
4
] |
SCOPUS_ID:85127500944
|
A Group-Centric Intelligence Recommendation System for Twitter
|
A group-centric recommendation system derives coherent collective results from the information given by Twitter users. Even though different input data formats have been used to represent user preferences, the input information mode is static. To avoid this shortcoming, this paper proposes a system which enables users to give partial or incomplete preference information at different times. Since this is a complicated problem, this paper focuses on one specific application (recommending movies) as a first attempt. Accordingly, the re-analysis of variant input datasets and the maximum consensus mining problem, with the help of sentiment analysis, review analysis, and rating analysis, merge individual proposals into clusters of suggestions under the dynamic input-mode assumption. The outcome demonstrates that the proposed strategy is computationally efficient and can effectively identify a general consensus among all users.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85129860195
|
A Guided Topic-Noise Model for Short Texts
|
Researchers using social media data want to understand the discussions occurring in and about their respective fields. These domain experts often turn to topic models to help them see the entire landscape of the conversation, but unsupervised topic models often produce topic sets that miss topics experts expect or want to see. To solve this problem, we propose Guided Topic-Noise Model (GTM), a semi-supervised topic model designed with large domain-specific social media data sets in mind. The input to GTM is a set of topics that are of interest to the user and a small number of words or phrases that belong to those topics. These seed topics are used to guide the topic generation process, and can be augmented interactively, expanding the seed word list as the model provides new relevant words for different topics. GTM uses a novel initialization and a new sampling algorithm called Generalized Polya Urn (GPU) seed word sampling to produce a topic set that includes expanded seed topics, as well as new unsupervised topics. We demonstrate the robustness of GTM on open-ended responses from a public opinion survey and four domain-specific Twitter data sets.
|
[
"Low-Resource NLP",
"Topic Modeling",
"Information Extraction & Text Mining",
"Responsible & Trustworthy NLP"
] |
[
80,
9,
3,
4
] |
SCOPUS_ID:85133652507
|
A HISTORICAL TOURISM RECOMMENDATION SYSTEM FOR THE ELDERLY TOURIST USING NATURAL LANGUAGE PROCESSING AND THE ONTOLOGY TECHNIQUE
|
This paper presents a proposed methodology and framework for a recommendation system for elderly tourists which enables semantic inferencing through the inclusion of an ontology, together with natural language processing for analyzing user queries, to increase the efficiency and accuracy of the recommendation system. Given that the target user demographic is Thai tourists, the natural language utilized is Thai. The physical limitations experienced by elderly tourists are specified as significant factors when suggesting the choices of tourist destinations presented and making recommendations appropriate to that demographic. To enhance the efficiency of the system, a new “facility” class is defined in the Elderly Historical Tourism Ontology which provides information regarding the availability of elder-related facilities that can accommodate the physical limitations of the elderly tourist. A 95% precision of retrieved information was achieved in the proposed system.
|
[
"Responsible & Trustworthy NLP",
"Knowledge Representation",
"Semantic Text Processing",
"Green & Sustainable NLP"
] |
[
4,
18,
72,
68
] |
SCOPUS_ID:77950567405
|
A HIT-based semantic search approach in unstructured P2P systems
|
An effective semantic search approach based on a hierarchical interest tree (HIT) is proposed for unstructured P2P systems. Documents owned by a peer are classified into categories to build a HIT, which is sent to a super peer. Meanwhile, the inverted document index (IDI) of the top n terms for each category is also sent to a super peer according to their Chi-square (χ2) statistic values. When a regular peer sends a query and gives a category semantic similarity threshold Simth, query messages are forwarded via an effective query routing algorithm and the results are returned by searching the HIT. The approach is flexible, since each peer can set its own Simth, which provides a more personalized service. The experiments show that the HIT-based semantic search approach is more accurate and efficient than previous methods.
|
[
"Semantic Search",
"Semantic Text Processing",
"Semantic Similarity",
"Information Retrieval"
] |
[
41,
72,
53,
24
] |
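The per-category top-n term selection by Chi-square statistic can be sketched with scikit-learn's chi2 in a one-vs-rest fashion. The corpus and category names are toy data, and this is only an approximation of how the IDI terms might be chosen.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

docs = ["jazz piano concert", "rock guitar concert",
        "python code compiler", "java code debugger"]
cats = ["music", "music", "software", "software"]

vec = CountVectorizer()
X = vec.fit_transform(docs)
terms = np.array(vec.get_feature_names_out())

def top_terms(category, n=3):
    """Chi-square score of each term against membership in `category`
    (one-vs-rest), keeping the top-n terms for that category's IDI."""
    y = [int(c == category) for c in cats]
    scores, _ = chi2(X, y)
    return terms[np.argsort(scores)[::-1][:n]].tolist()

print(top_terms("music"))
```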
SCOPUS_ID:80052658271
|
A HMM and FSVMbased model to chinese named entity recognition
|
Chinese Named Entity Recognition, as a task of providing important semantic information, is a critical first step in information extraction and question answering systems. This paper proposes a hybrid method for NE recognition which combines an HMM model and an FSVM model. At the bottom level of the system, person names and simple NEs are recognized by the character-based SVM. At the top level of the system, complicated NEs are recognized by the word-based SVM. The character-based and word-based SVMs are integrated. The adoption of fuzzy SVM helps reduce the impact of noise samples and abnormal data, and improves the accuracy of the system.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:11144337200
|
A HTK-based system to recognise Arabic script
|
This paper presents a cursive Arabic script recognition system. The system decomposes the document image into text line images and divides each text line image into smaller overlapped frames. The system extracts a set of simple statistical features from each frame and then injects the sequence of feature vectors into the Hidden Markov Model Toolkit (HTK), a portable toolkit for building speech recognition systems. The proposed system is applied to a sophisticated cursive Arabic font.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85135379273
|
A HUMAN FACTORS STUDY OF SPEECH-TO-TEXT TECHNOLOGY: CONSEQUENCES OF DISCRETE SPEECH
|
Three experiments are reported which aimed at analyzing the learnability of discrete speech as well as the joint influences of discrete speech, modality and location of input commands and speech velocity on performance (composing standardized business letters; reading short texts). Results reveal that discrete speech can be learned quickly, and that a considerable amount of additional tasks can be managed while speaking discretely. Some consequences of these results are discussed which may be relevant for the design of interfaces to speech-to-text systems.
|
[
"Text Generation",
"Speech & Audio in NLP",
"Speech Recognition",
"Multimodality"
] |
[
47,
70,
10,
74
] |
SCOPUS_ID:85124628532
|
A HYBRID CNN-BILSTM MODEL FOR DRUG NAMED ENTITY RECOGNITION
|
Named Entity Recognition (NER) has been proven successful and useful in many domains for extracting real-world entities. In the medical domain, this study aims to extract drug name entities, using drug product, ingredient, and drug dose as categories to classify the extracted entities. 18,075 sentences containing drug-related information were collected from articles written in Bahasa Indonesia. Our main contribution lies in presenting the drug NER model and evaluating the CNN-BiLSTM architecture for the Indonesian language. We used a hybrid Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM) deep learning architecture that automatically detects word- and character-level features. We trained the architecture with six different hyper-parameter sets to find the best model. Based on the experiments, the best result was obtained by one of the models in terms of its F1 score. The model uses two CNN layers with a kernel size of 7 and 50 filters, a single LSTM layer with 200 hidden units, and an additional chunk-tag-based feature. The model achieved an F1 score of 0.892, a precision of 0.881, and a recall of 0.903.
|
[
"Language Models",
"Named Entity Recognition",
"Semantic Text Processing",
"Information Extraction & Text Mining"
] |
[
52,
34,
72,
3
] |
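The best-performing configuration reported (two CNN layers with kernel size 7 and 50 filters, one BiLSTM with 200 hidden units) maps naturally onto a short Keras model. The sketch below assumes word-level inputs and sparse tag labels; the vocabulary size, sequence length, and the reading that the 200 units apply per direction are our assumptions, not the paper's exact setup.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, TAGS, MAXLEN, EMB = 5000, 4, 60, 100   # illustrative sizes

inp = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, EMB)(inp)
# Two CNN layers, 50 filters each, kernel size 7 (the best setting reported).
x = layers.Conv1D(50, 7, padding="same", activation="relu")(x)
x = layers.Conv1D(50, 7, padding="same", activation="relu")(x)
# Single BiLSTM layer with 200 hidden units (assumed per direction).
x = layers.Bidirectional(layers.LSTM(200, return_sequences=True))(x)
# Per-token softmax over the entity tag set.
out = layers.TimeDistributed(layers.Dense(TAGS, activation="softmax"))(x)

model = tf.keras.Model(inp, out)
model.compile("adam", "sparse_categorical_crossentropy")
model.summary()
```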
SCOPUS_ID:85060633008
|
A Haar Classifier Based Call Number Detection and Counting Method for Library Books (Kütüphane Kitaplarinda Yer Numaralarini Bulmak ve Saymak için Haar Siniflandirici Tabanli bir Yöntem)
|
Counting and organizing books in libraries is a routine and time-consuming task, made more complicated by misplaced books on shelves. In order to solve these problems, we propose an automated visual call number (book-id) detection and counting system in this paper. The method employs a Haar feature-based classifier from the OpenCV library and a cloud-based OCR system to decode characters from images. To develop and test the method, we acquired and organized a dataset of 1000 book call numbers. The proposed method has been tested on 20 bookshelf images containing 233 call numbers, which resulted in a true detection rate of 96% and a false detection rate of 1.75 per image. For the OCR step, the number of falsely recognized characters per call number was 0.76.
|
[
"Visual Data in NLP",
"Text Classification",
"Multimodality",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
20,
36,
74,
24,
3
] |
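The detection-then-OCR pipeline can be approximated with OpenCV's CascadeClassifier and an OCR engine. The cascade file and image paths below are hypothetical, and pytesseract stands in for the cloud OCR service the paper used.

```python
import cv2
import pytesseract

# Hypothetical file names: a trained cascade for call-number labels
# and a shelf photograph.
cascade = cv2.CascadeClassifier("callnumber_cascade.xml")
img = cv2.imread("bookshelf.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect candidate call-number regions, then OCR each crop.
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    crop = gray[y:y + h, x:x + w]
    text = pytesseract.image_to_string(crop).strip()
    print(f"call number @({x},{y}): {text!r}")
print(f"{len(boxes)} call numbers counted")
```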
http://arxiv.org/abs/1807.07149v1
|
A Hand-Held Multimedia Translation and Interpretation System with Application to Diet Management
|
We propose a network independent, hand-held system to translate and disambiguate foreign restaurant menu items in real-time. The system is based on the use of a portable multimedia device, such as a smartphones or a PDA. An accurate and fast translation is obtained using a Machine Translation engine and a context-specific corpora to which we apply two pre-processing steps, called translation standardization and $n$-gram consolidation. The phrase-table generated is orders of magnitude lighter than the ones commonly used in market applications, thus making translations computationally less expensive, and decreasing the battery usage. Translation ambiguities are mitigated using multimedia information including images of dishes and ingredients, along with ingredient lists. We implemented a prototype of our system on an iPod Touch Second Generation for English speakers traveling in Spain. Our tests indicate that our translation method yields higher accuracy than translation engines such as Google Translate, and does so almost instantaneously. The memory requirements of the application, including the database of images, are also well within the limits of the device. By combining it with a database of nutritional information, our proposed system can be used to help individuals who follow a medical diet maintain this diet while traveling.
|
[
"Visual Data in NLP",
"Machine Translation",
"Explainability & Interpretability in NLP",
"Multimodality",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multilinguality"
] |
[
20,
51,
81,
74,
47,
4,
0
] |
https://aclanthology.org//W09-3952/
|
A Handsome Set of Metrics to Measure Utterance Classification Performance in Spoken Dialog Systems
|
[
"Text Classification",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
11,
38,
24,
3
] |
|
SCOPUS_ID:85128505639
|
A Handwritten Chinese Characters Files Text Recognition Method Based on Inception Structure
|
Objectives: In order to solve the problem that the accuracy of handwritten Chinese character text recognition is not high, an end-to-end method for handwritten Chinese character text recognition based on a convolutional neural network and a recurrent neural network is proposed. Methods: Firstly, a convolutional neural network constructed with Inception modules is used to extract the basic features of the text image. Secondly, a recurrent neural network is used to predict from the extracted features and output a probability distribution over the Chinese character set. Finally, the connectionist temporal classification algorithm is used to compute the recognition results and construct the loss function. Results: The proposed method is tested on a handwritten Chinese character text dataset; experimental results show that the Inception module and the data enhancement method can effectively improve the performance of the algorithm, obtaining a recognition accuracy of 71.2% and a text editing distance of 0.060. Conclusion: Our proposed method can conduct end-to-end handwritten Chinese character text recognition and improves the recognition accuracy compared with existing methods.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
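The role of connectionist temporal classification here, training without per-character alignments, is captured by PyTorch's nn.CTCLoss. A minimal sketch with random tensors follows; the shapes and toy alphabet size are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

T, B, C = 30, 2, 11                      # time steps, batch, classes (0 = blank)
logits = torch.randn(T, B, C, requires_grad=True)
log_probs = logits.log_softmax(dim=2)    # CTC expects log-probs, shape (T, B, C)

targets = torch.randint(1, C, (B, 8))    # toy label sequences (no blanks)
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 8, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                          # end-to-end, no per-character alignment
print(loss.item())
```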
SCOPUS_ID:85049991611
|
A Hash-Based Approach for Document Retrieval by Utilizing Term Features
|
Digital data on servers increase over time, which has led researchers to focus on this field. Various issues arise on the server side, such as data handling, security, and maintenance. In this paper, an approach for document retrieval is proposed which efficiently fetches documents according to a user-given query. Hash-based indexing of the dataset documents is performed by utilizing term features. In order to provide privacy for the terms, each term is identified by a unique number, and each document has its own hash index key for identification. Experiments were performed on real and artificial datasets. Results show that the NDCG, precision, and recall of this work are better than previous work on datasets of different sizes.
|
[
"Document Retrieval",
"Information Retrieval"
] |
[
56,
24
] |
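A toy rendering of the scheme, with terms replaced by unique numeric IDs and documents addressed by hash keys, is shown below. The actual term-feature hashing in the paper is more involved; every structure here is a simplification for illustration.

```python
import hashlib

class HashIndex:
    """Toy index in the spirit of the approach: each term gets a unique
    numeric ID (hiding the raw term), each document a hash key, and the
    index maps term IDs to sets of document hash keys."""
    def __init__(self):
        self.term_ids, self.index, self.docs = {}, {}, {}

    def _tid(self, term):
        return self.term_ids.setdefault(term, len(self.term_ids))

    def add(self, text):
        key = hashlib.sha1(text.encode()).hexdigest()[:10]  # doc hash key
        self.docs[key] = text
        for t in set(text.lower().split()):
            self.index.setdefault(self._tid(t), set()).add(key)
        return key

    def query(self, terms):
        sets = [self.index.get(self.term_ids.get(t.lower(), -1), set())
                for t in terms]
        return set.intersection(*sets) if sets else set()

ix = HashIndex()
ix.add("hash based document retrieval")
ix.add("graph based retrieval")
print(ix.query(["document", "retrieval"]))
```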
SCOPUS_ID:85027234745
|
A Health QA with Enhanced User Interfaces
|
Health-related Question Answering (QA) systems have proven to be useful to patients. However, most QA systems focus on improving system performance against a standard set of questions, but neglect the problem of designing effective user interfaces. We build a health QA system with an enhanced user interface which offers three answer formats: a single answer, a list of fragments, and a combination of fragments. At the same time, we add entities into the re-ranking of the candidate answers and the result display to help patients better understand the relationship between question and answer. The experiment proves the effectiveness of our method.
|
[
"Natural Language Interfaces",
"Question Answering"
] |
[
11,
27
] |
https://aclanthology.org//W09-0615/
|
A Hearer-Oriented Evaluation of Referring Expression Generation
|
[
"Text Generation"
] |
[
47
] |
|
SCOPUS_ID:85027588189
|
A Hebbian account of entrenchment and (over)-extension in language learning
|
In production, frequently used words are preferentially extended to new, though related meanings. In comprehension, frequent exposure to a word instead makes the learner confident that all of the word's legitimate uses have been experienced, resulting in an entrenched form-meaning mapping between the word and its experienced meaning(s). This results in a perception-production dissociation, where the forms speakers are most likely to map onto a novel meaning are precisely the forms that they believe can never be used that way. At first glance, this result challenges the idea of bidirectional form-meaning mappings, assumed by all current approaches to linguistic theory. In this paper, we show that bidirectional form-meaning mappings are not in fact challenged by this production-perception dissociation. We show that the production-perception dissociation is expected even if learners of the lexicon acquire simple symmetrical form-meaning associations through simple Hebbian learning.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
SCOPUS_ID:85131968909
|
A Hereditary Attentive Template-based Approach for Complex Knowledge Base Question Answering Systems
|
Knowledge Base Question Answering systems (KBQA) aim to find answers to natural language questions over a knowledge base. This work presents a template matching approach for Complex KBQA systems (C-KBQA) using the combination of Semantic Parsing and Neural Networks techniques to classify natural language questions into answer templates. An attention mechanism was created to assist a Tree-LSTM in selecting the most important information. The approach was evaluated on the LC-Quad 1, LC-Quad 2, ComplexWebQuestion, and WebQuestionsSP datasets, and the results show that our approach outperforms other approaches on three datasets.
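The attention-assisted readout can be pictured as follows; this is a generic sketch over node states, and the dimensions and scoring form are assumptions rather than the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class AttentivePool(nn.Module):
    def __init__(self, dim=128, n_templates=50):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # importance of each node state
        self.cls = nn.Linear(dim, n_templates)

    def forward(self, node_states):           # (num_nodes, dim) from a Tree-LSTM
        a = torch.softmax(self.score(node_states), dim=0)  # node weights
        pooled = (a * node_states).sum(dim=0)              # weighted summary
        return self.cls(pooled)                            # template logits

logits = AttentivePool()(torch.randn(7, 128))
print(logits.shape)                            # torch.Size([50])
```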
|
[
"Semantic Text Processing",
"Semantic Parsing",
"Question Answering",
"Natural Language Interfaces",
"Knowledge Representation"
] |
[
72,
40,
27,
11,
18
] |
SCOPUS_ID:85131096151
|
A Heretical Defence of the Unity of Form and Content
|
The received view in the debate on the form-content unity of poetry is that the possibility of paraphrase does not sit well with the unity conception. I will suggest a shift from paraphrase to translation, since the latter is substantially closer to the heart of the matter. I will heretically divert from the 'commonplace' view, which claims that poetry cannot be translated. However, I will argue that the possibility of translation in this sense can be reconciled, appearances notwithstanding, with the unity of form and content. A further surprising conclusion will be that, while this possibility prima facie appears to be the best argument against the unity, the contrary is the case.
|
[
"Paraphrasing",
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
32,
51,
47,
0
] |
SCOPUS_ID:85132202347
|
A Hesitant Fuzzy Linguistic TOPSIS Model to Support Supplier Segmentation
|
Objective: this study proposes a hesitant fuzzy linguistic TOPSIS model for supplier segmentation based on economic, environmental, and social criteria. Proposal: the model classifies suppliers in a segmentation matrix considering their capabilities and willingness to collaborate. It was implemented using Microsoft Excel© and applied to a hydropower plant. Two employees of the company chose a set of segmentation criteria, assigned weights to these criteria, and evaluated the performance of suppliers. In the pilot application, the performance of six suppliers was analyzed and ranked according to 28 criteria. The classification results were endorsed by the decision-makers involved. Conclusion: the model provides consistent results and can assist managers in designing development programs aimed at improving the economic, environmental, and social performance of suppliers. Additionally, it can support group decisions under uncertainty and hesitation, allows the use of linguistic expressions, and does not limit the number of criteria or alternatives.
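The crisp TOPSIS core underneath such a model looks like the sketch below; the hesitant fuzzy linguistic variant replaces the numeric scores with linguistic term sets, which this sketch omits. The scores and weights are invented.

```python
import numpy as np

X = np.array([[7., 5., 9.],          # suppliers x criteria (benefit criteria)
              [6., 8., 7.],
              [9., 6., 5.]])
w = np.array([0.5, 0.3, 0.2])        # criteria weights

R = X / np.linalg.norm(X, axis=0)    # vector-normalize each criterion
V = R * w                            # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(-closeness))        # supplier ranking, best first
```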
|
[
"Text Segmentation",
"Syntactic Text Processing"
] |
[
21,
15
] |
http://arxiv.org/abs/2107.00841v2
|
A Heterogeneous Graph Attention Network for Multi-hop Machine Reading Comprehension
|
Multi-hop machine reading comprehension is a challenging task in natural language processing that requires reasoning across multiple documents. Spectral models based on graph convolutional networks grant inference abilities and lead to competitive results. However, some of them still face the challenge of presenting the reasoning in a human-understandable way. Inspired by the concept of grandmother cells in cognitive neuroscience, a spatial graph attention framework named ClueReader is proposed in this paper, imitating that procedure. The model is designed to assemble semantic features in multi-level representations and automatically concentrate or attenuate information for reasoning via the attention mechanism. The name ClueReader is a metaphor for the pattern of the model: regard the subjects of queries as the starting points of clues, take the reasoning entities as bridge points, consider the latent candidate entities as the grandmother cells, and the clues end up in candidate entities. The proposed model allows us to visualize the reasoning graph and then analyze the importance of edges connecting two entities and the selectivity in the mention and candidate nodes, which is easier to comprehend empirically. Official evaluations on the open-domain multi-hop reading dataset WikiHop and the drug-drug interactions dataset MedHop demonstrate the validity of our approach and show the potential for applying the model in the molecular biology domain.
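For orientation, a single-head graph attention layer in the spirit of this model family can be sketched as below; this is generic and is not the ClueReader architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    def __init__(self, din=64, dout=64):
        super().__init__()
        self.W = nn.Linear(din, dout, bias=False)
        self.a = nn.Linear(2 * dout, 1, bias=False)

    def forward(self, h, adj):                      # h: (N, din), adj: (N, N)
        z = self.W(h)                               # project node states
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)
        e = e.masked_fill(adj == 0, float("-inf"))  # attend along edges only
        return torch.softmax(e, dim=-1) @ z         # weighted neighbor mix

adj = torch.eye(5) + torch.diag(torch.ones(4), 1)   # tiny chain with self-loops
out = GraphAttention()(torch.randn(5, 64), adj)
print(out.shape)                                    # torch.Size([5, 64])
```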
|
[
"Multimodality",
"Reasoning",
"Structured Data in NLP",
"Machine Reading Comprehension"
] |
[
74,
8,
50,
37
] |
http://arxiv.org/abs/2004.12057v1
|
A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts
|
We study question answering over a dynamic textual environment. Although neural network models achieve impressive accuracy via learning from input-output examples, they rarely leverage various types of knowledge and are generally not interpretable. In this work, we propose a graph-based approach, where a heterogeneous graph is automatically built with factual knowledge of the context, temporal knowledge of the past states, and logical knowledge that combines human-curated knowledge bases and rule bases. We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner. Experimental results on a benchmark dataset show that the injection of various types of knowledge improves a strong neural network baseline. An additional benefit of our approach is that the graph itself naturally serves as a rationale behind the decision making.
|
[
"Natural Language Interfaces",
"Structured Data in NLP",
"Question Answering",
"Multimodality"
] |
[
11,
50,
27,
74
] |
http://arxiv.org/abs/1912.07911v1
|
A Heterogeneous Graphical Model to Understand User-Level Sentiments in Social Media
|
Social Media has seen a tremendous growth in the last decade and is continuing to grow at a rapid pace. With such adoption, it is increasingly becoming a rich source of data for opinion mining and sentiment analysis. The detection and analysis of sentiment in social media is thus a valuable topic and attracts a lot of research efforts. Most of the earlier efforts focus on supervised learning approaches to solve this problem, which require expensive human annotations and therefore limits their practical use. In our work, we propose a semi-supervised approach to predict user-level sentiments for specific topics. We define and utilize a heterogeneous graph built from the social networks of the users with the knowledge that connected users in social networks typically share similar sentiments. Compared with the previous works, we have several novelties: (1) we incorporate the influences/authoritativeness of the users into the model, (2) we include comment-based and like-based user-user links to the graph, (3) we superimpose multiple heterogeneous graphs into one, thereby allowing multiple types of links to exist between two users.
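A bare-bones version of the "connected users share sentiments" premise is label propagation over the user graph. The sketch below is generic, with an invented graph and seed labels; the paper's heterogeneous edge types would become differently weighted adjacency matrices.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]   # user-user links
A = np.zeros((5, 5))
for u, v in edges:                    # symmetric adjacency
    A[u, v] = A[v, u] = 1.0
seeds = {0: 1.0, 4: -1.0}             # labeled users: +1 positive, -1 negative

f = np.zeros(5)
for _ in range(100):                  # propagate: average neighbor scores
    f = A @ f / A.sum(axis=1)
    for u, y in seeds.items():        # clamp the labeled seeds each round
        f[u] = y
print(np.round(f, 2))                 # ~[ 1.  0.75  0.5  -0.25  -1. ]
```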
|
[
"Multimodality",
"Structured Data in NLP",
"Sentiment Analysis"
] |
[
74,
50,
78
] |
SCOPUS_ID:85149714307
|
A Heterogeneous Interaction Graph Network for Multi-Intent Spoken Language Understanding
|
As the core component of intelligent dialogue systems, spoken language understanding (SLU) usually includes two tasks: intent detection and slot filling. In real-world scenarios, users may express multiple intents in an utterance, and a token-level slot label can belong to multiple intents. Intent detection and slot filling tasks are closely related and instruct each other. In this paper, we propose the heterogeneous interaction graph framework with window mechanism for joint multi-intent detection and slot filling, which can adequately capture the rich semantic information of different granularity in heterogeneous information. We leverage different types of nodes and edges to construct the heterogeneous graph to realize the interaction between coarse-grained sentence-level intent information and fine-grained word-level slot information. And we utilize window mechanism to accommodate the temporal locality of the slot information. Experimental results on two datasets show that our model achieves the state-of-the-art performance. Comprehensive analysis empirically verifies the effectiveness of each component.
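Stripped of the heterogeneous graph and window mechanism, the joint setup reduces to a shared encoder with an utterance-level multi-intent head and a token-level slot head, as in this generic skeleton (all sizes are invented):

```python
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab=1000, dim=128, n_intents=10, n_slots=20):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.intent = nn.Linear(2 * dim, n_intents)  # multi-intent: sigmoid/BCE
        self.slot = nn.Linear(2 * dim, n_slots)      # per-token slot labels

    def forward(self, x):
        h, _ = self.enc(self.emb(x))                 # (B, T, 2*dim) shared states
        return self.intent(h.mean(dim=1)), self.slot(h)

intent_logits, slot_logits = JointSLU()(torch.randint(0, 1000, (2, 12)))
print(intent_logits.shape, slot_logits.shape)        # (2, 10) and (2, 12, 20)
```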
|
[
"Semantic Text Processing",
"Semantic Parsing",
"Structured Data in NLP",
"Sentiment Analysis",
"Intent Recognition",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Multimodality"
] |
[
72,
40,
50,
78,
79,
11,
38,
74
] |
SCOPUS_ID:85122474532
|
A Heuristic Approach to Extract Knowledge from the Text Considering Explicit and Implicit Features Both
|
The key to extracting knowledge from text data (comments, chats, blogs, news articles, etc.) is how to convert unstructured data into structured data, sometimes called metadata, and further how to derive implicit features from it. Implicit features are very important for the semantic understanding of any sentence. Generally, opinions about features are expressed through adjectives. Researchers find such features easily using natural language processing and the concept of association between adjectives and high-frequency nouns; sometimes synonyms of nouns and adjectives also play a very important role in the acquisition of such features. In this paper, adjectives and their synonyms are considered to identify highly relevant features and opinions in terms of pointwise mutual information. This paper presents the introduction, the proposed method, the framework, the results, and finally the conclusion.
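The association statistic the method leans on can be computed in a few lines; the toy adjective-noun pairs below are invented.

```python
import math
from collections import Counter

pairs = [("battery", "good"), ("battery", "good"), ("screen", "bright"),
         ("battery", "long"), ("screen", "good")]   # (noun, adjective) pairs
n = len(pairs)
noun_c, adj_c, pair_c = Counter(), Counter(), Counter(pairs)
for noun, adj in pairs:
    noun_c[noun] += 1
    adj_c[adj] += 1

def pmi(noun, adj):
    p_xy = pair_c[(noun, adj)] / n                  # joint probability
    return math.log2(p_xy / ((noun_c[noun] / n) * (adj_c[adj] / n)))

print(round(pmi("battery", "good"), 3))             # association strength
```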
|
[
"Information Extraction & Text Mining",
"Structured Data in NLP",
"Indexing",
"Information Retrieval",
"Multimodality"
] |
[
3,
50,
69,
24,
74
] |
SCOPUS_ID:85112701844
|
A Heuristic Grafting Strategy for Manufacturing Knowledge Graph Extending and Completion Based on Nature Language Processing: KnowTree
|
Applied to search, question answering, and the semantic web in closed or open domains, knowledge graphs (KG) are known for their incompleteness, given the rapid pace at which knowledge grows. Inspired by the agricultural technique of grafting fruit varieties, this paper proposes a heuristic knowledge grafting strategy (HGS) for extending and completing a manufacturing knowledge graph (MKG) named KnowTree, using natural language processing (NLP) to mine engineering case documents. Based on similarity analysis, the grafting-related definitions and mechanisms (completeness, relatedness, connectivity, and reutilization) are first defined. Then, focused on these four mechanisms, HGS takes a pair of engineering documents as input. KnowWords is built as a collection of KnowScions, each scion mined from the engineering documents based on the SAO structure network, with importance evaluated by SAORank, which counts in-out degree centrality. On the other hand, the KnowRoot system is designed based on the extended P S ontology model to characterize the structure of an abstract document into four sub-spaces of knowledge: know-what (problem), know-why (context), know-how (solution), and know-with (result), where a pre-trained language representation model, K-BERT, is used to classify the KnowScion candidates into the designed KnowRoot system with a fine-tuning classification task. In the knowledge grafting process, the connection unit is constructed based on the domain knowledge triples extracted by the K-BERT model, where the head element of a triple comes from the KnowScion candidate set KnowWords satisfying the threshold value, the tail element comes from the domain MKG to be extended, and a connection factor evaluates the relationship of the union combination. Toward the goal of knowledge reuse, path-based reasoning rules are designed for KnowTree reutilization. Finally, taking the latest engineering case abstracts (ECA) in the white-goods domain as resources, a case study is conducted to validate the proposed HGS strategy.
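Going only by the abstract's wording ("counting the in-out degree of centrality" over an SAO network), a plausible reading of the ranking step is the sketch below; the triples and the exact SAORank formula are assumptions.

```python
import networkx as nx

saos = [("controller", "regulates", "compressor"),   # invented SAO triples
        ("compressor", "cools", "cabinet"),
        ("sensor", "monitors", "cabinet")]
G = nx.DiGraph()
for s, a, o in saos:                      # subject -> object, labeled by action
    G.add_edge(s, o, action=a)

rank = {n: G.in_degree(n) + G.out_degree(n) for n in G}  # in-out degree
print(sorted(rank.items(), key=lambda kv: -kv[1]))       # scion importance
```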
|
[
"Language Models",
"Semantic Text Processing",
"Structured Data in NLP",
"Knowledge Representation",
"Multimodality"
] |
[
52,
72,
50,
18,
74
] |
SCOPUS_ID:85104809919
|
A Heuristic-Driven Ensemble Framework for COVID-19 Fake News Detection
|
The significance of social media has increased manifold in the past few decades as it helps people from even the most remote corners of the world stay connected. With the COVID-19 pandemic raging, social media has become more relevant and widely used than ever before, and along with this, there has been a resurgence in the circulation of fake news and tweets that demand immediate attention. In this paper, we describe our Fake News Detection system that automatically identifies whether a tweet related to COVID-19 is “real” or “fake”, as part of the CONSTRAINT COVID19 Fake News Detection in English challenge. We have used an ensemble model consisting of pre-trained models that helped us achieve a joint 8th position on the leaderboard. We achieved an F1-score of 0.9831 against a top score of 0.9869. After the competition, we were able to drastically improve our system by incorporating a novel heuristic algorithm based on username handles and link domains in tweets, yielding an F1-score of 0.9883 and achieving state-of-the-art results on the given dataset.
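The abstract does not spell the heuristic out, so the sketch below only illustrates the general idea of adjusting an ensemble's score with handle and link-domain cues; the lists, weights, and threshold are all invented.

```python
import re

TRUSTED_DOMAINS = {"who.int", "cdc.gov"}          # invented lists
SUSPECT_DOMAINS = {"bit.ly"}
VERIFIED_HANDLES = {"@WHO", "@CDCgov"}

def heuristic_adjust(tweet, model_prob_real):
    domains = re.findall(r"https?://(?:www\.)?([^/\s]+)", tweet)
    handles = set(re.findall(r"@\w+", tweet))
    score = model_prob_real
    if handles & VERIFIED_HANDLES or any(d in TRUSTED_DOMAINS for d in domains):
        score = min(1.0, score + 0.2)             # nudge toward "real"
    if any(d in SUSPECT_DOMAINS for d in domains):
        score = max(0.0, score - 0.2)             # nudge toward "fake"
    return "real" if score >= 0.5 else "fake"

print(heuristic_adjust("Update from @WHO https://who.int/news", 0.45))  # real
```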
|
[
"Reasoning",
"Fact & Claim Verification",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
8,
46,
17,
4
] |
http://arxiv.org/abs/2101.03545v1
|
A Heuristic-driven Ensemble Framework for COVID-19 Fake News Detection
|
The significance of social media has increased manifold in the past few decades as it helps people from even the most remote corners of the world stay connected. With the COVID-19 pandemic raging, social media has become more relevant and widely used than ever before, and along with this, there has been a resurgence in the circulation of fake news and tweets that demand immediate attention. In this paper, we describe our Fake News Detection system that automatically identifies whether a tweet related to COVID-19 is "real" or "fake", as part of the CONSTRAINT COVID19 Fake News Detection in English challenge. We have used an ensemble model consisting of pre-trained models that helped us achieve a joint 8th position on the leaderboard. We achieved an F1-score of 0.9831 against a top score of 0.9869. After the competition, we were able to drastically improve our system by incorporating a novel heuristic algorithm based on username handles and link domains in tweets, yielding an F1-score of 0.9883 and achieving state-of-the-art results on the given dataset.
|
[
"Reasoning",
"Fact & Claim Verification",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
8,
46,
17,
4
] |