Columns: id (string, length 20–52) · title (string, length 3–459) · abstract (string, length 0–12.3k) · classification_labels (list) · numerical_classification_labels (list)
SCOPUS_ID:85111408507
A Framework and Decision Algorithm to Determine the Best Feature Extraction Technique for Supporting Machine Learning-Based Hate Speech Detection
We develop and implement a framework and a decision algorithm to determine the best feature extraction technique (FET) for supporting machine learning-based hate speech detection. Specifically, the contributions of this work are three-fold: (1) a seamless modular pipeline that automatically preprocesses, vectorizes, and classifies whether or not a text message is hate speech; (2) a decision algorithm that determines the best FET approach among all the possible FET candidates in linear time complexity O(N); and (3) a preliminary experimental evaluation on the tweets provided by Twitter Sentiment Analysis on Analytics Vidhya to demonstrate that our FET framework and decision algorithm are effective and produce significant results.
[ "Responsible & Trustworthy NLP", "Ethical NLP", "Information Extraction & Text Mining" ]
[ 4, 17, 3 ]
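The O(N) decision step described above can be sketched as a single scoring pass over the FET candidates. This is my own minimal illustration, not the paper's algorithm: the candidate names and validation scores below are invented stand-ins for whatever criteria the decision algorithm actually uses.

```python
def best_fet(candidates, score):
    """Pick the best feature extraction technique in one O(N) pass.

    `candidates` is a list of FET names; `score` maps a name to a
    validation metric (higher is better). Both are hypothetical
    stand-ins for the paper's decision criteria.
    """
    best, best_score = None, float("-inf")
    for fet in candidates:          # exactly one scoring pass: O(N)
        s = score(fet)
        if s > best_score:
            best, best_score = fet, s
    return best

# Example with made-up validation scores per vectorizer:
scores = {"bow": 0.81, "tfidf": 0.86, "word2vec": 0.84}
print(best_fet(list(scores), scores.get))  # → tfidf
```

The linearity holds only if each candidate is scored once with a precomputed metric; re-training a classifier per candidate would dominate the cost.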
SCOPUS_ID:85118231815
A Framework for Accelerating Transformer-Based Language Model on ReRAM-Based Architecture
Transformer-based language models have become the de-facto standard model for various natural language processing (NLP) applications given their superior algorithmic performance. Processing a transformer-based language model on a conventional accelerator induces the memory wall problem, and the ReRAM-based accelerator is a promising solution to this problem. However, due to the characteristics of the self-attention mechanism and of the ReRAM-based accelerator, a pipeline hazard arises when processing the transformer-based language model on the ReRAM-based accelerator. This hazard greatly increases the overall execution time. In this article, we propose a framework to resolve the hazard issue. First, we propose the concept of window self-attention, which reduces the attention computation scope based on an analysis of the properties of the self-attention mechanism. We then present a window-size search algorithm, which finds an optimal window size set according to the target application/algorithmic performance. We also suggest a hardware design that exploits the advantages of the proposed algorithmic optimization on a general ReRAM-based accelerator. The proposed work successfully alleviates the hazard issue while maintaining the algorithmic performance, leading to a 5.8× speedup over the provisioned baseline. It also delivers up to a 39.2× speedup and up to 643.2× higher energy efficiency over a GPU.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
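The core idea of window self-attention (attend only to a bounded number of recent positions instead of the whole sequence) can be shown with a toy scalar sketch. This is purely my illustration with made-up scalar queries/keys/values; it ignores the hardware mapping and the window-size search entirely.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def window_self_attention(q, keys, values, window):
    """Toy 1-D sketch of window self-attention: the query attends only
    to the last `window` positions, shrinking the attention scope.
    (Illustrative only; the paper searches for the window size per
    target application.)"""
    ks, vs = keys[-window:], values[-window:]
    weights = softmax([q * k for k in ks])
    return sum(w * v for w, v in zip(weights, vs))

# With window=1 the query sees only the most recent token:
out = window_self_attention(1.0, [0.1, 0.5, 2.0], [1.0, 2.0, 3.0], window=1)
```

Shrinking `window` bounds the per-step work (and, on the accelerator, the number of stalled pipeline stages), at the cost of discarding long-range context.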
SCOPUS_ID:84922182545
A Framework for Analysis
This chapter argues that an appreciation of the symbolic and the performative dimensions of politics and policy making is crucial to understand how authoritative governance is possible in an age of multiplicities, as these factors determine how politics 'meets the eye'. Reminding the reader how the staging of parliamentary decision-making is itself the product of an active search for a symbolization of political legitimacy in the eighteenth century, the chapter suggests it is questionable whether this particular staging of politics can hold its symbolic power in the age of mediatization. A performance perspective on governance holds that policy makers and politicians are constantly trying to create order and structure in potentially unstable situations. The very variability of the setting and staging of politics calls for more explicit attention to how actors use particular terms in particular settings. Politics is (counter-)scripted and staged for multiple audiences: politics and media are fundamentally intertwined. Understanding governance thus comes from studying the contextualized interaction as a series of 'performances', drawing on the combined analytical vocabularies of discourse analysis and dramaturgy to open up the concept of 'practice'.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
http://arxiv.org/abs/2011.15038v1
A Framework for Authorial Clustering of Shorter Texts in Latent Semantic Spaces
Authorial clustering involves the grouping of documents written by the same author or team of authors without any prior positive examples of an author's writing style or thematic preferences. For authorial clustering on shorter texts (paragraph-length texts that are typically shorter than conventional documents), the document representation is particularly important: very high-dimensional feature spaces lead to data sparsity and suffer from serious consequences like the curse of dimensionality, while feature selection may lead to information loss. We propose a high-level framework which utilizes a compact data representation in a latent feature space derived with non-parametric topic modeling. Authorial clusters are identified thereafter in two scenarios: (a) fully unsupervised and (b) semi-supervised, where a small number of shorter texts are known to belong to the same author (must-link constraints) or not (cannot-link constraints). We report on experiments with 120 collections in three languages and two genres and show that the topic-based latent feature space provides a promising level of performance while reducing the dimensionality by a factor of 1500 compared to the state of the art. We also demonstrate that, while prior knowledge of the precise number of authors (i.e. authorial clusters) does not contribute much additional quality, even a little knowledge of constraints on authorial cluster memberships leads to clear performance improvements on this difficult task. Thorough experimentation with standard metrics indicates that there still remains ample room for improvement in authorial clustering, especially with shorter texts.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
SCOPUS_ID:85134546068
A Framework for Automated Text Generation Benchmarking
Researchers in areas such as translation and summarization need to compare their results to a wide range of published baselines that commonly use different evaluation methods. We aim to enable easy comparison by presenting TextGen-Benchmarch, an open-source tool for streamlining the generation and evaluation of text. Text generation methods and evaluation metrics can easily be added to TextGen-Benchmarch, and its pipeline makes comparison between methods more efficient: users supply corpora, systems, and evaluation techniques, and receive comparison reports in easy-to-analyze tabular and graphic formats.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85136271310
A Framework for Automatic Fake Content Identification
Fake news has emerged as a challenge for today's society. Easy, low-cost access to the internet makes propagating fake news easy. In the Covid-19 pandemic situation, reducing the proliferation of misleading content is required to limit its severe impact. Many existing works are based on lexico-syntactic features and use a small training sample size. To address this issue, this study uses the GossipCop dataset for evaluation. Various supervised ML techniques and advanced deep learning techniques are implemented for thorough study. The dataset, crawled from the GossipCop fact-checking website, consists of 4,947 fake news items with text and 16,694 real news items. The results of these algorithms help differentiate false content from reliable news and improve on the accuracy achieved by existing techniques.
[ "Reasoning", "Fact & Claim Verification", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 8, 46, 17, 4 ]
SCOPUS_ID:85013127660
A Framework for Automatic Personalised Ontology Learning
Understanding or acquiring a user's information needs from their local information repository (e.g. a set of example documents that are relevant to the user's information needs) is important in many applications, yet doing so is very challenging. Personalised ontology is emerging as a powerful tool to acquire the information needs of users; however, its manual or semi-automatic construction is expensive and time-consuming. To address this problem, this paper proposes a model to automatically learn a personalised ontology by labelling topic models with concepts, where the topic models are discovered from a user's local information repository. The proposed model is evaluated by comparing it against ten baseline models on the standard dataset RCV1 and the large ontology LCSH. The results show that the model is effective and that its performance is significantly improved over the baselines.
[ "Topic Modeling", "Knowledge Representation", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 9, 18, 72, 3 ]
http://arxiv.org/abs/1910.13826v4
A Framework for Building Closed-Domain Chat Dialogue Systems
This paper presents HRIChat, a framework for developing closed-domain chat dialogue systems. Being able to engage in chat dialogues has been found effective for improving communication between humans and dialogue systems. This paper focuses on closed-domain systems because they would be useful when combined with task-oriented dialogue systems in the same domain. HRIChat enables domain-dependent language understanding so that it can deal well with domain-specific utterances. In addition, HRIChat makes it possible to integrate state transition network-based dialogue management and reaction-based dialogue management. FoodChatbot, which is an application in the food and restaurant domain, has been developed and evaluated through a user study. Its results suggest that reasonably good systems can be developed with HRIChat. This paper also reports lessons learned from the development and evaluation of FoodChatbot.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
https://aclanthology.org//W08-0114/
A Framework for Building Conversational Agents Based on a Multi-Expert Model
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85016729829
A Framework for Building an Arabic Multi-disciplinary Ontology from Multiple Resources
Over recent years, the Internet has become people’s main source of information, with many databases and web pages being added and accessed every day. This continued growth in the amount of information available has led to frustration and difficulty for those attempting to find a specific piece of information. As such, many techniques are widely used to retrieve useful information and to mine valuable data; indeed, these techniques make it possible to discover hidden relations and patterns. Most of the above-mentioned techniques have been used primarily to process and analyse English text, but not Arabic text. Limited Arabic resources (e.g. datasets, databases, and ontologies), also make analysing and processing Arabic text a difficult task. As such, in this paper, we propose a framework for building an Arabic ontology from multiple resources. Thus, we will first extract and build an Arabic ontology from a publicly available directory, following which, we will enhance this ontology with rich data from the Internet. We will then use an Arabic online directory to construct a multi-disciplinary ontology that provides a hierarchical representation of topics in a conceptual way. Following this, we introduce an enhanced technique to enrich these ontologies with sufficient information and proper annotation for each concept. Finally, by using common information retrieval evaluation techniques, we confirm the viability of the proposed approach.
[ "Knowledge Representation", "Semantic Text Processing", "Information Retrieval" ]
[ 18, 72, 24 ]
SCOPUS_ID:85105879023
A Framework for Constructing Thai Sentiment Corpus using the Cosine Similarity Technique
Unstructured data is growing rapidly due to the increase in social media platforms that allow individuals to express themselves and write reviews in either a formal or informal style. Thus, in sentiment analysis, it becomes difficult to identify and analyze both positive and negative reviews. Thai is a low-resource language, with few resources with which to conduct NLP research and no sentiment corpus. The main objective of this paper is to present a framework for constructing a Thai sentiment corpus and for sentiment polarity classification utilizing the cosine similarity technique. The proposed framework consists of three main steps: data collection, data preprocessing, and sentiment similarity measurement. Data collection includes a manual sub-step that classifies sentiment polarity as positive or negative. Data preprocessing is then applied; we also created a special database in which to store text tokenization, convert abbreviations, check spelling errors, and conduct stop-word removal. Lastly, sentiment similarity between two reviews is measured with a combination of TF-IDF and the cosine similarity technique. We evaluated our framework with training data of 3,129 reviews and testing data of 1,000 reviews. The experimental results demonstrate that the proposed framework achieves an accuracy of 81.2%. We further observed that the data preprocessing step significantly affects the accuracy of the sentence similarity measurements between two reviews.
[ "Text Classification", "Polarity Analysis", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 33, 78, 24, 3 ]
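The TF-IDF-plus-cosine-similarity step can be sketched in plain Python. This is a minimal illustration with whitespace tokenization and toy English reviews; the paper's Thai pipeline (tokenization, abbreviation conversion, spell checking, stop-word removal) is far richer.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for whitespace-tokenized docs."""
    tokenized = [doc.split() for doc in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed IDF
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["good food great service", "great food", "terrible slow service"]
v = tfidf_vectors(docs)
print(round(cosine(v[0], v[1]), 3))  # → 0.618
```

The IDF smoothing (`+ 1.0`) is one common variant, not necessarily the one the paper used.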
https://aclanthology.org//W19-2910/
A Framework for Decoding Event-Related Potentials from Text
We propose a novel framework for modeling event-related potentials (ERPs) collected during reading that couples pre-trained convolutional decoders with a language model. Using this framework, we compare the abilities of a variety of existing and novel sentence processing models to reconstruct ERPs. We find that modern contextual word embeddings underperform surprisal-based models but that, combined, the two outperform either on its own.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:85093869232
A Framework for Design of Conversational Agents to Support Health Self-Care for Older Adults
Objective: We examined the potential of conversational agents (CAs) to support older adults’ self-care related to chronic illness in light of lessons learned from decades of pedagogical agent research, which investigates the impact and efficacy of CAs for a wide range of learners. Background: The role of CAs in education (i.e., pedagogical agents) has been long studied, but their potential for supporting self-care has received less attention, especially for older adults. Methods: We reviewed work on pedagogical agents and considered how it informs the design of CAs for older adults. We propose a framework for designing CAs to support older adult self-care, which organizes a review of work in this area and integration with the pedagogical agent literature. Results: Our review of the pedagogical agent literature revealed an evolution from teaching machines to interactive, social systems that influence student motivational as well as learning outcomes. To integrate this review with work on CAs and self-care, we developed a framework that specifies how self-care goals evolve with stages of an illness, communication goals that support self-care at each stage, patient needs, and requirements for CAs to support these needs. The review identified an agenda for future research on CA functions and features that help older adults accept need for self-care, establish self-care, and sustain self-care over time. Conclusions: Integrating insights from the pedagogical agent literature with research on developing CAs for self-care defines an agenda for developing and evaluating CAs to help older adults manage illness.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
http://arxiv.org/abs/1903.00232v2
A Framework for Detecting Event related Sentiments of a Community
Social media has revolutionized human communication and styles of interaction. Because it is an easy and effective medium, people share and exchange information, discuss various events, and express their opinions. For effective policy making and for understanding the response of a community to different events, we need to monitor and analyze social media. On social media, some users are more influential than others; for example, a famous politician may have more influence than a common person. These influential users belong to specific communities. The main objective of this research is to learn the sentiments of a specific community on various events. We propose a generic framework for detecting the event-based sentiments of a community. Our framework identifies the users of a specific community on Twitter, fetches their tweets, and identifies the tweets belonging to specific events. The event-based tweets are pre-processed and then analyzed to detect the sentiments of the community for specific events. Qualitative and quantitative evaluation confirms the effectiveness and usefulness of our proposed framework.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85084608174
A Framework for Detection and Identification the Components of Arguments in Arabic Legal Texts
Argument mining in the legal domain aims to detect and extract premises, claims, and their relations automatically from unstructured legal texts, providing structured data that argumentation models can process. This paper presents a framework to detect and identify the components of arguments in texts of Arabic legal documents. The framework proposes a computational model that adopts supervised learning, integrating an annotated Arabic Legal Text corpus (ALTC), a collection of decision documents from Iraq's Federal Court of Cassation, with different binary classifiers based on relevant features to detect and identify the components of arguments in legal decision texts. The results of the framework experiments are promising; notably, this paper is the first to address argument mining at the level of Arabic texts.
[ "Argument Mining", "Reasoning", "Information Extraction & Text Mining" ]
[ 60, 8, 3 ]
SCOPUS_ID:85111391536
A Framework for Detection and Validation of Fake News via Authorize Source Matching
In the present era, people share views, information, and knowledge on social media across the world without validating the content. This increases the probability that deceptive news reaches groups of people. Such deceptive news is termed fake news, and a proper solution is required to validate such content. To find one, this research proposes a novel framework along with an algorithm. The framework takes embedded image/text as input; the extracted text together with image features is used as a query sent to multiple search engines to find relevant links for validating the source of generation. The sources of the selected links are then validated against a stored list of authorized sources. The algorithm achieves 82%, 85%, and 94% accuracy on the MediaEval and BuzzFeedNews datasets and on Google News, respectively.
[ "Visual Data in NLP", "Ethical NLP", "Responsible & Trustworthy NLP", "Reasoning", "Fact & Claim Verification", "Multimodality" ]
[ 20, 17, 4, 8, 46, 74 ]
https://aclanthology.org//2011.mtsummit-papers.60/
A Framework for Diagnostic Evaluation of MT Based on Linguistic Checkpoints
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
https://aclanthology.org//W16-2210/
A Framework for Discriminative Rule Selection in Hierarchical Moses
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85045248461
A Framework for Document Specific Error Detection and Corrections in Indic OCR
In this paper, we present a framework for assisting word-level corrections in Indic OCR documents by incorporating the ability to identify, segment and combine partially correct word forms. The partially correct word forms themselves may be obtained from corrected parts of the document itself and from auxiliary sources such as dictionaries and common OCR character confusions. Our framework updates a domain dictionary and learns OCR-specific n-gram confusions from human feedback on the fly. The framework can also leverage consensus between the outputs of multiple OCR systems on the same text as an auxiliary source for dynamic dictionary building. Experimental evaluations confirm that for highly inflectional Indian languages, matching partially correct word forms can result in a significant reduction in the amount of manual input required for correction. Furthermore, significant gains are observed when the consolidated output of multiple OCR systems is employed as an auxiliary source of information. We have corrected over 1100 pages (13 books) in Sanskrit, 190 pages (1 book) in Marathi, 50 pages (part of a book) in Hindi and 1000 pages (12 books) in English using our framework. We present a book-wise analysis of the improvement in required human interaction for these languages.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85107820329
A Framework for Document-level Cybersecurity Event Extraction from Open Source Data
With the rapid development of the Internet, the number of cyber threats increases exponentially. More and more cyber threats come from new and unexpected sources, leading organizations and individuals to face more security risks and vulnerabilities. Automatically obtaining and structuring security information from cybersecurity news can help security analysts identify useful information more quickly. Most existing studies on extracting security events merely focus on the event detection task, aiming to discover and categorize cybersecurity events from plain text. However, such event detection methods cannot capture useful information such as who performed a cyberattack, when a data breach happened, or who the victim was. These event arguments are needed for analysts to get cybersecurity event details directly. Several studies have tried to extract rich semantic information about cybersecurity events, but they merely focus on extracting event arguments within the sentence scope; these studies are limited when the event arguments to be recognized spread across multiple sentences. In this paper, we propose a framework that effectively extracts cybersecurity events at the document level from cybersecurity news, blogs, and announcements. We model document-level event extraction as a sequence tagging problem whose goal is to identify the related arguments of cybersecurity events in documents. First, we obtain character embeddings and incorporate word information into the character representations. Then we design a sliding window mechanism to capture cross-sentence context information. Finally, we predict the label of each character. We build a Chinese cybersecurity dataset and use three methods to evaluate our approach, and the experimental results demonstrate the effectiveness of the proposed model.
[ "Event Extraction", "Information Extraction & Text Mining" ]
[ 31, 3 ]
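The sliding window mechanism for cross-sentence context can be sketched as overlapping character windows fed to a tagger. The window size and stride below are invented for illustration; the paper's actual parameters and the tagging model itself are omitted.

```python
def sliding_windows(seq, size, stride):
    """Split a character sequence into overlapping windows so that a
    sequence tagger sees context spanning sentence boundaries.
    A sketch of the mechanism only, not the paper's implementation."""
    windows = []
    start = 0
    while start < len(seq):
        windows.append(seq[start:start + size])
        if start + size >= len(seq):
            break                     # last window reached the end
        start += stride               # overlap = size - stride
    return windows

doc = list("Attacker X breached victim Y. The breach happened in May.")
wins = sliding_windows(doc, size=30, stride=15)
```

With `stride < size`, every character (except near the edges) appears in more than one window, so per-character label predictions can be aggregated across windows.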
SCOPUS_ID:85128791563
A Framework for Effective Knowledge Extraction from A Data Space Formed by Unstructured Technical Reports using Pre-Trained Models
The transformation of unstructured data into triples is a key task in knowledge graph construction, and it remains a great challenge to complete this task on technical reports. In this work, we propose a framework for effective data structuring in knowledge graph construction from a data space formed by technical reports. The framework consists of two pre-trained language models that provide the embeddings and a sequence labeling model that tags the entity labels. The pre-trained models, i.e. Flair embeddings and the BERT model, are employed, and their output vectors are combined for downstream tasks. To evaluate the proposed method, we conduct named entity recognition experiments using the status reports of complex equipment in nuclear power plants. The evaluation shows that the framework achieves a remarkable improvement in F1 score. This paper details the framework, the experiments, and the evaluation of the proposed method.
[ "Language Models", "Semantic Text Processing", "Structured Data in NLP", "Knowledge Representation", "Named Entity Recognition", "Multimodality", "Information Extraction & Text Mining" ]
[ 52, 72, 50, 18, 34, 74, 3 ]
SCOPUS_ID:85112188450
A Framework for Efficient Multilevel Polarity-Based Sentiment Analysis Using Fuzzy Logic
Sentiment analysis (SA) is the evaluation of products or events on social media through people's opinions or emotions. It has attracted a great deal of attention in recent years from both science and industry for a variety of uses, and it is among the most widely known applications of machine learning and text mining. This paper presents a framework for efficient multilevel sentiment analysis using fuzzy logic to classify the polarity of online text reviews as strong positive, positive, negative, or strong negative. The proposed model uses a fuzzy logic classifier to determine the sentiment classes and refine the degree of sentiment polarity of reviews. It also utilizes a mechanism for imputing missing sentiment, so that non-opinionated sentences are integrated when generating precise results. Results show that the proposed method is capable of extracting opinions and classifying them effectively, and of predicting the degree of sentiment polarity for reviews on social media. Better precision and F1-scores are obtained for objective/subjective classification and polarity (positive/negative) classification on a Twitter dataset.
[ "Information Extraction & Text Mining", "Information Retrieval", "Green & Sustainable NLP", "Polarity Analysis", "Sentiment Analysis", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 3, 24, 68, 33, 78, 36, 4 ]
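The four-class fuzzy polarity idea can be sketched with triangular membership functions over a sentiment score. The breakpoints below are invented for illustration; the paper does not specify its membership functions here.

```python
def fuzzy_polarity(score):
    """Map a sentiment score in [-1, 1] to fuzzy membership degrees
    over four polarity classes. Triangular membership functions with
    illustrative (assumed) breakpoints, not the paper's."""
    def tri(x, a, b, c):
        # Triangular membership: 0 outside [a, c], peak 1 at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return {
        "strong_negative": tri(score, -1.5, -1.0, -0.33),
        "negative": tri(score, -1.0, -0.4, 0.0),
        "positive": tri(score, 0.0, 0.4, 1.0),
        "strong_positive": tri(score, 0.33, 1.0, 1.5),
    }

m = fuzzy_polarity(0.7)
print(max(m, key=m.get))  # → strong_positive
```

Overlapping memberships (here, "positive" and "strong_positive" both fire at 0.7) are exactly what lets a fuzzy classifier express degrees of polarity rather than a hard class boundary.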
SCOPUS_ID:85097580281
A Framework for Estimating Privacy Risk Scores of Mobile Apps
With the rapidly growing popularity of smart mobile devices, the number of mobile applications available has surged in the past few years. Such mobile applications collect a treasure trove of Personally Identifiable Information (PII) attributes (such as age, gender, location, and fingerprints). Mobile applications, however, are many and often not well understood, especially for their privacy-related activities and functions. To fill this critical gap, we recommend providing an automated yet effective assessment of the privacy risk score of each application. The design goal is that the higher the score, the higher the potential privacy risk of this mobile application. Specifically, we consider excessive data access permissions and risky privacy policies. We first calculate the privacy risk of over 600 PII attributes through a longitudinal study of over 20 years of identity theft and fraud news reporting. Then, we map the access rights and privacy policies of each smart application to our dataset of PII to analyze what PII the application collects, and then calculate the privacy risk score of each smart application. Finally, we report our extensive experiments of 100 open source applications collected from Google Play to evaluate our method. The experimental results clearly prove the effectiveness of our method.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
SCOPUS_ID:85145438793
A Framework for Evaluating MRC Approaches with Unanswerable Questions
Machine reading comprehension (MRC) is a challenging task in natural language processing that demonstrates the language understanding of the machine. An approach to tackle this challenge requires the machine to answer the question about the given context when needed and abstain from answering when there is no answer. Recent works attempted to solve this challenge with various comprehensive neural network architectures for sequences such as SAN, U-Net, EQuANt, and others that were trained on the SQuAD 2.0 dataset containing unanswerable questions. However, the robustness of these approaches has not been evaluated. In this paper, we propose a data augmentation approach that converts answerable questions to unanswerable questions in the SQuAD 2.0 dataset by altering the entities in the question to its antonym from ConceptNet which is a semantic network. The augmented data is, then, fitted into the U-Net question answering model to evaluate the robustness of the model.
[ "Question Answering", "Robustness in NLP", "Machine Reading Comprehension", "Natural Language Interfaces", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 27, 58, 37, 11, 8, 4 ]
http://arxiv.org/abs/2003.04642v1
A Framework for Evaluation of Machine Reading Comprehension Gold Standards
Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text. While neural MRC systems gain popularity and achieve noticeable performance, issues are being raised with the methodology used to establish their performance, particularly concerning the data design of gold standards that are used to evaluate them. There is but a limited understanding of the challenges present in this data, which makes it hard to draw comparisons and formulate reliable hypotheses. As a first step towards alleviating the problem, this paper proposes a unifying framework to systematically investigate the present linguistic features, required reasoning and background knowledge and factual correctness on one hand, and the presence of lexical cues as a lower bound for the requirement of understanding on the other hand. We propose a qualitative annotation schema for the first and a set of approximative metrics for the latter. In a first application of the framework, we analyse modern MRC gold standards and present our findings: the absence of features that contribute towards lexical ambiguity, the varying factual correctness of the expected answers and the presence of lexical cues, all of which potentially lower the reading comprehension complexity and quality of the evaluation data.
[ "Reasoning", "Machine Reading Comprehension" ]
[ 8, 37 ]
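The "lexical cues as a lower bound for understanding" idea can be approximated with a trivial overlap baseline: if picking the passage sentence that shares the most tokens with the question already finds the answer, the example may not require real comprehension. This is my own crude sketch; the paper's approximative metrics are more elaborate.

```python
import re

def tokens(text):
    """Lowercased word tokens as a set (punctuation stripped)."""
    return set(re.findall(r"\w+", text.lower()))

def lexical_cue_baseline(question, sentences):
    """Pick the passage sentence with maximal token overlap with the
    question: a stand-in for a lexical-cue lower bound on MRC difficulty."""
    q = tokens(question)
    return max(sentences, key=lambda s: len(q & tokens(s)))

sents = ["The cell membrane is selectively permeable.",
         "Mitochondria produce ATP for the cell."]
best = lexical_cue_baseline("What do mitochondria produce?", sents)
```

Gold standards where such a baseline scores highly contain strong lexical cues, which (as the paper argues) lowers the reading comprehension complexity the benchmark actually tests.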
SCOPUS_ID:85085939614
A Framework for Event-oriented Text Retrieval Based on Temporal Aspects: A Recent Review
Event, as an important carrier for users to understand the world, has become a special retrieval object. In contrast to traditional text retrieval, Event-oriented Text Retrieval (ETR) can search events by utilizing event knowledge and using events as proxies for information needs. Accordingly, ETR has become the preferred way for users to obtain the events they are interested in from massive web collections, and it has aroused considerable attention from scholars in recent years. However, the retrieval effectiveness of ETR is still subject to the effect of temporal aspects (i.e., temporal dynamics). Thus, in this review, we first analyze three major temporal components in the framework of ETR. After that, we provide a comprehensive overview of state-of-the-art approaches corresponding to these three components. Finally, we summarize some ETR-related resources and pinpoint several potential research directions.
[ "Information Retrieval" ]
[ 24 ]
SCOPUS_ID:85081412596
A Framework for Explainable Text Classification in Legal Document Review
Companies regularly spend millions of dollars producing electronically-stored documents in legal matters. Over the past two decades, attorneys have been using a variety of technologies to conduct this exercise, and most recently, parties on both sides of the 'legal aisle' are accepting the use of machine learning techniques like text classification to cull massive volumes of data and to identify responsive documents for use in these matters. While text classification is regularly used to reduce the discovery costs in legal matters, it also faces a peculiar perception challenge: amongst lawyers, this technology is sometimes looked upon as a black box. Put simply, very little information is provided for attorneys to understand why documents are classified as responsive. In recent years, a group of AI and machine learning researchers have been actively researching explainable AI. In an explainable AI system, actions or decisions are human understandable. In legal 'document review' scenarios, a document can be identified as responsive as long as one or more of the text snippets (small passages of text) in the document are deemed responsive. In these scenarios, if text classification can be used to locate these responsive snippets, then attorneys could easily evaluate the model's document classification decision. When deployed with defined and explainable results, text classification can drastically enhance the overall quality and speed of the document review process by reducing the time it takes to review documents. Moreover, explainable predictive coding provides lawyers with greater confidence in the results of that supervised learning task. This paper describes a framework for explainable text classification as a valuable tool in legal services: for enhancing the quality and efficiency of legal document review and for assisting in locating responsive snippets within responsive documents.
This framework has been implemented in our legal analytics product, which has been used in hundreds of legal matters. We also report our experimental results using the data from an actual legal matter that used this type of document review.
[ "Information Extraction & Text Mining", "Text Classification", "Explainability & Interpretability in NLP", "Passage Retrieval", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 3, 36, 81, 66, 24, 4 ]
SCOPUS_ID:85080997702
A Framework for Explaining Teachers’ Diagnostic Judgements by Cognitive Modeling (DiaCoM)
Research on the diagnostic competencies of teachers nowadays raises the question of which person or situational characteristics moderate judgement accuracy. Besides this correlational approach, a stronger interest in understanding the cognitive processes involved in the genesis of diagnostic judgements has emerged. To address the theoretical gap regarding cognitive processes underlying diagnostic judgements, we propose a framework, called DiaCoM (Explaining Teachers’ Diagnostic Judgements by Cognitive Modeling). It aims at supporting (existing or envisioned) research that strives to test cognitively oriented explanations for processes and products of diagnostic judgements of teachers.
[ "Explainability & Interpretability in NLP", "Cognitive Modeling", "Linguistics & Cognitive NLP", "Responsible & Trustworthy NLP" ]
[ 81, 2, 48, 4 ]
SCOPUS_ID:85119378931
A Framework for Extractive Text Summarization Based on Deep Learning Modified Neural Network Classifier
There is an exponential growth of text data over the internet, and it is expected to gain significant growth and attention in the coming years. Extracting meaningful insights from text data is crucially important as it offers value-added solutions to business organizations and end-users. Automatic text summarization (ATS) automates text summarization by reducing the initial size of the text without the loss of key information elements. In this article, we propose a novel text summarization algorithm for documents using a Deep Learning Modified Neural Network (DLMNN) classifier. It generates an informative summary of the documents based on the entropy values. The proposed DLMNN framework comprises six phases. In the initial phase, the input document is pre-processed. Subsequently, the features are extracted using pre-processed data. Next, the most appropriate features are selected using the improved fruit fly optimization algorithm (IFFOA). The entropy value for every chosen feature is computed. These values are then classified into two classes, (a) highest entropy values and (b) lowest entropy values. Finally, the class that holds the highest entropy values is chosen, representing the informative sentences that form the last summary. The experimental results indicate that the DLMNN classifier achieves scores of 81.56, 91.21, and 83.53 across sensitivity, accuracy, specificity, precision, and f-measure, whereas existing schemes such as ANN provide comparatively lower values.
[ "Text Classification", "Summarization", "Text Generation", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 30, 47, 24, 3 ]
SCOPUS_ID:85129203293
A Framework for Facilitating Reproducible News Sentiment Impact Analysis
The proliferation of outlets for news media in recent decades has contributed to faster issuance of news data. News analysis has been one of the key activities conducted by researchers in a broad variety of research disciplines. In general, the analysis process used in these studies includes interpreting the content of the news items, and then discovering their impact in a specific area. In this paper, we delve into the field of news analysis applied to the financial domain and explore news sentiment impact analysis in the context of financial markets. Existing studies lack systematic methods to assimilate financial context and evaluate the impact of a given news dataset on the financial market performance of relevant entities. We introduce an improved version of the framework called News Sentiment Impact Analysis (NSIA) that encompasses models, supporting software architecture and processes for defining various financial contexts and conducting news sentiment impact analysis. The framework is then evaluated using a prototype implementation and a case study that investigates the impact of extremely negative news on the stock price of the related entities. The results demonstrate the functionality, usability and reproducibility of the framework, and its capability to bridge the gap between generating news sentiment and evaluating its impact in selected financial contexts.
[ "Sentiment Analysis", "Responsible & Trustworthy NLP" ]
[ 78, 4 ]
SCOPUS_ID:85129323405
A Framework for Feature Selection Using Natural Language Processing for User Profile Learning for Recommendations of Healthcare-Related Content
This paper presents the work done on recommendations of healthcare-related journal papers by understanding the semantics of terms from the papers referred by users in past. In other words, user profiles based on user interest within the healthcare domain are constructed from the kind of journal papers read by the users. Multiple user profiles are constructed for each user based on different categories of papers read by the users. The proposed approach goes to the granular level of extrinsic and intrinsic relationship between terms and clusters highly semantically-related relevant domain terms where each cluster represents a user interest area. The semantic analysis of terms is done starting from co-occurrence analysis to extract the intra-couplings between terms and then the inter-couplings are extracted from the intra-couplings, and then, finally, clusters of highly related terms are formed. The experiments showed improved precision for the proposed approach as compared to the state-of-the-art technique with a mean reciprocal rank of 0.76.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
https://aclanthology.org//W04-0201/
A Framework for Feature based Description of Low level Discourse
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
http://arxiv.org/abs/2003.09530v2
A Framework for Generating Explanations from Temporal Personal Health Data
Whereas it has become easier for individuals to track their personal health data (e.g., heart rate, step count, food log), there is still a wide chasm between the collection of data and the generation of meaningful explanations to help users better understand what their data means to them. With an increased comprehension of their data, users will be able to act upon the newfound information and work towards striving closer to their health goals. We aim to bridge the gap between data collection and explanation generation by mining the data for interesting behavioral findings that may provide hints about a user's tendencies. Our focus is on improving the explainability of temporal personal health data via a set of informative summary templates, or "protoforms." These protoforms span both evaluation-based summaries that help users evaluate their health goals and pattern-based summaries that explain their implicit behaviors. In addition to individual users, the protoforms we use are also designed for population-level summaries. We apply our approach to generate summaries (both univariate and multivariate) from real user data and show that our system can generate interesting and useful explanations.
[ "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP" ]
[ 81, 4 ]
SCOPUS_ID:85148611938
A Framework for Handwritten Date Recognition in Quality Documents
Document text recognition is an important task in optical character recognition. Due to the difficulty in collecting real handwriting samples and the variety of handwritten characters, there are still challenges in recognizing handwritten text in quality documents. In this paper, we propose a framework for handwritten date recognition in nuclear power quality documents and a pre-training data construction method to improve the recognition effect. This framework specifically consists of an image feature extraction module and a text transcription module. The text transcription module combines the text transcription functions in Convolutional Recurrent Neural Network (CRNN) and Show-Attend-And-Read (SAR). The experiments are evaluated on handwritten date recognition with nuclear power quality documents. The evaluation shows that the framework achieves an obvious improvement on the metrics and that the pre-training dataset can help speed up training.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Multimodality" ]
[ 20, 52, 72, 74 ]
https://aclanthology.org//W14-4415/
A Framework for Health Behavior Change using Companionable Robots
[ "Text Generation" ]
[ 47 ]
http://arxiv.org/abs/2005.05507v1
A Framework for Hierarchical Multilingual Machine Translation
Multilingual machine translation has recently been in vogue given its potential for improving machine translation performance for low-resource languages via transfer learning. Empirical examinations demonstrating the success of existing multilingual machine translation strategies, however, are limited to experiments in specific language groups. In this paper, we present a hierarchical framework for building multilingual machine translation strategies that takes advantage of a typological language family tree for enabling transfer among similar languages while avoiding the negative effects that result from incorporating languages that are too different from each other. Exhaustive experimentation on a dataset with 41 languages demonstrates the validity of the proposed framework, especially when it comes to improving the performance of low-resource languages via the use of typologically related families for which richer sets of resources are available.
[ "Multilinguality", "Low-Resource NLP", "Machine Translation", "Syntactic Text Processing", "Text Generation", "Typology", "Responsible & Trustworthy NLP" ]
[ 0, 80, 51, 15, 47, 45, 4 ]
https://aclanthology.org//W06-2002/
A Framework for Incorporating Alignment Information in Parsing
[ "Cross-Lingual Transfer", "Multilinguality" ]
[ 19, 0 ]
SCOPUS_ID:85120344524
A Framework for Indonesian Grammar Error Correction
Grammatical Error Correction (GEC) is a challenge in Natural Language Processing research. Although many researchers have been focusing on GEC in universal languages such as English or Chinese, few studies focus on Indonesian, which is a low-resource language. In this article, we proposed a GEC framework that has the potential to be a baseline method for Indonesian GEC tasks. This framework treats GEC as a multi-classification task. It integrates different language embedding models and deep learning models to correct 10 types of Part of Speech (POS) error in Indonesian text. In addition, we constructed an Indonesian corpus that can be utilized as an evaluation dataset for Indonesian GEC research. Our framework was evaluated on this dataset. Results showed that the Long Short-Term Memory model based on word-embedding achieved the best performance. Its overall macro-average F0.5 in correcting 10 POS error types reached 0.551. Results also showed that the framework can be trained on a low-resource dataset.
[ "Low-Resource NLP", "Text Error Correction", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning", "Responsible & Trustworthy NLP" ]
[ 80, 26, 72, 15, 12, 4 ]
SCOPUS_ID:85124012841
A Framework for LED Signboard Recognition for the Autonomous Vehicle Management System
An electronic road sign is an important instrument for providing real-time traffic-related information in the field of intelligent vehicle management systems. In most cases, electronic signboards present a piece of complex text information in which each character is made up of a matrix of light-emitting diode (LED) lamps, referred to as LED text. LED dot matrix displays are also widely used to display notifications and content in a variety of applications. A matrix with a defined number of rows and columns is used to represent a single character. Since it demonstrates discontinuity, the LED text is difficult to detect. To do so, we have proposed a digital image processing-based recognition technique in this paper. The between-class variance technique is implemented for converting gray-scale images to binary images. We have used the Sobel masking operator to detect the cell region from the LED text. An improved optical character recognition (OCR) technique is then applied to the normalized LED text images for recognition purposes. The key contribution of this paper is to detect and recognize discontinuous LED texts from different environmental conditions that can be used to assist driver-less vehicle management systems. Our proposed framework has achieved a recognition rate of 84.4%.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/2110.11938v1
A Framework for Learning Assessment through Multimodal Analysis of Reading Behaviour and Language Comprehension
Reading comprehension, which has been defined as gaining an understanding of written text through a process of translating grapheme into meaning, is an important academic skill. Other language learning skills - writing, speaking and listening - are all connected to reading comprehension. There have been several measures proposed by researchers to automate the assessment of comprehension skills for second language (L2) learners, especially English as Second Language (ESL) and English as Foreign Language (EFL) learners. However, current methods measure particular skills without analysing the impact of reading frequency on comprehension skills. In this dissertation, we show how different skills could be measured and scored automatically. We also demonstrate, using example experiments on multiple forms of learners' responses, how frequent reading practices could impact on the variables of multimodal skills (reading pattern, writing, and oral fluency). This thesis comprises five studies. The first and second studies are based on eye-tracking data collected from EFL readers in repeated reading (RR) sessions. The third and fourth studies evaluate free-text summaries written by EFL readers in repeated reading sessions. The fifth and last study, described in the sixth chapter of the thesis, evaluates recorded oral summaries recited by EFL readers in repeated reading sessions. In a nutshell, through this dissertation, we show that multimodal skills of learners could be assessed to measure their comprehension skills as well as to measure the effect of repeated readings on these skills in the course of time, by finding significant features and by applying machine learning techniques with a combination of statistical models such as LMER.
[ "Multimodality", "Reasoning", "Machine Reading Comprehension" ]
[ 74, 8, 37 ]
SCOPUS_ID:85093834981
A Framework for Learning Cross-Lingual Word Embedding with Topics
Cross-lingual word embeddings have served as fundamental components for many Web-based applications. However, current models learn cross-lingual word embeddings by projecting two pre-trained monolingual embeddings learned with well-known models such as word2vec. This procedure leaves the embeddings indiscriminative with respect to some crucial properties of words such as homonymy and polysemy. In this paper, we propose a novel framework for learning better cross-lingual word embeddings with latent topics. In this framework, we firstly incorporate latent topical representations into the Skip-Gram model to learn high quality monolingual word embeddings. Then we use supervised and unsupervised methods to train cross-lingual word embeddings with topical information. We evaluate our framework in cross-lingual Web search tasks using the CLEF test collections. The results show that our framework outperforms previous state-of-the-art methods for generating cross-lingual word embeddings.
[ "Multilinguality", "Semantic Text Processing", "Cross-Lingual Transfer", "Representation Learning" ]
[ 0, 72, 19, 12 ]
SCOPUS_ID:84988799184
A Framework for Mining Thai Public Opinions
User-generated textual opinions strongly influence humans' beliefs and decisions. Due to the rapid growth of social data, readers cannot capture major opinions on particular topics by reading through all texts. To provide informative supporting evidence for sentiment analysis results, we integrate an opinion summarization framework into a Data and Opinion Mining (DOM) engine, which is an extension of a mobile Big Data analytics engine for mining Thai public opinions (XDOM). This opinion summarization framework is based on a modified genetic sentence clustering and sentence selection. This chapter presents the development of XDOM, which takes in data from multiple well-known social network sources, and then processes them using MapReduce, a keyword-based sentiment analysis technique, a clustering-based text summarization, and an influencer analysis algorithm. The XDOM engine is capable of identifying overall sentiments, representative text summaries, and influential authors of certain topics. The system's sentiment prediction accuracy was evaluated by matching the predicted result with human sentiment and tested in various case studies. The effectiveness of both approaches demonstrates the practical applications of the engine.
[ "Opinion Mining", "Summarization", "Text Clustering", "Text Generation", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 49, 30, 29, 47, 78, 3 ]
https://aclanthology.org//W08-0128/
A Framework for Model-based Evaluation of Spoken Dialog Systems
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85114919615
A Framework for Modeling Cyber Attack Techniques from Security Vulnerability Descriptions
Attack graphs are one of the main techniques used to automate the cybersecurity risk assessment process. In order to derive a relevant attack graph, up-to-date information on known cyber attack techniques should be represented as interaction rules. However, designing and creating new interaction rules is a time-consuming task performed manually by security experts. We present a novel, end-to-end, automated framework for modeling new attack techniques from the textual description of security vulnerabilities. Given a description of a security vulnerability, the proposed framework first extracts the relevant attack entities required to model the attack, completes missing information on the vulnerability, and derives a new interaction rule that models the attack; this new rule is then integrated within the MulVal attack graph tool. The proposed framework implements a novel data science pipeline that includes a dedicated cybersecurity linguistic model trained on the NVD repository, a recurrent neural network model used for attack entity extraction, a logistic regression model used for completing the missing information, and a transition probability matrix for automatically generating new interaction rules. We evaluated the performance of each of the individual algorithms, as well as the complete framework, and demonstrated its effectiveness.
[ "Responsible & Trustworthy NLP", "Structured Data in NLP", "Robustness in NLP", "Multimodality" ]
[ 4, 50, 58, 74 ]
SCOPUS_ID:85097241483
A Framework for Modeling Knowledge Graphs via Processing Natural Descriptions of Vehicle-Pedestrian Interactions
The full-scale deployment of autonomous driving demands successful interaction with pedestrians and other vulnerable road users, which requires an understanding of their dynamic behavior and intention. Current research achieves this by estimating pedestrians' trajectories mainly based on their past gait and movement information as well as other relevant scene information. However, autonomous vehicles still struggle with such interactions since the visual features alone may not supply the subtle details required to attain a superior understanding. The decision-making ability of the system can improve by incorporating human knowledge to guide the vision-based algorithms. In this paper, we adopt a novel approach to retrieve human knowledge from natural text descriptions of pedestrian-vehicle encounters, which is crucial to anticipate pedestrian intention and is difficult for computer vision (CV) algorithms to capture automatically. We applied natural language processing (NLP) techniques on the aggregated descriptions from different annotators to generate a temporal knowledge graph, which can capture the changes of intention and the corresponding reasoning processes at a finer resolution. In future work, we plan to show that, in combination with video processing algorithms, the knowledge graph has the potential to make the decision-making process more accurate by passively integrating the reasoning ability of humans.
[ "Visual Data in NLP", "Semantic Text Processing", "Structured Data in NLP", "Knowledge Representation", "Reasoning", "Multimodality" ]
[ 20, 72, 50, 18, 8, 74 ]
SCOPUS_ID:85114554391
A Framework for Multi-lingual Scene Text Detection Using K-means++ and Memetic Algorithms
Recent years have witnessed an exponential surge in interest to explore the domain of scene text detection as well as analysis in natural scene images. However, owing to the complexities arising due to various factors, it can be said that existing techniques may fail at times while attempting to detect text components. This paper presents a system wherein an image is taken as input and its color components are extracted at first. Next, the intensity values from each color channel are grouped together using the K-means++ clustering algorithm. A memetic algorithm is then applied to get an optimal set of candidate components from the color maps while eliminating the background. The spurious components are removed on the basis of their dimension and entropy measure. This system is experimentally evaluated on two standard datasets, namely MLe2e and KAIST, and on an in-house dataset of 400 images, all having multi-lingual texts. The results obtained are comparable with some state-of-the-art methods.
[ "Visual Data in NLP", "Multimodality", "Information Extraction & Text Mining", "Text Clustering" ]
[ 20, 74, 3, 29 ]
http://arxiv.org/abs/cmp-lg/9611006v1
A Framework for Natural Language Interfaces to Temporal Databases
Over the past thirty years, there has been considerable progress in the design of natural language interfaces to databases. Most of this work has concerned snapshot databases, in which there are only limited facilities for manipulating time-varying information. The database community is becoming increasingly interested in temporal databases, databases with special support for time-dependent entries. We have developed a framework for constructing natural language interfaces to temporal databases, drawing on research on temporal phenomena within logic and linguistics. The central part of our framework is a logic-like formal language, called TOP, which can capture the semantics of a wide range of English sentences. We have implemented an HPSG-based sentence analyser that converts a large set of English queries involving time into TOP formulae, and have formulated a provably correct procedure for translating TOP expressions into queries in the TSQL2 temporal database language. In this way we have established a sound route from English to a general-purpose temporal database language.
[ "Natural Language Interfaces" ]
[ 11 ]
http://arxiv.org/abs/2108.08946v1
A Framework for Neural Topic Modeling of Text Corpora
Topic Modeling refers to the problem of discovering the main topics that have occurred in corpora of textual data, with solutions finding crucial applications in numerous fields. In this work, inspired by the recent advancements in the Natural Language Processing domain, we introduce FAME, an open-source framework enabling an efficient mechanism of extracting and incorporating textual features and utilizing them in discovering topics and clustering text documents that are semantically similar in a corpus. These features range from traditional approaches (e.g., frequency-based) to the most recent auto-encoding embeddings from transformer-based language models such as BERT model family. To demonstrate the effectiveness of this library, we conducted experiments on the well-known News-Group dataset. The library is available online.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
SCOPUS_ID:85015314949
A Framework for Opinion Mining System with Design Pattern
Due to the sheer volume of opinion-rich web resources such as discussion forums, review sites, blogs, and news corpora available in digital form, much of the current research is focused on the area of sentiment analysis. Researchers intend to develop systems that can identify and classify opinion or sentiment as represented in an electronic text. An accurate method for predicting sentiments could enable us to extract opinions from the internet and predict on-line customers' preferences, which could prove valuable for economic or marketing research. In this paper we present a framework for opinion mining in Traditional Chinese, called FOM (Framework of Opinion Mining), to collect unstructured articles from popular web sites and analyse the opinion and sentiment in a semi-automatic way. The framework is developed with object-oriented design patterns, so as to support flexibility and maintainability. With the FOM framework, a new analysis algorithm can be easily replaced and integrated in a new application. A flood prediction application based on Facebook text in Taiwan is demonstrated in this paper.
[ "Opinion Mining", "Sentiment Analysis" ]
[ 49, 78 ]
SCOPUS_ID:85123796692
A Framework for Pre Processing, Recognizing and Distributed Proofreading of Assamese Printed Text
The paper provides a brief outline of the framework established for digitizing, recognizing and proofreading printed Indic documents with Assamese as a target language. The establishment of such a framework is essential as it depicts the workflow for the digitization and archival of the scanned text, and it has a high impact on the end result. The main idea behind the framework is to build the foundation for an automated text correction engine which provides suggestions based on the experience set generated using a manual text correction procedure and machine learning approaches. Most of the works already done in this domain are based on the dictionary approach, which has its own shortcomings like the inability to correct real-word errors, redundant queries, large size, non-exhaustive collection, etc. Hence, in this research, the dataset will be built from scratch based on the experience gathered during digitization, which in turn shall contribute to increasing the accuracy of the OCR engine by means of post-processing.
[ "Visual Data in NLP", "Text Error Correction", "Syntactic Text Processing", "Multimodality" ]
[ 20, 26, 15, 74 ]
http://arxiv.org/abs/2006.15854v1
A Framework for Pre-processing of Social Media Feeds based on Integrated Local Knowledge Base
Most of the previous studies on the semantic analysis of social media feeds have not considered the issue of ambiguity that is associated with slangs, abbreviations, and acronyms that are embedded in social media posts. These noisy terms have implicit meanings and form part of the rich semantic context that must be analysed to gain complete insights from social media feeds. This paper proposes an improved framework for pre-processing of social media feeds for better performance. To do this, the use of an integrated knowledge base (ikb) which comprises a local knowledge source (Naijalingo), urban dictionary and internet slang was combined with the adapted Lesk algorithm to facilitate semantic analysis of social media feeds. Experimental results showed that the proposed approach performed better than existing methods when it was tested on three machine learning models, which are support vector machines, multilayer perceptron, and convolutional neural networks. The framework had an accuracy of 94.07% on a standardized dataset, and 99.78% on localised dataset when used to extract sentiments from tweets. The improved performance on the localised dataset reveals the advantage of integrating the use of local knowledge sources into the process of analysing social media feeds particularly in interpreting slangs/acronyms/abbreviations that have contextually rooted meanings.
[ "Knowledge Representation", "Semantic Text Processing", "Sentiment Analysis" ]
[ 18, 72, 78 ]
https://aclanthology.org//W15-2206/
A Framework for Procedural Text Understanding
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
http://arxiv.org/abs/2110.04620v1
A Framework for Rationale Extraction for Deep QA models
As neural-network-based QA models become deeper and more complex, there is a demand for robust frameworks which can access a model's rationale for its prediction. Current techniques that provide insights on a model's working are either dependent on adversarial datasets or are proposing models with explicit explanation generation components. These techniques are time-consuming and challenging to extend to existing models and new datasets. In this work, we use `Integrated Gradients' to extract rationale for existing state-of-the-art models in the task of Reading Comprehension based Question Answering (RCQA). On detailed analysis and comparison with collected human rationales, we find that though ~40-80% words of extracted rationale coincide with the human rationale (precision), only 6-19% of human rationale is present in the extracted rationale (recall).
[ "Natural Language Interfaces", "Question Answering", "Information Extraction & Text Mining" ]
[ 11, 27, 3 ]
SCOPUS_ID:85124401574
A Framework for Real-time Sentiment Analysis of Big Data Generated by Social Media Platforms
Sentiment and opinion analysis have been of significant interest given the possibility of creating more meaningful business analytics from data sources such as social media in large-scale implementations using Big Data. There have been a range of implementations, typically focusing on one social media platform and user-entered text as input. Recently, efforts have been made to build real-time implementations of such sentiment systems using APIs and data streams from social media platforms. There exists a need to create a system that uses multiple input sources from social media in real time. We present an architecture using existing Big Data technologies to implement a real-time multi-social media input source with a central sentiment extraction and analysis component. The proposal uses Apache Kafka for the ingestion layer, a lexicon-based classifier and Spark for the analytical layer, YARN clusters for the tasks execution management, and a MongoDB database for the storage layer. The performance of the proposed framework is measured based on different quality metrics.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85107655862
A Framework for Scalable Similarity Evaluation in Text Graphs
Graphs and graph databases are applicable over a wide range of domains, including text mining and web mining. Using graphs to represent relationships between entities provides enriched models for emerging tasks of web search and information retrieval. Natural language processing algorithms use graphs to model structural relationships of texts efficiently, resulting in improved performance. However, the need to increase the accuracy of graph construction and weight allocation remains a fundamental challenge. Existing methods for these tasks provide limited efficiency and lack scalability for large graphs. In this study, we propose a novel graph-based method for text modeling and running a query to evaluate the similarity of text segments. In this method, the graph corresponding to the text is first created by modeling words and named entities with the state-of-the-art pre-trained BERT model. Graph nodes are then weighted in two stages. In the first stage, the nodes with more generalization obtain higher weights. The second weighting stage is done by the graph obtained from the query text. In this weighting step, nodes are considered important if they are specifically related to the query text. After determining the important nodes in the graph, the semantic similarity between the query text and the texts in the database is measured. The whole process of this framework uses a natural language processing pipeline on the Apache Spark scalable platform. The efficiency of the model was evaluated for both distributed and non-distributed configurations, along with its scalability on a Spark cluster. Evaluation of the accuracy using the Pearson correlation coefficient shows that the proposed method achieves higher performance than its competitors.
[ "Semantic Text Processing", "Green & Sustainable NLP", "Structured Data in NLP", "Semantic Similarity", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 72, 68, 50, 53, 4, 74 ]
SCOPUS_ID:85130946268
A Framework for Segmentation of Characters and Words from In-Air Handwritten Assamese Text
In-air handwriting is a popular human–computer interaction platform that offers users a natural and intriguing means of communication through air-written hand or finger gestures. The complexity of in-air handwriting stems from the fact that it is executed in a single stroke, which leads to irrelevant connecting movements called ligatures between adjoining character strokes or consecutive characters and words in a sentence. So, detection of relevant character or word components from air-written text is an intricate process that requires special consideration. In this paper, we present a multistage heuristic-based text segmentation framework that explores certain statistical and geometrical attributes to extract the significant character and word components from in-air handwritten text (IAHT) lines. Experimental evaluation on an IAHT data set consisting of Assamese sentences indicates that our proposed methodology offers a success rate of around 95%, establishing its capability in spotting characters and words from different variations of continuous text lines.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:85114964107
A Framework for Semantic Knowledge Representation of Al-Quran Based on Word Dependencies
A variety of applications have been built in recent years with the aim to extract knowledge from Al-Quran. Current knowledge representations of Al-Quran give attention primarily on conceptual ontology models that describe the semantic relations between the Quranic concepts or entities. There seems to be minimal effort towards recognizing the semantic relations between words in Quranic text, which is relatively more complex. This paper aims to present a framework for semantic knowledge representation of Al-Quran using dependency relations between words, in an attempt to boost the retrieval accuracy for Al-Quran. The semantic analysis is performed on Quranic verses according to word dependency relations using dependency parsing. Based on parsed dependencies, a set of rules are formulated to build a semantic graph of Surah Ali Imran of Al-Quran. The efficiency of the semantic representation was tested by developing a prototype question answering system. The framework was evaluated using precision and recall, First Hit Success, First Answer Reciprocal Rank and Total Reciprocal Rank by comparing the retrieved and actual answers. The results indicate that the performance of the proposed framework using word dependencies improves the semantic representation of knowledge.
[ "Semantic Text Processing", "Question Answering", "Syntactic Text Processing", "Representation Learning", "Knowledge Representation", "Natural Language Interfaces", "Syntactic Parsing", "Information Retrieval" ]
[ 72, 27, 15, 12, 18, 11, 28, 24 ]
SCOPUS_ID:85056185225
A Framework for Semantic Video Content Indexing Using Textual Information
In recent years, many works have been published in the video indexing and retrieval field. However, except in some specific cases such as sport video, where it is possible to estimate the set of important events and concepts in the document, this research is generally limited to analyzing low-level content. In this paper, we introduce an approach for semantic video indexing that combines two levels of description. First, we automatically extract textual information from video frames. The second part of our approach consists of exploiting linguistic techniques and a semantic network in order to extract semantic concepts such as person identity, location name, event type, etc. This information is then used for semantic description of video content. Our proposed approach was tested on a video collection of Arabic TV news, and the experimental results have been satisfactory.
[ "Visual Data in NLP", "Indexing", "Information Retrieval", "Multimodality" ]
[ 20, 69, 24, 74 ]
SCOPUS_ID:85145822009
A Framework for Smart Home System with Voice Control Using NLP Methods
The proliferation of information technologies and the emergence of ubiquitous computing have quickly transformed electronic devices from isolated islands of data and control into interconnected parts of intelligent systems. These network-based systems have advanced features, including Internet of Things (IoT) sensors and actuators, multiple connectivity options and multimodal user interfaces, and they also enable remote monitoring and management. In order to develop a human machine interface of smart home systems with speech recognition, we propose a new IoT-fog-cloud framework using natural language processing (NLP) methods. The new methodology adds utterance to command transformation to the existing cloud-based speech-to-text and text-to-speech services. This approach is flexible and can be easily adapted for different types of automation systems and consumer electronics as well as to almost every non-tonal language not currently supported by online platforms for intent detection and classification. The proposed framework has been employed in the development of prototypes of voice user interface extension of existing smart security system via new service for speech intent recognition. Tests on the system were carried out and the obtained results show the effectiveness of the new voice communication option. The speech-based interface is reliable; it facilitates customers and improves their experience with smart home devices.
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
SCOPUS_ID:85058814185
A Framework for Solving Explicit Arithmetic Word Problems and Proving Plane Geometry Theorems
This paper presents a framework for solving math problems stated in a natural language (NL) and applies the framework to develop algorithms for solving explicit arithmetic word problems and proving plane geometry theorems. We focus on problem understanding, that is, the transformation of a NL description of a math problem to a formal representation. We view this as a relation extraction problem, and adopt a greedy algorithm to extract the mathematical relations using a syntax-semantics model, which is a set of patterns describing how a syntactic pattern is mapped to its formal semantics. Our method yields a human readable solution that shows how the mathematical relations are extracted one at a time. We apply our framework to solve arithmetic word problems and prove plane geometry theorems. For arithmetic word problems, the extracted relations are transformed into a system of equations, and the equations are then solved to produce the solution. For plane geometry theorems, these extracted relations are input to an inference system to generate the proof. We evaluate our approach on a set of arithmetic word problems stated in Chinese, and two sets of plane geometry theorems stated in Chinese and English. Our algorithms achieve high accuracies on these datasets and they also show some desirable properties such as brevity of algorithm description and legibility of algorithm actions.
[ "Relation Extraction", "Numerical Reasoning", "Reasoning", "Information Extraction & Text Mining" ]
[ 75, 5, 8, 3 ]
SCOPUS_ID:85149222260
A Framework for Understanding Unstructured Financial Documents Using RPA and Multimodal Approach
The financial business process worldwide suffers from huge dependencies upon labor and written documents, thus making it tedious and time-consuming. In order to solve this problem, traditional robotic process automation (RPA) has recently been developed into a hyper-automation solution by combining computer vision (CV) and natural language processing (NLP) methods. These solutions are capable of image analysis, such as key information extraction and document classification. However, they leave room for improvement on text-rich document images and require much training data for processing multilingual documents. This study proposes a multimodal approach-based intelligent document processing framework that combines a pre-trained deep learning model with traditional RPA used in banks to automate business processes from real-world financial document images. The proposed framework can perform classification and key information extraction on a small amount of training data and analyze multilingual documents. In order to evaluate the effectiveness of the proposed framework, extensive experiments were conducted using Korean financial document images. The experimental results show the superiority of the multimodal approach for understanding financial documents and demonstrate that adequate labeling can improve performance by up to about 15%.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Information Retrieval", "Multimodality", "Text Classification", "Multilinguality" ]
[ 20, 3, 24, 74, 36, 0 ]
SCOPUS_ID:85094179656
A Framework for Understanding the Relationship between Social Media Discourse and Mental Health
Over 35% of the world's population uses social media. Platforms like Facebook, Twitter, and Instagram have radically influenced the way individuals interact and communicate. These platforms facilitate both public and private communication with strangers and friends alike, providing rich insight into an individual's personality, health, and wellbeing. To date, many researchers have employed a variety of methods for extracting mental health-centric features from digital text communication (DTC) data, including natural language processing, social network analysis, and extraction of temporal discourse patterns. However, none have explored a hierarchical framework for extracting features from private messages with the goal of unifying approaches across methodological domains. Furthermore, while analyses of large, public corpora abound in existing literature, limited work has been done to explore the relationship between private textual communications, personality traits, and symptoms of mental illness. We present a framework for constructing rich feature spaces from digital text communications. We then demonstrate the efficacy of our framework by applying it to a dataset of private Facebook messages in a college student population (N=103). Our results reveal key individual differences in temporal and relational behaviors, as well as language usage in relation to validated measures of trait-level anxiety, loneliness, and personality. This work represents a critical step forward in linking features of private social media messages to validated measures of mental health, wellbeing, and personality.
[ "Responsible & Trustworthy NLP", "Ethical NLP", "Information Extraction & Text Mining" ]
[ 4, 17, 3 ]
SCOPUS_ID:85063593996
A Framework for Word Clustering of Bangla Sentences Using Higher Order N-gram Language Model
Clustering of words is the method used to partition sets of words into subsets of semantically similar words. Word clustering is crucial in many applications of natural language processing such as POS tagging, spell checking, grammar checking, word sense disambiguation, and many more. In this paper we propose a model using a higher-order N-gram language model that is helpful for clustering Bangla words efficiently, based on similarity of meaning and context. N-gram rules are used to propagate various types of probabilities for different forms of sentences. For implementation, we also propose a system that generates clusters of words and is tested with threshold values to justify the given results. By experimenting with a large corpus of Bangla sentences of varying word lengths, our proposed model shows an accuracy of approximately 89% for higher-order N-grams, which is quite satisfactory.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Text Clustering" ]
[ 52, 72, 3, 29 ]
SCOPUS_ID:85091286980
A Framework for a Comprehensive Conceptualization of Urban Constructs
Analogy is thought to be foundational for designing and for design creativity. Nonetheless, practicing analogical reasoning needs a knowledge-base. The paper proposes a framework for constructing a knowledge-base of urban constructs that builds on an ontology of urbanism. The framework is composed of two modules that are responsible for representing either the concepts or the features of any urban constructs' materialization. The concepts are represented as a knowledge graph (KG) named SpatialNet, while the physical features are represented by a deep neural network (DNN) called SpatialFeaturesNet. For structuring SpatialNet as a KG that comprehensively conceptualizes spatial qualities, deep learning applied to natural language processing (NLP) is employed. The comprehensive concepts of SpatialNet are firstly discovered using semantic analyses of nine English lingual corpora and then structured using the urban ontology. The goal of the framework is to map the spatial features to the plethora of their matching concepts. The granularity and the coherence of the proposed framework is expected to sustain or substitute other known analogical, knowledge-based, inspirational design approaches such as case-based reasoning (CBR) and its analogical application to architectural design (CBD).
[ "Semantic Text Processing", "Structured Data in NLP", "Knowledge Representation", "Reasoning", "Multimodality" ]
[ 72, 50, 18, 8, 74 ]
SCOPUS_ID:84883114095
A Framework for machine translation output combination
In this paper, we propose a framework for combining outputs from multiple on-line machine translation systems. This framework consists of several modules, including selection, substitution, insertion, and deletion. We evaluate the combination framework on IWSLT07 in travel domain, for the translation direction from Chinese to English. Three different on-line machine translation systems, Google, Yahoo, and TransWhiz, are used in the investigation. The experimental results show that our proposed combination framework improves BLEU score from 19.15 to 20.55. It achieves an absolute improvement of 1.4 in the BLEU score.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85086321513
A Framework for online social network volatile data analysis: A case for the fast fashion industry
Consumer satisfaction is an important part of any business, as it has been shown to be a major factor in consumer loyalty. Identifying satisfaction with products is also important, as it allows businesses to alter production plans based on the level of consumer satisfaction with a product. With consumer satisfaction data being very volatile for some products due to a short requirement period, current consumer satisfaction must be identified within a shorter period before the data becomes obsolete. The fast fashion industry, which is part of the fashion industry, is adopted as a case study in this research. Unlike slow fashion, fast fashion products have short shelf lives and their retailers must be able to react swiftly to consumer demands. This research aims to investigate the effectiveness of current data mining techniques when used to identify consumer satisfaction towards fast fashion products. This is carried out by designing, implementing and testing a software artefact that utilises data mining techniques to obtain, validate and analyse fast fashion social data, sourced from Twitter, to identify consumer satisfaction towards specific product types. In addition, further analysis is carried out using a sentiment scaling method adapted to the characteristics of fast fashion.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:77956816842
A Framework for scalable summarization of video
Video summaries provide compact representations of video sequences, with the length of the summary playing an important role, trading off the amount of information conveyed and how fast it can be visualized. This letter proposes scalable summarization as a method to easily adapt the summary to a suitable length, according to the requirements in each case, along with a suitable framework. The analysis algorithm uses a novel iterative ranking procedure in which each summary is the result of the extension of the previous one, balancing information coverage and visual pleasantness. The result of the algorithm is a ranked list, a scalable representation of the sequence useful for summarization. The summary is then efficiently generated from the bitstream of the sequence using bitstream extraction. © 2010 IEEE.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Summarization", "Text Generation", "Multimodality" ]
[ 20, 3, 30, 47, 74 ]
https://aclanthology.org//W89-0236/
A Framework for the Development of Natural Language Grammars
This paper describes a parsing system used in a framework for the development of Natural Language grammars. It is an interactive environment suitable for writing robust NL applications generally. Its heart is the SAIL parsing algorithm that uses a Phrase-Structure Grammar with extensive augmentations. Furthermore, some particular parsing tools are embedded in the system, and provide a powerful environment for developing grammars, even of large coverage.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
https://aclanthology.org//W15-4707/
A Framework for the Generation of Computer System Diagnostics in Natural Language using Finite State Methods
[ "Text Generation" ]
[ 47 ]
SCOPUS_ID:85083547009
A Framework of Computer-Based Learning System Based on Self-Regulated Model in English Writing
This paper presents the design phase of a computer-based learning system for English writing for Thai EFL learners. This system is designed to incorporate the self-regulated model and to use components of linguistics and machine translation as a learning environment. The system is designed around the three main phases of the self-regulated model: the forethought phase, the performance phase, and the self-reflection phase. The learning environment is used to guide complete target sentence writing. Moreover, the user interface is designed as an assisting tool for supporting a student's self-regulated learning in English writing. There are three main modules in the system: learning profile acquisition, learning behavior collection, and learning analytics. The system design is an important phase to encourage interaction between learners and the computer-based learning system for learning English writing. Learners' behavior is then collected into a data log store for learning analysis. This system aims to collect Thai EFL learners' behavior and find behavioral patterns that could serve as a helpful reference for improving the system and teaching materials in the future.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/2212.10051v1
A Framework of Customer Review Analysis Using the Aspect-Based Opinion Mining Approach
Opinion mining is the branch of computation that deals with opinions, appraisals, attitudes, and emotions of people and their different aspects. This field has attracted substantial research interest in recent years. Aspect-level analysis (called aspect-based opinion mining) is often desired in practical applications as it provides detailed opinions or sentiments about different aspects of entities and entities themselves, which are usually required for action. Aspect extraction and entity extraction are thus two core tasks of aspect-based opinion mining. This paper presents a framework for aspect-based opinion mining based on the concept of transfer learning, evaluated on real-world customer reviews available on the Amazon website. The model has yielded quite satisfactory results in its task of aspect-based opinion mining.
[ "Opinion Mining", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 49, 78, 3 ]
SCOPUS_ID:85115989855
A Framework of Responsible Innovation (RI) Model for Artificial Intelligence (AI) in Indian Healthcare
The COVID-19 pandemic has hastened the digitalization of healthcare in India, and a key disruption has been the adoption of Artificial Intelligence (AI) enabled systems. An AI-enabled healthcare information system (HIS) is the foundation on which AI can grow, as it impacts data collection, data cleaning, data privacy, data comprehensiveness, and data robustness. The allied healthcare staff are vital for using AI-enabled Health / Hospital Information Systems (HIS). AI, like any other technology, is a double-edged sword and can be used for both good and bad purposes. Therefore, responsible innovation (RI) is essential to tilt the balance more towards social good rather than harm. Here we propose a framework of an RI model for useful adoption of AI-enabled healthcare delivery in India. This will need policy-level driving, as well as ethical capacity building of the human resources required for healthcare delivery.
[ "Responsible & Trustworthy NLP" ]
[ 4 ]
SCOPUS_ID:85129245343
A Framework to Assist in Didactic Planning at Undergraduate Level
In the teaching-learning process under the competency-based educational model, the instructor is a facilitator and seeks to generate a flexible and adaptable environment for student learning. One of the first tasks of the facilitator is the structuring of didactic planning. Didactic planning includes strategies for teaching and learning, evidence gathering, and choice of evaluation instruments. In this paper, we propose a framework based on natural language processing techniques with the support of an ontology grounded in the experience of instructors and university-level course plans in the information systems area. We employ Bloom’s taxonomy in the ontology design, producing an ascending structure for didactic planning, which allows the student to learn gradually. The developed framework can analyze the key elements that a didactic plan must contain and identify inter-related areas. Evaluation results with Cohen’s kappa coefficient between expert judgement and our framework show that it is possible to assist instructors in structuring their didactic planning. Out of the nine processes analyzed with the framework, an almost perfect kappa level was achieved in five processes, a substantial level in three processes, and a moderate level in one process.
[ "Knowledge Representation", "Semantic Text Processing" ]
[ 18, 72 ]
SCOPUS_ID:85109026362
A Framework to Capture the Shift in Dynamics of a Multi-phase Protest—A Case Study of Hong Kong Protests
There has been a shift in protest dynamics in the 2019 Hong Kong protests, when compared to the Hong Kong Umbrella Protests of 2014. The intensity, magnitude and characteristics of the movement in 2019 were perceptibly different. This work presents a framework to capture such a shift using techniques of social network analysis, data science and natural language processing (NLP). This work models and analyzes the underlying social network structure on Twitter to decipher how things changed in 2019—from a social network analysis and NLP view. The social structure analysis indicates the diffused nature of leadership in 2019 versus the central nature of leadership in 2014. It also shows that the relative number of advocates versus supporters changed drastically in 2019. Brokerage analysis indicates the significant roles leaders played in 2014 as compared to the 2019 protest. The topic modelling analysis, along with sentiment analysis of trending hashtags, reveal how the themes of protest changed in 2019. In 2014, the theme evolved from civil disobedience to call for democracy. In 2019, it evolved towards the fight for freedom. Emotion has been investigated at various levels of granularity, i.e. across all tweets, emojis, hashtags, and for each central topic in 2014 versus 2019. The 2019 protest was characterized much more by anger, fear and anticipation. There was a larger sense of hopelessness and despair in 2019. Today social movements have moved online, and happen in phases. The proposed framework successfully deciphers the dynamics behind the two phases of Hong Kong protests and can be reused for similar analysis in future.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
http://arxiv.org/abs/2205.02005v1
A Framework to Generate High-Quality Datapoints for Multiple Novel Intent Detection
Systems like Voice-command based conversational agents are characterized by a pre-defined set of skills or intents to perform user specified tasks. In the course of time, newer intents may emerge requiring retraining. However, the newer intents may not be explicitly announced and need to be inferred dynamically. Thus, there are two important tasks at hand (a). identifying emerging new intents, (b). annotating data of the new intents so that the underlying classifier can be retrained efficiently. The tasks become specially challenging when a large number of new intents emerge simultaneously and there is a limited budget of manual annotation. In this paper, we propose MNID (Multiple Novel Intent Detection) which is a cluster based framework to detect multiple novel intents with budgeted human annotation cost. Empirical results on various benchmark datasets (of different sizes) demonstrate that MNID, by intelligently using the budget for annotation, outperforms the baseline methods in terms of accuracy and F1-score.
[ "Intent Recognition", "Sentiment Analysis" ]
[ 79, 78 ]
SCOPUS_ID:85108029964
A Framework to Identify Allergen and Nutrient Content in Fruits and Packaged Food using Deep Learning and OCR
Allergic reactions to food can depend on a wide range of factors, and hence the proportionate reactions can vary. With such a wide range of unpredictability, classifying allergens and the rate at which they would affect a person is what scientists have been working on for years. To bring awareness of the food we consume and the potential threats it can cause us, in this paper we propose a 2-Tab Deep Learning based application to provide the nutrient and allergen content in fruits and vegetables and to display allergen information in packaged food using OCR. Through a novel Deep Learning framework, the picture of the fruit or vegetable captured via the application is classified and recognized, and the nutritional facts and allergen information are presented. The fine-tuned deep learning model, which is deployed in the cloud, obtained a good accuracy of 97.37% on our dataset. For packaged food, the picture of the ingredient index is captured via the application and the allergen information is presented after the text is recognized through Optical Character Recognition, which is carried out on a remote server.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85148246253
A Framing Analysis and Regional Comparison of Newspaper Media Reports of COVID-19 in Shanghai: Based on the LDA Topic Model
In this study, we analyzed reports published by local and non-local newspapers on the 2022 COVID-19 outbreak in Shanghai using the Latent Dirichlet Allocation (LDA) topic modeling technique. Framing Theory suggests that both coverage types would typically produce similar media frames, notwithstanding the presence of slight differences in bias. We identified the media frames used by local and non-local newspapers on the Shanghai outbreak and our analysis subsequently informed our discussion on the similarities and differences that were uncovered. The paper offers suggestions for media reporting on how to better cover the outbreak.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
http://arxiv.org/abs/cs/9809050v1
A Freely Available Morphological Analyzer, Disambiguator and Context Sensitive Lemmatizer for German
In this paper we present Morphy, an integrated tool for German morphology, part-of-speech tagging and context-sensitive lemmatization. Its large lexicon of more than 320,000 word forms plus its ability to process German compound nouns guarantee a wide morphological coverage. Syntactic ambiguities can be resolved with a standard statistical part-of-speech tagger. By using the output of the tagger, the lemmatizer can determine the correct root even for ambiguous word forms. The complete package is freely available and can be downloaded from the World Wide Web.
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
http://arxiv.org/abs/cmp-lg/9410014v1
A Freely Available Syntactic Lexicon for English
This paper presents a syntactic lexicon for English that was originally derived from the Oxford Advanced Learner's Dictionary and the Oxford Dictionary of Current Idiomatic English, and then modified and augmented by hand. There are more than 37,000 syntactic entries from all 8 parts of speech. An X-windows based tool is available for maintaining the lexicon and performing searches. C and Lisp hooks are also available so that the lexicon can be easily utilized by parsers and other programs.
[ "Syntactic Text Processing" ]
[ 15 ]
http://arxiv.org/abs/cmp-lg/9410024v1
A Freely Available Wide Coverage Morphological Analyzer for English
This paper presents a morphological lexicon for English that handles more than 317000 inflected forms derived from over 90000 stems. The lexicon is available in two formats. The first can be used by an implementation of a two-level processor for morphological analysis. The second, derived from the first one for efficiency reasons, consists of a disk-based database using a UNIX hash table facility. We also built an X Window tool to facilitate the maintenance and browsing of the lexicon. The package is ready to be integrated into a natural language application such as a parser through hooks written in Lisp and C.
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
SCOPUS_ID:85096603966
A French corpus and annotation schema for named entity recognition and relation extraction of financial news
In the financial services industry, compliance involves a series of practices and controls in order to meet key regulatory standards which aim to reduce financial risk and crime, e.g. money laundering and financing of terrorism. Faced with the growing risks, it is imperative for financial institutions to seek automated information extraction techniques for monitoring financial activities of their customers. This work describes an ontology of compliance-related concepts and relationships along with a corpus annotated according to it. The presented corpus consists of financial news articles in French and allows for training and evaluating domain-specific named entity recognition and relation extraction algorithms. We present some of our experimental results on named entity recognition and relation extraction using our annotated corpus. We furthermore aim to use the proposed ontology towards the construction of a knowledge base of financial relations.
[ "Semantic Text Processing", "Relation Extraction", "Knowledge Representation", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 72, 75, 18, 34, 3 ]
SCOPUS_ID:85096549263
A French medical conversations corpus annotated for a virtual patient dialogue system
Data-driven approaches for creating virtual patient dialogue systems require the availability of large data specific to the language, domain and clinical cases studied. Based on the lack of dialogue corpora in French for medical education, we propose an annotated corpus of dialogues including medical consultation interactions between doctor and patient. In this work, we detail the building process of the proposed dialogue corpus, describe the annotation guidelines and also present the statistics of its contents. We then conducted a question categorization task to evaluate the benefits of the proposed corpus that is made publicly available.
[ "Natural Language Interfaces", "Question Answering", "Dialogue Systems & Conversational Agents" ]
[ 11, 27, 38 ]
SCOPUS_ID:85083954961
A French to English Language Translator Using Recurrent Neural Network with Attention Mechanism
Many people face language barriers in everyday life, for example when talking to a person who speaks only a language they do not understand, or when information is available only in French while they know only English. This type of problem can be addressed by a technology called machine translation. This paper proposes machine translation using a recurrent neural network with an attention mechanism. Recurrent neural networks (RNNs) are a type of neural network designed to capture information from sequence and time series data. RNNs are useful for learning patterns in a given set of data, and human language is one big, complicated pattern. In machine translation, two recurrent neural networks work together to transform one sequence into another: an encoder network converts an input sequence into a vector, and a decoder network converts the vector into a new sequence. To improve this RNN model, we use the attention mechanism, which helps the decoder focus on specific ranges of the input sequence.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:84978916675
A French-Tamazight MT system for computer science
Today, industrial and large-audience machine translation software still produces poor-quality results. For example, when we use Babelfish to translate the compound term entrées sorties physiques, we obtain entries physical outputs instead of physical input output. Machine translation software needs increasingly large and varied terminological resources. Our aim is to develop a French-Tamazight MT system for computer science compound words. NooJ is the linguistic environment used for development.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85124970810
A French-to-English Machine Translation Model Using Transformer Network
Traditional machine translation based on RNNs has two major defects: (1) it can only process input word by word, which results in slow training; (2) when sentences are too long, vanishing and exploding gradients reduce translation accuracy. To solve these problems, this paper designs a Transformer-based machine translation system implemented in PyTorch. Compared with traditional machine translation, the Transformer uses the attention mechanism to address the defects of RNNs and effectively solves the efficiency and forgetting problems. The French-to-English machine translation system based on the Transformer designed in this paper achieves a translation accuracy of 80% after training and practical application.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
SCOPUS_ID:85111243438
A Frequency-Based Approach to Extract Aspect for Aspect-Based Sentiment Analysis
Data is king nowadays, and users worldwide express their views on different platforms; analysts aggregate and analyze this data, making sentiment analysis a major tool. Sentiment analysis can be done at different levels. This paper discusses a more granular level of sentiment analysis, aspect-based sentiment analysis, which aims to predict the sentiment polarity of text for a specific target. The majority of work in this field focuses on extracting aspects or features, finding their sentiment polarities, and aggregating them to determine the final polarity of the whole text. Aspect extraction is the key to this process, so our work focuses on aspect extraction. In this paper, we address the issue of aspect extraction, propose our approach to it, and show how it improves on existing approaches.
[ "Sentiment Analysis", "Aspect-based Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 78, 23, 3 ]
SCOPUS_ID:85096410229
A Frequency-Category Based Feature Selection in Big Data for Text Classification
In the big data era, text classification is considered one of the most important machine learning application domains. However, to build an efficient classification algorithm, feature selection is a fundamental step to reduce dimensionality, achieve better accuracy, and improve execution time. In the literature, most feature ranking techniques are document-based. The major weakness of this approach is that it favours terms occurring frequently in documents and neglects the correlation between terms and categories. In this work, unlike traditional approaches which deal with documents individually, we use the MapReduce paradigm to process the documents of each category as a single document. Then we introduce a parallel frequency-category feature selection method, independent of any classifier, to select the most relevant features. Experimental results on the 20-Newsgroups dataset show that our approach improves classification accuracy to 90.3%. Moreover, the system maintains simplicity and low execution time.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:2342520096
A Frequency-based Technique to Improve the Spelling Suggestion Rank in Medical Queries
Objective: There is an abundance of health-related information online, and millions of consumers search for such information. Spell checking is of crucial importance in returning pertinent results, so the authors propose a technique for increasing the effectiveness of spell-checking tools used for health-related information retrieval. Design: A sample of incorrectly spelled medical terms was submitted to two different spell-checking tools, and the resulting suggestions, derived under two different dictionary configurations, were re-sorted according to how frequently each term appeared in log data from a medical search engine. Measurements: Univariable analysis was carried out to assess the effect of each factor (spell-checking tool, dictionary type, re-sort, or no re-sort) on the probability of success. The factors that were statistically significant in the univariable analysis were then used in multivariable analysis to evaluate the independent effect of each of the factors. Results: The re-sorted suggestions proved to be significantly more accurate than the original list returned by the spell-checking tool. The odds of finding the correct suggestion in the number one rank were increased by 63% after re-sorting using the authors' method. This effect was independent of both the dictionary and the spell-checking tools that were used. Conclusion: Using knowledge about the frequency of a given word's occurrence in the medical domain can significantly improve spelling correction for medical queries.
[ "Text Error Correction", "Syntactic Text Processing" ]
[ 26, 15 ]
http://arxiv.org/abs/2302.07159v1
A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?
As text-to-image systems continue to grow in popularity with the general public, questions have arisen about bias and diversity in the generated images. Here, we investigate properties of images generated in response to prompts which are visually under-specified, but contain salient social attributes (e.g., 'a portrait of a threatening person' versus 'a portrait of a friendly person'). Grounding our work in social cognition theory, we find that in many cases, images contain similar demographic biases to those reported in the stereotype literature. However, trends are inconsistent across different models and further investigation is warranted.
[ "Visual Data in NLP", "Responsible & Trustworthy NLP", "Ethical NLP", "Multimodality" ]
[ 20, 4, 17, 74 ]
http://arxiv.org/abs/1505.06294v1
A Frobenius Model of Information Structure in Categorical Compositional Distributional Semantics
The categorical compositional distributional model of Coecke, Sadrzadeh and Clark provides a linguistically motivated procedure for computing the meaning of a sentence as a function of the distributional meaning of the words therein. The theoretical framework allows for reasoning about compositional aspects of language and offers structural ways of studying the underlying relationships. While the model so far has been applied on the level of syntactic structures, a sentence can bring extra information conveyed in utterances via intonational means. In the current paper we extend the framework in order to accommodate this additional information, using Frobenius algebraic structures canonically induced over the basis of finite-dimensional vector spaces. We detail the theory, provide truth-theoretic and distributional semantics for meanings of intonationally-marked utterances, and present justifications and extensive examples.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
http://arxiv.org/abs/2010.12812v2
A Frustratingly Easy Approach for Entity and Relation Extraction
End-to-end relation extraction aims to identify named entities and extract relations between them. Most recent work models these two subtasks jointly, either by casting them in one structured prediction framework, or performing multi-task learning through shared representations. In this work, we present a simple pipelined approach for entity and relation extraction, and establish the new state-of-the-art on standard benchmarks (ACE04, ACE05 and SciERC), obtaining a 1.7%-2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders. Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model. Through a series of careful examinations, we validate the importance of learning distinct contextual representations for entities and relations, fusing entity information early in the relation model, and incorporating global context. Finally, we also present an efficient approximation to our approach which requires only one pass of both entity and relation encoders at inference time, achieving an 8-16$\times$ speedup with a slight reduction in accuracy.
[ "Language Models", "Relation Extraction", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 75, 72, 3 ]
http://arxiv.org/abs/2201.12723v3
A Frustratingly Simple Approach for End-to-End Image Captioning
Image captioning is a fundamental task joining vision and language, concerning cross-modal understanding and text generation. Recent years have witnessed growing attention to image captioning. Most existing works follow a traditional two-stage training paradigm: before training the captioning model, an extra object detector is first used to recognize the objects in the image. However, this requires sizeable datasets with fine-grained object annotations for training the object detector, which is a daunting task. In addition, errors of the object detector easily propagate to the subsequent captioning model, degrading its performance. To alleviate these defects, we propose a frustratingly simple but highly effective end-to-end image captioning framework, Visual Conditioned GPT (VC-GPT), which connects a pre-trained visual encoder (CLIP-ViT) and language decoder (GPT2). Different from the vanilla connection method that directly inserts cross-attention modules into GPT2, we propose a self-ensemble cross-modal fusion mechanism that comprehensively considers both single- and cross-modal knowledge. As a result, we do not need extra object detectors for model training. Experimental results on three popular image captioning benchmarks (MSCOCO, Flickr30k and NoCaps) demonstrate that our VC-GPT achieves either the best or the second-best performance across all evaluation metrics over extensive baseline systems.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 47, 74 ]
http://arxiv.org/abs/1808.03815v2
A Full End-to-End Semantic Role Labeler, Syntax-agnostic Over Syntax-aware?
Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence, including the subtasks of predicate disambiguation and argument labeling. Previous studies usually formulate the entire SRL problem as two or more subtasks. For the first time, this paper introduces an end-to-end neural model that tackles predicate disambiguation and argument labeling in one shot. Using a biaffine scorer, our model directly predicts all semantic role labels for all given word pairs in the sentence without relying on any syntactic parse information. Specifically, we augment the BiLSTM encoder with a non-linear transformation to further distinguish the predicate and the argument in a given sentence, and model the semantic role labeling process as a word pair classification task by employing the biaffine attention mechanism. Though the proposed model is syntax-agnostic with a local decoder, it outperforms the state-of-the-art syntax-aware SRL systems on the CoNLL-2008 and 2009 benchmarks for both English and Chinese. To the best of our knowledge, we report the first syntax-agnostic SRL model that surpasses all known syntax-aware models.
[ "Semantic Parsing", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 40, 72, 15 ]
SCOPUS_ID:85135015047
A Full Information Enhanced Question Answering System Based on Hierarchical Heterogeneous Crowd Intelligence Knowledge Graph
With the development of deep learning technology, generative question answering models based on neural networks have gradually become a mainstream research direction in academia and industry. Current question answering models fail to make full use of the multi-level knowledge embedded in the learned corpus, and their interpretability and robustness in the face of attack samples have certain shortcomings. From the perspective of information theory, this paper organizes the semantic, pragmatic, and syntactic knowledge contained in the large amount of crowd intelligence corpora obtained from Internet platforms into a hierarchical, heterogeneous natural language knowledge graph. A graph-based full information enhanced question answering model (GFIQA) is proposed that incorporates this hierarchical heterogeneous knowledge graph. Through a crowd intelligence knowledge interpretation module, a knowledge-enhanced generation module, and a single-layer anisotropic decoder, relevant knowledge in the crowd intelligence natural language knowledge graph is appropriately selected via the attention mechanism, improving question understanding and answer generation. Experimental results show that the GFIQA model achieves large improvements in PPL, BLEU, and ENC (PPL: −11.76, BLEU: +0.126, ENC: +0.232) compared with the baseline model, and can generate fluent answers with reasonable grammatical structure and rich semantics.
[ "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Natural Language Interfaces", "Multimodality" ]
[ 72, 50, 27, 18, 11, 74 ]
http://arxiv.org/abs/2104.08428v1
A Full Text-Dependent End to End Mispronunciation Detection and Diagnosis with Easy Data Augmentation Techniques
Recently, end-to-end mispronunciation detection and diagnosis (MD&D) systems have become a popular alternative that greatly simplifies the model-building process of conventional hybrid DNN-HMM systems by representing complicated modules with a single deep network architecture. In this paper, in order to utilize the prior text in an end-to-end structure, we present a novel text-dependent model that differs from sed-mdd: the model achieves a fully end-to-end system by aligning the audio with the phoneme sequences of the prior text inside the model through the attention mechanism. Moreover, using the prior text as input introduces an imbalance between positive and negative samples in the phoneme sequence. To alleviate this problem, we propose three simple data augmentation methods, which effectively improve the model's ability to capture mispronounced phonemes. We conduct experiments on L2-ARCTIC, and our best performance improves from 49.29% to 56.08% in the F-measure metric compared to the CNN-RNN-CTC model.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP" ]
[ 80, 4 ]
http://arxiv.org/abs/1810.09580v1
A Fully Attention-Based Information Retriever
Recurrent neural networks are now the state-of-the-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.
[ "Natural Language Interfaces", "Question Answering", "Information Retrieval" ]
[ 11, 27, 24 ]
SCOPUS_ID:85134240240
A Fully Automated Intelligent Medicine Dispensary System Based on AIoT
The COVID-19 pandemic has caused a high rate of infection, so effective epidemic prevention measures that avoid a second spread of COVID-19 in hospitals are a major challenge for healthcare workers. Hospitals, where medicines are collected, are vulnerable to the rapid spread of COVID-19. Using the remote health monitoring technology of the Internet of Things (IoT) to automatically monitor and record the basic medical information of patients reduces the workload of healthcare workers and avoids the secondary infections caused by direct contact with them, making this an important research topic. This research proposes a new artificial intelligence solution based on the IoT that replaces existing medicine stations and recognizes medicine bags through state-of-the-art optical character recognition (OCR) models, PP-OCR v2 and PGNet. Using OCR to identify medicine bags can replace healthcare workers in data recording. In addition, this research proposes an administrator management and monitoring system to monitor the equipment, and provides a mobile application for patients to check the latest status of medicine bags in real time and record their medication times. The experimental results indicate that the recognition models work very well under different conditions (up to 80.76% with PP-OCR v2 and 94.22% with PGNet) and support both Chinese and English.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85119008472
A Fully Dynamic Context Guided Reasoning and Reconsidering Network for Video Captioning
Visual reasoning and reconsidering capabilities are instinctively executed alternately as people watch a video and attempt to describe its contents in natural language. Inspired by this, a novel network that combines fully dynamic context guided reasoning and reconsidering is proposed in this paper. Specifically, an elaborate reconsidering module, referred to as the reconsiderator, is employed for rethinking and sharpening the preliminary results of stepwise reasoning from coarse to fine, thereby generating a higher-quality description. In turn, the reasoning capability of the network can be further boosted under the guidance of the context information summarized during reconsidering. Extensive experiments on two public benchmarks demonstrate that our approach is highly competitive with state-of-the-art methods.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Reasoning", "Multimodality" ]
[ 20, 39, 47, 8, 74 ]
http://arxiv.org/abs/2010.02053v1
A Fully Hyperbolic Neural Model for Hierarchical Multi-Class Classification
Label inventories for fine-grained entity typing have grown in size and complexity. Nonetheless, they exhibit a hierarchical structure. Hyperbolic spaces offer a mathematically appealing approach for learning hierarchical representations of symbolic data. However, it is not clear how to integrate hyperbolic components into downstream tasks. This is the first work that proposes a fully hyperbolic model for multi-class multi-label classification, which performs all operations in hyperbolic space. We evaluate the proposed model on two challenging datasets and compare to different baselines that operate under Euclidean assumptions. Our hyperbolic model infers the latent hierarchy from the class distribution, captures implicit hyponymic relations in the inventory, and shows performance on par with state-of-the-art methods on fine-grained classification with remarkable reduction of the parameter size. A thorough analysis sheds light on the impact of each component in the final prediction and showcases its ease of integration with Euclidean layers.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
http://arxiv.org/abs/2005.09862v2
A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition
Building a good speech recognition system usually requires large amounts of transcribed data, which is expensive to collect. To tackle this problem, many unsupervised pre-training methods have been proposed. Among these methods, Masked Predictive Coding (MPC) achieved significant improvements on various speech recognition datasets with a BERT-like masked reconstruction loss and a Transformer backbone. However, many aspects of MPC have not been fully investigated. In this paper, we conduct a further study of MPC and focus on three important aspects: the effect of the speaking style of pre-training data, its extension to streaming models, and how to better transfer learned knowledge from the pre-training stage to downstream tasks. Experiments revealed that pre-training data with a matching speaking style is more useful on downstream recognition tasks. A unified training objective with APC and MPC provided an 8.46% relative error reduction on a streaming model trained on HKUST. Also, the combination of target data adaptation and layer-wise discriminative training helped the knowledge transfer of MPC, achieving a 3.99% relative error reduction on AISHELL over a strong baseline.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Speech Recognition", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 70, 74, 47, 10, 4 ]