Dataset columns:
- id: string (length 20 to 52)
- title: string (length 3 to 459)
- abstract: string (length 0 to 12.3k)
- classification_labels: list
- numerical_classification_labels: list
SCOPUS_ID:1542377538
A Maximal Figure-of-Merit Learning Approach to Text Categorization
A novel maximal figure-of-merit (MFoM) learning approach to text categorization is proposed. Unlike conventional techniques, the proposed MFoM method integrates any performance metric of interest (e.g., accuracy, recall, precision, or F1 measure) into the design of any classifier. The classifier parameters are then learned by optimizing an overall objective function built from that metric. To solve this highly nonlinear optimization problem, we use a generalized probabilistic descent algorithm. The MFoM learning framework is evaluated on the Reuters-21578 task with LSI-based feature extraction and a binary tree classifier. Experimental results indicate that the MFoM classifier gives improved F1 and enhanced robustness over the conventional one, and it also outperforms the popular SVM method in micro-averaged F1. Extensions that design discriminative multiple-category MFoM classifiers for application scenarios with new performance metrics can also be envisioned.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
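
A minimal sketch of the core MFoM idea, under assumptions of my own: a linear scorer with sigmoid-smoothed decisions stands in for the paper's classifier, and plain gradient ascent on a differentiable F1 surrogate stands in for the generalized probabilistic descent algorithm. It only shows how a figure of merit such as F1 can be embedded directly in the training objective.

```python
# Sketch: maximize a smoothed (differentiable) F1 directly, instead of a
# generic loss. Linear scorer + sigmoid decisions are illustrative choices.
import numpy as np

def soft_f1(w, b, X, y):
    """Smoothed F1: sigmoid outputs replace hard 0/1 decisions."""
    s = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # soft decisions in (0, 1)
    tp = np.sum(y * s)                       # soft true positives
    fp = np.sum((1 - y) * s)                 # soft false positives
    fn = np.sum(y * (1 - s))                 # soft false negatives
    return 2 * tp / (2 * tp + fp + fn), s

def train(X, y, lr=0.5, steps=500):
    rng = np.random.default_rng(0)
    w, b = 0.01 * rng.normal(size=X.shape[1]), 0.0
    for _ in range(steps):
        _, s = soft_f1(w, b, X, y)
        N = 2 * np.sum(y * s)                               # numerator: 2*TP
        D = N + np.sum((1 - y) * s) + np.sum(y * (1 - s))   # denominator
        # dF1/ds_i = (2*y_i*D - N) / D^2, chained through the sigmoid.
        g = ((2 * y * D - N) / D**2) * s * (1 - s)
        w += lr * (X.T @ g)                  # gradient ascent on soft F1
        b += lr * np.sum(g)
    return w, b

# Toy, imbalanced data: the regime where optimizing F1 instead of accuracy pays.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 1.0).astype(float)
w, b = train(X, y)
print("soft F1 after training: %.3f" % soft_f1(w, b, X, y)[0])
```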
https://aclanthology.org//W02-1116/
A Maximum Entropy Approach to HowNet-Based Chinese Word Sense Disambiguation
[ "Knowledge Representation", "Semantic Text Processing", "Word Sense Disambiguation" ]
[ 18, 72, 65 ]
SCOPUS_ID:85121244516
A Maximum Entropy Classifier for Cross-Lingual Pronoun Prediction
We present a maximum entropy classifier for cross-lingual pronoun prediction. The features are based on local source- and target-side contexts and on antecedent information obtained from a co-reference resolution system. With only a small set of feature types, our best-performing system achieves an accuracy of 72.31%. With the shared task's official macro-averaged F1-score of 57.07%, we are among the top systems, ranking third out of 14. Feature ablation results show the important role of target-side information in general, and of the resolved target-side antecedent in particular, for predicting the correct classes.
[ "Multilinguality", "Text Classification", "Cross-Lingual Transfer", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 0, 36, 19, 24, 3 ]
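
A maximum entropy classifier over sparse indicator features is equivalent to multinomial logistic regression, so a sketch can lean on scikit-learn. The feature names and pronoun classes below are invented placeholders, not the paper's actual feature set.

```python
# Sketch: MaxEnt pronoun prediction from sparse context/antecedent features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each instance: indicator features -> the target-language pronoun class.
train_feats = [
    {"src=it":  1, "tgt_prev=dass": 1, "ante_gender=neut": 1},
    {"src=she": 1, "tgt_prev=weil": 1, "ante_gender=fem":  1},
    {"src=it":  1, "tgt_prev=und":  1, "ante_gender=fem":  1},
]
train_labels = ["es", "sie", "sie"]

# Multinomial logistic regression is exactly a maximum entropy model.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_feats, train_labels)

test = {"src=it": 1, "tgt_prev=weil": 1, "ante_gender=fem": 1}
print(model.predict([test]), model.predict_proba([test]).round(2))
```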
SCOPUS_ID:80053474792
A Mayan ontology of poultry: Selfhood, affect, animals, and ethnography
This article has three key themes: ontology (what kinds of beings there are in the world), affect (cognitive and corporeal attunements to such entities), and selfhood (relatively reflexive centers of attunement). To explore these themes, I focus on women's care for chickens among speakers of Q'eqchi' Maya living in the cloud forests of highland Guatemala. Broadly speaking, I argue that these three themes are empirically, methodologically, and theoretically inseparable. In addition, the chicken is a particularly rich site for such ethnographic research because it is simultaneously self, alter, and object for its owners. To undertake this analysis, I adopt a semiotic stance towards such themes, partly grounded in the writings of the American pragmatists Charles Sanders Peirce, William James, and George Herbert Mead, and partly grounded in recent and classic scholarship by linguists, psychologists, and anthropologists.
[ "Knowledge Representation", "Semantic Text Processing" ]
[ 18, 72 ]
http://arxiv.org/abs/1803.06064v2
A Meaning-based Statistical English Math Word Problem Solver
In this paper, we introduce MeSys, a meaning-based approach for solving English math word problems (MWPs) via understanding and reasoning. It first analyzes the text, transforms both the body and question parts into their corresponding logic forms, and then performs inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provide the flexibility to annotate an extracted math quantity with its associated context information (i.e., the physical meaning of the quantity). Statistical models are proposed to select the operator and operands. A noisy dataset is designed to assess whether a solver solves MWPs mainly via understanding or via mechanical pattern matching. Experimental results show that our approach outperforms existing systems on both benchmark datasets and the noisy dataset, which demonstrates that the proposed approach better captures the meaning of each quantity in the text.
[ "Reasoning", "Numerical Reasoning" ]
[ 8, 5 ]
SCOPUS_ID:0016127947
A Means for Achieving a High Degree of Compaction on Scan-Digitized Printed Text
A method of video compaction based on transmitting only the first instance of each class of digitized patterns is shown to yield a compaction ratio of 16:1 on a short passage of text from the IEEE Spectrum. Refinements to extend the bandwidth reduction to 40:1 by relatively simple means are proposed but not demonstrated.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
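
A toy sketch of the compaction principle described above: transmit each distinct glyph bitmap once, then replace every repeat with a small class index. A real system of this kind matches visually similar rather than byte-identical patterns; exact matching keeps the sketch short.

```python
# Sketch: first-instance pattern-class compaction for scan-digitized text.
def compact(glyphs):
    """glyphs: list of hashable bitmaps (e.g., bytes). Returns (library, refs)."""
    library, index, refs = [], {}, []
    for g in glyphs:
        if g not in index:           # first instance of this pattern class
            index[g] = len(library)  # assign a new class id
            library.append(g)        # ship the full bitmap exactly once
        refs.append(index[g])        # every occurrence becomes a small id
    return library, refs

def expand(library, refs):
    return [library[i] for i in refs]

# A 'scanned line' with heavy glyph repetition, as printed text always has.
page = [b"e-bitmap", b"t-bitmap", b"e-bitmap", b"e-bitmap", b"t-bitmap"]
lib, refs = compact(page)
assert expand(lib, refs) == page
ratio = sum(map(len, page)) / (sum(map(len, lib)) + len(refs))
print(f"{len(lib)} classes, refs={refs}, crude ratio ~{ratio:.1f}:1")
```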
https://aclanthology.org//W06-1111/
A Measure of Aggregate Syntactic Distance
[ "Semantic Text Processing", "Semantic Similarity", "Syntactic Text Processing" ]
[ 72, 53, 15 ]
https://aclanthology.org//W00-0107/
A Measure of Semantic Complexity for Natural Language Systems
[ "Semantic Text Processing", "Text Complexity" ]
[ 72, 42 ]
http://arxiv.org/abs/2212.10502v1
A Measure-Theoretic Characterization of Tight Language Models
Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can ``leak'' onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
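
A worked example of leakage, constructed here for illustration rather than taken from the paper: give the model a per-step EOS probability that shrinks fast enough, and a computable fraction of the probability mass escapes to infinite strings.

```latex
% Suppose that at step $t = 1, 2, \dots$ the model emits EOS with probability
% $p_t = 1/(t+1)^2$, independently of the prefix. Then
\[
  P(\text{finite string})
    = 1 - \prod_{t=1}^{\infty}\Bigl(1 - \tfrac{1}{(t+1)^2}\Bigr)
    = 1 - \lim_{N\to\infty}\prod_{n=2}^{N}\frac{(n-1)(n+1)}{n^{2}}
    = 1 - \lim_{N\to\infty}\frac{N+1}{2N}
    = \tfrac{1}{2},
\]
% so half of the mass ``leaks'' onto infinite sequences: the model is not
% tight. If instead $p_t \ge \epsilon > 0$ for all $t$, the product tends to
% $0$ and the model is tight.
```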
SCOPUS_ID:85147936201
A Mechanical CAD Drawings Retrieval Method Based on Text and Image Information
In the field of industrial production, large numbers of engineering drawings are produced every day. To reduce processing costs and improve production efficiency, reusing existing engineering drawings is necessary. To reuse engineering drawings effectively, we propose a drawings retrieval method based on text and image information. First, a text parser extracts the text information about the manufacturing process features in a drawing and identifies drawings with the same features in the database. Then, a histogram similarity algorithm roughly filters the drawings. Finally, the similarity of the image geometry information of the mechanical parts is calculated with a multi-hash algorithm. In experimental tests, the average recall rate of the proposed drawing retrieval model is about 87% and the average error retrieval rate is less than 20%. The experimental results show that our method meets actual market demand and has good practical applicability.
[ "Visual Data in NLP", "Information Retrieval", "Multimodality" ]
[ 20, 24, 74 ]
SCOPUS_ID:85082981557
A Mechanics-Based Similarity Measure for Text Classification in Machine Learning Paradigm
Document classification and clustering are emerging as new challenges in the Big Data era, where terabytes of data are generated every second by billions of mobile phones, desktops, servers, and mobile devices such as cameras and watches. The effectiveness of classification and clustering algorithms depends on the similarity measure used between two text documents in the corpus. We apply the Maxwell-Boltzmann distribution to find the similarity between two documents within a document corpus. In this paper, the document corpus is treated as a large system, individual documents as containers, attributes as subcontainers, and each term as a particle. The proposed similarity measure is named the Maxwell-Boltzmann Similarity Measure (MBSM). MBSM is derived from the overall distribution of feature values and the total number of nonzero features among the documents. We demonstrate that MBSM satisfies all properties of a document similarity measure. MBSM is incorporated in single-label K-nearest-neighbors classification (SLKNN), multi-label K-nearest-neighbors classification (MLKNN), and K-means clustering. We benchmark MBSM against other similarity measures such as Euclidean, Cosine, Jaccard, Pairwise, ITSim, and SMTP. The comparative performance shows that MBSM outperformed all existing similarity measures, increasing the classification accuracy of SLKNN and MLKNN and the clustering accuracy and entropy of the K-means algorithm while making them more robust. The highest accuracy obtained from tenfold cross-validation is 0.9531 for SLKNN and 0.9373 for MLKNN. For K-means clustering, MBSM achieved a maximum accuracy of 0.6592 and a minimum entropy of 0.2426 (on a unit scale) among all similarity measures.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Text Clustering" ]
[ 3, 24, 36, 29 ]
SCOPUS_ID:85073796508
A Mechanism for Automatically Summarizing Software Functionality from Source Code
When developers search online for software components to reuse, they usually first need to understand the containing projects/libraries and subsequently identify the required functionality. Several approaches identify and summarize the offerings of projects from their source code; however, they often require that the developer have knowledge of the underlying topic modeling techniques, they do not provide a mechanism for tuning the number of topics, and they offer no control over the top terms for each topic. In this work, we use a vectorizer to extract information from variable/method names and comments, and apply Latent Dirichlet Allocation to cluster the source code files of a project into different semantic topics. The number of topics is optimized based on their purity with respect to project packages, while topic categories are constructed to provide further intuition, and Stack Exchange tags are used to express the topics in more abstract terms.
[ "Topic Modeling", "Multimodality", "Programming Languages in NLP", "Information Extraction & Text Mining" ]
[ 9, 74, 55, 3 ]
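
A hedged sketch of the described pipeline using scikit-learn: tokens drawn from identifiers and comments form one pseudo-document per source file, and LDA clusters the files into semantic topics. The file contents are invented stand-ins, and the paper's purity-based tuning of the topic count and Stack Exchange tagging are omitted.

```python
# Sketch: vectorize identifier/comment tokens per file, cluster with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # one pseudo-document per source file: split names + comments
    "parse token lexer grammar syntax tree node",
    "socket connect send receive packet timeout retry",
    "lexer token stream grammar rule parse",
    "http request response socket header body",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)      # file-to-topic distribution

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print(doc_topics.round(2))             # soft cluster assignment per file
```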
SCOPUS_ID:85129268320
A Media-based Innovation Indicator: Examining declining Technological Innovation Systems
The recently introduced technological innovation system (TIS) life cycle allows analyzing the decline of mature technologies. This study complements the associated empirical indicators by proposing a novel text-based innovation output indicator based on the media's role in forming collective expectations. We process more than 15,000 English news articles to capture the media's reporting on technological improvements of the internal combustion engine (ICE). Our results depict an increasing number of innovation articles with positive sentiment until 2015. More recently, we observe a decrease in innovation articles. Along with the decreasing ICE sales and performance data, this weakening support of the media for the TIS innovation output suggests a possible misalignment with collective expectations that could lead to a vicious decline cycle. Our study offers a real-time indicator for monitoring innovation strategies and develops a methodological framework to derive technology-specific innovation indicators on the firm-level using unsupervised topic modelling and sentiment analysis.
[ "Topic Modeling", "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 9, 3, 78 ]
SCOPUS_ID:85118917327
A Medical AI Diagnosis Platform Based on Vision Transformer for Coronavirus
With the spread of the novel coronavirus around the world, COVID-19 has raised a number of serious issues regarding its diagnosis and treatment. Currently, most COVID-19 patients are diagnosed using a lung CT scan, which is extremely inefficient, especially when faced with a large number of patients. This manual method not only increases the workload for doctors, but can also delay patient treatment. Thus, many scholars and companies have developed auxiliary diagnosis platforms to speed up the diagnosis process. However, owing to limitations in the development of imaging technology, the accuracy of current platform-assisted diagnosis is still relatively low. To solve these problems, this paper designs a medical AI dialogue diagnosis platform based on the Vision Transformer, using knowledge distillation to transfer medical information from a traditional image recognition model and significantly improve the predictive performance for COVID-19. This paper also proposes the addition of a simple dialogue system to improve the efficiency of human-machine interaction. We conclude that the medical dialogue system for COVID-19 detection realizes its anticipated function and has practical significance.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Green & Sustainable NLP", "Multimodality", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Responsible & Trustworthy NLP" ]
[ 20, 52, 72, 68, 74, 11, 38, 4 ]
http://arxiv.org/abs/2207.03885v2
A Medical Information Extraction Workbench to Process German Clinical Text
Background: In the information extraction and natural language processing domain, accessible datasets are crucial to reproduce and compare results. Publicly available implementations and tools can serve as benchmarks and facilitate the development of more complex applications. However, in the context of clinical text processing, accessible datasets are scarce -- and so are existing tools. One of the main reasons is the sensitivity of the data. This problem is even more evident for non-English languages. Approach: To address this situation, we introduce a workbench: a collection of German clinical text processing models. The models are trained on a de-identified corpus of German nephrology reports. Result: The presented models provide promising results on in-domain data. Moreover, we show that our models can also be successfully applied to other German biomedical text. Our workbench is made publicly available so it can be used out of the box, as a benchmark, or transferred to related problems.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85091160373
A Meeting of Concepts and Praxis: Multilingualism, Language Policy and the Dominant Language Constellation
This chapter combines discussion of multilingualism and language policy, with the productive power of the Dominant Language Constellation concept. Multilingualism research has flourished and deepened in recent years, producing many analytically revealing and empirically robust accounts of the super-diverse communicative environment of our world. This body of multilingualism studies makes available to educators, public officials and ordinary citizens, as well as scholars, the lived realities of multiple languages and their presence in the everyday lives of citizens. In public policy settings however, multilingualism studies have had far less traction. This chapter pursues a line of questioning about language policy and planning and the DLC in an exploration of how multilingualism can be linked to public policy formation of states (a prime example is Vietnam), and the personal language planning of individuals and institutions. Key to the discussion is a conception of knowledge linked to the vita activa and praxis of enlightened individuals and scholars seeking linguistic justice, but also the vita contemplativa of conceptual clarification. The DLC is a promising conceptual innovation because it fosters productive dialogue between academic accounts of language diversity and the complex realm of policy and decision making. The chapter concludes by discussing how these domains can be aligned through a shared body of concepts to become mutually comprehensible, and the likely outcomes if academics furnish accounts of demo-linguistics that are persuasive and politically tractable. The chapter also offers some new additions to the stock of ideas within the DLC concept, such as the idea of a coherent script cluster within a language grouping, and thereby expands the concept itself.
[ "Multilinguality" ]
[ 0 ]
http://arxiv.org/abs/2012.15156v1
A Memory Efficient Baseline for Open Domain Question Answering
Recently, retrieval systems based on dense representations have led to important improvements in open-domain question answering and related tasks. While very effective, this approach is also memory intensive, as the dense vectors for the whole knowledge source need to be kept in memory. In this paper, we study how the memory footprint of dense retriever-reader systems can be reduced. We consider three strategies to reduce the index size: dimension reduction, vector quantization and passage filtering. We evaluate our approach on two question answering benchmarks: TriviaQA and NaturalQuestions, showing that it is possible to get competitive systems using less than 6 GB of memory.
[ "Green & Sustainable NLP", "Question Answering", "Natural Language Interfaces", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 68, 27, 11, 24, 4 ]
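
A numpy-only sketch of two of the three strategies, with illustrative sizes: PCA-style dimension reduction followed by int8 scalar quantization. Product quantization and passage filtering are not shown. Going from 768 float32 values (3,072 bytes) to 128 int8 values (128 bytes) per vector is a 24x reduction by itself.

```python
# Sketch: shrink a dense passage index by projection + scalar quantization.
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.normal(size=(5000, 768)).astype(np.float32)  # dense passage vectors

# 1) Dimension reduction: keep the top 128 principal directions.
mean = vecs.mean(axis=0)
_, _, Vt = np.linalg.svd(vecs - mean, full_matrices=False)
P = Vt[:128].T                                          # projects 768 -> 128
reduced = (vecs - mean) @ P

# 2) Scalar quantization: one int8 per dimension instead of one float32.
scale = np.abs(reduced).max() / 127.0
q = np.round(reduced / scale).astype(np.int8)
print(f"{vecs.nbytes / 2**20:.1f} MB -> {q.nbytes / 2**20:.2f} MB")

# Query time: project the query into the same space, score by dot product.
query = rng.normal(size=768).astype(np.float32)
rq = (query - mean) @ P
scores = (q.astype(np.float32) * scale) @ rq
print("best passage:", int(scores.argmax()))
```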
SCOPUS_ID:85099218409
A Memory Network Information Retrieval Model for Identification of News Misinformation
The speed and volume at which misinformation spreads on social media have motivated efforts to automate fact-checking, which begins with stance detection. For fake news stance detection, for example, many classification-based models have been proposed, often with high complexity and hand-crafted features. Although these models can achieve high accuracy scores on a targeted small corpus of fake news, few are evaluated on a larger corpus of fake and conspiracy sites due to efficiency limitations and a lack of compatibility with the actual fact-checking process. In this article, we propose a practical two-stage stance detection model that is tailored to the real-life problem. Specifically, we integrate an information retrieval system with an end-to-end memory network model to sort articles based on their relevance to a claim and then identify the fine-grained stance of each relevant article towards its given claim. We evaluate our model on the Fake News Challenge dataset (FNC-1). The results show that the performance of our model is comparable to that of state-of-the-art models, with an average weighted accuracy of 82.1, while it closely follows the real-life process of fact-checking. We also validate our model with a large dataset from a real-life fact-checking website (i.e., Snopes.com), and the findings demonstrate the capability of the model to distinguish false from true news headlines.
[ "Opinion Mining", "Ethical NLP", "Sentiment Analysis", "Reasoning", "Fact & Claim Verification", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 49, 17, 78, 8, 46, 24, 4 ]
SCOPUS_ID:85139531042
A Memory-Based Account of the Spatial Prisoner’s Dilemma
After the seminal work of Nowak and May (1992), the Spatial Prisoner’s Dilemma has become a common metaphor for studying the dynamics of cooperation in a spatially structured population. In contrast to the widely employed evolutionary model, which studies the dynamics of cooperation in a population of primitive players that lack memory, this paper examines the problem of cooperation in a population of memory-based players. Using computational simulations, it is shown that partial cooperation is maintained in a spatially structured population of players whose decision-making is effectuated by the adaptive nature of memory embodied in the ACT-R cognitive architecture (Anderson & Lebiere, 1998).
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:85016718852
A Memory-Based Learning Approach for Named Entity Recognition in Hindi
Named entity (NE) recognition (NER) is a process to identify and classify atomic elements such as person name, organization name, place/location name, quantities, temporal expressions, and monetary expressions in running text. In this paper, the Hindi NER task has been mapped into a multiclass learning problem, where the classes are NE tags. This paper presents a solution to this Hindi NER problem using a memory-based learning method. A set of simple and composite features, which includes binary, nominal, and string features, has been defined and incorporated into the proposed model. A relatively small Hindi Gazetteer list has also been employed to enhance the system performance. A comparative study on the experimental results obtained by the memory-based NER system proposed in this paper and a hidden Markov model (HMM)-based NER system shows that the performance of the proposed memory-based NER system is comparable to the HMM-based NER system.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85148467229
A Memory-Driven Neural Attention Model for Aspect-Based Sentiment Classification
Sentiment analysis techniques are becoming more and more important as the number of reviews on the World Wide Web keeps increasing. Aspect-based sentiment analysis (ABSA) entails the automatic analysis of sentiments at the highly fine-grained aspect level. One of the challenges of ABSA is to identify the correct sentiment expressed towards every aspect in a sentence. In this paper, a neural attention model is discussed and three extensions are proposed to this model. First, the strengths and weaknesses of the highly successful CABASC model are discussed, and three shortcomings are identified: the aspect-representation is poor, the current attention mechanism can be extended for dealing with polysemy in natural language, and the design of the aspect-specific sentence representation is upheld by a weak construction. We propose the Extended CABASC (E-CABASC) model, which aims to solve all three of these problems. The model incorporates a context-aware aspect representation, a multi-dimensional attention mechanism, and an aspect-specific sentence representation. The main contribution of this work is that it is shown that attention models can be improved upon using some relatively simple extensions, such as fusion gates and multi-dimensional attention, which can be implemented in many state-of-the-art models. Additionally, an analysis of the parameters and attention weights is provided.
[ "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 12, 23, 78, 36, 3 ]
SCOPUS_ID:85129784836
A Mental Health Chatbot with Cognitive Skills for Personalised Behavioural Activation and Remote Health Monitoring
Mental health issues are at the forefront of healthcare challenges facing contemporary human society. These issues are most prevalent among working-age people, impacting negatively on the individual, his/her family, workplace, community, and the economy. Conventional mental healthcare services, although highly effective, cannot be scaled up to address the increasing demand from affected individuals, as evidenced in the first two years of the COVID-19 pandemic. Conversational agents, or chatbots, are a recent technological innovation that has been successfully adapted for mental healthcare as a scalable platform of cross-platform smartphone applications that provides first-level support for such individuals. Despite this disposition, mental health chatbots in the extant literature and practice are limited in terms of the therapy provided and the level of personalisation. For instance, most chatbots extend Cognitive Behavioural Therapy (CBT) into predefined conversational pathways that are generic and ineffective in recurrent use. In this paper, we postulate that Behavioural Activation (BA) therapy and Artificial Intelligence (AI) are more effectively materialised in a chatbot setting to provide recurrent emotional support, personalised assistance, and remote mental health monitoring. We present the design and development of our BA-based AI chatbot, followed by its participatory evaluation in a pilot study setting that confirmed its effectiveness in providing support for individuals with mental health issues.
[ "Responsible & Trustworthy NLP", "Natural Language Interfaces", "Ethical NLP", "Dialogue Systems & Conversational Agents" ]
[ 4, 11, 17, 38 ]
SCOPUS_ID:85141734968
A Message Passing Approach to Biomedical Relation Classification for Drug–Drug Interactions
Featured Application: With this contribution, we aim to aid the drug development process as well as the identification of possible adverse drug events due to simultaneous drug use. The task of extracting drug entities and possible interactions between drug pairings is known as Drug–Drug Interaction (DDI) extraction. Computer-assisted DDI extraction with machine learning techniques can help streamline this expensive and time-consuming process during the drug development cycle. Over the years, a variety of both traditional and neural-network-based techniques for the extraction of DDIs have been proposed. Despite the introduction of several successful strategies, obtaining high classification accuracy is still an area where further progress can be made. In this work, we present a novel Knowledge Graph (KG) based approach that utilizes a unique graph structure in combination with a Transformer-based language model and Graph Neural Networks to classify DDIs from biomedical literature. The KG is constructed to model the knowledge of the DDI Extraction 2013 benchmark dataset, without the inclusion of additional external information sources. Each drug pair is classified based on the context of the sentence it was found in, utilizing transferred knowledge in the form of semantic representations from domain-adapted BioBERT weights that serve as the initial KG states. The proposed approach was evaluated on the DDI classification task of the same dataset and achieved an F1-score of 79.14% on the four positive classes, outperforming the current state-of-the-art approach.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Structured Data in NLP", "Knowledge Representation", "Multimodality", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 50, 18, 74, 36, 3 ]
SCOPUS_ID:85102041095
A Meta Analysis of Attention Models on Legal Judgment Prediction System
Artificial Intelligence in legal research is transforming the legal field in manifold ways. The pendency of court cases is a long-standing problem in the judiciary, due to reasons such as a lack of judges, a lack of technology in legal services, and legal loopholes. The judicial system has to become more competent and more reliable in providing justice on time. One of the major causes of pending cases is the lack of legal intelligence to assist litigants. The study in this paper reviews the challenges that lengthy case facts pose for judgment prediction systems based on deep learning models. A legal judgment prediction system can help lawyers, judges, and civilians predict win/loss rates, punishment terms, and applicable law articles for new cases. The paper also reviews the current encoder-decoder architecture with the attention mechanism of the Transformer model that can be used for legal judgment prediction. Natural Language Processing using deep learning is a growing field, and there is a need for research that evaluates the current state of the art at the intersection of good text processing and feature representation with deep learning models. This paper aims to provide a systematic review of existing methods used in legal judgment prediction systems, covering the Hierarchical Attention Network model in detail. These methods can also be used in other applications such as legal document classification, sentiment analysis, news classification, text translation, and medical reports.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
https://aclanthology.org//W01-0805/
A Meta-Algorithm for the Generation of Referring Expressions
[ "Text Generation" ]
[ 47 ]
SCOPUS_ID:85104738723
A Meta-Analytic Study of Instructed Second Language Pragmatics: A Case of the Speech Act of Request
Research on the effectiveness of request instruction in L2 pragmatics has been extensive, yet inconclusive. The present meta-analysis aims not only to provide a quantitative and reliable measure of the effects of instruction for the speech act of request in Iran, but also to illustrate a description of the relationship between some variables that moderate its effectiveness (age, gender, proficiency level, treatment type, research design, and data collection procedure). To do so, a total of 37 studies were retrieved and by establishing a set of different inclusion/exclusion criteria, 17 primary studies were coded and analyzed. Results revealed that (1) there is an overall large effect size on the effectiveness of the instruction of request (g = 1.48) in an Iranian context; (2) some variables were found to be a moderator for this effectiveness like gender and treatment type; (3) considering gender, the male group produced a larger effect size (g = 3.09) than the female one (g = 1.10); (4) and regarding treatment types, the explicit group yielded a larger effect size (g = 1.53) than the implicit one (g = 1.20). A thorough interpretation of the results, as well as a discussion of practical, theoretical, and methodological implications of this study, is provided to tackle a number of conundrums surrounding the instruction of request and shed light on how to reorient future research.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality" ]
[ 71, 72, 70, 74 ]
SCOPUS_ID:85128711873
A Meta-Overview and Bibliometric Analysis of Resilience in Spatial Planning – the Relevance of Place-Based Approaches
This study offers a literature review and bibliometric analysis aiming to enhance our understanding of the actual contribution of resilience approaches to spatial and territorial development and planning studies. Using citation link-based clustering and statistical text-mining techniques (in terms of prevalence of topics, over time, extraction of relevant terms, keywords frequencies), our study maps scientific domains that include the spatial dimension of resilience thinking. It offers a systematic assessment of modern approaches by connecting profoundly theoretical views to more instrumental and policy-oriented approaches. Firstly, the theoretical background of spatial resilience used in numerous studies in various fields is analysed from the viewpoint of the type of embedded resilience (engineering, ecological, social-ecological, economic, social etc.). Secondly, we review and discuss the significance of three main and consistent research directions in terms of different scales and political/institutional contexts that matter from the viewpoint of spatial and territorial planning. Our findings show that spatial resilience debates are far from being settled, as according to many scientists, resilience measurements are often based on technical-reductionist frameworks that cannot comprehensively reflect the complex systems and issues they address. Our conclusions highlight the necessity of a harmonized framework and integrated perspective on resilience in sustainable territorial planning and development, in both theoretical and empirical contexts.
[ "Information Extraction & Text Mining" ]
[ 3 ]
http://arxiv.org/abs/2102.13622v2
A Meta-embedding-based Ensemble Approach for ICD Coding Prediction
International Classification of Diseases (ICD) codes are the de facto codes used globally for clinical coding. These codes enable healthcare providers to claim reimbursement and facilitate efficient storage and retrieval of diagnostic information. The problem of automatically assigning ICD codes has been approached in the literature as multi-label classification, using neural models on unstructured data. Our proposed approach enhances the performance of neural models by effectively training word vectors using routine medical data as well as external knowledge from scientific articles. Furthermore, we exploit the geometric properties of the two sets of word vectors and combine them into a common dimensional space, using meta-embedding techniques. We demonstrate the efficacy of this approach in a multimodal setting, using unstructured and structured information. We empirically show that our approach improves the current state-of-the-art deep learning architectures and benefits ensemble models.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
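
A minimal sketch of one common meta-embedding recipe, concatenation of L2-normalized source embeddings into a shared space; the paper combines its two vector sets more carefully via their geometric properties, and the vectors below are toy values.

```python
# Sketch: concatenation (CONC) meta-embedding of two word-vector sources.
import numpy as np

emb_clinical = {"fracture": np.array([0.9, 0.1, 0.0]),
                "diabetes": np.array([0.1, 0.8, 0.3])}
emb_scientific = {"fracture": np.array([0.7, 0.2]),
                  "diabetes": np.array([0.1, 0.9])}

def l2(v):
    return v / (np.linalg.norm(v) + 1e-12)

def meta_embedding(word):
    # Normalizing each source first stops one embedding space from
    # dominating the other purely through its scale.
    return np.concatenate([l2(emb_clinical[word]), l2(emb_scientific[word])])

print(meta_embedding("fracture").round(3))   # 5-d vector in the common space
```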
SCOPUS_ID:85140775789
A Meta-heuristic Algorithm for the Minimal High-Quality Feature Extraction of Online Reviews
Feature extraction and selection are critical in sentiment analysis (SA): only the appropriate features should be extracted and selected, with redundant ones removed. Successful implementation of this process leads to better classification accuracy. Inevitably, selecting a minimal set of high-quality features can be challenging, given the inherent complication of dealing with over-fitting. Most current studies use a heuristic method for the classification process, which selects and examines only a single feature subset while ignoring other subsets that might give better results. This study explored the effect of using a meta-heuristic method together with an ensemble classification method for the sentiment classification of online reviews. The extraction and selection of relevant features use feature ranking, hyper-parameter optimization, crossover, and mutation, while the classification process utilizes an ensemble classifier. The proposed method was tested on the polarity movie review dataset v2.0 and a product review dataset (books, electronics, kitchen, and music). The test results indicate that the proposed method significantly improved the classification results by 94%, far exceeding the existing method. The proposed feature extraction and selection method can therefore help improve the performance of SA on online reviews while reducing the number of extracted features.
[ "Information Retrieval", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 78, 36, 3 ]
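
A compact sketch of meta-heuristic feature selection in the spirit of the abstract: a genetic algorithm over boolean feature masks with one-point crossover and mutation, where fitness is the cross-validated accuracy of a simple classifier minus a small per-feature penalty. The dataset, operators, and rates are illustrative assumptions, not the paper's configuration.

```python
# Sketch: GA over feature masks; fitness = CV accuracy - size penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()           # prefer smaller feature sets

pop = rng.random((20, X.shape[1])) < 0.5      # initial random masks
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]   # selection: keep the top half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(X.shape[1]) < 0.02))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected:", np.flatnonzero(best), " fitness: %.3f" % fitness(best))
```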
SCOPUS_ID:85149435271
A Meta-learning Knowledge Reasoning Framework Combining Semantic Path and Language Model
To address the problems that traditional knowledge reasoning methods cannot combine computational power with interpretability and that they struggle to learn quickly in few-shot scenarios, this paper proposes a Model-Agnostic Meta-Learning (MAML) reasoning framework that combines semantic paths and Bidirectional Encoder Representations from Transformers (BERT), consisting of two stages: base-training and meta-training. In the base-training stage, graph reasoning instances are represented by semantic paths and the BERT model, which is fine-tuned to compute link probabilities and to store reasoning experience offline. In the meta-training stage, the framework obtains gradient meta-information from the base-training processes of multiple relations, optimizing the initial weights and enabling rapid learning of knowledge in few-shot settings. Experiments show that the base-training reasoning framework achieves better performance in link prediction and fact prediction, and that the meta-learning framework achieves fast convergence on some few-shot reasoning problems.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 8, 4 ]
SCOPUS_ID:85084889019
A Metabolic Imaging Study of Lexical and Phonological Naming Errors in Alzheimer Disease
Patients with Alzheimer disease (AD) produce a variety of errors on confrontation naming that indicate multiple loci of impairment along the naming process in this disease. We correlated brain hypometabolism, measured with 18fluoro-deoxy-glucose positron emission tomography, with semantic and formal errors, as well as nonwords deriving from phonological errors produced in a picture-naming test by 63 patients with AD. Findings suggest that neurodegeneration leads to: (1) phonemic errors, by interfering with phonological short-term memory, or with control over retrieval of phonological or prearticulatory representations, within the left supramarginal gyrus; (2) semantic errors, by disrupting general semantic or visual-semantic representations at the level of the left posterior middle and inferior occipitotemporal cortex, respectively; (3) formal errors, by damaging the lexical–phonological output interface in the left mid–anterior segment of middle and superior temporal gyri. This topography of semantic–lexical–phonological steps of naming is in substantial agreement with dual-stream neurocognitive models of word generation.
[ "Phonology", "Syntactic Text Processing" ]
[ 6, 15 ]
SCOPUS_ID:85036469772
A Metaphor Detection Approach Using Cosine Similarity
Metaphor is a prominent figure of speech. Given their prevalence in text and speech, detecting and analyzing metaphors is required for complete natural language understanding. This paper describes a novel method for identifying metaphors with word vectors. Our method relies on the semantic distance between a word and the corresponding object or action it is applied to. It does not target any particular kind of metaphor but tries to identify metaphors in general. Experimental results on the VU Amsterdam Metaphor Corpus show that our method gives state-of-the-art results compared to previously reported work.
[ "Semantic Text Processing", "Speech & Audio in NLP", "Representation Learning", "Sentiment Analysis", "Stylistic Analysis", "Multimodality" ]
[ 72, 70, 12, 78, 67, 74 ]
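
A toy sketch of the idea above: score a word against the object or action it applies to via the cosine similarity of their word vectors, and flag low-similarity pairs as metaphor candidates. The vectors and the threshold are illustrative, not trained values.

```python
# Sketch: selectional-preference metaphor cue via cosine similarity.
import numpy as np

vec = {  # stand-ins for pretrained word vectors (e.g., word2vec/GloVe)
    "devour": np.array([0.9, 0.1, 0.2]),
    "food":   np.array([0.8, 0.2, 0.1]),
    "book":   np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_metaphor_candidate(head, arg, threshold=0.5):
    # Low semantic similarity between a word and what it is applied to.
    return cosine(vec[head], vec[arg]) < threshold

print("devour food:", is_metaphor_candidate("devour", "food"))  # literal
print("devour book:", is_metaphor_candidate("devour", "book"))  # metaphorical
```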
SCOPUS_ID:85130024773
A Method Based on Roberta_Seq2Seq for Chinese Text Multi Label Sentiment Analysis
People's comments on goods and services are increasingly multi-faceted, so multi-label text sentiment analysis has become a hot research topic. Compared with single-label sentiment analysis, analyzing the sentiment polarity of each label in a text is more complex. A multi-label sentiment analysis method for Chinese text based on the combination of RoBERTa and Seq2Seq is proposed. RoBERTa is used for text vectorization to enrich the semantic content of the vectors, and Seq2Seq is used to learn relationships among labels to obtain better results. Experiments show that the accuracy and F1 value of our method improve over other methods such as TextCNN, BiGRU-DAtt, and ATT-TexRNN.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:85141433196
A Method Combining Text Classification and Keyword Recognition to Improve Long Text Information Mining
Using a pre-trained language model for long-text classification is limited by the model's maximum input length, so not all of the information in a long text can be used effectively. Using TextCNN for long-text classification, on the other hand, is limited by word embeddings built for a specific task, which cannot fully capture the semantics of the current task. To better capture the semantic information of the current task, we use TextCNN with specialized word embeddings for long-text classification: we use keyword data from the current task to finetune the pre-training task and obtain better word-embedding representations. On the one hand, using these word-embedding representations as inputs to the classification task better supports classification; on the other hand, it further finetunes the pre-training task to obtain better word embeddings, and the keyword data can be expanded more effectively. Compared with using TextCNN directly for long-text classification, combining long-text classification with keywords is more efficient and achieves an effect close to BERT. Experiments are carried out on three datasets: ChnSentiCorp, NLPCC14-SC, and business opportunity recommendation (BOR). Model accuracy increases by 1.8%, 1.03%, and 4.03%, with relative increases in F1-score of 1.8%, 1.03%, and 5.04%, respectively. The experiments show that, without changing the efficiency of the TextCNN text classification model, we preserve the integrity of the input text; the pre-training task finetuned by the keyword discovery task gives the word embeddings a better understanding of the current semantic expression; and co-training the text classification model with the keyword discovery task effectively improves classification on different datasets.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Green & Sustainable NLP", "Representation Learning", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 24, 3, 68, 12, 36, 4 ]
SCOPUS_ID:85089193856
A Method for Answer Selection Using DistilBERT and Important Words
Question answering is a hot topic in artificial intelligence with many real-world applications. The field aims to generate an answer to a user's question by analyzing a massive volume of text documents. Answer selection is a significant part of a question answering system; it attempts to extract the answers most relevant to the user's question from a pool of candidate answers. Recently, researchers have attempted to solve the answer selection task with deep neural networks, first employing recurrent neural networks and then gradually migrating to convolutional neural networks. The use of pre-trained language models, also implemented with deep neural networks, has been considered as well. In this research, DistilBERT is employed as the language model. The outputs of the question analysis part and the expected answer extraction component are combined with the [CLS] token output as the final feature vector, which improves the method's performance. Several experiments are performed to evaluate the effectiveness of the proposed method, and results are reported on the MAP and MRR metrics. The results show that the MAP value of the proposed method improves by 0.6% and the MRR metric by 0.2%. Our results show that using a heavy language model does not guarantee a more reliable method for the answer selection problem, and that particular words, such as the question word and expected answer words, can improve the performance of the method.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Question Answering" ]
[ 52, 11, 72, 27 ]
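
A hedged sketch of DistilBERT-based answer scoring with the Hugging Face transformers library: each (question, candidate) pair is encoded jointly and a classification head over the [CLS] position scores relevance. The head below is freshly initialized, so its scores become meaningful only after fine-tuning on labeled QA pairs, and the paper's question-analysis and expected-answer features are not reproduced.

```python
# Sketch: score (question, candidate) pairs with DistilBERT + a pair head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

question = "Who wrote Faust?"
candidates = ["Goethe wrote Faust.", "Berlin is the capital of Germany."]

batch = tok([question] * len(candidates), candidates,
            padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits       # shape: [n_candidates, 2]
scores = logits.softmax(-1)[:, 1]        # P(relevant) for each candidate
print(candidates[int(scores.argmax())])  # best candidate (after fine-tuning)
```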
SCOPUS_ID:84973618609
A Method for Automatic Construction of Ontological Knowledge Bases. I. Development of a Semantic-Syntactic Model of Natural Language
A semantic-syntactic model of natural language is presented. The tensor approach is applied to modeling semantic-syntactic relationships between words in sentences. The apparatus of control spaces of syntactic structures of natural language is used that makes it possible to improve the tensor semantic-syntactic model describing syntactic structures of arbitrary length and complexity with the help of recursion and superposition.
[ "Knowledge Representation", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 18, 72, 15 ]
SCOPUS_ID:84962013663
A Method for Automatic Construction of Ontological Knowledge Bases. II. Automatic Identification of Semantic Relations in Ontological Networks
A semantic-syntactic model of natural language is presented. After the factorization of constructed tensors of the model, vectors representing the semantic-syntactic valence of words and describing the commutative behavior of words in a sentence are generated. A method is developed for computing vectors of the semantic-syntactic valence of concepts in an ontology that form an implicit description of their semantic relations. An algorithm is proposed for extracting explicit semantic relations between ontological concepts from vectors of their semantic-syntactic valence.
[ "Knowledge Representation", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 18, 72, 15 ]
SCOPUS_ID:84978818619
A Method for Automatic Construction of Ontological Knowledge Bases. III. Automatic Generation of Taxonomy as the Basis for Ontology
A method is developed for the automatic generation of ontological knowledge bases. An algorithm is created for the extraction of explicit semantic relationships between concepts of an ontology on the basis of their semantic-syntactic valence vectors. Vectors of semantic-syntactic valences of words were also used as context vectors in an algorithm for formal concept analysis, which made it possible to develop a method for the automatic generation of high-quality taxonomies. As a result, a basic algorithm for the automatic generation of ontological knowledge bases was developed on the basis of the tensor semantic-syntactic model of natural language.
[ "Knowledge Representation", "Semantic Text Processing", "Syntactic Text Processing", "Information Extraction & Text Mining" ]
[ 18, 72, 15, 3 ]
SCOPUS_ID:85126179221
A Method for CTCS-3 Knowledge Extraction of Unstructured Data
With the rapid development of AI technology, building a nationally independent intelligent railway is a current trend in the Chinese rail transit industry. As CTCS-3 is one of the core technologies of Chinese high-speed rail, making machines understand CTCS-3 knowledge efficiently and concretely is becoming an important topic, and knowledge extraction is one of its most significant parts. We therefore propose a method to extract CTCS-3 knowledge from unstructured data by combining BERT and BiLSTM-CRF. We built a 407,936-word labeled dataset about CTCS-3 equipment for model training. Using this small dataset, we completed experiments on CTCS-3 knowledge extraction, including entity recognition and relationship extraction; the resulting F1 score is 75.89%. With this method we obtain entity-relationship triples, which are the foundation for achieving cognitive intelligence for CTCS-3. In summary, the method extracts CTCS-3 entity-relation triples from a small rail-transit-industry dataset.
[ "Information Extraction & Text Mining", "Relation Extraction", "Structured Data in NLP", "Named Entity Recognition", "Multimodality" ]
[ 3, 75, 50, 34, 74 ]
SCOPUS_ID:85123934535
A Method for Case Factor Recognition Based on Pre-trained Language Models
Case factor recognition is an important research topic in the domain of legal intelligence. The purpose of this task is to automatically extract the important factual descriptions from legal case descriptions and classify them according to the factor system designed by domain experts. Text encoders based on traditional neural networks struggle to extract deep-level features, and threshold-based multi-label classification struggles to capture the dependencies between labels. A multi-label text classification model based on pre-trained language models is therefore proposed: the encoder is a language model fine-tuned with a layer-attentive strategy, and the decoder is an LSTM-based sequence generation model. In experiments on the CAIL2019 dataset, the method improves the F1 score by up to 7.6% over a traditional recurrent-neural-network algorithm, and by about 3.2% over the basic language model under the same hyperparameter settings.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85064554453
A Method for Detecting and Analyzing the Sentiment of Tweets Containing Conditional Sentences
Society is developing daily, and consequently there is growing interest in public opinion. Surveys are frequently organized to detect the attitudes and beliefs of the community in various situations and its opinions about measures or products. Users express their feelings in particular through comments posted on social networks such as Twitter. Tweet sentiment analysis is a process that automatically detects the public emotions of users about events or products from published tweets. Many studies have solved the sentiment analysis problem with high accuracy for general tweets. However, previous studies either did not consider tweets containing conditional sentences or performed poorly on them. In this study, we focus on the detection and sentiment analysis of the specific tweet type that includes conditional sentences. The results show that the proposed method achieves high performance on both tasks.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85070737010
A Method for Detecting and Analyzing the Sentiment of Tweets Containing Fuzzy Sentiment Phrases
Owing to the development and spread of Twitter, an increasing number of user opinions on various topics are being published there, and they have become a significant data source for numerous applications, one of the most popular being tweet sentiment analysis. Many researchers have tried to solve this problem with different methods. However, previous studies have focused only on the sentiment analysis of general tweets, without considering a divide-and-conquer strategy, even though a large number of tweets contain fuzzy sentiment phrases. Effectively handling fuzzy sentiment phrases may therefore significantly improve the performance of sentiment analysis methods. In this study, we concentrate on the detection and sentiment analysis of the specific tweet type that contains fuzzy sentiment phrases. The results show that the proposed method performs relatively well on both tasks.
[ "Sentiment Analysis" ]
[ 78 ]
http://arxiv.org/abs/1908.09341v2
A Method for Estimating the Proximity of Vector Representation Groups in Multidimensional Space. On the Example of the Paraphrase Task
This paper presents a method for comparing two sets of vectors. The method can be applied in any task where it is necessary to measure the closeness of two objects represented as sets of vectors. It is applicable, for example, when comparing the meanings of two sentences as part of the paraphrase problem, i.e., measuring the semantic similarity of two sentences (groups of words). Existing methods are not sensitive to word order or syntactic connections in the sentences under consideration. The proposed method is advantageous because it neither reduces a group of words to a single scalar value nor represents closeness through an aggregation vector such as the mean of the set of vectors. Instead, we measure the mean cosine between the projections of the first group's vectors (the context) on one side and each vector of the second group on the other side. Sentence similarity defined this way does not lose semantic characteristics and takes the traits of individual words into account. The method was verified on the comparison of sentence pairs in Russian.
[ "Paraphrasing", "Semantic Text Processing", "Text Generation", "Representation Learning" ]
[ 32, 72, 47, 12 ]
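
A short numpy sketch under one plausible reading of the method: average the cosine between every vector of the first group (the context) and each vector of the second group, so that neither group is ever collapsed into a single scalar or a single mean vector. The sentence vectors are toy values.

```python
# Sketch: set-to-set similarity as the mean pairwise cosine between groups.
import numpy as np

def group_similarity(A, B):
    """A, B: word-vector matrices of shape (n_words, dim), (m_words, dim)."""
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((An @ Bn.T).mean())   # cos(u, v) averaged over all pairs

sent_a = np.array([[0.9, 0.1], [0.7, 0.3]])  # word vectors of sentence A
sent_b = np.array([[0.8, 0.2], [0.6, 0.4]])  # a paraphrase of A
sent_c = np.array([[0.1, 0.9], [0.2, 0.8]])  # an unrelated sentence
print(group_similarity(sent_a, sent_b))      # high
print(group_similarity(sent_a, sent_c))      # lower
```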
SCOPUS_ID:85099540933
A Method for Extracting Keywords from English Literature Based on Location Feature Weighting
Natural language processing (NLP) is a frontier technology in the field of artificial intelligence, and keyword extraction plays an important role in it. The TF-IDF algorithm is considered one of the most important inventions in information mining. This paper uses the positional characteristics of words, distinguishing occurrences in the title from occurrences in the full text, and applies location-based weighting on top of the TF-IDF algorithm to improve the accuracy of keyword extraction. In this paper, 400 ACM articles were used as the training dataset and 40 articles as the test set, with accuracy, recall, and F1 value as the evaluation criteria. The experimental data show that this method improves the accuracy of keyword extraction over the original algorithm.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
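
A minimal sketch of location-weighted TF-IDF as described above: a term's score is boosted when the term also appears in the title. The boost factor of 2.0 is an illustrative assumption, not the paper's tuned weight.

```python
# Sketch: TF-IDF keyword extraction with a title-position boost.
import math
from collections import Counter

def keywords(title, body, corpus_bodies, title_boost=2.0, k=3):
    tf = Counter(body.lower().split())
    title_terms = set(title.lower().split())
    n_docs = len(corpus_bodies) + 1
    scores = {}
    for term, freq in tf.items():
        df = 1 + sum(term in d.lower().split() for d in corpus_bodies)
        idf = math.log(n_docs / df)
        weight = title_boost if term in title_terms else 1.0
        scores[term] = weight * freq * idf   # location-weighted TF-IDF
    return sorted(scores, key=scores.get, reverse=True)[:k]

corpus = ["neural networks for image tasks", "graph models for molecules"]
print(keywords("Keyword Extraction with TFIDF",
               "keyword extraction ranks terms by tfidf weights across text",
               corpus))   # title terms float to the top
```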
SCOPUS_ID:85107816681
A Method for Extracting Unstructured Threat Intelligence Based on Dictionary Template and Reinforcement Learning
In recent years, individuals, organizations, and countries have all been threatened by cyber threats to some degree. Threat intelligence sharing schemes have greatly helped the protection of cyber security. Traditional threat intelligence sharing schemes mainly collect and analyze information manually, including but not limited to Indicators of Compromise (IOC), and produce a machine-readable report for the Security Operations Center (SOC) to act on. It is therefore challenging and significant to share and exchange cyber threat intelligence (CTI) easily and automatically. To extract CTI information efficiently, we construct an automatic information extraction pipeline of entity recognition and relationship extraction, used to extract effective entities and relationships from threat intelligence reports and improve the efficiency of threat intelligence sharing. The specific contributions cover two aspects. (1) A threat intelligence entity recognition model: on top of the classic BiLSTM-CRF neural network, we use BERT as a corpus pre-training model and propose a dictionary-template-based model, DT-BERT-BiLSTM-CRF. The BERT pre-training model makes full use of the contextual semantic information of the corpus and alleviates ambiguity in threat intelligence entity recognition, while a dictionary template of threat intelligence entities further improves entity recognition accuracy in this domain. (2) CTI relation extraction: we constructed a relation extraction dataset with distant supervision. To alleviate noisy annotations, we introduce an attention mechanism and reinforcement learning into traditional neural networks, proposing the model NR-RL-PCNN-ATT. Through a new reward mechanism, our model improves sentence selection quality and the efficiency of relationship extraction.
[ "Language Models", "Semantic Text Processing", "Green & Sustainable NLP", "Relation Extraction", "Named Entity Recognition", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 52, 72, 68, 75, 34, 4, 3 ]
https://aclanthology.org//W03-2122/
A Method for Forming Mutual Beliefs for Communication through Human-robot Multi-modal Interaction
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Multimodality" ]
[ 11, 38, 74 ]
SCOPUS_ID:85087875193
A Method for Generating Mazes with Length Constraint using Genetic Programming
We examine a method to automatically generate mazes with a length constraint. Assuming that the user creates a text that includes branches, our prototype arranges the input text in a maze space. In this study, we employ genetic programming and define commands to structure our maze generation program. We implemented the program and, as a result, were able to generate the desired maze for simple input texts.
[ "Programming Languages in NLP", "Multimodality" ]
[ 55, 74 ]
SCOPUS_ID:85100523340
A Method for Generating Synthetic Electronic Medical Record Text
Machine learning (ML) and natural language processing (NLP) have achieved remarkable success in many fields and have brought new opportunities and high expectations to the analysis of medical data, the most common type of which is the massive free-text electronic medical record (EMR). However, free EMR texts lack consistent standards, are rich in private information, and are limited in availability. It is also often hard to obtain a balanced number of samples for the types of diseases under study. These problems hinder the development of ML and NLP methods for EMR data analysis. To tackle these problems, we developed a model called Medical Text Generative Adversarial Network, or mtGAN, to generate synthetic EMR text. It is based on the GAN framework and is trained with the REINFORCE algorithm. It takes disease tags as inputs and generates synthetic texts as EMRs for the corresponding diseases. We evaluate the model at the micro, macro, and application levels on a Chinese EMR text dataset. The results show that the method fits real data well and can generate realistic and diverse EMR samples. This provides a novel way to avoid potential leakage of patient privacy while still supplying sufficient, well-controlled cohort data for developing downstream ML and NLP methods.
[ "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 58, 4 ]
SCOPUS_ID:85099589487
A Method for Geodesic Distance on Subdivision of Trees With Arbitrary Orders and Their Applications
Geodesic distance, sometimes called shortest path length, has proven useful in a great variety of applications, such as information retrieval on networks, including treelike networked models. Here, our goal is to analytically determine exact solutions for geodesic distances on two different families of growth trees which are recursively created from an arbitrary tree $\mathcal{T}$ using two types of well-known operations: first-order subdivision and the $(1,m)$-star-fractal operation. Different from commonly used methods in the literature, for instance spectral techniques, which address such problems on growth trees grown from a single edge as seed, we propose a novel method for deriving closed-form solutions on the presented trees completely. Meanwhile, our technique is more general and more convenient to implement than those previous methods, mainly because no complicated calculations are needed. In addition, the closed-form expression of the mean first-passage time ($MFPT$) for random walks on each member of the tree families is also readily obtained from the connection of our results to the effective resistance of the corresponding electric networks. The results suggest that the two topological operations above differ sharply with respect to the $MFPT$ for random walks, and yet are likely to show similar performance, at least for geodesic distance.
[ "Passage Retrieval", "Information Retrieval" ]
[ 66, 24 ]
SCOPUS_ID:85143294503
A Method for Improving Performance of Opinion Targets Extraction by Evaluating Category Classification
Opinion target extraction is mainly used in text opinion mining to discover evaluation-object entities in review texts. Algorithms based on unsupervised autoencoders can identify hidden topic information in a review corpus without manual annotation, but the evaluation objects extracted by an autoencoder lack diversity. This paper proposes a hybrid model that combines a supervised sentence-level classification task with an unsupervised autoencoder. The model trains a classifier to generate aspect categories. The Long Short-Term Memory (LSTM)-Attention structure in the encoder's shared classification task encodes the sentence vector representation to increase semantic relevance. The obtained aspect category then transforms the sentence vector representation into a middle-layer semantic vector to capture the correlation between aspect categories and aspect extraction and to improve the coding ability of the encoder. The model decodes the reconstruction of the sentence vector and trains it to obtain an aspect matrix. Finally, aspects are extracted by computing the cosine similarity between the aspect matrix and the words in the sentence. Experimental results on a multi-domain review corpus show that, compared with k-means and Localized Linear Discriminant Analysis (LocLDA), the evaluation index of this method improves by 3.7% in the restaurant domain and 2.1% in the hotel domain. This approach partially solves the lack of evaluation-category diversity in the training process and exhibits improved extraction of evaluation objects.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Responsible & Trustworthy NLP", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 80, 72, 24, 12, 4, 36, 3 ]
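The final extraction step described in this record, scoring each word against a learned aspect matrix by cosine similarity, can be sketched in a few lines of numpy. The random matrices here are stand-ins for the model's trained parameters.

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
aspect_matrix = rng.normal(size=(5, 50))  # 5 learned aspect vectors (stand-in for the trained decoder)
word_vectors = {w: rng.normal(size=50) for w in ["food", "was", "delicious"]}  # stand-in embeddings

# For each word, the best-matching aspect row gives its aspect assignment and score.
for word, vec in word_vectors.items():
    scores = [cosine_sim(vec, row) for row in aspect_matrix]
    best = int(np.argmax(scores))
    print(f"{word}: aspect {best}, score {max(scores):.3f}")
```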
SCOPUS_ID:85101087628
A Method for Improving Unsupervised Intent Detection using Bi-LSTM CNN Cross Attention Mechanism
Spoken Language Understanding (SLU) can be considered the most important sub-system in a goal-oriented dialogue system. SLU consists of User Intent Detection (UID) and Slot Filling (SF) modules, whose accuracy is highly dependent on the collected data. On the other hand, labeling is a tedious task due to the large number of labels required. In this paper, intent labeling for two datasets is performed using an unsupervised learning method. Because traditional methods of extracting features from text yield a very large feature space, we implemented a novel auto-encoder architecture based on the attention mechanism to extract a small and efficient feature space. This architecture, called Bi-LSTM CNN Cross Attention Mechanism (BCCAM), applies the attention mechanism crosswise from the Convolutional Neural Network (CNN) layer to the Bi-LSTM layer and vice versa. After finding a bottleneck in this auto-encoder network, the desired features are extracted from it. Once the features are extracted, we cluster each sentence according to its feature space using different clustering algorithms, including K-means, DEC, Agglomerative, OPTICS and the Gaussian mixture model. To evaluate the performance of the model, two datasets are used: ATIS and SNIPS. After running the various algorithms over the extracted feature space, the best accuracy and NMI obtained are 86.5 and 91.6, respectively, for the ATIS dataset, and 49.9 and 43.0, respectively, for the SNIPS dataset.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Sentiment Analysis", "Intent Recognition", "Text Clustering", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 80, 52, 72, 78, 79, 29, 4, 3 ]
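The clustering-and-evaluation step from this record maps directly onto scikit-learn; this sketch assumes the autoencoder's bottleneck features are already available as a matrix and substitutes synthetic data for them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
# Stand-in for BCCAM bottleneck features: 300 utterances, 32-dim, 3 true intents.
features = np.vstack([rng.normal(loc=c, size=(100, 32)) for c in (-2.0, 0.0, 2.0)])
true_intents = np.repeat([0, 1, 2], 100)

for name, model in [("kmeans", KMeans(n_clusters=3, n_init=10, random_state=0)),
                    ("gmm", GaussianMixture(n_components=3, random_state=0))]:
    pred = model.fit_predict(features)
    print(name, "NMI:", round(normalized_mutual_info_score(true_intents, pred), 3))
```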
SCOPUS_ID:85111383504
A Method for Improving Word Representation Using Synonym Information
The emergence of word embeddings has created good conditions for natural language processing, now used in an increasing number of applications related to machine translation and language understanding. Several word-embedding models have been developed and applied, achieving considerably good performance. In addition, several methods for enriching word embeddings have been proposed that handle various kinds of information, such as polysemy, subwords, and temporal and spatial information. However, popular prior vector representations of words ignored knowledge of synonyms. This is a drawback, particularly for languages with large vocabularies and numerous synonyms. In this study, we introduce an approach to enrich the vector representation of words by considering synonym information, based on extracting and representing vectors from their context words. Our proposal includes three main steps: first, the context words of the synonym candidates are extracted using a context window that scans the entire corpus; second, these context words are grouped into small clusters using the latent Dirichlet allocation method; and finally, synonyms are extracted from the synonym candidates based on their context words and converted into vectors. In comparison with recent word representation methods, we demonstrate that our proposal achieves considerably good performance in terms of word similarity.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:85118234029
A Method for MBTI Classification Based on Impact of Class Components
Predicting the personality type of text authors has a well-known usage in psychology with practical applications in business. From the data science perspective, we can look at this problem as a text classification task that can be tackled using natural language processing (NLP) and deep learning. This paper proposes a method and a novel loss function for multiclass classification using the Myers-Briggs Type Indicator (MBTI) approach for predicting the author's personality type. Furthermore, this paper proposes an approach that improves the current results of the MBTI multiclass classification because it considers components of compound class labels as supportive elements for better classification according to MBTI. As such, it also provides a new perspective on this classification problem. The experimental results on long short-term memory (LSTM) and convolutional neural network (CNN) models outperform baseline models for multiclass classification, related research on multiclass classification, and most research with four binary approaches to MBTI classification. Moreover, other classification problems that target compound class labels and label parts with binary mutually exclusive values can benefit from this approach.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85080079747
A Method for Massive Scientific Literature Clustering Based on Hadoop
With the development of science and technology and the emergence of large numbers of new specialized terms, the traditional classification of disciplines can no longer meet the current needs of subject division for scientific literature. At the same time, clustering scientific literature places greater demands on the efficiency of the methods and on the corresponding software and hardware. In this paper, text features are extracted with the TF-IDF method, taking the characteristics of scientific literature into account. In a Hadoop distributed environment, text clustering is carried out with the Canopy-Kmeans algorithm, achieving clustering of massive collections of scientific literature. As a result, the method proposed in this paper improves on key indicators compared with previous algorithms and greatly improves the efficiency of clustering.
[ "Responsible & Trustworthy NLP", "Text Clustering", "Information Extraction & Text Mining", "Green & Sustainable NLP" ]
[ 4, 29, 3, 68 ]
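On a single machine, the TF-IDF-plus-KMeans pipeline in this record has a compact scikit-learn analogue; the Canopy pre-clustering and Hadoop distribution are omitted here, with MiniBatchKMeans standing in for scalability. The toy documents are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import MiniBatchKMeans

docs = [
    "deep learning for image recognition",
    "convolutional networks classify images",
    "gene expression analysis in cancer",
    "protein interaction networks in cells",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)  # sparse TF-IDF document-term matrix

km = MiniBatchKMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignment per document
```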
http://arxiv.org/abs/cs/0206014v1
A Method for Open-Vocabulary Speech-Driven Text Retrieval
While recent retrieval techniques do not limit the number of index terms, out-of-vocabulary (OOV) words are crucial in speech recognition. Aiming at retrieving information with spoken queries, we fill the gap between speech recognition and text retrieval in terms of vocabulary size. Given a spoken query, we generate a transcription and detect OOV words through speech recognition. We then map the detected OOV words to terms indexed in a target collection to complete the transcription, and search the collection for documents relevant to the completed transcription. We show the effectiveness of our method by way of experiments.
[ "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Information Retrieval", "Multimodality" ]
[ 70, 47, 10, 24, 74 ]
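The step of mapping detected OOV words to indexed terms can be approximated with simple string similarity; the paper matches at the level of recognized speech, so difflib's orthographic similarity here is only a simplified stand-in, with a toy index.

```python
import difflib

index_terms = ["retrieval", "recognition", "transcription", "vocabulary"]

def map_oov(oov_word, terms, cutoff=0.6):
    """Map an OOV token to the closest indexed term, if any."""
    matches = difflib.get_close_matches(oov_word, terms, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(map_oov("retreival", index_terms))  # -> 'retrieval'
```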
SCOPUS_ID:85048548612
A Method for Predicting Protein Complexes from Dynamic Weighted Protein-Protein Interaction Networks
Predicting protein complexes from protein-protein interaction (PPI) networks is of great significance for recognizing the structure and function of cells. A protein may interact with different proteins at different times or under different conditions. Existing approaches utilize only static PPI network data, which may lose much temporal biological information. First, this article proposes a novel method that combines gene expression data at different time points with a traditional static PPI network to construct different dynamic subnetworks. Second, to further filter out data noise, the semantic similarity based on gene ontology is used as the network weight together with principal component analysis, which is introduced to handle the weight computation of three traditional methods. Third, after building the dynamic PPI network, a protein-complex prediction algorithm based on the "core-attachment" structural feature is applied to detect complexes in each dynamic subnetwork. Finally, the experimental results reveal that our method performs well in detecting protein complexes from dynamic weighted PPI networks.
[ "Semantic Text Processing", "Semantic Similarity" ]
[ 72, 53 ]
SCOPUS_ID:85059748030
A Method for Predicting the Winner of the USA Presidential Elections using Data extracted from Twitter
This paper presents work on using data extracted from Twitter to predict the outcome of the latest USA presidential elections of 8 November 2016 in three key states: Florida, Ohio and N. Carolina, focusing on the two dominant candidates: Donald J. Trump and Hillary Clinton. Our method comprises two steps, pre-processing and analysis; it succeeded in capturing negative and positive sentiment towards these candidates and predicted the winner in these states, who eventually won the presidency, where other similar attempts in the literature failed. We discuss the strengths and weaknesses of our method, proposing directions for further work.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85124397513
A Method for Resume Information Extraction Using BERT-BiLSTM-CRF
To address the low efficiency of extracting electronic resume information with manually constructed rules, a resume information extraction method based on named entity recognition is proposed, which casts the extraction of personal details such as graduation college, job intention and job skills as a named entity recognition task. First, plain text is extracted from resume files in different formats for data cleaning and other preprocessing. The BERT language model, based on the multi-head self-attention mechanism, is used to extract text features and obtain a word-granularity vector matrix. A BiLSTM neural network is used to obtain contextual abstract features of the serialized text. Finally, CRF is used to decode and annotate the globally optimal sequence, and the corresponding resume entity information is extracted. Experimental results show that the whole scheme can effectively extract electronic resume information, and that the BERT-BiLSTM-CRF resume information extraction model performs better than other models.
[ "Language Models", "Named Entity Recognition", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 34, 72, 3 ]
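A hedged sketch of running a BERT-based NER tagger with the HuggingFace transformers pipeline. The general-purpose model named here is a placeholder: the paper trains its own BERT-BiLSTM-CRF on annotated resume text, which this off-the-shelf tagger only approximates.

```python
from transformers import pipeline

# Placeholder general-purpose English NER model; the paper's own
# BERT-BiLSTM-CRF would instead be trained on annotated resume text.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

resume_line = "Jane Doe graduated from Stanford University and seeks a data engineer role."
for entity in ner(resume_line):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```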
SCOPUS_ID:85036474750
A Method for Semantic Relatedness Based Query Focused Text Summarization
In this paper, a semantic relatedness based query focused text summarization technique is introduced to find relevant information in a single text document. The semantic relatedness measure extracts the sentences related to the query. The query focused summarization approach can work with short queries that do not contain enough information on their own. The method produces better summaries by including a larger number of query-related sentences. Experiments and evaluation are carried out on the DUC 2005 and 2006 datasets, and the results show significant performance.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
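A minimal sketch of the selection step in this record: score each sentence by its relatedness to the query and keep the top-ranked ones. TF-IDF cosine similarity stands in for the paper's semantic relatedness measure, and the sentences are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The new policy reduces emissions from coal plants.",
    "The committee met on Tuesday afternoon.",
    "Emission limits will be phased in over five years.",
]
query = "emission reduction policy"

vec = TfidfVectorizer().fit(sentences + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(sentences))[0]

# Keep the 2 sentences most related to the query as the summary.
summary = [s for _, s in sorted(zip(sims, sentences), reverse=True)[:2]]
print(summary)
```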
SCOPUS_ID:85052543368
A Method for Semantic Roles Labeling Consistency Calculation Based on Multi-features
The authors present an automatic method for calculating semantic role labeling consistency, based on features of the annotated corpus's format, structure and content and on annotator performance. The experiment shows that the proposed method is fast and stable, has a high recall rate, and can greatly improve annotation quality and efficiency.
[ "Semantic Parsing", "Semantic Text Processing" ]
[ 40, 72 ]
http://arxiv.org/abs/1409.5165v1
A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping
A survey of existing methods for stopping active learning (AL) reveals the need for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods, with little (if any) attention paid to giving users control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports giving users control over stopping behavior.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP" ]
[ 80, 4 ]
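The stabilizing-predictions criterion can be sketched as: compare successive models' predictions on a fixed stop set and halt once agreement (e.g. Cohen's kappa) stays above a user-set threshold for a run of rounds. The threshold and window here are illustrative knobs, matching the record's emphasis on user-adjustable stopping.

```python
from sklearn.metrics import cohen_kappa_score

def should_stop(prediction_history, threshold=0.99, window=3):
    """Stop AL when the last `window` successive prediction sets on a
    fixed stop set all agree with kappa >= threshold."""
    if len(prediction_history) < window + 1:
        return False
    recent = prediction_history[-(window + 1):]
    return all(
        cohen_kappa_score(a, b) >= threshold
        for a, b in zip(recent, recent[1:])
    )

# Predictions of successive AL models on the same stop set (illustrative).
history = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]]
print(should_stop(history, threshold=0.99, window=3))  # True once stable
```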
SCOPUS_ID:85135788325
A Method for Summarizing Trajectories with Multiple Aspects
Trajectory data mining and analysis have been studied extensively in the past years. These tasks are complex and non-trivial due to the data volume and heterogeneity. One solution to these problems is data summarization to generate representative data. Few works in the literature address this solution, and none of them consider space, time, and an unlimited number of semantic dimensions together with the details of their data types. This paper proposes a grid-based method for summarizing trajectory data with multiple aspects, named MAT-SG. It brings several contributions: (i) trajectory segmentation into a spatial grid according to data point dispersion; and (ii) expressing a set of trajectory data as a sequence of representative points with representative values for each dimension, considering the particularities of their data types. We evaluate MAT-SG on two datasets to assess volume reduction and accuracy.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85118958160
A Method for Targeted Sentiment Analysis
Targeted sentiment analysis (TSA) is a crucial task for fine-grained public opinion mining, which focuses on predicting the sentiment polarity towards a specific target in a given sentence. Most existing works ignore the syntactic structure of the context sentence and may attend to irrelevant context words when making sentiment judgments. To tackle this problem, a novel syntax-aware model is proposed for TSA, which integrates a pre-trained bidirectional encoder representations from transformers model and a graph convolutional network over the dependency tree of the sentence to capture the sentence's contextual information and syntactic structure, respectively. The proposed model uses the multi-head attention mechanism to aggregate the information and obtain the final target sentiment representation. The model is also combined with an existing domain-adaptation method to introduce domain and syntactic knowledge, which further improves performance. Experimental results on several widely used benchmark datasets demonstrate the effectiveness of the proposed model.
[ "Syntactic Text Processing", "Sentiment Analysis" ]
[ 15, 78 ]
SCOPUS_ID:84880650417
A Method for Thematic Term Extraction Based on Word Position Weight
Thematic terms can well represent the main idea of documents, and research on thematic term extraction is one of the important fields of Natural Language Processing. This paper proposes a novel thematic term extraction method, which consists of generating a candidate thematic term set based on the position weights of terms and extracting thematic terms based on the incremental weight of the thematic term set. The generation algorithm weights a term according to its positions in a document and then generates the candidate thematic term set according to those weights. The extraction algorithm calculates the incremental weight of each candidate term and selects the terms whose incremental weights exceed a given threshold. Experimental results on two corpora show that the overall satisfaction with the thematic terms extracted by our method exceeds 90%, a very good performance. © Springer-Verlag Berlin Heidelberg 2012.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
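A toy sketch of the position-weighted scoring described in this record: terms get larger weights when they appear in prominent positions, and candidates above a threshold are kept. The specific weights and threshold are assumptions, not the paper's values.

```python
from collections import defaultdict

# Assumed position weights; the paper defines its own scheme.
POSITION_WEIGHT = {"title": 3.0, "first_sentence": 2.0, "body": 1.0}

def score_terms(doc):
    """doc: mapping of position name -> list of tokens."""
    scores = defaultdict(float)
    for position, tokens in doc.items():
        for token in tokens:
            scores[token] += POSITION_WEIGHT[position]
    return scores

doc = {
    "title": ["neural", "summarization"],
    "first_sentence": ["summarization", "models"],
    "body": ["models", "are", "trained", "on", "neural", "data"],
}
threshold = 3.0
thematic = {t: s for t, s in score_terms(doc).items() if s >= threshold}
print(thematic)  # {'neural': 4.0, 'summarization': 5.0, 'models': 3.0}
```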
SCOPUS_ID:85124282990
A Method of Automated Corpus-Based Identification of Metaphors for Compiling a Dictionary of Metaphors: A Case Study of the Emotion Conceptual Domain
This paper presents a method for the automated extraction of metaphors from the semantically annotated corpus of Ukrainian fiction texts (GRAC) for a dictionary of Ukrainian fiction metaphors. The macrostructure and microstructure of the dictionary of metaphors are described. We focus on the structural-semantic models of metaphors. The metaphorization of the conceptual domain of EMOTION in Ukrainian is specified.
[ "Emotion Analysis", "Sentiment Analysis" ]
[ 61, 78 ]
SCOPUS_ID:85073878122
A Method of Bug Report Quality Detection Based on Vector Space Model
As a vehicle for recording and tracking defects, bug reports provide a basis for solving software quality problems. However, removing fake or duplicate bug reports in multi-person, parallel software testing projects is a labor-intensive job. Therefore, this paper proposes a method based on the vector space model for dealing with this problem automatically. We built a matching library from the test requirements and the confirmed bug reports, and used the vector space model to calculate the similarity between a bug report and the matching library. The correctness of the bug report is then assessed based on this similarity.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:85076159065
A Method of Calculating the Semantic Similarity Between English and Chinese Concepts
In the big data era, data and information processing is a common concern of diverse fields. To achieve the two keys to this process, "efficiency" and "intelligence", it is necessary to search for, define and build the potential links among heterogeneous data. Focusing on this issue, this paper proposes a knowledge-driven method to calculate the semantic similarity between (bilingual English-Chinese) words. The method is built on the knowledge base HowNet, which defines and maintains the "atom taxonomy tree" and the "semantic dictionary", a networked knowledge system describing the relationships between word concepts and the attributes of those concepts. Compared with other knowledge bases, HowNet pays more attention to concept-based connections between words. Moreover, the method is more complete in its analysis of concepts and more convenient in its calculations. The non-relational database MongoDB is employed to improve efficiency and to make full use of the rich knowledge maintained in HowNet. Considering both the structure of HowNet and the characteristics of MongoDB, a number of equations are defined to calculate the semantic similarity.
[ "Semantic Text Processing", "Green & Sustainable NLP", "Semantic Similarity", "Knowledge Representation", "Responsible & Trustworthy NLP" ]
[ 72, 68, 53, 18, 4 ]
SCOPUS_ID:85140061708
A Method of Chinese NER Based on BERT Model and Coarse-Grained Features
Named entity recognition (NER) is a fundamental task and an important aspect of information extraction, natural language understanding and retrieval systems. Currently, deep-learning-based NER outperforms traditional feature- and kernel-function-based approaches in feature extraction depth and modeling accuracy. Traditional character-based feature approaches tend to ignore coarse-grained word features, whereas recognizing named entities in Chinese requires comprehensive consideration of both character-level and word-level features. To address these issues, the combination of the BERT pre-training model and the Lattice network structure integrates coarse-grained word features with character features, and CRF is used to decode the sequence labels. This paper proposes the BERT-Lattice-CRF model and tests it on the public Resume dataset, where the model's F1 value is significantly improved.
[ "Language Models", "Named Entity Recognition", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 34, 72, 3 ]
SCOPUS_ID:85133748920
A Method of Chinese-Vietnamese Bilingual Corpus Construction for Machine Translation
A bilingual corpus is vital for natural language processing, especially for machine translation. The larger and higher-quality the corpus, the better the resulting machine translation. There are two popular approaches to building a bilingual corpus. The first is to build one automatically from resources available on the internet, typically bilingual websites. The second is to construct one manually. Automated construction methods are being used more frequently because they are less expensive and there is a growing number of bilingual websites to exploit. In this paper, we use automated collection methods on a bilingual website to create a bilingual Chinese-Vietnamese corpus. In particular, the bilingual website we use to collect the data is that of a multilingual dictionary (https://glosbe.com). We collected a Chinese-Vietnamese corpus of more than 400k sentence pairs from this website and chose 100,000 sentence pairs for machine translation experiments. From the corpus, we built five datasets of 20k, 40k, 60k, 80k, and 100k sentence pairs, respectively. In addition, we built five further datasets by applying word segmentation to the sentences of the original datasets. The experimental results showed that: 1) the quality of the corpus is relatively good, with a highest BLEU score of 19.8, although some issues still need to be addressed in future work; 2) the larger the corpus, the higher the machine translation quality; and 3) the untokenized datasets help train better translation models than the tokenized datasets.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
https://aclanthology.org//W04-2206/
A Method of Creating New Bilingual Valency Entries using Alternations
[ "Multilinguality" ]
[ 0 ]
SCOPUS_ID:85139927360
A Method of Deep Learning Model Optimization for Image Classification on Edge Device
Due to the recent increase in the use of deep learning models on edge devices, industry demand for Deep Learning Model Optimization (DLMO) is also increasing. This paper derives a usage strategy for DLMO based on a performance evaluation of lightweight convolution, quantization, pruning and knowledge distillation, techniques known to be excellent at reducing memory size and operation delay with a minimal accuracy drop. Through image classification experiments, we derive feasible and optimal strategies for applying deep learning to Internet of Things (IoT) and tiny embedded devices. In particular, DLMO strategies best suited to each on-device Artificial Intelligence (AI) service are proposed in terms of performance factors. We thereby suggest the most rational algorithms for very resource-limited environments, drawing on mature deep learning methodologies.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Green & Sustainable NLP", "Text Classification", "Responsible & Trustworthy NLP", "Information Retrieval", "Multimodality" ]
[ 20, 3, 68, 36, 4, 24, 74 ]
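Of the DLMO techniques this record evaluates, post-training dynamic quantization is the quickest to demonstrate. This sketch uses PyTorch's built-in API on a small stand-in model; it is an illustration of the technique, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

# A small stand-in classifier; in practice this would be a trained CNN.
model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10))

# Post-training dynamic quantization of the Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```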
SCOPUS_ID:85128072351
A Method of Electricity Meter LCD Screen Defect Detecting Based on Convolutional Neural Network
To quickly and accurately detect display defects on electricity meter LCD screens, this paper proposes an LCD screen defect detection method based on a convolutional neural network (CNN). First, a horizontal straight line of the LCD screen frame is found with the LSD line detection method for tilt correction. Second, the LCD area is located by normalized correlation matching on the corrected image. Then, the exact positions of the meter characters are located using character position information generated by a template annotation tool. Finally, a CNN performs character defect detection and OCR recognition on the segmented meter characters. Experimental results show that the accuracy of positioning and detecting the LCD screen characters is about 99%. At the same time, the CNN's OCR function can accurately identify the meter LCD screen characters.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85147981017
A Method of Extracting Discipline Inspection Cases Based on Deep Learning
Event extraction is one of the key tasks in information extraction and has been widely applied in various fields in recent years. In the field of disciplinary inspection and supervision, disciplinary inspection case texts are characterized by many event types, many professional terms, and strong correlations between event types and event arguments. In the face of huge volumes of data, reliance on manual analysis has seriously affected the efficiency of disciplinary inspection. However, no corpus is currently available for the discipline inspection domain. This article uses BIO annotation to construct a discipline inspection corpus, laying a foundation for subsequent work, and proposes the BERT-BiGRU-CRF joint event extraction model: the BERT model is trained on the discipline inspection corpus, and BiGRU and CRF networks are combined to realize event type recognition and argument extraction. Experimental results show that the model can effectively extract event information in the discipline inspection field. In addition, to better support the work of disciplinary inspection staff, a disciplinary inspection and supervision event extraction system is built to systematically extract all event information contained in a case.
[ "Event Extraction", "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 31, 52, 72, 3 ]
SCOPUS_ID:85076298140
A Method of Extracting Malware Features Based on Probabilistic Topic Model
In today's complex network environment, malicious code spreads quickly in various ways, illegally occupying user terminals or network equipment and stealing private data. Malware poses a serious security threat to networks and Internet users. Traditional methods cannot detect unknown malicious code and are challenged by the diversity and sheer number of malware variants. We propose an unsupervised malware identification approach that generates a standardization rule for assembly instructions by analyzing the content of decompiled PE files. By introducing latent Dirichlet allocation (LDA), our method extracts the latent "document-topic" and "topic-word" probability distributions from samples. The topic probability distributions are used as sample features, a new way of representing malware. We then propose a new malware detection framework for model training and malware testing. Moreover, our method addresses the need to specify the number of topics in the LDA model beforehand by using perplexity with varying step sizes, evaluating the best number of topics quickly and automatically. Finally, we analyze the semantics of the "document-topic" and "topic-word" aggregation results over assembly instructions, which explains the latent semantics of the features our method obtains. Experimental results show that our method is more discriminative, achieving better classification results than other methods while accurately discriminating novel malware variants.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
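The "document-topic" feature extraction step in this record maps naturally onto gensim's LDA: token streams of normalized assembly mnemonics act as documents, per-document topic distributions become sample features, and log-perplexity can guide the choice of topic count. The toy corpus is an assumption for illustration.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy 'documents' of normalized assembly mnemonics (assumed, for illustration).
docs = [["mov", "push", "call", "ret"],
        ["xor", "mov", "jmp", "cmp"],
        ["push", "call", "ret", "mov"]]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)
print(lda.log_perplexity(bow))          # used to compare candidate topic counts
print(lda.get_document_topics(bow[0]))  # topic distribution = sample features
```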
SCOPUS_ID:84872872629
A Method of Extracting Sentences Related to Protein Interaction from Literature using a Structure Database
Because a protein expresses its function through interaction with other substrates, it is vital to create a database of protein interactions. Since the information on protein interactions is spread across thousands of publications, it is nearly impossible to extract all of it manually. Although extraction systems for interaction information based on template matching have already been developed, it is not possible to match every sentence containing interaction information because of sentence complexity. We propose a method for extracting sentences with interaction information that is independent of sentence structure. In a protein-compound complex structure, an interacting residue is near its partner. The distance between them can be calculated from the structure data in the PDB database, with a short distance indicating that the associated sentences might describe interaction information. For a free-protein structure, the distance cannot be calculated because the coordinates of the protein's partner are not registered in the structure data. Hence, we use homologous protein structure data in which the protein is complexed with its partner. The proposed method was applied to seven papers on protein-compound complexes and four papers on free proteins, obtaining F-measures of 71% and 72%, respectively. © 2005, The Institute of Electrical Engineers of Japan. All rights reserved.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85056087480
A Method of Feature Selection Based on Word2Vec in Text Categorization
In text categorization, classifier performance decreases as the feature dimension increases. The main purpose of feature selection is to remove irrelevant and redundant features and reduce the feature dimension. Traditional feature selection methods, such as CHI, IG and DF, take into account only the number of occurrences of features and ignore feature semantics and part-of-speech information. The vector representations of words learned by word2vec models have been shown to carry semantic meaning and are useful in various NLP tasks. Based on the word vectors generated by Word2Vec, this paper proposes the Word2Vec-SM algorithm to reduce the dimensionality of the features. Experiments demonstrate the effectiveness of the Word2Vec-SM algorithm.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85055456224
A Method of Micro-Blog Users' Interests Topic Extraction
The bag-of-words model is first improved according to the characteristics of short social media texts. A semantic representation model is then proposed that uses the semantic relations between features. A sequence diagram model can be constructed from the order of features in a sentence. On this basis, together with a time factor, we propose a Single-Pass-based user interest topic model to extract the topics users pay attention to. The experimental results show that the FM, AA and F of our method increase by 200.40%, 46.50% and 80.05%, respectively, compared with the latest method, FSC-LDA.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85096599319
A Method of Modern Chinese Irony Detection
Irony is a kind of expression whose literal meaning is the reversal of its real meaning. Although the understanding of irony is considered to depend heavily on contextual information, there should also be cues at the grammatical and semantic levels. In this research, we try to find these linguistic cues by observing large-scale corpora. We identify frequently used ironic constructions, then analyze their features and generation mechanisms. We observe that the intensity of an ironic expression relies on the immediacy with which it coerces the listener into experiencing the reversal. We identify seven kinds of reversal in Chinese irony and summarize their formalized features. We also design an Irony Identification Procedure (IIP) to help detect irony. In the future, we plan to classify the features, compare their efficiency with computational methods to obtain quantitative data, and finally find an effective way to detect irony automatically.
[ "Stylistic Analysis", "Sentiment Analysis" ]
[ 67, 78 ]
SCOPUS_ID:85072981984
A Method of Ontology Evolution and Concept Evaluation Based on Knowledge Discovery in the Heavy Haul Railway Risk System
Risk pre-control on heavy haul railways is a collaborative scenario with multi-department linkage, and the risk analysis model relies on multiple data sources. As tools for formal knowledge modeling, ontologies and knowledge graphs can support knowledge discovery, reasoning and decision support over multi-dimensional heterogeneous data. This paper reconstructs unusual contexts with participant behavior data at the core and establishes a basic Scenario-Risk-Accident Chain (SRAC) ontology framework. Under the collaborative relationships formed by reasoning rules between context and risk, an evolution mechanism for the SRAC is established to introduce new knowledge, such as knowledge extracted from device detection data. New entities are added to the risk concept tree through semantic similarity algorithms. In addition, a weight attribute is added to the risk ontology. With this quantitative representation of risk concepts, the paper uses risk relevance mining to build associated subgraphs and establishes a new method for assessing potential accident levels through a maximum-flow search mechanism.
[ "Semantic Text Processing", "Structured Data in NLP", "Semantic Similarity", "Knowledge Representation", "Reasoning", "Multimodality" ]
[ 72, 50, 53, 18, 8, 74 ]
http://arxiv.org/abs/2204.12808v1
A Method of Query Graph Reranking for Knowledge Base Question Answering
This paper presents a novel reranking method to better choose the optimal query graph, a sub-graph of the knowledge graph, for retrieving the answer to an input question in Knowledge Base Question Answering (KBQA). Existing methods suffer from a severe problem: there is a significant gap between top-1 performance and the oracle score of the top-n results. To address this problem, our method divides the selection procedure into two steps: query graph ranking and query graph reranking. In the first step, we produce the top-n query graphs for each question. We then rerank the top-n query graphs by incorporating answer type information. Experimental results on two widely used datasets show that our proposed method achieves the best results on the WebQuestions dataset and the second best on the ComplexQuestions dataset.
[ "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Natural Language Interfaces", "Multimodality" ]
[ 72, 50, 27, 18, 11, 74 ]
SCOPUS_ID:85132332653
A Method of Sharing Sentence Vectors for Opinion Triplet Extraction
The aspect-based sentiment analysis (ABSA) task mainly detects the sentiment polarities of aspect terms or achieves aspect-opinion co-extraction. Existing ABSA methods usually divide the task into two independent sub-tasks executed separately, which results in many meaningless pairs for two reasons: aspect-sentiment pair extraction has no reference opinion terms for the interaction, and aspect-opinion co-extraction has no reference to the corresponding sentiment dependencies. In recent years, the triplet extraction task has emerged from these problems with ABSA, extracting aspect terms, opinion terms and sentiment jointly. However, existing triplet extraction methods still suffer from insufficient interaction among the three sub-tasks and serious error propagation. Moreover, real sentences often have overlapping aspect and opinion words. In this paper, we formulate ABSA as an opinion triplet extraction (OTE) task in a multi-task learning framework and propose a method of sharing sentence vectors for opinion triplet extraction (OTE-SSV) to enhance the extraction of text semantic representations and to strengthen the interaction among the elements of triplet extraction. Unlike existing OTE approaches that rely on a pipeline over multiple tasks, OTE-SSV extracts the sub-tasks concurrently, which greatly reduces the propagation of errors from one sub-task into the next. We verify experimentally that OTE-SSV can also correctly and efficiently extract triplets from sentences with overlapping aspect and opinion words. Experimental results on four ABSA SemEval benchmarks show that the F1 measures of OTE-SSV are 1% to 2% higher than those of a series of state-of-the-art methods.
[ "Semantic Text Processing", "Representation Learning", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 72, 12, 23, 78, 3 ]
SCOPUS_ID:85123509306
A Method of Short Text Representation Fusion with Weighted Word Embeddings and Extended Topic Information
Short text representation is one of the basic and key tasks of NLP. The traditional method simply merges the bag-of-words model and the topic model, which may lead to ambiguous semantic information and leave the topic information sparse. We propose an unsupervised text representation method that fuses word embeddings with extended topic information. Two fusion strategies for weighted word embeddings and extended topic information are designed: static linear fusion and dynamic fusion. The method can highlight important semantic information, flexibly fuse topic information, and improve short text representation. We use classification and prediction tasks to verify the effectiveness of the method, and the test results show that it is valid.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:85122928683
A Method of Typhoon Disaster Loss Identification and Classification Using Micro-blog Information
Social media plays an increasingly important role in the real-time distribution and dissemination of disaster information. During a disaster event, social media usually generates and carries a great deal of real-time disaster loss information, which is very useful for timely disaster response and disaster loss assessment. However, social media data has many shortcomings, such as highly fragmented information, sparse text features, and a lack of annotated corpora, which make traditional supervised learning methods difficult to use effectively for disaster information extraction. This paper proposes a fast disaster loss identification and classification method that extracts disaster information from social media data by extending context features and matching feature words. With this method, we first extracted keywords from a small sample of micro-blog texts of different disaster loss categories based on Chinese grammar rules and constructed pairs of collocated feature words. We then used a word vector model and an existing lexicon to supplement and expand these feature-word collocation pairs, and introduced an external corpus to optimize the semantic collocation relationships between feature words according to Chinese word co-occurrence rules. Finally, we built a classification knowledge base for identifying and classifying the typhoon-related disaster loss information contained in micro-blogs. An experimental system was developed to evaluate the method. Typhoon "Meranti", which made landfall on 15 September 2016, was selected as a case study. Results show that the method is very effective at identifying and classifying different categories of disaster loss information from social media (every comprehensive evaluation index across the categories exceeds 0.74). We mapped the spatio-temporal distribution of the typhoon's influence based on the disaster loss classification results from social media. The experiment shows that the classification output data and maps can be used for disaster loss evaluation and mitigation.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:84869231852
A Method of event ontology-based single automatic summarization
This paper presents a novel single-document automatic summarization method based on event ontology (EOSum), developed after analyzing the advantages and disadvantages of event-based automatic summarization. EOSum uses the event ontology as a semantic resource to reduce events and compute the event weights of a document. Event weights and text readability are considered together when selecting events for the summary of a document. The experimental results show that the proposed method achieves better summarization.
[ "Semantic Text Processing", "Summarization", "Knowledge Representation", "Text Generation", "Information Extraction & Text Mining" ]
[ 72, 30, 18, 47, 3 ]
SCOPUS_ID:84964336616
A Method of the Feature Selection in Hierarchical Text Classification Based on the Category Discrimination and Position Information
Feature dimension reduction is an important part of text categorization, and it becomes even more important for child-category classification in hierarchical text classification. This paper presents a Chinese text feature selection method based on category discrimination and feature position information. Experimental results show that the proposed method achieves higher precision and recall than the alternatives, and therefore better feature selection.
[ "Text Classification", "Ethical NLP", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 17, 4, 24, 3 ]
SCOPUS_ID:85124396931
A Method to Construct Guidelines for Spanish Comments Annotation for Sentiment Analysis
The application of sentiment analysis in social networks supports the understanding of complaints and claims in users' comments. To train the models that automate this analysis, it is important to construct guidelines that yield a more robust corpus. As far as we know, no related work on guidelines for annotating Spanish comments has been published. We propose a method for constructing guidelines that help annotators reach a consensus throughout the annotation process for Spanish comments from social networks. We annotated 3259 Spanish comments using our guidelines, with 84% agreement among our annotators. We employed our corpus and eight baseline classifiers for sentiment analysis, achieving the highest F1-score of 78.63% with a Multilayer Perceptron. Our method is useful for labeling Spanish comments, which can then be used in NLP tasks such as sentiment analysis.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:84937211519
A Method to Control Home Appliances Based on Writing Commands Over the Air
This paper presents a live, free, real-time video-based pointing method that allows users to write hand-gesture-based control commands in the air in front of an installed camera in order to control home appliances. The proposed method has four main parts: finger tracking, OCR analysis, the appliance control circuit, and the appliance monitoring system. The proposed system is tested on two test beds, computer-based and mobile-device-based, and over three different communication links: dedicated wire, Bluetooth, and the Global System for Mobile communications (GSM). The tests indicate an improvement of 92.023% in the overall accuracy of the proposed system. The average recognition time per input character is 0.52 s, and the average time for processing and acknowledgment (from DMG to computer/mobile) is 0.23 s, 2 s, and 15 s for the dedicated wire, Bluetooth, and GSM-SMS communication links, respectively.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85081572661
A Method to Estimate Perceived Quality and Perceived Value of Brands to Make Purchase Decision Using Aspect-Based Sentiment Analysis
Perceived quality and perceived value are essential attributes in brand management, and they are traditionally measured with primary surveys. In this work, we propose a methodology to estimate perceived quality and value from online consumer reviews using aspect-based sentiment analysis. We crawled reviews of five popular mobile brands from a reputed e-commerce website, applied state-of-the-art text pre-processing techniques to clean the text, and extracted the aspects with a semi-automatic approach based on a dependency parser. The aspects are grouped into five clusters according to the benefits consumers get from the brand. Lastly, we apply TOPSIS, a multi-criterion decision-making algorithm, to rank the brands by their perceived quality scores.
[ "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 23, 78 ]
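TOPSIS itself is a short, well-defined procedure; a generic numpy implementation follows, with a toy decision matrix standing in for the aspect-level sentiment scores this record derives from reviews.

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives (rows) on benefit criteria (columns)."""
    m = matrix / np.linalg.norm(matrix, axis=0)  # vector-normalize each column
    v = m * weights                              # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)   # ideal / anti-ideal points
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness: higher is better

# Toy scores for 3 brands on 5 aspect clusters (assumed values).
scores = np.array([[0.7, 0.6, 0.8, 0.5, 0.9],
                   [0.6, 0.8, 0.7, 0.7, 0.6],
                   [0.9, 0.5, 0.6, 0.8, 0.7]])
weights = np.array([0.3, 0.2, 0.2, 0.15, 0.15])
print(np.argsort(-topsis(scores, weights)))  # brand indices, best first
```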
SCOPUS_ID:85080862622
A Method to Estimate Request Sentences using LSTM with Self-Attention Mechanism
Recently, many opinions have accumulated on review sites. These reviews are very useful for service providers and users, but extracting the useful information manually is costly, so the information needs to be processed automatically. Sentiment analysis is a representative example of such processing: opinions are classified into positive and negative. However, general sentiment analysis has focused on sentiment sentences (positive/negative) and not on request sentences such as "Please set an electric pot in the room". In this research, we aim to identify request sentences using deep learning, specifically a Recurrent Neural Network (RNN) with multiple Self-Attention mechanisms. We propose a request estimation method using an LSTM with four Self-Attention mechanisms to represent sentences from multiple perspectives. The results confirm the effectiveness of the proposed method.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:85030848321
A Method to Evaluate the Research Direction of University
The era is changing from information technology to data technology; big data is used very effectively in fields such as finance, medicine and e-commerce, but not yet in education. The idea of "data-driven schools, analysis-driven educational change" makes the need for educational data mining increasingly prominent. Data mining in education can help us connect the relevant areas of education and find the key educational variables, making educational and teaching decisions simple and accurate. In this paper, using a Chinese word segmentation algorithm, association rules and the RStudio tool, we analyse the titles of master's theses from four universities with the same discipline structure. The title data is obtained from http://www.cnki.net, an authoritative database in China. The results show that the research directions of the four universities tend towards wireless networks, mobile communication and algorithms.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:85086231356
A Method to Generate Soft Reference Data for Topic Identification
Text mining and topic identification models are becoming increasingly relevant to extract value from the huge amount of unstructured textual information that companies obtain from their users and clients nowadays. Soft approaches to these problems are also gaining relevance, as in some contexts it may be unrealistic to assume that any document has to be associated to a single topic without any further consideration of the involved uncertainties. However, there is an almost total lack of reference documents allowing a proper assessment of the performance of soft classifiers in such soft topic identification tasks. To address this lack, in this paper a method is proposed that generates topic identification reference documents with a soft but objective nature, and which proceeds by combining, in random but known proportions, phrases of existing documents dealing with different topics. We also provide a computational study illustrating the application of the proposed method on a well-known benchmark for topic identification, as well as showing the possibility of carrying out an informative evaluation of soft classifiers in the context of soft topic identification.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85084137155
A Method to Identify the Current Mood of Social Media Users
A person's mood changes frequently during the day and can be categorized as happy, sad, calm or angry. Most people today regularly share their daily activities, opinions and feelings on social media. Identifying the current mood is useful for recommendation systems that aim to change or elevate moods. The proposed system therefore identifies a person's current mood by mining their social media content, such as posts, comments, image posts and emoticons. In the proposed solution, the current mood score is calculated in two stages: a score for text content (images with text and posts/comments) and a score for emoticons. Posts made within a 24-hour period are considered for the current mood, and scores from multiple posts are combined using a temporal weighted average. Text classification is performed with a 1D Convolutional Neural Network, and emoticon classification is based on a survey. Overall, an accuracy of 85% is achieved.
[ "Visual Data in NLP", "Text Classification", "Multimodality", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 20, 36, 74, 24, 3 ]
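The temporal weighted average in this record can be sketched as an exponentially decayed mean over post scores within the 24-hour window; the half-life used for the decay is an illustrative assumption.

```python
import math

def current_mood_score(posts, half_life_hours=6.0):
    """posts: list of (score, hours_ago) within the last 24 hours.
    More recent posts get exponentially larger weights."""
    decay = math.log(2) / half_life_hours
    num = den = 0.0
    for score, hours_ago in posts:
        w = math.exp(-decay * hours_ago)
        num += w * score
        den += w
    return num / den if den else 0.0

# (sentiment score in [-1, 1], hours since posting) -- toy values.
print(round(current_mood_score([(0.8, 1), (-0.4, 10), (0.2, 20)]), 3))
```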
SCOPUS_ID:85133325191
A Method to Integrate Word Sense Disambiguation and Translation Memory for English to Hindi Machine Translation System
Word sense disambiguation deals with deciding a word's precise meaning in a specific context. One of the major problems in natural language processing is lexical-semantic ambiguity, where a word has more than one meaning, and disambiguating the sense of polysemous words is one of the most important tasks in machine translation. This research work aims to design and implement English to Hindi machine translation, with a design methodology that addresses improving the speed and accuracy of the translation process. The algorithm and modules designed in this work have been deployed on Hadoop infrastructure, and test cases were designed to check the feasibility and reliability of the process. The work describes methodologies that reduce data transmission by adding a translation memory component to the framework; execution speed is increased by replacing modules in the machine translation process with lightweight modules, which reduces infrastructure needs and execution time.
[ "Machine Translation", "Semantic Text Processing", "Word Sense Disambiguation", "Text Generation", "Multilinguality" ]
[ 51, 72, 65, 47, 0 ]
http://arxiv.org/abs/1909.00672v2
A Method to Learn Embedding of a Probabilistic Medical Knowledge Graph: Algorithm Development
This paper proposes an algorithm named PrTransH to learn embedding vectors for medical knowledge derived from real-world EMR data. The unique challenge in embedding a medical knowledge graph built from real-world EMR data is that the uncertainty of knowledge triplets blurs the border between "correct" and "wrong" triplets, changing a fundamental assumption of many existing algorithms. To address this challenge, several enhancements are made to the existing TransH algorithm: 1) the probability of each medical knowledge triplet is incorporated into the training objective; 2) the margin-based ranking loss is replaced with a unified loss computed over both valid and corrupted triplets; 3) the training data set is augmented with medical background knowledge. Verification on a medical knowledge graph based on real-world EMR data shows that PrTransH outperforms TransH on the link prediction task. To the best of our knowledge, this paper is the first to learn and verify knowledge embeddings on probabilistic knowledge graphs.
[ "Semantic Text Processing", "Structured Data in NLP", "Representation Learning", "Knowledge Representation", "Multimodality" ]
[ 72, 50, 12, 18, 74 ]
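The TransH scoring function that PrTransH builds on projects head and tail entities onto a relation-specific hyperplane before measuring translation distance. A plain numpy version follows; PrTransH's probability weighting of the training objective is noted only as a comment, not implemented.

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH plausibility: distance after projecting h and t onto the
    hyperplane with unit normal w_r; lower means more plausible."""
    w_r = w_r / np.linalg.norm(w_r)   # hyperplane normal must be unit-length
    h_perp = h - (h @ w_r) * w_r      # project head onto the hyperplane
    t_perp = t - (t @ w_r) * w_r      # project tail onto the hyperplane
    return np.linalg.norm(h_perp + d_r - t_perp)

rng = np.random.default_rng(0)
h, t, w_r, d_r = (rng.normal(size=16) for _ in range(4))
print(transh_score(h, t, w_r, d_r))
# PrTransH would additionally weight each triplet's loss by the
# probability estimated from EMR co-occurrence statistics.
```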
http://arxiv.org/abs/2104.07815v1
A Method to Reveal Speaker Identity in Distributed ASR Training, and How to Counter It
End-to-end Automatic Speech Recognition (ASR) models are commonly trained over spoken utterances using optimization methods like Stochastic Gradient Descent (SGD). In distributed settings like Federated Learning, model training requires transmission of gradients over a network. In this work, we design the first method for revealing the identity of the speaker of a training utterance with access only to a gradient. We propose Hessian-Free Gradients Matching, an input reconstruction technique that operates without second derivatives of the loss function (required in prior works), which can be expensive to compute. We show the effectiveness of our method using the DeepSpeech model architecture, demonstrating that it is possible to reveal the speaker's identity with 34% top-1 accuracy (51% top-5 accuracy) on the LibriSpeech dataset. Further, we study the effect of two well-known techniques, Differentially Private SGD and Dropout, on the success of our method. We show that a dropout rate of 0.2 can reduce the speaker identity accuracy to 0% top-1 (0.5% top-5).
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
SCOPUS_ID:85019222484
A Method to Validate the Insertion of a New Concept in an Ontology
This paper presents a method to validate the insertion of a new concept into an ontology. The method builds on our previous work, which adds new concepts to a basic ontology using a general ontology (the general ontology contains all the concepts of the basic ontology). To verify the semantic relevance of an inserted concept, we propose a three-step method. First, we find the neighborhood of the concept C in the basic ontology Ob and store the corresponding semantic similarity values in a stack; the neighborhood consists of the concepts most similar to C in Ob. Second, we assess in the general ontology Og the semantic similarity between C and the neighborhood found in the first step. Finally, we evaluate the correlation between the values obtained in the two previous steps. The basic ontology is the ontology we work with, and the general ontology is the one used to align concepts with the basic ontology. The method yields a validated ontology after the update that adds the new concept. To illustrate the method, we use the whole of WordNet as the general (reference) ontology and a branch of WordNet as the basic ontology.
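A minimal sketch of the three-step validation: similarities of the new concept C to its neighborhood in the basic ontology, the same pairs scored in the general ontology, and the correlation between the two lists. The similarity callables, the toy tables, and the 0.8 acceptance threshold are placeholders, not values from the paper.

```python
# Three-step insertion check via correlation of similarity profiles.
import numpy as np

def validate_insertion(concept, neighborhood, sim_basic, sim_general,
                       threshold=0.8):
    in_basic = [sim_basic(concept, n) for n in neighborhood]      # step 1
    in_general = [sim_general(concept, n) for n in neighborhood]  # step 2
    r = np.corrcoef(in_basic, in_general)[0, 1]                   # step 3
    return r >= threshold, r

# Toy similarity tables for demonstration only.
BASIC = {("cat", "feline"): 0.90, ("cat", "pet"): 0.70, ("cat", "animal"): 0.50}
GENERAL = {("cat", "feline"): 0.85, ("cat", "pet"): 0.65, ("cat", "animal"): 0.55}

ok, r = validate_insertion("cat", ["feline", "pet", "animal"],
                           lambda c, n: BASIC[(c, n)],
                           lambda c, n: GENERAL[(c, n)])
print(f"accept insertion: {ok} (r = {r:.3f})")
```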
[ "Knowledge Representation", "Semantic Text Processing", "Semantic Similarity" ]
[ 18, 72, 53 ]
SCOPUS_ID:85074195908
A Methodological Framework for Dictionary and Rule-based Text Classification
Recent research on dictionary- and rule-based text classification either concentrates on improving classification quality for standard tasks like sentiment mining or describes applications to a specific domain, with the focus mainly on the underlying algorithmic approach. This work, in contrast, provides a general methodological approach to dictionary- and rule-based text classification based on a systematic literature analysis. The result is a process description that enables the application of these technologies to specific problems by guiding users through the major decision points, from the definition of the classification goals to the actual classification of texts.
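For concreteness, a bare-bones classifier of the kind this methodology targets: per-class term dictionaries plus one hand-written negation rule. The dictionaries and the rule below are illustrative, not taken from the paper.

```python
# Dictionary-and-rule classification: count dictionary hits per class,
# with a single negation rule flipping the vote of a negated term.
NEGATORS = {"not", "no", "never"}
DICTIONARY = {
    "positive": {"good", "great", "excellent"},
    "negative": {"bad", "poor", "terrible"},
}

def classify(text):
    """Vote for the class whose dictionary terms occur most often."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    scores = {label: 0 for label in DICTIONARY}
    for i, tok in enumerate(tokens):
        negated = i > 0 and tokens[i - 1] in NEGATORS
        for label, terms in DICTIONARY.items():
            if tok in terms:
                # Rule: a negated term votes for the opposite class.
                target = label
                if negated:
                    target = "negative" if label == "positive" else "positive"
                scores[target] += 1
    return max(scores, key=scores.get)

print(classify("the service was not good"))           # -> negative
print(classify("an excellent, truly great product"))  # -> positive
```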
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85075104634
A Methodology for Bilingual Lexicon Extraction from Comparable Corpora
Dictionary extraction using parallel corpora is well established. However, for many language pairs parallel corpora are a scarce resource, which is why in this work we discuss methods for dictionary extraction from comparable corpora. The aim is to push the boundaries of current approaches, which typically exploit correlations between co-occurrence patterns across languages, in several ways: 1) eliminating the need for initial lexicons by using a bootstrapping approach that requires only a few seed translations; 2) implementing a new approach that first establishes alignments between comparable documents across languages, and then computes cross-lingual alignments between words and multiword units; 3) improving the quality of computed word translations by applying an interlingua approach which, by relying on several pivot languages, allows an effective multi-dimensional cross-check; 4) investigating whether, by exploiting foreign citations, translations can even be derived from a single monolingual text corpus.
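The sketch below shows the classic co-occurrence projection that such methods extend: represent a source word by its co-occurrences with seed-lexicon entries, map the dimensions into the target language via the seed lexicon, and rank target candidates by cosine similarity. The corpora and seed pairs are toy stand-ins.

```python
# Seed-lexicon co-occurrence projection for bilingual lexicon induction.
import numpy as np

def cooc_vector(word, corpus, dims, window=2):
    """Co-occurrence counts of `word` with the seed-lexicon dimensions."""
    vec = np.zeros(len(dims))
    for sent in corpus:
        toks = sent.split()
        for i, t in enumerate(toks):
            if t != word:
                continue
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i and toks[j] in dims:
                    vec[dims[toks[j]]] += 1
    return vec

def cos(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b / denom)

seed = {"dog": "hund", "eats": "frisst"}  # tiny seed lexicon
src_corpus = ["the dog eats meat", "a dog eats bones"]
tgt_corpus = ["der hund frisst fleisch", "knochen liegen im garten"]

src_dims = {w: i for i, w in enumerate(seed)}        # source-side dimensions
tgt_dims = {seed[w]: i for i, w in enumerate(seed)}  # aligned via the seed

v_src = cooc_vector("meat", src_corpus, src_dims)
candidates = ["fleisch", "knochen", "garten"]
best = max(candidates,
           key=lambda w: cos(v_src, cooc_vector(w, tgt_corpus, tgt_dims)))
print(best)  # expected: "fleisch"
```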
[ "Machine Translation", "Information Extraction & Text Mining", "Text Generation", "Cross-Lingual Transfer", "Multilinguality" ]
[ 51, 3, 47, 19, 0 ]
http://arxiv.org/abs/1907.02784v1
A Methodology for Controlling the Emotional Expressiveness in Synthetic Speech -- a Deep Learning approach
In this project, we aim to build a text-to-speech system able to produce speech with controllable emotional expressiveness. We propose a methodology that addresses this problem in three main steps. The first is the collection of emotional speech data; we discuss the various formats of existing datasets and their usability for speech generation. The second is the development of a system that automatically annotates data with emotion/expressiveness features; we compare several transfer-learning techniques for extracting such a representation through other tasks and propose a method to visualize and interpret the correlation between vocal and emotional features. The third is the development of a deep-learning-based system that takes text and emotion/expressiveness as input and produces speech as output. We study the impact of fine-tuning from a neutral TTS towards an emotional TTS in terms of intelligibility and perception of the emotion.
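A toy sketch of the conditioning interface in the third step: the synthesis model consumes a text encoding concatenated with an emotion/expressiveness vector, so the same text can be rendered with different affect. The encoder, the "decoder" matrix, and the 4-dimensional emotion vector are all assumed placeholders, not the project's actual architecture.

```python
# Emotion-conditioned synthesis interface, reduced to a single matrix.
import numpy as np

rng = np.random.default_rng(2)
ENC_DIM, EMO_DIM, MEL_DIM = 16, 4, 8

W_dec = rng.normal(size=(MEL_DIM, ENC_DIM + EMO_DIM))  # stand-in "decoder"

def encode_text(text):
    # Placeholder text encoder: hash characters into a fixed-size vector.
    vec = np.zeros(ENC_DIM)
    for i, ch in enumerate(text):
        vec[(i + ord(ch)) % ENC_DIM] += 1.0
    return vec / max(1.0, np.linalg.norm(vec))

def synthesize(text, emotion_vec):
    conditioned = np.concatenate([encode_text(text), emotion_vec])
    return W_dec @ conditioned  # one "frame" per call, purely illustrative

neutral = np.zeros(EMO_DIM)
happy = np.array([1.0, 0.0, 0.0, 0.0])
# Same text, different emotion vector -> different output.
print(np.allclose(synthesize("hello", neutral), synthesize("hello", happy)))
```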
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
http://arxiv.org/abs/2004.07633v2
A Methodology for Creating Question Answering Corpora Using Inverse Data Annotation
In this paper, we introduce a novel methodology to efficiently construct a corpus for question answering over structured data. For this, we introduce an intermediate representation based on the logical query plan of a database query, which we call Operation Trees (OTs). This representation allows us to invert the annotation process without losing flexibility in the types of queries we generate, and it allows for fine-grained alignment of query tokens to OT operations. In our method, we randomly generate OTs from a context-free grammar; annotators then write the natural language question represented by each OT and assign its tokens to the OT operations. We apply the method to create a new corpus, OTTA (Operation Trees and Token Assignment), a large semantic parsing corpus for evaluating natural language interfaces to databases. We compare OTTA to Spider and LC-QuaD 2.0 and show that our methodology more than triples the annotation speed while maintaining the complexity of the queries. Finally, we train a state-of-the-art semantic parsing model on our data, showing that our corpus is a challenging dataset and that the token alignment can be leveraged to increase performance significantly.
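A minimal sketch of sampling query-plan-like trees from a context-free grammar, mirroring the OT generation step; annotators would then write the natural language question for each sampled tree. The toy grammar below is an assumption, not the one used to build OTTA.

```python
# Random generation of operation trees from a tiny context-free grammar.
import random

GRAMMAR = {
    "Query":   [["Project", "Query"], ["Filter", "Query"], ["Table"]],
    "Project": [["project(col)"]],
    "Filter":  [["filter(col = val)"]],
    "Table":   [["scan(table)"]],
}

def sample_tree(symbol="Query", rng=None, depth=0, max_depth=3):
    rng = rng or random.Random(3)
    if symbol not in GRAMMAR:        # terminal token
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:           # force the last, non-recursive rule
        rules = rules[-1:]
    rule = rng.choice(rules)
    return [symbol] + [sample_tree(s, rng, depth + 1, max_depth) for s in rule]

# Nested-list rendering of one sampled operation tree.
print(sample_tree())
```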
[ "Natural Language Interfaces", "Semantic Text Processing", "Semantic Parsing", "Question Answering" ]
[ 11, 72, 40, 27 ]