id | title | abstract | classification_labels | numerical_classification_labels
---|---|---|---|---|
SCOPUS_ID:85131410750
|
A Local context focus learning model for joint multi-task using syntactic dependency relative distance
|
Aspect-based sentiment analysis (ABSA) is a significant task in natural language processing. Although many ABSA systems have been proposed, the correlation between an aspect's sentiment polarity and the semantic information of its local context has not been a point of focus. Moreover, aspect term extraction and aspect sentiment classification are fundamental subtasks of ABSA, yet most existing systems fail to recognize the natural relation between these two tasks and therefore treat them as relatively independent. In this work, a local context focus method is proposed. It represents semantic distance using syntactic dependency relative distance, which is calculated on the basis of an undirected dependency graph. We introduce this method into a multi-task learning framework with a multi-head attention mechanism for the joint task of aspect term extraction and aspect sentiment classification. Compared with existing models, the proposed local context focus method measures semantic distance more precisely and helps our model capture more effective local semantic information. In addition, a multi-head attention mechanism is employed to further enhance the local semantic representation. Furthermore, the proposed model makes full use of the aspect term information and aspect sentiment information provided by the two subtasks, thereby improving overall performance. Experimental results on four datasets show that the proposed model outperforms single-task and multi-task models on the aspect term extraction and aspect sentiment classification tasks.
|
[
"Language Models",
"Low-Resource NLP",
"Semantic Text Processing",
"Information Retrieval",
"Term Extraction",
"Syntactic Text Processing",
"Aspect-based Sentiment Analysis",
"Sentiment Analysis",
"Responsible & Trustworthy NLP",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
80,
72,
24,
1,
15,
23,
78,
4,
36,
3
] |
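The syntactic dependency relative distance used by this model can be illustrated as a shortest-path computation over an undirected dependency graph. A minimal sketch in Python, assuming the parse is already available as (head, dependent) arcs; the example sentence, token indices and arcs are hypothetical:

```python
from collections import deque

def dependency_relative_distance(num_tokens, arcs, aspect_idx):
    """Shortest-path distance from each token to the aspect term over an
    undirected dependency graph, computed with breadth-first search."""
    adj = {i: [] for i in range(num_tokens)}
    for head, dep in arcs:          # treat dependency arcs as undirected edges
        adj[head].append(dep)
        adj[dep].append(head)
    dist = {aspect_idx: 0}
    queue = deque([aspect_idx])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return [dist.get(i, num_tokens) for i in range(num_tokens)]

# "The battery life is great" -- hypothetical parse, aspect = "battery" (index 1)
arcs = [(2, 1), (1, 0), (4, 2), (4, 3)]   # (head, dependent) pairs
print(dependency_relative_distance(5, arcs, aspect_idx=1))  # [1, 0, 1, 3, 2]
```

Tokens within some distance threshold of the aspect would then be treated as local context and the rest masked or down-weighted.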
SCOPUS_ID:85081345898
|
A Location Independent Machine Learning Approach for Early Fake News Detection
|
The spread of fake news on the internet presents increasing threats to national security, with the potential to incite public unrest and violence. However, detecting fake news is challenging because such articles are intentionally written to mislead. Some current methods cannot detect fake news early and require external information, such as the source, to assess articles. To tackle these challenges and improve the generalizability of the models, we adopted a text-based, location-independent machine learning approach. It employs two types of machine learning models. The first is a bag-of-words model, made more robust by stacking two levels of models. The second is neural networks that utilize pre-trained GloVe word embeddings, including (a) a one-dimensional convolutional neural network (CNN) and (b) a bidirectional long short-term memory network (BiLSTM). All models were assessed on various metrics (accuracy, recall, precision and F1) and achieved over 90% on the test set, making this an effective location-independent approach to detecting fake news at an early stage without reliance on external information.
|
[
"Reasoning",
"Fact & Claim Verification",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
8,
46,
17,
4
] |
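A minimal sketch of the stacked bag-of-words branch described above, using scikit-learn's TfidfVectorizer and StackingClassifier; the toy articles, labels and the choice of base and meta classifiers are assumptions, not the paper's exact configuration:

```python
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder articles and labels (1 = fake, 0 = real)
texts = ["shocking secret they don't want you to know",
         "council approves budget after public hearing",
         "miracle cure suppressed by doctors, sources say",
         "quarterly inflation figures released on friday"]
labels = [1, 0, 1, 0]

# Two-level stack: base models' predictions feed a second-level meta-classifier
stack = StackingClassifier(
    estimators=[("nb", MultinomialNB()), ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(),
    cv=2,  # tiny toy corpus; a real run would keep the default 5-fold CV
)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), stack)
model.fit(texts, labels)
print(model.predict(["revealed: the one trick banks hate"]))
```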
SCOPUS_ID:85137146707
|
A Logic Aware Neural Generation Method for Explainable Data-to-text
|
The most notable neural data-to-text approaches generate natural language from structured data relying on the surface form of the structured content, which ignores the underlying logical correlation between the input data and the target text. Moreover, identifying such logical associations and explaining them in natural language is desirable but not yet studied. In this paper, we introduce a practical data-to-text method for logic-critical scenarios, specifically for anti-money laundering applications. It involves detecting risks from input data and explaining any abnormal behaviors in natural language. The proposed method is a Logic Aware Neural Generation framework (LANG), a preliminary attempt to explore the integration of logic modeling and text generation. Concretely, we first convert expert rules to a logic graph. The model then utilizes a meta-path based encoder to exploit the expert knowledge. Besides, a retriever module with the encoded logic knowledge is used to bridge the gap between numeric input and target text. Finally, a rule-constrained loss is leveraged to improve the generation probability of tokens in rule-recalled statements to ensure accuracy. We conduct extensive experiments on anti-money laundering data. Results show that the proposed method significantly outperforms baselines in both objective measures, with relative 35% improvements in F1 score, and subjective measures, with 30% improvement in human preference.
|
[
"Explainability & Interpretability in NLP",
"Data-to-Text Generation",
"Text Generation",
"Responsible & Trustworthy NLP"
] |
[
81,
16,
47,
4
] |
http://arxiv.org/abs/2110.03323v3
|
A Logic-Based Framework for Natural Language Inference in Dutch
|
We present a framework for deriving inference relations between Dutch sentence pairs. The proposed framework relies on logic-based reasoning to produce inspectable proofs leading up to inference labels; its judgements are therefore transparent and formally verifiable. At its core, the system is powered by two ${\lambda}$-calculi, used as syntactic and semantic theories, respectively. Sentences are first converted to syntactic proofs and terms of the linear ${\lambda}$-calculus using a choice of two parsers: an Alpino-based pipeline, and Neural Proof Nets. The syntactic terms are then converted to semantic terms of the simply typed ${\lambda}$-calculus, via a set of hand designed type- and term-level transformations. Pairs of semantic terms are then fed to an automated theorem prover for natural logic which reasons with them while using the lexical relations found in the Open Dutch WordNet. We evaluate the reasoning pipeline on the recently created Dutch natural language inference dataset, and achieve promising results, remaining only within a $1.1-3.2{\%}$ performance margin to strong neural baselines. To the best of our knowledge, the reasoning pipeline is the first logic-based system for Dutch.
|
[
"Reasoning",
"Textual Inference",
"Syntactic Text Processing"
] |
[
8,
22,
15
] |
SCOPUS_ID:85065962278
|
A Logic-Based Question Answering System for Cultural Heritage
|
Question Answering (QA) systems attempt to find direct answers to user questions posed in natural language. This work presents a QA system for the closed domain of Cultural Heritage. Our solution gradually transforms input questions into queries that are executed on a CIDOC-compliant ontological knowledge base. Questions are processed by means of a rule-based syntactic classification module running an Answer Set Programming system. The proposed solution is being integrated into a fully-fledged commercial system developed within the PIUCULTURA project, funded by the Italian Ministry for Economic Development.
|
[
"Natural Language Interfaces",
"Question Answering"
] |
[
11,
27
] |
http://arxiv.org/abs/1310.4938v1
|
A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge
|
We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE we want to identify automatically the type of a logical relation between two input texts. In particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for a high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of results. For RTE we use logical inference of first-order employing model-theoretic techniques and automated reasoning tools. The inference is supported with problem-relevant background knowledge extracted automatically and on demand from external sources like, e.g., WordNet, YAGO, and OpenCyc, or other, more experimental sources with, e.g., manually defined presupposition resolutions, or with axiomatized general and common sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition determining the correctness and traceability of results.
|
[
"Semantic Text Processing",
"Syntactic Text Processing",
"Knowledge Representation",
"Reasoning",
"Textual Inference"
] |
[
72,
15,
18,
8,
22
] |
SCOPUS_ID:85143765239
|
A Logical Conceptualization of Knowledge on the Notion of Language Communication
|
The main objective of the paper is to provide a conceptual apparatus for a general logical theory of language communication. The aim is to outline a formal-logical theory of language in which the concepts of the phenomenon of language communication and of language communication in general are defined, and some conditions for their adequacy are formulated. The theory explicates the key notions of contemporary syntax, semantics, and pragmatics. It is formalized on two levels, token-level and type-level, and thus takes into account the dual (token and type) ontological character of linguistic entities. The basic notions of the theory (language communication, meaning and interpretation) are introduced on the second, type-level of formalization, and they require the prior formalization of some notions introduced on the first, token-level, among others the notion of an act of communication. Owing to the theory, it is possible to address the problems of adequacy both of empirical acts of communication and of language communication in general. All the conditions of adequacy of communication discussed in the paper are valid for one-way (sender-recipient) communication; nevertheless, they can also apply to the reverse direction (recipient-sender), and therefore concern the problem of two-way understanding in language communication.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
SCOPUS_ID:1542379838
|
A Logico-mathematic, Structural Methodology: Part I, the Analysis and Validation of Sub-literal (SubLit) Language and Cognition
|
In this first of three papers, a novel cognitive and psycholinguistic non-metric, non-quantitative methodology developed for the analysis and validation of unconscious cognition and meaning in ostensibly literal verbal narratives is presented. Unconscious referents are reconceptualized as sub-literal (SubLit) referents. An integrally systemic, structural, and internally consistent set of operations is delineated and instantiated. The method is related to aspects of two models: the first is logico-mathematic structure; the second is linguistic syntax. After initially framing the problem that the method addresses, along with some theoretical implications, historical precursors are briefly outlined. The method presents novel cognitive and linguistic operations. Though the method raises a number of issues of theory, research, and methodology, and makes a contribution to these areas, it stands independently, qua method.
|
[
"Reasoning",
"Numerical Reasoning",
"Psycholinguistics",
"Linguistics & Cognitive NLP"
] |
[
8,
5,
77,
48
] |
SCOPUS_ID:85084278034
|
A Logistic Regression Approach for Generating Movies Reputation Based on Mining User Reviews
|
The paper aims to present an approach for generating a single reputation value towards a target movie based on mining movie reviews and their attached ratings with the use of Logistic Regression classifier and Latent Semantic Indexing (LSI) method. The contribution of the paper is fourfold. First, we apply Logistic Regression classifier to determine the sentiment orientation of movie reviews (positive or negative). Second, we use LSI method and cosine similarity to compute the semantic similarity between reviews. Third, we compute a custom reputation value separately for positive opinions group and negative opinions group. Finally, we use the weighted arithmetic mean to generate a single reputation value towards the target movie.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
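The paper does not spell out its custom reputation formula, so the following is only a rough sketch of the LSI-plus-weighted-mean idea: TF-IDF features reduced with truncated SVD (LSI), cosine similarity between reviews, and a similarity-weighted rating mean per sentiment group. The review texts, ratings and the specific weighting scheme are hypothetical:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = ["great plot and acting", "boring and far too long",
           "a masterpiece of suspense", "terrible pacing, weak script"]
ratings = np.array([9, 3, 10, 2])      # ratings attached to the reviews (1-10)
sentiment = np.array([1, 0, 1, 0])     # e.g. output of a trained LogisticRegression

# LSI: TF-IDF followed by truncated SVD, then cosine similarity between reviews
tfidf = TfidfVectorizer().fit_transform(reviews)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
sim = cosine_similarity(lsi)

def group_reputation(mask):
    # Rating mean weighted by each review's average similarity within its group
    weights = sim[np.ix_(mask, mask)].mean(axis=1)
    return np.average(ratings[mask], weights=weights)

pos, neg = sentiment == 1, sentiment == 0
reputation = (pos.sum() * group_reputation(pos)
              + neg.sum() * group_reputation(neg)) / len(reviews)
print(round(reputation, 2))   # single reputation value for the target movie
```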
SCOPUS_ID:85101083795
|
A Long Short-Term Memory (LSTM) Model for Business Sentiment Analysis Based on Recurrent Neural Network
|
Business sentiment analysis (BSA) is one of the significant and popular topics of natural language processing; it is a kind of sentiment analysis technique applied for business purposes. Different categories of sentiment analysis techniques, such as lexicon-based techniques and different types of machine learning algorithms, are applied for sentiment analysis on different languages such as English, Hindi and Spanish. In this paper, long short-term memory (LSTM), a recurrent neural network, is applied for business sentiment analysis. The LSTM model is used in a modified approach that prevents the vanishing gradient problem affecting the conventional recurrent neural network (RNN). A product review dataset is used to apply the modified RNN model; 70% of the data is used to train the LSTM and the remaining 30% is used for testing. The result of this modified RNN model is compared with other conventional RNN models, and the proposed model performs better, achieving around 91.33% accuracy. By applying this model, any business company or e-commerce site can identify from customer feedback which types of products customers like or dislike, and can evaluate its marketing strategy based on the customer reviews.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
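A minimal sketch of the described setup, an LSTM classifier with a 70/30 train/test split, using tf.keras; the synthetic token-id data stands in for tokenized product reviews, and the layer sizes are assumptions:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder data: sequences of token ids (a real run would tokenize reviews)
num_words, seq_len = 5000, 100
X = np.random.randint(1, num_words, size=(1000, seq_len))
y = np.random.randint(0, 2, size=(1000,))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)  # 70/30

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(num_words, 64),
    tf.keras.layers.LSTM(64),                    # gated memory mitigates vanishing gradients
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=2, batch_size=32, validation_data=(X_te, y_te))
```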
SCOPUS_ID:85127481481
|
A Long-Text Classification Method of Chinese News Based on BERT and CNN
|
Text Classification is an important research area in natural language processing (NLP) that has received a considerable amount of scholarly attention in recent years. However, real Chinese online news is characterized by long text, a large amount of information and complex structure, which also reduces the accuracy of Chinese long text classification as a result. To improve the accuracy of long text classification of Chinese news, we propose a BERT-based local feature convolutional network (LFCN) model including four novel modules. First, to address the limitation of Bidirectional Encoder Representations from Transformers (BERT) on the length of the max input sequence, we propose a named Dynamic LEAD-n (DLn) method to extract short texts within the long text based on the traditional LEAD digest algorithm. In Text-Text Encoder (TTE) module, we use BERT pretrained language model to complete the sentence-level feature vector representation of a news text and to capture global features by using the attention mechanism to identify correlated words in text. After that, we propose a CNN-based local feature convolution (LFC) module to capture local features in text, such as key phrases. Finally, the feature vectors generated by the different operations over several different periods are fused and used to predict the category of a news text. Experimental results show that the new method further improves the accuracy of long text classification of Chinese news.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
12,
24,
3
] |
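The abstract does not give the exact Dynamic LEAD-n rule, so the sketch below only illustrates the general idea: keep leading sentences while staying within BERT's input budget. The character-count proxy for subword tokens and the sample text are assumptions:

```python
import re

def dynamic_lead_n(text, max_len=510):
    """Keep leading sentences while the running character count stays within
    BERT's input budget (a crude proxy; a real system would count subword
    tokens with the BERT tokenizer)."""
    sentences = [s for s in re.split(r"(?<=[。！？.!?])", text.strip()) if s]
    picked, count = [], 0
    for sent in sentences:
        if count + len(sent) > max_len:
            break
        picked.append(sent)
        count += len(sent)
    return "".join(picked)

print(dynamic_lead_n("春运返程高峰来临。铁路部门加开列车。请旅客提前购票。", max_len=18))
```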
SCOPUS_ID:85106551999
|
A Longitudinal Study of Spanish Language Growth and Loss in Young Spanish-English Bilingual Children
|
This longitudinal study examined trajectories of Spanish language growth and loss in 34 Spanish-English bilingual children attending an English immersion school. Narrative retell language samples were collected in Spanish across 3 years using wordless, picture storybooks. Digital audio recordings were transcribed, coded, and analyzed for mean length of utterance in words, proportion of grammatical utterances, and moving-average type-token ratio. Code switching into English was also coded at the word level to determine its potential impact on moving-average type-token ratio. Growth curve models were used to estimate the change over time for each outcome measure. The findings indicated that the Spanish-English bilingual participants who attended an English immersion school demonstrated loss of Spanish grammatical and lexical production (as defined by encompassing maintenance and or significant deceleration) from preschool through kindergarten, and that the degree of loss in lexical production was impacted by whether code switching was included or excluded. The findings are discussed in the context of clinical decision-making when assessing the Spanish expressive language abilities of this specific population.
|
[
"Code-Switching",
"Multilinguality"
] |
[
7,
0
] |
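The moving-average type-token ratio used as an outcome measure above is commonly computed as the mean type-token ratio over a sliding window of fixed size, which removes the measure's dependence on sample length. A small sketch (the window size and sample utterance are arbitrary; code-switched English words could be filtered out first, mirroring the study's word-level coding):

```python
def moving_average_ttr(tokens, window=50):
    """Moving-average type-token ratio: mean TTR over all windows of fixed size."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

utterance = "el perro corre y el gato duerme y el perro ladra".split()
print(round(moving_average_ttr(utterance, window=5), 3))
```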
SCOPUS_ID:85137330497
|
A Look at the Sociointeractional Discourse Analysis Between Caregivers and Institutionalized Older Women in Bathing Care
|
Based on Ethnomethodology and Conversation Analysis, and anchored in Sociointeractional Discourse Analysis, this article sought to analyze the speech of health professionals and its association with stigmas related to institutionalized older women at the time of bathing care. Data were collected using field notes and recordings of interaction moments between caregivers and residents of a selected Long-Term Care Facility (LTCF) for Older Adults. Corroborating the literature on the theme, the results indicate some common characteristics regarding stigmas experienced by older persons living in LTCFs, such as age and shelter as vectors of stigmatization, repetition and confirmation of stigma, as well as refusal of the stigma by the desiring being. Despite the limitations inherent to this study, further research should be replicated in other care contexts, analyzing the interactions established with this population and thus providing a broader view of these interactions and their repercussions on the actors involved. In compliance with Resolution 466/2012, the project was submitted to the Research Ethics Committee and approved under Opinion no. 3,534,827, CAAE: 17143019.3.0000.5588.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
71,
72,
17,
4
] |
http://arxiv.org/abs/1908.08917v1
|
A Lost Croatian Cybernetic Machine Translation Program
|
We are exploring the historical significance of research in the field of machine translation conducted by Bulcsu Laszlo, Croatian linguist, who was a pioneer in machine translation in Yugoslavia during the 1950s. We are focused on two important seminal papers written by members of his research group from 1959 and 1962, as well as their legacy in establishing a Croatian machine translation program based around the Faculty of Humanities and Social Sciences of the University of Zagreb in the late 1950s and early 1960s. We are exploring their work in connection with the beginnings of machine translation in the USA and USSR, motivated by the Cold War and the intelligence needs of the period. We also present the approach to machine translation advocated by the Croatian group in Yugoslavia, which is different from the usual logical approaches of the period, and his advocacy of cybernetic methods, which would be adopted as a canon by the mainstream AI community only decades later.
|
[
"Programming Languages in NLP",
"Machine Translation",
"Multimodality",
"Text Generation",
"Multilinguality"
] |
[
55,
51,
74,
47,
0
] |
http://arxiv.org/abs/1705.10754v1
|
A Low Dimensionality Representation for Language Variety Identification
|
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with its specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ~35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality --and increasing the big data suitability-- to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85077256437
|
A Low Effort Approach to Structured CNN Design Using PCA
|
Deep learning models hold state of the art performance in many fields, yet their design is still based on heuristics or grid search methods that often result in overparametrized networks. This work proposes a method to analyze a trained network and deduce an optimized, compressed architecture that preserves accuracy while keeping computational costs tractable. Model compression is an active field of research that targets the problem of realizing deep learning models in hardware. However, most pruning methodologies tend to be experimental, requiring large compute and time intensive iterations of retraining the entire network. We introduce structure into model design by proposing a single shot analysis of a trained network that serves as a first order, low effort approach to dimensionality reduction, by using PCA (Principal Component Analysis). The proposed method simultaneously analyzes the activations of each layer and considers the dimensionality of the space described by the filters generating these activations. It optimizes the architecture in terms of number of layers, and number of filters per layer without any iterative retraining procedures, making it a viable, low effort technique to design efficient networks. We demonstrate the proposed methodology on AlexNet and VGG style networks on the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve an optimized architecture with a reduction of up to 3.8X and 9X in the number of operations and parameters respectively, while trading off less than 1% accuracy. We also apply the method to MobileNet, and achieve 1.7X and 3.9X reduction in the number of operations and parameters respectively, while improving accuracy by almost one percentage point.
|
[
"Responsible & Trustworthy NLP",
"Green & Sustainable NLP"
] |
[
4,
68
] |
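A minimal sketch of the core idea, assuming a layer's activations have been flattened to a (samples, filters) matrix: the number of principal components needed to retain almost all of the variance serves as a first-order estimate of how many filters the layer actually needs. The variance threshold and the synthetic data are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def significant_filters(activations, variance_kept=0.999):
    """Number of principal components explaining the desired fraction of the
    activations' variance: a first-order estimate of the filters a layer needs."""
    cumulative = np.cumsum(PCA().fit(activations).explained_variance_ratio_)
    return int(np.searchsorted(cumulative, variance_kept) + 1)

# Synthetic activations for a 256-filter layer whose outputs really span ~40 directions
latent = np.random.randn(2048, 40)
acts = latent @ np.random.randn(40, 256)
print(significant_filters(acts))   # prints roughly 40
```

Repeating this analysis per layer yields the compressed layer widths in a single shot, with no iterative retraining.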
SCOPUS_ID:85135064820
|
A Low-Cost, Controllable and Interpretable Task-Oriented Chatbot: With Real-World After-Sale Services as Example
|
Though widely used in industry, traditional task-oriented dialogue systems suffer from three bottlenecks: (i) difficult ontology construction (e.g., intents and slots); (ii) poor controllability and interpretability; (iii) annotation-hungry. In this paper, we propose to represent utterance with a simpler concept named Dialogue Action, upon which we construct a tree-structured TaskFlow and further build task-oriented chatbot with TaskFlow as core component. A framework is presented to automatically construct TaskFlow from large-scale dialogues and deploy online. Our experiments on real-world after-sale customer services show TaskFlow can satisfy the major needs, as well as reduce the developer burden effectively.
|
[
"Explainability & Interpretability in NLP",
"Natural Language Interfaces",
"Responsible & Trustworthy NLP",
"Dialogue Systems & Conversational Agents"
] |
[
81,
11,
4,
38
] |
SCOPUS_ID:85131591413
|
A Low-Latency Streaming On-Device Automatic Speech Recognition System Using a CNN Acoustic Model on FPGA and a Language Model on Smartphone
|
This paper presents a low-latency streaming on-device automatic speech recognition system for inference. It consists of a hardware acoustic model implemented in a field-programmable gate array, coupled with a software language model running on a smartphone. The smartphone works as the master of the automatic speech recognition system and runs a three-gram language model on the acoustic model output to increase accuracy. The smartphone calculates and sends the Mel-spectrogram of an audio stream with 80 ms unit input from the built-in microphone of the smartphone to the field-programmable gate array every 80 ms. After ~35 ms, the field-programmable gate array sends the calculated word-piece probability to the smartphone, which runs the language model and generates the text output on the smartphone display. The worst-case latency from the audio-stream start time to the text output time was measured as 125.5 ms. The real-time factor is 0.57. The hardware acoustic model is derived from a time-depth-separable convolutional neural network model by reducing the number of weights from 115 M to 9.3 M to decrease the number of multiply-and-accumulate operations by two orders of magnitude. Additionally, the unit input length is reduced from 1000 ms to 80 ms, and to minimize the latency, no future data are used. The hardware acoustic model uses an instruction-based architecture that supports any sequence of convolutional neural network, residual network, layer normalization, and rectified linear unit operations. For the LibriSpeech test-clean dataset, the word error rate of the hardware acoustic model was 13.2% and for the language model, it was 9.1%. These numbers were degraded by 3.4% and 3.2% from the original convolutional neural network software model due to the reduced number of weights and the lowering of the floating-point precision from 32 to 16 bit. The automatic speech recognition system has been demonstrated successfully in real application scenarios.
|
[
"Language Models",
"Programming Languages in NLP",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
55,
72,
70,
47,
10,
74
] |
SCOPUS_ID:85107366405
|
A Lucrative Model for Identifying Potential Adverse Effects from Biomedical Texts by Augmenting BERT and ELMo
|
This study copes with extracting adverse effects (AEs) from biomedical texts. An adverse effect is a noxious, unintended, and undesired effect caused by the administration of an external entity such as medication, dietary supplement, radiotherapy, and others. A binary classifier is proposed to filter out irrelevant texts from AE assertive texts and a sequence labeling model for extracting the AE mentions. Both models are built by consolidating the cutting-edge deep learning technologies: Bidirectional Encoder Representations from Transformers (BERT), Embeddings from Language Models (ELMo), and Bidirectional Gated Recurrent Units. The performances of our models are evaluated on an Adverse Drug Effects dataset constructed by sampling from Medline case studies. Both models perform significantly better than previously published models with an F1 score of 0.906 for binary classification and an approximate match F1 score of 0.925 for text labeling. The proposed models can be adapted to any tasks with similar interests.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85145880055
|
A META HEURISTIC MULTI-VIEW DATA ANALYSIS OVER UNCONDITIONAL LABELED MATERIAL: AN INTELLIGENCE OCMHAMCV
|
Artificial intelligence provides powerful research tools, such as data mining and clustering, for reducing big-data processing effort. Clustering in multi-label categorical analysis yields a large amount of relevant data that supports the evaluation and portrayal of attributes as trending notions. In a wide range of scenarios, data from many dimensions may be used to produce efficient clustering results. Existing multi-view clustering techniques have become outdated, and they all provide less accurate results when a single clustering of the input data is applied. Numerous data groupings are conceivable due to the diversity of multi-dimensional data, each with its own unique set of viewpoints. When dealing with multi-view labelled data, obtaining quantifiable and realistic cluster results can be a challenge. This study provides a unique strategy termed OCMHAMCV (Orthogonal Constrained Meta Heuristic Adaptive Multi-View Cluster). First, an OMF approach is used to cluster similarly labelled sample data into prototypes of low-dimensional clusters. Adaptive heuristics then integrate complementary data from several dimensions, reducing the computational complexity of the analysis while representing the data under an appropriate orthonormality-constrained viewpoint. Studies on massive data sets reveal that the proposed method outperforms more traditional multi-view clustering techniques in scalability and efficiency. Performance measures of accuracy 98.32%, sensitivity 93.42%, F1-score 98.53% and index score 96.02% were attained, a good improvement. It is therefore argued that the proposed methodology is suitable for document-summarization applications in future scientific analysis.
|
[
"Information Extraction & Text Mining",
"Summarization",
"Text Generation",
"Text Clustering",
"Responsible & Trustworthy NLP",
"Green & Sustainable NLP"
] |
[
3,
30,
47,
29,
4,
68
] |
SCOPUS_ID:85122575584
|
A METHOD TO IMPROVE EXACT MATCHING RESULTS IN COMPRESSED TEXT USING PARALLEL WAVELET TREE
|
The process of searching on the World Wide Web (WWW) is growing steadily, and users around the world rely on it daily. On the WWW, the size of the text corpus is constantly increasing at an exponential rate, so an efficient indexing algorithm is needed that reduces both space and time during the search process. This paper proposes a new technique that utilizes Word-Based Tagging Coding compression implemented with a Parallel Wavelet Tree, called WBTC_PWT. WBTC_PWT uses the word-based tagging coding encoding technique to reduce the space complexity of the index and uses a parallel wavelet tree to reduce the time it takes to construct indexes. The technique exploits the features of compressed pattern matching to minimize search time complexity. All the unique words present in the text corpus are divided into different levels according to a word frequency table, and a separate wavelet tree is built for each level in parallel. Compared to other existing search algorithms based on compressed text, the proposed WBTC_PWT search method is significantly faster and reduces the chance of false matching results.
|
[
"Tagging",
"Indexing",
"Information Retrieval",
"Syntactic Text Processing"
] |
[
63,
69,
24,
15
] |
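For illustration, here is a minimal sequential wavelet tree over a word sequence supporting rank queries (occurrences of a word within a prefix), which is the core structure the paper parallelizes; the level-wise word-frequency partitioning and the parallel construction of WBTC_PWT are not reproduced here:

```python
class WaveletTree:
    """Minimal sequential wavelet tree over a list of symbols, supporting
    rank(symbol, i): number of occurrences of symbol in seq[:i]."""
    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) == 1:          # leaf: every item is the same symbol
            self.bits, self.n = None, len(seq)
            return
        mid = len(self.alphabet) // 2
        left_set = set(self.alphabet[:mid])
        self.bits = [0 if s in left_set else 1 for s in seq]
        self.ranks = [0]                     # prefix sums of the bitmap
        for b in self.bits:
            self.ranks.append(self.ranks[-1] + b)
        self.left = WaveletTree([s for s in seq if s in left_set], self.alphabet[:mid])
        self.right = WaveletTree([s for s in seq if s not in left_set], self.alphabet[mid:])

    def rank(self, symbol, i):
        if self.bits is None:
            return min(i, self.n)
        mid = len(self.alphabet) // 2
        if symbol in self.alphabet[:mid]:    # descend left with the count of 0-bits
            return self.left.rank(symbol, i - self.ranks[i])
        return self.right.rank(symbol, self.ranks[i])

words = "to be or not to be".split()
wt = WaveletTree(words)
print(wt.rank("be", 6))   # -> 2 occurrences of "be" in the first 6 words
```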
SCOPUS_ID:85055487445
|
A MIML-LSTM neural network for integrated fine-grained event forecasting
|
Societal event forecasting plays a significant role in crisis warning and emergency management. Most traditional prediction methods focus on predicting whether specific events would happen or not. However, the results of these methods are not always informative for the policy makers due to excessive frequency, lack of details and supportive evidence about the predictive events. In this paper, we focus on the problem of integrated fine-grained event forecasting which is to predict the attributes of events and find out related precursors. Given a collection of news sequences, we transform the problem into a Multi-Instance Multi-Label learning (MIML) framework. Considering the sequential influence of events and hybridity of news, we implement the MIML framework based on the Long Short-Term Memory (LSTM) neural network, and propose the model called MIML-LSTM to extract three levels of deep features which represent news article, daily status and news sequence respectively. Based on this hierarchical representation, we design a compositional objective function for joint training of each part. Taking multiple types of protest event prediction as a demonstration, we evaluate the proposed model on news streams from three countries in Latin America, and the experimental results show the effectiveness of our model on integrated fine-grained event forecasting.
|
[
"Language Models",
"Semantic Text Processing",
"Information Extraction & Text Mining"
] |
[
52,
72,
3
] |
SCOPUS_ID:85009195509
|
A MISSING-WORD TEST COMPARISON OF HUMAN AND STATISTICAL LANGUAGE MODEL PERFORMANCE
|
A suite of missing-word tests based on text extracts selected randomly from two different text corpora provided a metric which was used in an evaluation of human performance, an evaluation of language model performance and a cross-comparison of the performances. The effects of providing different sizes of context for the missing word (ranging from two words to three sentences) were examined, and two main patterns became clear from the results: • surprisingly, for tests where the language model was able to take advantage of all the context information provided (i.e. where the context consisted of just a few words), it outperformed humans; • conversely, humans outperformed the language model when the size of context given for the missing word exceeded the size that the language model could usefully employ in its probability calculations (typically more than six words).
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:84957830652
|
A MODULAR ARCHITECTURE SUPPORTING MULTIPLE HYPOTHESES FOR CONVERSION OF TEXT TO PHONETIC AND LINGUISTIC ENTITIES
|
In this communication we devise a distributed modular scheme for organizing the different types of knowledge needed in the first phase of text-to-speech conversion, namely the conversion of the input text to a symbolic notation, representing phonetic transcription together with syntactic, semantic and pragmatic information. We argue that the proposed scheme will provide for easier formulation and maintenance of knowledge, and that its modularity will also simplify the implementation of different modalities, e.g. various reading modes.
|
[
"Phonetics",
"Speech & Audio in NLP",
"Syntactic Text Processing",
"Multimodality"
] |
[
64,
70,
15,
74
] |
https://aclanthology.org//2007.mtsummit-papers.61/
|
A MT system from Turkmen to Turkish employing finite state and statistical methods
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:85131265568
|
A MULTI DOMAIN KNOWLEDGE ENHANCED MATCHING NETWORK FOR RESPONSE SELECTION IN RETRIEVAL-BASED DIALOGUE SYSTEMS
|
Building a human-machine conversational agent is a core problem in Artificial Intelligence, where knowledge has to be integrated into the model effectively. In this paper, we propose a Multi Domain Knowledge Enhanced Matching Network (MDKEMN) to build retrieval-based dialogue systems that could leverage both explicit knowledge graph and implicit domain knowledge for response selection. Specifically, our MDKEMN leverages the self-attention mechanism of a single-stream Transformer to make deep interactions among the dialogue context, response candidate and external knowledge graph, and finally returns the matching degree of each context-response pair under the external knowledge. Furthermore, to leverage the implicit domain knowledge from all domains to improve the performance of each domain, we combine the multi-domain datasets for training and then finetune the pretrained model on each domain. Experimental results show (1) the effectiveness of both explicit and implicit knowledge incorporating and (2) the superiority of our approach over previous baselines on a Chinese multi-domain knowledge-driven dialogue dataset.
|
[
"Semantic Text Processing",
"Structured Data in NLP",
"Knowledge Representation",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Information Retrieval",
"Multimodality"
] |
[
72,
50,
18,
11,
38,
24,
74
] |
SCOPUS_ID:0038791970
|
A MULTILINGUAL TEXT PROCESSING ENGINE FOR THE PAPAGENO TEXT-TO-SPEECH SYNTHESIS SYSTEM
|
Automatic synthesis of speech from arbitrary text requires two basic operations: linguistic analysis of input text and speech waveform generation. The achieved quality of the second stage very much depends on the reliability and richness of information generated in the first stage. In this paper we discuss possibilities and problems of text analysis for multilingual speech synthesis. The language independent approach requires the separation of all the language specific information into the language specific inventory, which is composed of different lexica, various dictionaries and lists. The remaining core represents the language independent text-processing engine.
|
[
"Multimodality",
"Speech & Audio in NLP",
"Multilinguality"
] |
[
74,
70,
0
] |
SCOPUS_ID:85103551874
|
A Machine Learning Analysis of the Recent Environmental and Resource Economics Literature
|
We use topic modeling to study research articles in environmental and resource economics journals in the period 2000–2019. Topic modeling based on machine learning allows us to identify and track latent topics in the literature over time and across journals, and further to study the role of different journals in different topics and the changing emphasis on topics in different journals. The most prevalent topics in environmental and resource economics research in this period are growth and sustainable development, and theory and methodology. Topics on climate change and energy economics have emerged with the strongest upward trends. When we look at our results across journals, we see that journals have different topical profiles and that many topics mainly appear in one or a few selected journals. Further investigation reveals latent semantic structures across research themes that only insiders would be aware of.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
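A minimal sketch of this kind of topic-modeling pipeline with scikit-learn's LatentDirichletAllocation; the toy abstracts and the number of topics are placeholders, not the paper's corpus or settings:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "carbon tax and emission trading policy",
    "fishery stocks and resource extraction over time",
    "renewable energy investment under climate uncertainty",
    "optimal growth and sustainable development theory",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

vocab = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-4:][::-1]       # four highest-weight words per topic
    print(f"topic {k}:", [vocab[i] for i in top])
# Topic prevalence over time would follow from averaging lda.transform(counts)
# per publication year, and per journal for the journal profiles.
```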
SCOPUS_ID:85062225018
|
A Machine Learning Approach for Graph-Based Page Segmentation
|
We propose a new approach for segmenting a document image into its page components (e.g. text, graphics and tables). Our approach consists of two main steps. In the first step, a set of scores corresponding to the output of a convolutional neural network, one for each of the possible page component categories, is assigned to each connected component in the document. The labeled connected components define a fuzzy over-segmentation of the page. In the second step, spatially close connected components that are likely to belong to a same page component are grouped together. This is done by building an attributed region adjacency graph of the connected components and modeling the problem as an edge removal problem. Edges are then kept or removed based on a pre-trained classifier. The resulting groups, defined by the connected subgraphs, correspond to the detected page components. We evaluate our method on the ICDAR2009 dataset. Results show that our method effectively segments pages, being able to detect the nine types of page components. Furthermore, as our approach is based on simple machine learning models and graph-based techniques, it should be easily adapted to the segmentation of a variety of document types.
|
[
"Structured Data in NLP",
"Text Classification",
"Multimodality",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
50,
36,
74,
24,
3
] |
SCOPUS_ID:85075690608
|
A Machine Learning Approach for Hot Topic Detection in News
|
We explore the related problems of topic detection within streams of news sources collected by newspaper aggregators. In this paper, we focus on evaluating the effectiveness of combining preprocessing techniques with document clustering techniques, and we propose to use the Pearson product-moment correlation coefficient to capture the relations between keywords and thereby retrieve the topics behind them. The proposed approach is evaluated on over 10,000 manually labelled articles, and the experimental results are superior to state-of-the-art methods with respect to precision.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
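A small sketch of the keyword-correlation idea, assuming keywords are represented by their document-occurrence profiles and strongly correlated keywords are then grouped into topics; the documents and threshold logic are hypothetical:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["election results spark protest", "protest march over election fraud",
        "stock market rallies on earnings", "earnings season lifts stock prices"]
vec = CountVectorizer()
X = vec.fit_transform(docs).toarray().T      # rows = keywords, columns = documents
terms = list(vec.get_feature_names_out())

corr = np.corrcoef(X)                        # Pearson correlation between keyword profiles
i, j = terms.index("election"), terms.index("protest")
print(f"corr(election, protest) = {corr[i, j]:.2f}")
# Keywords whose pairwise correlation exceeds a threshold would be merged
# into one candidate topic.
```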
SCOPUS_ID:85131927856
|
A Machine Learning Approach for Multiclass Sentiment Analysis of Twitter Data: A Review
|
Sentiment analysis, or opinion mining, is a prominent and highly demanding research topic today. The main idea behind it is to recognize users' opinions and emotions towards an aspect of a service or product on the basis of text. Sentiment analysis involves mining text, constructing lexicons, extracting features and finally finding the polarity of the text. Even though a large amount of research has been conducted in this field through different methods, opinion mining is still considered a challenging area for research. Most prior research concentrated on binary or ternary classification of sentiments such as positive, negative and neutral. Some studies have analyzed Twitter sentiment based on ordinal regression, but by turning the ordinal regression problem into a binary classification problem. The aim of this study is to review multiclass sentiment analysis of Twitter text data using an automated, i.e., machine learning, approach. This review paper focuses on existing work on Twitter sentiment analysis with multiple polarity categorization and explores gaps and future scope in this research area.
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
SCOPUS_ID:85131915998
|
A Machine Learning Approach for Sentiment Analysis of Book Reviews in Bangla Language
|
With the advent of technology, sentiment polarity detection has recently piqued the interest of NLP researchers. Sentiment analysis determines the underlying meaning of an article. Due to the COVID-19 pandemic, online shopping has become the safest way of shopping, yet product quality and service issues remain. Our target is to analyze book reviews written in the Bangla language and classify them as positive or negative. For this, a total of 5500 user-generated Bengali reviews were collected from various book review pages on social media, and sentiment analysis was applied. Five different algorithms were then used for prediction. Among them, Random Forest provides the maximum accuracy, 98.39%.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85103742695
|
A Machine Learning Approach for the Classification of Methamphetamine Dealers on Twitter in Thailand.
|
This research presents a method to classify messages from Twitter (tweets) related to methamphetamine. The messages are classified into three classes: normal, seller and buyer. The models presented in this research are Multinomial Naive Bayes, Multi-Class LSTM, and Hierarchical LSTM. Model training uses both a balanced and an imbalanced dataset. The text used for model training is tokenized with four tokenizers: Tlex+, Lexto+, Attacut, and Deepcut. To study the effect on model performance, we divide the data across the different datasets and tokenizers. The results showed that all models could classify the messages into the three classes. The most effective model built from the balanced dataset is the Hierarchical LSTM model using the Lexto+ tokenizer, which provides the highest accuracy, and the most effective model built from the imbalanced dataset is the Multi-Class LSTM model using the Lexto+ tokenizer. The latter gave the highest accuracy, but the F1-score of the Hierarchical LSTM model was better in each class. The text classification model for methamphetamine-related content uses Twitter messages, most of which contain Thai grammatical errors and heavy slang usage. We found that Lexto+ is the best tokenizer for building a model, although the difference from the other tokenizers is small. On the other hand, the best dataset for building the model is the balanced dataset, which significantly affects model performance.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Syntactic Text Processing",
"Text Segmentation",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
15,
21,
36,
3
] |
SCOPUS_ID:85113863360
|
A Machine Learning Approach to Analyze Fashion Styles from Large Collections of Online Customer Reviews
|
Social media and online reviews have changed customer behavior when buying fashion products online. Online customer reviews also provide opportunities for businesses to deliver improved customer experiences. This study aims to develop fashion style models, based on online customer reviews from e-commerce systems to analyze customer preferences. Topic Modeling with Latent Dirichlet Allocation (LDA) was performed on a large collection of online customer reviews in different categories to investigate customer preferences by building fashion style models in a semantic space. Online product review data from Amazon, one of the leading online shopping websites globally, and Rakuten, one of the representative online shopping websites in Japan, were used to reveal the hidden topics in the review texts. The obtained topic definitions were manually examined, and the results were used to build computational models reflecting semantic relationships. The obtained fashion style models can potentially help marketing and product design specialists better understand customer preferences in the e-commerce fashion industry.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85132773969
|
A Machine Learning Approach to Analyze Mental Health from Reddit Posts
|
Reddit is a platform with a heavy focus on its community forums and hence is comparatively unique from other social media platforms. It is divided into sub-Reddits, resulting in distinct topic-specific communities. The convenience of expressing thoughts, a flexibility of describing emotions, inter-operability of using jargon, the security of user identity makes Reddit forums replete with mental health-relevant data. Timely diagnosis and detection of early symptoms are one of the main challenges of several mental health conditions for which they have been affecting millions of people across the globe. In this paper, we use a dataset collected from Reddit, containing posts from different sub-Reddits, to extract and interpret meaningful insights using natural language processing techniques followed by supervised machine learning algorithms to build a predictive model to analyze different states of mental health. The paper aims to discover how a user’s psychology is evident from the language used, which can be instrumental in identifying early symptoms in vulnerable groups. This work presents a comparative analysis of two popular feature engineering techniques along with commonly used classification algorithms.
|
[
"Responsible & Trustworthy NLP",
"Ethical NLP",
"Information Extraction & Text Mining"
] |
[
4,
17,
3
] |
http://arxiv.org/abs/2211.07705v1
|
A Machine Learning Approach to Classifying Construction Cost Documents into the International Construction Measurement Standard
|
We introduce the first automated models for classifying natural language descriptions provided in cost documents called "Bills of Quantities" (BoQs), popular in the infrastructure construction industry, into the International Construction Measurement Standard (ICMS). The models we deployed and systematically evaluated for multi-class text classification are learnt from a dataset of more than 50 thousand descriptions of items retrieved from 24 large infrastructure construction projects across the United Kingdom. We describe our approach to language representation and subsequent modelling to examine the strength of contextual semantics and temporal dependency of language used in construction project documentation. To do that, we evaluate two experimental pipelines for inferring ICMS codes from text, on the basis of two different language representation models and a range of state-of-the-art sequence-based classification methods, including recurrent and convolutional neural network architectures. The findings indicate that a highly effective and accurate ICMS automation model is within reach, with reported accuracy results above 90% F1 score on average across 32 ICMS categories. Furthermore, due to the specific nature of the language used in BoQs (short, largely descriptive and technical), we find that simpler models compare favourably in achieving higher accuracy. Our analysis suggests that information is more likely embedded in local key features of the descriptive text, which explains why a simpler generic temporal convolutional network (TCN) exhibits comparable memory to recurrent architectures of the same capacity and subsequently outperforms them at this task.
|
[
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
72,
36,
12,
24,
3
] |
http://arxiv.org/abs/1903.06765v1
|
A Machine Learning Approach to Comment Toxicity Classification
|
Nowadays, derogatory comments are often made, not only in offline environments but also, and especially, in online environments such as social networking websites and online communities. An identification-and-prevention system for all social networking websites and applications, including all communities in the digital world, is therefore a necessity. In such a system, the identification block should identify any negative online behaviour and signal the prevention block to take action accordingly. This study aims to analyse any piece of text and detect different types of toxicity such as obscenity, threats, insults and identity-based hatred. The labelled Wikipedia Comment Dataset prepared by Jigsaw is used for the purpose. A 6-headed machine learning tf-idf model has been built, with each head trained separately, yielding a mean validation accuracy of 98.08% and an absolute validation accuracy of 91.61%. Such an automated system should be deployed to foster healthy online conversation.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
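A minimal sketch of a "6-headed" tf-idf model, interpreted here as one independent binary classifier per toxicity label over shared TF-IDF features, via scikit-learn's OneVsRestClassifier; the toy comments and label matrix are assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
comments = ["you are awful", "have a nice day", "I will find you", "thanks for the edit"]
Y = np.array([[1, 0, 1, 0, 1, 0],      # one binary column ("head") per label
              [0, 0, 0, 0, 0, 0],
              [1, 1, 0, 1, 0, 1],
              [0, 0, 0, 0, 0, 0]])

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(comments, Y)
pred = model.predict(["you are awful and I will find you"])[0]
print(dict(zip(labels, pred)))
```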
SCOPUS_ID:85028022723
|
A Machine Learning Approach to Evaluating Translation Quality
|
We explored supervised machine learning (ML) techniques to understand and predict the adequacy and fluency of English-Spanish machine translation. Five experiments were conducted using three classifiers in Weka, an open-source ML tool. We found that the highest performance was achieved by applying a dimensionality reduction approach to the classification task, which included collapsing a numeric scale of quality to two categories: high quality and low quality. Our results showed that the Support Vector Machine classifier performed the best at predicting the adequacy (65.65%) and fluency (65.77%) of the translations. More research is needed to explore the methodologies of applying ML to translation evaluation.
|
[
"Machine Translation",
"Information Extraction & Text Mining",
"Text Classification",
"Text Generation",
"Information Retrieval",
"Multilinguality"
] |
[
51,
3,
36,
47,
24,
0
] |
SCOPUS_ID:85136872428
|
A Machine Learning Approach to Model HRI Research Trends in 2010–2021
|
The present study collects a large number of HRI-related research studies and analyzes the research trends from 2010 to 2021. Through the topic modeling technique, our ML model is able to retrieve the dominant research factors. The preliminary results reveal five important topics: handover, privacy, robot tutor, skin deformation, and trust. Our results show that research in the HRI domain can be divided into two general directions, namely technical and human aspects of the use of robotic applications. At this point, we are enlarging the research pool to collect more studies and refining our ML model to strengthen the robustness of the results.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85127955130
|
A Machine Learning Approach to POS Tagging Case study: Amazighe language
|
The development of automatic processing tools for the Amazighe language is hampered by the lack of resources. In this sense, one of the main objectives of the work reported in this article is to provide this language with a morphosyntactically annotated corpus and a higher-precision system for morphosyntactic labelling. To do this, we started by building a corpus of over 60,000 words, which was first used to carry out the lexical segmentation step. Secondly, this corpus made it possible to train different machine learning and deep learning models in order to develop a part-of-speech (POS) tagger for the Amazighe language.
|
[
"Tagging",
"Syntactic Text Processing"
] |
[
63,
15
] |
http://arxiv.org/abs/1810.06639v4
|
A Machine Learning Approach to Persian Text Readability Assessment Using a Crowdsourced Dataset
|
An automated approach to text readability assessment is essential to a language and can be a powerful tool for improving the understandability of texts written and published in that language. However, the Persian language, which is spoken by over 110 million speakers, lacks such a system. Unlike other languages such as English, French, and Chinese, very limited research studies have been carried out to build an accurate and reliable text readability assessment system for the Persian language. In the present research, the first Persian dataset for text readability assessment was gathered and the first model for Persian text readability assessment using machine learning was introduced. The experiments showed that this model was accurate and could assess the readability of Persian texts with a high degree of confidence. The results of this study can be used in a number of applications such as medical and educational text readability evaluation and have the potential to be the cornerstone of future studies in Persian text readability assessment.
|
[
"Semantic Text Processing",
"Text Complexity"
] |
[
72,
42
] |
SCOPUS_ID:85113361236
|
A Machine Learning Approach to Sentiment Analysis on Web Based Feedback
|
The advent of this new era of technology has brought forward new and convenient ways to express views and opinions. This is a major factor for the vast influx of data that we experience every day. People have found out new ways to communicate their feelings and emotions to others through written texts sent over the Internet. This is exactly where the field of sentiment analysis comes into existence. This paper focuses on analyzing the reviews of various applications on the Internet and to understand whether they are positive or negative. For achieving this objective, we initially pre-process the data by performing data cleaning and removal of stop words. TF-IDF method is used to convert the cleaned data into a vectorised form. Finally, the machine learning algorithms: Naïve Bayes, Support Vector Machine and Logistic Regression are applied and their comparative analysis is performed on the basis of accuracy, precision and recall parameters. Our proposed approach has achieved an accuracy of 92.1% and has outperformed many other existing approaches.
|
[
"Sentiment Analysis"
] |
[
78
] |
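A minimal sketch of the described comparison: TF-IDF features followed by Naïve Bayes, SVM and Logistic Regression, each evaluated on accuracy, precision and recall. The toy reviews and the split ratio are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

texts = ["love this app", "crashes constantly", "works perfectly", "waste of money",
         "really helpful", "full of bugs", "great interface", "terrible support"]
y = [1, 0, 1, 0, 1, 0, 1, 0]                  # 1 = positive, 0 = negative

# Stop-word removal plus TF-IDF vectorisation, as in the described pipeline
X = TfidfVectorizer(stop_words="english").fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1, stratify=y)

for name, clf in [("NB", MultinomialNB()), ("SVM", LinearSVC()),
                  ("LR", LogisticRegression())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          accuracy_score(y_te, pred),
          precision_score(y_te, pred, zero_division=0),
          recall_score(y_te, pred, zero_division=0))
```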
SCOPUS_ID:85127489679
|
A Machine Learning Approach to Track COVID-19 Pandemic using Sentiment Analysis
|
Coronavirus disease, or COVID-19, is one of the most frightening and infectious diseases of the twenty-first century. Since the outbreak of COVID-19 in Wuhan, China, numerous research studies have been conducted in this sector. At the preliminary stage there was not sufficient numeric data for research, but text data such as trending topics on social media or patients sharing experiences about their symptoms provide enough material to track the course of the Coronavirus (SARS-CoV-2). Aside from the health complications related to COVID-19, there has also been huge public panic following the pandemic. Sentiment analysis helps to learn the emotions of a vast number of people about any particular topic. In this paper, we have used sentiment analysis methods to observe the public reaction to the COVID-19 pandemic and people's experience of the ongoing vaccination process. Machine-learning-based (ML-based) classification algorithms are implemented for text classification. Finally, the accuracy of the classification models is also calculated for further prediction.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
http://arxiv.org/abs/cmp-lg/9607022v1
|
A Machine Learning Approach to the Classification of Dialogue Utterances
|
The purpose of this paper is to present a method for automatic classification of dialogue utterances and the results of applying that method to a corpus. Superficial features of a set of training utterances (which we will call cues) are taken as the basis for finding relevant utterance classes and for extracting rules for assigning these classes to new utterances. Each cue is assumed to partially contribute to the communicative function of an utterance. Instead of relying on subjective judgments for the tasks of finding classes and rules, we opt for using machine learning techniques to guarantee objectivity.
|
[
"Text Classification",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
11,
38,
24,
3
] |
SCOPUS_ID:85051127172
|
A Machine Learning Based Approach for Opinion Mining on Social Network Data
|
Micro-blogging has been widely used for voicing opinions in the public domain. One such website, Twitter, is a point of attraction for researchers in areas such as prediction of electoral events, movie box office, stock markets, consumer brands, etc. In this paper, we focus on using Twitter for the task of opinion mining. We explore how combining different parameters affects the accuracy of machine-learning algorithms with respect to consumer products. We combine feature extraction methods with a parameter known as negation handling; negation words can drastically change the meaning of a sentence and hence the sentiment expressed in it. We experimented with the supervised learning methods Naïve Bayes (NB) and Maximum Entropy (MaxEnt), along with the iterative optimization algorithms Generalized Iterative Scaling (GIS) and Improved Iterative Scaling (IIS). Experimental evaluations show that our proposed technique outperforms the baselines: we obtained a specificity of 99.29% using the MaxEnt-IIS classifier.
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
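The record above singles out negation handling as the distinguishing parameter but does not spell out the rule. A common scheme, shown below purely as an assumed illustration, prefixes tokens that follow a negation word with "NOT_" until the next punctuation mark, so that "good" and "not good" become distinct features for the NB/MaxEnt classifiers.

```python
# Hedged sketch of one common negation-handling scheme (the paper's exact
# rule is not stated): tokens after a negation word get a "NOT_" prefix
# until the next punctuation mark.
import re

NEGATIONS = {"not", "no", "never", "cannot"}

def handle_negation(text: str) -> list[str]:
    tokens = re.findall(r"[\w']+|[.,!?;]", text.lower())
    out, negating = [], False
    for tok in tokens:
        if tok in NEGATIONS or tok.endswith("n't"):
            negating = True
            out.append(tok)
        elif tok in ".,!?;":
            negating = False
            out.append(tok)
        else:
            out.append("NOT_" + tok if negating else tok)
    return out

print(handle_negation("The battery is not good, but the screen is great!"))
# ['the', 'battery', 'is', 'not', 'NOT_good', ',', 'but', 'the', ...]
```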
SCOPUS_ID:85106415770
|
A Machine Learning Based Framework for Enterprise Document Classification
|
Enterprise Content Management (ECM) systems store large numbers of documents that have to be conveniently labelled for easy management and searching. The classification rules behind the labelling process are informal and tend to change, which complicates the labelling even more. To address this challenge, we propose a machine learning based document classification framework (Framework) that allows continuous retraining of the classification bots, easy analysis of the bot training results, and tuning of further training runs. Documents and metadata fields typical for ECM systems are used in the research. The Framework comprises the selection of classification and vectorization methods and the configuration of the methods' hyperparameters and general learning-related parameters. The model provides the user with visual tools to analyze classification performance and to tune the further steps of the learning. Two challenges are addressed in particular: handling informal and eventually changing criteria for document classification, and dealing with imbalanced data sets. A prototype of the proposed Framework was developed, and a short analysis of its performance is presented in the article.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85062596565
|
A Machine Learning Based Natural Language Question and Answering System for Healthcare Data Search using Complex Queries
|
A number of use cases in healthcare are well suited as Big Data applications. In healthcare, large volumes of data arrive and are stored as unstructured big data or as structured data in relational databases. In either case, Big Data is coming to embrace SQL as a common querying tool. Developing a question and answering tool for users who lack specialized skill sets and use natural language for complex queries is a challenge; such a tool needs to identify significant details, draw inferences and evaluate hypotheses the way domain experts do. Although NLIDB systems have been developed to translate natural language queries into a database language for non-technical end users, most of the questions addressed by these systems are factoid questions, and answering complex queries remains an open research problem. The proposed auxiliary system is machine-learning based and extends an existing NLIDB system to help it answer complex queries. The auxiliary system mimics the way human experts reach answers to complex queries. Instead of building a set of simple conditional statements as rules and invoking them as a sequence of chained actions, the proposed system decomposes complex queries into multiple simple factoid sub-queries, with the goal of generating answers to each sub-query with the existing NLIDB system from the data explicitly stored in the database. The underlying NLIDB system takes the sub-queries as input in parallel and produces query results from the data stored in the relational database. The answers to the sub-queries and the desired output labels are used to train the model, and the multiclass classifier produced from the training is used to predict and answer valid input queries.
|
[
"Information Retrieval",
"Question Answering",
"Natural Language Interfaces",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
27,
11,
36,
3
] |
SCOPUS_ID:85128216022
|
A Machine Learning Based Sameness Recognition Method for Power System Management Information
|
State Grid Corporation's power construction project information comes from multiple regions and from sub-projects of different scales, and entries are often missing or irregular, which makes supervision difficult and complicates project acceptance and management. Based on the random forest algorithm, our work proposes a sameness analysis method for engineering problem description texts and realizes the task of automatically classifying engineering defects. This work first defines the concept of sameness recognition in power construction projects and performs Chinese word segmentation on the problem description texts in the standardized engineering information dataset. It then constructs a feature vector space. Using the random forest algorithm, this work conducts sameness analysis and defect classification on the construction problem texts and demonstrates the effectiveness of the proposed method through a case study.
|
[
"Text Classification",
"Syntactic Text Processing",
"Text Segmentation",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
15,
21,
24,
3
] |
SCOPUS_ID:85063769591
|
A Machine Learning Based Sentiment Analysis by Selecting Features for Predicting Customer Reviews
|
Nowadays people can publicly express opinions and views in favour of and/or against any service, issue, product, event, or policy. With the rapid advancement of the Internet, people share their feedback on the web in huge numbers. Analyzing this large volume of reviews, a task known as opinion mining or sentiment analysis, can be crucial for providers seeking to improve their services and products; its ultimate goal is to identify the emotions expressed within the reviews. In this paper, we analyse these reviews and classify them into positive or negative opinions. We present a classification method based on the Support Vector Machine (SVM) that improves accuracy by separating reviews into two classes, positive and negative. Initially, reviews are collected from Amazon.in (reviews about electronic products, especially mobile brands) and pre-processed using the WordNet tool. To classify a review/comment, we select features evaluated from the user review comments: Information Gain combined with the Fast Correlation-Based Filter (FCBF) is used for feature selection, and the SVM classifier then assigns the classes. An intensive experimental study shows the efficiency of these enhancements and demonstrates better performance in terms of precision, recall and F-measure.
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
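The two-step recipe above (Information Gain plus FCBF for feature selection, then an SVM) can be approximated as follows. scikit-learn has no FCBF implementation, so mutual information, a close relative of Information Gain, stands in; a faithful version would rank features by symmetrical uncertainty and drop redundant ones as FCBF does. `X_train` and `y_train` are assumed review-derived features and labels.

```python
# Sketch: information-based feature selection followed by a two-class SVM.
# mutual_info_classif is an assumed stand-in for Information Gain + FCBF.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_model(k: int = 100):
    return make_pipeline(
        SelectKBest(mutual_info_classif, k=k),  # keep k most informative features
        SVC(kernel="linear"),                   # positive vs. negative classes
    )

# Usage (X_train, y_train assumed): build_model().fit(X_train, y_train)
```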
SCOPUS_ID:85141733119
|
A Machine Learning Method for Customer Sentiment Analysis on Social Media
|
Customer Data analysis is a significant part of different ventures utilizing figuring applications, for example, E-business and online shopping. Enormous information is utilized for advancing items which gives better availability among retailers and customers. These days, individuals consistently utilize online advancements to think about the best shops for purchasing better items. This shopping experience furthermore, assessment of the customer’s shop can be seen by the client experience shared across online media stages. Another client while looking through a shop needs data about manufacturing date and manufacturing price, offers, quality, and ideas which must be given by the past client experience. The MRP and MRD are as of now accessible on the item cover or mark. A few methodologies have been utilized for anticipating the item subtleties however not giving precise data. This paper is persuaded toward applying Machine Learning algorithms for picking up, breaking down, and arranging the item data and the shop data dependent on the client experience. The accuracy of the proposed method is 99.4%.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85140882832
|
A Machine Learning Method for Prediction of Stock Market Using Real-Time Twitter Data
|
Finances are one of the key requirements for performing any useful activity. Financial markets, e.g., stock markets, forex, and mercantile exchanges, provide the opportunity for anyone to invest and generate finances. However, to reap maximum benefits from these markets, effective decision making is required to identify trade directions, e.g., going long/short, by analyzing all the influential factors, e.g., price action, economic policies, and supply/demand estimation, in a timely manner. In this regard, analysis of financial news and Twitter posts plays a significant role in predicting the future behavior of financial markets, estimating public sentiment, and estimating systematic/idiosyncratic risk. In this paper, we analyze Twitter posts and Google Finance data to predict the future behavior of the stock markets (one of the key financial markets) in a particular time frame, i.e., hourly, daily, weekly, etc., through a novel StockSentiWordNet (SSWN) model. The proposed SSWN model extends the standard opinion lexicon SentiWordNet (SWN) with terms specifically related to stock markets, and is used to train an extreme learning machine (ELM) and a recurrent neural network (RNN) for stock price prediction. The experiments are performed on two datasets, Sentiment140 and a Twitter dataset, achieving an accuracy of 86.06%. Findings show that our work outperforms state-of-the-art approaches with respect to overall accuracy. In the future, we plan to enhance the capability of our method by adding other popular sources, e.g., Facebook and Google News.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85069507352
|
A Machine Learning Model for Average Fuel Consumption in Heavy Vehicles
|
This paper advocates a data summarization approach based on distance rather than the traditional time period when developing individualized machine learning models for fuel consumption. This approach is used in conjunction with seven predictors derived from vehicle speed and road grade to produce a highly predictive neural network model for average fuel consumption in heavy vehicles. The proposed model can easily be developed and deployed for each individual vehicle in a fleet in order to optimize fuel consumption over the entire fleet. The predictors of the model are aggregated over fixed window sizes of distance traveled. Different window sizes are evaluated and the results show that a 1 km window is able to predict fuel consumption with a 0.91 coefficient of determination and mean absolute peak-to-peak percent error less than 4% for routes that include both city and highway duty cycle segments.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
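The key idea in the record above is summarizing raw telemetry over fixed windows of distance traveled rather than over time. A minimal sketch of that aggregation step follows; the column names and aggregates are illustrative assumptions, not the paper's seven exact predictors.

```python
# Sketch: group raw samples into 1 km distance windows and aggregate them
# into per-window predictors plus a fuel target. Column names are assumed.
import pandas as pd

def aggregate_by_distance(df: pd.DataFrame, window_km: float = 1.0) -> pd.DataFrame:
    """df columns: distance_km (cumulative), speed, grade, fuel (per sample)."""
    window = (df["distance_km"] // window_km).astype(int)
    return df.groupby(window).agg(
        mean_speed=("speed", "mean"),
        std_speed=("speed", "std"),
        mean_grade=("grade", "mean"),
        fuel_in_window=("fuel", "sum"),  # regression target per window
    )
```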
http://arxiv.org/abs/2109.09014v1
|
A Machine Learning Pipeline to Examine Political Bias with Congressional Speeches
|
Computational methods for modeling political bias in social media face several challenges due to the heterogeneity, high dimensionality, multiple modalities, and scale of the data. Political bias in social media has been studied from multiple viewpoints, such as media bias, political ideology, echo chambers, and controversies, using machine learning pipelines. Most current methods rely heavily on manually labeled ground-truth data for the underlying political bias prediction tasks. Limitations of such methods include human-intensive labeling, labels tied to a specific problem, and the inability to determine the near-future bias state of a social media conversation. In this work, we address these problems and present machine learning approaches to study political bias in two ideologically diverse social media forums, Gab and Twitter, without the availability of human-annotated data. Our proposed methods exploit transcripts collected from political speeches in the US Congress to label the data, achieving accuracies of 70.5% and 65.1% on Twitter and Gab data respectively in predicting political bias. We also present a machine learning approach that combines features from cascades and text to forecast a cascade's political bias with an accuracy of about 85%.
|
[
"Multimodality",
"Ethical NLP",
"Speech & Audio in NLP",
"Responsible & Trustworthy NLP"
] |
[
74,
17,
70,
4
] |
SCOPUS_ID:84991619945
|
A Machine Learning approach for classification of sentence polarity
|
Opinion Mining is the process used to determine the attitude/opinion/emotion expressed by a person about a particular topic. Analyzing opinions is an integral part of decision making. In the era of the web, a person who wants to buy a product will look into the reviews and comments given by experienced users online. But reading all the reviews available on the web is a tedious task, so people are interested in checking whether the reviews recommend buying a product or not: if many reviews recommend the product, the user concludes that it is worth buying; otherwise, not. In this study, a machine learning approach is applied to the TripAdvisor dataset in order to develop an efficient review classifier. To carry out this work, style markers are computed for each review. In the next stage, significant style markers are recognized with the help of a suitable feature selection method. The reviews can then be classified as positive or negative by a classifier built on the style markers that characterize the nature of the reviews.
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
SCOPUS_ID:85145438542
|
A Machine Learning based Approach to Identify User Interests from Social Data
|
Social media platforms like Twitter, Facebook, Instagram, etc., are a common source for extracting information about individuals, such as their needs, interests, and opinions. Our major contribution in this paper is to identify user interests and desires related to the fashion industry in Pakistan. Since people in Pakistan mostly write tweets and reviews in Roman Urdu, the dataset we focused on in this research comprised Roman Urdu tweets and Google Maps reviews. From the literature, we observed that little effort has been made on Roman Urdu tweets and reviews because it is a low-resource language. In terms of methodology, we applied LDA, LSA, and BERT for topic modeling; VADER combined with TextBlob and DistilBERT for sentiment analysis; and K-Means for identifying user clusters with similar interests. In our experiments, we used 15,000 tweets and 6,000 Google reviews. We were able to create five distinct clusters for each brand; these clusters were further used to track users based on their interests. We evaluated the performance of our approach and validated it empirically using Cohen's kappa, achieving a score of 0.45, which shows moderate agreement between human and machine.
|
[
"Language Models",
"Semantic Text Processing",
"Text Clustering",
"Sentiment Analysis",
"Information Extraction & Text Mining"
] |
[
52,
72,
29,
78,
3
] |
http://arxiv.org/abs/2211.14321v1
|
A Machine Learning, Natural Language Processing Analysis of Youth Perspectives: Key Trends and Focus Areas for Sustainable Youth Development Policies
|
Investing in children and youth is a critical step towards inclusive, equitable, and sustainable development for current and future generations. Several international agendas for accomplishing common global goals emphasize the need for active youth participation and engagement in sustainable development. The 2030 Agenda for Sustainable Development emphasizes the need for youth engagement and the inclusion of youth perspectives as an important step toward addressing each of the 17 Sustainable Development Goals (SDGs). The aim of this study is to analyze youth perspectives, values, and sentiments towards the issues addressed by the 17 SDGs through social network analysis using machine learning. Social network data collected during 7 major sustainability conferences aimed at engaging children and youth are analyzed using natural language processing techniques for sentiment analysis. These data are then categorized using a natural language processing text classifier, trained on a sample of the social network data from the 7 conferences, for a deeper understanding of youth perspectives in relation to the SDGs. Demographic and location attributes and features identified through machine learning are utilized to identify bias and demographic differences in age, gender, and race among youth. Using natural language processing, the qualitative data collected from over 7 countries in 3 languages are systematically translated, categorized, and analyzed, revealing key trends and focus areas for sustainable youth development policies. The obtained results reveal young people's depth of knowledge on sustainable development and their attitudes towards each of the 17 SDGs. The findings of this study serve as a guide toward better understanding the interests, roles, and perspectives of children and youth in achieving the goals of Agenda 2030.
|
[
"Green & Sustainable NLP",
"Responsible & Trustworthy NLP",
"Sentiment Analysis"
] |
[
68,
4,
78
] |
SCOPUS_ID:85148025762
|
A Machine Learning-Based Mobile Chatbot for Crop Farmers
|
Agriculture remains the basis of the country's economy, providing most citizens with their main sources of livelihood, such as food, employment, income and foreign exchange, as well as raw materials for the manufacturing sector. Despite the great need for economic advancement in crop farming, agriculture seems limited in some parts of the country, as many people go in search of white-collar jobs due to a lack of adequate information and knowledge about modern farming technologies. The inability of farmers in rural and sub-urban areas to access agricultural knowledge and real-time information on the latest farming practices, which would enhance informed decision making on soil properties, seeds, fertilizers, pests, modern agricultural tools, and agro-best practices, leads to poor crop productivity. This work aims to provide a mobile chatbot for crop farmers in Uyo and its environs. The dataset used was obtained from the Akwa Ibom State Ministry of Agriculture and from farmers, using a combination of two classic research methods: questionnaires and interviews. An ontology-based representation of the obtained dataset is used to train the chatbot using a hybridized machine learning approach that combines word shuffling with the Jaccard similarity algorithm. The resulting chatbot is backed by a knowledge base that provides useful answers to questions, advice and recommendations on specific farming concerns. The chatbot will also give the government a platform to reach out to farmers in the state and obtain feedback on governance through agricultural services.
|
[
"Natural Language Interfaces",
"Knowledge Representation",
"Semantic Text Processing",
"Dialogue Systems & Conversational Agents"
] |
[
11,
18,
72,
38
] |
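Of the two matching components named above, the Jaccard similarity step is easy to make concrete. The sketch below matches an incoming question against stored knowledge-base questions as word sets; the entries are hypothetical and the word-shuffling stage is omitted.

```python
# Sketch: answer retrieval by Jaccard similarity over word sets.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

knowledge_base = {  # hypothetical entries, not the Akwa Ibom dataset
    "which fertilizer suits maize on sandy soil": "Use NPK 15-15-15 ...",
    "how do i control pests on cassava": "Apply integrated pest management ...",
}

def answer(query: str) -> str:
    best = max(knowledge_base, key=lambda q: jaccard(query, q))
    return knowledge_base[best]

print(answer("what fertilizer should i use for maize"))
```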
SCOPUS_ID:85139208454
|
A Machine Learning-Based Technique with Intelligent WordNet Lemmatize for Twitter Sentiment Analysis
|
Along with the birth of the Internet, the fast growth of mobile devices has democratised content production owing to the widespread usage of social media, resulting in an explosion of short informal texts. Twitter is a microblogging and social networking service on which millions of short messages are posted. Twitter analysis addresses the problem of interpreting users' tweets in terms of ideas, interests, and views in a range of settings and fields. This type of study can be useful for a variety of researchers and applications that need to know people's perspectives on a given topic or event. Although sentiment analysis of these texts is useful for a variety of reasons, it is typically seen as a difficult undertaking because the messages are frequently short, informal, noisy, and rich in linguistic ambiguities such as polysemy. Furthermore, most contemporary sentiment analysis algorithms assume clean data. In this paper, we offer a machine-learning-based sentiment analysis method that extracts Term Frequency-Inverse Document Frequency (TF-IDF) features and applies an intelligent WordNet lemmatizer to improve the quality of tweets by removing noise. We also utilise a Random Forest classifier to detect the emotion of a tweet. To validate the performance of the proposed approach, we conduct extensive tests on publicly accessible datasets, and the findings reveal that the suggested technique significantly outperforms baseline sentiment classification on multi-class emotion text data.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85125967115
|
A Machine Learning-Based Tool for Exploring Covid-19 Scientific Literature
|
The advent of the COVID-19 pandemic caused by the SARS-CoV-2 virus has caused serious damage in different areas. It has prompted thousands of researchers from different disciplines (biology, medicine, artificial intelligence, economics, etc.) to publish a very large number of scientific articles in a very short period to answer questions related to the pandemic. This abundance of literature, however, raised another problem: it has become extremely difficult for a researcher or a decision-maker to stay up to date with the latest scientific advances or to locate scientific articles related to a specific aspect of the pandemic. In this paper, we present an intelligent machine learning tool that automatically organizes a large dataset of COVID-19-related scientific literature and visualizes it in a way that helps these users navigate the dataset and locate the sought documents easily. The documents are first pre-processed and transformed into numerical features. These features are then passed through a deep denoising autoencoder followed by the Uniform Manifold Approximation and Projection (UMAP) technique to reduce their dimensionality to a 2D space. The projected data are then clustered with an agglomerative clustering algorithm. This is followed by a topic modeling step, performed using Latent Dirichlet Allocation (LDA), in order to assign a label to each cluster. Finally, the documents are visualized in an interactive interface that we developed. The experiments we conducted show that our tool is efficient and useful.
|
[
"Topic Modeling",
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
9,
3,
29
] |
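The projection-and-clustering core of the pipeline above can be sketched as follows. TF-IDF stands in for the denoising-autoencoder features, the synthetic `abstracts` list is a placeholder for the COVID-19 corpus, and the umap-learn and scikit-learn packages are assumed.

```python
# Sketch: reduce document features to 2D with UMAP, then cluster them
# agglomeratively; a topic model (e.g., LDA) would then label each cluster.
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
import umap  # pip install umap-learn

abstracts = ([f"covid vaccine efficacy trial report {i}" for i in range(60)]
             + [f"lockdown economic impact analysis study {i}" for i in range(60)])

X = TfidfVectorizer(max_features=5000).fit_transform(abstracts).toarray()
X2d = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
clusters = AgglomerativeClustering(n_clusters=10).fit_predict(X2d)
print(clusters[:10])
```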
SCOPUS_ID:85148467292
|
A Machine Learning-Sentiment Analysis on Monkeypox Outbreak: An Extensive Dataset to Show the Polarity of Public Opinion From Twitter Tweets
|
Research on sentiment analysis has proven to be very useful in public health, particularly in analyzing infectious diseases. As the world recovers from the onslaught of the COVID-19 pandemic, concerns are rising that another pandemic, known as monkeypox, might hit the world again. Monkeypox is an infectious disease that has been reported in over 73 countries across the globe. This sudden outbreak has become a major concern for many individuals and health authorities. Different social media channels have presented discussions, views, opinions, and emotions about the monkeypox outbreak. Social media sentiments often result in panic, misinformation, and stigmatization of some minority groups. Therefore, accurate information, guidelines, and health protocols related to this virus are critical. We aim to analyze public sentiment on the recent monkeypox outbreak, with the purpose of helping decision-makers gain a better understanding of public perceptions of the disease. We hope that government and health authorities will find the work useful in crafting health policies and mitigation strategies to control the spread of the disease and to guard against its misrepresentation. Our study was conducted in two stages. In the first stage, we collected over 500,000 multilingual tweets related to monkeypox posted on Twitter and then performed sentiment analysis on them using VADER and TextBlob to annotate the extracted tweets as positive, negative, or neutral. The second stage of our study involved the design, development, and evaluation of 56 classification models. Stemming and lemmatization techniques were used for vocabulary normalization. Vectorization was based on the CountVectorizer and TF-IDF methodologies. K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest, Logistic Regression, Multilayer Perceptron (MLP), Naïve Bayes, and XGBoost were deployed as learning algorithms. Performance evaluation was based on accuracy, F1 score, precision, and recall. Our experimental results showed that the model developed using TextBlob annotation + lemmatization + CountVectorizer + SVM yielded the highest accuracy, about 0.9348.
|
[
"Polarity Analysis",
"Sentiment Analysis"
] |
[
33,
78
] |
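Stage one above labels tweets with VADER. A minimal sketch of that annotation rule follows, using VADER's conventional +/-0.05 compound-score cut-offs; the paper does not state its exact thresholds.

```python
# Sketch: three-way tweet labelling from VADER's compound score.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def label(tweet: str) -> str:
    c = analyzer.polarity_scores(tweet)["compound"]
    return "positive" if c >= 0.05 else "negative" if c <= -0.05 else "neutral"

print(label("Monkeypox cases are rising and I am really worried"))
```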
SCOPUS_ID:85061379799
|
A Machine Reading Comprehension-Based Approach for Featured Snippet Extraction
|
The extraction of featured snippets can be considered a Question Answering (QA) problem. This paper presents a featured snippet extraction system that employs machine reading comprehension (MRC) techniques. Specifically, we first analyze the characteristics of questions of different types and their corresponding answers. We then classify a given question into one of various types, which is incorporated as a key feature in the subsequent model configuration. Based on that, we present a model to extract candidate passages from recalled documents in an MRC fashion. Next, a novel MRC model with multiple stages of attention is proposed to extract answers from the selected passages. Last, in the answer re-ranking stage, we design a question-type-adaptive model to produce the final answer. The experimental results on two open-domain QA datasets clearly validate the effectiveness of our system and models in featured snippet extraction.
|
[
"Information Extraction & Text Mining",
"Question Answering",
"Natural Language Interfaces",
"Reasoning",
"Machine Reading Comprehension"
] |
[
3,
27,
11,
8,
37
] |
SCOPUS_ID:85135766291
|
A Machine Speech Chain Approach for Dynamically Adaptive Lombard TTS in Static and Dynamic Noise Environments
|
Recent end-to-end text-to-speech synthesis (TTS) systems have successfully synthesized high-quality speech. However, TTS speech intelligibility degrades in noisy environments because most of these systems were not designed to handle noisy environments. Several works attempted to address this problem by using offline fine-tuning to adapt their TTS to noisy conditions. Unlike machines, humans never perform offline fine-tuning. Instead, they speak with the Lombard effect in noisy places, where they dynamically adjust their vocal effort to improve the audibility of their speech. This ability is supported by the speech chain mechanism, which involves auditory feedback passing from speech perception to speech production. This paper proposes an alternative approach to TTS in noisy environments that is closer to the human Lombard effect. Specifically, we implement Lombard TTS in a machine speech chain framework to synthesize speech with dynamic adaptation. Our TTS performs adaptation by generating speech utterances based on the auditory feedback that consists of the automatic speech recognition (ASR) loss as the speech intelligibility measure and the speech-to-noise ratio (SNR) prediction as power measurement. Two versions of TTS are investigated: non-incremental TTS with utterance-level feedback and incremental TTS (ITTS) with short-term feedback to reduce the delay without significant performance loss. Furthermore, we evaluate the TTS systems in both static and dynamic noise conditions. Our experimental results show that auditory feedback enhanced the TTS speech intelligibility in noise.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Multimodality"
] |
[
52,
72,
70,
74
] |
https://aclanthology.org//2018.iwslt-1.6/
|
A Machine Translation Approach for Modernizing Historical Documents Using Backtranslation
|
Human language evolves with the passage of time. This makes historical documents hard for contemporary people to comprehend and thus limits their accessibility to scholars specialized in the time period in which a given document was written. Modernization aims at breaking this language barrier and increasing the accessibility of historical documents to a broader audience. To do so, it generates a new version of a historical document, written in the modern version of the document's original language. In this work, we propose several machine translation approaches for modernizing historical documents. We tested these approaches in different scenarios, obtaining very encouraging results.
|
[
"Multilinguality",
"Machine Translation",
"Ethical NLP",
"Text Generation",
"Responsible & Trustworthy NLP"
] |
[
0,
51,
17,
47,
4
] |
SCOPUS_ID:85130093774
|
A Machine Translation Framework Based on Neural Network Deep Learning: from Semantics to Feature Analysis
|
This paper uses an encoder-decoder framework based on semantics-to-feature analysis to construct a neural machine translation model, letting the machine automatically perform feature learning, transform corpus data into distributed word-vector representations, and use neural networks to implement a direct mapping between source and target languages. A word alignment method based on deep neural networks is proposed, which effectively exploits lexical similarity and context information to model word alignment more accurately. This method uses a neural-network dimensionality reduction technique to learn, from unlabeled data, a low-dimensional vector representation of the reordering feature, which is then combined with other features using a multi-layer neural network and integrated into a linear reordering model.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
https://aclanthology.org//2022.eamt-1.54/
|
A Machine Translation-Powered Chatbot for Public Administration
|
This paper describes a multilingual chatbot developed for public administration within the CEF-funded project ENRICH4ALL. We argue for multilingual chatbots empowered through MT and discuss the integration of the CEF eTranslation service into a chatbot solution.
|
[
"Machine Translation",
"Natural Language Interfaces",
"Text Generation",
"Dialogue Systems & Conversational Agents",
"Multilinguality"
] |
[
51,
11,
47,
38,
0
] |
SCOPUS_ID:85146119140
|
A Machine Transliteration Tool Between Uzbek Alphabets
|
Machine transliteration, as defined in this paper, is a process of automatically transforming written script of words from a source alphabet into words of another target alphabet within the same language, while preserving their meaning, as well as pronunciation. The main goal of this paper is to present a machine transliteration tool between three common scripts used in low-resource Uzbek language: the old Cyrillic, currently official Latin, and newly announced New Latin alphabets. The tool has been created using a combination of rule-based and fine-tuning approaches. The created tool is available as an open-source Python package, as well as a web-based application including a public API. To our knowledge, this is the first machine transliteration tool that supports the newly announced Latin alphabet of the Uzbek language.
|
[
"Low-Resource NLP",
"Responsible & Trustworthy NLP"
] |
[
80,
4
] |
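The rule-based direction of the tool above amounts to a character mapping plus context-sensitive exceptions. The sketch below covers only a handful of Cyrillic-to-current-Latin rules and ignores exceptions such as word-initial "е" becoming "ye"; it is an illustration, not the published package.

```python
# Illustrative partial Cyrillic -> Latin mapping for Uzbek (U+02BB is the
# modifier letter used in o' and g'). Context rules are deliberately omitted.
CYR2LAT = {
    "ш": "sh", "ч": "ch", "қ": "q", "ғ": "g\u02bb", "ў": "o\u02bb",
    "х": "x", "ж": "j", "а": "a", "б": "b", "д": "d", "е": "e",
    "з": "z", "и": "i", "к": "k", "л": "l", "м": "m", "н": "n",
    "о": "o", "р": "r", "с": "s", "т": "t", "у": "u", "г": "g",
}

def translit(word: str) -> str:
    return "".join(CYR2LAT.get(ch, ch) for ch in word.lower())

print(translit("тошкент"))  # -> toshkent
```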
SCOPUS_ID:85029170521
|
A Machine learning Filter for Relation Extraction
|
The TAC KBP English slot filling track is an evaluation campaign that targets the extraction of 41 pre-identified relations related to specific named entities. In this work, we present a machine learning filter whose aim is to enhance the precision of relation extractors while minimizing the impact on recall. Our approach filters relation extractors' output using a binary classifier based on a wide array of features, including syntactic, lexical and statistical features. We evaluated the classifier on 14 of the 18 systems participating in the TAC KBP English slot filling track 2013. The results show that our filter is able to improve the precision of the best 2013 system by nearly 20% and improve the F1-score for 17 of the 33 relations considered.
|
[
"Semantic Text Processing",
"Information Retrieval",
"Relation Extraction",
"Semantic Parsing",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
72,
24,
75,
40,
36,
3
] |
SCOPUS_ID:85135831949
|
A Machine-Learning Analysis of the Impacts of the COVID-19 Pandemic on Small Business Owners and Implications for Canadian Government Policy Response
|
This study applies a machine-learning technique to a dataset of 38,000 textual comments from Canadian small business owners on the impacts of coronavirus disease 2019 (COVID-19). Topic modelling revealed seven topics covering the short- and longer-term impacts of the pandemic, government relief programs and loan eligibility issues, mental health, and other impacts on business owners. The results emphasize the importance of policy response in aiding small business crisis management and offer implications for theory and policy. Moreover, the study provides an example of using a machine-learning–based automated content analysis in the fields of crisis management, small business, and public policy.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:84965053749
|
A Macroscopic Analysis of News Content in Twitter
|
Previous literature has considered the relevance of Twitter to journalism, for example as a tool for reporters to collect information and for organizations to disseminate news to the public. We consider the reciprocal perspective, carrying out a survey of news media-related content within Twitter. Using a random sample of 1.8 billion tweets over four months in 2014, we look at the distribution of activity across news media and the relative dominance of certain news organizations in terms of relative share of content, the Twitter behavior of news media, the hashtags used in news content versus Twitter as a whole, and the proportion of Twitter activity that is news media-related. We find a small but consistent proportion of Twitter is news media-related (0.8 percent by volume); that news media-related tweets focus on a different set of hashtags than Twitter as a whole, with some hashtags such as those of countries of conflict (Arab Spring countries, Ukraine) reaching over 15 percent of tweets being news media-related; and we find that news organizations’ accounts, across all major organizations, largely use Twitter as a professionalized, one-way communication medium to promote their own reporting. Using Latent Dirichlet Allocation topic modeling, we also examine how the proportion of news content varies across topics within 100,000 #Egypt tweets, finding that the relative proportion of news media-related tweets varies vastly across different subtopics. Over-time analysis reveals that news media were among the earliest adopters of certain #Egypt subtopics, providing a necessary (although not sufficient) condition for influence.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85039951960
|
A Malay named entity recognition using conditional random fields
|
Currently, unstructured textual data analysis has attracted researchers' interest because it offers valuable insights for many fields such as business, education, politics, healthcare and crime prevention. Various accessible sources contain unstructured textual data, such as online documents, Facebook, Twitter and Instagram. However, processing tools for these types of unstructured data are limited, especially for the Malay language. This lack of text-analysis capability makes it difficult to obtain important information for decision-making. This paper presents an Automated Malay Named Entity Recognition (AMNER) conceptual model that uses the conditional random fields method to recognize entities in unstructured Malay text. The analysis focuses on a model developed around Malay language features, which guide the process of recognizing entities in unstructured text documents.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
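A CRF sequence labeller of the kind the AMNER model proposes can be sketched with the sklearn-crfsuite package. The feature template and the toy Malay sentence below are illustrative; the paper's Malay-specific feature set is not reproduced.

```python
# Sketch: a tiny CRF tagger over one toy Malay sentence (BIO labels).
import sklearn_crfsuite

def feats(sent, i):
    w = sent[i]
    return {"word": w.lower(), "is_title": w.istitle(),
            "prev": sent[i - 1].lower() if i else "<s>"}

X = [[feats(s, i) for i in range(len(s))]
     for s in [["Ahmad", "tinggal", "di", "Kuala", "Lumpur"]]]
y = [["B-PER", "O", "O", "B-LOC", "I-LOC"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```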
SCOPUS_ID:84979683319
|
A Malay text corpus analysis for sentence compression using pattern-growth method
|
A text summary serves as a condensed representation of a written input source in which important and salient information is kept. However, the condensed representation itself suffers from a lack of semantics and coherence if the summary is produced verbatim from the input. Sentence Compression is a technique in which unimportant details are eliminated from a sentence while its grammatical pattern is preserved. In this study, we conducted an analysis of our Malay Text Corpus to discover the rules and patterns governing how human summarizers compress sentences and eliminate unimportant constituents when constructing a summary. A pattern-growth based model named Frequent Eliminated Pattern (FASPe) is introduced to represent the text using a set of adjacent word sequences that are frequently eliminated across the document collection. From the rules obtained, some heuristic knowledge for Sentence Compression is presented, with confidence values as high as 85%, that can be used as a reference for further work in Malay text summarization.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85015877295
|
A Malay text summarizer using pattern-growth method with sentence compression rules
|
A text summary is a condensed representation of text in which salient information is extracted to ease readability. However, if the summary is extracted verbatim from its source, sentences may contain inessential information alongside salient information, which can affect the overall coherence of the generated summary. The purpose of Sentence Compression in Text Summarization is to produce compact and informative content by eliminating unnecessary constituents from sentences. We introduce two pattern-growth text representation models, named Frequent Adjacent Sequential Pattern (FASP) and Frequent Eliminated Pattern (FASPe), to represent the text using sets of adjacent word sequences, or 'textual patterns', that are frequently used and eliminated across the Malay news document collection. From the discovered textual patterns, we derive heuristic Sentence Compression Rules for generating compressed sentences to construct a single-extract Malay summary. We conducted experiments on a Malay news dataset, and the results demonstrate that moderately compressed summaries produced using the Sentence Compression Rules agree better with human-composed summaries.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:73849094644
|
A Malayalam OCR system using column-stochastic image matrix approach
|
Indian languages, especially South Indian languages, have several distinct characteristics that can be exploited in the development of a robust optical character recognition (OCR) system. This paper addresses the problem of segmenting printed Malayalam characters, a fairly complex task, along with their characterization through the non-trivial dominant eigenvalues of column-stochastic image matrices. Rectangular image matrices obtained after digitization, segmentation and normalization are converted to column-stochastic square matrices. The non-trivial dominant eigenvalues of such matrices prove to be unique characterizations of printed Malayalam characters. Furthermore, a novel segmentation algorithm is proposed and tested. The results and analysis presented indicate effective performance of the OCR system.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
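The characterization step above has a compact numerical form: make the (padded, binarized) glyph matrix column-stochastic and read off its eigenvalue spectrum. A stochastic matrix always has the trivial eigenvalue 1, so the "non-trivial dominant" signature consists of the next-largest magnitudes. A worked numpy sketch follows, with an assumed toy glyph.

```python
# Sketch: column-stochastic glyph matrix -> non-trivial dominant eigenvalues.
import numpy as np

def dominant_eigs(glyph: np.ndarray, k: int = 2) -> np.ndarray:
    n = max(glyph.shape)
    m = np.zeros((n, n))
    m[:glyph.shape[0], :glyph.shape[1]] = glyph
    m = m + 1e-9                          # avoid all-zero columns
    m /= m.sum(axis=0, keepdims=True)     # columns now sum to 1
    vals = np.sort(np.abs(np.linalg.eigvals(m)))[::-1]
    return vals[1:k + 1]                  # drop the trivial eigenvalue 1

glyph = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])  # assumed toy glyph
print(dominant_eigs(glyph))
```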
SCOPUS_ID:85027871374
|
A Mandarin phonetic-symbol communication aid developed on tablet computers for children with high-functioning autism
|
In this study, a Mandarin phonetic symbol communication aid named as the Zhuyin communication board was developed for children with high-functioning autism. The Zhuyin communication board can be operated on tablet computers to assist autistic children with expressing their thoughts. By using this aid, autistic children can communicate their thoughts by pressing the corresponding phonetic symbols on the developed Zhuyin communication board. Compared with traditional paper keyboards, the developed aid displays the inputted phonetic symbols on the screen of tablet computer instantly and provides the corresponding voice of Zhuyin pronunciation. To stimulate the interest of autistic children, the developed aid provides a picture-based quiz for learning Mandarin phonetic symbols of various objects. In addition, we use a robot bear to mimic human speaking to interact with the autistic children while they use the Zhuyin communication board. The developed aid is available to download cost-free from the iTunes App Store, and the aid content is presented in Mandarin Chinese. Thus, users are not required to expend financial resources or possess a specific level of English language proficiency to use this aid.
|
[
"Phonetics",
"Structured Data in NLP",
"Syntactic Text Processing",
"Multimodality"
] |
[
64,
50,
15,
74
] |
SCOPUS_ID:84949501731
|
A Mandarin spoken dialogue system with limited portability
|
In new-generation human-computer interaction technology, content-based spoken dialogue systems are a key issue. This paper describes the status of our spoken dialogue system for tour information retrieval, GUIDE, whose lexicon consists of 2,000 words and whose word error rate (WER) is 5.7%. Unlike the ATIS application, tour information is broader and more interrelated, so we developed an object-oriented knowledge base that includes a database and deduction rules. We also developed a modular architecture for GUIDE. By doing so, we have successfully and easily adapted GUIDE to a new, similar application.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85128791884
|
A Manifold Learning Method to Passage Retrieval for Open-Domain Question Answering
|
Passage retrieval plays an important role in obtaining answers in open-domain textual question answering systems: it selects candidate contexts from a large collection of documents and feeds them to the machine reader. Traditional de facto methods usually construct sparse vectors to match word co-occurrence patterns between passages and questions, such as TF-IDF or BM25, and more advanced methods model word-level contextual semantic similarity to match the text. In this work, we present a method that encodes text through short sliding windows with built-in continuity and applies a manifold learning method to it, in order to model continuous semantic representations, represent similarity features at the passage level, and reduce the directional sparsity differences caused by differences in text length. Compared with the traditional Lucene BM25 system on top-20 paragraph retrieval, the accuracy of our method is 5%-16% higher and the recall is 8%-16% higher.
|
[
"Passage Retrieval",
"Natural Language Interfaces",
"Question Answering",
"Information Retrieval"
] |
[
66,
11,
27,
24
] |
http://arxiv.org/abs/1805.05542v1
|
A Manually Annotated Chinese Corpus for Non-task-oriented Dialogue Systems
|
This paper presents a large-scale corpus for non-task-oriented dialogue response selection, which contains over 27K distinct prompts and more than 82K responses collected from social media. To annotate this corpus, we define a 5-grade rating scheme: bad, mediocre, acceptable, good, and excellent, according to relevance, coherence, informativeness, interestingness, and the potential to move a conversation forward. To test the validity and usefulness of the produced corpus, we compare various unsupervised and supervised models for response selection. Experimental results confirm that the proposed corpus is helpful in training response selection models.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85141185410
|
A MapReduce Clustering Approach for Sentiment Analysis Using Big Data
|
Modern organizations generate huge amounts of data by digitalizing the way they promote services and products. Companies trying to learn what customers are saying about their products through reviews are a prime factor in the success of the big data era, and social media analytics is central to this. However, social media data analytics is a very complex discipline due to the subjectivity of textual reviews and their complexity. A multi-stage framework is proposed to tackle this problem. The first stage, discussed in this paper, covers sentiment analysis of social media using different machine learning approaches. The second stage discusses the challenges faced in processing sentiment analysis of social media. An overview of a case study covering both stages, based on customer reviews, is then presented. Machine learning approaches are followed to analyze social media sentiment when processing big data.
|
[
"Information Extraction & Text Mining",
"Sentiment Analysis",
"Text Clustering"
] |
[
3,
78,
29
] |
SCOPUS_ID:85111153622
|
A MapReduce Improved ID3 Decision Tree for Classifying Twitter Data
|
In this contribution, we introduce an innovative classification approach for opinion mining. We use the FastText feature extractor to efficiently detect and capture the relevant data in the given tweets. We then apply the Information Gain feature selector to reduce the dimensionality of the high-dimensional feature space. Finally, we employ the obtained features to carry out the classification task using our improved ID3 decision tree classifier, which calculates a weighted information gain instead of the information gain used in traditional ID3. In other words, to measure the weighted information gain of the current conditioned feature, we follow two steps: first, we compute the weighted correlation function of the current conditioned feature; second, we multiply the obtained weighted correlation function by the information gain of this feature. This work is implemented in a distributed environment using the Hadoop framework, with its programming framework MapReduce and its distributed file system HDFS. Its primary goal is to enhance the performance of the well-known ID3 classifier in terms of accuracy, execution time, and ability to handle massive datasets. We performed several experiments to evaluate our suggested classifier's effectiveness compared to other contributions chosen from the literature.
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
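The weighted-information-gain formula above is the standard ID3 information gain multiplied by a per-feature "weighted correlation function" whose exact definition the abstract does not give. The sketch below therefore uses absolute Pearson correlation as an assumed stand-in for that weight.

```python
# Sketch: weighted information gain for one discrete feature x and labels y.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def weighted_info_gain(x, y):
    ig = entropy(y) - sum((x == v).mean() * entropy(y[x == v])
                          for v in np.unique(x))
    w = abs(np.corrcoef(x, y)[0, 1])  # assumed weight, not the paper's formula
    return w * ig

x = np.array([0, 0, 1, 1, 1, 0])  # one binary feature
y = np.array([0, 0, 1, 1, 0, 0])  # class labels
print(weighted_info_gain(x, y))   # ~0.32 for this toy split
```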
SCOPUS_ID:85104238923
|
A MapReduce Opinion Mining for COVID-19-Related Tweets Classification Using Enhanced ID3 Decision Tree Classifier
|
Opinion Mining (OM) is a field of Natural Language Processing (NLP) that aims to capture human sentiment in a given text. With the ever-spreading of online purchasing websites, micro-blogging sites, and social media platforms, OM on online social media platforms has piqued the interest of thousands of scientific researchers, because the reviews, tweets and blogs acquired from these social media networks act as a significant source for enhancing the decision-making process. The obtained textual data (reviews, tweets, or blogs) are classified into three class labels, namely negative, neutral and positive, for analyzing and extracting relevant information from the given dataset. In this contribution, we introduce an innovative MapReduce improved weighted ID3 decision tree classification approach for OM, which consists mainly of three aspects. Firstly, we use several feature extractors to efficiently detect and capture the relevant data from the given tweets, including N-grams or character-level features, Bag-Of-Words, word embeddings (GloVe, Word2Vec), FastText, and TF-IDF. Secondly, we apply multiple feature selectors to reduce the high dimensionality of the features, including Chi-square, Gain Ratio, Information Gain, and Gini Index. Finally, we employ the obtained features to carry out the classification task using an improved ID3 decision tree classifier, which calculates a weighted information gain instead of the information gain used in traditional ID3. In other words, to measure the weighted information gain of the current conditioned feature, we follow two steps: first, we compute the weighted correlation function of the current conditioned feature; second, we multiply the obtained weighted correlation function by the information gain of this feature. This work is implemented in a distributed environment using the Hadoop framework, with its programming framework MapReduce and its distributed file system HDFS. Its primary goal is to enhance the performance of the well-known ID3 classifier in terms of accuracy, execution time, and ability to handle massive datasets. We carried out several experiments to assess the effectiveness of our suggested classifier compared to other contributions chosen from the literature. The experimental results demonstrate that our ID3 classifier works better on the COVID-19_Sentiments dataset than other classifiers in terms of recall (85.72%), specificity (86.51%), error rate (11.18%), false-positive rate (13.49%), execution time (15.95s), kappa statistic (87.69%), F1-score (85.54%), classification rate (88.82%), false-negative rate (14.28%), precision (86.67%), convergence (it converges by iteration 90), stability (it is more stable, with a mean standard deviation of 0.12%), and complexity (it requires much lower time and space computational complexity).
|
[
"Opinion Mining",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
49,
36,
78,
24,
3
] |
SCOPUS_ID:85052876674
|
A MapReduce implementation of posterior probability clustering and relevance models for recommendation
|
Relevance-Based Language Models are a formal probabilistic approach for explicitly introducing the concept of relevance into the Statistical Language Modelling framework. Recently, they have proven to be a very effective way of computing recommendations. When this recommendation approach is combined with Posterior Probabilistic Clustering for computing neighbourhoods, the item ranking improves further, radically surpassing rating-prediction recommendation techniques. Nevertheless, in the current landscape, where the number of recommendation scenarios reaching big data scale increases day after day, high effectiveness figures are not enough. In this paper, we address one urgent and common need of recommender systems: algorithm scalability. In particular, we adapted these highly effective algorithms to the functional MapReduce paradigm, which has previously been proven an adequate tool for enabling recommender scalability. We evaluated the performance of our approach under realistic circumstances, showing good scalability behaviour in the number of nodes of the MapReduce cluster. Additionally, as a result of being able to execute our algorithms distributively, we report measurements on a much bigger collection, supporting the results presented in the seminal paper.
|
[
"Language Models",
"Semantic Text Processing",
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
52,
72,
3,
29
] |
https://aclanthology.org//D19-5621/
|
A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation
|
Neural models that eliminate the softmax bottleneck by generating word embeddings (rather than multinomial distributions over a vocabulary) attain faster training with fewer learnable parameters. These models are currently trained by maximizing densities of pretrained target embeddings under von Mises-Fisher distributions parameterized by corresponding model-predicted embeddings. This work explores the utility of margin-based loss functions in optimizing such models. We present syn-margin loss, a novel margin-based loss that uses a synthetic negative sample constructed from only the predicted and target embeddings at every step. The loss is efficient to compute, and we use a geometric analysis to argue that it is more consistent and interpretable than other margin-based losses. Empirically, we find that syn-margin provides small but significant improvements over both vMF and standard margin-based losses in continuous-output neural machine translation.
|
[
"Machine Translation",
"Semantic Text Processing",
"Representation Learning",
"Text Generation",
"Multilinguality"
] |
[
51,
72,
12,
47,
0
] |
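The loss above hinges the similarity of the prediction to the target against its similarity to a negative synthesized from those same two vectors. The paper's exact construction is not reproduced here; the component of the prediction orthogonal to the target is used below as one plausible synthetic negative, so treat this strictly as an assumed sketch.

```python
# Hedged sketch of a margin loss with a synthetic negative built from only
# the predicted and target embeddings (the construction is an assumption).
import numpy as np

def syn_margin_loss(pred, target, margin=0.5):
    t = target / np.linalg.norm(target)
    neg = pred - (pred @ t) * t                # orthogonal component as negative
    n = neg / (np.linalg.norm(neg) + 1e-9)
    p = pred / np.linalg.norm(pred)
    # hinge: target similarity should beat negative similarity by `margin`
    return max(0.0, margin - p @ t + p @ n)

pred = np.array([0.9, 0.3, 0.1])
target = np.array([1.0, 0.0, 0.0])
print(syn_margin_loss(pred, target))  # 0.0: the margin is already satisfied
```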
http://arxiv.org/abs/2212.12800v1
|
A Marker-based Neural Network System for Extracting Social Determinants of Health
|
Objective. The impact of social determinants of health (SDoH) on patients' healthcare quality and the disparity is well-known. Many SDoH items are not coded in structured forms in electronic health records. These items are often captured in free-text clinical notes, but there are limited methods for automatically extracting them. We explore a multi-stage pipeline involving named entity recognition (NER), relation classification (RC), and text classification methods to extract SDoH information from clinical notes automatically. Materials and Methods. The study uses the N2C2 Shared Task data, which was collected from two sources of clinical notes: MIMIC-III and University of Washington Harborview Medical Centers. It contains 4480 social history sections with full annotation for twelve SDoHs. In order to handle the issue of overlapping entities, we developed a novel marker-based NER model. We used it in a multi-stage pipeline to extract SDoH information from clinical notes. Results. Our marker-based system outperformed the state-of-the-art span-based models at handling overlapping entities based on the overall Micro-F1 score performance. It also achieved state-of-the-art performance compared to the shared task methods. Conclusion. The major finding of this study is that the multi-stage pipeline effectively extracts SDoH information from clinical notes. This approach can potentially improve the understanding and tracking of SDoHs in clinical settings. However, error propagation may be an issue, and further research is needed to improve the extraction of entities with complex semantic meanings and low-resource entities using external knowledge.
|
[
"Named Entity Recognition",
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
34,
24,
36,
3
] |
SCOPUS_ID:85082433072
|
A Markov Mixed-Effect Multinomial Logistic Regression Model for Nominal Repeated Measures with an Application to Syntactic Self-Priming Effects
|
Syntactic priming effects have been investigated for several decades in psycholinguistics and the cognitive sciences to understand the cognitive mechanisms that support language production and comprehension. The question of whether speakers prime themselves is central to adjudicating between two theories of syntactic priming: activation-based theories and expectation-based theories. However, a statistical model has been lacking for investigating the two theories when nominal repeated measures are obtained from multiple participants and items. This paper presents a Markov mixed-effect multinomial logistic regression model with fixed and random effects for own-category and cross-category lags in a multivariate structure, together with category-specific crossed random effects (random person and item effects). The model is illustrated with experimental data investigating the average and participant-specific deviations in syntactic self-priming effects. Results of the model suggest that the evidence of self-priming is consistent with the predictions of activation-based theories. The accuracy and precision of the parameter estimates are evaluated via a simulation study using Bayesian analysis.
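One plausible written-out form of the model this abstract describes, shown purely for orientation; the notation (person j, item i, trial t, reference category K) and the first-order lag structure are assumptions, not the paper's exact specification.

```latex
\[
  \log \frac{P(y_{tij}=k \mid y_{(t-1)ij})}{P(y_{tij}=K \mid y_{(t-1)ij})}
  = \beta_{0k}
  + \sum_{m=1}^{K} \beta_{mk}\,\mathbf{1}\{y_{(t-1)ij}=m\}
  + u_{jk} + v_{ik},
\]
where $u_{jk}$ and $v_{ik}$ are the category-specific crossed random person and item effects.
```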
|
[
"Psycholinguistics",
"Linguistics & Cognitive NLP",
"Syntactic Text Processing",
"Linguistic Theories"
] |
[
77,
48,
15,
57
] |
SCOPUS_ID:85042135102
|
A Markov Network Based Passage Retrieval Method for Multimodal Question Answering in the Cultural Heritage Domain
|
In this paper, we propose a Markov network based graphical framework to perform passage retrieval for multimodal question answering (MQA) with weak supervision in the cultural heritage domain. This framework encodes the dependencies between a question's feature information and the passage containing its answer, under the assumption that there is a latent alignment between a question and its candidate answer. Experiments on a challenging multimodal dataset show that this framework achieves an improvement of 5% in terms of mean average precision (mAP) over a state-of-the-art method employing the same features, namely (i) image match and (ii) word co-occurrence information of a passage and a question. We additionally construct two extended graphical frameworks that integrate one more feature, namely (question type)-(named entity) match, into this framework to further boost performance. One of the extended models improves performance by a further 2% in terms of mAP.
|
[
"Question Answering",
"Natural Language Interfaces",
"Passage Retrieval",
"Information Retrieval",
"Multimodality"
] |
[
27,
11,
66,
24,
74
] |
SCOPUS_ID:84894561476
|
A Markov chain based line segmentation framework for handwritten character recognition
|
In this paper, we present a novel text line segmentation framework following the divide-and-conquer paradigm: we iteratively identify and re-process regions of ambiguous line segmentation from an input document image until no ambiguity remains. To detect ambiguous line segmentation, we introduce two complementary line descriptors, referred to as the underline and highlight line descriptors, and identify ambiguities where their patterns mismatch. As a result, we can easily identify already-good line segmentations and largely simplify the original line segmentation problem by reprocessing only the ambiguous regions. We evaluate the performance of the proposed line segmentation framework on the ICDAR 2009 handwritten document dataset, where it comes close to the top-performing systems submitted to the competition. Moreover, the proposed method is robust against skew, noise, variable line heights and touching characters. The proposed idea can also be applied to other text analysis tasks such as word segmentation and page layout analysis. © 2014 SPIE-IS&T.
|
[
"Text Segmentation",
"Syntactic Text Processing"
] |
[
21,
15
] |
SCOPUS_ID:85057319564
|
A Markov logic networks based method to predict judicial decisions of divorce cases
|
Predicting the judicial decision of a case is a research issue for artificial intelligence in the legal domain. Existing studies mainly focus on criminal cases and aim at charge prediction; moreover, the results of these models are usually hard to interpret. In this paper we propose a method based on Markov logic networks for this problem. We first describe and extract the semantics of legal factors in a formal way; we then build and train a Markov logic network for the prediction. Experimental results on divorce cases show that our method is insensitive to differences in expression style while, at the same time, its prediction outcomes are interpretable.
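A minimal sketch of the kind of weighted rules over formalized legal factors that a Markov logic network might contain; the predicates, weights, and the naive additive scoring below are illustrative assumptions standing in for proper MLN inference.

```python
RULES = [
    (2.0,  lambda f: f["separated_two_years"],      "grant_divorce"),
    (1.5,  lambda f: f["domestic_violence"],        "grant_divorce"),
    (-1.0, lambda f: f["reconciliation_attempted"], "grant_divorce"),
]

def score(factors):
    # Naive stand-in for MLN inference: sum weights of satisfied rules.
    return sum(w for w, cond, _ in RULES if cond(factors))

print(score({"separated_two_years": True,
             "domestic_violence": False,
             "reconciliation_attempted": True}))   # -> 1.0
```

The interpretability claim follows from this structure: each prediction decomposes into named legal factors and their rule weights.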
|
[
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP",
"Information Extraction & Text Mining"
] |
[
81,
4,
3
] |
SCOPUS_ID:85132943657
|
A Mask-Guided Transformer Network with Topic Token for Remote Sensing Image Captioning
|
Remote sensing image captioning aims to describe the content of images in natural language. In contrast with natural images, the scale, distribution, and number of objects generally vary in remote sensing images, making it hard to capture global semantic information and the relationships between objects at different scales. In this paper, in order to improve the accuracy and diversity of captioning, a mask-guided Transformer network with a topic token is proposed. Multi-head attention is introduced to extract features and capture the relationships between objects. On this basis, a topic token is added to the encoder; it represents the scene topic and serves as a prior in the decoder, helping the model focus on global semantic information. Moreover, a new Mask-Cross-Entropy strategy is designed to improve the diversity of the generated captions: it randomly replaces some input words with a special word (named [Mask]) in the training stage, with the aim of enhancing the model's learning ability and forcing exploration of uncommon word relations. Experiments on three data sets show that the proposed method generates captions with high accuracy and diversity and outperforms state-of-the-art models. Furthermore, the CIDEr score on the RSICD data set increased from 275.49 to 298.39.
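A minimal sketch of the Mask-Cross-Entropy input corruption described above: each input word is replaced by the special [Mask] word with some probability while the training target stays intact; the masking probability and the example caption are illustrative assumptions.

```python
import random

MASK_TOKEN = "[Mask]"

def mask_inputs(tokens, p=0.15, rng=random):
    # Replace each input word with [Mask] with probability p;
    # the target caption used by the cross-entropy loss stays unchanged.
    return [MASK_TOKEN if rng.random() < p else t for t in tokens]

caption = "a large stadium surrounded by green trees".split()
print(mask_inputs(caption))   # e.g. ['a', '[Mask]', 'stadium', ...]
```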
|
[
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Captioning",
"Text Generation",
"Multimodality"
] |
[
20,
52,
72,
39,
47,
74
] |
http://arxiv.org/abs/2204.09851v2
|
A Masked Image Reconstruction Network for Document-level Relation Extraction
|
Document-level relation extraction aims to extract relations among entities within a document. Compared with its sentence-level counterpart, document-level relation extraction requires inference over multiple sentences to extract complex relational triples. Previous research typically performs reasoning through information propagation on mention-level or entity-level document graphs, disregarding the correlations between relations. In this paper, we propose a novel document-level relation extraction model based on a Masked Image Reconstruction network (DRE-MIR), which casts inference as a masked image reconstruction problem in order to capture the correlations between relations. Specifically, we first leverage an encoder module to obtain entity features and construct the entity-pair matrix from them. We then treat the entity-pair matrix as an image, randomly mask it, and restore it through an inference module, thereby capturing the correlations between relations. We evaluate our model on three public document-level relation extraction datasets, i.e. DocRED, CDR, and GDA. Experimental results demonstrate that our model achieves state-of-the-art performance on these three datasets and is highly robust to noise during inference.
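A minimal sketch of the masking step this abstract describes: the entity-pair feature matrix is treated as an image, a random fraction of its cells is hidden, and a reconstruction module would be trained to restore them. The mask ratio and the zero-fill convention are assumptions.

```python
import numpy as np

def mask_entity_pair_matrix(M, ratio=0.3, rng=None):
    # M: (num_entities, num_entities, dim) entity-pair feature "image".
    rng = rng or np.random.default_rng(0)
    mask = rng.random(M.shape[:2]) < ratio   # choose pairs to hide
    M_masked = M.copy()
    M_masked[mask] = 0.0                     # zero-fill the masked cells
    return M_masked, mask                    # mask marks reconstruction targets

M = np.random.rand(5, 5, 8)
M_masked, mask = mask_entity_pair_matrix(M)
print(mask.sum(), "of", mask.size, "pairs masked")
```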
|
[
"Visual Data in NLP",
"Language Models",
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Relation Extraction",
"Multimodality"
] |
[
20,
52,
3,
72,
75,
74
] |
http://arxiv.org/abs/2104.07829v2
|
A Masked Segmental Language Model for Unsupervised Natural Language Segmentation
|
Segmentation remains an important preprocessing step both in languages where "words" or other important syntactic/semantic units (like morphemes) are not clearly delineated by white space, as well as when dealing with continuous speech data, where there is often no meaningful pause between words. Near-perfect supervised methods have been developed for use in resource-rich languages such as Chinese, but many of the world's languages are both morphologically complex, and have no large dataset of "gold" segmentations into meaningful units. To solve this problem, we propose a new type of Segmental Language Model (Sun and Deng, 2018; Kawakami et al., 2019; Wang et al., 2021) for use in both unsupervised and lightly supervised segmentation tasks. We introduce a Masked Segmental Language Model (MSLM) built on a span-masking transformer architecture, harnessing the power of a bi-directional masked modeling context and attention. In a series of experiments, our model consistently outperforms Recurrent SLMs on Chinese (PKU Corpus) in segmentation quality, and performs similarly to the Recurrent model on English (PTB). We conclude by discussing the different challenges posed in segmenting phonemic-type writing systems.
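A minimal sketch of span masking of the kind an MSLM trains on: a contiguous span of the input is hidden and must be scored or recovered from bidirectional context; the span-length cap and mask symbol are illustrative assumptions.

```python
import random

def mask_span(tokens, max_len=3, rng=random):
    # Hide one contiguous span; the model must recover it from both sides.
    start = rng.randrange(len(tokens))
    end = min(len(tokens), start + rng.randint(1, max_len))
    return tokens[:start] + ["<mask>"] * (end - start) + tokens[end:], (start, end)

masked, span = mask_span(list("thecatsat"))
print("".join(masked), span)   # e.g. thec<mask><mask>sat (4, 6)
```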
|
[
"Language Models",
"Low-Resource NLP",
"Semantic Text Processing",
"Responsible & Trustworthy NLP"
] |
[
52,
80,
72,
4
] |
http://arxiv.org/abs/2109.06324v1
|
A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space
|
In cross-lingual language models, representations for many different languages live in the same space. Here, we investigate the linguistic and non-linguistic factors affecting sentence-level alignment in cross-lingual pretrained language models for 101 languages and 5,050 language pairs. Using BERT-based LaBSE and BiLSTM-based LASER as our models, and the Bible as our corpus, we compute a task-based measure of cross-lingual alignment in the form of bitext retrieval performance, as well as four intrinsic measures of vector space alignment and isomorphism. We then examine a range of linguistic, quasi-linguistic, and training-related features as potential predictors of these alignment metrics. The results of our analyses show that word order agreement and agreement in morphological complexity are two of the strongest linguistic predictors of cross-linguality. We also note in-family training data as a stronger predictor than language-specific training data across the board. We verify some of our linguistic findings by looking at the effect of morphological segmentation on English-Inuktitut alignment, in addition to examining the effect of word order agreement on isomorphism for 66 zero-shot language pairs from a different corpus. We make the data and code for our experiments publicly available.
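A minimal sketch of the task-based alignment measure this abstract mentions, bitext retrieval, computed as how often the cosine nearest neighbour of a source sentence embedding on the target side is its true translation; the random toy embeddings stand in for LaBSE or LASER outputs.

```python
import numpy as np

def bitext_retrieval_accuracy(src, tgt):
    # src, tgt: (n, dim) row-aligned sentence embeddings; row i of tgt
    # is the true translation of row i of src.
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    nearest = np.argmax(src @ tgt.T, axis=1)   # cosine nearest neighbour
    return float(np.mean(nearest == np.arange(len(src))))

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 32))
print(bitext_retrieval_accuracy(E, E + 0.1 * rng.normal(size=E.shape)))
```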
|
[
"Language Models",
"Semantic Text Processing",
"Morphology",
"Syntactic Text Processing",
"Representation Learning",
"Cross-Lingual Transfer",
"Multilinguality"
] |
[
52,
72,
73,
15,
12,
19,
0
] |
https://aclanthology.org//W18-6534/
|
A Master-Apprentice Approach to Automatic Creation of Culturally Satirical Movie Titles
|
Satire has played a role in indirectly expressing critique towards an authority or a person from time immemorial. We present an autonomously creative master-apprentice approach consisting of a genetic algorithm and an NMT model to produce humorous and culturally apt satire out of movie titles automatically. Furthermore, we evaluate the approach in terms of its creativity and its output. We provide a solid definition for creativity to maximize the objectiveness of the evaluation.
|
[
"Text Generation"
] |
[
47
] |
SCOPUS_ID:85142014138
|
A Matching Method of Oral Text to Instruction Based on Word Vector
|
Short oral texts have sparse features and vague expressions, which lead to poor performance when standard matching methods are applied to them. To address these problems, this paper proposes a matching method based on word-vector text representation that jointly considers part of speech, semantics and word order to map oral text to operation instructions. First, the Skip-gram model is trained on the selected corpus to obtain word-vector representations; then cosine similarity is used to calculate the similarity of feature words; next, taking into account the characteristics of text in this scenario, the WMR (Word Matching Rate) is used to calculate the similarity between short texts; finally, the method is evaluated on the test set. The results show that the proposed method achieves better precision and recall than other methods and effectively improves the matching of oral text to instruction text.
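A minimal sketch of the pipeline described above, with a toy vector table standing in for trained Skip-gram embeddings; the WMR definition used here (the fraction of utterance words whose cosine similarity to some instruction word clears a threshold) and the threshold value are illustrative assumptions.

```python
import numpy as np

vectors = {"open": np.array([0.9, 0.1]), "start": np.array([0.8, 0.2]),
           "door": np.array([0.1, 0.9]), "window": np.array([0.2, 0.8])}

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def wmr(utterance, instruction, threshold=0.9):
    # Fraction of utterance words matched by some instruction word.
    hits = sum(any(cos(vectors[u], vectors[i]) >= threshold
                   for i in instruction) for u in utterance)
    return hits / len(utterance)

print(wmr(["open", "door"], ["start", "window"]))   # soft lexical match -> 1.0
```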
|
[
"Multimodality",
"Semantic Text Processing",
"Representation Learning"
] |
[
74,
72,
12
] |
http://arxiv.org/abs/cmp-lg/9508005v1
|
A Matching Technique in Example-Based Machine Translation
|
This paper addresses an important problem in Example-Based Machine Translation (EBMT), namely how to measure similarity between a sentence fragment and a set of stored examples. A new method is proposed that measures similarity according to both surface structure and content. A second contribution is the use of clustering to make retrieval of the best matching example from the database more efficient. Results on a large number of test cases from the CELEX database are presented.
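A minimal sketch of a similarity score mixing surface structure and content, in the spirit of the matching technique this abstract describes; the equal weighting and the use of difflib's ratio are illustrative assumptions, and the cluster-narrowing step that makes retrieval efficient is omitted.

```python
from difflib import SequenceMatcher

def similarity(a, b, w_surface=0.5):
    # Mix a surface-structure score with a content-word overlap score.
    surface = SequenceMatcher(None, a, b).ratio()
    content = len(set(a.split()) & set(b.split())) / max(len(set(a.split())), 1)
    return w_surface * surface + (1 - w_surface) * content

examples = ["the cat sat on the mat", "a dog ran in the park"]
query = "the cat sat on a chair"
print(max(examples, key=lambda e: similarity(query, e)))   # best-matching example
```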
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85090094971
|
A Matching-Integration-Verification Model for Multiple-Choice Reading Comprehension
|
Multiple-choice reading comprehension is a challenging task that requires a machine to select the correct answer from a set of candidate answers. In this paper, we propose a model following a matching-integration-verification-prediction framework, which explicitly employs a verification module inspired by human reading behaviour and generates a judgment for each option simultaneously, according to the evidence information and the verified information. The verification module, responsible for rechecking the information produced by matching, can selectively combine matched information from the passage and the option instead of passing it to prediction with equal weight. Experimental results demonstrate that our proposed model achieves significant improvements on several multiple-choice reading comprehension benchmark datasets.
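A minimal sketch of one way such a verification module could selectively combine evidence instead of passing it on with equal weight: a learned sigmoid gate over the concatenated representations. The gating form, dimensions, and layer are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

dim = 8
gate_layer = nn.Linear(2 * dim, dim)   # learned gate over both representations

def verify(evidence, verified):
    # Sigmoid gate decides, per dimension, how much of each source to keep.
    g = torch.sigmoid(gate_layer(torch.cat([evidence, verified], dim=-1)))
    return g * evidence + (1 - g) * verified

out = verify(torch.randn(2, dim), torch.randn(2, dim))
print(out.shape)   # torch.Size([2, 8])
```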
|
[
"Reasoning",
"Machine Reading Comprehension"
] |
[
8,
37
] |
http://arxiv.org/abs/2010.03648v2
|
A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
|
Autoregressive language models, pretrained using large text corpora to do well on next word prediction, have been successful at solving many downstream tasks, even with zero-shot usage. However, there is little theoretical understanding of this success. This paper initiates a mathematical study of this phenomenon for the downstream task of text classification by considering the following questions: (1) What is the intuitive connection between the pretraining task of next word prediction and text classification? (2) How can we mathematically formalize this connection and quantify the benefit of language modeling? For (1), we hypothesize, and verify empirically, that classification tasks of interest can be reformulated as sentence completion tasks, thus making language modeling a meaningful pretraining task. With a mathematical formalization of this hypothesis, we make progress towards (2) and show that language models that are $\epsilon$-optimal in cross-entropy (log-perplexity) learn features that can linearly solve such classification tasks with $\mathcal{O}(\sqrt{\epsilon})$ error, thus demonstrating that doing well on language modeling can be beneficial for downstream tasks. We experimentally verify various assumptions and theoretical findings, and also use insights from the analysis to design a new objective function that performs well on some classification tasks.
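A compact restatement of the paper's headline guarantee, with notation assumed for illustration: near-optimality in the language-modeling objective translates into a linear classifier with small excess error on the reformulated classification task.

```latex
\[
  L_{\mathrm{xent}}(f) \;\le\; \min_{f'} L_{\mathrm{xent}}(f') + \epsilon
  \quad\Longrightarrow\quad
  \operatorname{err}_{\mathrm{linear}}\bigl(\phi_f\bigr)
  \;\le\; \mathcal{O}\!\left(\sqrt{\epsilon}\right),
\]
where $\phi_f$ denotes the features learned by the $\epsilon$-optimal model $f$.
```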
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Reasoning",
"Numerical Reasoning",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
8,
5,
36,
3
] |
SCOPUS_ID:85124051475
|
A Mathematical Model for Universal Semantics
|
We characterize the meaning of words with language-independent numerical fingerprints, through a mathematical analysis of recurring patterns in texts. Approximating texts by Markov processes on a long-range time scale, we are able to extract topics, discover synonyms, and sketch semantic fields from a particular document of moderate length, without consulting external knowledge-base or thesaurus. Our Markov semantic model allows us to represent each topical concept by a low-dimensional vector, interpretable as algebraic invariants in succinct statistical operations on the document, targeting local environments of individual words. These language-independent semantic representations enable a robot reader to both understand short texts in a given language (automated question-answering) and match medium-length texts across different languages (automated word translation). Our semantic fingerprints quantify local meaning of words in 14 representative languages across five major language families, suggesting a universal and cost-effective mechanism by which human languages are processed at the semantic level. Our protocols and source codes are publicly available on https://github.com/yajun-zhou/linguae-naturalis-principia-mathematica.
|
[
"Natural Language Interfaces",
"Reasoning",
"Numerical Reasoning",
"Question Answering"
] |
[
11,
8,
5,
27
] |
http://arxiv.org/abs/cs/0504022v1
|
A Matter of Opinion: Sentiment Analysis and Business Intelligence (position paper)
|
A general-audience introduction to the area of "sentiment analysis", the computational treatment of subjective, opinion-oriented language (an example application is determining whether a review is "thumbs up" or "thumbs down"). Some challenges, applications to business-intelligence tasks, and potential future directions are described.
|
[
"Opinion Mining",
"Sentiment Analysis"
] |
[
49,
78
] |
SCOPUS_ID:85043242485
|
A Matter of Perspective: A Discursive Analysis of the Perceptions of Three Stakeholders of the Mutianyu Great Wall
|
This study aims to investigate the different and competing perspectives of stakeholders of cultural heritage sites by examining the Mutianyu Great Wall in China. Literature review: Most studies focus on investigating the tourism destination image from the perspective of only one stakeholder, and only a small amount of research has attempted to integrate the perspectives of competing stakeholders into a single study. Research questions: 1. How did the business operator perceive the Mutianyu Great Wall? 2. How did UNESCO perceive the Mutianyu Great Wall? 3. How did international tourists on TripAdvisor perceive the Mutianyu Great Wall? 4. What are the dynamics among the three stakeholders' perceptions? 5. In those dynamics, what are the contested issues in the Great Wall's heritage preservation and tourism development? Methodology: The study adopts a discursive approach to social constructivism in examining the images of the site as perceived by the three important stakeholders. It incorporates qualitative thematic and multimodal discourse analysis with quantitative high-frequency word analysis, supplemented by an interview with the heritage site administrator and a field trip. Results: The business operator perceived the Mutianyu Great Wall as a scenic spot for modern rural tourism, UNESCO emphasized its historical and cultural significance, and international tourists perceived it as a hybrid image. Conclusions: The study identified a preservation-growth continuum and showed different and even competing perspectives. It also discussed two contested issues in the field. The study contributes to heritage studies by developing an interdisciplinary discursive framework and suggests practical implications to heritage management and professional communication.
|
[
"Discourse & Pragmatics",
"Visual Data in NLP",
"Semantic Text Processing",
"Multimodality"
] |
[
71,
20,
72,
74
] |
SCOPUS_ID:33746802824
|
A Maximal Figure-of-Merit (MFoM)-learning approach to robust classifier design for text categorization
|
We propose a maximal figure-of-merit (MFoM) learning approach for robust classifier design, which directly optimizes performance metrics of interest for different target classifiers. The proposed approach, which embeds the decision functions of classifiers and performance metrics into an overall training objective, learns the parameters of classifiers in a decision-feedback manner to effectively take into account both positive and negative training samples, thereby reducing the required amount of positive training data. It has three desirable properties: (a) it is performance-metric-oriented learning; (b) the optimized metric is consistent across training and evaluation sets; and (c) it is more robust and less sensitive to data variation, and can handle scenarios with insufficient training data. We evaluate it on a text categorization task using the Reuters-21578 dataset. Training an F1-based binary tree classifier using MFoM, we observed significantly improved performance and enhanced robustness compared to the baseline and SVM, especially for categories with insufficient training samples. Its generality for designing other metric-based classifiers is also demonstrated by comparing precision-, recall-, and F1-based classifiers. The results clearly show consistency of performance between the training and evaluation stages for each classifier, and MFoM optimizes the chosen metric. © 2006 ACM.
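A minimal sketch of metric-oriented training in the spirit of MFoM: a smooth surrogate of F1 is built from soft true-positive/false-positive counts so that gradients flow. The sigmoid smoothing and this particular surrogate are illustrative assumptions, not the paper's exact figure-of-merit.

```python
import numpy as np

def soft_f1(scores, labels, alpha=10.0):
    # Sigmoid-smoothed decisions make the F1 surrogate differentiable.
    p = 1.0 / (1.0 + np.exp(-alpha * scores))
    tp = np.sum(p * labels)           # soft true positives
    fp = np.sum(p * (1 - labels))     # soft false positives
    fn = np.sum((1 - p) * labels)     # soft false negatives
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

scores = np.array([2.0, -1.0, 0.5, -2.0])
labels = np.array([1, 0, 1, 0])
print(soft_f1(scores, labels))   # close to 1 when decisions match labels
```

Training would maximize this quantity (e.g. minimize 1 - soft_f1) by gradient ascent, so the same metric is optimized at training time and reported at evaluation time.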
|
[
"Information Extraction & Text Mining",
"Text Classification",
"Robustness in NLP",
"Information Retrieval",
"Responsible & Trustworthy NLP"
] |
[
3,
36,
58,
24,
4
] |