id: string (length 20–52)
title: string (length 3–459)
abstract: string (length 0–12.3k)
classification_labels: list
numerical_classification_labels: list
SCOPUS_ID:84901794363
A Fast and Efficient Thinning Algorithm for Binary Images
Skeletonization (also known as thinning) is an important step in the pre-processing phase of many pattern recognition techniques. The output of the skeletonization process is the skeleton of the pattern in the images. Skeletonization is a crucial process for many applications such as OCR and writer identification. However, the improvements in this area are only a recent phenomenon and still require more research. In this paper, a new skeletonization algorithm is proposed. This algorithm combines parallel and sequential approaches, and is categorized as an iterative approach. The suggested method is evaluated through experiments on a benchmark dataset. The outcome is much better results compared to the other thinning methods discussed in the comparison section. © 2013 Published by ITB Journal Publisher.
[ "Visual Data in NLP", "Multimodality", "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 20, 74, 4, 68 ]
https://aclanthology.org//K17-3025/
A Fast and Lightweight System for Multilingual Dependency Parsing
We present a multilingual dependency parser with a bidirectional-LSTM (BiLSTM) feature extractor and a multi-layer perceptron (MLP) classifier. We trained our transition-based projective parser on UD version 2.0 datasets without any additional data. The parser is fast, lightweight and effective on big treebanks. In the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, the official results show that the macro-averaged LAS F1 score of our system Mengest is 61.33%.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Syntactic Parsing", "Multilinguality" ]
[ 52, 72, 15, 28, 0 ]
http://arxiv.org/abs/1206.6426v1
A Fast and Simple Algorithm for Training Neural Probabilistic Language Models
In spite of their superior performance, neural probabilistic language models (NPLMs) remain far less widely used than n-gram models due to their notoriously long training times, which are measured in weeks even for moderately-sized datasets. Training NPLMs is computationally expensive because they are explicitly normalized, which leads to having to consider all words in the vocabulary when computing the log-likelihood gradients. We propose a fast and simple algorithm for training NPLMs based on noise-contrastive estimation, a newly introduced procedure for estimating unnormalized continuous distributions. We investigate the behaviour of the algorithm on the Penn Treebank corpus and show that it reduces the training times by more than an order of magnitude without affecting the quality of the resulting models. The algorithm is also more efficient and much more stable than importance sampling because it requires far fewer noise samples to perform well. We demonstrate the scalability of the proposed approach by training several neural language models on a 47M-word corpus with an 80K-word vocabulary, obtaining state-of-the-art results on the Microsoft Research Sentence Completion Challenge dataset.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
http://arxiv.org/abs/1810.04142v1
A Fast, Compact, Accurate Model for Language Identification of Codemixed Text
We address fine-grained multilingual language identification: providing a language code for every token in a sentence, including codemixed text containing multiple languages. Such text is prevalent online, in documents, social media, and message boards. We show that a feed-forward network with a simple globally constrained decoder can accurately and rapidly label both codemixed and monolingual text in 100 languages and 100 language pairs. This model outperforms previously published multilingual approaches in terms of both accuracy and speed, yielding an 800x speed-up and a 19.5% averaged absolute gain on three codemixed datasets. It furthermore outperforms several benchmark systems on monolingual language identification.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Multilinguality" ]
[ 3, 24, 36, 0 ]
http://arxiv.org/abs/1807.05855v1
A Fast-Converged Acoustic Modeling for Korean Speech Recognition: A Preliminary Study on Time Delay Neural Network
In this paper, a time delay neural network (TDNN) based acoustic model is proposed to implement fast-converged acoustic modeling for Korean speech recognition. The TDNN has an advantage in fast convergence where the amount of training data is limited, due to subsampling which excludes duplicated weights. The TDNN showed an absolute improvement of 2.12% in terms of character error rate compared to feed-forward neural network (FFNN) based modeling for Korean speech corpora. The proposed model converged 1.67 times faster than an FFNN-based model did.
[ "Text Generation", "Speech & Audio in NLP", "Speech Recognition", "Multimodality" ]
[ 47, 70, 10, 74 ]
http://arxiv.org/abs/cmp-lg/9610004v1
A Faster Structured-Tag Word-Classification Method
Several methods have been proposed for processing a corpus to induce a tagset for the sub-language represented by the corpus. This paper examines a structured-tag word classification method introduced by McMahon (1994) and discussed further by McMahon & Smith (1995) in cmp-lg/9503011. Two major variations, (1) non-random initial assignment of words to classes and (2) moving multiple words in parallel, together provide robust non-random results with a speed increase of 200% to 450%, at the cost of slightly lower quality than McMahon's method's average quality. Two further variations, (3) retaining information from less-frequent words and (4) avoiding reclustering closed classes, are proposed for further study. Note: The speed increases quoted above are relative to my implementation of my understanding of McMahon's algorithm; this takes time measured in hours and days on a home PC. A revised version of the McMahon & Smith (1995) paper has appeared (June 1996) in Computational Linguistics 22(2):217-247; this refers to a time of "several weeks" to cluster 569 words on a Sparc-IPC.
[ "Text Classification", "Syntactic Text Processing", "Text Clustering", "Tagging", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 15, 29, 63, 24, 3 ]
SCOPUS_ID:85116485901
A Fault Data Generation Algorithm Based on GAN and Policy Gradient Mechanism
Generative adversarial networks (GANs) are widely used in various fields. However, when generating text data with contextual correlation characteristics, such as fault data, GANs have many limitations. On the one hand, discrete data output makes it difficult to pass gradient updates from the discriminator to the generator; on the other hand, it is difficult for the discriminator to process incompletely generated sequences. In this paper, we propose a fault data generation algorithm based on GAN and a policy gradient mechanism. The reinforcement learning method is used to solve the gradient update transfer problem, and the policy gradient algorithm directly updates the parameters of the generator; at the same time, the Upper Confidence Bound Applied to Trees (UCT) algorithm simulates the incomplete sequence into a complete sequence so that the discriminator can evaluate its reward value. The simulation results show that our fault data generation algorithm based on GAN and the policy gradient mechanism performs better in the fault data generation task.
[ "Language Models", "Semantic Text Processing", "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 52, 72, 58, 4 ]
SCOPUS_ID:85147494250
A Feasibility Study for inclusion of Ethics and Social issues in Engineering and Design Coursework in Australia
This paper reports on a feasibility study on including ethics and social issues in the current curriculum of a school of engineering and information technology in an Australian university. The study has three goals: first, to understand the current status of inclusion of ethics and social issues in engineering courses; second, to understand the willingness of staff within the school to include ethics and societal issues in their courses; third, to understand the opportunities and challenges for inclusion of ethical and societal issues in the coursework. Our methods include interviews with school staff and subject matter experts as well as analyses of textual artifacts such as course outlines, course readings, student assignments, and accreditation reports. The analysis of textual artifacts runs partially via an automated text analyzer that searches the dataset of course materials for words with ethical connotations, such as safety, responsibility, privacy, and harm. A manual (human) analysis of the coursework was done for those courses that gave insufficient results in the automated text analyzer. We looked for opportunities to include ethics and societal issues in coursework. The conclusion is that there is a general consensus among staff that ethics and societal issues deserve more attention in the school. At the same time, there is concern that including ethics and societal issues takes away valuable teaching time for technical material. There is a preference for an integrated way of teaching ethics, rather than one separate engineering ethics course.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
http://arxiv.org/abs/2203.08685v2
A Feasibility Study of Answer-Agnostic Question Generation for Education
We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% $\rightarrow$ 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.
[ "Question Generation", "Text Generation" ]
[ 76, 47 ]
SCOPUS_ID:85125804260
A Feasibility Study of Open-Source Sentiment Analysis and Text Classification Systems on Disaster-Specific Social Media Data
Crisis informatics is a multi-disciplinary area of research that has taken on renewed urgency due to the COVID-19 pandemic and the runaway effects of climate change. Due to scarce resources, technology, especially augmented artificial intelligence (AI), has the potential to play a meaningful role by using information management for facilitating better crisis response. In part, this is both due to improvements in the underlying technology, as well as an increasing willingness by stakeholders to release data and systems as open-source. Yet, it is still not clear from published literature if such established systems are truly useful on real-world crisis datasets (such as acquired from Twitter) that often contain noise and inconsistencies. In this paper, we explore this agenda by conducting a set of case studies, using real social media data collected during six disasters (including Hurricane Sandy and the Boston Marathon Bombings) and made publicly available on a crisis informatics platform. We apply established, independently developed AI tools, including a resource specifically designed for the crisis domain, to explore whether they yield useful insights that could be helpful to first-responders. Our results reveal that, while such insights can be obtained with relatively low effort, some caveats and best practices do apply, and sentiment analysis results (in particular) are not always consistent.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
http://arxiv.org/abs/2007.06390v2
A Feature Analysis for Multimodal News Retrieval
Content-based information retrieval is based on the information contained in documents rather than using metadata such as keywords. Most information retrieval methods are either based on text or image. In this paper, we investigate the usefulness of multimodal features for cross-lingual news search in various domains: politics, health, environment, sport, and finance. To this end, we consider five feature types for image and text and compare the performance of the retrieval system using different combinations. Experimental results show that retrieval results can be improved when considering both visual and textual information. In addition, it is observed that among textual features entity overlap outperforms word embeddings, while geolocation embeddings achieve better performance among visual features in the retrieval task.
[ "Visual Data in NLP", "Semantic Text Processing", "Representation Learning", "Information Retrieval", "Multimodality" ]
[ 20, 72, 12, 24, 74 ]
SCOPUS_ID:84987712279
A Feature Based Approach for Sentiment Analysis by Using Support Vector Machine
In this modern era of globalization, e-commerce has become one of the most convenient ways to shop. Every day people buy many products online and post reviews about the products they have used. These reviews play a vital role in determining how far a product has been placed in consumers' psyche, so that the manufacturer can modify the features of the product as required; they also help new consumers decide whether to buy the product. However, it would be a tedious task to manually extract the overall opinion out of enormous unstructured data. This problem can be addressed by an automated system called 'Sentiment Analysis and Opinion Mining' that can analyze and extract the users' perception from the reviews as a whole. In our work we have developed an overall process of 'Aspect or Feature based Sentiment Analysis' by using a classifier called Support Vector Machine (SVM) in a novel approach. It proves to be one of the most effective ways to analyze and extract the overall users' view about a particular feature and the whole product as well.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85116800075
A Feature Based Classification and Analysis of Hidden Markov Model in Speech Recognition
Speech recognition converts the acoustic signal received from a speaker or a telephone into a sequence of words. Speech recognition is also called computer speech cognizance, which means making a digital device understand what we are saying. It helps users direct their systems to perform work and avoids typing, because a system can write words faster than a human being. Several HMM-based models have been developed by researchers for speech recognition, but due to the daily advancement of the technology landscape, robust techniques are still needed in the field of speech recognition. Owing to its significant abilities, the HMM has been classified into variants, and with their help several speech recognition techniques have been developed. The comparative analysis of the various HMM models shows their efficiency and identifies the most effective model in the field of speech recognition.
[ "Text Classification", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Speech Recognition", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 70, 74, 47, 10, 24, 3 ]
http://arxiv.org/abs/2201.04227v1
A Feature Extraction based Model for Hate Speech Identification
The detection of hate speech online has become an important task, as offensive language such as hurtful, obscene and insulting content can harm marginalized people or groups. This paper presents the TU Berlin team's experiments and results on tasks 1A and 1B of the shared task on hate speech and offensive content identification in Indo-European languages 2021. The success of different Natural Language Processing models is evaluated for the respective subtasks throughout the competition. We tested different models based on recurrent neural networks at word and character levels, as well as transfer learning approaches based on BERT, on the dataset provided by the competition. Among the models used in the experiments, the transfer learning-based models achieved the best results in both subtasks.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 52, 72, 3, 17, 4 ]
SCOPUS_ID:85055862286
A Feature Fusion Based Approach for Handwritten Bangla Character Recognition Using Extreme Learning Machine
Optical Character Recognition (OCR) is an abstruse field of pattern recognition. An active branch of OCR is handwritten character recognition. This paper presents Bangla handwritten character recognition based on a feature fusion endeavor. Character recognition mostly depends on impeccable features extracted from input images. The coupling of two distinct feature vectors obtained by Histogram of Oriented Gradients (HOG) and a Gabor filter is illustrated here. To evaluate the recognition rate of input characters, an Extreme Learning Machine (ELM), which is a feed-forward neural network, is used. A 5-fold cross-validation scheme has been applied to measure the performance of the system. When using each feature extraction technique individually, HOG and the Gabor filter show 90.5% and 91.2% accuracy respectively. However, the feature fusion approach provides a better accuracy of 96.1%.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85093825408
A Feature Learning based Technique to Classify Medline Disease Abstracts
In bioinformatics, the classification of research documents in relation to their subject matter (specifically, disease types) is a very important task with many applications in the field. In terms of text classification, the total number of available disease documents is limited, with Medline being the main source, containing a mostly abstract-only collection of research papers. However, abstracts are an important tool in gauging the specific disease in question, and giving a quick summary of the research. We introduce an effective technique based on feature learning for inducing feature weights appropriate for text classification with small datasets (in training and testing). We applied and evaluated the proposed technique in a text classification task with medical abstracts representing disease texts. The conducted experiments with small datasets ranging from 40 to 500 documents (abstracts composed of around 300 words) produced excellent and promising results with respect to classification accuracy and AUC. In essence, the proposed technique gives leverage to the class distribution over each attribute rather than the attribute distribution over the two classes. Our technique consistently showed higher performance than conventional methods, and continued to improve with a decrease in the number of documents across all disease abstract classification experiments.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85014914424
A Feature terms extraction method based on polarity analysis of customer reviews for content-based recommendation
Our paper proposes a method for extracting feature terms expressing feelings regarding the use of a product from customer reviews on e-commerce sites, based on content-based recommendation. Considering previous research indicating that negative events and impressions have a greater impact than positive ones, we define terms relating to factors over which customers argue the pros and cons in reviews as features related to feelings regarding the use of a product. Our approach involves extracting sentences expressing opinions from customer reviews, and recognizing each evaluated term as a candidate for product features. Using the positive opinion ratio of each candidate to measure the extent of how divided the opinions of reviewers are, we extract feature terms for the selected product by considering a feature score based on the positive opinion ratio. We present an experiment to evaluate the utility of the feature terms extracted using our proposed method.
[ "Polarity Analysis", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 33, 78, 3 ]
SCOPUS_ID:85111377100
A Feature-Augmented Deep Learning Model for Extractive Summarization
Extractive text summarization can be seen as a classification task in which sentences from the document are labelled as either in-summary or not-in-summary. The most salient sentences (i.e., those with the highest ranking score) from the original document will be selected to generate the summary. The recent success of deep learning in the field of Natural Language Processing (NLP) has raised a trending research direction for the text summarization task. Many neural models have been proposed, in which applying recurrent neural networks (RNN) for extractive summarization is also becoming increasingly popular. In this paper, we aim to improve the baseline sequence-to-sequence model proposed by Nallapati et al. by augmenting more sentence features so that the generated summary can benefit from potential features of the whole document. On one hand, the additional sentence-based features enrich the representation vector resulting from the sentence-level RNN of the baseline model. On the other hand, the relevant information from the word level will also be added to the final vector to increase the accuracy of the classification task. The experiment has been conducted on the DailyMail/CNN dataset to evaluate our proposed method against state-of-the-art works. The empirical results show that the proposed model with augmented features increases ROUGE-1 and ROUGE-2 by about 0.3-0.4 points in comparison with related works.
[ "Text Classification", "Summarization", "Text Generation", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 30, 47, 24, 3 ]
SCOPUS_ID:85126018527
A Feature-Based Approach for Sentiment Quantification Using Machine Learning
Sentiment analysis has been one of the most active research areas in the past decade due to its vast applications. Sentiment quantification, a new research problem in this field, extends sentiment analysis from individual documents to an aggregated collection of documents. Sentiment analysis has been widely researched, but sentiment quantification has drawn less attention despite offering a greater potential to enhance current business intelligence systems. In this research, to perform sentiment quantification, a framework based on feature engineering is proposed to exploit diverse feature sets such as sentiment, content, and part of speech, as well as deep features including word2vec and GloVe. Different machine learning algorithms, including conventional, ensemble learners, and deep learning approaches, have been investigated on standard datasets of SemEval2016, SemEval2017, STS-Gold, and Sanders. The empirical-based results reveal the effectiveness of the proposed feature sets in the process of sentiment quantification when applied to machine learning algorithms. The results also reveal that the ensemble-based algorithm AdaBoost outperforms other conventional machine learning algorithms using a combination of proposed feature sets. The deep learning algorithm RNN, on the other hand, shows optimal results using word embedding-based features. This research has the potential to help diverse applications of sentiment quantification, including polling, trend analysis, automatic summarization, and rumor or fake news detection.
[ "Sentiment Analysis" ]
[ 78 ]
http://arxiv.org/abs/1803.08463v1
A Feature-Based Model for Nested Named-Entity Recognition at VLSP-2018 NER Evaluation Campaign
In this report, we describe our participating named-entity recognition system at the VLSP 2018 evaluation campaign. We formalized the task as a sequence labeling problem using the BIO encoding scheme. We applied a feature-based model which combines word, word-shape features, Brown-cluster-based features, and word-embedding-based features. We compare several methods to deal with nested entities in the dataset. We show that combining tags of entities at all levels for training a sequence labeling model (joint-tag model) improved the accuracy of nested named-entity recognition.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85061320480
A Feature-Enhanced Entity Recognition Method for Chinese Electronic Medical Records
Electronic medical records (EMRs) contain rich medical information, which is of great significance to medical research. The amount of Chinese EMRs is growing, whereas the current named entity recognition methods based on machine learning do not consider the unique characteristics of Chinese EMRs. In this paper, four types of entities, for disease, symptom, inspection and treatment, are trained and tested using the conditional random field model. Firstly, tag-of-words, part-of-speech and context are selected as the basic features. Secondly, by analyzing the characteristics of Chinese electronic medical record text, the chapter name feature, core word feature and word clustering feature are selected as the extended features. Among them, the core word feature is obtained by dividing the collected dictionary into characters and words and then counting the character frequency and word frequency. The word vector clustering feature is obtained by clustering word vectors. Then, by constructing a medical dictionary, a semi-automatic corpus annotation method is used to randomly extract and classify corpora of a certain scale. Finally, using the conditional random field tool CRF++ for learning and prediction, the method achieves an accuracy of 93.03%, a recall rate of 90.69%, and an F value of 91.85%.
[ "Semantic Text Processing", "Representation Learning", "Named Entity Recognition", "Text Clustering", "Information Extraction & Text Mining" ]
[ 72, 12, 34, 29, 3 ]
http://arxiv.org/abs/1611.05384v2
A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging
Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. However, the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. In this work, we propose a feature-enriched neural model for joint Chinese word segmentation and part-of-speech tagging task. Specifically, to simulate the feature templates of traditional discrete feature based models, we use different filters to model the complex compositional features with convolutional and pooling layer, and then utilize long distance dependency information with recurrent layer. Experimental results on five different datasets show the effectiveness of our proposed model.
[ "Tagging", "Text Segmentation", "Syntactic Text Processing" ]
[ 63, 21, 15 ]
SCOPUS_ID:85139351134
A Feature-Rich Vietnamese Named Entity Recognition Model
In this paper, we present a feature-based named entity recognition (NER) model that achieves state-of-the-art accuracy for the Vietnamese language. We combine word, word-shape features, PoS, chunk, Brown-cluster-based features, and word-embedding-based features in the Conditional Random Fields (CRF) model. We also explore the effects of word segmentation, PoS tagging, and chunking results of many popular Vietnamese NLP toolkits on the accuracy of the proposed feature-based NER model. Up to now, our work is the first that systematically performs an extrinsic evaluation of basic Vietnamese NLP toolkits on the downstream NER task. Experimental results show that while automatically-generated word segmentation is useful, PoS and chunking information generated by Vietnamese NLP tools does not show benefits for the proposed feature-based NER model.
[ "Chunking", "Syntactic Text Processing", "Named Entity Recognition", "Text Segmentation", "Information Extraction & Text Mining" ]
[ 43, 15, 34, 21, 3 ]
http://arxiv.org/abs/1803.04375v1
A Feature-Rich Vietnamese Named-Entity Recognition Model
In this paper, we present a feature-based named-entity recognition (NER) model that achieves state-of-the-art accuracy for the Vietnamese language. We combine word, word-shape features, PoS, chunk, Brown-cluster-based features, and word-embedding-based features in the Conditional Random Fields (CRF) model. We also explore the effects of word segmentation, PoS tagging, and chunking results of many popular Vietnamese NLP toolkits on the accuracy of the proposed feature-based NER model. Up to now, our work is the first that systematically performs an extrinsic evaluation of basic Vietnamese NLP toolkits on the downstream NER task. Experimental results show that while automatically-generated word segmentation is useful, PoS and chunking information generated by Vietnamese NLP tools does not show benefits for the proposed feature-based NER model.
[ "Chunking", "Syntactic Text Processing", "Named Entity Recognition", "Text Segmentation", "Information Extraction & Text Mining" ]
[ 43, 15, 34, 21, 3 ]
http://arxiv.org/abs/1505.00863v1
A Feature-based Classification Technique for Answering Multi-choice World History Questions
Our FRDC_QA team participated in the QA-Lab English subtask of the NTCIR-11. In this paper, we describe our system for solving real-world university entrance exam questions related to world history. Wikipedia is used as the main external resource for our system. Since problems that require choosing the right/wrong sentence from multiple sentence choices account for about two-thirds of the total, we design a dedicated classification-based model for solving this type of question. For other types of questions, we also design some simple methods.
[ "Text Classification", "Question Answering", "Natural Language Interfaces", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 27, 11, 24, 3 ]
SCOPUS_ID:85142247401
A Feature-based Stochastic Morphological Analyzer for Filipino Affixed Words
This paper presents a feature-based stochastic stemming method for obtaining affixes in the Filipino language. The method aims to introduce a statistical stemming approach that is based on the morphological attributes of Filipino words. Various Filipino word forms from different types of sources were obtained and tested in the affix removal system. The stemmer initially performs a lexicon check against the created lexicon, which comprises common base words and various categorical language forms. Feature examinations are executed to check the structure of each data entry. These include affix removal, word assimilation, partial duplication, derivational words, and inflectional words. A KSTEM assimilatory method from the Hybrid Stemming Algorithm is utilized to support derivational and inflectional conditions. Using the created stochastic feature-based template algorithm, the entries were analyzed to perform the final phase of the stemming process. An average of 92.46 percent accuracy was obtained using the test data and the stemming technique.
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
SCOPUS_ID:85140491782
A Feature-enhanced Model for Chinese Medical Text Classification Based on Improved BERT and Feature Fusion
With the development of internet hospitals and natural language processing technology, medical text classification based on machine learning has gained increasing attention. In this paper, we propose a feature-enhanced Chinese text classification model for intelligent medical consultation, which combines char-word level tokenization and feature fusion. Specifically, we expand the dictionary of BERT and improve its segmentation mechanism to make full use of the advantages of different segmentation mechanisms and pre-trained models. In addition, we select 243 common diseases and symptoms from different departments as manual features, and apply the attention mechanism, a CNN with multiple kernel sizes, and max-pooling to capture potential correlations and dependencies between patients' self-statements and the manual features. To verify the effectiveness of the proposed model, we carry out many simulation experiments. Experimental results show that compared to other typical text classification models, the proposed model achieves better classification performance, with an accuracy of 96.87% and an F1-score of 96.75%. In addition, the proposed model improves the accuracies of most departments by 1% compared to Google BERT.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85125194640
A Federated Adversarial Learning Method for Biomedical Named Entity Recognition
Identifying medical terms with specific meanings and information with semantic attributes is a prerequisite for conducting semantic analysis in the medical field. However, the problem of medical data islands restricts the development of entity recognition to a great extent. In addition to the ban on data sharing between different hospitals, different departments in the same hospital also cannot exchange data due to privacy concerns and ethical issues. To solve these problems, in the federated learning framework, a server collaboratively trains a global model by aggregating the encrypted or noised model parameters of the participating local clients without data leakage. In this paper, to apply federated learning effectively to biomedical named entity recognition (BioNER), we propose the federated adversarial learning (FAL) method, which takes both training cost and model performance into consideration. FAL not only makes use of a modified structured pruning scheme to reduce the number of model parameters, but also exploits an improved adversarial learning approach named the protected fast gradient method (PFGM) to enhance the robustness and generalization of the model. In the experiments, we use datasets from five departments of the same tumor hospital, such as the gynecology and gastric surgery departments. Results show that the proposed FAL framework achieves the expected effect with high efficiency.
[ "Responsible & Trustworthy NLP", "Named Entity Recognition", "Robustness in NLP", "Information Extraction & Text Mining" ]
[ 4, 34, 58, 3 ]
http://arxiv.org/abs/2302.09243v1
A Federated Approach for Hate Speech Detection
Hate speech detection has been the subject of high research attention, due to the scale of content created on social media. In spite of the attention and the sensitive nature of the task, privacy preservation in hate speech detection has remained under-studied. The majority of research has focused on centralised machine learning infrastructures which risk leaking data. In this paper, we show that using federated machine learning can help address the privacy concerns that are inherent to hate speech detection, while obtaining up to a 6.81% improvement in terms of F1-score.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
SCOPUS_ID:85124795749
A Federated Learning Based Chinese Text Classification Model with Parameter Factorization Weighting
Federated learning (FL), as an emerging field of machine learning, has received wide attention since the concept was proposed. In this paper, we conduct research on text classification based on federated learning, and propose a Federated Learning via Local Batch Normalization and Parameter Factorization Weighting based Chinese Text Classification Model (FedBN-PW-CTC). We evaluate our approach on both homogeneous and non-homogeneous datasets and confirm improvements of 2.95% in accuracy and 4.7% in F1 score on the non-homogeneous dataset.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85124737906
A Feminist Relational Discourse Analysis of mothers’ voiced accounts of the “duty to protect” children from fatness and fatphobia
Research has highlighted damaging contradictions in the responsibilisation of mothers over children's health, at once held responsible for tackling “childhood obesity” while being cautious not to encourage children to become obsessive with their bodies. While research has highlighted discourses of blame and elucidated mothers’ experiences, less is known about how mothers negotiate discourse in their voiced accounts. Utilising Feminist Relational Discourse Analysis, this study analysed interviews with 12 mothers in England to explore their experiences of a nationally mandated BMI screening programme in schools and how discourses shape their voices and experiences. In negotiating complex and contradictory discourses of motherhood and fatness, participants expressed a “duty to protect” their children from both fatness and fatphobia. Negotiating these responsibilities left mothers feeling guilt at their personal “failure” to protect their children from one or both harms. Mothers did not take up these discourses unproblematically; they resisted them, yet felt constrained by “expert knowledges” of fatness and motherhood that had clear consequences in responsibilising mothers for the “harm” of fatness. This analysis calls attention to how dominant discourses function personally and politically to responsibilise mothers for the harm caused by state-sanctioned fatphobia.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality" ]
[ 71, 72, 70, 74 ]
http://arxiv.org/abs/2106.14807v1
A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques
Recent developments in representational learning for information retrieval can be organized in a conceptual framework that establishes two pairs of contrasts: sparse vs. dense representations and unsupervised vs. learned representations. Sparse learned representations can further be decomposed into expansion and term weighting components. This framework allows us to understand the relationship between recently proposed techniques such as DPR, ANCE, DeepCT, DeepImpact, and COIL, and furthermore, gaps revealed by our analysis point to "low hanging fruit" in terms of techniques that have yet to be explored. We present a novel technique dubbed "uniCOIL", a simple extension of COIL that achieves to our knowledge the current state-of-the-art in sparse retrieval on the popular MS MARCO passage ranking dataset. Our implementation using the Anserini IR toolkit is built on the Lucene search library and thus fully compatible with standard inverted indexes.
[ "Information Retrieval" ]
[ 24 ]
http://arxiv.org/abs/2004.03485v1
A Few Topical Tweets are Enough for Effective User-Level Stance Detection
Stance detection entails ascertaining the position of a user towards a target, such as an entity, topic, or claim. Recent work that employs unsupervised classification has shown that performing stance detection on vocal Twitter users, who have many tweets on a target, can yield very high accuracy (+98%). However, such methods perform poorly or fail completely for less vocal users, who may have authored only a few tweets about a target. In this paper, we tackle stance detection for such users using two approaches. In the first approach, we improve user-level stance detection by representing tweets using contextualized embeddings, which capture latent meanings of words in context. We show that this approach outperforms two strong baselines and achieves 89.6% accuracy and 91.3% macro F-measure on eight controversial topics. In the second approach, we expand the tweets of a given user using their Twitter timeline tweets, and then we perform unsupervised classification of the user, which entails clustering a user with other users in the training set. This approach achieves 95.6% accuracy and 93.1% macro F-measure.
[ "Low-Resource NLP", "Information Extraction & Text Mining", "Information Retrieval", "Opinion Mining", "Sentiment Analysis", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 80, 3, 24, 49, 78, 36, 4 ]
SCOPUS_ID:85116756577
A Few-Shot Learning Graph Multi-trajectory Evolution Network for Forecasting Multimodal Baby Connectivity Development from a Baseline Timepoint
Charting the baby connectome evolution trajectory during the first year after birth plays a vital role in understanding dynamic connectivity development of baby brains. Such analysis requires acquisition of longitudinal connectomic datasets. However, both neonatal and postnatal scans are rarely acquired due to various difficulties. A small body of works has focused on predicting baby brain evolution trajectory from a neonatal brain connectome derived from a single modality. Although promising, large training datasets are essential to boost model learning and to generalize to a multi-trajectory prediction from different modalities (i.e., functional and morphological connectomes). Here, we unprecedentedly explore the question: “Can we design a few-shot learning-based framework for predicting brain graph trajectories across different modalities?” To this aim, we propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm where the teacher network learns on pure neonatal brain graphs and the student network learns on simulated brain graphs given a set of different timepoints. To the best of our knowledge, this is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction that is based on few-shot learning and generalized to graph neural networks (GNNs). To boost the performance of the student network, we introduce a local topology-aware distillation loss that forces the predicted graph topology of the student network to be consistent with the teacher network. Experimental results demonstrate substantial performance gains over benchmark methods. Hence, our GmTE-Net can be leveraged to predict atypical brain connectivity trajectory evolution across various modalities. Our code is available at https://github.com/basiralab/GmTE-Net.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Green & Sustainable NLP", "Structured Data in NLP", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 52, 80, 72, 68, 50, 4, 74 ]
SCOPUS_ID:85145567542
A Few-Shot Relation Extraction Method for Enhancing Entity Attention
The aim of the few-shot relation extraction (FSRE) method is to study the relation classification problem with fewer samples. An effective few-shot relation extraction model, EnAttConceptFERE, is proposed to effectively classify relations through externally introduced entity concept information and greater use of internal information. First, we introduce an entity-level vector representation, which selects appropriate entity concepts by comparing the similarity between the semantics of entity pairs in a sentence and the semantics of the concepts corresponding to the entities. In addition, access to external resources is often limited and the introduction of noise cannot be avoided. Therefore, this paper is based on fully mining the effective information of the sample itself; by introducing an entity self-attention module, the model can pay greater attention to the information of entity pairs that affect relation extraction. To verify the performance of EnAttConceptFERE, experiments are conducted on the FSRE benchmark dataset FewRel. Under the few-shot task settings of 5-way-1-shot (N=5, K=1) and 10-way-1-shot (N=10, K=1), the accuracy is improved by 2.53% and 1.06%, respectively, and under the 5-way-5-shot (N=5, K=5) setting, the accuracy is improved by 1.31% compared with the TD-Proto model, demonstrating the effectiveness and superiority of the EnAttConceptFERE model.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Relation Extraction", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 52, 80, 72, 75, 4, 3 ]
http://arxiv.org/abs/2009.07968v3
A Few-Shot Semantic Parser for Wizard-of-Oz Dialogues with the Precise ThingTalk Representation
Previous attempts to build effective semantic parsers for Wizard-of-Oz (WOZ) conversations suffer from the difficulty in acquiring a high-quality, manually annotated training set. Approaches based only on dialogue synthesis are insufficient, as dialogues generated from state-machine based models are poor approximations of real-life conversations. Furthermore, previously proposed dialogue state representations are ambiguous and lack the precision necessary for building an effective agent. This paper proposes a new dialogue representation and a sample-efficient methodology that can predict precise dialogue states in WOZ conversations. We extended the ThingTalk representation to capture all information an agent needs to respond properly. Our training strategy is sample-efficient: we combine (1) few-shot data sparsely sampling the full dialogue space and (2) synthesized data covering a subset space of dialogues generated by a succinct state-based dialogue model. The completeness of the extended ThingTalk language is demonstrated with a fully operational agent, which is also used in training data synthesis. We demonstrate the effectiveness of our methodology on MultiWOZ 3.0, a reannotation of the MultiWOZ 2.1 dataset in ThingTalk. ThingTalk can represent 98% of the test turns, while the simulator can emulate 85% of the validation set. We train a contextual semantic parser using our strategy, and obtain 79% turn-by-turn exact match accuracy on the reannotated test set.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Representation Learning", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 52, 80, 72, 12, 11, 38, 4, 68 ]
http://arxiv.org/abs/2209.09450v1
A Few-shot Approach to Resume Information Extraction via Prompts
Prompt learning has been shown to achieve near fine-tuning performance in most text classification tasks with very few training examples. It is advantageous for NLP tasks where samples are scarce. In this paper, we attempt to apply it to a practical scenario, i.e., resume information extraction, and to enhance the existing method to make it more applicable to the resume information extraction task. In particular, we created multiple sets of manual templates and verbalizers based on the textual characteristics of resumes. In addition, we compared the performance of Masked Language Model (MLM) pre-trained language models (PLMs) and Seq2Seq PLMs on this task. Furthermore, we improve the verbalizer design method for Knowledgeable Prompt-tuning in order to provide an example of designing prompt templates and verbalizers for other application-based NLP tasks. In this context, we propose the concept of the Manual Knowledgeable Verbalizer (MKV), a rule for constructing a Knowledgeable Verbalizer corresponding to the application scenario. Experiments demonstrate that templates and verbalizers designed based on our rules are more effective and robust than existing manual templates and automatically generated prompt methods. It is established that currently available automatic prompt methods cannot compete with manually designed prompt templates for some realistic task scenarios. The results of the final confusion matrix indicate that our proposed MKV significantly resolves the sample imbalance issue.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 3, 4 ]
SCOPUS_ID:85114961497
A Few-shot Learning Method Based on Bidirectional Encoder Representation from Transformers for Relation Extraction
Relation extraction is one of the fundamental subtasks of information extraction. Its purpose is to determine the implicit relation between two entities in a sentence. Convolutional Neural Networks and Feature Attention-based Prototypical Networks (CNN-Proto-FATT), a typical few-shot learning method, has been proposed and achieves competitive performance. However, convolutional neural networks suffer from insufficient relation instances in real scenarios, leading to undesirable results. To extract long-distance features more comprehensively, the pre-trained model Bidirectional Encoder Representation from Transformers (BERT) is incorporated into CNN-Proto-FATT. In this model, named Bidirectional Encoder Representation from Transformers and Feature Attention-based Prototypical Networks (BERT-Proto-FATT), multi-head attention helps the network extract semantic features across long and short distances to enhance the encoded representations. Experimental results indicate that BERT-Proto-FATT demonstrates significant improvements on the FewRel dataset.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Relation Extraction", "Representation Learning", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 52, 80, 72, 75, 12, 4, 3 ]
SCOPUS_ID:85137039991
A Filipino philosophy of higher education? Exploring the purpose of higher learning in the Philippines
This paper aims to explore the philosophy that is embedded in the Philippine higher education system, and to locate the country’s philosophy of education within the global context. The Philippine higher education is marked by complexity in terms of governance and organization. More importantly, its origin and development are deeply implicated in the country’s colonial history, which in turn significantly impacted how the aims and purposes of higher education are defined and perceived by various stakeholders. Such a condition has resulted in specific social practices, and in a specific understanding of what higher education must contribute to the society. This paper thus examines a ‘distinct’ Filipino philosophy of higher education, the narratives that formed it, and the tensions that surround it. Moreover, it brings the field of Filipino philosophy in conversation with postcoloniality and the emerging field of philosophy of higher education. Analysis of the data shows consistency of the discourse topics, and the concept of nation-building as fundamental in understanding the mandate of higher education institutions.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85070208366
A Filter Based Feature Selection for Imbalanced Text Classification
In this work, a text classification method using a filter-type feature selection for imbalanced data is addressed. The model initially clusters the documents associated with a class through hierarchical clustering, thereby achieving a balanced or near-balanced class. Later, a filter-type feature selection is recommended to choose the most discriminative features for text classification. Subsequently, the documents are stored in the form of interval-valued data. For classification purposes, a suitable symbolic classifier is recommended. The experimentation is done with two standard benchmark datasets, viz. Reuters-21578 and TDT2. The experimental results obtained from the proposed model are better in terms of F-measure when compared to the available models.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Text Clustering" ]
[ 3, 24, 36, 29 ]
http://arxiv.org/abs/2003.04987v1
A Financial Service Chatbot based on Deep Bidirectional Transformers
We develop a chatbot using Deep Bidirectional Transformer models (BERT) to handle client questions in financial investment customer service. The bot can recognize 381 intents, and decides when to say "I don't know" and escalates irrelevant/uncertain questions to human operators. Our main novel contribution is the discussion about uncertainty measure for BERT, where three different approaches are systematically compared on real problems. We investigated two uncertainty metrics, information entropy and variance of dropout sampling in BERT, followed by mixed-integer programming to optimize decision thresholds. Another novel contribution is the usage of BERT as a language model in automatic spelling correction. Inputs with accidental spelling errors can significantly decrease intent classification performance. The proposed approach combines probabilities from masked language model and word edit distances to find the best corrections for misspelled words. The chatbot and the entire conversational AI system are developed using open-source tools, and deployed within our company's intranet. The proposed approach can be useful for industries seeking similar in-house solutions in their specific business domains. We share all our code and a sample chatbot built on a public dataset on Github.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
SCOPUS_ID:85101577656
A Fine Tuned Model of Grasshopper Optimization Algorithm with Classifiers for Optimal Text Classification
Text classification is a widely used application of natural language processing and machine learning classifiers. Tuning the hyper-parameters of a classifier in machine learning is a difficult and important step; tuning hyper-parameters means selecting the optimal hyper-parameters of the algorithm. Many approaches for parameter tuning have been tested and applied in the literature to improve classification performance. Further, feature selection using machine learning is a major task when dealing with high-dimensional datasets. In this paper, we propose a hybrid model of a tuned Grasshopper Optimization Algorithm with classifiers. The Grasshopper Optimization Algorithm mimics the swarming behavior of grasshoppers. The aim of this meta-heuristic approach is to determine the minimal feature subset from all features to improve classification performance. For tuning the classifiers, we have used the random search technique. The classifiers chosen for classification are K-Nearest Neighbor and Support Vector Machine. Five multi-class datasets are used to evaluate the performance of the model in terms of accuracy and the AUC curve. All results are computed with the 10-fold cross-validation method. The evaluated results of the proposed model are compared with other algorithms, which verifies the performance of our technique. The proposed model outperformed all the compared state-of-the-art techniques.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
https://aclanthology.org//2021.wmt-1.59/
A Fine-Grained Analysis of BERTScore
BERTScore, a recently proposed automatic metric for machine translation quality, uses BERT, a large pre-trained language model to evaluate candidate translations with respect to a gold translation. Taking advantage of BERT’s semantic and syntactic abilities, BERTScore seeks to avoid the flaws of earlier approaches like BLEU, instead scoring candidate translations based on their semantic similarity to the gold sentence. However, BERT is not infallible; while its performance on NLP tasks set a new state of the art in general, studies of specific syntactic and semantic phenomena have shown where BERT’s performance deviates from that of humans more generally. This naturally raises the questions we address in this paper: what are the strengths and weaknesses of BERTScore? Do they relate to known weaknesses on the part of BERT? We find that while BERTScore can detect when a candidate differs from a reference in important content words, it is less sensitive to smaller errors, especially if the candidate is lexically or stylistically similar to the reference.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Syntactic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 15, 47, 0 ]
SCOPUS_ID:85145007365
A Fine-Grained Anomaly Detection Method Fusing Isolation Forest and Knowledge Graph Reasoning
Anomaly detection aims to find outlier data that do not conform to expected behaviors in a specific scenario, which is indispensable and critical in current safety-related studies. However, when performing outlier detection on a large-scale multidimensional dataset, most traditional methods make no distinction between classes of outliers and lack both reasoning ability and explainability of the analysis results, which leads to low accuracy of global outlier detection and an inability to effectively identify local outliers. In this paper, we propose a fine-grained anomaly detection method that combines isolation forest and knowledge graph reasoning tactics. First, an improved data feature extraction method and a more reasonable weighting strategy are used to find global outliers based on the traditional isolation forest algorithm; then we construct a custom rule base according to in-depth research and analysis of the global abnormal data, and finally ontology knowledge is reasoned over based on this rule base to detect local outliers. Analysis of extensive experimental results shows that our method can effectively discover more abnormal data without a large loss in time cost, has strong generalization, and correspondingly improves the performance of anomaly detection.
[ "Semantic Text Processing", "Structured Data in NLP", "Knowledge Representation", "Knowledge Graph Reasoning", "Reasoning", "Multimodality" ]
[ 72, 50, 18, 54, 8, 74 ]
SCOPUS_ID:85146820391
A Fine-Grained Bird Classification Method Based on Attention and Decoupled Knowledge Distillation
Classifying birds accurately is essential for ecological monitoring. In recent years, bird image classification has become an emerging method for bird recognition. However, the bird image classification task faces the challenges of high intra-class variance and low inter-class variance among birds, as well as low model efficiency. In this paper, we propose a fine-grained bird classification method based on attention and decoupled knowledge distillation. First, we propose an attention-guided data augmentation method. Specifically, the method obtains images of the object's key part regions through attention, enabling the model to learn and distinguish fine features. At the same time, based on the localization–recognition method, the bird category is predicted using the object image with finer features, which reduces the influence of background noise. In addition, we propose a model compression method based on decoupled knowledge distillation. We distill the target-class and non-target-class knowledge separately to eliminate the influence of the target-class prediction results on the transfer of non-target-class knowledge. This approach achieves efficient model compression. With 67% fewer parameters and only 1.2 G of computation, the model proposed in this paper still achieves an 87.6% success rate, while improving the model's inference speed.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Green & Sustainable NLP", "Responsible & Trustworthy NLP", "Text Classification", "Multimodality" ]
[ 20, 52, 72, 24, 3, 68, 4, 36, 74 ]
SCOPUS_ID:85075572084
A Fine-Grained Multilingual Analysis Based on the Appraisal Theory: Application to Arabic and English Videos
The objective of this paper is to compare the opinions expressed in two videos in two different languages. To do so, a fine-grained approach inspired by appraisal theory is used to analyze the content of videos that concern the same topic. In general, methods devoted to sentiment analysis concern the study of the polarity of a text or an utterance. The appraisal approach goes further than basic polarity sentiments and considers more detailed sentiments by covering additional attributes of opinions such as Attitude, Graduation, and Engagement. To achieve such a comparison, in AMIS (a Chist-Era project), we collected a corpus of 1503 Arabic and 1874 English videos. These videos need to be aligned in order to compare their contents, which is why we propose several methods to make them comparable. The best one is then selected to align them and to constitute the dataset necessary for the fine-grained sentiment analysis.
[ "Multilinguality", "Visual Data in NLP", "Linguistics & Cognitive NLP", "Linguistic Theories", "Sentiment Analysis", "Multimodality" ]
[ 0, 20, 48, 57, 78, 74 ]
http://arxiv.org/abs/1911.12722v2
A Fine-Grained Sentiment Dataset for Norwegian
We introduce NoReC_fine, a dataset for fine-grained sentiment analysis in Norwegian, annotated with respect to polar expressions, targets and holders of opinion. The underlying texts are taken from a corpus of professionally authored reviews from multiple news-sources and across a wide variety of domains, including literature, games, music, products, movies and more. We here present a detailed description of this annotation effort. We provide an overview of the developed annotation guidelines, illustrated with examples, and present an analysis of inter-annotator agreement. We also report the first experimental results on the dataset, intended as a preliminary benchmark for further experiments.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85056149548
A Fine-Grained Spatial-Temporal Attention Model for Video Captioning
Attention mechanisms have been extensively used in video captioning tasks, enabling further development of deeper visual understanding. However, most existing video captioning methods apply the attention mechanism at the frame level, which only models the temporal structure and generated words but ignores the region-level spatial information that provides accurate visual features corresponding to the semantic content. In this paper, we propose a fine-grained spatial-temporal attention model (FSTA), in which the spatial information of objects appearing in the video is our main concern. In the proposed FSTA, we achieve spatial hard attention at a fine-grained region level of objects through a mask pooling module, and compute temporal soft attention by using a two-layer LSTM network with an attention mechanism to generate sentences. We test the proposed model on two benchmark datasets, namely MSVD and MSR-VTT. The results indicate that our proposed FSTA model achieves competitive performance against the state of the art on both datasets.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 39, 47, 74 ]
SCOPUS_ID:85139075995
A Fine-Grained Study of Interpretability of Convolutional Neural Networks for Text Classification
In this work, we propose a new interpretability framework for convolutional neural networks trained for text classification. The objective is to discover the interpretability of the convolutional layers that compose the architecture. The introduced methodology explores the most relevant words for the classification and, more generally, looks for the most relevant concepts learned in the internal representation of the CNN. Here, the concepts studied were POS tags. Furthermore, we propose an iterative algorithm to determine the most relevant filters or neurons for the task. The outcome of this algorithm is a threshold used to mask the least active neurons and focus the interpretability study only on the most relevant parts of the network. The introduced framework has been validated by explaining the internal representation of a well-known sentiment analysis task. As a result of this study, we found evidence that certain POS tags, such as nouns and adjectives, are more relevant for the classification. Moreover, we found evidence of redundancy among the filters of a convolutional layer.
[ "Information Extraction & Text Mining", "Information Retrieval", "Syntactic Text Processing", "Explainability & Interpretability in NLP", "Sentiment Analysis", "Tagging", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 3, 24, 15, 81, 78, 63, 36, 4 ]
SCOPUS_ID:85123044999
A Fine-Tuned BERT-Based Transfer Learning Approach for Text Classification
The text classification problem has been thoroughly studied in information retrieval and data mining tasks. It is beneficial in multiple applications, including medical diagnosis in health care, targeted marketing, the entertainment industry, and group filtering processes. Recent innovations in both data mining and natural language processing have gained the attention of researchers from all over the world for developing automated text classification systems. NLP allows categorizing documents containing different texts. A huge amount of data is generated on social media sites by social media users. Three datasets have been used for experimental purposes: the COVID-19 fake news dataset, the COVID-19 English tweet dataset, and the extremist-non-extremist dataset, which contain news blogs, posts, and tweets related to coronavirus and hate speech. Transfer learning approaches had not previously been experimented with on the COVID-19 fake news and extremist-non-extremist datasets. Therefore, the proposed work applies transfer learning classification models to both these datasets to check the performance of transfer learning models. Models are trained and evaluated on accuracy, precision, recall, and F1-score. Heat maps are also generated for every model. In the end, future directions are proposed.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 24, 3, 17, 8, 46, 36, 4 ]
http://arxiv.org/abs/2205.11097v2
A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
While there is increasing concern about the interpretability of neural models, the evaluation of interpretability remains an open problem, due to the lack of proper evaluation datasets and metrics. In this paper, we present a novel benchmark to evaluate the interpretability of both neural models and saliency methods. This benchmark covers three representative NLP tasks: sentiment analysis, textual similarity and reading comprehension, each provided with both English and Chinese annotated data. In order to precisely evaluate the interpretability, we provide token-level rationales that are carefully annotated to be sufficient, compact and comprehensive. We also design a new metric, i.e., the consistency between the rationales before and after perturbations, to uniformly evaluate the interpretability on different types of tasks. Based on this benchmark, we conduct experiments on three typical models with three saliency methods, and unveil their strengths and weaknesses in terms of interpretability. We will release this benchmark at https://www.luge.ai/#/luge/task/taskDetail?taskId=15 and hope it can facilitate the research in building trustworthy systems.
[ "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP" ]
[ 81, 4 ]
SCOPUS_ID:85096226928
A Fine-grained complex question translation for KBQA
Translating natural language questions into SPARQL queries is a significant challenge of semantic parsing based KBQA due to the gap between their representations. In this paper, we designed a fine-grained complex question answering framework for KBQA, including a semantic similarity model and a neural machine translation model. Based on the above two models, we present a complex question processing algorithm that transforms questions into subqueries and then processes them in parallel. The experiments evaluated on benchmark datasets show that our approach is significantly effective.
[ "Machine Translation", "Semantic Text Processing", "Question Answering", "Semantic Similarity", "Natural Language Interfaces", "Text Generation", "Multilinguality" ]
[ 51, 72, 27, 53, 11, 47, 0 ]
http://arxiv.org/abs/2111.02735v3
A Fine-tuned Wav2vec 2.0/HuBERT Benchmark For Speech Emotion Recognition, Speaker Verification and Spoken Language Understanding
Speech self-supervised models such as wav2vec 2.0 and HuBERT are making revolutionary progress in Automatic Speech Recognition (ASR). However, it has not been fully established that they produce better performance on tasks other than ASR. In this work, we explored partial fine-tuning and entire fine-tuning of wav2vec 2.0 and HuBERT pre-trained models on three non-ASR speech tasks: Speech Emotion Recognition, Speaker Verification and Spoken Language Understanding. With simple proposed downstream frameworks, the best scores reached 79.58% weighted accuracy in the speaker-dependent setting and 73.01% weighted accuracy in the speaker-independent setting for Speech Emotion Recognition on IEMOCAP, 2.36% equal error rate for Speaker Verification on VoxCeleb1, 89.38% accuracy for Intent Classification and 78.92% F1 for Slot Filling on SLURP, showing the strength of fine-tuned wav2vec 2.0 and HuBERT at learning prosodic, voice-print and semantic representations.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Sentiment Analysis", "Emotion Analysis", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 78, 61, 10, 74 ]
SCOPUS_ID:85111130193
A Finetuned Language Model for Recommending cQA-QAs for Enriching Textbooks
Textbooks play a vital role in any educational system; despite their clarity and information, students tend to use community question answering (cQA) forums to acquire more knowledge. Due to the high data volume, the quality of question-answer (QA) pairs in cQA forums can differ greatly, so it takes additional effort to go through all possible QA pairs for better insight. This paper proposes a “sentence-level text enrichment system” in which a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) summarizer understands the given text, picks out the important sentences, and then rearranges them to give an overall summary of the text document. For each important sentence, we recommend the relevant QA pairs from cQA to make learning more effective. In this work, we fine-tuned the pre-trained BERT model to extract the QA pairs most relevant for enriching important sentences of the textbook. We notice that fine-tuning the BERT model significantly improves performance on QA selection and find that it outperforms existing RNN-based models for such tasks. We also investigate the effectiveness of our fine-tuned BERT-Large model on three cQA datasets for the QA selection task and observed a maximum improvement of 19.72% compared to previous models. Experiments have been carried out on NCERT (Grade IX and X) textbooks from India and the “Pattern Recognition and Machine Learning” textbook. The extensive evaluation demonstrates that the proposed model offers more precise and relevant recommendations in comparison to state-of-the-art methods.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Question Answering" ]
[ 52, 11, 72, 27 ]
SCOPUS_ID:85084126945
A Finger-Worn Device for Exploring Chinese Printed Text with Using CNN Algorithm on a Micro IoT Processor
This study designed a finger-worn device, named the Chinese FingerReader, that can be practically used by visually impaired users to recognize traditional Chinese characters on a micro internet of things (IoT) processor. The device is portable, easy to operate, and designed to be worn on the index finger. The Chinese FingerReader contains a small camera and buttons. The small camera captures images by identifying the relative position of the index finger to the printed text, and the buttons allow visually impaired users to capture an image and receive the audio output of the corresponding Chinese character via a voice prompt. To recognize Chinese characters, English letters, and numbers, a robust Chinese optical character recognition (OCR) system was developed using the training strategy of an augmented convolutional neural network algorithm. The proposed Chinese OCR system can segment a single character from the captured image and can accurately recognize rotated Chinese characters. The experimental results revealed that, compared with the OCR application programming interfaces of Google and Microsoft, the proposed OCR system achieves a 95% accuracy rate on rotated character images, whereas the Google and Microsoft OCR APIs achieve only 65% and 34% accuracy rates. These results illustrate that the proposed OCR system is more suitable for the needs of visually impaired people in actual use. Finally, three usage scenarios were simulated, and the accuracy and operational performance of the system were tested. Field tests of this system were conducted with visually impaired users to verify its feasibility.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:1642409510
A Finite State Approach in Modeling Human-Computer Communication
This paper presents a finite state model of the user-system communication. The model is based on deterministic and probabilistic finite automata. Finite state analysis of the user model is based on a slightly generalized notion of a personality model and is illustrated by a simple example of agents playing an iterated prisoner's dilemma game. The deterministic version of the finite state model is used to model the system. Further, a construction of the finite state models from a corpus of observed dialogues is described and briefly discussed. This technique can be used in programming and enhancing dialogue systems as well as in user modeling.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
https://aclanthology.org//2000.iwpt-1.33/
A Finite-state Parser with Dependency Structure Output
We show how to augment a finite-state grammar with annotations which allow dependency structures to be extracted. There are some difficulties in determinising the grammar, which is an essential step for computational efficiency, but they can be overcome. The parser also allows syntactically ambiguous structures to be packed into a single representation.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
http://arxiv.org/abs/1908.04212v1
A Finnish News Corpus for Named Entity Recognition
We present a corpus of Finnish news articles with a manually prepared named entity annotation. The corpus consists of 953 articles (193,742 word tokens) with six named entity classes (organization, location, person, product, event, and date). The articles are extracted from the archives of Digitoday, a Finnish online technology news source. The corpus is available for research purposes. We present baseline experiments on the corpus using a rule-based and two deep learning systems on two, in-domain and out-of-domain, test sets.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85146220828
A First Attempt at Unreliable News Detection in Swedish
Throughout the COVID-19 pandemic, a parallel infodemic has also been going on, in which information has been spreading faster than the virus itself. During this time, every individual needs access to accurate news in order to take corresponding protective measures, regardless of their country of origin or the language they speak, as misinformation can cause significant loss not only to individuals but also to society. In this paper, we train several machine learning models (ranging from traditional machine learning to deep learning) to try to determine whether news articles come from a reliable or an unreliable source, using just the body of the article. Moreover, we use a previously introduced corpus of news in Swedish related to the COVID-19 pandemic for the classification task. Given that our dataset is both unbalanced and small, we use subsampling and easy data augmentation (EDA) to try to solve these issues. In the end, we find that, due to the small size of our dataset, using traditional machine learning along with data augmentation yields results that rival those of transformer models such as BERT.
[ "Low-Resource NLP", "Text Classification", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 80, 36, 4, 24, 3 ]
https://aclanthology.org//2021.maiworkshop-1.4/
A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations
Explainable deep learning models are advantageous in many situations. Prior work mostly provides unimodal explanations through post-hoc approaches that are not part of the original system design. Explanation mechanisms also ignore useful textual information present in images. In this paper, we propose MTXNet, an end-to-end trainable multimodal architecture that generates multimodal explanations focusing on the text in the image. We curate a novel dataset, TextVQA-X, containing ground truth visual and multi-reference textual explanations that can be leveraged during both training and evaluation. We then quantitatively show that training with multimodal explanations complements model performance and surpasses unimodal baselines by up to 7% in CIDEr scores and 2% in IoU. More importantly, we demonstrate that the multimodal explanations are consistent with human interpretations, help justify the models’ decisions, and provide useful insights to help diagnose an incorrect prediction. Finally, we describe a real-world e-commerce application for using the generated multimodal explanations.
[ "Visual Data in NLP", "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 20, 81, 4, 74 ]
http://arxiv.org/abs/1710.08048v1
A First Step in Combining Cognitive Event Features and Natural Language Representations to Predict Emotions
We explore the representational space of emotions by combining methods from different academic fields. Cognitive science has proposed appraisal theory as a view on human emotion with previous research showing how human-rated abstract event features can predict fine-grained emotions and capture the similarity space of neural patterns in mentalizing brain regions. At the same time, natural language processing (NLP) has demonstrated how transfer and multitask learning can be used to cope with scarcity of annotated data for text modeling. The contribution of this work is to show that appraisal theory can be combined with NLP for mutual benefit. First, fine-grained emotion prediction can be improved to human-level performance by using NLP representations in addition to appraisal features. Second, using the appraisal features as auxiliary targets during training can improve predictions even when only text is available as input. Third, we obtain a representation with a similarity matrix that better correlates with the neural activity across regions. Best results are achieved when the model is trained to simultaneously predict appraisals, emotions and emojis using a shared representation. While these results are preliminary, the integration of cognitive neuroscience and NLP techniques opens up an interesting direction for future research.
[ "Representation Learning", "Semantic Text Processing", "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 12, 72, 48, 57 ]
http://arxiv.org/abs/1505.01504v2
A Fixed-Size Encoding Method for Variable-Length Sequences with its Application to Neural Network Language Models
In this paper, we propose the new fixed-size ordinally-forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. FOFE can model the word order in a sequence using a simple ordinally-forgetting mechanism according to the positions of words. In this work, we have applied FOFE to feedforward neural network language models (FNN-LMs). Experimental results have shown that without using any recurrent feedbacks, FOFE based FNN-LMs can significantly outperform not only the standard fixed-input FNN-LMs but also the popular RNN-LMs.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
https://aclanthology.org//W03-2905/
A Flexemic Tagset for Polish
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
https://aclanthology.org//W02-0209/
A Flexible Framework for Developing Mixed-Initiative Dialog Systems
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85053129832
A Flexible Keyphrase Extraction Technique for Academic Literature
A keyphrase extraction technique endeavors to extract quality keyphrases from a given document, which provide a high-level summary of that document. Except for statistical keyphrase extraction approaches, all other approaches are either domain-dependent or require a sufficient amount of training data, which is rare at present. Therefore, in this paper, a new tree-based automatic keyphrase extraction technique is proposed, which is domain-independent and employs nominal statistical knowledge but requires no training data. The proposed technique extracts a quality keyphrase by forming a tree from a candidate keyphrase; the tree is later expanded, shrunk, or kept in the same state depending on other similar candidate keyphrases. At the end, keyphrases are extracted from the resultant trees based on a value μ (the Maturity Index (MI) of a node in the tree), which enables flexibility in this process. A small μ value yields many and/or lengthy keyphrases (greedy approach), whereas a large μ value yields fewer and/or abbreviated keyphrases (conservative approach). Thereby, a user can extract his/her desired level of keyphrases by tuning the μ value. The effectiveness of the proposed technique is evaluated on an actual corpus and compared with the Rapid Automatic Keyphrase Extraction (RAKE) technique.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
SCOPUS_ID:85110436408
A Flexible Mapping Scheme for Discrete and Dimensional Emotion Representations: Evidence from Textual Stimuli
While research on emotions has become one of the most productive areas at the intersection of cognitive science, artificial intelligence and natural language processing, the diversity and incommensurability of emotion models seriously hampers progress in the field. We here propose kNN regression as a simple, yet effective method for computationally mapping between two major strands of emotion representations, namely dimensional and discrete emotion models. In a series of machine learning experiments on data sets of textual stimuli we gather evidence that this approach reaches a human level of reliability using a relatively small number of data points only.
[ "Emotion Analysis", "Semantic Text Processing", "Sentiment Analysis", "Representation Learning" ]
[ 61, 72, 78, 12 ]
http://arxiv.org/abs/2107.05377v2
A Flexible Multi-Task Model for BERT Serving
In this demonstration, we present an efficient BERT-based multi-task (MT) framework that is particularly suitable for iterative and incremental development of the tasks. The proposed framework is based on the idea of partial fine-tuning, i.e. only fine-tuning some top layers of BERT while keeping the other layers frozen. For each task, we independently train a single-task (ST) model using partial fine-tuning. Then we compress the task-specific layers in each ST model using knowledge distillation. Those compressed ST models are finally merged into one MT model so that the frozen layers of the former are shared across the tasks. We exemplify our approach on eight GLUE tasks, demonstrating that it is able to achieve both strong performance and efficiency. We have implemented our method in the utterance understanding system of XiaoAI, a commercial AI assistant developed by Xiaomi. We estimate that our model reduces the overall serving cost by 86%.
[ "Multilinguality", "Language Models", "Low-Resource NLP", "Machine Translation", "Semantic Text Processing", "Text Generation", "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 0, 52, 80, 51, 72, 47, 4, 68 ]
http://arxiv.org/abs/cmp-lg/9707003v1
A Flexible POS tagger Using an Automatically Acquired Language Model
We present an algorithm that automatically learns context constraints using statistical decision trees. We then use the acquired constraints in a flexible POS tagger. The tagger is able to use information of any degree: n-grams, automatically learned context constraints, linguistically motivated manually written constraints, etc. The sources and kinds of constraints are unrestricted, and the language model can be easily extended, improving the results. The tagger has been tested and evaluated on the WSJ corpus.
[ "Language Models", "Tagging", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 52, 63, 72, 15 ]
http://arxiv.org/abs/cs/0312050v1
A Flexible Pragmatics-driven Language Generator for Animated Agents
This paper describes the NECA MNLG; a fully implemented Multimodal Natural Language Generation module. The MNLG is deployed as part of the NECA system which generates dialogues between animated agents. The generation module supports the seamless integration of full grammar rules, templates and canned text. The generator takes input which allows for the specification of syntactic, semantic and pragmatic constraints on the output.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Text Generation" ]
[ 71, 72, 47 ]
http://arxiv.org/abs/cs/0403039v1
A Flexible Rule Compiler for Speech Synthesis
We present a flexible rule compiler developed for a text-to-speech (TTS) system. The compiler converts a set of rules into a finite-state transducer (FST). The input and output of the FST are subject to parameterization, so that the system can be applied to strings and sequences of feature-structures. The resulting transducer is guaranteed to realize a function (as opposed to a relation), and therefore can be implemented as a deterministic device (either a deterministic FST or a bimachine).
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
http://arxiv.org/abs/cs/9812018v1
A Flexible Shallow Approach to Text Generation
In order to support the efficient development of NL generation systems, two orthogonal methods are currently pursued with emphasis: (1) reusable, general, and linguistically motivated surface realization components, and (2) simple, task-oriented template-based techniques. In this paper we argue that, from an application-oriented perspective, the benefits of both are still limited. In order to improve this situation, we suggest and evaluate shallow generation methods associated with increased flexibility. We advocate a close connection between domain-motivated and linguistic ontologies that supports the quick adaptation to new tasks and domains, rather than the reuse of general resources. Our method is especially designed for generating reports with limited linguistic variations.
[ "Text Generation" ]
[ 47 ]
https://aclanthology.org//W02-0710/
A Flexible Speech to Speech Phrasebook Translator
[ "Multilinguality", "Machine Translation", "Speech & Audio in NLP", "Text Generation", "Multimodality" ]
[ 0, 51, 70, 47, 74 ]
http://arxiv.org/abs/1906.05685v2
A Focus on Neural Machine Translation for African Languages
African languages are numerous, complex and low-resourced. The datasets required for machine translation are difficult to discover, and existing research is hard to reproduce. Minimal attention has been given to machine translation for African languages so there is scant research regarding the problems that arise when using machine translation techniques. To begin addressing these problems, we trained models to translate English to five of the official South African languages (Afrikaans, isiZulu, Northern Sotho, Setswana, Xitsonga), making use of modern neural machine translation techniques. The results obtained show the promise of using neural machine translation techniques for African languages. By providing reproducible publicly-available data, code and results, this research aims to provide a starting point for other researchers in African machine translation to compare to and build upon.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/1604.01485v1
A Focused Dynamic Attention Model for Visual Question Answering
Visual Question Answering (VQA) problems are attracting increasing interest from multiple research disciplines. Solving VQA problems requires techniques from both computer vision, for understanding the visual contents of a presented image or video, and natural language processing, for understanding the semantics of the question and generating the answers. Regarding visual content modeling, most existing VQA methods adopt the strategy of extracting global features from the image or video, which inevitably fails to capture fine-grained information such as the spatial configuration of multiple objects. Extracting features from auto-generated regions, as some region-based image recognition methods do, cannot essentially address this problem and may introduce irrelevant features that overwhelm those relevant to the question. In this work, we propose a novel Focused Dynamic Attention (FDA) model to provide image content representations better aligned with the posed questions. Being aware of the key words in the question, FDA employs an off-the-shelf object detector to identify important regions and fuses the information from these regions and global features via an LSTM unit. Such question-driven representations are then combined with the question representation and fed into a reasoning unit to generate the answers. Extensive evaluation on a large-scale benchmark dataset, VQA, clearly demonstrates the superior performance of FDA over well-established baselines.
[ "Visual Data in NLP", "Natural Language Interfaces", "Question Answering", "Multimodality" ]
[ 20, 11, 27, 74 ]
http://arxiv.org/abs/2209.11910v2
A Focused Study on Sequence Length for Dialogue Summarization
Output length is critical to dialogue summarization systems. The dialogue summary length is determined by multiple factors, including dialogue complexity, summary objective, and personal preferences. In this work, we approach dialogue summary length from three perspectives. First, we analyze the length differences between existing models' outputs and the corresponding human references and find that summarization models tend to produce more verbose summaries due to their pretraining objectives. Second, we identify salient features for summary length prediction by comparing different model settings. Third, we experiment with a length-aware summarizer and show notable improvement on existing models if summary length can be well incorporated. Analysis and experiments are conducted on popular DialogSum and SAMSum datasets to validate our findings.
[ "Summarization", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Extraction & Text Mining" ]
[ 30, 11, 47, 38, 3 ]
SCOPUS_ID:85094653040
A Food Safety Text Filtering Method Based on Text Classification Techniques
In order to filter out food safety texts from unstructured textual data of various types, we propose a Chinese food safety text filtering method. Firstly, the data is collected and preprocessed by a web crawler; secondly, the BERT pre-training model is fine-tuned on small-scale data; then the document vectors are computed using a feature extraction method, proposed in this paper, that combines TF-IDF values with the word vectors of keywords; finally, an SVM classifier is trained on the document vectors to screen out food safety texts. The experiments show that the SVM classifier is able to filter out food safety texts from various types of text data with high performance, which basically achieves the expected results.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 12, 24, 3 ]
SCOPUS_ID:85112146057
A Forecast Model of Tourism Demand Driven by Social Network Data
To improve the forecasting accuracy of tourism demand through both the forecasting model and the data sources, this paper takes social network data as an entry point: it collects the social network data with a web crawler and then quantifies the data via sentiment analysis based on the BERT model. This paper uses structured variables such as social network data, weather, and holidays to build a tourism demand forecasting model based on gradient boosting regression trees. Finally, taking Huang Shan as an example, we use actual passenger terminal statistics and social network data to perform an empirical analysis of Huang Shan tourism demand forecasting. We compare with existing models and introduce an ablation study to verify the effectiveness of the considered factors. The results show that the model based on social network data improves forecasting accuracy over existing ones, and the ablation study shows that social network data helps to improve the accuracy of tourism demand forecasting.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:0003855894
A Form-based approach to natural language query processing
We describe a methodology for processing data retrieval and update queries using a form-based natural language interface. For the purpose of illustration, we use computer integrated manufacturing (CIM) as the application domain. The interface consists of a set of fourth-generation interface tools (SQL forms), a set of form definitions, a lexicon, and a parser. The forms are developed from the functional and data models of the system. A form definition consists of a form name, a form object, a set of form fields, and a set of fragment grammars. A form object is a single or composite entity that uniquely identifies a form. Form fields consist of database fields whose values can be entered by users (user-defined), and others whose values can be derived by the system (system-defined). Fragment grammars are templates that identify the information requested by user queries. The lexicon consists of all words recognized by the system, their grammatical categories, synonyms, and associations (if any) with database objects and forms. The parser scans a natural language query to identify a form in a bottom-up fashion. The information requested by the user query is determined in a top-down manner by matching the fragment grammars associated with a form against the user query. Extragrammatical inputs with limited deviations from the grammar rules are supported. Elliptical queries are supported by deriving the missing information from that specified in previous queries and forms. Combining a natural language processor with SQL forms allows update queries and prevents violation of database integrity constraints, duplication of records, and invalid data entry.
[ "Programming Languages in NLP", "Natural Language Interfaces", "Multimodality" ]
[ 55, 11, 74 ]
http://arxiv.org/abs/2003.07385v1
A Formal Analysis of Multimodal Referring Strategies Under Common Ground
In this paper, we present an analysis of computationally generated mixed-modality definite referring expressions using combinations of gesture and linguistic descriptions. In doing so, we expose some striking formal semantic properties of the interactions between gesture and language, conditioned on the introduction of content into the common ground between the (computational) speaker and (human) viewer, and demonstrate how these formal features can contribute to training better models to predict viewer judgment of referring expressions, and potentially to the generation of more natural and informative referring expressions.
[ "Multimodality" ]
[ 74 ]
https://aclanthology.org//W97-0405/
A Formal Basis for Spoken Language Translation by Analogy
[ "Machine Translation", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Reasoning", "Multilinguality" ]
[ 51, 70, 74, 47, 8, 0 ]
https://aclanthology.org//W13-0807/
A Formal Characterization of Parsing Word Alignments by Synchronous Grammars with Empirical Evidence to the ITG Hypothesis.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/2109.03942v1
A Formal Description of Sorani Kurdish Morphology
Sorani Kurdish, also known as Central Kurdish, has a complex morphology, particularly due to the patterns in which morphemes appear. Although several aspects of Kurdish morphology have been studied, such as pronominal endoclitics and Izafa constructions, Sorani Kurdish morphology has received trivial attention in computational linguistics. Moreover, some morphemes, such as the emphasis endoclitic =\^i\c{s}, and derivational morphemes have not been previously studied. To tackle the complex morphology of Sorani, we provide a thorough description of Sorani Kurdish morphological and morphophonological constructions in a formal way such that they can be used as finite-state transducers for morphological analysis and synthesis.
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
https://aclanthology.org//W97-0715/
A Formal Model of Text Summarization Based on Condensation Operators of a Terminological Logic
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
http://arxiv.org/abs/1807.01996v1
A Formal Ontology-Based Classification of Lexemes and its Applications
The paper describes the enrichment of OntoSenseNet - a verb-centric lexical resource for Indian Languages. A major contribution of this work is the preservation of an authentic Telugu dictionary by developing a computational version of the same. It is important because native speakers can better annotate the sense-types when both the word and its meaning are in Telugu. Hence efforts are made to develop the aforementioned Telugu dictionary and annotations are done manually. The manually annotated gold standard corpus consists of 8483 verbs, 253 adverbs and 1673 adjectives. Annotations are done by native speakers according to defined annotation guidelines. In this paper, we provide an overview of the annotation procedure and present the validation of the developed resource through inter-annotator agreement. Additional words from Telugu WordNet are added to our resource and are crowd-sourced for annotation. The statistics are compared with the sense-annotated lexicon, our resource, for more insights.
[ "Semantic Text Processing", "Text Classification", "Knowledge Representation", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 18, 24, 3 ]
https://aclanthology.org//W09-4401/
A Formal Scope on the Relations Between Definitions and Verbal Predications
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:67651204900
A Formal Syntax of Natural Languages and the Deductive Grammar
This paper presents a formal syntax framework of natural languages for computational linguistics. The abstract syntax of natural languages, particularly English, and their formal manipulations are described. On the basis of the abstract syntax, a universal language processing model and the deductive grammar of English are developed toward the formalization of Chomsky's universal grammar in linguistics. Comparative analyses of natural and programming languages, as well as the linguistic perception on software engineering, are discussed. A wide range of applications of the deductive grammar of English have been explored in language acquisition, comprehension, generation, and processing in cognitive informatics, computational intelligence, and cognitive computing.
[ "Reasoning", "Syntactic Text Processing" ]
[ 8, 15 ]
https://aclanthology.org//1995.iwpt-1.23/
A Formalism and a Parser for Lexicalised Dependency Grammars
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
http://arxiv.org/abs/cmp-lg/9504019v2
A Formalism and an Algorithm for Computing Pragmatic Inferences and Detecting Infelicities
Since Austin introduced the term ``infelicity'', the linguistic literature has been flooded with its use, but no formal or computational explanation has been given for it. This thesis provides one for those infelicities that occur when a pragmatic inference is cancelled. Our contribution assumes the existence of a finer grained taxonomy with respect to pragmatic inferences. It is shown that if one wants to account for the natural language expressiveness, one should distinguish between pragmatic inferences that are felicitous to defeat and pragmatic inferences that are infelicitously defeasible. Thus, it is shown that one should consider at least three types of information: indefeasible, felicitously defeasible, and infelicitously defeasible. The cancellation of the last of these determines the pragmatic infelicities. A new formalism has been devised to accommodate the three levels of information, called ``stratified logic''. Within it, we are able to express formally notions such as ``utterance U presupposes P'' or ``utterance U is infelicitous''. Special attention is paid to the implications that our work has in solving some well-known existential philosophical puzzles. The formalism yields an algorithm for computing interpretations for utterances, for determining their associated presuppositions, and for signalling infelicitous utterances that has been implemented in Common Lisp. The algorithm applies equally to simple and complex utterances and sequences of utterances.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:0035561386
A Foucauldian Gaze on Gender Research: What Do You Do When Confronted with the Tunnel at the End of the Light?
This article, the focus of which is on girls in mathematics, engages poststructural debates over knowledge and power to explore how female subjectivity is lived within the classroom, and the first section looks at some recent feminist reconstructionists' proposals developed from the idea of "different experience." The second section is set within the context of the poststructuralists' undermining of the "light" of progressive development, central to the Enlightenment project. Foucauldian ideas are introduced for a theoretical discussion about the ways in which the girl becomes gendered through available discourses and practices. Building on this discussion, the third section provides an analysis of some moments of classroom life and offers a different story about girls in school mathematics.
[ "Discourse & Pragmatics", "Reasoning", "Numerical Reasoning", "Semantic Text Processing" ]
[ 71, 8, 5, 72 ]
SCOPUS_ID:85128708010
A Foucauldian discourse analysis of unit coordinators’ experiences of consensus moderation in an Australian university
Consensus moderation, where collaboration and discussion take place to reach an agreement on mark allocation, is a frequently used approach to quality assurance in higher education. Unit coordinators play a vital role in facilitating consensus moderation, yet limited research has focused on their role in moderation practices. This study explored unit coordinators’ perceptions and experiences of consensus moderation in Australian higher education through five focus groups. Using Foucauldian discourse analysis, data analysis identified three discursive constructions of consensus moderation situated in the wider discourse of the neoliberal university: a truly collaborative process, an illusion, and a process to manage markers. Unit coordinators in this study were positioned as either supportive, compliant but ineffective, or powerful. In contrast, markers were positioned as helpful and compliant although inexperienced and needing support; uncooperative, resistant, troublesome and demanding; or inexperienced and malleable. This paper has identified varied knowledge and understanding of consensus moderation processes and practice. These findings can inform moderation policy and practices and unit coordinator professional development.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:0037506734
A Foundation for Representing and Querying Moving Objects
Spatio-temporal databases deal with geometries changing over time. The goal of our work is to provide a DBMS data model and query language capable of handling such time-dependent geometries, including those changing continuously that describe moving objects. Two fundamental abstractions are moving point and moving region, describing objects for which only the time-dependent position, or position and extent, respectively, are of interest. We propose to represent such time-dependent geometries as attribute data types with suitable operations, that is, to provide an abstract data type extension to a DBMS data model and query language. This paper presents a design of such a system of abstract data types. It turns out that besides the main types of interest, moving point and moving region, a relatively large number of auxiliary data types are needed. For example, one needs a line type to represent the projection of a moving point into the plane, or a "moving real" to represent the time-dependent distance of two moving points. It then becomes crucial to achieve (i) orthogonality in the design of the type system, i.e., type constructors can be applied uniformly; (ii) genericity and consistency of operations, i.e., operations range over as many types as possible and behave consistently; and (iii) closure and consistency between structure and operations of nontemporal and related temporal types. Satisfying these goals leads to a simple and expressive system of abstract data types that may be integrated into a query language to yield a powerful language for querying spatio-temporal data, including moving objects. The paper formally defines the types and operations, offers detailed insight into the considerations that went into the design, and exemplifies the use of the abstract data types using SQL. The paper offers a precise and conceptually clean foundation for implementing a spatio-temporal DBMS extension. 
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
https://aclanthology.org//W13-4043/
A Four-Participant Group Facilitation Framework for Conversational Robots
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:21944444399
A Fourier descriptor based character recognition engine implemented under the gamera open-source document processing framework
This paper discusses the implementation of an engine for performing optical character recognition of bi-tonal images using the Gamera framework, an existing open-source framework for building document analysis applications. The OCR engine uses features that are based on the Fourier descriptor to distinguish characters, and is designed to be able to handle character images that contain multiple boundaries. The algorithm works by assigning to each character image a signature that encodes the boundary types that are present in the image as well as the positional relationships that exist between them. Under this approach, only images having the same signature are comparable. Effectively, a meta-classifier is used which first computes the signature of an input image and then dispatches the image to an underlying neural network based classifier which is trained to distinguish between images having that signature. The performance of the OCR engine is evaluated on a set of sample images taken from the newspaper domain, and compares well with other OCR engines. The source code for this engine and all supporting modules is currently available upon request, and will eventually be made available through an open-source project on the sourceforge website. © 2005 SPIE and IS&T.
[ "Visual Data in NLP", "Text Classification", "Multimodality", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 20, 36, 74, 24, 3 ]
http://arxiv.org/abs/2102.06038v1
A Fractal Approach to Characterize Emotions in Audio and Visual Domain: A Study on Cross-Modal Interaction
It is already known that both auditory and visual stimuli are able to convey emotions in the human mind to different extents. The strength or intensity of the emotional arousal varies depending on the type of stimulus chosen. In this study, we try to investigate emotional arousal in a cross-modal scenario involving both auditory and visual stimuli while studying their source characteristics. A robust fractal analytic technique called Detrended Fluctuation Analysis (DFA) and its 2D analogue have been used to characterize three (3) standardized audio and video signals, quantifying their scaling exponents corresponding to positive and negative valence. It was found that there is a significant difference in scaling exponents corresponding to the two different modalities. Detrended Cross Correlation Analysis (DCCA) has also been applied to decipher the degree of cross-correlation among the individual audio and visual stimuli. This is a first-of-its-kind study which proposes a novel algorithm with which emotional arousal can be classified in a cross-modal scenario using only the source audio and visual signals while also attempting a correlation between them.
[ "Visual Data in NLP", "Speech & Audio in NLP", "Multimodality" ]
[ 20, 70, 74 ]
https://aclanthology.org//W17-2626/
A Frame Tracking Model for Memory-Enhanced Dialogue Systems
Recently, resources and tasks were proposed to go beyond state tracking in dialogue systems. An example is the frame tracking task, which requires recording multiple frames, one for each user goal set during the dialogue. This allows a user, for instance, to compare items corresponding to different goals. This paper proposes a model which takes as input the list of frames created so far during the dialogue, the current user utterance as well as the dialogue acts, slot types, and slot values associated with this utterance. The model then outputs the frame being referenced by each triple of dialogue act, slot type, and slot value. We show that on the recently published Frames dataset, this model significantly outperforms a previously proposed rule-based baseline. In addition, we propose an extensive analysis of the frame tracking task by dividing it into sub-tasks and assessing their difficulty with respect to our model.
[ "Representation Learning", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 12, 11, 72, 38 ]
https://aclanthology.org//W08-0120/
A Frame-Based Probabilistic Framework for Spoken Dialog Management Using Dialog Examples
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85122566041
A Frame-Inspired Task-Based Approach to Metaphor Teaching
The paper aims to make a contribution to recent discussions of the application of a particular cognitive linguistic theory, frame semantics, to the field of language teaching. The proposal put forward demonstrates the meeting points between frame semantics and task-based language teaching. In explaining why and how frame semantics can be integrated with task-based language teaching, we point out the key role of contextualization (both situational and linguistic). The implementation of the proposed integration is illustrated in the context of metaphor teaching: we present sample teaching units which raise English as a Foreign Language learners’ awareness of metaphor. These examples show how frame semantics can find its way into the task-based framework with a view to enhancing vocabulary acquisition through the conceptual and contextual grouping of lexico-grammatical items.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
https://aclanthology.org//W97-0204/
A Frame-Semantic Approach to Semantic Annotation
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
SCOPUS_ID:85006345056
A Framework Based on Semantic Spaces and Glyphs for Social Sensing on Twitter
In this paper we present a framework aimed at detecting emotions and sentiments in a Twitter stream. The approach uses the well-founded Latent Semantic Analysis technique, which can be seen as a bio-inspired cognitive architecture, to induce a semantic space where tweets are mapped and analysed by soft sensors. The measurements of the soft sensors are then used by a visualisation module which exploits glyphs to graphically present them. The result is an interactive map which makes it easy to explore reactions and opinions across the whole globe regarding tweets retrieved from specific queries.
[ "Emotion Analysis", "Sentiment Analysis" ]
[ 61, 78 ]
SCOPUS_ID:85116938947
A Framework System Using Word Mover’s Distance Text Similarity Algorithm for Assessing Privacy Policy Compliance
Privacy policies are important as they outline how organizations manage the personal data of consumers who use their services. However, a key issue with privacy policies is that they are lengthy and verbose, hindering the public from fully understanding the contents stated in the privacy policy. While there have been existing research works on assessing privacy policies, most of them are done manually by humans. Besides lacking automated solutions for assessing privacy policies' compliance with data protection regulations, there has been no usage of semantic text analytics approaches in the study of privacy policy compliance. As such, we researched and implemented a framework system embedded with a data protection requirements dictionary, where privacy policies are assessed automatically based on their coverage of the dictionary. We selected the General Data Protection Regulation (GDPR) as the primary source for our experiment for its broader requirements compared to other regulations. The assessment by the framework is realized through the Word Mover's Distance (WMD) text similarity algorithm, which calculates how close the meaning of a privacy policy is to the data protection regulation requirements in the dictionary. Our framework system is a novel implementation of the WMD text similarity algorithm for assessing privacy policies semantically, and it contributes an automated assessment of privacy policy compliance with personal data protection requirements.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]