Columns: id (string, 20 to 52 chars), title (string, 3 to 459 chars), abstract (string, 0 to 12.3k chars), classification_labels (list), numerical_classification_labels (list)
SCOPUS_ID:78651098688
A 3D-shape retrieval system to improve the text retrieval
This paper describes a 3D model retrieval system based on Google 3D Warehouse, which consists of 3D models in .skp format. Because of the rapid expansion of 3D models on the web and the persistent demand for 3D models in virtual reality, virtual scene modeling, and 3D animation, there is an increasing need for a search engine to help people find them. Traditional text-based search methods are not always effective for 3D data. In this paper, we propose a 3D model retrieval system combining the text-based search method and the shape-based search method. Using the Google 3D Warehouse, we start the initial search by entering keywords (such as the model's name). Next, according to the text-based search result, we obtain and download the retrieved models as a database. Then, we convert the SKP models and extract the shape features for further shape-based retrieval. Finally, we select a model from the text-based retrieved results as a query request and get more reasonable results by shape retrieval. The experimental results show that the proposed system achieves much better performance than the Google 3D Warehouse search engine. © 2010 IEEE.
[ "Information Retrieval" ]
[ 24 ]
SCOPUS_ID:0032958476
A 4-year investigation into phonetic inventory development in young cochlear implant users
Phonetic inventories of 9 children with profoundly impaired hearing who used the 22-electrode cochlear implant (Cochlear Limited) were monitored before implantation and during the first 4 years of implant use. All children were 5 years old or younger at the time of implant. Spontaneous speech samples were collected at regular intervals for each child and analyzed to investigate phone acquisition over the post-implant period. Acquisition was measured using two different criteria. The 'targetless' criterion required the child to produce a phonetically recognizable sound spontaneously, and the 'target' criterion required the child to produce the phone correctly at least 50% of the time in meaningful words. At 4 years post-implant, 40 out of 44 phones (91%) had reached the targetless criterion, and 29 phones (66%) had reached the target criterion for 5 or more of the children. Over the time of the study 100% of monophthongs, 63% of diphthongs, and 54% of consonants reached the target criterion. The average time taken for a phone to progress from the targetless to target criterion was 15 months. Overall, the data suggest trends in the order of phone acquisition similar to those of normally hearing children, although the process of acquisition occurred at a slower rate.
[ "Phonetics", "Syntactic Text Processing" ]
[ 64, 15 ]
SCOPUS_ID:85037163649
A 500 million word POS-tagged icelandic corpus
The new POS-tagged Icelandic corpus of the Leipzig Corpora Collection is an extensive resource for the analysis of the Icelandic language. As it contains a large share of all Web documents hosted under the .is top-level domain, it is especially valuable for investigations of modern Icelandic and non-standard language varieties. The corpus is accessible via a dedicated web portal, and large shares are available for download. The focus of this paper is the description of the tagging process and the evaluation of statistical properties such as word form frequencies and part-of-speech tag distributions. The latter, in particular, are compared with values from the Icelandic Frequency Dictionary (IFD) Corpus.
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
SCOPUS_ID:85055564668
A 5W1H based annotation scheme for semantic role labeling of English tweets
Semantic Role Labeling (SRL) is a well-researched area of Natural Language Processing. State-of-the-art lexical resources have been developed for SRL on formal texts that involve a tedious annotation scheme and require linguistic expertise. The difficulties increase manifold when such a complex annotation scheme is applied to tweets for identifying predicates and role arguments. In this paper, we present a simplified approach to the annotation of English tweets for the identification of predicates and corresponding semantic roles. For annotation purposes, we adopted the 5W1H (Who, What, When, Where, Why and How) concept, which is widely used in journalism. The 5W1H task seeks to extract the semantic information in a natural language sentence by distilling it into the answers to the 5W1H questions: Who, What, When, Where, Why and How. The 5W1H approach is comparatively simple and convenient with respect to the PropBank Semantic Role Labeling task. We report on the performance of our annotation scheme for SRL on tweets and show that non-expert annotators can produce quality SRL data for tweets. This paper also reports the difficulties and challenges involved in semantic role labeling on Twitter data and proposes solutions to them.
[ "Semantic Parsing", "Semantic Text Processing" ]
[ 40, 72 ]
http://arxiv.org/abs/2110.01258v1
A Self-supervised Tibetan-Chinese Vocabulary Alignment Method Based On Adversarial Learning
Tibetan is a low-resource language. In order to alleviate the shortage of parallel corpora between Tibetan and Chinese, this paper uses two monolingual corpora and a small seed dictionary to learn vocabulary alignment, combining a semi-supervised method based on the seed dictionary with a self-supervised adversarial training method that relies on similarity calculations over word clusters in different embedding spaces, and puts forward an improved self-supervised adversarial learning method that uses only Tibetan and Chinese monolingual data. The experimental results are as follows. First, the results for aligning Tibetan syllables with Chinese characters are poor, which reflects the weak semantic correlation between Tibetan syllables and Chinese characters; second, the semi-supervised method with a seed dictionary achieved a top-10 predicted-word accuracy of 66.5 (Tibetan-Chinese) and 74.8 (Chinese-Tibetan), while the improved self-supervised method reached an accuracy of 53.5 in both language directions.
[ "Low-Resource NLP", "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 80, 58, 4 ]
SCOPUS_ID:84900508576
A BBS opinion leader mining algorithm based on topic model
BBS opinion leader mining is a primary goal of public opinion control, and existing mining algorithms cannot identify topic-specific opinion leaders. This paper presents a BBS opinion leader mining algorithm based on a topic model (TOLM). The study first preprocesses the post titles based on their publication date and then performs further analysis using a semantic model based on latent Dirichlet allocation (LDA) combined with TF-IDF. Finally, the algorithm builds a variable-scale post-reply relational network for social network analysis and sentiment analysis and ranks the users' influence to identify the opinion leaders. The TOLM algorithm is designed to mine opinion leaders in trending network events quickly and has higher practicability by considering topic attributes, sentiment orientation, and network structure. The feasibility and effectiveness of the model are verified by experiments. Copyright © 2014 Binary Information Press.
[ "Topic Modeling", "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 9, 3, 78 ]
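The TOLM method above pairs latent Dirichlet allocation with TF-IDF over post titles. A minimal gensim sketch of that step, with hypothetical post titles and an illustrative topic count, not the paper's code:

```python
# Sketch: topic extraction from post titles with TF-IDF-weighted LDA (gensim).
# Titles and num_topics are illustrative placeholders.
from gensim import corpora
from gensim.models import LdaModel, TfidfModel

titles = [
    "city subway fare increase debate",
    "subway fare protest spreads online",
    "new smartphone release draws long queues",
]
texts = [t.split() for t in titles]                 # toy tokenization

dictionary = corpora.Dictionary(texts)              # word <-> id mapping
bow = [dictionary.doc2bow(t) for t in texts]        # bag-of-words corpus

tfidf = TfidfModel(bow)                             # reweight raw counts
lda = LdaModel(corpus=[tfidf[doc] for doc in bow],
               id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```

The per-topic word distributions printed here are what a system like TOLM would feed into its downstream per-topic influence ranking.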
SCOPUS_ID:84874800985
A BDI dialogue agent for social support: Specification and evaluation method
An important task for empathic agents is to provide social support, that is, to help people increase their well-being and decrease the perceived burden of their problems. The contributions of this paper are 1) the specification of speech acts for a social support dialogue agent, and 2) an evaluation method for this agent. The dialogue agent provides emotional support and practical advice to victims of cyberbullying. The conversation is structured according to the 5-phase model, a methodology for setting up online counseling for children. Before this agent can be used to support real children with real-world problems, a careful and thorough evaluation is of utmost importance. We propose an evaluation method for the social support dialogue agent based on multi-stage expert evaluation, in which (adult) online bullying counselors interact with the system with varying degrees of freedom. Only when we are convinced that the performance of the system is satisfactory will children be involved, again in multiple stages and under the supervision of experts.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85131231441
A BERT BASED JOINT LEARNING MODEL WITH FEATURE GATED MECHANISM FOR SPOKEN LANGUAGE UNDERSTANDING
Intent detection (ID) and slot filling (SF) are two major tasks for spoken language understanding (SLU). Recent joint learning approaches consider the relationship between intent detection and slot filling and leverage the knowledge shared across the two tasks to benefit each other. However, most existing methods do not make full use of the BERT model and gate mechanisms to improve the semantic correlation between the slot filling and intent detection tasks. In this paper, we propose a joint learning model based on BERT, which introduces a dual-encoder structure and utilizes semantic information by applying feature gate mechanisms in predicting intents and slots. Experimental results demonstrate that our proposed method provides very competitive results on the CAIS and DDoST datasets.
[ "Language Models", "Semantic Text Processing", "Semantic Parsing", "Intent Recognition", "Sentiment Analysis" ]
[ 52, 72, 40, 79, 78 ]
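The abstract above describes a BERT joint model with feature gating for intent detection and slot filling. A minimal PyTorch/transformers sketch of that shape, with an intent head on the [CLS] state and a slot head on gated token states; a plain sigmoid gate stands in for the paper's mechanism, and the checkpoint, label counts, and sample utterance are hypothetical:

```python
# Sketch: BERT-based joint intent detection and slot filling with a simple
# feature gate; a toy stand-in for the paper's model, not its implementation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointBert(nn.Module):
    def __init__(self, model_name, num_intents, num_slots):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.gate = nn.Linear(hidden * 2, hidden)      # fuse sentence + token features
        self.intent_head = nn.Linear(hidden, num_intents)
        self.slot_head = nn.Linear(hidden, num_slots)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state                 # (B, T, H)
        sentence = tokens[:, 0]                        # [CLS] state, (B, H)
        # Gate each token state with the sentence-level feature.
        expanded = sentence.unsqueeze(1).expand_as(tokens)
        g = torch.sigmoid(self.gate(torch.cat([tokens, expanded], dim=-1)))
        return self.intent_head(sentence), self.slot_head(g * tokens)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = JointBert("bert-base-uncased", num_intents=5, num_slots=9)
batch = tok("book a flight to boston", return_tensors="pt")
intent_logits, slot_logits = model(batch["input_ids"], batch["attention_mask"])
print(intent_logits.shape, slot_logits.shape)   # (1, 5) and (1, T, 9)
```

Training would sum a cross-entropy loss over the intent logits and a token-level cross-entropy over the slot logits.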
SCOPUS_ID:85115137488
A BERT Based Approach for Arabic POS Tagging
Large pre-trained language models, such as BERT, have recently achieved state-of-the-art performance in different natural language processing tasks. However, BERT-based models for the Arabic language are less abundant than for other languages. This paper aims to design a grammatical tagging system for Arabic texts using BERT. The main goal is to label an input sentence with the most likely sequence of tags at the output. We also build a large corpus by combining the available corpora, such as the Arabic WordNet and the Quranic Arabic Corpus. The accuracy of the developed system reached 91.69%. Our source code and corpus are available at GitHub upon request.
[ "Language Models", "Tagging", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 52, 63, 72, 15 ]
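The system above frames Arabic POS tagging as sequence labeling over a pre-trained BERT. A minimal sketch of that framing with the Hugging Face token-classification head; the multilingual checkpoint, toy tag set, and sample sentence are placeholders, since the paper's Arabic model and tag inventory are not specified here:

```python
# Sketch: POS tagging as token classification on top of a pre-trained BERT.
# Checkpoint name and tag inventory are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tags = ["NOUN", "VERB", "PART", "PREP"]          # toy tag set
model_name = "bert-base-multilingual-cased"      # stand-in for an Arabic BERT

tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(tags))

enc = tok("ذهب الولد إلى المدرسة", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                 # (1, T, num_labels)
pred = logits.argmax(-1)[0]
for token, tag_id in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(token, tags[int(tag_id)])
```

Fine-tuning on tagged Arabic sentences would train the randomly initialized classification head that this sketch leaves untrained.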
http://arxiv.org/abs/1901.08634v3
A BERT Baseline for the Natural Questions
This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions. Code, preprocessed data and pretrained model are available at https://github.com/google-research/language/tree/master/language/question_answering/bert_joint.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85145612050
A BERT Framework to Sentiment Analysis of Tweets
Sentiment analysis has been widely used on microblogging sites such as Twitter in recent decades, where millions of users express their opinions and thoughts thanks to its short and simple manner of expression. Several studies note that sentiment often cannot be determined from the user context because of varying tweet lengths and ambiguous emotional information. Hence, this study proposes text classification using bidirectional encoder representations from transformers (BERT) for natural language processing, combined with other model variants. The experimental findings demonstrate that the combinations of BERT with CNN, BERT with RNN, and BERT with BiLSTM perform well in terms of accuracy, precision, recall, and F1-score compared to Word2vec and to BERT used with no variant.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:85132015024
A BERT Model-Based Sentiment Analysis on COVID-19 Tweets
In the past few decades, the growth of data on the Internet has increased significantly, and even today, tons of data are generated with each passing day. The World Wide Web has become a great source for e-learning, sharing ideas, and exchanging schools of thought and views. Internet community sites like Twitter, Facebook, and Instagram have gained immense popularity and have gathered a huge pool of daily active users over the past few decades, as they provide a medium to exchange or express opinions and ideas about everything and anything. Many studies have been conducted on the topic of sentiment analysis, particularly on Twitter data. Sentiment analysis is effective for analyzing data in tweets, where opinions are very unstructured, varied, and positive, negative, or neutral. This research examines the evaluation of coronavirus-related Twitter data. We looked into the sentiment of a large number of tweets gathered by web scraping using various hashtags. The experimental results were presented using the bidirectional encoder representations from transformers (BERT) model, which was compared to the performance of traditional classification models like stochastic gradient descent, Naive Bayes, random forest, decision tree, logistic regression, and XGBoost. With an overall accuracy of 95.12%, the BERT model outperformed all other standard models.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 78, 24, 3 ]
SCOPUS_ID:85125999983
A BERT and Topic Model Based Approach to Reviews Requirements Analysis
With the rise of mobile applications, user reviews are an important avenue of user feedback in which users may mention different issues in using the software, for example, unresponsiveness or poor privacy. In order to extract effective requirement information and problematic feedback from these huge volumes of user reviews, this paper proposes a review non-functional requirement analysis method based on the BERT model and a topic model (NRABL). First, we use the BERT model for multi-label classification of the reviews and then use LDA (latent Dirichlet allocation) to extract topics for review analysis. This method can help developers quickly understand the users' requirements and the specific usage problems.
[ "Language Models", "Topic Modeling", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 9, 72, 3 ]
SCOPUS_ID:85107799787
A BERT based Sentiment Analysis and Key Entity Detection Approach for Online Financial Texts
The emergence and rapid progress of the Internet have had an ever-increasing impact on the financial domain. How to rapidly and accurately mine the key information from massive negative financial texts has become one of the key issues for investors and decision makers. To address this issue, we propose a sentiment analysis and key entity detection approach based on BERT, which is applied in online financial text mining and public opinion analysis in social media. Using a pre-trained model, we first study sentiment analysis, and then we consider key entity detection as a sentence matching or Machine Reading Comprehension (MRC) task at different granularities. Among these, we mainly focus on negative sentiment information. We detect the specific entity using our approach, which differs from traditional Named Entity Recognition (NER). In addition, we also use ensemble learning to improve the performance of the proposed approach. Experimental results show that the performance of our approach is generally higher than that of SVM, LR, NBM, and BERT on two financial sentiment analysis and key entity detection datasets.
[ "Language Models", "Semantic Text Processing", "Named Entity Recognition", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 52, 72, 34, 78, 3 ]
SCOPUS_ID:85127180477
A BERT based dual-channel explainable text emotion recognition system
In this paper, a novel dual-channel system for multi-class text emotion recognition is proposed, and a novel technique to explain its training and predictions is developed. The architecture of the proposed system contains the embedding module, the dual-channel module, the emotion classification module, and the explainability module. The embedding module extracts the textual features from the input sentences in the form of embedding vectors using the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model. The embedding vectors are then fed as inputs to the dual-channel network containing two network channels made up of a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. The intuition behind using CNN and BiLSTM in the two channels was to harness the strengths of the convolutional layer for feature extraction and the BiLSTM layer for extracting the text's order- and sequence-related information. The outputs of both channels are embedding vectors, which are concatenated and fed to the emotion classification module. The proposed system's architecture has been determined by thorough ablation studies, and a framework has been developed to discuss its computational cost. The emotion classification module learns and projects the emotion embeddings onto a hyperplane in the form of clusters. The proposed explainability technique explains the training and predictions of the proposed system by analyzing the inter- and intra-cluster distances and the intersection of these clusters. The proposed approach's consistent accuracy, precision, recall, and F1-score results on the ISEAR, Aman, AffectiveText, and EmotionLines datasets ensure its applicability to diverse texts.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Information Retrieval", "Sentiment Analysis", "Representation Learning", "Explainability & Interpretability in NLP", "Text Clustering", "Emotion Analysis", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 3, 24, 78, 12, 81, 29, 61, 36, 4 ]
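The dual-channel module above feeds BERT token embeddings to a CNN channel and a BiLSTM channel and concatenates the pooled channel outputs for classification. A minimal PyTorch sketch of that architecture; the checkpoint, channel widths, and emotion count are illustrative assumptions:

```python
# Sketch: dual-channel (CNN + BiLSTM) emotion classifier over BERT embeddings.
# Dimensions, checkpoint, and class count are illustrative placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DualChannel(nn.Module):
    def __init__(self, model_name, num_emotions):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        h = self.bert.config.hidden_size
        self.cnn = nn.Conv1d(h, 128, kernel_size=3, padding=1)   # channel 1
        self.lstm = nn.LSTM(h, 64, batch_first=True,
                            bidirectional=True)                  # channel 2
        self.cls = nn.Linear(128 + 128, num_emotions)

    def forward(self, input_ids, attention_mask):
        x = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state  # (B,T,H)
        c = torch.relu(self.cnn(x.transpose(1, 2))).max(dim=2).values   # (B,128)
        l, _ = self.lstm(x)                                             # (B,T,128)
        l = l.max(dim=1).values                                         # (B,128)
        return self.cls(torch.cat([c, l], dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = DualChannel("bert-base-uncased", num_emotions=7)
batch = tok("i cannot believe this happened", return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # (1, 7)
```

Max pooling over time keeps each channel's strongest feature regardless of where it occurs in the sentence.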
SCOPUS_ID:85133025073
A BERT-Based Approach for Multilingual Discourse Connective Detection
In this paper, we report on our experiments towards multilingual discourse connective (or DC) identification and show how language specific BERT models seem to be sufficient even with little task-specific training data. While some languages have large corpora with human annotated DCs, most languages are low in such resources. Hence, relying solely on discourse annotated corpora to train a DC identification system for low resourced languages is insufficient. To address this issue, we developed a model based on pretrained BERT and fine-tuned it with discourse annotated data of varying sizes. To measure the effect of larger training data, we induced synthetic training corpora with DC annotations using word-aligned parallel corpora. We evaluated our models on 3 languages: English, Turkish and Mandarin Chinese in the context of the recent DISRPT 2021 Task 2 shared task. Results show that the F-measure achieved by the standard BERT model (92.49%, 93.97%, 87.42% for English, Turkish and Chinese) is hard to improve upon even with larger task specific training corpora.
[ "Discourse & Pragmatics", "Language Models", "Semantic Text Processing", "Multilinguality" ]
[ 71, 52, 72, 0 ]
SCOPUS_ID:85149113734
A BERT-Based Artificial Intelligence to Analyze Free-Text Clinical Notes for Binary Classification in Papillary Thyroid Carcinoma Recurrence
Patient information in free-text form exists in medical information systems. Before the successes of natural language processing models, it cost considerable resources to refine unstructured information into neat information formats for training artificial intelligence models. Here, we applied a bidirectional encoder representations from transformers (BERT) classifier to analyze unstructured clinical text information on the diagnosis of recurrent papillary thyroid cancer (PTC). It achieved a performance of 98.8% in the binary classification of PTC recurrence.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85133590359
A BERT-Based Aspect-Level Sentiment Analysis Algorithm for Cross-Domain Text
Cross-domain text sentiment analysis is a text sentiment classification task that uses existing source-domain annotation data to assist the target domain, which can not only reduce the workload of annotating data in the new domain, but also significantly improve the utilization of source-domain annotation resources. In order to achieve effective cross-domain text sentiment classification, this paper proposes a BERT-based aspect-level sentiment analysis algorithm for cross-domain text to achieve fine-grained sentiment analysis. First, the algorithm uses the BERT structure to extract sentence-level and aspect-level representation vectors, extracts local features through an improved convolutional neural network, and combines the aspect-level corpus and the sentence-level corpus to form sequence sentence pairs. Then, the algorithm uses a domain-adversarial neural network to make the feature representations extracted from different domains as indistinguishable as possible, that is, the features extracted from the source and target domains become more similar. Finally, by training the sentiment classifier on the source-domain dataset with sentiment labels, the classifier is expected to achieve a good sentiment classification effect in both the source and target domains, at both the sentence level and the aspect level. At the same time, the pooled error values of the sentiment classifier and the domain adversary are propagated backwards to update and optimize the model parameters, thereby training a model with cross-domain analysis capability. Experiments are carried out on the Amazon product review dataset, with accuracy and F1 value used as evaluation indicators. Compared with other classical algorithms, the experimental results show that the proposed algorithm has better performance.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 23, 78, 36, 3 ]
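The domain-adversarial step described above, which drives source- and target-domain features to be indistinguishable, is commonly implemented with a gradient reversal layer. A minimal PyTorch sketch of that building block; this is the standard DANN-style construction, not the paper's code:

```python
# Sketch: gradient reversal layer (GRL), the usual building block for the
# domain-adversarial training the abstract describes; illustrative only.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)              # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Negated (scaled) gradient on the backward pass, so the feature
        # extractor learns domain-invariant features while the domain
        # classifier tries to tell domains apart.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

features = torch.randn(4, 16, requires_grad=True)
domain_logits = torch.nn.Linear(16, 2)(grad_reverse(features))
domain_logits.sum().backward()       # features.grad now carries reversed gradients
print(features.grad.shape)           # torch.Size([4, 16])
```

During training, the same features feed the sentiment head with normal gradients and the domain head through this layer with reversed gradients, which is what pushes the two domains' representations together.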
SCOPUS_ID:85129993541
A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay
This research applies a pre-trained bidirectional encoder representations from transformers (BERT) handwriting recognition model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance even with small datasets. The test data were pre-processed and fine-tuned, and the results were calculated in the form of a score prediction. The difference between the prediction and the actual score was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including those with grammatical errors, written by foreigners. These results are particularly meaningful in that the data included written language text produced by foreign learners, not native speakers.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85122020760
A BERT-Based Generation Model to Transform Medical Texts to SQL Queries for Electronic Medical Records: Model Development and Validation
Background: Electronic medical records (EMRs) are usually stored in relational databases that require SQL queries to retrieve information of interest. Effectively completing such queries can be a challenging task for medical experts due to the barriers in expertise. Existing text-to-SQL generation studies have not been fully embraced in the medical domain. Objective: The objective of this study was to propose a neural generation model that can jointly consider the characteristics of medical text and the SQL structure to automatically transform medical texts to SQL queries for EMRs. Methods: We proposed a medical text–to-SQL model (MedTS), which employed a pretrained Bidirectional Encoder Representations From Transformers model as the encoder and leveraged a grammar-based long short-term memory network as the decoder to predict the intermediate representation that can easily be transformed into the final SQL query. We adopted the syntax tree as the intermediate representation rather than directly regarding the SQL query as an ordinary word sequence, which is more in line with the tree-structure nature of SQL and can also effectively reduce the search space during generation. Experiments were conducted on the MIMICSQL dataset, and 5 competitor methods were compared. Results: Experimental results demonstrated that MedTS achieved the accuracy of 0.784 and 0.899 on the test set in terms of logic form and execution, respectively, which significantly outperformed the existing state-of-the-art methods. Further analyses proved that the performance on each component of the generated SQL was relatively balanced and offered substantial improvements. Conclusions: The proposed MedTS was effective and robust for improving the performance of medical text–to-SQL generation, indicating strong potential to be applied in the real medical scenario.
[ "Language Models", "Programming Languages in NLP", "Semantic Text Processing", "Representation Learning", "Text Generation", "Code Generation", "Multimodality" ]
[ 52, 55, 72, 12, 47, 44, 74 ]
SCOPUS_ID:85132965819
A BERT-Based Model for Question Answering on Construction Incident Reports
Construction sites are among the most hazardous workplaces. To reduce accidents, it is required to identify risky situations beforehand, and to describe which countermeasures to put in place. In this paper, we investigate possible techniques to support the identification of risky activities and potential hazards associated with those activities. More precisely, we propose a method for classifying injury narratives based on different attributes, such as work activity, injury type, and injury severity. We formulate our problem as a Question Answering (QA) task by fine-tuning BERT sentence-pair classification model, and we achieve state-of-the-art results on a dataset obtained from the Occupational Safety and Health Administration (OSHA). In addition, we propose a method for identifying potential hazardous items using a model-agnostic technique.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Question Answering" ]
[ 52, 11, 72, 27 ]
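The abstract above formulates incident-report classification as BERT sentence-pair scoring. A minimal transformers sketch of that pair formulation; the checkpoint, label meaning, and example texts are hypothetical:

```python
# Sketch: question-narrative pair classification with BERT, mirroring the
# QA formulation above; checkpoint and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # 2 = attribute matches / does not match

question = "What was the work activity when the injury occurred?"
narrative = "Worker fell from a ladder while installing roof panels."

enc = tok(question, narrative, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**enc).logits.softmax(-1)
print(probs)   # per-label probability for the (question, narrative) pair
```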
SCOPUS_ID:85100611845
A BERT-Based Semantic Matching Ranker for Open-Domain Question Answering
Open-domain question answering (QA) has been a hot topic in recent years. Previous work has shown that an effective ranker can improve the overall QA performance by denoising irrelevant context. Some recent works have also leveraged BERT pre-trained models to tackle open-domain QA tasks and achieved significant improvements. Nevertheless, these BERT-based models simply concatenate a paragraph with a question, ignoring the semantic similarity between them. In this paper, we propose a simple but effective BERT-based semantic matching ranker to compute the semantic similarity between a paragraph and a given question, in which three different representation aggregation functions are explored. To validate the generalization performance of our ranker, we conduct a series of experiments on two public open-domain QA datasets. Experimental results demonstrate that the proposed ranker contributes significant improvements to both the ranking and the final QA performance.
[ "Language Models", "Semantic Text Processing", "Question Answering", "Semantic Similarity", "Natural Language Interfaces" ]
[ 52, 72, 27, 53, 11 ]
SCOPUS_ID:85076696813
A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media
Hateful and toxic content generated by a portion of users on social media is a rising phenomenon that has motivated researchers to dedicate substantial efforts to the challenging task of hateful content identification. We not only need an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train such a model. The lack of a sufficient amount of labelled hate speech data, along with the existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT to capture hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution obtains considerable performance on these datasets in terms of precision and recall in comparison to existing approaches. Consequently, our model can capture some biases in the data annotation and collection process and can potentially lead us to a more accurate model.
[ "Language Models", "Semantic Text Processing", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 52, 72, 17, 4 ]
SCOPUS_ID:85111442441
A BERT-Bi-LSTM-Based Knowledge Graph Question Answering Method
With the development of knowledge graphs, research on question answering methods based on knowledge graphs has gradually become a hot spot. However, current mainstream question answering methods mine the semantic information of question sentences insufficiently, resulting in poor entity recognition and relation recognition, and many question answering methods rely on predefined rules, which have low transferability and high labor costs. To solve this problem, this paper proposes a knowledge graph question answering method based on BERT word vectors, which mainly includes entity recognition and relation recognition. In the entity recognition part, a BERT-Bi-LSTM-CRF model is built, which can fully mine the semantic information contained in the question and improve accuracy. In the relation recognition part, the traditional task of recognizing the semantic relations between multiple entities in the question is transformed into a text classification problem, which simplifies the model complexity and improves accuracy. Finally, experiments were performed on entity recognition and relation recognition using related datasets. The results show that, compared with traditional question answering methods, this method can achieve higher accuracy in entity recognition and relation recognition.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Structured Data in NLP", "Question Answering", "Knowledge Representation", "Named Entity Recognition", "Natural Language Interfaces", "Multimodality" ]
[ 52, 72, 3, 50, 27, 18, 34, 11, 74 ]
SCOPUS_ID:85082138835
A BERT-BiLSTM-CRF model for Chinese electronic medical records named entity recognition
Named entity recognition is a fundamental task in natural language processing, and many studies have been conducted on it in recent decades. Previous word representation methods represent each word as a single multi-dimensional vector, which ignores the ambiguity of characters in Chinese. To solve this problem, we apply a BERT-BiLSTM-CRF model to named entity recognition in Chinese electronic medical records in this paper. This model enhances the semantic representation of words by using the BERT pre-trained language model; we then combine a BiLSTM network with a CRF layer, and the word vectors are used as the input for training. To evaluate the performance, we compare this model with several baseline models on the CCKS 2017 dataset. Experimental results demonstrate that the BERT-BiLSTM-CRF model achieves better performance than the other baseline models.
[ "Language Models", "Named Entity Recognition", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 34, 72, 3 ]
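A minimal sketch of the BERT-BiLSTM-CRF stack described above, assuming the third-party pytorch-crf package for the CRF layer; the checkpoint, tag count, and sample sentence are illustrative:

```python
# Sketch: BERT -> BiLSTM -> CRF tagger for medical NER; assumes the
# third-party pytorch-crf package (pip install pytorch-crf).
import torch
import torch.nn as nn
from torchcrf import CRF
from transformers import AutoModel, AutoTokenizer

class BertBilstmCrf(nn.Module):
    def __init__(self, model_name, num_tags):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        h = self.bert.config.hidden_size
        self.lstm = nn.LSTM(h, h // 2, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(h, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        x = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        emissions = self.emit(self.lstm(x)[0])
        mask = attention_mask.bool()
        if tags is not None:                          # training: NLL loss
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best paths

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
model = BertBilstmCrf("bert-base-chinese", num_tags=7)   # e.g. BIO over 3 types
batch = tok("患者出现持续性咳嗽", return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]))  # predicted tag ids
```

The CRF layer scores whole tag sequences, so invalid transitions such as an I- tag following O can be penalized globally rather than per token.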
SCOPUS_ID:85093870821
A BERT-based Approach with Relation-aware Attention for Knowledge Base Question Answering
Knowledge Base Question Answering (KBQA), which uses the facts in a knowledge base (KB) to answer natural language questions, has received extensive attention in recent years. Existing works mainly focus on the modeling method and neglect the relations between questions and KB facts, which might restrict further performance improvements. To address this problem, this paper proposes a BERT-based approach for single-relation question answering (SR-QA), which consists of two models: entity linking and relation detection. For entity linking, we adopt pre-trained BERT and a heuristic algorithm to reduce the noise in the candidate facts. For relation detection, existing approaches usually model the question and the candidate fact separately before calculating their semantic similarity, which might lose part of the original interaction information between them. To work around this problem, a BERT-based model with relation-aware attention is proposed. We construct the question-fact pair as the input to pre-trained BERT to preserve the original interaction information. To bridge the semantic gap between the questions and the KB facts, we also use a relation-aware attention network to enhance the representation of candidates. The experimental results show that our entity linking model achieves new state-of-the-art results and our complete approach also achieves a state-of-the-art accuracy of 80.9% on the SimpleQuestions dataset.
[ "Language Models", "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Knowledge Representation" ]
[ 52, 72, 27, 11, 18 ]
http://arxiv.org/abs/2211.01954v1
A BERT-based Deep Learning Approach for Reputation Analysis in Social Media
Social media has become an essential part of the modern lifestyle, with highly prevalent usage. This has resulted in unprecedented amounts of data generated by social media users, such as their attitudes, opinions, interests, purchases, and activities across various aspects of their lives. In a world of social media, where power has shifted to users, actions taken by companies and public figures are subject to constant scrutiny by influential global audiences. As a result, reputation management in social media has become essential, as companies and public figures need to maintain their reputation to preserve their reputation capital. However, domain experts still face the challenge of lacking appropriate solutions to automate reliable online reputation analysis. To tackle this challenge, we propose a novel reputation analysis approach based on the popular language model BERT (Bidirectional Encoder Representations from Transformers). The proposed approach was evaluated on the reputational polarity task using the RepLab 2013 dataset. Compared to previous works, we achieved a 5.8% improvement in accuracy, a 26.9% improvement in balanced accuracy, and a 21.8% improvement in F-score.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
http://arxiv.org/abs/2010.05384v1
A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies
In this paper, we investigate the following two limitations of the existing distractor generation (DG) methods. First, the quality of the existing DG methods is still far from practical use; there is still room for DG quality improvement. Second, the existing DG designs are mainly for single distractor generation. However, for practical MCQ preparation, multiple distractors are desired. Aiming at these goals, in this paper we present a new distractor generation scheme with multi-tasking and negative answer training strategies for effectively generating multiple distractors. The experimental results show that (1) our model advances the state-of-the-art result from 28.65 to 39.81 (BLEU-1 score) and (2) the generated multiple distractors are diverse and show strong distracting power for multiple-choice questions.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 4 ]
http://arxiv.org/abs/2011.02378v1
A BERT-based Dual Embedding Model for Chinese Idiom Prediction
Chinese idioms are special fixed phrases usually derived from ancient stories, whose meanings are oftentimes highly idiomatic and non-compositional. The Chinese idiom prediction task is to select the correct idiom from a set of candidate idioms given a context with a blank. We propose a BERT-based dual embedding model to encode the contextual words as well as to learn dual embeddings of the idioms. Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context. We then match the embedding of each candidate idiom with the hidden representations of all the tokens in the context through context pooling. We further propose to use two separate idiom embeddings for the two kinds of matching. Experiments on a recently released Chinese idiom cloze test dataset show that our proposed method performs better than the existing state of the art. Ablation experiments also show that both context pooling and dual embedding contribute to the improvement in performance.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
SCOPUS_ID:85114964457
A BERT-based End-to-End Model for Chinese Document-level Event Extraction
Document-level event extraction aims at discovering event mentions and extracting events, which contain event arguments and their roles, from texts. This paper proposes an end-to-end model for closed-domain document-level event extraction based on BERT. We feed the embeddings of the event type and entity nodes into the subsequent layer for event argument and role identification, which represents the relations between events, arguments, and roles and improves the accuracy of classifying multi-event arguments. Using the title and the event quintuple, we calculate the master-slave structure between multiple events from the embedding representation. Experimental results show that our model outperforms the state of the art.
[ "Language Models", "Semantic Text Processing", "Representation Learning", "Event Extraction", "Information Extraction & Text Mining" ]
[ 52, 72, 12, 31, 3 ]
SCOPUS_ID:85099597607
A BERT-based Hierarchical Model for Vietnamese Aspect Based Sentiment Analysis
Aspect based sentiment analysis (ABSA) is the task of identifying sentiment polarity towards specific entities and their aspects mentioned in customers' reviews. This paper presents a new and effective hierarchical model using the pre-trained language model, Bidirectional Encoder Representations from Transformers (BERT). This model integrates the context information of the previous layer (i.e. entity type) into the prediction for the following layer (i.e. aspect type) and optimizes the global loss functions to capture the entire information from all layers. Experimental results on two public benchmark datasets in Vietnamese showed that the proposed model is superior to the existing ones. Specifically, the model achieved 84.23% and 82.06% in the F1_micro scores in detecting entities and their aspects on the domains of restaurants and hotels, respectively. In identifying aspect sentiment polarity, the model gained 71.3% and 74.69% in the F1_micro scores on the domains of restaurants and hotels, respectively. These results outperformed the best submission of the campaign by a large margin and gained a new state of the art.
[ "Language Models", "Semantic Text Processing", "Polarity Analysis", "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 52, 72, 33, 23, 78 ]
SCOPUS_ID:85137904470
A BERT-based Idiom Detection Model
Idioms are figures of speech that contradict the principle of compositionality. This disposition of idioms can misdirect Natural Language Processing (NLP) techniques, which mostly focus on the literal meaning of terms. In this paper, we propose a novel idiom detection model that distinguishes between literal and idiomatic expressions. It utilizes a token classification approach to fine-tune BERT (Bidirectional Encoder Representations from Transformers). It is empirically evaluated on four idiom datasets, yielding an accuracy of more than 0.94. This model adds to the robustness and diversity of NLP techniques available to process and understand increasing magnitudes of free-form text and speech. Furthermore, the social value of this model lies in enabling non-native speakers to comprehend the nuances of a foreign language.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality" ]
[ 52, 72, 70, 74 ]
SCOPUS_ID:85140077157
A BERT-based Language Modeling Framework
Deep learning has brought considerable changes and created a new paradigm in many research areas, including computer vision, speech processing, and natural language processing. In the context of language modeling, recurrent language models and word embedding methods have been pivotal studies in the past decade. Recently, pre-trained language models have attracted widespread attention due to their enormous success in many challenging tasks. However, only a few works concentrate on creating novel language models based on the pre-trained models. In order to bridge this research gap, we take the bidirectional encoder representations from Transformers (BERT) model as an example to explore novel uses of a pre-trained model for language modeling. More formally, this paper proposes a set of BERT-based language models, and a neural dynamic adaptation method is also introduced to combine these language models systematically and methodically. We conduct comprehensive studies on three datasets for perplexity evaluation. Experiments show that the proposed framework achieves 11%, 39%, and 5% relative improvements over the baseline model for the Penn Treebank, Wikitext-2, and Tedlium Release 2 corpora, respectively. Besides, when applied to rerank n-best lists from a speech recognizer, our framework also yields promising results compared with baseline systems.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality" ]
[ 52, 72, 70, 74 ]
SCOPUS_ID:85137179072
A BERT-based Text Sentiment Classification Algorithm through Web Data
In order to analyze the sentiment tendency of public opinion, this paper conducts textual sentiment classification research on web data. In this research, we use the BERT (Bidirectional Encoder Representations from Transformers) model to replace the commonly used word2vec model as a text vectorization tool, as it has stronger semantic representation capabilities and can handle polysemous words. For the multi-label classification problem of reviews, the BR (Binary Relevance) algorithm is used to transform the problem into multiple binary classification problems, which is a direct and efficient way of processing multi-label data. We design a BiLSTM-Attention model, which combines a bidirectional long short-term memory network with an attention mechanism to further extract text features. After multiple sets of comparative experiments, the effectiveness of the BiLSTM-Attention model is verified through performance evaluation. In order to further improve the performance of the model, the problem of an unbalanced dataset is addressed by adjusting the loss function and various parameters so that a better classification effect is achieved.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 78, 24, 3 ]
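The Binary Relevance step above turns one multi-label problem into independent binary problems, one per label. A minimal scikit-learn sketch with TF-IDF features standing in for the BERT vectors in the abstract; the reviews and label columns are toy data:

```python
# Sketch: Binary Relevance for multi-label review classification, i.e. one
# independent binary classifier per label; scikit-learn, toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

reviews = [
    "battery drains fast and the screen flickers",
    "great camera, fast delivery",
    "screen flickers after the update",
]
# Columns: [battery, screen, delivery] -- one binary target per label.
labels = [[1, 1, 0],
          [0, 0, 1],
          [0, 1, 0]]

clf = make_pipeline(TfidfVectorizer(),
                    MultiOutputClassifier(LogisticRegression(max_iter=1000)))
clf.fit(reviews, labels)
print(clf.predict(["the screen flickers constantly"]))  # e.g. [[0 1 0]]
```

Binary Relevance ignores correlations between labels, which is the usual trade-off for its simplicity.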
SCOPUS_ID:85071179765
A BERT-based approach for automatic humor detection and scoring
In this paper we report our participation in the 2019 HAHA task, where a corpus of crowd-annotated tweets is provided and the goals are to tell whether a tweet is a joke or not and to predict a funniness score for a tweet. Our approach utilizes BERT, a multi-layer bidirectional transformer encoder which helps learn deep bidirectional representations, and the pretrained model is fine-tuned on the training data for the HAHA task. The representation of a tweet is fed into an output layer for classification. To predict the funniness score, we apply another output layer to generate scores using float labels and train it with the mean squared error between the predicted scores and the labels. Our best F-score on the test set for Task 1 is 0.784 and the RMSE for Task 2 is 0.910. We find that our approach is competitive and applicable to multilingual text classification tasks.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Commonsense Reasoning", "Reasoning", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 62, 8, 36, 3 ]
SCOPUS_ID:85134556431
A BERT-based ensemble learning approach for the BioCreative VII challenges: full-text chemical identification and multi-label classification in PubMed articles
In this research, we explored various state-of-the-art biomedical pre-trained Bidirectional Encoder Representations from Transformers (BERT) models for the National Library of Medicine - Chemistry (NLM-CHEM) and LitCovid tracks in the BioCreative VII Challenge, and propose a BERT-based ensemble learning approach that integrates the advantages of various models to improve the system's performance. The experimental results of the NLM-CHEM track demonstrate that our method can achieve remarkable performance, with F1-scores of 85% and 91.8% in strict and approximate evaluations, respectively. Moreover, the proposed Medical Subject Headings identifier (MeSH ID) normalization algorithm is effective in entity normalization, achieving an F1-score of about 80% in both strict and approximate evaluations. For the LitCovid track, the proposed method is also effective in detecting topics in the Coronavirus disease 2019 (COVID-19) literature; it outperformed the compared methods and achieved state-of-the-art performance on the LitCovid corpus.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85089591944
A BERT-based ensemble model for chinese news topic prediction
With the rapid development of big data mining technology in the Chinese commercial field, news topic prediction has become increasingly important. Since the accuracy of Chinese news topic classification can directly affect the personalized recommendation effect of a Chinese news system, and in turn business profits, the news category prediction performance needs to be as high as possible. With the great success of the BERT model in the past two years, using the BERT model alone has achieved extremely good performance on Chinese text classification tasks. Therefore, leveraging the advantages of BERT to study more effective methods for Chinese news classification becomes all the more meaningful. In this paper, we propose a model that combines the advantages of both BERT and the long short-term memory (LSTM) network, named BERT ensemble LSTM-BERT (BERT-LB). Our method is more effective than using BERT alone. This model uses a three-step method to calculate and integrate Chinese news text features. Besides, we use two datasets to evaluate our method and other baseline methods. We demonstrate that the proposed method has a promising ability to predict Chinese news topics and prove its generalization ability.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85131918728
A BERT-based multi-semantic learning model with aspect-aware enhancement for aspect polarity classification
Aspect-Based Sentiment Classification (ABSA), predicting the sentiment tendency towards given aspects, is an important branch of natural language understanding. However, in the existing deep learning models for ABSA, there is a contradiction between fine-grained sentiment analysis and the small amount of available corpus. To resolve this contradiction, we propose a BERT-based Multi-Semantic Learning (BERT-MSL) model with aspect-aware enhancement for aspect polarity classification, which follows the Transformer structure in BERT and uses lightweight multi-head self-attention for encoding. First, we make full use of the extensive pre-training and post-training of the BERT model to obtain initialization parameters with rich knowledge for our BERT-MSL model, so that our model can be quickly adapted to the ABSA task by fine-tuning on only a small corpus. Second, to achieve fine-grained sentiment analysis centered on the aspect target, we propose a BERT-based multi-semantic learning model composed of left-side local semantic, right-side local semantic, aspect target semantic, and global semantic learning modules, and propose an aspect-aware enhancement method based on BERT and multi-head attention. Third, we propose two alternative semantic merging methods to generate the final, expressively powerful sentiment semantics for ABSA. Furthermore, to expand the application scope of our model, we design an advanced structure for it by introducing a CNN-based semantic refinement layer. Experimental results on five SemEval and Twitter datasets demonstrate that our model improves the stability and robustness of ABSA and significantly outperforms some of the state-of-the-art models under the BERT Post-Training (BERT-PT) environment.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Polarity Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 33, 78, 36, 3 ]
SCOPUS_ID:85099884804
A BERT-based named entity recognition in chinese electronic medical record
Named entity recognition, which aims at identifying and classifying named entities mentioned in structured or unstructured text, is a fundamental subtask of information extraction in natural language processing (NLP). With the development of electronic medical records, obtaining key and effective information from electronic documents through named entity recognition has become an increasingly popular research direction. In this article, we adapt the recently introduced pre-trained language model BERT for named entity recognition in electronic medical records to solve the problem of missing context information, and we add an extra mechanism to capture the relationships between words. Based on this, (1) entities can be represented by sentence-level vectors, carrying both the forward and backward information of the sentence, which can be used directly by downstream tasks; and (2) the model acquires representations of words in context and learns the potential relations between words to decrease the influence of the inconsistent entity markup problem in a text. We conduct experiments on an electronic medical record dataset proposed by the China Conference on Knowledge Graph and Semantic Computing in 2019. The experimental results show that our proposed method improves on traditional methods.
[ "Language Models", "Named Entity Recognition", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 34, 72, 3 ]
SCOPUS_ID:85113848708
A BERT-based system for multi-topic labeling of Arabic content
Text classification (or categorization) is one of the most common natural language processing (NLP) tasks. It is very useful for simplifying the management of a large volume of textual data by assigning each text to one or more categories. This operation is challenging when it is a multi-label classification task. For Arabic text, the task becomes even more challenging due to the complex morphology and structure of the Arabic language. In this paper, we address this issue by proposing a classification system for the Mowjaz Multi-Topic Labelling Task. The objective of this task is to classify Arabic articles according to the 10 topics predefined in Mowjaz. The proposed system is based on AraBERT, a pre-trained BERT model for the Arabic language. The first step of this system consists of tokenizing and representing the input articles using the AraBERT model. Then, a fully connected neural network is applied to the output of the AraBERT model to classify the articles according to their topics. The experimental tests conducted on the Mowjaz dataset showed an accuracy of 0.865 on the development set and an accuracy of 0.851 on the test set.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85141198696
A BIBLIOMETRIC AND TOPIC MODELING OVERVIEW OF KAJIAN MALAYSIA BETWEEN 2011 AND 2020: A RESEARCH NOTE
Kajian Malaysia, published by Penerbit Universiti Sains Malaysia, is an interdisciplinary journal which provides a forum for a broad range of social sciences and humanities research. This research note presents a bibliometric review of the articles published in the journal Kajian Malaysia between 2011 and 2020. The purpose of this research note is to evaluate publication patterns and the topic model of articles published in Kajian Malaysia. The bibliographical material applied in this study was retrieved from the Scopus database. This study bibliometrically examines 192 documents published in Kajian Malaysia from 2011 to 2020 to rank the most productive countries, institutions, authors, keywords, influential articles and the topic model. This research note assists researchers with an understanding of the development of Kajian Malaysia, provides an important reference for Kajian Malaysia’s future trajectory as well as provides an effective method of analysis for the future evaluation of journals.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
SCOPUS_ID:85123986946
A BIBLIOMETRIC APPROACH TO SUPPORT REDEFINING MANAGEMENT OF TECHNOLOGY FOR THE POST-DIGITAL WORLD
Management of Technology (MoT) has evolved since its inception in the 1980s and its definitions from the 1990s. However, the field's definition may not be keeping up with the ever-increasing changes in our world. This paper implements bibliometrics, through natural language processing and topic modelling, on published literature on MoT to trace the evolution of research focus areas. The processed literature consists of an extensive sample from a keyword search of publication databases. Analysing the topic priorities over time indicates how research in the field has evolved. Comparing these focus areas to the different definitions provides inputs for improving the definition of MoT. The topics extracted in this paper over the history of MoT offer a base from which to initiate such an investigation.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
http://arxiv.org/abs/1908.08674v1
A BLSTM Network for Printed Bengali OCR System with High Accuracy
This paper presents a printed Bengali and English text OCR system developed by us using a single-hidden-layer BLSTM-CTC architecture with 128 units. Here, we did not use any peephole connections or dropout in the BLSTM, which helped us achieve better accuracy. This architecture was trained on 47,720 text lines that also include English words. When tested over 20 different Bengali fonts, it produced a character-level accuracy of 99.32% and a word-level accuracy of 96.65%. A good Indic multi-script OCR system has also been developed by Google. It sometimes recognizes a Bengali character as the same character of a non-Bengali script, especially Assamese, which is nearly indistinguishable from Bengali except for a few characters. For example, the Bengali character for 'RA' is sometimes recognized as that of Assamese, mainly in conjunct consonant forms. Our OCR is free from such errors. This OCR system is available online at https://banglaocr.nltr.org
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Multimodality" ]
[ 20, 52, 72, 74 ]
SCOPUS_ID:0012130276
A BOOTSTRAP TECHNIQUE FOR BUILDING DOMAIN-DEPENDENT LANGUAGE MODELS
In this paper, we propose a new bootstrap technique to build domain-dependent language models. We assume that a seed corpus consisting of a small amount of data relevant to the new domain is available, which is used to build a reference language model. We also assume the availability of an external corpus, consisting of a large amount of data from various sources, which need not be directly relevant to the domain of interest. We use the reference language model and a suitable metric, such as the perplexity measure, to select sentences from the external corpus that are relevant to the domain. Once we have a sufficient number of new sentences, we can rebuild the reference language model. We then continue to select additional sentences from the external corpus, and this process continues to iterate until some satisfactory termination point is achieved. We also describe several methods to further enhance the bootstrap technique, such as combining it with mixture modeling and class-based modeling. The performance of the proposed approach was evaluated through a set of experiments, and the results are discussed. Analysis of the convergence properties of the approach and the conditions that need to be satisfied by the external corpus and the seed corpus are highlighted, but detailed work on these issues is deferred for the future.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
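A minimal sketch of the bootstrap selection loop described in the abstract above, with an add-one-smoothed unigram model standing in for the reference language model; the paper's actual model, perplexity variant, and termination criterion are not specified here, so those are illustrative assumptions.

```python
# Illustrative sketch of the bootstrap loop: score external-corpus sentences by
# perplexity under a reference model, keep the best-scoring ones, retrain, repeat.
import math
from collections import Counter

def train_unigram(sentences):
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    # add-one smoothing so unseen words get nonzero probability
    return lambda w: (counts[w] + 1) / (total + vocab)

def perplexity(model, sentence):
    words = sentence.split()
    logp = sum(math.log(model(w)) for w in words)
    return math.exp(-logp / max(len(words), 1))

def bootstrap(seed, external, rounds=3, keep=100):
    selected = list(seed)
    for _ in range(rounds):
        model = train_unigram(selected)                 # (re)build the reference model
        scored = sorted(external, key=lambda s: perplexity(model, s))
        new = [s for s in scored[:keep] if s not in selected]
        if not new:                                     # termination: nothing relevant left
            break
        selected.extend(new)                            # grow the domain corpus
    return selected
```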
SCOPUS_ID:84901745856
A BP neural network text categorization method optimized by an improved genetic algorithm
The back-propagation (BP) neural network is widely used for text categorization and can achieve high performance. However, its greatest disadvantage is its long training time. The genetic algorithm is often used to generate useful solutions for optimization problems. In this paper we combine the genetic algorithm and the BP neural network for text categorization: we use the genetic algorithm to optimize the connection weights of the network instead of back-propagation, and we improve the genetic algorithm to increase its efficiency. Through this method, we overcome the traditional disadvantage of the BP neural network. Our experiments show that our method outperforms the traditional method for text categorization. © 2013 IEEE.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
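A toy sketch of the idea in the abstract above: evolving a small network's weight vector with a genetic algorithm instead of back-propagation. The population size, truncation selection, Gaussian mutation scale, and synthetic data are all illustrative assumptions, and crossover is omitted for brevity.

```python
# Sketch of GA-optimized network weights (no back-propagation).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                 # toy "document features"
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy binary category

D, H = 20, 8
n_params = D * H + H  # weights of a tiny one-hidden-layer net (biases omitted for brevity)

def forward(params, X):
    W1 = params[:D * H].reshape(D, H)
    w2 = params[D * H:]
    h = np.tanh(X @ W1)
    return (h @ w2 > 0).astype(int)

def fitness(params):
    return (forward(params, X) == y).mean()    # classification accuracy as fitness

pop = rng.normal(size=(50, n_params))
for gen in range(100):
    fit = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fit)[-25:]]       # keep the fitter half (truncation selection)
    children = parents + rng.normal(scale=0.1, size=parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("train accuracy:", fitness(best))
```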
SCOPUS_ID:80051910758
A BST-based approach to dictionary structure for Chinese word segmentation
This paper first analyzes current Chinese word segmentation methods and then, building on dictionary-based segmentation and the binary search tree (BST), proposes a way to organize the dictionary that reduces the number of comparisons and thereby increases segmentation speed. A practical demo is then used to illustrate the feasibility and effectiveness of the proposed algorithm. © 2011 IEEE.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
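A sketch of the general approach in the abstract above: dictionary-based forward maximum matching with logarithmic-comparison lookups. Python's bisect over a sorted word list stands in for the paper's BST organization, since both give O(log n) comparisons per lookup; the toy dictionary is invented.

```python
# Sketch: dictionary-based forward maximum matching, with lookups done by binary
# search over a sorted word list (a stand-in for the paper's BST structure).
import bisect

DICT = sorted(["中国", "中国人", "人民", "民主", "主义"])  # toy dictionary

def in_dict(word):
    i = bisect.bisect_left(DICT, word)     # O(log n) comparisons, as in a balanced BST
    return i < len(DICT) and DICT[i] == word

def segment(text, max_len=4):
    out, i = [], 0
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            if L == 1 or in_dict(text[i:i + L]):
                out.append(text[i:i + L])  # longest dictionary match wins
                i += L
                break
    return out

print(segment("中国人民主义"))  # ['中国人', '民主', '义']
```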
SCOPUS_ID:85120079173
A Background Knowledge Revising and Incorporating Dialogue Model
Currently, dialogue systems have attracted increasing research interest. In particular, background knowledge is incorporated to improve the performance of dialogue systems. Existing dialogue systems mostly assume that the background knowledge is correct and comprehensive. However, low-quality background knowledge is common in real-world applications, and dialogue datasets with manually labeled background knowledge are often insufficient. To tackle these challenges, this article presents an algorithm to revise low-quality background knowledge, named the background knowledge revising transformer (BKR-Transformer). By innovatively formulating the knowledge revising task as a sequence-to-sequence (Seq2Seq) problem, BKR-Transformer generates the revised background knowledge from the original background knowledge and the dialogue history. More importantly, to alleviate the effect of insufficient training data, BKR-Transformer introduces the ideas of parameter sharing and tensor decomposition, which significantly reduce the number of model parameters. Furthermore, this work presents a background knowledge revising and incorporating dialogue model that combines background knowledge revision with response selection in a unified model. Empirical analyses on real-world applications demonstrate that the proposed background knowledge revising and incorporating dialogue system (BKRI) revises most low-quality background knowledge and substantially outperforms previous dialogue models.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
SCOPUS_ID:85030166830
A Badge of Honor?: How The New York Times discredits President Trump’s fake news accusations
News organizations in many Western democracies face decreasing trust amid fake news accusations. In this situation, news organizations risk losing their license to operate and need to defend their legitimacy. This study analyzes how The New York Times (NYT) discredits fake news accusations, which are prominently expressed by US President Trump. A critical discourse analysis of the NYT’s news articles about fake news accusations in the first 70 days following President Trump’s inauguration reveals four delegitimizing strategies. First, the accusations are taken as a “badge of honor” for professional journalism but are morally evaluated to damage journalism’s role as the fourth estate in democracy. Second, using sarcasm, the articles criticize President Trump’s capacity to govern and thus question his legitimacy. Third, reporting implies that fake news accusations aim at suppressing critical thinking as in authoritarian regimes. Fourth, accusations are described as irrational responses to professional reporting or proven to be factually wrong, when possible. Overall, reporting in the NYT portrays President Trump as an irresponsible leader risking the well-being of the country’s citizens, its journalism, and its democracy, as well as journalism in foreign countries.
[ "Semantic Text Processing", "Discourse & Pragmatics", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Responsible & Trustworthy NLP" ]
[ 72, 71, 17, 8, 46, 4 ]
http://arxiv.org/abs/2109.08232v1
A Bag of Tricks for Dialogue Summarization
Dialogue summarization comes with its own peculiar challenges, as opposed to news or scientific article summarization. In this work, we explore four different challenges of the task: handling and differentiating parts of the dialogue belonging to multiple speakers, negation understanding, reasoning about the situation, and informal language understanding. Using a pretrained sequence-to-sequence language model, we explore speaker name substitution, negation scope highlighting, multi-task learning with relevant tasks, and pretraining on in-domain data. Our experiments show that our proposed techniques indeed improve summarization performance, outperforming strong baselines.
[ "Summarization", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Extraction & Text Mining" ]
[ 30, 11, 47, 38, 3 ]
SCOPUS_ID:85059967366
A Bag-of-Phonetic-Codes Modelfor Cyber-Bullying Detection in Twitter
Social networking sites such as Twitter, Facebook, MySpace and Instagram are emerging as a strong medium of communication these days and have become a part and parcel of daily life. People can express their thoughts and activities within their social circle, which brings them closer to their community. However, this freedom of expression has its drawbacks. Sometimes people vent their aggression on social media, which in turn hurts the sentiments of the targeted victims. Certain forms of cyber-bullying are sexual, racial and physical-disability based, so proper surveillance is necessary to tackle such situations. Twitter, as a micro-blogging site, sees cyber abuse on a daily basis. However, tweets are raw texts containing many misspelled and censored words. This paper proposes a novel method to detect cyber-bullying: a Bag-of-Phonetic-Codes model. Using the pronunciation of words as features can rectify misspelled words and identify censored words. Correctly identifying duplicate words leads to a smaller vocabulary and thereby a reduced feature space. The inspiration for the proposed work is drawn from the well-known Bag-of-Words model for extracting textual features. Phonetic code generation is done using the Soundex algorithm. Besides the proposed model, experiments were carried out with both supervised and unsupervised machine learning approaches on multiple datasets to understand the approaches and challenges in the domain of cyber-bullying detection.
[ "Phonetics", "Syntactic Text Processing", "Sentiment Analysis" ]
[ 64, 15, 78 ]
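The Soundex step in the abstract above is concrete enough to sketch. Below is the textbook Soundex encoding (not necessarily the paper's exact variant): similarly pronounced words, including common misspellings, collapse to one 4-character code, shrinking the feature vocabulary for the bag-of-phonetic-codes model.

```python
# Simplified Soundex: map words with similar pronunciation (and common
# misspellings) to the same 4-character code.
CODES = {c: d for d, letters in
         enumerate(["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1)
         for c in letters}

def soundex(word):
    word = word.lower()
    first, digits = word[0], []
    prev = CODES.get(first, 0)
    for c in word[1:]:
        d = CODES.get(c, 0)
        if d and d != prev:
            digits.append(str(d))
        if c not in "hw":        # h/w do not reset the previous code in classic Soundex
            prev = d
    return (first.upper() + "".join(digits) + "000")[:4]

# misspellings collapse to one feature: each pair maps to the same code
print(soundex("bully"), soundex("buly"))     # B400 B400
print(soundex("robert"), soundex("rupert"))  # R163 R163
```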
SCOPUS_ID:85075500006
A Bag of Constrained Visual Words Model for Image Representation
We propose a bag of constrained visual words model for image representation. Each image under this model is considered to be an aggregation of patches. SURF features are used to describe each patch. Two sets of constraints, namely, the must-link and the cannot-link, are developed for each patch in a completely unsupervised manner. The constraints are formulated using the distance information among different patches as well as statistical analysis of the entire patch data. All the patches from the image set under consideration are then quantized using the Linear-time-Constrained Vector Quantization Error (LCVQE), a fast yet accurate constrained k-means algorithm. The resulting clusters, which we term as constrained visual words, are then used to label the patches in the images. In this way, we model an image as a bag (histogram) of constrained visual words and then show its utility for image retrieval. Clustering as well as initial retrieval results on COIL-100 dataset indicate the merit of our approach.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Semantic Text Processing", "Representation Learning", "Text Clustering", "Information Retrieval", "Multimodality" ]
[ 20, 3, 72, 12, 29, 24, 74 ]
SCOPUS_ID:85055970663
A Bakhtinian take on languaging in a dual language immersion classroom
Language brings a classroom to life and crafts the teaching and learning space. Sociocultural theories of language acquisition and learning focus on the role of language as a mediator for development. While research on languaging and translanguaging practices in English medium and multilingual classrooms is on the rise, less attention has been paid to the languaging work occurring in the translanguaging space of the dual language immersion classroom. To this end, this study explores the languaging choices of a second grade French immersion teacher as he imparts both language learning objectives and his enactment of critical peace education. This article provides a micro-discourse perspective of the speech genres utilized within a single lesson on adjectives conducted in both French and English. Data analysis revealed the ways in which students and their teacher together languaged academic French and peace education in and through different speech genres. This study demonstrates the affordances of a speech genre analysis for researching multilingualism in dual language learning settings. Implications for pedagogy and theory about bi/multilingual discourse are discussed.
[ "Multilinguality", "Linguistic Theories", "Speech & Audio in NLP", "Linguistics & Cognitive NLP", "Multimodality" ]
[ 0, 57, 70, 48, 74 ]
http://arxiv.org/abs/2205.04086v1
A Balanced Data Approach for Evaluating Cross-Lingual Transfer: Mapping the Linguistic Blood Bank
We show that the choice of pretraining languages affects downstream cross-lingual transfer for BERT-based models. We inspect zero-shot performance in balanced data conditions to mitigate data size confounds, classifying pretraining languages that improve downstream performance as donors, and languages that are improved in zero-shot performance as recipients. We develop a method of quadratic time complexity in the number of languages to estimate these relations, instead of an exponential exhaustive computation of all possible combinations. We find that our method is effective on a diverse set of languages spanning different linguistic features and two downstream tasks. Our findings can inform developers of large-scale multilingual language models in choosing better pretraining configurations.
[ "Multilinguality", "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Cross-Lingual Transfer", "Responsible & Trustworthy NLP" ]
[ 0, 52, 80, 72, 19, 4 ]
SCOPUS_ID:85077992328
A Bangla Spell Checking Technique to Facilitate Error Correction in Text Entry Environment
A spell checker is a common tool used in text entry environments for error-free writing. Recent advances in spell checking techniques for different languages have significantly decreased word- and sentence-level typos in written text, but developing an optimized spell checker for Bangla remains a great research challenge due to the complex rules of Bangla spelling. To overcome this challenge we propose a new spell checking technique that considers both word- and sentence-level errors to facilitate text correction in Bangla. We use a hybrid approach combining an edit distance algorithm with a probability-based n-gram language model for the error detection and correction tasks. A large corpus containing almost 0.5 million words and 50,000 n-gram sentences is built to detect and correct the errors. We evaluate the performance of the proposed method on test data collected from online sources, and estimate its efficiency through a pilot study involving users of different ages. The accuracy rate of the proposed spell checker is approximately 97%, and its validity is demonstrated through comparison with other Bangla spell checkers.
[ "Text Error Correction", "Syntactic Text Processing" ]
[ 26, 15 ]
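A sketch of the edit-distance half of the hybrid approach above. The n-gram re-ranking is reduced here to a toy unigram frequency used for tie-breaking, and the miniature lexicon is invented.

```python
# Sketch: edit-distance candidate generation for spell checking; the paper also
# re-ranks with an n-gram model, approximated here by toy unigram frequencies.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

LEXICON = {"আমি": 900, "আমরা": 300, "আম": 150}   # toy word -> corpus frequency

def suggest(word, max_dist=2, k=3):
    scored = [(levenshtein(word, w), -f, w) for w, f in LEXICON.items()]
    return [w for d, _, w in sorted(scored) if d <= max_dist][:k]

print(suggest("আমী"))  # closest lexicon entries; frequency breaks distance ties
```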
SCOPUS_ID:85081661138
A Bangla Word Sense Disambiguation Technique using Minimum Edit Distance Algorithm and Cosine Distance
In natural language processing, morphology is one of the most decisive components, and it becomes more difficult to handle when a single word carries several meanings; such a word is called ambiguous. The human brain can easily resolve these ambiguities, but for machines they are very complicated to detect. Word Sense Disambiguation (WSD) is a technique that trains machines to detect ambiguities. Research on this technique has been published for many languages, but developing an optimized Bangla WSD system is still a great research challenge. To overcome this challenge we propose a new technique to detect ambiguous words in a sentence. A corpus containing 3,860 sentences was built from different resources. We apply the Levenshtein distance algorithm to detect ambiguous words and cosine similarity to determine the intended meaning in a given Bangla sentence. The accuracy of our method is 80.82%, and its validity is demonstrated through comparisons with other well-established methods.
[ "Semantic Text Processing", "Word Sense Disambiguation" ]
[ 72, 65 ]
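A sketch of the cosine-similarity step described above: pick the sense whose bag-of-words signature is closest to the sentence context. The English stand-in senses and the bag-of-words vectorization are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: choose the sense whose signature is most similar to the sentence
# context, measured by cosine similarity over bag-of-words counts.
from collections import Counter
import math

def cosine(a: Counter, b: Counter):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

SENSES = {  # toy English stand-ins for Bangla sense signatures
    "river_bank": Counter("river water shore fishing".split()),
    "money_bank": Counter("money account loan deposit".split()),
}

context = Counter("he opened an account at the bank for a loan".split())
best = max(SENSES, key=lambda s: cosine(context, SENSES[s]))
print(best)  # money_bank
```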
SCOPUS_ID:85077205506
A Barrage sentiment analysis scheme based on expression and tone
Most existing methods do not consider the influence of expression and tone on barrage sentiment analysis, which reduces the effectiveness and accuracy of the analysis. We therefore propose a barrage sentiment analysis scheme based on expression and tone. First, we build a new sentiment dictionary based on expression and tone to increase the effectiveness of barrage sentiment analysis. Second, we propose a new calculation method for the sentiment value, based on expression and tone, to increase accuracy. Meanwhile, we replace the single threshold with a threshold range to expand the scope of neutral barrages. Finally, experimental results show that our scheme is more effective and practical than existing methods in the barrage sentiment analysis scenario.
[ "Sentiment Analysis" ]
[ 78 ]
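A toy sketch of the scoring idea above: lexicon polarity adjusted by expression and tone cues, classified against a neutral threshold range rather than a single point. All lexicon entries and weights here are invented for illustration.

```python
# Sketch: sentiment value from a word lexicon plus expression/tone adjustments,
# classified against a neutral threshold range; all weights are illustrative.
WORD_POLARITY = {"great": 1.0, "awful": -1.0, "boring": -0.6, "love": 0.9}
EXPRESSION = {":)": 0.5, ":(": -0.5, "233": 0.4}       # emoticons / barrage slang
TONE = {"!": 1.2, "?": 0.8}                            # tone marks scale intensity

def barrage_score(text):
    tokens = text.replace("!", " ! ").replace("?", " ? ").split()
    score = sum(WORD_POLARITY.get(t, 0) + EXPRESSION.get(t, 0) for t in tokens)
    for t in tokens:
        score *= TONE.get(t, 1.0)                      # tone amplifies or dampens
    return score

def classify(score, low=-0.3, high=0.3):
    if score > high: return "positive"
    if score < low: return "negative"
    return "neutral"                                   # a threshold *range*, not a point

s = barrage_score("love this :) !")
print(s, classify(s))  # 1.68 positive
```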
http://arxiv.org/abs/2008.10648v2
A Baseline Analysis for Podcast Abstractive Summarization
Podcast summary, an important factor affecting end-users' listening decisions, has often been considered a critical feature in podcast recommendation systems, as well as many downstream applications. Existing abstractive summarization approaches are mainly built on fine-tuned models on professionally edited texts such as CNN and DailyMail news. Different from news, podcasts are often longer, more colloquial and conversational, and noisier with contents on commercials and sponsorship, which makes automatic podcast summarization extremely challenging. This paper presents a baseline analysis of podcast summarization using the Spotify Podcast Dataset provided by TREC 2020. It aims to help researchers understand current state-of-the-art pre-trained models and hence build a foundation for creating better models.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
http://arxiv.org/abs/1907.12437v1
A Baseline Neural Machine Translation System for Indian Languages
We present a simple, yet effective, Neural Machine Translation system for Indian languages. We demonstrate the feasibility for multiple language pairs, and establish a strong baseline for further research.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:67649435512
A Basic Parallel Process as a Parallel Pushdown Automaton
We investigate the set of basic parallel processes, recursively defined by action prefix, interleaving, 0 and 1. Different from the literature, we use the constants 0 and 1, standing for unsuccessful and successful termination, in order to stay closer to the analogies in automata theory. We prove that any basic parallel process is rooted branching bisimulation equivalent to a regular process communicating with a bag (also called a parallel pushdown automaton), and therefore we can regard the bag as the prototypical basic parallel process. This result is closely related to the fact that any context-free process is either rooted branching bisimulation equivalent or contrasimulation equivalent to a regular process communicating with a stack, a result that is the analogue in process theory of the language-theoretic result that any context-free language is the language of a pushdown automaton. © 2009 Elsevier B.V. All rights reserved.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
http://arxiv.org/abs/1708.05997v2
A Batch Noise Contrastive Estimation Approach for Training Large Vocabulary Language Models
Training large vocabulary Neural Network Language Models (NNLMs) is a difficult task due to the explicit requirement of output layer normalization, which typically involves evaluating the full softmax function over the complete vocabulary. This paper proposes a Batch Noise Contrastive Estimation (B-NCE) approach to alleviate this problem. This is achieved by reducing the vocabulary, at each time step, to the target words in the batch and then replacing the softmax by the noise contrastive estimation approach, where these words play the role of targets and noise samples at the same time. In doing so, the proposed approach can be fully formulated and implemented using optimal dense matrix operations. Applying B-NCE to train different NNLMs on the Large Text Compression Benchmark (LTCB) and the One Billion Word Benchmark (OBWB) shows a significant reduction of the training time with no noticeable degradation of the models' performance. This paper also presents a new baseline comparative study of different standard NNLMs on the large OBWB on a single Titan-X GPU.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
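A numpy sketch of the core trick above: score each position only against the batch's unique target words, so every target doubles as a noise sample for the other positions, and everything stays a dense matrix product. The noise-distribution correction term of full NCE is dropped here, so this simplification is closer to negative sampling than to the paper's exact objective.

```python
# Sketch of the B-NCE idea with dense matrix ops: the output layer is reduced,
# per batch, to the batch's unique target words.
import numpy as np

rng = np.random.default_rng(0)
B, D = 8, 16                        # batch size, hidden size
H = rng.normal(size=(B, D))         # hidden states from the NNLM
targets = rng.integers(0, 10000, size=B)

vocab_b, idx = np.unique(targets, return_inverse=True)   # reduced batch vocabulary
E = rng.normal(size=(10000, D)) * 0.1                    # output embedding table
W = E[vocab_b]                                           # only the rows actually needed

logits = H @ W.T                                         # (B, |vocab_b|) dense product
p = 1.0 / (1.0 + np.exp(-logits))                        # per-word binary classifier

labels = np.zeros_like(p)
labels[np.arange(B), idx] = 1.0                          # 1 for the true next word
loss = -(labels * np.log(p + 1e-9)
         + (1 - labels) * np.log(1 - p + 1e-9)).mean()
print("batch-NCE-style loss:", loss)
```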
SCOPUS_ID:84911871232
A Battleground of identity: Racial formation and the african american discourse on interracial marriage
This article utilizes a sample of letters to the editor from African American newspapers to investigate racial identity formation. Drawing on an analysis of 234 letters, published predominantly between 1925 and 1965, I examine how African American writers discussed black-white intermarriage. Writers used the issue of intermarriage to negotiate conceptions of racial identity and the politics of racial emancipation. Because of its strong symbolic implications, the intermarriage discourse became a "battleground of identity" for the conflict between two competing racial ideologies: integrationism and separatism. The battleground concept elucidates why some debates become polarized, and why it is so difficult to arbitrate them. I argue that identity battlegrounds may emerge around emotionally charged and concrete but heavily symbolic issues that densely link to key ideas in the ideological systems of two or more conflicting movements. They must be issues that none of the movements can cease to compete over without surrendering their political essence.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85109430198
A Baybayin word recognition system
Baybayin is a pre-Hispanic Philippine writing system used on the island of Luzon. With the effort to reintroduce the script, in 2018 the Committee on Basic Education and Culture of the Philippine Congress approved House Bill 1022, or the "National Writing System Act," which declares the Baybayin script the Philippines' national writing system. Since then, Baybayin OCR has become a field of research interest. Numerous works have proposed different techniques for recognizing Baybayin scripts, but all of those studies were anchored on classification and recognition at the character level. In this work, we propose an algorithm that provides the Latin transliteration of a Baybayin word in an image. The proposed system relies on a Baybayin character classifier generated using a Support Vector Machine (SVM). The method involves isolating each Baybayin character, classifying each character according to its equivalent syllable in Latin script, and finally concatenating the results to form the transliterated word. The system was tested using a novel dataset of Baybayin word images and achieved a competitive 97.9% recognition accuracy. Based on our review of the literature, this is the first work that recognizes Baybayin scripts at the word level. The proposed system can be used in automated transliteration of Baybayin texts transcribed in old books, tattoos, signage, graphic designs, and documents, among others.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
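A sketch of the word-level pipeline described above using scikit-learn: classify each isolated glyph with an SVM into its Latin syllable, then concatenate. Glyph images are faked here as separable random vectors; real use would substitute segmented, flattened character images, and the syllable inventory shown is only a fragment.

```python
# Sketch of the word-level pipeline: SVM per-glyph syllable classification,
# then concatenation into the transliterated word.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
SYLLABLES = ["ba", "ka", "da", "ga", "ha"]  # toy fragment of the syllabary

# toy training set: 40 noisy "images" per syllable class
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(40, 64)) for i in range(5)])
y = np.repeat(SYLLABLES, 40)
clf = SVC(kernel="rbf").fit(X, y)

def transliterate(glyphs):
    """glyphs: list of flattened character images from one Baybayin word."""
    return "".join(clf.predict(np.vstack(glyphs)))

word = [rng.normal(loc=1, scale=0.5, size=64),   # should decode as "ka"
        rng.normal(loc=4, scale=0.5, size=64)]   # should decode as "ha"
print(transliterate(word))  # "kaha"
```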
SCOPUS_ID:85122566072
A Bayesian CNN-LSTM Model for Sentiment Analysis in Massive Open Online Courses MOOCs
Massive Open Online Courses (MOOCs) are increasingly used by learners to acquire knowledge and develop new skills. MOOCs provide a trove of data that can be leveraged to better assist learners, including behavioral data from built-in collaborative tools such as discussion boards and course wikis. Data tracing social interactions among learners are especially interesting as their analyses help improve MOOCs’ effectiveness. We particularly perform sentiment analysis on such data to predict learners at risk of dropping out, measure the success of the MOOC, and personalize the MOOC according to a learner’s behavior and detected emotions. In this paper, we propose a novel approach to sentiment analysis that combines the advantages of the deep learning architectures CNN and LSTM. To avoid highly uncertain predictions, we utilize a Bayesian neural network (BNN) model to quantify uncertainty within the sentiment analysis task. Our empirical results indicate that: 1) The Bayesian CNN-LSTM model provides interesting performance compared to other models (CNN-LSTM, CNN, LSTM) in terms of accuracy, precision, recall, and F1-Score; and 2) there is a high correlation between the sentiment in forum posts and the dropout rate in MOOCs.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:84968876693
A Bayesian Classification Approach Using Class-Specific Features for Text Categorization
In this paper, we present a Bayesian classification approach for automatic text categorization using class-specific features. Unlike conventional text categorization approaches, our proposed method selects a specific feature subset for each class. To apply these class-specific features for classification, we follow Baggenstoss's PDF Projection Theorem (PPT) to reconstruct the PDFs in raw data space from the class-specific PDFs in low-dimensional feature subspace, and build a Bayesian classification rule. One noticeable significance of our approach is that most feature selection criteria, such as Information Gain (IG) and Maximum Discrimination (MD), can be easily incorporated into our approach. We evaluate our method's classification performance on several real-world benchmarks, compared with the state-of-the-art feature selection approaches. The superior results demonstrate the effectiveness of the proposed approach and further indicate its wide potential applications in data mining.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
http://arxiv.org/abs/0907.0785v1
A Bayesian Model for Discovering Typological Implications
A standard form of analysis for linguistic typology is the universal implication. These implications state facts about the range of extant languages, such as ``if objects come after verbs, then adjectives come after nouns.'' Such implications are typically discovered by painstaking hand analysis over a small sample of languages. We propose a computational model for assisting at this process. Our model is able to discover both well-known implications as well as some novel implications that deserve further study. Moreover, through a careful application of hierarchical analysis, we are able to cope with the well-known sampling problem: languages are not independent.
[ "Typology", "Syntactic Text Processing", "Multilinguality" ]
[ 45, 15, 0 ]
http://arxiv.org/abs/1506.04334v2
A Bayesian Model for Generative Transition-based Dependency Parsing
We propose a simple, scalable, fully generative model for transition-based dependency parsing with high accuracy. The model, parameterized by Hierarchical Pitman-Yor Processes, overcomes the limitations of previous generative models by allowing fast and accurate inference. We propose an efficient decoding algorithm based on particle filtering that can adapt the beam size to the uncertainty in the model while jointly predicting POS tags and parse trees. The UAS of the parser is on par with that of a greedy discriminative baseline. As a language model, it obtains better perplexity than a n-gram model by performing semi-supervised learning over a large unlabelled corpus. We show that the model is able to generate locally and syntactically coherent sentences, opening the door to further applications in language generation.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
http://arxiv.org/abs/1603.01514v1
A Bayesian Model of Multilingual Unsupervised Semantic Role Induction
We propose a Bayesian model of unsupervised semantic role induction in multiple languages, and use it to explore the usefulness of parallel corpora for this task. Our joint Bayesian model consists of individual models for each language plus additional latent variables that capture alignments between roles across languages. Because it is a generative Bayesian model, we can do evaluations in a variety of scenarios just by varying the inference procedure, without changing the model, thereby comparing the scenarios directly. We compare using only monolingual data, using a parallel corpus, using a parallel corpus with annotations in the other language, and using small amounts of annotation in the target language. We find that the biggest impact of adding a parallel corpus to training is actually the increase in monolingual data, with the alignments to another language resulting in small improvements, even with labeled data for the other language.
[ "Multilinguality", "Low-Resource NLP", "Semantic Text Processing", "Semantic Parsing", "Responsible & Trustworthy NLP" ]
[ 0, 80, 72, 40, 4 ]
http://arxiv.org/abs/1310.3099v2
A Bayesian Network View on Acoustic Model-Based Techniques for Robust Speech Recognition
This article provides a unifying Bayesian network view on various approaches for acoustic model adaptation, missing feature, and uncertainty decoding that are well-known in the literature of robust automatic speech recognition. The representatives of these classes can often be deduced from a Bayesian network that extends the conventional hidden Markov models used in speech recognition. These extensions, in turn, can in many cases be motivated from an underlying observation model that relates clean and distorted feature vectors. By converting the observation models into a Bayesian network representation, we formulate the corresponding compensation rules leading to a unified view on known derivations as well as to new formulations for certain approaches. The generic Bayesian perspective provided in this contribution thus highlights structural differences and similarities between the analyzed approaches.
[ "Speech & Audio in NLP", "Robustness in NLP", "Text Generation", "Responsible & Trustworthy NLP", "Speech Recognition", "Multimodality" ]
[ 70, 58, 47, 4, 10, 74 ]
SCOPUS_ID:85019013527
A Bayesian Race Model for Recognition Memory
Many psychological models use the idea of a trace, which represents a change in a person’s cognitive state that arises as a result of processing a given stimulus. These models assume that a trace is always laid down when a stimulus is processed. In addition, some of these models explain how response times (RTs) and response accuracies arise from a process in which the different traces race against each other. In this article, we present a Bayesian hierarchical model of RT and accuracy in a difficult recognition memory experiment. The model includes a stochastic component that probabilistically determines whether a trace is laid down. The RTs and accuracies are modeled using a minimum gamma race model, with extra model components that allow for the effects of stimulus, sequential dependencies, and trend. Subject-specific effects, as well as ancillary effects due to processes such as perceptual encoding and guessing, are also captured in the hierarchy. Predictive checks show that our model fits the data well. Marginal likelihood evaluations show better predictive performance of our model compared to an approximate Weibull model. Supplementary materials for this article are available online.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:84971513897
A Bayesian Sampling Method for Product Feature Extraction from Large-Scale Textual Data
The authors of this work propose an algorithm that determines optimal search keyword combinations for querying online product data sources in order to minimize identification errors during the product feature extraction process. Data-driven product design methodologies based on acquiring and mining online product-feature-related data face two fundamental challenges: (1) determining optimal search keywords that return relevant product-related data, and (2) determining how many search keywords are sufficient to minimize identification errors during the product feature extraction process. These challenges exist because online data, which is primarily textual in nature, may violate several statistical assumptions relating to the independence and identical distribution of samples relating to a query. Existing design methodologies have predetermined search terms for acquiring textual data online, which makes the quality of the resulting data a function of the search terms themselves. Furthermore, the lack of independence and identical distribution in text data from online sources affects the quality of the acquired data. For example, a designer may search for a product feature using the term "screen", which may return relevant results such as "the screen size is just perfect" but may also return irrelevant noise such as "researchers should really screen for this type of error." A text mining algorithm is introduced to determine, without labeled training data, the optimal terms that maximize the veracity of the acquired data so that a valid conclusion can be made. A case study involving real-world smartphones is used to validate the proposed methodology.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85144435663
A Bayesian Topic Model for Human-Evaluated Interpretability
One desideratum of topic modeling is to produce interpretable topics. Given a cluster of document-tokens comprising a topic, we can order the topic's words by their counts. It is natural to think that each topic could easily be labeled by looking at the words with the highest counts. However, this is not always the case: a human evaluator can often have difficulty identifying a single label that accurately describes a topic, as many of the top words seem unrelated. This paper aims to improve interpretability in topic modeling by providing a novel, outperforming interpretable topic model. Our approach combines two previously established subdomains of topic modeling: nonparametric and weakly-supervised topic models. Given a nonparametric topic model, we can include weakly-supervised input using novel modifications to the nonparametric generative model. These modifications lay the groundwork for a compelling setting: one in which most corpora, without any previous supervised or weakly-supervised input, can discover interpretable topics. This setting also presents various challenging sub-problems, for which we provide resolutions. Combining nonparametric topic models with weakly-supervised topic models leads to an exciting discovery: a complete, self-contained and outperforming topic model for interpretability.
[ "Low-Resource NLP", "Topic Modeling", "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 80, 9, 81, 4, 3 ]
SCOPUS_ID:85028462788
A Bayesian approach for semantic search based on DAG-shaped ontologies
Semantic search has great potential to help users make choices, since it appears to outperform traditional keyword-based approaches. This paper presents an ontology-based semantic search method, referred to as influential SemSim (i-SemSim), which relies on the Bayesian probabilistic approach for weighting the reference ontology. The Bayesian approach seems promising when the reference ontology is organized as a Directed Acyclic Graph (DAG). In particular, the proposed method evaluates the similarity between a user request and semantically annotated resources. The user request, as well as each annotated resource, is represented by a set of concepts from the reference ontology. The experimental results of this paper show that adopting the Bayesian method for weighting DAG-based reference ontologies allows i-SemSim to outperform the most representative methods selected from the literature.
[ "Semantic Search", "Knowledge Representation", "Semantic Text Processing", "Information Retrieval" ]
[ 41, 18, 72, 24 ]
SCOPUS_ID:85006944225
A Bayesian approach for weighted ontologies and semantic search
Semantic similarity search is one of the most promising methods for improving the performance of retrieval systems. This paper presents a new probabilistic method for ontology weighting based on a Bayesian approach. In particular, this work addresses the semantic search method SemSim for evaluating the similarity among a user request and semantically annotated resources. Each resource is annotated with a vector of features (annotation vector), i.e., a set of concepts defined in a reference ontology. Analogously, a user request is represented by a collection of desired features. The paper shows, on the basis of a comparative study, that the adoption of the Bayesian weighting method improves the performance of the SemSim method.
[ "Semantic Search", "Knowledge Representation", "Semantic Text Processing", "Information Retrieval" ]
[ 41, 18, 72, 24 ]
SCOPUS_ID:84977103368
A Bayesian approach to classify the music scores on the basis of the music style
This article presents a new version of the algorithm proposed by Della Ventura (12th TELE-INFO International Conference on Recent Researches in Telecommunications, and Informatics, 2013, [1]) to classify musical scores. Score classification means an automatic process of assigning a specific score to a certain class or category: baroque, romantic or contemporary music. The algorithm is based on a Bayesian probabilistic model that extends the Naive Bayes classifier by adding a variable tied to the value of the information contained within the score. The score is not seen as a single entity, but as a set of subtopics, each of which identifies and represents a standard feature of music writing. The classification of the score is done on the basis of its subtopics: an intermediate level of classification is thus introduced, which induces a hierarchical classification. The new algorithm performs equally well on the old dataset, but gives much better results on the new, larger and more diverse dataset.
[ "Text Classification", "Speech & Audio in NLP", "Multimodality", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 70, 74, 24, 3 ]
SCOPUS_ID:67651097685
A Bayesian approach to intention-based response generation
The statistical approach of overgeneration-and-ranking for natural language generation suffers from expensive overgeneration. This article reports the findings of a response classification experiment within the new approach of intention-based classification-and-ranking. Possible responses are deliberately chosen from a dialogue corpus rather than wholly generated, so the approach allows short ungrammatical utterances as long as they satisfy the intended meaning of the input utterance. We hypothesize that a response is relevant when it satisfies the intention of the preceding utterance; the approach therefore depends heavily on intentions rather than on a syntactic characterization of the input utterance. The response classification experiment is tested on a mixed-initiative, transaction dialogue corpus in the theater domain. This article reports a promising initial result of 73% accuracy in predicting response classes in a classification experiment applying Bayesian networks. © EuroJournals Publishing, Inc. 2009.
[ "Dialogue Response Generation", "Text Classification", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 14, 36, 11, 47, 38, 24, 3 ]
SCOPUS_ID:84867191524
A Bayesian approach to semantic composition for spoken language interpretation
This paper introduces a stochastic interpretation process for composing semantic structures. This process, dedicated to spoken language interpretation, makes it possible to derive semantic frame structures directly from word and basic concept sequences representing the users' utterances. First, a two-step rule-based process is used to provide a reference semantic frame annotation of the speech training data. Then, through a decoding stage, dynamic Bayesian networks are used to hypothesize frames with confidence scores from test data. The semantic frames used in this work are derived from the Berkeley FrameNet paradigm. Experiments are reported on the MEDIA corpus. MEDIA is a French dialog corpus recorded using a Wizard of Oz system simulating a telephone server for tourist information and hotel booking. For all the data, manual transcriptions and annotations at the word and concept levels are available. In order to evaluate the robustness of the proposed approach, tests are performed under three conditions of increasing difficulty with respect to errors in the word and concept sequence inputs: (i) manually transcribed and annotated, (ii) manually transcribed and enriched with concepts provided by an automatic annotation, and (iii) fully automatically transcribed and annotated. The experimental results show that the proposed probabilistic framework is able to carry out semantic frame annotation with good reliability, comparable to a semi-manual rule-based approach. Copyright © 2008 ISCA.
[ "Explainability & Interpretability in NLP", "Natural Language Interfaces", "Responsible & Trustworthy NLP", "Dialogue Systems & Conversational Agents" ]
[ 81, 11, 4, 38 ]
SCOPUS_ID:85097057119
A Bayesian brain model of adaptive behavior: An application to the Wisconsin Card Sorting Task
Adaptive behavior emerges through a dynamic interaction between cognitive agents and changing environmental demands. The investigation of information processing underlying adaptive behavior relies on controlled experimental settings in which individuals are asked to accomplish demanding tasks whereby a hidden regularity or an abstract rule has to be learned dynamically. Although performance in such tasks is considered as a proxy for measuring high-level cognitive processes, the standard approach consists in summarizing observed response patterns by simple heuristic scoring measures. With this work, we propose and validate a new computational Bayesian model accounting for individual performance in the Wisconsin Card Sorting Test (WCST), a renowned clinical tool to measure set-shifting and deficient inhibitory processes on the basis of environmental feedback. We formalize the interaction between the task's structure, the received feedback, and the agent's behavior by building a model of the information processing mechanisms used to infer the hidden rules of the task environment. Furthermore, we embed the new model within the mathematical framework of the Bayesian Brain Theory (BBT), according to which beliefs about hidden environmental states are dynamically updated following the logic of Bayesian inference. Our computational model maps distinct cognitive processes into separable, neurobiologically plausible, information-theoretic constructs underlying observed response patterns. We assess model identification and expressiveness in accounting for meaningful human performance through extensive simulation studies. We then validate the model on real behavioral data in order to highlight the utility of the proposed model in recovering cognitive dynamics at an individual level. We highlight the potentials of our model in decomposing adaptive behavior in the WCST into several information-theoretic metrics revealing the trial-by-trial unfolding of information processing by focusing on two exemplary individuals whose behavior is examined in depth. Finally, we focus on the theoretical implications of our computational model by discussing the mapping between BBT constructs and functional neuroanatomical correlates of task performance. We further discuss the empirical benefit of recovering the assumed dynamics of information processing for both clinical and research practices, such as neurological assessment and model-based neuroscience.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:85016946262
A Bayesian classifiers based combination model for automatic text classification
Text classification deals with allocating a text document to a predetermined class. Generally, this involves learning about a class from representations of documents belonging to that class. In this paper, we propose a classifier combination that uses a Multinomial Naïve Bayes (MNB) classifier along with a Bayesian Network (BN) classifier. The results of the two classifiers are combined by taking the average of the probability distributions calculated by each. Feature extraction and selection techniques are incorporated into the model to find the most discriminating terms for classification. The model has been tested on three real text datasets. According to the experiments, this approach performs better than either of the two constituent classifiers, with higher overall accuracy, and also surpasses the accuracy of other well-known standard classifiers. This approach differs from previous classification techniques in that it successfully incorporates MNB and BN classifiers and shows significantly better results than using either of the two classifiers separately. A comparative study of previous approaches with our method indicates a significant improvement over a number of techniques that were evaluated on the same dataset.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
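A sketch of the combination rule above in scikit-learn: average the two classifiers' predicted class distributions and take the argmax. scikit-learn ships no Bayesian network classifier, so logistic regression stands in for the BN component here; the dataset and feature settings are likewise illustrative.

```python
# Sketch: average the predict_proba distributions of two text classifiers.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

cats = ["sci.space", "rec.autos"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = TfidfVectorizer(max_features=5000)
Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)

mnb = MultinomialNB().fit(Xtr, train.target)
bn_standin = LogisticRegression(max_iter=1000).fit(Xtr, train.target)  # BN stand-in

avg = (mnb.predict_proba(Xte) + bn_standin.predict_proba(Xte)) / 2  # the combination
pred = avg.argmax(axis=1)
print("combined accuracy:", (pred == test.target).mean())
```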
SCOPUS_ID:85095410236
A Bayesian end-to-end model with estimated uncertainties for simple question answering over knowledge bases
Existing methods for question answering over knowledge bases (KBQA) ignore model prediction uncertainties. We argue that estimating such uncertainties is crucial for the reliability and interpretability of KBQA systems. Therefore, we propose a novel end-to-end KBQA model based on a Bayesian Neural Network (BNN) to estimate uncertainties arising from both the model and the data. To the best of our knowledge, we are the first to consider the uncertainty estimation problem for the KBQA task using a BNN. The proposed end-to-end model integrates entity detection and relation prediction into a unified framework, and employs the BNN to model entities and relations under the given question semantics, transforming network weights into distributions. Predictive distributions can therefore be estimated by sampling weights and forwarding inputs through the network multiple times, and uncertainties can be further quantified by calculating the variances of the predictive distributions. The experimental results demonstrate the effectiveness of these uncertainties in both the misclassification detection task and the cause-of-error detection task. Furthermore, the proposed model achieves performance comparable to the existing state-of-the-art approaches on the SimpleQuestions dataset.
[ "Natural Language Interfaces", "Knowledge Representation", "Semantic Text Processing", "Question Answering" ]
[ 11, 18, 72, 27 ]
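A sketch of the uncertainty-quantification step described above: run several stochastic forward passes and read off the per-class mean and variance of the predictive distribution. Dropout sampling is used here as a common practical surrogate for sampling BNN weight distributions; the feature size and class count are invented.

```python
# Sketch: Monte Carlo predictive mean/variance via repeated stochastic passes.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.3),
                    nn.Linear(64, 5), nn.Softmax(dim=-1))
net.train()                      # keep dropout active at prediction time

x = torch.randn(1, 32)           # e.g. encoded (question, candidate relation) features
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(50)])   # T = 50 "weight samples"

mean = samples.mean(dim=0)       # predictive distribution
var = samples.var(dim=0)         # per-class uncertainty estimate
print("prediction:", mean.argmax().item(), "uncertainty:", var.max().item())
```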
SCOPUS_ID:84857369324
A Bayesian feature selection paradigm for text classification
The automated classification of texts into predefined categories has witnessed a booming interest, due to the increased availability of documents in digital form and the ensuing need to organize them. An important problem for text classification is feature selection, whose goals are to improve classification effectiveness, computational efficiency, or both. Due to category imbalance and feature sparsity in social text collections, filter methods may work poorly. In this paper, we perform feature selection within the training process, automatically selecting the best feature subset by learning, from a set of preclassified documents, the characteristics of the categories. We propose a generative probabilistic model that describes categories by distributions and handles the feature selection problem by introducing a binary exclusion/inclusion latent vector, which is updated via an efficient Metropolis search. Real-life examples illustrate the effectiveness of the approach. © 2011 Elsevier Ltd. All rights reserved.
[ "Information Extraction & Text Mining", "Green & Sustainable NLP", "Text Classification", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 3, 68, 36, 24, 4 ]
SCOPUS_ID:84874529222
A Bayesian framework for simultaneously modeling neural and behavioral data
Scientists who study cognition infer underlying processes either by observing behavior (e.g., response times, percentage correct) or by observing neural activity (e.g., the BOLD response). These two types of observations have traditionally supported two separate lines of study. The first is led by cognitive modelers, who rely on behavior alone to support their computational theories. The second is led by cognitive neuroimagers, who rely on statistical models to link patterns of neural activity to experimental manipulations, often without any attempt to make a direct connection to an explicit computational theory. Here we present a flexible Bayesian framework for combining neural and cognitive models. Joining neuroimaging and computational modeling in a single hierarchical framework allows the neural data to influence the parameters of the cognitive model and allows behavioral data, even in the absence of neural data, to constrain the neural model. Critically, our Bayesian approach can reveal interactions between behavioral and neural parameters, and hence between neural activity and cognitive mechanisms. We demonstrate the utility of our approach with applications to simulated fMRI data with a recognition model and to diffusion-weighted imaging data with a response time model of perceptual choice. © 2013 Elsevier Inc.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:67349278780
A Bayesian framework for word segmentation: Exploring the effects of context
Since the experiments of Saffran et al. [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning in 8-month-old infants. Science, 274, 1926-1928], there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words - in particular, how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what's that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered. © 2009 Elsevier B.V. All rights reserved.
[ "Text Segmentation", "Speech & Audio in NLP", "Syntactic Text Processing", "Multimodality" ]
[ 21, 70, 15, 74 ]
SCOPUS_ID:84963589419
A Bayesian hierarchical model for comparing average F1 scores
In multi-class text classification, the performance (effectiveness) of a classifier is usually measured by micro-averaged and macro-averaged F1 scores. However, the scores themselves do not tell us how reliable they are in terms of forecasting the classifier's future performance on unseen data. In this paper, we propose a novel approach to explicitly modelling the uncertainty of average F1 scores through Bayesian reasoning, and demonstrate that it can provide much more comprehensive performance comparison between text classifiers than the traditional frequentist null hypothesis significance testing (NHST).
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
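One concrete way to put a posterior on an average F1 score, in the spirit of the abstract above but not necessarily its exact hierarchical model: treat the flattened confusion matrix as multinomial counts, place a Dirichlet posterior over the cell probabilities, and sample the induced distribution of macro-averaged F1.

```python
# Sketch: Bayesian posterior over macro-F1 via a Dirichlet over confusion cells.
import numpy as np

rng = np.random.default_rng(0)
conf = np.array([[80, 5, 3],      # observed confusion matrix (rows: true class)
                 [6, 70, 10],
                 [4, 8, 60]])
K = conf.shape[0]

def macro_f1(p):
    p = p.reshape(K, K)
    f1 = []
    for k in range(K):
        prec = p[k, k] / p[:, k].sum()   # precision for class k
        rec = p[k, k] / p[k, :].sum()    # recall for class k
        f1.append(2 * prec * rec / (prec + rec))
    return np.mean(f1)

# Dirichlet(1 + counts) posterior over the K*K cell probabilities
draws = rng.dirichlet(1 + conf.ravel(), size=5000)
f1_samples = np.array([macro_f1(d) for d in draws])
lo, hi = np.percentile(f1_samples, [2.5, 97.5])
print(f"posterior mean macro-F1 = {f1_samples.mean():.3f}, "
      f"95% CI = ({lo:.3f}, {hi:.3f})")
```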
SCOPUS_ID:72449200175
A Bayesian learning approach to promoting diversity in ranking for biomedical information retrieval
In this paper, we propose a Bayesian learning approach to promoting diversity for information retrieval in biomedicine, together with a re-ranking model to improve retrieval performance in the biomedical domain. First, the re-ranking model computes the maximum posterior probability of the hidden property corresponding to each retrieved passage. It then iteratively groups the passages into subsets according to their properties. Finally, the passages are re-ranked from the subsets as our output. The proposed method requires no external biomedical resources. We evaluate the Bayesian learning approach by conducting extensive experiments on the TREC 2004-2007 Genomics data sets. The experimental results show the effectiveness of the proposed approach for promoting diversity in ranking for biomedical information retrieval across all four years of TREC data sets. Copyright 2009 ACM.
[ "Passage Retrieval", "Information Retrieval" ]
[ 66, 24 ]
SCOPUS_ID:84877760378
A Bayesian model for learning SCFGs with discontiguous rules
We describe a nonparametric model and corresponding inference algorithm for learning Synchronous Context Free Grammar derivations for parallel text. The model employs a Pitman-Yor Process prior which uses a novel base distribution over synchronous grammar rules. Through both synthetic grammar induction and statistical machine translation experiments, we show that our model learns complex translational correspondences - including discontiguous, many-to-many alignments-and produces competitive translation results. Further, inference is efficient and we present results on significantly larger corpora than prior work. © 2012 Association for Computational Linguistics.
[ "Text Error Correction", "Machine Translation", "Syntactic Text Processing", "Text Generation", "Multilinguality" ]
[ 26, 51, 15, 47, 0 ]
https://aclanthology.org//2010.iwslt-papers.7/
A Bayesian model of bilingual segmentation for transliteration
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:77952922710
A Bayesian model of syntax-directed tree to string grammar induction
Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step. © 2009 ACL and AFNLP.
[ "Text Error Correction", "Machine Translation", "Syntactic Text Processing", "Text Generation", "Multilinguality" ]
[ 26, 51, 15, 47, 0 ]
SCOPUS_ID:84866015784
A Bayesian modeling approach to multi-dimensional sentiment distributions prediction
Sentiment analysis has long focused on the binary classification of text as either positive or negative, and there has been little work on mapping sentiments or emotions into multiple dimensions. This paper studies a Bayesian modeling approach to multi-class sentiment classification and multi-dimensional sentiment distribution prediction. It proposes effective mechanisms to incorporate supervised information, such as labeled feature constraints and document-level sentiment distributions derived from the training data, into model learning. We have evaluated our approach on datasets collected from the confession section of the Experience Project website, where people share their life experiences and personal stories. Our results show that using the latent representation of the training documents derived from our approach as features to build a maximum entropy classifier outperforms other approaches on multi-class sentiment classification. In the more difficult task of multi-dimensional sentiment distribution prediction, our approach gives superior performance compared to a few competitive baselines. © 2012 ACM.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
SCOPUS_ID:15944425469
A Bayesian network coding scheme for annotating biomedical information presented to genetic counseling clients
We developed a Bayesian network coding scheme for annotating biomedical content in layperson-oriented clinical genetics documents. The coding scheme supports the representation of probabilistic and causal relationships among concepts in this domain, at a high enough level of abstraction to capture commonalities among genetic processes and their relationship to health. We are using the coding scheme to annotate a corpus of genetic counseling patient letters as part of the requirements analysis and knowledge acquisition phase of a natural language generation project. This paper describes the coding scheme and presents an evaluation of intercoder reliability for its tag set. In addition to giving examples of use of the coding scheme for analysis of discourse and linguistic features in this genre, we suggest other uses for it in analysis of layperson-oriented text and dialogue in medical communication. © 2004 Elsevier Inc. All rights reserved.
[ "Text Generation" ]
[ 47 ]
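The intercoder reliability evaluation mentioned here is conventionally reported with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below computes kappa on hypothetical tag assignments; the tag names are invented and this is not the authors' evaluation code.

```python
# Generic Cohen's kappa over two coders' tag assignments for the same items.
from collections import Counter

def cohen_kappa(tags_a, tags_b):
    """Chance-corrected agreement between two coders."""
    assert len(tags_a) == len(tags_b)
    n = len(tags_a)
    observed = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    freq_a, freq_b = Counter(tags_a), Counter(tags_b)
    expected = sum((freq_a[t] / n) * (freq_b[t] / n)
                   for t in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

coder1 = ["CAUSE", "RISK", "CAUSE", "TEST", "RISK"]   # hypothetical tags
coder2 = ["CAUSE", "RISK", "TEST", "TEST", "RISK"]
print(round(cohen_kappa(coder1, coder2), 3))          # 0.706 on this toy data
```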
SCOPUS_ID:84937812818
A Bayesian non-linear method for feature selection in machine translation quality estimation
We perform a systematic analysis of the effectiveness of features for the problem of predicting the quality of machine translation (MT) at the sentence level. Starting from a comprehensive feature set, we apply a technique based on Gaussian processes, a Bayesian non-linear learning method, to automatically identify features leading to accurate model performance. We consider application to several datasets across different language pairs and text domains, with translations produced by various MT systems and scored for quality according to different evaluation criteria. We show that selecting features with this technique leads to significantly better performance in most datasets, as compared to using the complete feature sets or a state-of-the-art feature selection approach. In addition, we identify a small set of features which seem to perform well across most datasets.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
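A hedged sketch of the underlying idea: fit a Gaussian process with one length-scale per feature (automatic relevance determination) and rank features by inverse length-scale, since short learned scales indicate high relevance. The data is synthetic, and the five "QE features" are placeholders for the paper's feature sets.

```python
# Sketch of GP-based feature relevance ranking via an ARD kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))              # 5 candidate QE features (synthetic)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=80)

kernel = RBF(length_scale=np.ones(5))     # ARD: one length-scale per feature
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
relevance = 1.0 / gp.kernel_.length_scale # short scale => relevant feature
print("features ranked by relevance:", np.argsort(relevance)[::-1])
```

On this synthetic target, features 0 and 3 should come out on top, mirroring how the paper selects the features the model deems informative.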
SCOPUS_ID:84925276643
A Bayesian nonparametric topic model for user interest modeling
Web users display their preferences implicitly through the sequences of pages they navigate. Web recommendation systems use methods to extract useful knowledge about user interests from such data. We propose a Bayesian nonparametric approach to the problem of modeling user interests in recommender systems using implicit feedback such as user navigations and clicks on items. Our approach is based on the discovery of a set of latent interests that are shared among users in the system, and makes the key assumption that each user activity is motivated by only a few of the interests in the user's interest profile, which is quite different from most existing recommendation algorithms. By using a beta process and a Dirichlet prior, the number of hidden interests and the relationships between interests and items are both inferred from the data. In order to model the sequential information in users' visits, we make a Markovian assumption on each user's sequence of navigated items. We develop a Markov chain Monte Carlo inference method based on the Indian buffet process representation of the beta process. We validate our sampling algorithm on synthetic data and real-world datasets, demonstrating promising results on recovering the hidden user interests.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
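To illustrate the nonparametric prior, the following is a minimal forward simulation of the Indian buffet process, the representation under which the authors derive their sampler. It draws a binary user-by-interest matrix whose number of columns is data-driven; this is a prior simulation only, not their posterior inference.

```python
# Forward draw from the Indian buffet process prior over binary matrices.
import numpy as np

def sample_ibp(num_users, alpha, rng):
    """Return a binary user-by-interest matrix drawn from IBP(alpha)."""
    dishes = []                       # popularity count of each interest
    rows = []
    for n in range(1, num_users + 1):
        row = [rng.random() < c / n for c in dishes]  # reuse popular interests
        for i, taken in enumerate(row):
            dishes[i] += taken
        new = rng.poisson(alpha / n)                  # open brand-new interests
        row += [True] * new
        dishes += [1] * new
        rows.append(row)
    k = len(dishes)
    return np.array([r + [False] * (k - len(r)) for r in rows])

Z = sample_ibp(num_users=10, alpha=2.0, rng=np.random.default_rng(1))
print(Z.astype(int))                  # number of interest columns is unbounded
```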
SCOPUS_ID:85065234797
A Bayesian race model for response times under cyclic stimulus discriminability
Response time (RT) data from psychology experiments are often used to validate theories of how the brain processes information and how long it takes a person to make a decision. When an RT results from a task involving two or more possible responses, the cognitive process that determines the RT may be modeled as the first-passage time of underlying competing (racing) processes with each process describing accumulation of information in favor of one of the responses. In one popular model the racers are assumed to be Gaussian diffusions. Their first-passage times are inverse Gaussian random variables and the resulting RT has a min-inverse Gaussian distribution. The RT data analyzed in this paper were collected in an experiment requiring people to perform a two-choice task in response to a regularly repeating sequence of stimuli. Starting from a min-inverse Gaussian likelihood for the RTs we build a Bayesian hierarchy for the rates and thresholds of the racing diffusions. The analysis allows us to characterize patterns in a person’s sequence of responses on the basis of features of the person’s diffusion rates (the “footprint” of the stimuli) and a person’s gradual changes in speed as trends in the diffusion thresholds. Last, we propose that a small fraction of RTs arise from distinct, noncognitive processes that are included as components of a mixture model. In the absence of sharp prior information, the inclusion of these mixture components is accomplished via a two-stage, empirical Bayes approach. The resulting framework may be generalized readily to RTs collected under a variety of experimental designs.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
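A minimal simulation of the racing-accumulator account sketched in this abstract: each response corresponds to a diffusion whose first-passage time is inverse-Gaussian, and the observed RT is the faster of the two. The drift rates and thresholds below are made-up values for illustration.

```python
# Simulate a two-accumulator inverse-Gaussian race; RT = min, choice = argmin.
import numpy as np

def simulate_race(n_trials, drifts, thresholds, rng):
    """Return (rt, choice) for a two-accumulator race."""
    # First-passage time of a unit-variance diffusion with drift v to
    # threshold a is inverse-Gaussian with mean a/v and shape a**2.
    rts = np.column_stack([
        rng.wald(mean=a / v, scale=a ** 2, size=n_trials)
        for v, a in zip(drifts, thresholds)
    ])
    return rts.min(axis=1), rts.argmin(axis=1)

rt, choice = simulate_race(
    n_trials=5, drifts=[2.0, 1.5], thresholds=[1.0, 1.0],
    rng=np.random.default_rng(42))
print(np.round(rt, 3), choice)
```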
SCOPUS_ID:84971612146
A Bayesian recommender model for user rating and review profiling
Intuitively, not only do ratings contain abundant information for learning user preferences, but so do the reviews that accompany them. However, most existing recommender systems take rating scores for granted and discard the wealth of information in the accompanying reviews. In this paper, in order to exhaustively exploit the user-profile information embedded in both ratings and reviews, we propose a Bayesian model that seamlessly links a traditional Collaborative Filtering (CF) technique with a topic model. By employing a topic model over the review text and aligning user review topics with "user attitudes" (i.e., abstract rating patterns) over the same distribution, our method achieves greater accuracy than the traditional approach on the rating prediction task. Moreover, with review text involved, latent user rating attitudes are interpretable and the "cold-start" problem can be alleviated. This property qualifies our method for recommendation on very sparse datasets. Furthermore, unlike most related work, we treat each review as its own document, rather than concatenating all reviews of a user or item into one document, to fully exploit the reviews' information. Experimental results on 25 real-world datasets demonstrate the superiority of our model over state-of-the-art methods.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
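The abstract's point about treating every review as its own document can be shown with a plain LDA preprocessing step, as in the sketch below. The coupling to collaborative filtering in the full Bayesian model is not reproduced here, and the review tuples are invented.

```python
# One topic-mixture per review (not per user or per item), via vanilla LDA.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical (user, item, rating, review) tuples
reviews = [
    ("u1", "i1", 5, "great battery life and a sharp screen"),
    ("u1", "i2", 2, "battery died fast, very disappointing"),
    ("u2", "i1", 4, "screen is sharp, sound could be louder"),
]
texts = [r[3] for r in reviews]               # one document per review
counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
review_topics = lda.fit_transform(counts)     # per-review topic mixtures
print(review_topics.round(2))
```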
SCOPUS_ID:84883267393
A Bayesian topic model for spam filtering
Spam is one of the major problems of today's Internet: it causes financial damage to companies and annoys individual users. Among the approaches developed to detect spam, content-based machine learning algorithms are important and popular. However, these algorithms are trained on statistical representations of the terms that usually appear in e-mails and are unable to account for the underlying semantics of terms within the messages. In this paper, we present a Bayesian topic model to address these limitations. We explore the use of semantics in spam filtering by representing e-mails as vectors of topics with a topic model: Latent Dirichlet Allocation (LDA). Based on this representation, the relationship between the topics and spam can be discovered using a Bayesian method. We test this model on the Enron-Spam datasets; results show that the proposed model performs better than the baseline and can detect the internal semantics of spam messages. © 2013 by Binary Information Press.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
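A hedged sketch of the pipeline this abstract outlines: represent each e-mail as an LDA topic mixture, then learn a simple Bayesian classifier over those topic vectors. Gaussian naive Bayes stands in for the paper's own Bayesian method, and the toy messages replace the Enron-Spam data.

```python
# E-mails -> LDA topic vectors -> Bayesian classifier (naive Bayes stand-in).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import GaussianNB

emails = ["cheap meds buy now", "meeting moved to friday",
          "win money fast click here", "quarterly report attached"]
is_spam = [1, 0, 1, 0]

counts = CountVectorizer().fit_transform(emails)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
features = lda.fit_transform(counts)           # e-mails as topic vectors
clf = GaussianNB().fit(features, is_spam)
print(clf.predict(features))
```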
http://arxiv.org/abs/2012.10251v1
A Benchmark Arabic Dataset for Commonsense Explanation
Language comprehension and commonsense knowledge validation by machines are challenging tasks that remain under-researched and under-evaluated for Arabic text. In this paper, we present a benchmark Arabic dataset for commonsense explanation. The dataset consists of Arabic sentences that do not make sense, each paired with three choices from which to select the one that explains why the sentence is false. Furthermore, this paper presents baseline results to assist and encourage future research in this field. The dataset is distributed under the Creative Commons CC-BY-SA 4.0 license and can be found on GitHub.
[ "Commonsense Reasoning", "Explainability & Interpretability in NLP", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 62, 81, 8, 4 ]
http://arxiv.org/abs/2202.02013v2
A Benchmark Corpus for the Detection of Automatically Generated Text in Academic Publications
Automatic text generation based on neural language models has achieved performance levels that make the generated text almost indistinguishable from text written by humans. Despite the value that text generation can have in various applications, it can also be employed for malicious tasks. The diffusion of such practices represents a threat to the quality of academic publishing. To address these problems, we propose in this paper two datasets of artificially generated research content: a completely synthetic dataset and a partial text substitution dataset. In the first case, the content is generated entirely by the GPT-2 model after a short prompt extracted from original papers. The partial, or hybrid, dataset is created by replacing several sentences of abstracts with sentences generated by the Arxiv-NLP model. We evaluate the quality of the datasets by comparing the generated texts to aligned original texts using fluency metrics such as BLEU and ROUGE. The more natural the artificial texts seem, the more difficult they are to detect and the better the benchmark. We also evaluate the difficulty of distinguishing original from generated text using state-of-the-art classification models.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Text Generation", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 47, 24, 3 ]
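The dataset-quality check described above compares generated text to aligned originals with overlap metrics. The sketch below computes a smoothed sentence-level BLEU score with NLTK on two made-up strings; the actual corpora and the ROUGE side of the evaluation are omitted.

```python
# Sentence-level BLEU between an original span and a generated counterpart.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

original = "we propose two datasets of artificially generated research content".split()
generated = "we present two datasets of machine generated research text".split()

smooth = SmoothingFunction().method1   # avoids zero scores on short texts
score = sentence_bleu([original], generated, smoothing_function=smooth)
print(f"BLEU against the aligned original: {score:.3f}")
```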
https://aclanthology.org//W14-2109/
A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics
[ "Argument Mining", "Reasoning" ]
[ 60, 8 ]
http://arxiv.org/abs/1909.04251v1
A Benchmark Dataset for Learning to Intervene in Online Hate Speech
Countering online hate speech is a critical yet challenging task, but one which can be aided by the use of Natural Language Processing (NLP) techniques. Previous research has primarily focused on the development of NLP methods to automatically and effectively detect online hate speech while disregarding further action needed to calm and discourage individuals from using hate speech in the future. In addition, most existing hate speech datasets treat each post as an isolated instance, ignoring the conversational context. In this paper, we propose a novel task of generative hate speech intervention, where the goal is to automatically generate responses to intervene during online conversations that contain hate speech. As a part of this work, we introduce two fully-labeled large-scale hate speech intervention datasets collected from Gab and Reddit. These datasets provide conversation segments, hate speech labels, as well as intervention responses written by Mechanical Turk Workers. In this paper, we also analyze the datasets to understand the common intervention strategies and explore the performance of common automatic response generation methods on these new datasets to provide a benchmark for future research.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
SCOPUS_ID:85144420853
A Benchmark Dataset for Multi-Level Complexity-Controllable Machine Translation
This paper introduces a new benchmark test dataset for multi-level complexity-controllable machine translation (MLCC-MT), i.e., MT that controls output complexity at more than two levels. In previous studies, MLCC-MT models have been evaluated on a test dataset automatically generated from the Newsela corpus, a document-level comparable corpus with document-level complexity annotations. There are three issues with the existing test dataset: first, a source-language sentence and its target-language sentence are not necessarily an exact translation pair, because they are automatically detected. Second, a target-language sentence and its simplified target-language sentence are not always perfectly parallel, since they are automatically aligned. Third, the sentence-level complexity is not always appropriate, because it is derived from the article-level complexity associated with the Newsela corpus. Therefore, we created a benchmark test dataset for Japanese-to-English MLCC-MT from the Newsela corpus by introducing automatic filtering of data with inappropriate sentence-level complexity, a manual check of parallel target-language sentences with different complexity levels, and manual translation. Furthermore, we implement two MLCC-NMT frameworks with a Transformer architecture and report their performance on our test dataset as baselines for future research. Our test dataset and code are released.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/2210.12314v1
A Benchmark Study of Contrastive Learning for Arabic Social Meaning
Contrastive learning (CL) has brought significant progress to various NLP tasks. Despite this progress, CL has not been applied to Arabic NLP to date, nor is it clear how much benefit it could bring to particular classes of tasks, such as those involved in Arabic social meaning (e.g., sentiment analysis, dialect identification, hate speech detection). In this work, we present a comprehensive benchmark study of state-of-the-art supervised CL methods on a wide array of Arabic social meaning tasks. Through extensive empirical analyses, we show that CL methods outperform vanilla fine-tuning on most tasks we consider. We also show that CL can be data-efficient, and we quantify this efficiency. Overall, our work demonstrates the promise of CL methods, including in low-resource settings.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
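Benchmarks of supervised CL for classification typically build on a supervised contrastive objective in the style of SupCon, where embeddings sharing a label are pulled together and all others pushed apart. The sketch below implements that generic loss on random stand-in encoder outputs; it is an assumption about the family of methods evaluated, not code from the paper.

```python
# Generic supervised contrastive (SupCon-style) loss over one batch.
import torch
import torch.nn.functional as F

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of encoder outputs."""
    z = F.normalize(embeddings, dim=1)            # unit-length embeddings
    sim = z @ z.T / temperature                   # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # log-softmax over all *other* examples in the batch
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)   # avoid -inf * 0 = nan
    # average log-probability of the positives for each anchor
    return -((log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)).mean()

z = torch.randn(8, 16)                            # stand-in encoder outputs
y = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])        # task labels
print(supcon_loss(z, y).item())
```

In a benchmark like this one, `z` would come from a pretrained Arabic encoder being fine-tuned, and the loss would be combined with or compared against plain cross-entropy.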