Dataset schema: id (string, 20-52 chars), title (string, 3-459 chars), abstract (string, 0-12.3k chars), classification_labels (list), numerical_classification_labels (list)
http://arxiv.org/abs/1905.04749v2
A Benchmark Study of Machine Learning Models for Online Fake News Detection
The proliferation of fake news and its propagation on social media has become a major concern due to its ability to create devastating impacts. Different machine learning approaches have been suggested to detect fake news. However, most of those focused on a specific type of news (such as political), which leads us to the question of dataset bias in the models used. In this research, we conducted a benchmark study to assess the performance of different applicable machine learning approaches on three different datasets, of which we accumulated the largest and most diversified one. We explored a number of advanced pre-trained language models for fake news detection along with the traditional and deep learning ones and compared their performances from different aspects for the first time to the best of our knowledge. We find that BERT and similar pre-trained models perform the best for fake news detection, especially with very small datasets. Hence, these models are a significantly better option for languages with limited electronic content, i.e., training data. We also carried out several analyses based on the models' performance, the article's topic, and the article's length, and discussed different lessons learned from them. We believe that this benchmark study will help the research community to explore further and news sites/blogs to select the most appropriate fake news detection method.
[ "Language Models", "Semantic Text Processing", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Responsible & Trustworthy NLP" ]
[ 52, 72, 17, 8, 46, 4 ]
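As a rough illustration of the kind of pre-trained-model fine-tuning the benchmark compares, the sketch below runs one training step of a binary fake-news classifier on top of BERT using the Hugging Face transformers library; the model name, example articles, and label convention (0 = real, 1 = fake) are assumptions, not the authors' setup.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # assumed: 0 = real, 1 = fake

articles = ["City council approves new budget.",            # dummy examples
            "Miracle cure doctors don't want you to see!"]
labels = torch.tensor([0, 1])

batch = tokenizer(articles, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
outputs = model(**batch, labels=labels)   # forward pass computes the loss
outputs.loss.backward()                   # one illustrative gradient step
predictions = outputs.logits.argmax(dim=-1)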
SCOPUS_ID:85099607543
A Benchmark Study on Machine Learning Methods using Several Feature Extraction Techniques for News Genre Detection from Bangla News Articles & Titles
Genre detection from news articles or news titles is a kind of text classification procedure where news articles or titles are categorized among different families. Nowadays, text classification has become a key research field in text mining and natural language understanding because of its several applications, such as search engines, document filtering, keyword extraction, text summarization, etc. Several studies have been conducted for detecting news genres in different languages, but little research has been done for the Bangla language due to the lack of Bangla resources. Moreover, in the few works on the Bangla language, only a small number of news genres have been used for classification, and the datasets have been created from a small number of news sources. In this study, we present a comparative analysis of different traditional machine learning, advanced neural network, and attention-based supervised learning models, extracting several informative features including TF-IDF, category-based word frequency, word embedding, etc. We have implemented these categorization methods for both news article and news title classification against ten mutually exclusive news genres. From our analysis, we have found that the 'Bidirectional LSTM' model with the 'Word2vec' feature extraction technique using the 'Skip-gram' method is the most robust method for both classifications, compared to other models.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
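The skip-gram Word2vec features named in the abstract can be produced with gensim as sketched below; the toy corpus and hyperparameters are placeholders rather than the study's configuration.

from gensim.models import Word2Vec

corpus = [["bangla", "news", "sports", "report"],
          ["politics", "news", "article", "title"]]       # dummy tokenized docs
model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1)      # sg=1 selects the skip-gram method
vector = model.wv["news"]                # 100-dim embedding fed to a BiLSTM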
http://arxiv.org/abs/2211.07980v1
A Benchmark and Dataset for Post-OCR text correction in Sanskrit
Sanskrit is a classical language with about 30 million extant manuscripts fit for digitisation, available in written, printed or scanned-image forms. However, it is still considered to be a low-resource language when it comes to available digital resources. In this work, we release a post-OCR text correction dataset containing around 218,000 sentences, with 1.5 million words, from 30 different books. Texts in Sanskrit are known to be diverse in terms of their linguistic and stylistic usage, since Sanskrit was the 'lingua franca' for discourse in the Indian subcontinent for about 3 millennia. Keeping this in mind, we release a multi-domain dataset, from areas as diverse as astronomy, medicine and mathematics, with some of them about 18 centuries old. Further, we release multiple strong baselines as benchmarks for the task, based on pre-trained Seq2Seq language models. We find that our best-performing model, consisting of byte-level tokenization in conjunction with phonetic encoding (Byt5+SLP1), yields a 23% point increase over the OCR output in terms of word and character error rates. Moreover, we perform extensive experiments in evaluating these models on their performance and analyse common causes of mispredictions both at the graphemic and lexical levels. Our code and dataset are publicly available at https://github.com/ayushbits/pe-ocr-sanskrit.
[ "Visual Data in NLP", "Text Error Correction", "Syntactic Text Processing", "Multimodality" ]
[ 20, 26, 15, 74 ]
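Post-OCR correction is typically scored with word and character error rates; a minimal, generic character error rate (CER) computation is sketched below. It is a plain Levenshtein distance, not the paper's evaluation script.

def edit_distance(ref, hyp):
    # dynamic-programming Levenshtein distance between two sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def cer(reference, hypothesis):
    # character error rate: edits needed per reference character
    return edit_distance(reference, hypothesis) / max(len(reference), 1)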
SCOPUS_ID:85027982130
A Benchmark and Evaluation for Text Extraction from PDF
Extracting the body text from a PDF document is an important but surprisingly difficult task. The reason is that PDF is a layout-based format which specifies the fonts and positions of the individual characters rather than the semantic units of the text (e.g., words or paragraphs) and their role in the document (e.g., body text or caption). There is an abundance of extraction tools, but their quality and the range of their functionality are hard to determine. In this paper, we show how to construct a high-quality benchmark of principally arbitrary size from parallel TeX and PDF data. We construct such a benchmark of 12,098 scientific articles from arXiv.org and make it publicly available. We establish a set of criteria for a clean and independent assessment of the semantic abilities of a given extraction tool. We provide an extensive evaluation of 14 state-of-the-art tools for text extraction from PDF on our benchmark according to our criteria. We include our own method, Icecite, which significantly outperforms all other tools, but is still not perfect. We outline the remaining steps necessary to finally make text extraction from PDF a solved problem.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85104464876
A Benchmark for Analyzing Chart Images
Charts are a compact method of displaying and comparing data. Automatically extracting data from charts is a key step in understanding the intent behind a chart, which could lead to a better understanding of the document itself. To promote the development of methods that automatically decompose and understand these visualizations, the CHART-Infographics organizers hold the Competition on Harvesting Raw Tables from Infographics. In this paper, building on machine learning, image recognition, object detection, keypoint estimation, OCR, and other techniques, we explore and propose methods for almost all tasks and achieve relatively good performance.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/2201.05793v1
A Benchmark for Generalizable and Interpretable Temporal Question Answering over Knowledge Bases
Knowledge Base Question Answering (KBQA) tasks that involve complex reasoning are emerging as an important research direction. However, most existing KBQA datasets focus primarily on generic multi-hop reasoning over explicit facts, largely ignoring other reasoning types such as temporal, spatial, and taxonomic reasoning. In this paper, we present a benchmark dataset for temporal reasoning, TempQA-WD, to encourage research in extending the present approaches to target a more challenging set of complex reasoning tasks. Specifically, our benchmark is a temporal question answering dataset with the following advantages: (a) it is based on Wikidata, which is the most frequently curated, openly available knowledge base, (b) it includes intermediate SPARQL queries to facilitate the evaluation of semantic parsing based approaches for KBQA, and (c) it generalizes to multiple knowledge bases: Freebase and Wikidata. The TempQA-WD dataset is available at https://github.com/IBM/tempqa-wd.
[ "Semantic Text Processing", "Question Answering", "Explainability & Interpretability in NLP", "Knowledge Representation", "Natural Language Interfaces", "Reasoning", "Responsible & Trustworthy NLP" ]
[ 72, 27, 81, 18, 11, 8, 4 ]
http://arxiv.org/abs/2211.15421v1
A Benchmark for Structured Extractions from Complex Documents
Understanding visually-rich business documents to extract structured data and automate business workflows has been receiving attention both in academia and industry. Although recent multi-modal language models have achieved impressive results, we find that existing benchmarks do not reflect the complexity of real documents seen in industry. In this work, we identify the desiderata for a more comprehensive benchmark and propose one we call Visually Rich Document Understanding (VRDU). VRDU contains two datasets that represent several challenges: rich schema including diverse data types as well as nested entities, complex templates including tables and multi-column layouts, and diversity of different layouts (templates) within a single document type. We design few-shot and conventional experiment settings along with a carefully designed matching algorithm to evaluate extraction results. We report the performance of strong baselines and three observations: (1) generalizing to new document templates is very challenging, (2) few-shot performance has a lot of headroom, and (3) models struggle with nested fields such as line-items in an invoice. We plan to open source the benchmark and the evaluation toolkit. We hope this helps the community make progress on these challenging tasks in extracting structured data from visually rich documents.
[ "Low-Resource NLP", "Language Models", "Visual Data in NLP", "Semantic Text Processing", "Structured Data in NLP", "Multimodality", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 80, 52, 20, 72, 50, 74, 4, 3 ]
https://aclanthology.org//2020.nlpbt-1.4/
A Benchmark for Structured Procedural Knowledge Extraction from Cooking Videos
Instructional videos are often used to learn about procedures. Video captioning is one way of automatically collecting such knowledge. However, it provides only an indirect, overall evaluation of multimodal models, with no finer-grained quantitative measure of what they have learned. We propose, instead, a benchmark of structured procedural knowledge extracted from cooking videos. This work is complementary to existing tasks, but requires models to produce interpretable structured knowledge in the form of verb-argument tuples. Our manually annotated open-vocabulary resource includes 356 instructional cooking videos and 15,523 video clip/sentence-level annotations. Our analysis shows that the proposed task is challenging and standard modeling approaches like unsupervised segmentation, semantic role labeling, and visual action detection perform poorly when forced to predict every action of a procedure in a structured form.
[ "Visual Data in NLP", "Multimodality", "Information Extraction & Text Mining" ]
[ 20, 74, 3 ]
SCOPUS_ID:85142007156
A Benchmark for the Use of Topic Models for Text Visualization Tasks
Based on the assumption that semantic relatedness between documents is reflected in the distribution of the vocabulary, topic models are a widely used class of techniques for text analysis tasks. The application of topic models results in concepts, the so-called topics, and a high-dimensional description of the documents. For visualization tasks, they can be projected onto a lower-dimensional space using dimensionality reduction techniques. Though the quality of the resulting point layout mainly depends on the chosen topic model and dimensionality reduction technique, it is unclear which particular combinations are suitable for displaying the semantic relatedness between the documents. In this work, we propose a benchmark comprising various datasets, layout algorithms and their hyperparameters, and quality metrics for conducting an empirical study.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
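A minimal sketch of the pipeline the benchmark evaluates, assuming scikit-learn: fit a topic model, take per-document topic proportions, and project them to 2-D for a point layout. The corpus, topic count, and t-SNE perplexity are illustrative assumptions.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

docs = ["match ends in a late goal", "parliament passes the budget",
        "central bank raises interest rates"]              # dummy documents
counts = CountVectorizer().fit_transform(docs)
topic_props = LatentDirichletAllocation(
    n_components=2, random_state=0).fit_transform(counts)  # docs x topics
layout = TSNE(n_components=2, perplexity=2,
              random_state=0).fit_transform(topic_props)   # 2-D point layout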
SCOPUS_ID:85132753357
A Benchmark of Parsing Vietnamese Publications
In recent decades, digital transformation has received growing attention worldwide, which has driven an explosion of digitized document data. In this paper, we address the problem of parsing publications, in particular, Vietnamese publications. Vietnamese publications are known for their highly variable and diverse layouts, and some characters are visually ambiguous due to accent symbols and derivative characters, which poses many challenges. To this end, we collect the UIT-DODV-Ext dataset: a challenging Vietnamese document image dataset including scientific papers and textbooks with 5,000 fully annotated images. We introduce a general framework to parse Vietnamese publications containing two components: page object detection and caption recognition. We further conduct an extensive benchmark with various state-of-the-art object detection and text recognition methods. Finally, we present a hybrid parser which achieves the top place in the benchmark. Extensive experiments on the UIT-DODV-Ext dataset provide a comprehensive evaluation and insightful analysis.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 39, 47, 74 ]
http://arxiv.org/abs/2011.01615v1
A Benchmark of Rule-Based and Neural Coreference Resolution in Dutch Novels and News
We evaluate a rule-based (Lee et al., 2013) and neural (Lee et al., 2018) coreference system on Dutch datasets of two domains: literary novels and news/Wikipedia text. The results provide insight into the relative strengths of data-driven and knowledge-driven systems, as well as the influence of domain, document length, and annotation schemes. The neural system performs best on news/Wikipedia text, while the rule-based system performs best on literature. The neural system shows weaknesses with limited training data and long documents, while the rule-based system is affected by annotation differences. The code and models used in this paper are available at https://github.com/andreasvc/crac2020
[ "Coreference Resolution", "Information Extraction & Text Mining" ]
[ 13, 3 ]
SCOPUS_ID:85131128251
A Benchmark of Named Entity Recognition Approaches in Historical Documents: Application to 19th Century French Directories
Named entity recognition (NER) is a necessary step in many pipelines targeting historical documents. Indeed, such natural language processing techniques identify which class each text token belongs to, e.g. "person name", "location", "number". Introducing a new public dataset built from 19th century French directories, we first assess how noisy the output of modern, off-the-shelf OCR systems is. Then, we compare modern CNN- and Transformer-based NER techniques which can reasonably be used in the context of historical document analysis. We measure their requirements in terms of training data and the effects of OCR noise on their performance, and show how Transformer-based NER can benefit from unsupervised pre-training and supervised fine-tuning on noisy data. Results can be reproduced using resources available at https://github.com/soduco/paper-ner-bench-das22 and https://zenodo.org/record/6394464.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Named Entity Recognition", "Multimodality" ]
[ 20, 52, 72, 3, 34, 74 ]
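For orientation, a Transformer-based NER tagger of the general kind compared in the paper can be run in a few lines with the Hugging Face pipeline API; the pipeline's default model is an English tagger and the example string is invented, so this is only a stand-in for the French, historical-domain models actually benchmarked.

from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # downloads a default model
entities = ner("Dupont et Cie, 12 rue de Rivoli, Paris")
# each entry carries an entity_group, score, word, and character offsets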
http://arxiv.org/abs/2003.07743v2
A Benchmarking Study of Embedding-based Entity Alignment for Knowledge Graphs
Entity alignment seeks to find entities in different knowledge graphs (KGs) that refer to the same real-world object. Recent advancement in KG embedding impels the advent of embedding-based entity alignment, which encodes entities in a continuous embedding space and measures entity similarities based on the learned embeddings. In this paper, we conduct a comprehensive experimental study of this emerging field. We survey 23 recent embedding-based entity alignment approaches and categorize them based on their techniques and characteristics. We also propose a new KG sampling algorithm, with which we generate a set of dedicated benchmark datasets with various heterogeneity and distributions for a realistic evaluation. We develop an open-source library including 12 representative embedding-based entity alignment approaches, and extensively evaluate these approaches, to understand their strengths and limitations. Additionally, for several directions that have not been explored in current approaches, we perform exploratory experiments and report our preliminary findings for future studies. The benchmark datasets, open-source library and experimental results are all accessible online and will be duly maintained.
[ "Semantic Text Processing", "Structured Data in NLP", "Representation Learning", "Knowledge Representation", "Multimodality" ]
[ 72, 50, 12, 18, 74 ]
http://arxiv.org/abs/2105.03409v1
A Benchmarking on Cloud based Speech-To-Text Services for French Speech and Background Noise Effect
This study presents a large-scale benchmarking of cloud-based Speech-To-Text systems: Google Cloud Speech-To-Text, Microsoft Azure Cognitive Services, Amazon Transcribe, and IBM Watson Speech to Text. For each system, 40,158 clean and noisy speech files, totaling about 101 hours, are tested. The effect of background noise on STT quality is also evaluated with 5 different signal-to-noise ratios from 40 dB to 0 dB. Results showed that Microsoft Azure provided the lowest transcription error rate of 9.09% on clean speech, with high robustness to noisy environments. Google Cloud and Amazon Transcribe gave similar performance, but the latter is very limited for time-constrained usage. Though IBM Watson could work correctly in quiet conditions, it is highly sensitive to noisy speech, which could strongly limit its application in real-life situations.
[ "Text Generation", "Speech & Audio in NLP", "Speech Recognition", "Multimodality" ]
[ 47, 70, 10, 74 ]
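The 40 dB to 0 dB conditions mentioned above amount to mixing noise into clean speech at a target signal-to-noise ratio; a generic numpy sketch of that mixing step (not the authors' preparation code) is shown below with dummy signals.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # scale noise so that 10*log10(p_speech / p_scaled_noise) equals snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

clean = np.random.randn(16000)           # 1 s of dummy audio at 16 kHz
noise = np.random.randn(16000)
noisy_0db = mix_at_snr(clean, noise, 0.0)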
http://arxiv.org/abs/1406.3915v1
A Bengali HMM Based Speech Synthesis System
The paper presents the capability of an HMM-based TTS system to produce Bengali speech. In this synthesis method, trajectories of speech parameters are generated from the trained Hidden Markov Models. A final speech waveform is synthesized from those speech parameters. In our experiments, spectral properties were represented by Mel Cepstrum Coefficients. Both the training and synthesis issues are investigated in this paper using an annotated Bengali speech database. Experimental evaluation shows that the developed text-to-speech system is capable of producing adequately natural speech in terms of intelligibility and intonation for Bengali.
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
https://aclanthology.org//W12-3507/
A Bengali Speech Synthesizer on Android OS
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
SCOPUS_ID:85081554585
A Bengali Text Generation Approach in Context of Abstractive Text Summarization Using RNN
Automatic text summarization is one of the notable research areas of natural language processing. The amount of data is increasing rapidly, and understanding the gist of any text has become a necessity nowadays. The area of text summarization has been developing for many years. Considerable research has already been done with the extractive summarization approach; on the other side, the abstractive summarization approach aims to summarize a text as a human would, so that the machine can provide a summary whose quality approaches that of a human-written summary. Several research developments have already been made for abstractive summarization in the English language. This paper presents a necessary method, text generation, in the context of developing Bengali abstractive text summarization. Text generation helps the machine to understand the pattern of human-written text and then produce output that reads like human-written text. A basic recurrent neural network (RNN) has been applied for this text generation approach: the most applicable and successful RNN, long short-term memory (LSTM). Contextual tokens have been used for better sequence prediction. The proposed method has been developed in the context of making it usable for further development of abstractive text summarization.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85143124474
A Bert-based Joint Model of Intent Recognition and Slot Filling Cross-correlation
Intent recognition and slot filling are two key steps in natural language understanding. In the past, the two steps were often completed separately, and a large number of joint modeling methods have recently demonstrated that the two are closely related and can leverage the shared knowledge between tasks to achieve better performance. Previous studies have focused on multi-task implicit joint modeling or on slot filling tasks relying on information from intent recognition, ignoring that intent recognition and slot filling are interrelated. In this paper, the joint model Bi-Correlation is improved to form the cross-correlation of intent classification and slot filling. It includes two modules, IR2SF and SF2IR, so that the performance of intent recognition and slot filling can be mutually enhanced. Considering the lack of generalization in the case of small datasets, the BERT pre-trained model is used to improve the generalization of the model, thereby improving its performance. Experiments are carried out on two Chinese datasets, CAIS and SMP-ECDT. The experimental results show that the model in this paper performs better than existing models, and the accuracy is significantly improved.
[ "Language Models", "Semantic Text Processing", "Semantic Parsing", "Intent Recognition", "Sentiment Analysis" ]
[ 52, 72, 40, 79, 78 ]
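A minimal PyTorch sketch of the shared-encoder idea behind such joint models is given below: one BERT encoder with a sentence-level intent head and a token-level slot head. The cross-correlation modules (IR2SF/SF2IR) described in the abstract are omitted, and the model name and dimensions are assumptions.

import torch
from torch import nn
from transformers import BertModel

class JointIntentSlot(nn.Module):
    def __init__(self, num_intents, num_slots):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-chinese")
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)  # sentence level
        self.slot_head = nn.Linear(hidden, num_slots)      # token level

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)
        slot_logits = self.slot_head(out.last_hidden_state)
        return intent_logits, slot_logits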
SCOPUS_ID:84859176984
A Bespoked secure framework for an ontology-based data-extraction system
In this study of a bespoke secure framework for an ontology-based data-extraction system, we report on the implementation of an existing generalized framework with alternative technology. The implementation uses natural language processing instead of a heuristic-based method. Heuristic methods are based on assumptions, and these assumptions are often unspecified and, as a consequence, not well understood; for a given secure data extraction problem, the realization of model-based solutions can appear too complicated or too costly to carry out. Heuristic approaches therefore need to be combined with a meticulous analysis aimed at checking the extent to which the approach formalizes rational agency preference structures and/or data user behaviors. Our secure data extraction system will allow new algorithms and ideas to be incorporated into a data extraction system. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Ontologies can achieve a high degree of accuracy and privacy in a data extraction system while maintaining resiliency in the face of document changes. Ontologies do not, however, diminish the complexity of a data-extraction system. As research in the field progresses, the need for a modular data-extraction system that decouples the associated processes continues to grow.
[ "Knowledge Representation", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 18, 72, 3 ]
SCOPUS_ID:0027576803
A Best-First Language Processing Model Integrating the Unification Grammar and Markov Language Model for Speech Recognition Applications
In speech recognition applications, the task of a language processing model is to find the most promising sentence hypothesis for a given word lattice obtained from an acoustic signal processor. Conventionally, either grammatical or statistical approaches can be used for such problems. In this paper a new language processing model is proposed, in which the grammatical approach of unification grammar and the statistical approach of the Markov language model are properly integrated in a word lattice chart parsing algorithm with different best-first parsing strategies. This language processing model has been successfully implemented in experiments on Mandarin speech recognition, although the present model is language independent. Test results show that significant improvements in both recognition accuracy and computation speed can be achieved. For example, considering 200 test Chinese sentences with 60 Chinese unification grammar rules and a Markov Chinese language model trained on primary school Chinese textbooks, the proposed language processing model with the parsing strategy based on the length/probability selection principle and the first-3 decision rule achieves a good compromise between accuracy and speed, i.e., a correct rate of 93.8% and 5 s per sentence on an IBM PC/AT, as compared with 73.8% and 25 s using the unification grammar alone, and 82.2% and 3 s using the Markov language model alone. This high performance is due to the effective rejection of noisy word hypothesis interferences: the unification-based grammatical analysis eliminates all illegal combinations, while the Markovian probabilities of constituents combined with considerations of constituent length indicate the correct direction of processing. Therefore, the global structural synthesis capabilities of the unification grammar and the local relation estimation capabilities of the Markov language model are properly integrated.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
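The Markov language model component amounts to scoring a sentence hypothesis by the product of (smoothed) bigram probabilities; a toy illustration follows, with add-one smoothing standing in for whatever estimation the paper actually used.

from collections import Counter

def train_bigram(corpus):
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams

def score(hypothesis, unigrams, bigrams, vocab_size):
    # product of add-one-smoothed bigram probabilities; higher is more plausible
    p = 1.0
    toks = ["<s>"] + hypothesis
    for prev, cur in zip(toks, toks[1:]):
        p *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
    return p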
http://arxiv.org/abs/2212.09052v1
A Better Choice: Entire-space Datasets for Aspect Sentiment Triplet Extraction
Aspect sentiment triplet extraction (ASTE) aims to extract aspect term, sentiment and opinion term triplets from sentences. Since the initial datasets used to evaluate models on ASTE had flaws, several studies later corrected the initial datasets and released new versions of the datasets independently. As a result, different studies select different versions of datasets to evaluate their methods, which makes ASTE-related works hard to follow. In this paper, we analyze the relation between different versions of datasets and suggest that the entire-space version should be used for ASTE. Besides the sentences containing triplets and the triplets in the sentences, the entire-space version additionally includes the sentences without triplets and the aspect terms which do not belong to any triplets. Hence, the entire-space version is consistent with real-world scenarios and evaluating models on the entire-space version can better reflect the models' performance in real-world scenarios. In addition, experimental results show that evaluating models on non-entire-space datasets inflates the performance of existing models and models trained on the entire-space version can obtain better performance.
[ "Information Extraction & Text Mining", "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 3, 23, 78 ]
SCOPUS_ID:85121910195
A Better Multiway Attention Framework for Fine-Tuning
Powerful pre-trained models have received widespread attention. However, little attention has been devoted to solving downstream natural language understanding (NLU) tasks in the fine-tuning stage. In this paper, we propose a novel architecture for the fine-tuning stage named the multiway attention framework (MA). It takes as input a concatenated feature of the first and last layers of a BERT-style model (e.g., BERT, ALBERT) and a mean-pooling feature of the last BERT-style model layer. It then applies four different attention mechanisms to the input features to learn a sentence embedding at the phrase level and semantic level. Moreover, it aggregates the output of multiway attention and sends this result to self-attention to learn the best combination scheme of multiway attention for the target task. Experimental results on the GLUE, SQuAD and RACE benchmark datasets show that our approach can obtain significant performance improvements.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
http://arxiv.org/abs/2005.08271v2
A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer
Dense video captioning aims to localize and describe important events in untrimmed videos. Existing methods mainly tackle this task by exploiting only visual features, while completely neglecting the audio track. Only a few prior works have utilized both modalities, yet they show poor results or demonstrate the importance on a dataset with a specific domain. In this paper, we introduce Bi-modal Transformer which generalizes the Transformer architecture for a bi-modal input. We show the effectiveness of the proposed model with audio and visual modalities on the dense video captioning task, yet the module is capable of digesting any two modalities in a sequence-to-sequence task. We also show that the pre-trained bi-modal encoder as a part of the bi-modal transformer can be used as a feature extractor for a simple proposal generation module. The performance is demonstrated on a challenging ActivityNet Captions dataset where our model achieves outstanding performance. The code is available: v-iashin.github.io/bmt
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Speech & Audio in NLP", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 70, 47, 74 ]
http://arxiv.org/abs/1909.02218v1
A Better Way to Attend: Attention with Trees for Video Question Answering
We propose a new attention model for video question answering. The main idea of attention models is to locate the most informative parts of the visual data. Attention mechanisms are quite popular these days. However, most existing visual attention mechanisms regard the question as a whole. They ignore the word-level semantics, where each word can receive different attention and some words need no attention. Neither do they consider the semantic structure of the sentences. Although the Extended Soft Attention (E-SA) model for video question answering leverages word-level attention, it performs poorly on long question sentences. In this paper, we propose the heterogeneous tree-structured memory network (HTreeMN) for video question answering. Our proposed approach is based upon the syntax parse trees of the question sentences. The HTreeMN treats the words differently: the 'visual' words are processed with an attention module while the 'verbal' ones are not. It also utilizes the semantic structure of the sentences by combining the neighbors based on the recursive structure of the parse trees. The understandings of the words and the videos are propagated and merged from leaves to the root. Furthermore, we build a hierarchical attention mechanism to distill the attended features. We evaluate our approach on two datasets. The experimental results show the superiority of our HTreeMN model over other attention models, especially on complex questions. Our code is available at https://github.com/ZJULearning/TreeAttention
[ "Visual Data in NLP", "Natural Language Interfaces", "Question Answering", "Multimodality" ]
[ 20, 11, 27, 74 ]
SCOPUS_ID:85125950349
A Bi-Channel Math Word Problem Solver With Understanding and Reasoning
This paper addresses the problem of solving arithmetic word problems that are stated in Chinese and involve implicit commonsense quantity relations. The addition of commonsense quantity relations is a critical step in building a machine solver for arithmetic word problems. This paper proposes a channel-based method to overcome the challenges of the problem and forms a Bi-channel model. The first model is the Syntax-Semantic model that converts a math problem into a set of equations. The second model adds the set of quantity equations from commonsense into the equation set. The last component converts the machine solution into a humanoid solution. The experimental results on 286 arithmetic word problems from textbooks show that the proposed method has good potential.
[ "Commonsense Reasoning", "Reasoning", "Numerical Reasoning" ]
[ 62, 8, 5 ]
SCOPUS_ID:85131146946
A Bi-LSTM and GRU Hybrid Neural Network with BERT Feature Extraction for Amazon Textual Review Analysis
Nowadays, businesses move towards digital platforms for their product promotion and to improve their overall profit margin. Customer reviews determine the purchase decision for products in the e-commerce system in this digital world, and reviewing products before buying has become a common scenario. It helps buyers purchase quality products at affordable prices. On this basis, it is necessary to implement deep learning techniques to analyze the sentiment of customer posts based on their product ratings. The study proposes an enhanced BERT algorithm for feature extraction in sentiment analysis on a large dataset and a hybridized deep Bi-LSTM-GRU neural network for the classification process. Amazon product customer review datasets are employed for the analysis: reviews about mobile phones are retrieved from Amazon for the sentiment analysis. Initially, the data was preprocessed to increase the accuracy and performance of the classifier. Further, feature extraction is carried out with the BERT (Bidirectional Encoder Representations from Transformers) algorithm to reduce the large volume of data to accurate features, since it is difficult to evaluate review sentiments without efficient classification. After this process, the study analyses the performance of the deep Bi-LSTM-GRU neural network method against existing methods. Finally, the study concluded that the proposed algorithm achieved 97.87, 98.36, 98.89, and 98.47, respectively, on the accuracy, frequency, precision, and recall measures. These performance measures are higher than those of the existing algorithms developed in previous studies. The study achieved higher accuracy and efficiency through the proposed deep Bi-LSTM-GRU neural network method in sentiment analysis on mobile phone reviews in the Amazon e-commerce system.
[ "Language Models", "Information Extraction & Text Mining", "Semantic Text Processing", "Information Retrieval", "Sentiment Analysis", "Responsible & Trustworthy NLP", "Text Classification", "Green & Sustainable NLP" ]
[ 52, 3, 72, 24, 78, 4, 36, 68 ]
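A hedged Keras sketch of the hybrid Bi-LSTM-GRU classifier head is shown below, operating on pre-extracted BERT token embeddings (sequence length 128, dimension 768); the layer sizes and binary sentiment output are assumptions rather than the paper's exact configuration.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 768)),          # 128 BERT token vectors per review
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.GRU(32),
    layers.Dense(1, activation="sigmoid"),   # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])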
http://arxiv.org/abs/1608.07720v1
A Bi-LSTM-RNN Model for Relation Classification Using Low-Cost Sequence Features
Relation classification is associated with many potential applications in the artificial intelligence area. Recent approaches usually leverage neural networks based on structure features such as syntactic or dependency features to solve this problem. However, high-cost structure features make such approaches inconvenient to use directly. In addition, structure features are probably domain-dependent. Therefore, this paper proposes a bi-directional long-short-term-memory recurrent-neural-network (Bi-LSTM-RNN) model based on low-cost sequence features to address relation classification. This model divides a sentence or text segment into five parts, namely two target entities and their three contexts. It learns the representations of entities and their contexts, and uses them to classify relations. We evaluate our model on two standard benchmark datasets in different domains, namely SemEval-2010 Task 8 and BioNLP-ST 2016 Task BB3. On the former dataset, our model achieves performance comparable to other models using sequence features. On the latter dataset, our model obtains the third best results compared with other models in the official evaluation. Moreover, we find that the context between two target entities plays the most important role in relation classification. Furthermore, statistical experiments show that the context between two target entities can be used as an approximate replacement of the shortest dependency path when dependency parsing is not used.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85135049491
A Bi-level Individualized Adaptive Learning Recommendation System Based on Topic Modeling
Adaptive learning offers real attention to individual students' differences and fits the different needs of students. This study proposes a bi-level recommendation system with topic models, gradient descent, and a content-based filtering algorithm. In the first level, the learning materials are analyzed by a topic model, and topic proportions for each short item in each learning material are yielded as representation features. The second level contains a measurement component and a recommendation strategy component, which employ gradient descent and a content-based filtering algorithm to analyze personal profile vectors and make an individualized recommendation. Empirical data consisting of cumulative assessments was used as a demonstration of the recommendation process. Results suggested that the distribution of the estimated values in the person profile vectors was related to the ability estimates from the Rasch model, and that students with similar profile vectors could be recommended the same learning material.
[ "Language Models", "Topic Modeling", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 9, 72, 3 ]
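The second-level content-based filtering step can be pictured as comparing a student profile vector with the topic proportions of each learning material and recommending the closest one; the vectors below are made-up placeholders.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

material_topics = np.array([[0.7, 0.2, 0.1],    # material A topic proportions
                            [0.1, 0.8, 0.1],    # material B
                            [0.3, 0.3, 0.4]])   # material C
student_profile = np.array([[0.2, 0.7, 0.1]])

scores = cosine_similarity(student_profile, material_topics)[0]
recommended = int(np.argmax(scores))            # -> material B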
SCOPUS_ID:85137266885
A Bi-level representation learning model for medical visual question answering
Medical Visual Question Answering (VQA) targets answering questions related to given medical images and contains tremendous potential for healthcare services. However, research on medical VQA still faces challenges, particularly on how to learn a fine-grained multimodal semantic representation from a relatively small volume of data resources for answer prediction. Moreover, the long-tailed label distribution of medical VQA data frequently results in poor performance of models. To this end, we propose a novel bi-level representation learning model with two reasoning modules to learn bi-level representations for the medical VQA task. One is sentence-level reasoning to learn sentence-level semantic representations from multimodal input. The other is token-level reasoning that employs an attention mechanism to generate a multimodal contextual vector by fusing image features and word embeddings. The contextual vector is used to filter irrelevant semantic representations from sentence-level reasoning to generate a fine-grained multimodal representation. Furthermore, a label-distribution-smooth margin loss is proposed to minimize the generalization error bound on long-tailed distribution datasets by modifying the margin bound of different labels in the training set. On the standard VQA-Rad dataset and PathVQA dataset, the proposed model achieves 0.7605 and 0.5434 on accuracy, and 0.7741 and 0.5288 on F1-score, respectively, outperforming a set of state-of-the-art baseline models.
[ "Visual Data in NLP", "Semantic Text Processing", "Question Answering", "Representation Learning", "Natural Language Interfaces", "Reasoning", "Multimodality" ]
[ 20, 72, 27, 12, 11, 8, 74 ]
http://arxiv.org/abs/1812.10235v1
A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection and Slot Filling
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. Multiple deep learning based models have demonstrated good results on these tasks. The most effective algorithms are based on the structures of sequence to sequence models (or "encoder-decoder" models), and generate the intents and semantic tags either using separate models or a joint model. Most of the previous studies, however, either treat intent detection and slot filling as two separate parallel tasks, or use a sequence to sequence model to generate both semantic tags and intent. Most of these approaches use one (joint) NN based model (including encoder-decoder structures) to model the two tasks, and hence may not fully take advantage of the cross-impact between them. In this paper, new Bi-model based RNN semantic frame parsing network structures are designed to perform the intent detection and slot filling tasks jointly, by considering their cross-impact on each other using two correlated bidirectional LSTMs (BLSTM). Our Bi-model structure with a decoder achieves state-of-the-art results on the benchmark ATIS data, with about 0.5% intent accuracy improvement and 0.9% slot filling improvement.
[ "Language Models", "Semantic Text Processing", "Semantic Parsing", "Intent Recognition", "Sentiment Analysis" ]
[ 52, 72, 40, 79, 78 ]
SCOPUS_ID:85144059396
A Bi-party Engaged Modeling Framework for Renewable Power Predictions with Privacy-preserving
This paper presents a pioneering study in developing data-driven models for predicting the future renewable power output sequence using numerical weather predictions of multiple sites without breaching data privacy. A novel bi-party engaged data-driven modeling framework (BEDMF) is developed to enable efficiently learning local and global latent features serving as decentralized data for data-driven modeling with privacy preservation. The BEDMF contains two stages, pretraining and fine-tuning. At the pretraining stage of the BEDMF, local latent features are learned via local models and then aggregated to produce the global latent feature via a global model. At the fine-tuning stage, local latent features are learned using local data and the global latent feature from the previous iteration. The proposed framework enables capturing spatial-temporal patterns among multiple sites to further benefit modeling in renewable power prediction tasks. Meanwhile, the framework preserves data privacy by isolating data locally in the clients. To verify the advantage of the BEDMF, a comprehensive computational study is conducted to benchmark it against well-known baselines. Results show that the BEDMF achieves at least 3% improvement on average.
[ "Language Models", "Semantic Text Processing", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 52, 72, 17, 4 ]
SCOPUS_ID:85146199171
A Bi-recursive Auto-encoders for Learning Semantic Word Embedding
The meaning of a word depends heavily on the context in which it is embedded. Deep neural networks have recently recorded great success in representing words' meanings. Among them, auto-encoder based models have proven their robustness in representing the internal structure of several kinds of data. Thus, in this paper, we present a novel deep model to represent word meanings using auto-encoders, considering the left/right contexts around the word of interest. Our proposal, referred to as Bi-Recursive Auto-Encoders (Bi-RAE), consists of modeling the meaning of a word as an evolved vector and learning its semantic features over its set of contexts.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
SCOPUS_ID:85145355335
A BiLSTM-CRF Based Approach to Word Segmentation in Chinese
This paper proposes an approach for word segmentation in Chinese. The word segmentation model in this paper combines Bi-directional Long Short-Term Memory (BiLSTM) and Conditional Random Fields (CRF), and proposes a four-state word segmentation model named DSZM, so that the model can not only consider the correlation between the front and rear of the sequence, like a CRF, but also have the feature extraction and fitting capabilities of a BiLSTM.
[ "Language Models", "Text Segmentation", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 52, 21, 72, 15 ]
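Treating segmentation as four-state sequence tagging means every character receives one of four labels; the sketch below uses the common single/begin/middle/end (BMES) convention as a stand-in, since the exact DSZM state names of the paper may differ.

def words_to_tags(words):
    # map a segmented sentence to one tag per character
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")                       # single-character word
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

print(words_to_tags(["我", "喜欢", "自然语言"]))
# ['S', 'B', 'E', 'B', 'M', 'M', 'E']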
SCOPUS_ID:85116260167
A BiLSTM-CRF Entity Type Tagger for Question Answering System
Question answering over linked data (QALD) has been a very important research field in natural language processing (NLP). The process of detecting useful words and assigning them the right entity types is crucial to the performance of QALD systems. Although entity-type taggers have achieved good results using probabilistic graphical models such as MEMM and CRF, the design and selection of features may pose limitations. Owing to the popularity of deep learning architectures, many studies have employed the Recurrent Neural Network (RNN) framework and achieved state-of-the-art performance in NLP. Therefore, we choose to use BiLSTM-CRF in the design of our entity-type tagger. It can be seen from the experimental results that the proposed BiLSTM-CRF model outperformed other probabilistic graphical models, which also led to the best performance of the overall question answering system compared with other competitor systems.
[ "Language Models", "Semantic Text Processing", "Structured Data in NLP", "Question Answering", "Syntactic Text Processing", "Natural Language Interfaces", "Tagging", "Multimodality" ]
[ 52, 72, 50, 27, 15, 11, 63, 74 ]
SCOPUS_ID:85079220701
A BiLSTM-based system for cross-lingual pronoun prediction
We describe the Uppsala system for the 2017 DiscoMT shared task on cross-lingual pronoun prediction. The system is based on a lower layer of BiLSTMs reading the source and target sentences respectively. Classification is based on the BiLSTM representation of the source and target positions for the pronouns. In addition we enrich our system with dependency representations from an external parser and character representations of the source sentence. We show that these additions perform well for German and Spanish as source languages. Our system is competitive and is in first or second place for all language pairs.
[ "Language Models", "Semantic Text Processing", "Cross-Lingual Transfer", "Multilinguality" ]
[ 52, 72, 19, 0 ]
SCOPUS_ID:85126998554
A Biaffine Attention-Based Approach for Event Factor Extraction
Event extraction is an important task in certain professional domains. CCKS 2021 held a communication-domain event extraction benchmark, and we proposed an approach with the biaffine attention mechanism to complete the task. The solution combines state-of-the-art BERT-like base models and the biaffine attention mechanism to build a two-stage model: one stage for event trigger extraction and another for event role extraction. Besides, we apply several strategies and ensemble multiple models to obtain the final predictions. Our approach performs well on the competition dataset, with an F1-score of 0.8033, and takes first place on the leaderboard.
[ "Event Extraction", "Information Extraction & Text Mining" ]
[ 31, 3 ]
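A generic biaffine scoring layer of the kind referenced, not the authors' exact module, is sketched below: it scores every (head, tail) token pair per label, which is how spans such as event arguments are typically located. Dimensions are illustrative.

import torch
from torch import nn

class Biaffine(nn.Module):
    def __init__(self, dim, num_labels):
        super().__init__()
        self.U = nn.Parameter(torch.randn(num_labels, dim + 1, dim + 1))

    def forward(self, head, tail):
        # head, tail: (batch, seq_len, dim); append a constant bias feature
        ones = head.new_ones(*head.shape[:-1], 1)
        h = torch.cat([head, ones], dim=-1)
        t = torch.cat([tail, ones], dim=-1)
        # pairwise scores of shape (batch, labels, seq_len, seq_len)
        return torch.einsum("bxi,lij,byj->blxy", h, self.U, t)

scores = Biaffine(dim=768, num_labels=4)(torch.randn(2, 10, 768),
                                         torch.randn(2, 10, 768))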
SCOPUS_ID:85143058609
A Biased Random-key Genetic Algorithm for Extractive Single-document Summarisation
Extractive text summarization has been dealt with by several metaheuristics that have proved their efficiency. In those works, the feasibility of solutions has mostly been guaranteed through operators whose role is to check and/or correct infeasible solutions. To reduce the complexity of the task, this work proposes a Biased Random-Key Genetic Algorithm with a newly proposed decoder, adapted to extractive single-document summarisation. We have tested the performance of our approach on two standard datasets, DUC-2001 and DUC-2002, using the ROUGE-1 and ROUGE-2 metrics. The results are very promising and show that our approach outperforms other reference methods, coming first out of 14 algorithms.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
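In a random-key GA each candidate summary is a vector of keys that a decoder turns into a sentence selection; a generic decoder (not the newly proposed one from the paper) simply ranks sentences by key and adds them greedily under a word budget.

def decode(keys, sentences, max_words=100):
    # rank sentences by their random keys, then fill the budget greedily
    order = sorted(range(len(keys)), key=lambda i: keys[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        n = len(sentences[i].split())
        if used + n <= max_words:
            chosen.append(i)
            used += n
    return [sentences[i] for i in sorted(chosen)]   # keep document order

summary = decode([0.91, 0.12, 0.78],
                 ["First sentence here.", "Second one.", "Third sentence."],
                 max_words=10)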
SCOPUS_ID:85121767904
A Bibliometric Analysis of COVID-19 Vaccines and Sentiment Analysis
Recent statistical and social studies have shown that social media platforms such as Instagram, Facebook, and Twitter contain valuable data that influence human behaviors. This data can be used to track, fight, and control the spread of COVID-19 and is an excellent asset for analyzing and understanding people's sentiments. Current levels of willingness to receive a COVID-19 vaccination are still insufficient to achieve immunity standards as stipulated by the World Health Organization (WHO). The present study employs bibliometric analysis to uncover trends and research into sentiment analysis and COVID-19 vaccination. A range of analyses is conducted using the open-source tool VOSviewer and the Scopus database from 2020-2021 to acquire deeper insight and evaluate current research trends on COVID-19 vaccines. The quantitative methodology used generates various bibliometric network visualizations and trends as a function of publication metrics such as citations, geographical attributes, journal publications, and research institutions. Results of the network visualization revealed that understanding the state of the art in applying sentiment analysis to the COVID-19 pandemic is crucial for local government health agencies and healthcare providers to help neutralize the infodemic and improve vaccine acceptance.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85103676217
A Bibliometric Analysis of Distributed Incremental Clustering on Images
Unstructured information is continuously irregular, and clustering streaming information from such a sequence is tedious because it lacks labels and accumulates over time. This is possible using incremental clustering algorithms that use previously learned information to accommodate new data and avoid retraining. This paper therefore seeks to understand the status of "Distributed Incremental Clustering" on images with text and numerical values, its limitations, scope, and other details, in order to devise a better algorithm in the future. To further enhance the analysis, we have also included a methodology which can be used to perform clustering on images or documents based on their content.
[ "Visual Data in NLP", "Multimodality", "Information Extraction & Text Mining", "Text Clustering" ]
[ 20, 74, 3, 29 ]
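Incremental clustering of the kind surveyed updates a model batch by batch instead of retraining from scratch; a scikit-learn sketch with random stand-in feature vectors is shown below.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

model = MiniBatchKMeans(n_clusters=3, random_state=0)
for _ in range(5):                       # five arriving data batches
    batch = np.random.rand(100, 16)      # 100 new samples, 16 features each
    model.partial_fit(batch)             # update the clusters incrementally
labels = model.predict(np.random.rand(10, 16))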
SCOPUS_ID:85149502500
A Bibliometric Analysis of Machine Translation Post-editing from 2012 to 2021
Machine translation post-editing (MTPE) has gained a lot of attention lately. This paper conducted a bibliometric analysis of 270 publications on MTPE retrieved from the core database of Web of Science in the recent decade from 2012 to 2021, with the aid of the literature analysis software VOSviewer. By means of keyword co-occurrence and clustering of literature co-citation, this paper reviews the distribution of annual publications, disciplines, institutions and authors, the citation structure of MTPE, etc. Meanwhile, future trends related to the study of MTPE are predicted so as to provide reference and inspiration for further research. The research findings are as follows: the number of MTPE publications has fluctuated in the past decade, but the overall trend is increasing; MTPE studies are mainly distributed in linguistics and computer science; Dublin City University in Ireland is currently the most productive institution in MTPE research, and Joss Moorkens from that university plays a critical role in promoting cooperation between scholars in this field; Spain is currently the most productive country in MTPE, but Ireland is the most cited country in the literature. From the analysis of keywords, the paper concludes that most current MTPE articles focus on the key concepts of 'quality' and 'effort', while in the future, 'NMT', 'translation quality assessment' and 'MTPE skills and training' will be the research trends.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85140917225
A Bibliometric Review of Soft Computing for Recommender Systems and Sentiment Analysis
Soft computing, which focuses on approximate models and provides solutions to complicated real-life issues, has gained increasing momentum in application-specific domains, such as sentiment analysis and recommender systems, to emulate cognitive processes behind decision-making. In this work, bibliometrics and structural topic modeling (STM) were adopted to analyze the text contents of research articles concerning soft computing for sentiment analysis and recommender systems. Results indicated that this research field had experienced a dramatic increase in both quantity and quality as measured by scientific output and their received citations. Using STM, we identified 17 research topics frequently discussed within the analyzed articles. The analysis of annual topic prevalence indicated a shift in research foci from recommender applications to sentiment analysis and a growing interest in soft computing. This study served as a guideline for those seeking to contribute to research on soft computing for sentiment analysis and recommender systems. We also made methodological contributions by combining the leading-edge text mining algorithms to make the time-honored bibliometrics adaptive to the analysis of large quantities of unstructured texts beyond structured publication data statistics.
[ "Topic Modeling", "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 9, 3, 78 ]
SCOPUS_ID:85136782321
A Bibliometric Review of the Mathematics Journal
In this study, we conduct a bibliometric review of the Mathematics journal to map its thematic structure, and to identify major research trends for future research to build on. Our review focuses primarily on the bibliometric clusters derived from an application of a bibliographic coupling algorithm and offers insights into how studies included in the review sample relate to one another to form coherent research streams. We combine this analysis with keyword frequency and topic modeling analyses to reveal the discourse that is taking place in the journal more recently. We believe that a systematic/computer-assisted review of the Mathematics journal can open a path for new developments and discoveries in research and help editors assess the performance and historic evolution of the journal and predict future developments. In so doing, the findings should advance our cumulative understanding in those areas consistent with the scope of the Mathematics journal, such as applied mathematics, analytics, and computational sciences.
[ "Topic Modeling", "Reasoning", "Numerical Reasoning", "Information Extraction & Text Mining" ]
[ 9, 8, 5, 3 ]
SCOPUS_ID:85142822650
A Bibliometric Review of Methods and Algorithms for Generating Corpora for Learning Vector Word Embeddings
Natural Language Processing (NLP) problems are among the hardest Machine Learning (ML) problems due to the complex nature of human language. The introduction of word embeddings improved the performance of ML models on various NLP tasks such as text classification, sentiment analysis, machine translation, etc. Word embeddings are real-valued vector representations of words in a specific vector space. Producing quality word embeddings that are then used as input to downstream NLP tasks is important for obtaining good performance. To accomplish this, corpora of sufficient size are needed. Corpora may be formed in a multitude of ways, including text that was originally electronic, spoken language transcripts, optical character recognition, and synthetically producing text from an available dataset. The study provides the most recent bibliometric analysis on the topic of corpora generation for learning word vector embeddings. The analysis is based on publication data from 2006 to 2022 retrieved from the Scopus scientific database. A descriptive analysis method has been employed to obtain statistical characteristics of the publications in the research area. The systematic analysis results show the field's evolution over time and highlight influential contributions to the field. It is believed that compiled bibliometric reviews could help researchers gain knowledge of the general state of scientific knowledge, its descriptive features, patterns, and insights to design their studies systematically.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:85098583158
A Bibliometric Survey on Cognitive Document Processing
Heterogeneous and voluminous unstructured data is produced from various sources like emails, social media tweets, reviews, videos, audio, images, PDFs, scanned documents, etc. Organizations need to store this wide range of unstructured data for longer periods so that they can examine the information more deeply to make better decisions and extract useful insights. Manual processing of such unstructured data is always a challenging, time-consuming, and expensive task for any organization. Automating unstructured document processing using Optical Character Recognition (OCR) and Robotic Process Automation (RPA) has limitations, as those techniques are driven by rules or templates: templates or rules must be defined for every new input, which limits the use of rule- or template-based techniques for unstructured document processing. These limitations demand the development of a tool that can process unstructured documents using Artificial Intelligence techniques. This bibliometric survey on Cognitive Document Processing reveals these facts about unstructured data processing challenges. The survey is performed on the Scopus database's scientific documents. Various tools such as Microsoft Excel, Sciencescape, VOSviewer, Leximancer, and Gephi (for drawing network data analysis diagrams) are used. The study revealed that the largest number of publications on Cognitive Document Processing have appeared very recently. It is observed that universities/institutions in India are leading in research studies focusing on this topic.
[ "Visual Data in NLP", "Structured Data in NLP", "Multimodality" ]
[ 20, 50, 74 ]
SCOPUS_ID:85119429385
A Bibliometric and Sentiment Analysis of CARV and MCPC Conferences in the 21st Century: Towards Sustainable Customization
This opening paper of the CARV/MCPC 2021 book of proceedings presents a study of papers published within the series of Changeable, Agile, Reconfigurable and Virtual Conferences (CARV) and Mass Customization & Personalization Conference (MCPC). In total, 398 papers are included from the three most recent MCPC conferences and the four most recent CARV conferences. In addition, 119 papers from the CARV/MCPC 2021 conference are included as well. Bibliometric analyses are presented, highlighting the most cited papers and authors, the most productive authors, and recurrence of authors across conference years. In addition, a sentiment analysis highlights trends in research, applying text mining techniques on paper titles, keywords, and abstracts. Finally, past trends are compared to trends found in papers published in the joint CARV/MCPC 2021 conference proceedings, which highlights future prominent research areas and new emerging topics relevant to the CARV and MCPC communities and future conferences.
[ "Responsible & Trustworthy NLP", "Sentiment Analysis", "Green & Sustainable NLP" ]
[ 4, 78, 68 ]
SCOPUS_ID:85092150955
A Bichannel Transformer with Context Encoding for Document-Driven Conversation Generation in Social Media
Along with the development of social media on the internet, dialogue systems are becoming more and more intelligent to meet users' needs for communication, emotion, and social intercourse. Previous studies usually use sequence-to-sequence learning with recurrent neural networks for response generation. However, recurrent-based learning models heavily suffer from the problem of long-distance dependencies in sequences. Moreover, some models neglect crucial information in the dialogue contexts, which leads to uninformative and inflexible responses. To address these issues, we present a bichannel transformer with context encoding (BCTCE) for document-driven conversation. This conversational generator consists of a context encoder, an utterance encoder, and a decoder with attention mechanism. The encoders aim to learn the distributed representation of input texts. The multihop attention mechanism is used in BCTCE to capture the interaction between documents and dialogues. We evaluate the proposed BCTCE by both automatic evaluation and human judgment. The experimental results on the dataset CMU_DoG indicate that the proposed model yields significant improvements over the state-of-the-art baselines on most of the evaluation metrics, and the generated responses of BCTCE are more informative and more relevant to dialogues than baselines.
[ "Language Models", "Semantic Text Processing", "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 14, 11, 47, 38 ]
https://aclanthology.org//2000.iwpt-1.32/
A Bidirectional Bottom-up Parser for TAG
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
SCOPUS_ID:85089306946
A Bidirectional Iterative Algorithm for Nested Named Entity Recognition
Nested named entity recognition (NER) is a special case of structured prediction in which annotated sequences can be contained inside each other. It is a challenging and significant problem in natural language processing. In this paper, we propose a novel framework for nested named entity recognition tasks. Our approach is based on a deep learning model which can be called in an iterative way, expanding the set of predicted entity mentions with each subsequent iteration. The proposed framework combines two such models trained to identify named entities in different directions: from general to specific (outside-in), and from specific to general (inside-out). The predictions of both models are then aggregated by a selection policy. We propose and evaluate several selection policies which can be used with our algorithm. Our method does not impose any restrictions on the length of entity mentions, number of entity classes, depth, or structure of the predicted output. The framework has been validated experimentally on four well-known nested named entity recognition datasets: GENIA, NNE, PolEval, and GermEval. The datasets differ in terms of domain (biomedical, news, mixed), language (English, Polish, German), and the structure of nesting (simple, complex). Through extensive tests, we prove that the approach we have proposed outperforms existing methods for nested named entity recognition.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85030155599
A Bidirectional LSTM Approach with Word Embeddings for Sentence Boundary Detection
Recovering sentence boundaries from speech and its transcripts is essential for readability and downstream speech and language processing tasks. In this paper, we propose to use deep recurrent neural network to detect sentence boundaries in broadcast news by modeling rich prosodic and lexical features extracted at each inter-word position. We introduce an unsupervised word embedding to represent word identity, learned from the Continuous Bag-of-Words (CBOW) model, into sentence boundary detection task as an effective feature. The word embedding contains syntactic information that is essential for this detection task. In addition, we propose another two low-dimensional word embeddings derived from a neural network that includes class and context information to represent words by supervised learning: one is extracted from the projection layer, the other one comes from the last hidden layer. Furthermore, we propose a deep bidirectional Long Short Term Memory (LSTM) based architecture with Viterbi decoding for sentence boundary detection. Under this framework, the long-range dependencies of prosodic and lexical information in temporal sequences are modeled effectively. Compared with previous state-of-the-art DNN-CRF method, the proposed LSTM approach reduces 24.8% and 9.8% relative NIST SU error in reference and recognition transcripts, respectively.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Representation Learning", "Multimodality" ]
[ 52, 72, 70, 12, 74 ]
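The record above describes a bidirectional LSTM tagger over per-position features. A minimal Python sketch of that kind of model is given below, assuming PyTorch, a plain word-embedding input (standing in for the CBOW-derived features), and illustrative layer sizes that are not taken from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMBoundaryTagger(nn.Module):
    """Tag each inter-word position as sentence boundary / non-boundary."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        # Word embeddings; in practice these could be initialized from a CBOW model.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A bidirectional LSTM models the left and right context of each position.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, word_ids):
        x = self.embed(word_ids)   # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)        # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)         # per-position boundary logits

# Toy usage: score one sequence of 6 word ids.
model = BiLSTMBoundaryTagger(vocab_size=5000)
logits = model(torch.randint(0, 5000, (1, 6)))
print(logits.shape)  # torch.Size([1, 6, 2])
```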
http://arxiv.org/abs/2008.13339v3
A Bidirectional Tree Tagging Scheme for Joint Medical Relation Extraction
Joint medical relation extraction refers to extracting triples, composed of entities and relations, from medical text with a single model. One solution is to convert this task into a sequential tagging task. However, in existing works, methods that represent and tag the triples in a linear way fail to handle overlapping triples, and methods that organize the triples as a graph face the challenge of large computational effort. In this paper, inspired by the tree-like relation structures in medical text, we propose a novel scheme called Bidirectional Tree Tagging (BiTT) that forms the medical relation triples into two binary trees and converts the trees into a word-level tag sequence. Based on the BiTT scheme, we develop a joint relation extraction model to predict the BiTT tags and further extract medical triples efficiently. Our model outperforms the best baselines by 2.0% and 2.5% in F1 score on two medical datasets. Moreover, models with our BiTT scheme also obtain promising results on three public datasets from other domains.
[ "Tagging", "Information Extraction & Text Mining", "Syntactic Text Processing", "Relation Extraction" ]
[ 63, 3, 15, 75 ]
SCOPUS_ID:84920627406
A Bidirectional View of Executive Function and Social Interaction
In this chapter, we explore the idea that the relation between social interaction and executive functions might be best characterized as bi-directional. That is, while developing executive function abilities almost certainly have considerable impact on emerging social understanding in young children, social interactions may also provide significant impetus for executive development. Working from a broadly Piagetian framework, we include two avenues of exploration to illustrate. The first is that social collaboration on a problem might facilitate executive processes; here we use the example of collaboration on a strategic deception task. The second is that exposure to the ambiguous nature of social interactions may force the child to exercise more executive control, resulting in advances in various aspects of executive function. As examples, we draw from two research literatures: children's understanding of sarcasm and children's ability to grapple with acquiring more than one language.
[ "Stylistic Analysis", "Sentiment Analysis" ]
[ 67, 78 ]
SCOPUS_ID:85115262248
A Big Data Approach for Healthcare Analysis During Covid-19
In the present times, with the massive growth of the Internet, unbelievably enormous amounts of data are within our reach. Although our lives have been changed by ready access to boundless information, we still need to explore the use of technology in various thrust areas. In this paper, we analyze and classify the mental state of people to raise awareness about mental health, especially during COVID-19. We adopted a big data approach to accomplish this project, and two standard datasets were used for our experiments. The idea behind our work is to propose a customized mental health solution based on a big data approach that can also be useful for health care. We applied state-of-the-art classification algorithms and found that the CountVec representation with the multinomial Naive Bayes method gives the highest accuracy in terms of precision and recall.
[ "Text Classification", "Ethical NLP", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 17, 4, 24, 3 ]
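For the record above, assuming "CountVec" refers to a bag-of-words CountVectorizer, the reported pipeline can be approximated with a few lines of scikit-learn; the texts and labels below are invented placeholders, not the study's datasets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-in for the mental-health text data used in the study.
texts = ["feeling anxious and isolated", "had a calm and productive day",
         "cannot sleep, constant worry", "enjoyed a walk with friends"]
labels = [1, 0, 1, 0]  # 1 = distressed, 0 = not distressed (illustrative)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["worried all night"]))
```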
SCOPUS_ID:85124279426
A Big Data Experiment to Evaluate the Effectiveness of Traditional Machine Learning Techniques Against LSTM Neural Networks in the Hotels Clients Opinion Mining
Context: Nowadays, client reviews on social networks can be a great source of knowledge extraction for strategic marketing planning. In the tourism area, opinions given by hotel clients on tourism social networks can drive improvements in service. In this context, traditional text mining techniques and new deep learning technologies should be tried out to select the best options for classifying opinions. Objective: Evaluate the performance and quality of the LDA (Latent Dirichlet Allocation), Naive Bayes (NB), Logistic Regression, SVM (Support Vector Machine) and LSTM (Long Short-Term Memory Units) algorithms in the task of opinion mining of hotel reviews published on the TripAdvisor hotel booking website. Method: An In Vivo Controlled Experiment (Case Study) to compare the performance of the classifiers by means of accuracy, precision, recall, F1-measure and average training and classification times. Results: The LSTM model presented the best results regarding the quality metrics, but it did not present satisfactory results regarding processing time. Conclusion: The LSTM classifier had clearly superior performance compared to the other evaluated classifiers; on the other hand, its average training and classification times were greater than those of the other classifiers considered.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Opinion Mining", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 49, 78, 36, 3 ]
SCOPUS_ID:85067982465
A Big Data Processing Framework for Polarity Detection in Social Network Data
Big Data refers to extremely large datasets that are produced from different areas and exhibit certain trends and associations. Major areas of big data include medical data, sensor data, and social networks such as Facebook, Twitter, YouTube, etc. Among these, social networks produce large amounts of data per millisecond, which can be analysed for several predictive and analytic purposes. Tweets produced on Twitter are used in sentiment analysis and polarity detection, which helps in identifying the attitude and polarity of words, texts or documents. Applying polarity detection to big data is a tedious task as it includes both historical and streaming data. Several frameworks have been proposed for analysing both historical and streaming data in big data. In this paper, a lambda architecture for polarity detection of tweets is proposed which analyses both streaming and historical data. Both kinds of data can be analysed in parallel and used for certain predictive and analytic purposes.
[ "Polarity Analysis", "Sentiment Analysis" ]
[ 33, 78 ]
SCOPUS_ID:85030527435
A Big Data architecture for knowledge discovery in PubMed articles
The need for smart information retrieval systems contrasts with the difficulty of dealing with huge amounts of data. In this paper we present a Big Data Analytics architecture used to implement a semantic similarity search tool for natural language texts in the biomedical domain. The implemented methodology is based on Word Embeddings (WEs) obtained using the word2vec algorithm. The system has been assessed with documents extracted from the whole PubMed library. We also present a user-friendly web front-end that allows the methodology to be assessed in a real context.
[ "Semantic Text Processing", "Semantic Similarity", "Representation Learning" ]
[ 72, 53, 12 ]
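A small sketch of the word2vec-based semantic similarity idea from the record above, assuming gensim 4.x; the three toy sentences stand in for the PubMed corpus and the vector size is an arbitrary choice.

```python
from gensim.models import Word2Vec

# Tiny stand-in corpus; the paper trains on the full PubMed library.
sentences = [
    ["aspirin", "reduces", "inflammation"],
    ["ibuprofen", "reduces", "pain", "and", "inflammation"],
    ["insulin", "regulates", "glucose"],
]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Cosine similarity between word vectors is what drives the semantic search.
print(model.wv.similarity("aspirin", "ibuprofen"))
```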
SCOPUS_ID:85042535268
A Big-Data Approach to Understanding the Thematic Landscape of the Field of Business Ethics, 1982–2016
This study focuses on examining the thematic landscape of the history of scholarly publication in business ethics. We analyze the titles, abstracts, full texts, and citation information of all research papers published in the field’s leading journal, the Journal of Business Ethics, from its inaugural issue in February 1982 until December 2016—a dataset that comprises 6308 articles and 42 million words. Our key method is a computational algorithm known as probabilistic topic modeling, which we use to examine objectively the field’s latent thematic landscape based on the vast volume of scholarly texts. This “big-data” approach allows us not only to provide time-specific snapshots of various research topics, but also to track the dynamic evolution of each topic over time. We further examine the pattern of individual papers’ topic diversity and the influence of individual papers’ topic diversity on their impact over time. We conclude this study with our recommendation for future studies in business ethics research.
[ "Responsible & Trustworthy NLP", "Topic Modeling", "Ethical NLP", "Information Extraction & Text Mining" ]
[ 4, 9, 17, 3 ]
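The probabilistic topic modeling step described in the record above (LDA over article texts) can be sketched with scikit-learn as follows; the three placeholder abstracts, the number of topics and the stop-word handling are illustrative assumptions rather than the study's configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder abstracts; the study fits topics over roughly 6,308 full-text articles.
docs = ["corporate social responsibility and stakeholder trust",
        "whistleblowing and ethical climate in organizations",
        "consumer ethics and fair trade purchasing"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# The top words per topic approximate the "thematic landscape".
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```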
SCOPUS_ID:85082115960
A BigData approach for sentiment analysis of twitter data using Naive Bayes and SVM Algorithm
Data mining and sentiment analysis are two of the most versatile research areas in the field of real-time knowledge extraction. Real-time Twitter data analysis can play a crucial role in observing the thinking and viewpoints of people and users. Nowadays, social networking sites have become central places to share thoughts and viewpoints, and analysis of social networking data can help greatly in observing trends in society; it can also help to derive user interests and hidden activities. Sentiment analysis is the approach of determining whether a piece of writing is positive, negative, or neutral, and it also helps to derive the opinion and attitude of the writer. Sentiment analysis of Twitter users can help to track their eventual viewpoints, and country-wise aggregated viewpoints can help to derive the overall opinion of a country's citizens and their ways of thinking. This work proposes a sentiment analysis model to observe the positive and negative viewpoints of different countries. The complete work is implemented using the Hadoop ecosystem to perform parallel processing on large data.
[ "Sentiment Analysis" ]
[ 78 ]
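A single-node sketch of the Naive Bayes versus SVM comparison from the record above; the Hadoop-based parallel processing is out of scope here, and the example tweets, labels and TF-IDF features are assumptions made for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented tweets with polarity labels standing in for the real Twitter stream.
tweets = ["love the new policy", "worst decision ever",
          "so proud of our team", "this is terrible news"]
polarity = ["positive", "negative", "positive", "negative"]

for name, clf in [("naive_bayes", MultinomialNB()), ("svm", LinearSVC())]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(tweets, polarity)
    print(name, pipe.predict(["what a great day"]))
```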
SCOPUS_ID:85090840643
A Bigram-based Inference Model for Retrieving Abbreviated Phrases in Source Code
Expanding abbreviations in source code to their full meanings is very useful for software maintainers to comprehend the source code. The existing approaches, however, focus on expanding an abbreviation to a single word, i.e., unigram. They do not perform well when dealing with abbreviations of phrases that consist of multiple unigrams. This paper proposes a bigram-based approach for retrieving abbreviated phrases automatically. Key to this approach is a bigram-based inference model for choosing the best phrase from all candidates. It utilizes the statistical properties of unigrams and bigrams as prior knowledge and a bigram language model for estimating the likelihood of each candidate phrase of a given abbreviation. We have applied the bigram-based approach to 100 phrase abbreviations, randomly selected from eight open source projects. The experiment results show that it has correctly retrieved 78% of the abbreviations by using the unigram and bigram properties of a source code repository. This is 9% more accurate than the unigram-based approach and much better than other existing approaches. The bigram-based approach is also less biased towards specific phrase sizes than the unigram-based approach.
[ "Language Models", "Programming Languages in NLP", "Semantic Text Processing", "Information Retrieval", "Multimodality" ]
[ 52, 55, 72, 24, 74 ]
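A toy illustration of the bigram-based scoring idea in the record above: candidate expansions of an abbreviation are ranked by their likelihood under a bigram model estimated from identifier phrases. The corpus, candidate list and add-alpha smoothing below are invented for the example.

```python
from collections import Counter

# Identifier phrases mined from a (toy) source-code repository.
corpus_phrases = [["get", "user", "name"], ["set", "user", "name"],
                  ["get", "user", "node"], ["update", "file", "name"]]

unigrams = Counter(w for p in corpus_phrases for w in p)
bigrams = Counter(b for p in corpus_phrases for b in zip(p, p[1:]))

def phrase_score(phrase, alpha=0.1):
    """Likelihood of a candidate expansion under an add-alpha smoothed bigram model."""
    score = 1.0
    for a, b in zip(phrase, phrase[1:]):
        score *= (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * len(unigrams))
    return score

# Candidate expansions for the abbreviation "gun": choose the most likely phrase.
candidates = [["get", "user", "name"], ["get", "user", "node"]]
print(max(candidates, key=phrase_score))  # -> ['get', 'user', 'name']
```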
SCOPUS_ID:85069954280
A Bilingual Adversarial Autoencoder for Unsupervised Bilingual Lexicon Induction
Unsupervised bilingual lexicon induction aims to generate bilingual lexicons without any cross-lingual signals. Successfully solving this problem would benefit many downstream tasks, such as unsupervised machine translation and transfer learning. In this work, we propose an unsupervised framework, named bilingual adversarial autoencoder, which automatically generates bilingual lexicon for a pair of languages from their monolingual word embeddings. In contrast to existing frameworks which learn a direct cross-lingual mapping of word embeddings from the source language to the target language, we train two autoencoders jointly to transform the source and the target monolingual word embeddings into a shared embedding space, where a word and its translation are close to each other. In this way, we capture the cross-lingual features of word embeddings from different languages and use them to induce bilingual lexicons. By conducting extensive experiments across eight language pairs, we demonstrate that the proposed method significantly outperforms the existing adversarial methods and even achieves best-published results across most language pairs.
[ "Language Models", "Low-Resource NLP", "Machine Translation", "Semantic Text Processing", "Robustness in NLP", "Representation Learning", "Text Generation", "Responsible & Trustworthy NLP", "Cross-Lingual Transfer", "Multilinguality" ]
[ 52, 80, 51, 72, 58, 12, 47, 4, 19, 0 ]
SCOPUS_ID:85111511541
A Bilingual Comparison of Sentiment and Topics for a Product Event on Twitter
Social media enable companies to assess consumers’ opinions, complaints and needs. The systematic and data-driven analysis of social media to generate business value is summarized under the term Social Media Analytics which includes statistical, network-based and language-based approaches. We focus on textual data and investigate which conversation topics arise during the time of a new product introduction on Twitter and how the overall sentiment is during and after the event. The analysis via Natural Language Processing tools is conducted in two languages and four different countries, such that cultural differences in the tonality and customer needs can be identified for the product. Different methods of sentiment analysis and topic modeling are compared to identify the usability in social media and in the respective languages English and German. Furthermore, we illustrate the importance of preprocessing steps when applying these methods and identify relevant product insights.
[ "Topic Modeling", "Information Extraction & Text Mining", "Sentiment Analysis", "Multilinguality" ]
[ 9, 3, 78, 0 ]
http://arxiv.org/abs/1911.03895v2
A Bilingual Generative Transformer for Semantic Sentence Embedding
Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our model's posterior to predict sentence embeddings for monolingual data at test time. Second, we use high-capacity transformers as both data generating distributions and inference networks -- contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of unsupervised semantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of these evaluations where simple word overlap is not a good indicator of similarity.
[ "Representation Learning", "Language Models", "Semantic Text Processing", "Multilinguality" ]
[ 12, 52, 72, 0 ]
https://aclanthology.org//W18-5027/
A Bilingual Interactive Human Avatar Dialogue System
This demonstration paper presents a bilingual (Arabic-English) interactive human avatar dialogue system. The system is named TOIA (time-offset interaction application), as it simulates face-to-face conversations between humans using digital human avatars recorded in the past. TOIA is a conversational agent, similar to a chat bot, except that it is based on an actual human being and can be used to preserve and tell stories. The system is designed to allow anybody, simply using a laptop, to create an avatar of themselves, thus facilitating cross-cultural and cross-generational sharing of narratives to wider audiences. The system currently supports monolingual and cross-lingual dialogues in Arabic and English, but can be extended to other languages.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Multilinguality" ]
[ 11, 38, 0 ]
SCOPUS_ID:85085711637
A Bilingual Word Alignment Method of Chinese-English based on Recurrent Neural Network
Word alignment is an important step in statistical machine translation. Chinese and English differ considerably in their linguistic characteristics, which may lead to inconsistent word alignment results. In this paper, a word alignment method based on a recurrent neural network (RNN) is proposed. Firstly, Chinese-English bilingual words are transformed into word embeddings, which are input to the RNN model and incorporate context information. The RNN uses internal memory to process input sequences of arbitrary length. The experimental results show that, compared with DNN and IBM4 models, this method improves the accuracy of word alignment and the quality of machine translation.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:0012402135
A Bilingual lexical database for frame semantics
Frame semantics is a linguistic theory which is currently gaining ground. The creation of lexical entries for a large number of words presupposes the development of complex lexical acquisition techniques in order to identify the vocabulary for describing the elements of a 'frame'. In this paper, we show how a lexical-semantic database compiled from a bilingual (English-French) dictionary can be used to identify some general frame elements which are relevant to a frame-semantic approach such as the one adopted in the FrameNet project (Fillmore and Atkins 1998, Gahl 1998). The database has been systematically enriched with explicit lexical-semantic relations holding between some elements of the microstructure of the dictionary entries. The manifold relationships have been labelled in terms of lexical functions, based on Mel'cuk's notion of co-occurrence and lexical-semantic relations in Meaning-Text Theory (Mel'cuk et al. 1984). We show how these lexical functions can be used and refined to extract potential realizations of frame elements such as typical instruments or typical locatives, which are believed to be recurrent elements in a large number of frames. We also show how the database organization of the computational lexicon makes it possible to readily access combinatorial information that is implicit and relevant to translation. © 2000 Oxford University Press.
[ "Linguistic Theories", "Linguistics & Cognitive NLP", "Multilinguality" ]
[ 57, 48, 0 ]
http://arxiv.org/abs/2112.04888v1
A Bilingual, OpenWorld Video Text Dataset and End-to-end Video Text Spotter with Transformer
Most existing video text spotting benchmarks focus on evaluating a single language and scenario with limited data. In this work, we introduce a large-scale, Bilingual, Open World Video Text benchmark dataset (BOVText). BOVText has four features. Firstly, we provide 2,000+ videos with more than 1,750,000 frames, 25 times larger than the existing largest dataset with incidental text in videos. Secondly, our dataset covers 30+ open categories with a wide selection of scenarios, e.g., Life Vlog, Driving, Movie, etc. Thirdly, abundant text type annotations (i.e., title, caption or scene text) are provided for the different representational meanings in video. Fourthly, BOVText provides bilingual text annotation to promote communication across multiple cultures. Besides, we propose an end-to-end video text spotting framework with a Transformer, termed TransVTSpotter, which solves multi-oriented text spotting in video with a simple but efficient attention-based query-key mechanism. It applies object features from the previous frame as a tracking query for the current frame and introduces rotation angle prediction to fit multi-oriented text instances. On ICDAR2015(video), TransVTSpotter achieves state-of-the-art performance with 44.1% MOTA at 9 fps. The dataset and code of TransVTSpotter can be found at github.com/weijiawu/BOVText and github.com/weijiawu/TransVTSpotter, respectively.
[ "Multilinguality", "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Multimodality" ]
[ 0, 20, 52, 72, 74 ]
http://arxiv.org/abs/cs/0407046v1
A Bimachine Compiler for Ranked Tagging Rules
This paper describes a novel method of compiling ranked tagging rules into a deterministic finite-state device called a bimachine. The rules are formulated in the framework of regular rewrite operations and allow unrestricted regular expressions in both left and right rule contexts. The compiler is illustrated by an application within a speech synthesis system.
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
SCOPUS_ID:85010303443
A Binomial Heap Extractor for Automatic Keyword Extraction
Automatic Extraction of Keywords using Frequent Itemsets (AEKFI) is a new technique for keyword extraction which integrates the adjacency of word locations within the document to automatically select the most discriminative words without using a corpus. This paper introduces a novel binomial-heap-based AEKFI approach for document summarization, in which keyword extraction is performed using binomial minimum heap operations. AEKFI provides the flexibility to select either the full set of keywords from a given document or a user-specified number of keywords, and it does not impose any restriction on the length of the keywords being extracted. The Binomial Heap Extractor has been demonstrated and found to be efficient, reducing the time complexity of existing approaches from O(n²) to O(n log n). Experimental results prove the advantage of the binomial-minimum-heap-based AEKFI over other keyword extraction tools.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
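The binomial-heap machinery of AEKFI is not reproduced here, but the general pattern of scoring candidate words and retrieving the top k through a heap can be sketched with Python's heapq; the frequency-and-position score below is a simplification, not the paper's scoring function.

```python
import heapq
from collections import Counter

def top_k_keywords(tokens, k=5):
    """Score candidate words and return the k best using a heap (O(n log k))."""
    freq = Counter(tokens)
    # Record the first position of each word as a crude proxy for location/adjacency cues.
    first_pos = {}
    for i, w in enumerate(tokens):
        first_pos.setdefault(w, i)
    scores = {w: freq[w] * (1.0 / (1 + first_pos[w])) for w in freq}
    return heapq.nlargest(k, scores, key=scores.get)

doc = "keyword extraction selects discriminative words keyword scoring uses word adjacency".split()
print(top_k_keywords(doc, k=3))
```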
SCOPUS_ID:85088740670
A Bioinspired Algorithm for Improving the Effectiveness of Knowledge Processing
The paper deals with an approach to improve the effectiveness of knowledge processing for problems of large dimensions. The authors suggest a model of classification of information resources to be used as a preprocessing stage for their further integration. The amount of information produced, transferred and processed by people and various technical devices grows rapidly every year, so the problem of improving knowledge processing efficiency is very important these days. Ontological structures are used in this work to represent the knowledge of the information systems, because they allow us to consider the semantics of the processed knowledge. The authors propose performing the classification using two components of semantic similarity between the objects of the ontologies: equivalent and horizontal semantic similarity components. To solve the classification task, we use bioinspired algorithms, since they are proven to be effective in solving optimization problems of large dimensions. One of the suggested bioinspired algorithms developed for solving the mentioned tasks is the bacteria optimization algorithm. The paper describes the algorithm and provides the results of its work. The experiments show that the bacteria algorithm gives effective results with polynomial time complexity.
[ "Semantic Text Processing", "Text Classification", "Semantic Similarity", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 53, 24, 3 ]
SCOPUS_ID:85125183644
A Biological Test Questions Naming Entity Recognition Method for Fusion Triggers
Training a neural model for named entity recognition (NER) in a new field often requires additional human annotations (for example, a large number of labeled instances), which are expensive and time-consuming to collect. Therefore, one of the key research problems is how to obtain supervision in an economical and effective way. In this article, we use 'entity triggers' for the biological domain to help the model learn NER labels effectively. Entity triggers are defined as words or phrases in a sentence that help explain why humans recognize an entity in that sentence. For the biology discipline, taking into account the characteristics of high-school biology test-question data, we conduct an experiment on a small self-built corpus with small amounts of entity and trigger annotation. The experimental results show that a trigger matching network can automatically learn trigger representations and a soft matching module, which generalizes easily to tag unseen sentences; sentences are encoded with a bidirectional LSTM, and labels are finally assigned with a conditional random field. The results show that this model achieves better recognition than other algorithms and can effectively address the difficulty and low accuracy of named entity recognition in the field of biological science caused by insufficient labeled data.
[ "Explainability & Interpretability in NLP", "Named Entity Recognition", "Information Extraction & Text Mining", "Responsible & Trustworthy NLP" ]
[ 81, 34, 3, 4 ]
SCOPUS_ID:85117803982
A Biologically Inspired Computational Model of Time Perception
Time perception-how humans and animals perceive the passage of time-forms the basis for important cognitive skills, such as decision making, planning, and communication. In this work, we propose a framework for examining the mechanisms responsible for time perception. We first model neural time perception as a combination of two known timing sources: Internal neuronal mechanisms and external (environmental) stimuli, and design a decision-making framework to replicate them. We then implement this framework in a simulated robot. We measure the robot's success on a temporal discrimination task originally performed by mice to evaluate their capacity to exploit temporal knowledge. We conclude that the robot is able to perceive time similarly to animals when it comes to their intrinsic mechanisms of interpreting time and performing time-aware actions. Next, by analyzing the behavior of agents equipped with the framework, we propose an estimator to infer characteristics of the timing mechanisms intrinsic to the agents. In particular, we show that from their empirical action probability distribution, we are able to estimate parameters used for perceiving time. Overall, our work shows promising results when it comes to drawing conclusions regarding some of the characteristics present in biological timing mechanisms.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:84903524955
A Biologically Plausible SOM Representation of the Orthographic Form of 50,000 French Words
Recently, an important aspect of human visual word recognition has been characterized. The letter position is encoded in our brain using an explicit representation of order based on letter pairs: the open-bigram coding [15]. We hypothesize that spelling has evolved in order to minimize reading errors. Therefore, word recognition using bigrams - instead of letters - should be more efficient. First, we study the influence of the size of the neighborhood, which defines the number of bigrams per word, on the performance of the matching between bigrams and word. Our tests are conducted against one of the best recognition solutions used today by the industry, which matches letters to words. Secondly, we build a cortical map representation of the words in the bigram space - which implies numerous experiments in order to achieve a satisfactory projection. Third, we develop an ultra-fast version of the self-organizing map in order to achieve learning in minutes instead of months. © Springer International Publishing Switzerland 2014.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
http://arxiv.org/abs/1705.05437v1
A Biomedical Information Extraction Primer for NLP Researchers
Biomedical Information Extraction is an exciting field at the crossroads of Natural Language Processing, Biology and Medicine. It encompasses a variety of different tasks that require application of state-of-the-art NLP techniques, such as NER and Relation Extraction. This paper provides an overview of the problems in the field and discusses some of the techniques used for solving them.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85149702077
A Biomedical Named Entity Recognition Framework with Multi-granularity Prompt Tuning
Deep learning based biomedical named entity recognition (BioNER) requires a large number of annotated samples, but annotated medical data is very scarce. To address this challenge, this paper proposes Prompt-BioNER, a BioNER framework using prompt tuning. Specifically, the framework is based on multi-granularity prompt fusion and achieves different levels of feature extraction through masked language model and next sentence prediction pre-training tasks, which effectively reduces the model's dependence on annotated data. To evaluate the overall performance of Prompt-BioNER, we conduct extensive experiments on 3 datasets. Experimental results demonstrate that the proposed framework outperforms state-of-the-art methods and can achieve good performance under low-resource conditions.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Extraction & Text Mining", "Named Entity Recognition", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 3, 34, 4 ]
SCOPUS_ID:85124722182
A Bisociated Research Paper Recommendation Model using BiSOLinkers
In the current days of information overload, it is nearly impossible to obtain a form of relevant knowledge from massive information repositories without using information retrieval and filtering tools. The academic field daily receives lots of research articles, thus making it virtually impossible for researchers to trace and retrieve important articles for their research work. Unfortunately, the tools used to search, retrieve and recommend relevant research papers suggest similar articles based on the user profile characteristic, resulting in the overspecialization problem whereby recommendations are boring, similar, and uninteresting. We attempt to address this problem by recommending research papers from domains considered unrelated and unconnected. This is achieved through identifying bridging concepts that can bridge these two unrelated domains through their outlying concepts – BiSOLinkers. We modeled a bisociation framework using graph theory and text mining technologies. Machine learning algorithms were utilized to identify outliers within the dataset, and the accuracy achieved by most algorithms was between 96.30% and 99.49%, suggesting that the classifiers accurately classified and identified the outliers. We additionally utilized the Latent Dirichlet Allocation (LDA) algorithm to identify the topics bridging the two unrelated domains at their point of intersection. BisoNets were finally generated, conceptually demonstrating how the two unrelated domains were linked, necessitating cross-domain recommendations. Hence, it is established that recommender systems' overspecialization can be addressed by combining bisociation, topic modeling, and text mining approaches.
[ "Topic Modeling", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 9, 24, 3 ]
http://arxiv.org/abs/cs/0108005v1
A Bit of Progress in Language Modeling
In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser-Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We find some significant interactions, especially with smoothing and clustering techniques. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38% and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8.9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. This is the extended version of the paper; it contains additional details and proofs, and is designed to be a good introduction to the state of the art in language modeling.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Text Clustering" ]
[ 52, 72, 3, 29 ]
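Two ingredients discussed in the record above, n-gram interpolation and perplexity, can be made concrete with a tiny worked example; the corpus, the interpolation weights and the use of raw relative frequencies (rather than Kneser-Ney smoothing) are simplifying assumptions.

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
uni = Counter(corpus)
bi = Counter(zip(corpus, corpus[1:]))
tri = Counter(zip(corpus, corpus[1:], corpus[2:]))
N = len(corpus)

def p_interp(w3, w1, w2, lambdas=(0.1, 0.3, 0.6)):
    """Linearly interpolated trigram probability P(w3 | w1, w2)."""
    l1, l2, l3 = lambdas
    p1 = uni[w3] / N
    p2 = bi[(w2, w3)] / uni[w2] if uni[w2] else 0.0
    p3 = tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0
    return l1 * p1 + l2 * p2 + l3 * p3

def perplexity(words):
    """Perplexity = exp of the average negative log-probability per predicted word."""
    logp = sum(math.log(p_interp(w3, w1, w2))
               for w1, w2, w3 in zip(words, words[1:], words[2:]))
    return math.exp(-logp / (len(words) - 2))

print(perplexity("the cat sat on the mat".split()))
```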
SCOPUS_ID:85103858112
A Block-Level RNN Model for Resume Block Classification
Resume block classification is the most significant step in resume information extraction. However, the existing algorithms applied to resume block classification are all general text classification algorithms, which fail to consider the contextual order of blocks within a resume. In order to improve the performance of resume block classification, we propose in this paper a block-level bidirectional recurrent neural network model that makes full use of the contextual order relationships among different resume blocks. The experimental results show that the average F1-score of our model on three datasets of 1,400 real resumes is 6% to 9% higher than that of existing methods.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85125704976
A Blockchain-Based Consent Mechanism for Access to Fitness Data in the Healthcare Context
Wearable fitness devices are widely used to track an individual's health and physical activities to improve the quality of health services. These devices sense a considerable amount of sensitive data processed by a centralized third party. While many researchers have thoroughly evaluated privacy issues surrounding wearable fitness trackers, no study has addressed privacy issues in trackers by giving control of the data to the user. Blockchain is an emerging technology with outstanding advantages in resolving consent management privacy concerns. As there are no fully transparent, legally compliant solutions for sharing personal fitness data, this study introduces an architecture for a human-centric, legally compliant, decentralized and dynamic consent system based on blockchain and smart contracts. Algorithms and sequence diagrams of the proposed system's activities show consent-related data flow among various agents, which are used later to prove the system's trustworthiness by formalizing the security requirements. The security properties of the proposed system were evaluated using the formal security modeling framework SeMF, which demonstrates the feasibility of the solution at an abstract level based on formal language theory. As a result, we have shown that blockchain technology is suitable for mitigating the privacy issues of fitness providers by recording individuals' consent using blockchain and smart contracts.
[ "Responsible & Trustworthy NLP", "Ethical NLP", "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 4, 17, 48, 57 ]
SCOPUS_ID:85123370866
A Blockchain-Based Sentiment Analysis Framework for Reliable Feedback System
In many institutions, feedback collection is either carried out manually or in a generic centralized manner, both of which are prone to manipulation. Providing a legitimate and practical feedback system is still an open problem in industry and information security. With the increase in demand for user feedback, concerns about the authenticity and privacy of information have grown. This article describes a web-based application implemented with a decentralized approach for building a reliable system that can ensure legitimate feedback. The feedback system provides an online platform for the whole procedure and draws on the field of cryptography, using encryption, hashing and signature algorithms. Further, sentiment analysis is explored for projecting the cumulative feedback and providing veritable results. This application can be applied in corporate offices and educational institutions where students and employees are required to submit feedback for the concerned authorities.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85115667323
A Blockchain-Based Verification System for Academic Certificates
Millions of students complete their education each year and go on to do higher studies or a corporate job. In this case student credentials are verified through a lengthy document verification process. This results in significant overhead as documents are transferred between institutions for verification. There is a need for an automated credential verification system which can reduce the time required for the document verification process. Blockchain Technology can be used to reduce overhead and reduce the time taken for document verification from days to mere seconds. In this work, an attempt has been made to develop a Blockchain-based verification system for academic certificates. With the advent of public Blockchain like Ethereum, DApps (Decentralized Applications) and Smart contracts, scalable and cost effective solutions can be implemented to reduce overhead and make document verification a seamless process. The proposed solution consists of a web app which will have a front-end for registering and requesting verification, along with a backend which will have two modules: An OCR module to extract details from certificates and a Blockchain module to send and verify data stored in the Blockchain.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:84983372464
A Bloom filter based semi-index on q-grams
We present a simple q-gram based semi-index, which allows to look for a pattern typically only in a small fraction of text blocks. Several space-time tradeoffs are presented. Experiments on Pizza & Chili datasets show that our solution is up to three orders of magnitude faster than the Claude et al. (Journal of Discrete Algorithms 2012; 11:37) semi-index at a comparable space usage. Moreover, the construction of our data structure is fast and easily parallelizable. Copyright © 2016 John Wiley & Sons, Ltd.
[ "Indexing", "Information Retrieval" ]
[ 69, 24 ]
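A rough sketch of a Bloom-filter-based q-gram semi-index in the spirit of the record above: one filter per text block, and a block is scanned only if it may contain every q-gram of the pattern. The filter size, block size, q, and the handling of block boundaries (matches straddling blocks may be missed) are simplifications, not the paper's design.

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: k hash functions over an m-slot bit array."""
    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _hashes(self, s):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{s}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, s):
        for h in self._hashes(s):
            self.bits[h] = 1
    def __contains__(self, s):
        return all(self.bits[h] for h in self._hashes(s))

def build_semi_index(text, block=64, q=3):
    """One Bloom filter of q-grams per text block."""
    index = []
    for start in range(0, len(text), block):
        chunk = text[start:start + block + q - 1]  # small overlap for q-grams at block edges
        bf = Bloom()
        for j in range(len(chunk) - q + 1):
            bf.add(chunk[j:j + q])
        index.append((start, bf))
    return index

def search(text, index, pattern, block=64, q=3):
    """Scan only blocks whose filters contain every q-gram of the pattern."""
    qgrams = [pattern[j:j + q] for j in range(len(pattern) - q + 1)]
    hits = []
    for start, bf in index:
        if all(g in bf for g in qgrams):
            hits.extend(start + i for i in range(block)
                        if text.startswith(pattern, start + i))
    return hits

text = "bloom filters make q-gram semi-indexes fast " * 20
idx = build_semi_index(text)
print(search(text, idx, "semi-index"))
```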
SCOPUS_ID:85139264329
A Blueprint for Integrating Task-Oriented Conversational Agents in Education
Over the past few years, there has been an increase in the use of chatbots for educational purposes. Nevertheless, the chatbot technologies and architectures that are often applied to educational contexts are not necessarily designed for such contexts. While general-purpose chatbot technologies can be used in educational contexts, there are some challenges specific to these contexts that need to be taken into consideration. Namely, chatbot technologies intended for education should, by design, integrate directly within online learning applications and focus on achieving learning goals by supporting learners with the task at hand. In this paper, we propose a blueprint for an architecture specifically aimed at integrating task-oriented chatbots to support learners in educational contexts. We then present a proof-of-concept implementation of our blueprint as a part of a code review application designed to teach programming best practices. Our blueprint could serve as a starting point for developers in education looking to build chatbot technologies targeting educational contexts and is a first step toward an open chatbot architecture explicitly tailored for learning applications.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85144603130
A Bona Fide Turing Test
The constantly rising demand for human-like conversational agents and the accelerated development of natural language processing technology raise expectations for a breakthrough in intelligent machine research and development. However, measuring intelligence is impossible without a proper test. Alan Turing proposed a test for machine intelligence based on imitation and unconstrained conversations between a machine and a human. To the best of our knowledge, no one has ever conducted Turing's test as Turing prescribed, even though the Turing Test has been a bone of contention for more than seventy years. Conducting a bona fide Turing Test will contribute to machine intelligence evaluation research and has the potential to advance AI researchers in their ultimate quest, developing an intelligent machine.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85044056296
A Bootstrap Method for Automatic Rule Acquisition on Emotion Cause Extraction
Emotion cause extraction is one of the promising research topics in sentiment analysis, but has not been well-investigated so far. This task enables us to obtain useful information for sentiment classification and possibly to gain further insights about human emotion as well. This paper proposes a bootstrapping technique to automatically acquire conjunctive phrases as textual cue patterns for emotion cause extraction. The proposed method first gathers emotion causes via manually given cue phrases. It then acquires new conjunctive phrases from emotion phrases that contain similar emotion causes to previously gathered ones. In existing studies, the cost for creating comprehensive cue phrase rules for building emotion cause corpora was high because of their dependencies both on languages and on textual natures. The contribution of our method is its ability to automatically create the corpora from just a few cue phrases as seeds. Our method can expand cue phrases at low cost and acquire a large number of emotion causes of the promising quality compared to human annotations.
[ "Emotion Analysis", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 61, 78, 3 ]
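A schematic of a bootstrapping loop of the kind described above, seeded with one conjunctive cue phrase; the regular expressions, the cue-candidate heuristic and the acceptance threshold are naive stand-ins for the paper's acquisition and scoring steps.

```python
import re
from collections import Counter

sentences = [
    "she was happy because she passed the exam",
    "he felt sad because of the rainy weekend",
    "they were angry since the flight was delayed",
    "I am relieved since the results arrived",
]
emotions = {"happy", "sad", "angry", "relieved"}
cues = {"because"}                      # seed conjunctive cue phrase

for _ in range(2):                      # a couple of bootstrap rounds
    # 1) Harvest emotion causes with the current cue set.
    causes = set()
    for s in sentences:
        for cue in cues:
            m = re.search(rf"\b{cue}\b (.+)$", s)
            if m:
                causes.add(m.group(1))
    # 2) Propose new cues: words that link an emotion word to a cause-like phrase.
    candidates = Counter()
    for s in sentences:
        m = re.search(r"\b(\w+)\b (the .+|of .+)$", s)
        if m and any(e in s for e in emotions) and m.group(1) not in cues:
            candidates[m.group(1)] += 1
    cues |= {c for c, n in candidates.items() if n >= 2}

print("cues:", cues)
print("causes:", causes)
```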
SCOPUS_ID:85128343312
A Bootstrap Training Approach for Language Model Classifiers
In this paper, we present a bootstrap training approach for language model (LM) classifiers. By training class-dependent LMs and running them in parallel, LMs can serve as classifiers over any kind of symbol sequence, e.g., word or phoneme sequences for tasks like topic spotting or language identification (LID). Irrespective of the particular symbol sequence used for an LM classifier, the training of an LM is done with a manually labeled training set for each class, obtained from not necessarily cooperative speakers. Therefore, we have to face some erroneous labels and deviations from the originally intended class specification, both of which can worsen classification. It might therefore be better not to use all utterances for training but to automatically select those utterances that improve recognition accuracy; this can be done by a bootstrap procedure. We present the results achieved with our best approach on the VERBMOBIL corpus for the tasks of dialog act classification and LID.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85138352694
A Bootstrapped Chinese Biomedical Named Entity Recognition Model Incorporating Lexicons
Biomedical named entity recognition (BioNER) is a sub-task of named entity recognition, aiming at recognizing named entities in medical text to boost knowledge discovery. In this paper, we propose a bootstrapped model incorporating lexicons, which takes advantage of a pretrained language model, semi-supervised learning and external lexicon features to apply BioNER to Chinese medical abstracts. Extensive evaluation shows that our system is competitive on limited annotated training data, surpassing the HMM, CRF, BiLSTM, BiLSTM-CRF and BERT baselines by 54.60%, 37.92%, 55.46%, 48.67% and 7.99%, respectively. The experimental results demonstrate that unsupervised pretraining gives the pretrained language model the ability to achieve strong performance on downstream tasks with only a small amount of annotated data. In addition, semi-supervised learning and external lexicon features can further compensate for the problem of insufficient annotated data.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Extraction & Text Mining", "Named Entity Recognition", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 3, 34, 4 ]
http://arxiv.org/abs/2008.04276v1
A Bootstrapped Model to Detect Abuse and Intent in White Supremacist Corpora
Intelligence analysts face a difficult problem: distinguishing extremist rhetoric from potential extremist violence. Many are content to express abuse against some target group, but only a few indicate a willingness to engage in violence. We address this problem by building a predictive model for intent, bootstrapping from a seed set of intent words, and language templates expressing intent. We design both an n-gram and attention-based deep learner for intent and use them as colearners to improve both the basis for prediction and the predictions themselves. They converge to stable predictions in a few rounds. We merge predictions of intent with predictions of abusive language to detect posts that indicate a desire for violent action. We validate the predictions by comparing them to crowd-sourced labelling. The methodology can be applied to other linguistic properties for which a plausible starting point can be defined.
[ "Intent Recognition", "Sentiment Analysis" ]
[ 79, 78 ]
https://aclanthology.org//2000.iwpt-1.5/
A Bootstrapping Approach to Parser Development
This paper presents a robust parsing system for unrestricted Basque texts. It analyzes a sentence in two stages: a unification-based parser builds basic syntactic units such as NPs, PPs, and sentential complements, while a finite-state parser performs syntactic disambiguation and filtering of the results. The system has been applied to the acquisition of verbal subcategorization information, obtaining 66% recall and 87% precision in the determination of verb subcategorization instances. This information will be later incorporated to the parser, in order to improve its performance.
[ "Text Classification", "Syntactic Text Processing", "Syntactic Parsing", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 15, 28, 24, 3 ]
SCOPUS_ID:85067254364
A Bootstrapping Approach with CRF and Deep Learning Models for Improving the Biomedical Named Entity Recognition in Multi-Domains
Biomedical named entity recognition (biomedical NER) is a core component for building biomedical text processing systems, such as biomedical information retrieval and question answering systems. Recently, many studies based on machine learning have been developed for biomedical NER. Machine learning-based approaches generally require significant amounts of annotated corpora to achieve high performance. However, it is expensive to manually create a large number of high-quality corpora due to the demand for biomedical experts. In addition, most existing corpora have focused on several specific sub-domains, such as disease, protein, and species, so it is difficult for a biomedical NER system trained with these corpora to provide much information for biomedical text processing systems. In this paper, we propose a method for automatically generating a machine-labeled biomedical NER corpus that covers various sub-domains by using proper categories from the semantic groups of the Unified Medical Language System (UMLS). We use a bootstrapping approach with a small amount of manually annotated corpus to automatically generate a significant amount of corpus and then construct a biomedical NER system trained with the machine-labeled corpus. Finally, we train two machine learning-based classifiers, conditional random fields (CRFs) and long short-term memory (LSTM), with the machine-labeled data to improve performance. The experimental results show that the proposed method is effective in improving performance; it obtains a 23.69% higher F1-score than a model trained on only a small amount of manually annotated corpus.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85049099038
A Bootstrapping-based Method to Automatically Identify Data-usage Statements in Publications
Purpose: Our study proposes a bootstrapping-based method to automatically extract data-usage statements from academic texts. Design/methodology/approach: The method for data-usage statements extraction starts with seed entities and iteratively learns patterns and data-usage statements from unlabeled text. In each iteration, new patterns are constructed and added to the pattern list based on their calculated score. Three seed-selection strategies are also proposed in this paper. Findings: The performance of the method is verified by means of experiments on real data collected from computer science journals. The results show that the method can achieve satisfactory performance regarding precision of extraction and extensibility of obtained patterns. Research limitations: While the triple representation of sentences is effective and efficient for extracting data-usage statements, it is unable to handle complex sentences. Additional features that can address complex sentences should thus be explored in the future. Practical implications: Data-usage statements extraction is beneficial for data-repository construction and facilitates research on data-usage tracking, dataset-based scholar search, and dataset evaluation. Originality/value: To the best of our knowledge, this paper is among the first to address the important task of automatically extracting data-usage statements from real data.
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:84944930145
A Borda count for collective sentiment analysis
Sentiment analysis assigns a positive, negative or neutral polarity to an item or entity, extracting and aggregating individual opinions from their textual expressions by means of natural language processing tools. In this paper we observe that current sentiment analysis techniques are satisfactory in case there is a single entity under consideration, but can lead to inaccurate or wrong results when dealing with a set of multiple items. We argue in favor of importing techniques from voting theory and preference aggregation to provide a more accurate definition of the collective sentiment over a set of multiple items. We propose a notion of Borda count which combines individuals’ sentiment with comparative preference information, we show that this class of rules satisfies a number of properties which have a natural interpretation in the sentiment analysis domain, and we evaluate its behavior when faced with highly incomplete domains.
[ "Sentiment Analysis" ]
[ 78 ]
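The Borda-style aggregation argued for above can be illustrated with a short sketch: each agent's item-level sentiment scores are turned into a ranking, and standard Borda points are summed. The numeric scores and the tie handling are hypothetical choices, not the exact rule defined in the paper.

```python
from collections import defaultdict

# Each agent assigns a sentiment score in [-1, 1] to the items it has an opinion on
# (hypothetical numbers). Missing items illustrate the incomplete-domain setting.
agents = [
    {"phone_a": 0.9, "phone_b": 0.2, "phone_c": -0.5},
    {"phone_a": 0.1, "phone_b": 0.7},
    {"phone_b": -0.3, "phone_c": 0.4},
]

def borda_from_sentiment(agents):
    """Turn each agent's sentiment scores into a ranking and sum Borda points.

    An agent ranking k items gives k-1 points to its top item, ..., 0 to its
    bottom item; items the agent did not rate receive no points from it.
    """
    totals = defaultdict(float)
    for scores in agents:
        ranking = sorted(scores, key=scores.get, reverse=True)
        for rank, item in enumerate(ranking):
            totals[item] += len(ranking) - 1 - rank
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(borda_from_sentiment(agents))
# phone_a and phone_b end up ahead of phone_c under this toy profile
```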
SCOPUS_ID:85079725811
A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication
Artificially intelligent (AI) agents increasingly occupy roles once served by humans in computer-mediated communication (CMC). Technological affordances like emoji give interactants (humans or bots) the ability to partially overcome the limited nonverbal information in CMC. However, despite the growth of chatbots as conversational partners, few CMC and human-machine communication (HMC) studies have explored how bots’ use of emoji impacts perceptions of communicator quality. This study examined the relationship between emoji use and observers’ impressions of interpersonal attractiveness, CMC competence, and source credibility; and whether impressions formed of human versus chatbot message sources were different. Results demonstrated that participants rated emoji-using chatbot message sources similarly to human message sources, and that both humans and bots were rated as significantly more socially attractive, CMC competent, and credible than verbal-only message senders. Results are discussed with respect to the CASA paradigm and the human-to-human interaction script framework.
[ "Natural Language Interfaces", "Visual Data in NLP", "Multimodality", "Dialogue Systems & Conversational Agents" ]
[ 11, 20, 74, 38 ]
SCOPUS_ID:85103442629
A Bottom-Up Approach for Moroccan Legal Ontology Learning from Arabic Texts
Ontologies constitute an exciting model for representing a domain of interest, since they enable information-sharing and reuse. Existing inference machines can also use them to reason about various contexts. However, ontology construction is a time-consuming and challenging task. The ontology learning field answers this problem by providing automatic or semi-automatic support to extract knowledge from various sources, such as databases and structured and unstructured documents. This paper reviews the ontology learning process from unstructured text and proposes a bottom-up approach to building legal domain-specific ontology from Arabic texts. In this work, the learning process is based on Natural Language Processing (NLP) techniques and includes three main tasks: corpus study, term acquisition, and conceptualization. Corpus study enriches the original corpus with valuable linguistic information. Term acquisition selects tagged lemma sequences as potential term candidates, and conceptualization derives concepts and their relationships from the extracted terms. We used the NooJ platform to implement the required linguistic resources for each task. Further, we developed a Java module to enrich the ontology vocabulary from the Arabic WordNet (AWN) project. The obtained results were essential but incomplete; a legal expert revised them manually, and then they were used to refine and expand a domain ontology for a Moroccan Legal Information Retrieval System (LIRS).
[ "Knowledge Representation", "Semantic Text Processing" ]
[ 18, 72 ]
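As a rough illustration of the term-acquisition step, the snippet below selects tagged-lemma sequences matching simple part-of-speech patterns. The transliterated lemmas, tags, and patterns are hypothetical placeholders; the paper itself implements this with NooJ linguistic resources over the enriched corpus.

```python
# Hypothetical tagged corpus: (lemma, POS) pairs produced by the corpus-study step.
# In Arabic, adjectives follow the noun, so NOUN+ADJ and NOUN+NOUN sequences are
# plausible term-candidate patterns (heavily simplified transliterated examples).
tagged = [
    ("qanun", "NOUN"), ("jinai", "ADJ"),     # "criminal law"
    ("fi", "PREP"),
    ("mahkama", "NOUN"), ("naqd", "NOUN"),   # "court of cassation"
    ("sadara", "VERB"),
]

PATTERNS = {("NOUN", "ADJ"), ("NOUN", "NOUN")}

def acquire_terms(tagged, patterns):
    """Return multi-word term candidates whose POS sequence matches a pattern."""
    candidates = []
    for i in range(len(tagged) - 1):
        if (tagged[i][1], tagged[i + 1][1]) in patterns:
            candidates.append(" ".join(lemma for lemma, _ in tagged[i:i + 2]))
    return candidates

print(acquire_terms(tagged, PATTERNS))   # ['qanun jinai', 'mahkama naqd']
```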
SCOPUS_ID:85128443098
A Bottom-Up DAG Structure Extraction Model for Math Word Problems
Research on automatically solving mathematical word problems (MWP) has a long history. Most recent works adopt the Seq2Seq approach to predict the result equations as a sequence of quantities and operators. Although result equations can be written as a sequence, they essentially form a structure. More precisely, they form a Directed Acyclic Graph (DAG) whose leaf nodes are the quantities and whose internal and root nodes are arithmetic or comparison operators. In this paper, we propose a novel Seq2DAG approach to extract the equation set directly as a DAG structure. It extracts the structure in a bottom-up fashion by aggregating quantities and sub-expressions layer by layer iteratively. The advantages of our approach are threefold: it is intrinsically suited to solving multivariate problems, it always outputs a valid structure, and its computation satisfies the commutative law for + and =. Experimental results on the DRAW1K and Math23K datasets demonstrate that our model outperforms state-of-the-art deep learning methods. We also conduct a detailed analysis of the results to show the strengths and limitations of our approach.
[ "Reasoning", "Numerical Reasoning", "Information Extraction & Text Mining" ]
[ 8, 5, 3 ]
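A toy, non-neural sketch of the bottom-up idea: starting from the quantities as leaf nodes, each layer combines existing nodes with an operator into new sub-expression nodes until some node evaluates to the known answer. The real Seq2DAG model scores candidates with learned representations; here they are simply enumerated, so this only illustrates the DAG-building order.

```python
from itertools import permutations

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def bottom_up_dag(quantities, answer, max_layers=2, eps=1e-6):
    """Combine quantity nodes layer by layer until a node matches the answer.

    Each node is a (value, expression) pair; new sub-expression nodes are added
    on top of existing ones, mimicking the bottom-up aggregation order.
    """
    nodes = [(float(q), str(q)) for q in quantities]
    for _ in range(max_layers):
        new_nodes = []
        for (va, ea), (vb, eb) in permutations(nodes, 2):
            for op, fn in OPS.items():
                if op == "/" and vb == 0:
                    continue
                value, expr = fn(va, vb), f"({ea} {op} {eb})"
                if abs(value - answer) < eps:
                    return expr
                new_nodes.append((value, expr))
        nodes += new_nodes
    return None

# "Tom buys 3 bags of 5 apples and eats 2; 13 remain."
print(bottom_up_dag([3, 5, 2], answer=13))   # prints one DAG whose root evaluates to 13
```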
https://aclanthology.org//W12-1628/
A Bottom-Up Exploration of the Dimensions of Dialog State in Spoken Interaction
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85097203273
A Boundary Assembling Method for Nested Biomedical Named Entity Recognition
Biomedical named entity recognition (BNER) is an important task in biomedical natural language processing, a domain in which neologisms (new terms and words) are coined constantly. Most existing work can only identify biomedical named entities with flat structures and ignores nested and discontinuous biomedical named entities. Because the biomedical domain often uses nested structures to represent the semantic information of named entities, existing methods fail to exploit abundant information when processing biomedical texts. This paper focuses on identifying nested biomedical named entities using a boundary assembly (BA) model, a cascading framework consisting of three steps: first, start and end named entity boundaries are identified; second, the boundaries are assembled into named entity candidates; finally, a classifier filters out false named entities. Our approach is effective in handling nesting and discontinuity in biomedical named entity recognition tasks. It improves performance considerably, achieving an F1-score of 81.34% on the GENIA dataset.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
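The three-step cascade can be mimicked with a tiny sketch: per-token start/end probabilities (hypothetical numbers standing in for the trained boundary detectors) are thresholded, paired into candidate spans, and filtered by a stub scoring function in place of the final classifier.

```python
# Hypothetical per-token boundary probabilities for the sentence below; a real
# system would obtain these from trained start/end boundary detectors.
tokens  = ["interleukin", "-", "2", "receptor", "alpha", "gene"]
p_start = [0.9, 0.1, 0.2, 0.1, 0.1, 0.2]
p_end   = [0.1, 0.1, 0.8, 0.7, 0.1, 0.9]

START_T, END_T, CAND_T = 0.5, 0.5, 0.4
MAX_SPAN = 6

def assemble_candidates(p_start, p_end):
    """Steps 1+2: threshold boundary probabilities and pair them into spans."""
    starts = [i for i, p in enumerate(p_start) if p >= START_T]
    ends = [j for j, p in enumerate(p_end) if p >= END_T]
    return [(i, j) for i in starts for j in ends if i <= j < i + MAX_SPAN]

def filter_candidates(cands, p_start, p_end):
    """Step 3: stand-in for the filtering classifier (geometric-mean score)."""
    keep = []
    for i, j in cands:
        score = (p_start[i] * p_end[j]) ** 0.5
        if score >= CAND_T:
            keep.append(((i, j), " ".join(tokens[i:j + 1]), round(score, 2)))
    return keep

cands = assemble_candidates(p_start, p_end)
print(filter_candidates(cands, p_start, p_end))
# nested spans such as (0, 2), (0, 3) and (0, 5) can all survive filtering
```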
SCOPUS_ID:85111245497
A Boundary Determined Neural Model For Relation Extraction
Existing models extract entity relations only after two entity spans have been precisely extracted, which limits the performance of relation extraction. Compared with entity spans, boundaries have smaller granularity and less ambiguity, so they can be detected precisely and incorporated to learn better representations. Motivated by these strengths of boundaries, we propose a boundary determined neural (BDN) model, which leverages boundaries as task-related cues to predict relation labels. Our model can predict high-quality relation instances via pairs of boundaries, which relieves the error propagation problem. Moreover, our model fuses boundary-relevant information into its distributed representations, improving its ability to capture semantic and dependency information and increasing the discriminability of the neural network. Experiments show that our model achieves state-of-the-art performance on the ACE05 corpus.
[ "Relation Extraction", "Information Extraction & Text Mining" ]
[ 75, 3 ]
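A minimal PyTorch sketch of the boundary-pair idea, assuming precomputed token representations: the vectors at the boundary positions of two candidate entities are concatenated and scored by a linear layer over relation labels. The dimensions, the random encoder output, and the single linear classifier are placeholders, not the actual BDN architecture.

```python
import torch
import torch.nn as nn

class BoundaryPairRelationScorer(nn.Module):
    """Scores relation labels from the boundary representations of two entities."""

    def __init__(self, hidden_dim=64, num_relations=5):
        super().__init__()
        # 4 boundary vectors per pair: head start/end + tail start/end
        self.classifier = nn.Linear(4 * hidden_dim, num_relations)

    def forward(self, token_reprs, head_span, tail_span):
        (hs, he), (ts, te) = head_span, tail_span
        pair = torch.cat([token_reprs[hs], token_reprs[he],
                          token_reprs[ts], token_reprs[te]], dim=-1)
        return self.classifier(pair)            # unnormalized relation scores

# Hypothetical encoder output for a 10-token sentence (e.g. from a BiLSTM or BERT).
token_reprs = torch.randn(10, 64)
scorer = BoundaryPairRelationScorer()
logits = scorer(token_reprs, head_span=(1, 2), tail_span=(6, 8))
print(logits.softmax(dim=-1))                   # distribution over relation labels
```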
SCOPUS_ID:85138815713
A Boundary Regression Model for Nested Named Entity Recognition
Recognizing named entities (NEs) is commonly treated as a classification problem in which a class tag for a word or an NE candidate in a sentence is predicted. Recent neural network approaches adopt deep structures that map categorized features into continuous representations. In this way, a dense space saturated with high-order abstract semantic information is unfolded, and the prediction is based on distributed feature representations. In this paper, the positions of NEs in a sentence are represented as continuous values, and a regression operation is introduced to regress the boundaries of NEs in a sentence. Based on boundary regression, we design a boundary regression model to support nested NE recognition. It is a multiobjective learning framework that simultaneously predicts the classification score of an NE candidate and refines its spatial location in a sentence. This model was evaluated on the ACE 2005 Chinese and English corpora and the GENIA corpus, where it experimentally demonstrated state-of-the-art performance for nested NE recognition, outperforming related work by about 5% and 2%, respectively. Our model has the advantage of resolving nested NEs and supporting boundary regression for locating NEs in a sentence. By sharing parameters for prediction and localization, it enables more potent nonlinear function approximators that enhance model discriminability.
[ "Named Entity Recognition", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 34, 24, 36, 3 ]
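To make the continuous-position idea concrete, the sketch below encodes a gold NE span as (center, width) regression offsets relative to a candidate window and decodes a predicted pair back to token indices, in the spirit of bounding-box regression; this parameterization is an assumption for illustration, not necessarily the one used in the paper.

```python
def encode_span(span, window):
    """Encode a token span as continuous (center, width) offsets w.r.t. a window."""
    (s, e), (ws, we) = span, window
    w_center, w_width = (ws + we) / 2.0, (we - ws + 1)
    center, width = (s + e) / 2.0, (e - s + 1)
    # offsets are normalized by the window width, as in box regression
    return (center - w_center) / w_width, width / w_width

def decode_span(offsets, window):
    """Invert encode_span, rounding back to integer token indices."""
    (dc, dw), (ws, we) = offsets, window
    w_center, w_width = (ws + we) / 2.0, (we - ws + 1)
    center, width = dc * w_width + w_center, dw * w_width
    return round(center - (width - 1) / 2.0), round(center + (width - 1) / 2.0)

window = (2, 9)                  # candidate region proposed by the network
gold = (4, 6)                    # gold nested entity inside the window
offsets = encode_span(gold, window)
print(offsets)                       # regression targets the model is trained to output
print(decode_span(offsets, window))  # -> (4, 6), recovered from the continuous values
```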
SCOPUS_ID:85135375255
A Bounded Transition Hidden Markov Model for Continuous Speech Recognition
An HMM for phonetic transcription is presented. The inter-state transitions are bounded around phone boundaries, which are estimated from the observation sequence by statistical phone boundary detectors. The detection is based on the ratio of two probabilities: the probability that the observation sequence in a window contains a phone boundary and the probability that it does not. The optimal state sequence is found by a simple Viterbi algorithm with two variables (time and state). In phonetic transcription experiments, the presented HMM achieved the best accuracy compared with HMMs that explicitly model state durations. The large improvement in overall performance was due to a reduction in insertion errors.
[ "Speech & Audio in NLP", "Syntactic Text Processing", "Text Generation", "Phonetics", "Speech Recognition", "Multimodality" ]
[ 70, 15, 47, 64, 10, 74 ]
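A small numpy sketch of the bounded-transition idea, with made-up emission scores and boundary-detector probabilities: the per-frame log-ratio test marks boundary frames, and the two-variable Viterbi recursion only allows a transition into the next phone state at those frames.

```python
import numpy as np

np.random.seed(0)
T, S = 12, 3                         # frames, left-to-right phone states

# Hypothetical scores: per-frame emission log-likelihoods and the two boundary
# detector probabilities (window contains / does not contain a phone boundary).
log_emit = np.log(np.random.dirichlet(np.ones(S), size=T))
p_bnd = np.random.uniform(0.05, 0.95, size=T)
boundary_frame = np.log(p_bnd) - np.log(1.0 - p_bnd) > 0   # ratio test per frame

def bounded_viterbi(log_emit, boundary_frame):
    T, S = log_emit.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0, 0] = log_emit[0, 0]                 # must start in the first phone state
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1, s]
            # entering the next phone state is only allowed at detected boundaries
            enter = delta[t - 1, s - 1] if (s > 0 and boundary_frame[t]) else -np.inf
            best, prev = (enter, s - 1) if enter > stay else (stay, s)
            delta[t, s], back[t, s] = best + log_emit[t, s], prev
    # backtrace the two-variable (time, state) trellis from the best final state
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

print(bounded_viterbi(log_emit, boundary_frame))   # phone state index per frame
```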
SCOPUS_ID:85057412305
A Bounding Box Approach for Performing Dynamic Optical Character Recognition in MATLAB
OCR is used by computers to recognize written or optically generated text. Machine learning and artificial intelligence frequently rely on such automation processes, which demand high accuracy. In this paper, the threshold value is set once for the whole bounding box algorithm rather than using a random threshold value. Region properties of the image are measured in the second and final module of our approach. In the proposed approach, the final extraction of optical characters is performed by removing all feature vectors (connected regions) with fewer than 30 pixels. This step increases recognition accuracy and improves the visual result. The proposed algorithm is applied to both old and new data sets, and a comparative analysis of the two outputs is carried out. The proposed algorithm also extracts different optical characters at the same time, which reduces time complexity.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
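The thresholding-and-filtering pipeline described above can be approximated in Python with scipy in place of MATLAB's regionprops; the synthetic image, the fixed threshold, and the 30-pixel cut-off follow the abstract's description, but this is only a hedged illustration, not the paper's MATLAB implementation.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale "page": two character-sized blobs and one speck of noise.
img = np.zeros((60, 120), dtype=np.uint8)
img[10:40, 10:25] = 200          # character 1 (450 px)
img[10:40, 50:70] = 180          # character 2 (600 px)
img[50:53, 100:103] = 255        # noise blob (9 px, below the 30-px cut-off)

THRESHOLD = 128                  # set once for the whole page, not chosen at random
binary = img > THRESHOLD

# Label connected components and measure their sizes (the regionprops-style step).
labels, n = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
slices = ndimage.find_objects(labels)

# Keep only components with at least 30 pixels and report their bounding boxes.
boxes = [((sl[0].start, sl[1].start), (sl[0].stop, sl[1].stop), int(size))
         for size, sl in zip(sizes, slices) if size >= 30]
print(boxes)                     # two boxes survive; the 9-pixel speck is discarded
```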
SCOPUS_ID:85080112548
A Bourdieusian analysis of the multilingualism in a poverty-stricken ethnic minority area: can linguistic capital be transferred to economic capital?
Indigenous languages in poverty-stricken areas are often threatened by competition from the majority languages driving economic progress. Within the framework of the economics of linguistic exchanges, this paper discusses the possibility of transferring linguistic capital into economic capital, and the revaluation of minority languages to promote multilingualism in underdeveloped regions. A mixed-method approach (questionnaires, focused interviews, ethnographic observations) was adopted to investigate the linguistic use of and attitudes towards Hani, Mandarin and English among 142 Hani participants in Yuanyang County, China, and the dispositions of 1,395 participants outside Yuanyang towards four types of objectified linguistic products used in two submarkets. The qualitative and quantitative data from Yuanyang show that with a conservative monolingual attitude, young Hani are shifting from Hani to the national lingua franca, Mandarin, for socioeconomic reasons. However, outsiders favour the utilisation of multilingual resources in the submarkets of local tourism and sales of regionally specific products. The findings demonstrate that there is room for the revaluation of indigenous language in these submarkets, implying that minority and majority languages may coexist and develop in harmony if the local Hani and poverty alleviation workers can begin to transfer multilingual resources into economic capital in the submarkets.
[ "Multilinguality" ]
[ 0 ]