id: string (length 20 to 52)
title: string (length 3 to 459)
abstract: string (length 0 to 12.3k)
classification_labels: list
numerical_classification_labels: list
http://arxiv.org/abs/2104.01791v2
A Heuristic-driven Uncertainty based Ensemble Framework for Fake News Detection in Tweets and News Articles
The significance of social media has increased manifold in the past few decades as it helps people from even the most remote corners of the world stay connected. With the advent of technology, digital media has become more relevant and widely used than ever before, and along with this there has been a resurgence in the circulation of fake news and tweets that demand immediate attention. In this paper, we describe a novel Fake News Detection system that automatically identifies whether a news item is "real" or "fake", as an extension of our work in the CONSTRAINT COVID-19 Fake News Detection in English challenge. We use an ensemble model consisting of pre-trained models followed by a statistical feature fusion network, along with a novel heuristic algorithm that incorporates various attributes present in news items or tweets, such as source, username handles, URL domains, and authors, as statistical features. Our proposed framework also quantifies reliable predictive uncertainty along with proper class output confidence levels for the classification task. We evaluate our results on the COVID-19 Fake News dataset and the FakeNewsNet dataset to show the effectiveness of the proposed algorithm at detecting fake news in short news content as well as in news articles. We obtained a best F1-score of 0.9892 on the COVID-19 dataset and an F1-score of 0.9073 on the FakeNewsNet dataset.
[ "Reasoning", "Fact & Claim Verification", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 8, 46, 17, 4 ]
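A minimal sketch of the soft-voting-with-uncertainty idea described in the abstract above; the averaging scheme, the entropy-based uncertainty measure, and the toy probabilities are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def ensemble_predict(prob_matrices):
    """Average class probabilities from several pre-trained models.

    prob_matrices: list of (n_samples, n_classes) softmax outputs,
    one array per model in the ensemble.
    """
    mean_probs = np.mean(np.stack(prob_matrices), axis=0)
    # Predictive entropy as a simple uncertainty estimate:
    # low entropy = confident, high entropy = uncertain.
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
    labels = mean_probs.argmax(axis=1)      # e.g. 0 = "real", 1 = "fake"
    confidence = mean_probs.max(axis=1)
    return labels, confidence, entropy

# Two hypothetical models scoring three tweets:
m1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
m2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.1, 0.9]])
labels, conf, unc = ensemble_predict([m1, m2])
```

High-entropy predictions are the natural candidates for the heuristic, feature-based post-processing the abstract mentions.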
http://arxiv.org/abs/1512.03950v1
A Hidden Markov Model Based System for Entity Extraction from Social Media English Text at FIRE 2015
This paper presents the experiments carried out by us at Jadavpur University as part of our participation in the FIRE 2015 task: Entity Extraction from Social Media Text - Indian Languages (ESM-IL). The tool that we developed for the task is based on a Trigram Hidden Markov Model that utilizes information such as a gazetteer list, POS tags, and some other word-level features to enhance the observation probabilities of known as well as unknown tokens. We submitted runs for English only. A statistical HMM (Hidden Markov Model) based model was used to implement our system, which was trained and tested on the datasets released for the FIRE 2015 ESM-IL task. Our system is the best performer for English, obtaining precision, recall, and F-measure of 61.96, 39.46, and 48.21, respectively.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
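Decoding in an HMM tagger of the kind described above is typically done with Viterbi search. Below is a minimal first-order (bigram) sketch with toy probabilities; the paper's system is a trigram HMM enhanced with gazetteer, POS, and word-level features, so this only illustrates the decoding principle:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of the best path ending in state s at time t,
    #            best previous state for backtracking)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-8), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            V[t][s] = (V[t - 1][prev][0] * trans_p[prev][s]
                       * emit_p[s].get(obs[t], 1e-8), prev)
    # Backtrack from the most probable final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

# Toy run: tag three words as entity (ENT) or not (O).
states = ["O", "ENT"]
start = {"O": 0.8, "ENT": 0.2}
trans = {"O": {"O": 0.7, "ENT": 0.3}, "ENT": {"O": 0.6, "ENT": 0.4}}
emit = {"O": {"went": 0.3, "to": 0.4}, "ENT": {"Delhi": 0.5}}
print(viterbi(["went", "to", "Delhi"], states, start, trans, emit))
```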
SCOPUS_ID:38149032434
A Hidden Markov Model based named entity recognition system: Bengali and Hindi as case studies
Named Entity Recognition (NER) has an important role in almost all Natural Language Processing (NLP) application areas, including information retrieval, machine translation, question-answering systems, automatic summarization, etc. This paper reports on the development of a statistical Hidden Markov Model (HMM) based NER system. The system is initially developed for Bengali using a tagged Bengali news corpus built from the web archive of a leading Bengali newspaper. The system is trained with a corpus of 150,000 wordforms, initially tagged with an HMM based part-of-speech (POS) tagger. Evaluation results of the 10-fold cross validation test yield average Recall, Precision, and F-Score values of 90.2%, 79.48%, and 84.5%, respectively. This HMM based NER system is then trained and tested on Hindi data to show its language-independent abilities. Experimental results of the 10-fold cross validation test have demonstrated average Recall, Precision, and F-Score values of 82.5%, 74.6%, and 78.35%, respectively, with 27,151 Hindi wordforms. © Springer-Verlag Berlin Heidelberg 2007.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
http://arxiv.org/abs/1611.06607v2
A Hierarchical Approach for Generating Descriptive Image Paragraphs
Recent progress on image captioning has made it possible to generate novel sentences describing images in natural language, but compressing an image into a single sentence can describe visual content in only coarse detail. While one new captioning approach, dense captioning, can potentially describe images in finer levels of detail by captioning many regions within an image, it in turn is unable to produce a coherent story for an image. In this paper we overcome these limitations by generating entire paragraphs for describing images, which can tell detailed, unified stories. We develop a model that decomposes both images and paragraphs into their constituent parts, detecting semantic regions in images and using a hierarchical recurrent neural network to reason about language. Linguistic analysis confirms the complexity of the paragraph generation task, and thorough experiments on a new dataset of image and paragraph pairs demonstrate the effectiveness of our approach.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 39, 47, 74 ]
SCOPUS_ID:85107364152
A Hierarchical Approach for Joint Extraction of Entities and Relations
Most existing approaches for the extraction of entities and relations face two main challenges: extracting overlapping relations and capturing the interactions between entity and relation extractions. In this paper, we present a novel sequence-to-sequence model with a hierarchical decoder to solve both issues elegantly and efficiently. Specifically, we use the low-level decoder to predict multi-relations and produce a relation vector for each triple. Given this relation vector, the high-level decoder generates two entities associated with the triple. In this manner, we can directly capture the interactions between entity and relation extractions. Moreover, by decomposing two tasks into two decoding phases, the overlapping multi-relations extraction can be naturally separated. Experiments on popular public datasets demonstrate that our model can effectively extract overlapping triples.
[ "Language Models", "Relation Extraction", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 75, 72, 3 ]
http://arxiv.org/abs/1909.12401v1
A Hierarchical Approach for Visual Storytelling Using Image Description
One of the primary challenges of visual storytelling is developing techniques that can maintain the context of the story over long event sequences to generate human-like stories. In this paper, we propose a hierarchical deep learning architecture based on encoder-decoder networks to address this problem. To better help our network maintain this context while also generating long and diverse sentences, we incorporate natural language image descriptions along with the images themselves to generate each story sentence. We evaluate our system on the Visual Storytelling (VIST) dataset and show that our method outperforms state-of-the-art techniques on a suite of different automatic evaluation metrics. The empirical results from this evaluation demonstrate the necessity of the different components of our proposed architecture and show its effectiveness for visual storytelling.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85135753167
A Hierarchical Approach to Interpretability of TS Rule-Based Models
Interpretability of fuzzy rule-based models has always been of significant interest to the research community and the research in this area led to a number of far-reaching results. In this study, we briefly revisit the methodology and concepts of interpretability of Takagi-Sugeno (T-S) rule-based models and develop a conceptual framework involving several levels at which rules are interpreted. The layers at which interpretability is positioned are structured hierarchically by starting with the initial fuzzy set level (originating from the design of the rules), moving to information granules of finite support (where interval calculus is engaged) and finally ending up with symbols built at the higher level. As T-S rule-based models are endowed with local functions forming the conclusion parts of the rules, with the use of the principle of justifiable granularity, we develop a way of forming an interpretable conclusion in the form of information granule. To facilitate interpretability of conditions of the rules, multidimensional fuzzy sets (coming as a result of clustering) are decomposed into a Cartesian product of 1-D fuzzy sets and the quality of the resulting decomposition is evaluated. The quality of granular rules is assessed by analyzing the relationship between specificity of condition and conclusion information granules. The rules emerging at the level of symbols are further interpreted by engaging linguistic approximation, which helps approximate a collection of linguistic terms of subconditions producing a linguistic summarization in the form τ (inputs are A) consisting of a certain linguistic quantifier τ. The performance of summarization is provided in the form of ranking of the relevance of the rules. Experimental studies using publicly available data are completed and analyzed.
[ "Explainability & Interpretability in NLP", "Summarization", "Text Generation", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 81, 30, 47, 4, 3 ]
SCOPUS_ID:85115289880
A Hierarchical Category Embedding Based Approach for Fault Classification of Power ICT System
To solve the low classification accuracy or even misclassification issue in fault diagnosis, a text classification method based on hierarchical category embedding is proposed for information and communication technology (ICT) customer service systems. First, a hierarchical label system is constructed for the failure data in power ICT systems based on the textual data of the work orders. Then, hierarchical deep pyramid convolutional neural networks (HDPCNN) and hierarchical disconnected recurrent neural networks are proposed, which adopt a hierarchical category embedding technique for level-by-level fault type classification. The experimental results show that the hierarchical text classification algorithm HDPCNN has the best classification accuracy and can provide efficient and accurate services for fault type recognition.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
SCOPUS_ID:85081665146
A Hierarchical Classification Framework for Phonemes and Broad Phonetic Groups (BPGs): a Discriminative Template-Based Approach
In this paper, a novel framework for phone or phoneme classification is presented. The framework combines a discriminative classification approach with the traditional HMM framework. Unlike the traditional HMM approach to phoneme recognition, here all phones are modeled by one HMM. However, instead of using generative models (e.g., GMMs or codebooks), this framework employs a discriminative classifier to predict the state probabilities and finds the optimal state sequence to obtain a time-alignment function between the acoustic feature vector sequence and the state sequence. For each state Si, the corresponding feature vectors are averaged, resulting in a single feature vector that represents the i-th vector of the block. All feature vectors of the block are then concatenated into a single feature vector to represent a phone unit, which is used as the feature vector for a phone classifier. The phone classifier is hierarchical in the sense that the broad phonetic groups (BPGs) are classified first, followed by the phonemes belonging to those classes. Validated on the TIMIT database, the proposed framework with MFCCs has performance comparable to related phoneme classification algorithms, but with the flexibility to account for duration and other features such as articulatory features. We also observe that the framework gives promising results for BPG classification.
[ "Text Classification", "Syntactic Text Processing", "Phonetics", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 15, 64, 24, 3 ]
SCOPUS_ID:85084761659
A Hierarchical Clustering Approach to Fuzzy Semantic Representation of Rare Words in Neural Machine Translation
Rare words are usually replaced with a single <unk> token in the current encoder-decoder style of neural machine translation, challenging translation modeling with an obscured context. In this article, we propose to build a fuzzy semantic representation (FSR) method for rare words through a hierarchical clustering method that groups rare words together, and integrate it into the encoder-decoder framework. This hierarchical structure can compensate for the semantic information on both the source and target sides, and provides fuzzy context information to capture the semantics of rare words. The introduced FSR can also alleviate data sparseness, which is the bottleneck in dealing with rare words in neural machine translation. In particular, our method is easily extended to the transformer-based neural machine translation model and learns the FSRs of all in-vocabulary words to enhance the sentence representations in addition to rare words. Our experiments on Chinese-to-English translation tasks confirm a significant improvement in translation quality brought by the proposed method.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Information Extraction & Text Mining", "Representation Learning", "Text Generation", "Text Clustering", "Multilinguality" ]
[ 52, 51, 72, 3, 12, 47, 29, 0 ]
SCOPUS_ID:85065666811
A Hierarchical Clustering Based Relation Extraction Method for Domain Ontology
At present, the focus of ontology learning is on the extraction of concepts and relations. Relation extraction is divided into hierarchical relation extraction and non-hierarchical relation extraction, and hierarchical relation extraction is the basis of non-hierarchical relation extraction. In this paper, a mixed hierarchical clustering method is used to extract and classify domain concepts from textual corpora in different fields of the Uyghur language. Due to the limitations of the selected text content and the difficulty of merging nodes in hierarchical clustering, the results are not ideal, but they basically conform to the ontology hierarchy. The experimental results show that the method is feasible, and factors such as increasing the vector dimension and expanding the text content can further improve the accuracy of the domain concepts and relations.
[ "Semantic Text Processing", "Relation Extraction", "Knowledge Representation", "Text Clustering", "Information Extraction & Text Mining" ]
[ 72, 75, 18, 29, 3 ]
SCOPUS_ID:85072198536
A Hierarchical Deep Correlative Fusion Network for Sentiment Classification in Social Media
Most existing research on sentiment analysis is based on either textual or visual data alone and cannot achieve satisfactory results. As multi-modal data can provide richer information, multi-modal sentiment analysis is attracting more and more attention and has become a hot research topic. Due to the strong semantic correlation between visual data and the co-occurring textual data in social media, mixed data of texts and images provides a new view for learning a better classifier for social media sentiment classification. A hierarchical deep correlative fusion network framework is proposed to jointly learn textual and visual sentiment representations from training samples for sentiment classification. In order to alleviate the problem of fine-grained semantic matching between image and text, both the mid-level semantic features of images and deep multi-modal discriminative correlation analysis are applied to learn the most relevant visual feature representation and semantic feature representation, while keeping both representations linearly discriminable. Motivated by the successful use of attention mechanisms, we further propose a multi-modal attention fusion network that incorporates visual and semantic feature representations to train the sentiment classifier. Experiments on real-world datasets from social networks show that the proposed method yields more accurate predictions for multi-media sentiment analysis by hierarchically capturing the internal relations between text and image.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Multimodality" ]
[ 20, 3, 36, 78, 24, 74 ]
SCOPUS_ID:84974288913
A Hierarchical Dirichlet Language Model
We discuss a hierarchical probabilistic model whose predictions are similar to those of the popular language modelling procedure known as 'smoothing'. A number of interesting differences from smoothing emerge. The insights gained from a probabilistic view of this problem point towards new directions for language modelling. The ideas of this paper are also applicable to other problems such as the modelling of triphones in speech, and DNA and protein sequences in molecular biology. The new algorithm is compared with smoothing on a two million word corpus. The methods prove to be about equally accurate, with the hierarchical model using fewer computational resources. © 1995, Cambridge University Press. All rights reserved.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
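The connection to smoothing drawn in the abstract above can be made concrete: a Dirichlet prior whose measure is a lower-order distribution yields a predictive rule of the familiar interpolated form. A hedged sketch of that rule, with illustrative notation (the paper's exact formulation may differ):

```latex
% Predictive probability under a Dirichlet prior with lower-order measure
P(w \mid u) = \frac{c(u, w) + \alpha \, P\!\left(w \mid \pi(u)\right)}{c(u) + \alpha}
```

Here c(u, w) counts occurrences of word w after context u, pi(u) is the context shortened by one word, and alpha is the prior strength; the recursion bottoms out at a uniform distribution over the vocabulary.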
http://arxiv.org/abs/1504.05929v2
A Hierarchical Distance-dependent Bayesian Model for Event Coreference Resolution
We present a novel hierarchical distance-dependent Bayesian model for event coreference resolution. While existing generative models for event coreference resolution are completely unsupervised, our model allows for the incorporation of pairwise distances between event mentions -- information that is widely used in supervised coreference models -- to guide the generative clustering process for better event clustering both within and across documents. We model the distances between event mentions using a feature-rich learnable distance function and encode them as Bayesian priors for nonparametric clustering. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods for both within- and cross-document event coreference resolution.
[ "Coreference Resolution", "Information Extraction & Text Mining", "Text Clustering" ]
[ 13, 3, 29 ]
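A toy sketch of the distance-dependent CRP prior at the heart of the model above: each event mention links to another mention with probability proportional to a decay of their distance, or to itself (starting a new cluster) with weight alpha; connected components of the links form clusters. The exponential decay and parameter names are illustrative assumptions, not the paper's learned distance function:

```python
import math
import random

def ddcrp_sample_links(distances, alpha=1.0):
    """distances[i][j]: pairwise distance between event mentions i and j."""
    n = len(distances)
    links = []
    for i in range(n):
        # Self-link weight alpha starts a new cluster; otherwise the
        # link probability decays with the pairwise distance.
        weights = [alpha if j == i else math.exp(-distances[i][j])
                   for j in range(n)]
        r, acc = random.random() * sum(weights), 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                links.append(j)
                break
    return links  # a cluster = a connected component of the link graph

links = ddcrp_sample_links([[0, 1, 5], [1, 0, 4], [5, 4, 0]])
```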
http://arxiv.org/abs/1805.01089v2
A Hierarchical End-to-End Model for Jointly Improving Text Summarization and Sentiment Classification
Text summarization and sentiment classification both aim to capture the main ideas of a text, but at different levels. Text summarization describes the text within a few sentences, while sentiment classification can be regarded as a special type of summarization that "summarizes" the text into an even more abstract form, i.e., a sentiment class. Based on this idea, we propose a hierarchical end-to-end model for joint learning of text summarization and sentiment classification, where the sentiment classification label is treated as a further "summarization" of the text summarization output. Hence, the sentiment classification layer is put upon the text summarization layer, and a hierarchical structure is derived. Experimental results on Amazon online review datasets show that our model achieves better performance than strong baseline systems on both abstractive summarization and sentiment classification.
[ "Information Retrieval", "Summarization", "Text Generation", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 30, 47, 78, 36, 3 ]
http://arxiv.org/abs/2108.09505v1
A Hierarchical Entity Graph Convolutional Network for Relation Extraction across Documents
Distantly supervised datasets for relation extraction mostly focus on sentence-level extraction, and they cover very few relations. In this work, we propose cross-document relation extraction, where the two entities of a relation tuple appear in two different documents that are connected via a chain of common entities. Following this idea, we create a dataset for two-hop relation extraction, where each chain contains exactly two documents. Our proposed dataset covers a higher number of relations than the publicly available sentence-level datasets. We also propose a hierarchical entity graph convolutional network (HEGCN) model for this task that improves performance by 1.1% F1 score on our two-hop relation extraction dataset, compared to some strong neural baselines.
[ "Multimodality", "Relation Extraction", "Structured Data in NLP", "Information Extraction & Text Mining" ]
[ 74, 75, 50, 3 ]
SCOPUS_ID:85094149152
A Hierarchical Fine-Tuning Approach Based on Joint Embedding of Words and Parent Categories for Hierarchical Multi-label Text Classification
Many important classification problems in the real world involve a large number of categories. Hierarchical multi-label text classification (HMTC) with high accuracy over large sets of closely related categories organized in a hierarchical structure or taxonomy has become a challenging problem. In this paper, we present a hierarchical fine-tuning deep learning approach for HMTC, where a joint embedding of words and their parent categories is generated by leveraging the hierarchical relations in the category structure and the textual data. A fine-tuning technique is applied to the Ordered Neurons LSTM (ONLSTM) network so that the classification results at the upper levels help the classification at the lower ones. Extensive experiments on two benchmark datasets show that the proposed method outperforms state-of-the-art hierarchical and flat multi-label text classification approaches, in particular reducing computational costs while achieving superior performance.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 12, 24, 3 ]
SCOPUS_ID:85085727446
A Hierarchical Fine-Tuning Based Approach for Multi-label Text Classification
Hierarchical text classification has recently become increasingly challenging with the growing number of classification labels. In this paper, we propose a hierarchical fine-tuning based approach for hierarchical text classification. We use the ordered neurons LSTM (ONLSTM) model, combining the embeddings of the text and the parent category, for hierarchical text classification with a large number of categories, which makes full use of the connection between upper-level and lower-level labels. Extensive experiments show that our model outperforms the state-of-the-art hierarchical model at a lower computational cost.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 12, 24, 3 ]
http://arxiv.org/abs/1811.03925v1
A Hierarchical Framework for Relation Extraction with Reinforcement Learning
Most existing methods determine relation types only after all the entities have been recognized, thus the interaction between relation types and entity mentions is not fully modeled. This paper presents a novel paradigm to deal with relation extraction by regarding the related entities as the arguments of a relation. We apply a hierarchical reinforcement learning (HRL) framework in this paradigm to enhance the interaction between entity mentions and relation types. The whole extraction process is decomposed into a hierarchy of two-level RL policies for relation detection and entity extraction respectively, so that it is more feasible and natural to deal with overlapping relations. Our model was evaluated on public datasets collected via distant supervision, and results show that it gains better performance than existing methods and is more powerful for extracting overlapping relations.
[ "Relation Extraction", "Information Extraction & Text Mining" ]
[ 75, 3 ]
http://arxiv.org/abs/2208.11283v1
A Hierarchical Interactive Network for Joint Span-based Aspect-Sentiment Analysis
Recently, span-based methods have achieved encouraging performance for joint aspect-sentiment analysis: they first extract aspects (aspect extraction) by detecting aspect boundaries and then classify the span-level sentiments (sentiment classification). However, most existing approaches either sequentially extract task-specific features, leading to insufficient feature interactions, or encode aspect features and sentiment features in a parallel manner, so that the feature representations in each task are largely independent of each other except for input sharing. Both ignore the internal correlations between aspect extraction and sentiment classification. To solve this problem, we propose a novel hierarchical interactive network (HI-ASA) to appropriately model two-way interactions between the two tasks, where the hierarchical interactions involve two steps: shallow-level interaction and deep-level interaction. First, we utilize a cross-stitch mechanism to selectively combine the task-specific features as input, to ensure proper two-way interaction. Second, a mutual information technique is applied to mutually constrain learning between the two tasks in the output layer, so that the aspect input and the sentiment input are capable of encoding features of the other task via backpropagation. Extensive experiments on three real-world datasets demonstrate HI-ASA's superiority over baselines.
[ "Text Classification", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 23, 78, 24, 3 ]
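The cross-stitch mechanism mentioned in the abstract above (originally from multi-task vision work) is a small learnable mixing of two task-specific features. A minimal sketch, with the initialization and shapes chosen for illustration rather than taken from HI-ASA:

```python
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable 2x2 mixing weights, initialized near identity so each
        # task starts out mostly keeping its own features.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1], [0.1, 0.9]]))

    def forward(self, feat_aspect, feat_sentiment):
        mixed_a = self.alpha[0, 0] * feat_aspect + self.alpha[0, 1] * feat_sentiment
        mixed_s = self.alpha[1, 0] * feat_aspect + self.alpha[1, 1] * feat_sentiment
        return mixed_a, mixed_s

cs = CrossStitch()
a, s = cs(torch.randn(4, 64), torch.randn(4, 64))  # batch of 4, dim 64
```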
SCOPUS_ID:85072868301
A Hierarchical Label Network for Multi-label EuroVoc Classification of Legislative Contents
EuroVoc is a thesaurus maintained by the European Union Publication Office, used to describe and index legislative documents. The EuroVoc concepts are organized in a hierarchical structure, with 21 domains, 127 micro-thesauri terms, and more than 6,700 detailed descriptors. The large number of concepts in the EuroVoc thesaurus makes the manual classification of legal documents highly costly. To facilitate this classification work, we present two main contributions. The first is the development of a hierarchical deep learning model for the classification of legal documents according to the EuroVoc thesaurus. Instead of training a classifier for each level, our model allows the simultaneous prediction of the three levels of the EuroVoc thesaurus. Our second contribution is a new legal corpus for evaluating the classification of documents written in Portuguese. The proposed corpus, named EUR-Lex PT, contains more than 220k documents labeled under the three EuroVoc hierarchical levels. Comparative experiments with other state-of-the-art models indicate that our approach achieves competitive results, while also offering the ability to interpret predictions through attention weights.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85128391147
A Hierarchical Language Model for CSR
We present a new language model that includes some of the most promising techniques for overcoming linguistic inadequacy - POS tagging [3] and refining [4], hierarchical, locally conditioned grammars [5], parallel modelling of acoustic and linguistic domains [6] - and some of our own: language modelling as language parsing, and a better integration of the whole process with the acoustic model, resulting in a richer educt from the language modelling process. We are building this model for a Spanish translation of the DARPA RM task, maintaining the same 1k-word vocabulary and some 1,000 sentences.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85146198285
A Hierarchical Long Short-Term Memory Encoder-Decoder Model for Abstractive Summarization
Abstractive summarization is the task of generating a concise summary of a source text, which is a challenging problem in Natural Language Processing (NLP). Many recent studies have relied on encoder-decoder sequence-to-sequence deep neural networks to solve this problem. However, most of these models treat the input as a sequence of words at the same level during the encoding process. This makes the encoding inefficient, especially for long input texts. Addressing this issue, in this paper we propose a model that encodes text in a hierarchical manner, consistent with the nature of text: a text is synthesized from sentences, and each sentence is synthesized from words. Our proposed model, which we call HLSTM (Hierarchical Long Short-Term Memory), is based on the Long Short-Term Memory model and is applied to the problem of abstractive summarization. We conducted extensive experiments on the two most popular corpora (Gigaword and Amazon Review) and obtained significant improvements over the baseline models.
[ "Language Models", "Semantic Text Processing", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 52, 72, 30, 47, 3 ]
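A hedged PyTorch sketch of the hierarchical encoding the abstract above describes: a word-level LSTM encodes each sentence into a vector, and a sentence-level LSTM encodes the sequence of sentence vectors. The hyperparameters and single-document batching are illustrative, not the paper's HLSTM configuration:

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.sent_lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, doc):
        # doc: (n_sents, n_words) token ids for a single document;
        # each sentence acts as a batch element for the word-level LSTM.
        _, (h, _) = self.word_lstm(self.embed(doc))  # h: (1, n_sents, hid)
        sent_vecs = h                # one "document" of sentence vectors
        doc_states, _ = self.sent_lstm(sent_vecs)
        return doc_states            # context-aware sentence representations

enc = HierarchicalEncoder(vocab_size=10000)
doc = torch.randint(0, 10000, (5, 12))   # 5 sentences of 12 tokens each
states = enc(doc)                         # shape (1, 5, 256)
```

A decoder for abstractive summarization would then attend over these sentence-level states rather than a flat word sequence.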
SCOPUS_ID:84880793927
A Hierarchical Method for Clustering Binary Text Image
Image clustering is a crucial task in image retrieval, filtering, and organization. Most recent work focuses on color or gray-scale images, with features extracted from text content, annotations, or image content. This paper targets binary text images and proposes a novel clustering method that can be used for automatic image processing in digital libraries and automated offices. The method is divided into three main steps. First, images are preprocessed to remove noise, correct orientation, and produce coarse classes. Second, features are extracted and similar images are grouped into new classes with a hierarchical clustering algorithm. Finally, new classes are combined with the nearest old ones under a distance condition. To speed up clustering, a Locality-Sensitive Hashing algorithm is used to accelerate the merging procedure. Experiments show that this method is faster and more efficient than the basic clustering method. © Springer-Verlag Berlin Heidelberg 2013.
[ "Visual Data in NLP", "Multimodality", "Information Extraction & Text Mining", "Text Clustering" ]
[ 20, 74, 3, 29 ]
SCOPUS_ID:85083979747
A Hierarchical Model for Data-to-Text Generation
Transcribing structured data into natural language descriptions has emerged as a challenging task, referred to as "data-to-text". These structures generally group multiple elements, as well as their attributes. Most attempts rely on translation encoder-decoder methods that linearize elements into a sequence; this, however, loses most of the structure contained in the data. In this work, we propose to overcome this limitation with a hierarchical model that encodes the data structure at both the element level and the structure level. Evaluations on RotoWire show the effectiveness of our model with respect to qualitative and quantitative metrics.
[ "Data-to-Text Generation", "Text Generation" ]
[ 16, 47 ]
http://arxiv.org/abs/1609.02745v1
A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis
Opinion mining from customer reviews has become pervasive in recent years. Sentences in reviews, however, are usually classified independently, even though they form part of a review's argumentative structure. Intuitively, sentences in a review build and elaborate upon each other; knowledge of the review structure and sentential context should thus inform the classification of each sentence. We demonstrate this hypothesis for the task of aspect-based sentiment analysis by modeling the interdependencies of sentences in a review with a hierarchical bidirectional LSTM. We show that the hierarchical model outperforms two non-hierarchical baselines, obtains results competitive with the state-of-the-art, and outperforms the state-of-the-art on five multilingual, multi-domain datasets without any hand-engineered features or external resources.
[ "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 23, 78 ]
SCOPUS_ID:85075825614
A Hierarchical Model with Recurrent Convolutional Neural Networks for Sequential Sentence Classification
Hierarchical neural network approaches have achieved outstanding results in recent sequential sentence classification work. However, it is challenging for a model to consider both the local invariant features and the word-dependency information of a sentence. In this work, we concentrate on the sentence representation and context modeling components that influence the effectiveness of the hierarchical architecture. We present a new approach called SR-RCNN to generate more precise sentence encodings, which leverages the complementary strengths of bi-directional recurrent neural networks and text convolutional neural networks to capture contextual and literal relevance information. Afterwards, statement-level encoding vectors are modeled to capture the intrinsic relations within surrounding sentences. In addition, we explore the applicability of attention mechanisms and conditional random fields to the task. Our model advances sequential sentence classification in medical abstracts to new state-of-the-art performance.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
http://arxiv.org/abs/2011.09046v2
A Hierarchical Multi-Modal Encoder for Moment Localization in Video Corpus
Identifying a short segment in a long video that semantically matches a text query is a challenging task that has important application potentials in language-based video search, browsing, and navigation. Typical retrieval systems respond to a query with either a whole video or a pre-defined video segment, but it is challenging to localize undefined segments in untrimmed and unsegmented videos where exhaustively searching over all possible segments is intractable. The outstanding challenge is that the representation of a video must account for different levels of granularity in the temporal domain. To tackle this problem, we propose the HierArchical Multi-Modal EncodeR (HAMMER) that encodes a video at both the coarse-grained clip level and the fine-grained frame level to extract information at different scales based on multiple subtasks, namely, video retrieval, segment temporal localization, and masked language modeling. We conduct extensive experiments to evaluate our model on moment localization in video corpus on ActivityNet Captions and TVR datasets. Our approach outperforms the previous methods as well as strong baselines, establishing new state-of-the-art for this task.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Multimodality" ]
[ 20, 52, 72, 24, 74 ]
http://arxiv.org/abs/1811.06031v2
A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks
Much effort has been devoted to evaluating whether multi-task learning can be leveraged to learn rich representations that can be used in various Natural Language Processing (NLP) downstream applications. However, there is still a lack of understanding of the settings in which multi-task learning has a significant effect. In this work, we introduce a hierarchical model trained in a multi-task learning setup on a set of carefully selected semantic tasks. The model is trained in a hierarchical fashion to introduce an inductive bias by supervising a set of low-level tasks at the bottom layers of the model and more complex tasks at its top layers. This model achieves state-of-the-art results on a number of tasks, namely Named Entity Recognition, Entity Mention Detection, and Relation Extraction, without hand-engineered features or external NLP tools like syntactic parsers. The hierarchical training supervision induces a set of shared semantic representations at the lower layers of the model. We show that as we move from the bottom to the top layers of the model, the hidden states of the layers tend to represent more complex semantic information.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Representation Learning", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 12, 4 ]
http://arxiv.org/abs/2004.02016v4
A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining
With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties. Traditional methods of summarizing meetings depend on complex multi-step pipelines that make joint optimization intractable. Meanwhile, there are a handful of deep neural models for text summarization and dialogue systems. However, the semantic structure and style of meeting transcripts are quite different from those of articles and conversations. In this paper, we propose a novel abstractive summary network adapted to the meeting scenario. We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the differences among speakers. Furthermore, due to the inadequacy of meeting summary data, we pretrain the model on large-scale news summary data. Empirical results show that our model outperforms previous approaches in both automatic metrics and human evaluation. For example, on the ICSI dataset, the ROUGE-1 score increases from 34.66% to 46.28%.
[ "Language Models", "Semantic Text Processing", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 52, 72, 30, 47, 3 ]
https://aclanthology.org//W16-4403/
A Hierarchical Neural Network for Information Extraction of Product Attribute and Condition Sentences
This paper describes a hierarchical neural network that we propose for sentence classification, to extract product information from product documents. The network classifies each sentence in a document into attribute and condition classes on the basis of word sequences and sentence sequences in the document. Experimental results showed that the method using the proposed network significantly outperformed baseline methods by taking the semantic representation of word and sentence sequential data into account. We also evaluated the network on two different product domains (insurance and tourism) and found that it was effective for both.
[ "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Knowledge Representation", "Information Extraction & Text Mining" ]
[ 72, 27, 11, 18, 3 ]
SCOPUS_ID:85069485473
A Hierarchical Neural Summarization Framework for Spoken Documents
Extractive text or speech summarization seeks to select indicative sentences from a source document and assemble them into a succinct summary, so as to help people browse and understand the main theme of the document efficiently. A more recent trend is towards developing supervised deep learning based methods for extractive summarization. This paper extends and contextualizes this line of research for spoken document summarization, and its contributions are at least three-fold. First, we propose a neural summarization framework with the flexibility to incorporate extra acoustic/prosodic and lexical features, in which the ROUGE evaluation metric is embedded into the training objective function and can be optimized with reinforcement learning. Second, disparate ways to integrate acoustic features into this framework are investigated. Third, the utility of our proposed summarization methods and that of several widely-used state-of-the-art ones are extensively compared and evaluated. A series of empirical experiments demonstrate the effectiveness of our summarization methods.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85066316807
A Hierarchical Quasi-Recurrent approach to Video Captioning
Video captioning has attracted considerable attention thanks to the ability of Recurrent Neural Networks to extrapolate an encoded representation of the input video and then use it to generate a description. We propose a recurrent encoding approach able to find and exploit the layered structure of the video. Unlike the established encoder-decoder procedure, in which a video is repeatedly encoded by a recurrent layer, we employ revised Quasi-Recurrent Neural Networks. We further extend their basic cell with a boundary detector in order to recognize discontinuous segment boundaries and correct the temporal connections of the encoding layer accordingly. Experiments on the Montreal Video Annotation dataset demonstrate that our approach can find suitable levels of representation of the input information while reducing the computational requirements.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 39, 47, 74 ]
http://arxiv.org/abs/2012.11960v1
A Hierarchical Reasoning Graph Neural Network for The Automatic Scoring of Answer Transcriptions in Video Job Interviews
We address the task of automatically scoring the competency of candidates based on textual features, from automatic speech recognition (ASR) transcriptions in the asynchronous video job interview (AVI). The key challenge is how to construct the dependency relation between questions and answers and conduct semantic-level interaction for each question-answer (QA) pair. Most recent studies in AVI focus on how to represent questions and answers better, but ignore the dependency information and interaction between them, which is critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within or between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit encoder to represent the temporal question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching based benchmark models. Ablation studies and experimental results with 10 random seeds also show the effectiveness and stability of our model.
[ "Visual Data in NLP", "Structured Data in NLP", "Question Answering", "Natural Language Interfaces", "Reasoning", "Multimodality" ]
[ 20, 50, 27, 11, 8, 74 ]
http://arxiv.org/abs/1906.01833v1
A Hierarchical Reinforced Sequence Operation Method for Unsupervised Text Style Transfer
Unsupervised text style transfer aims to alter text styles while preserving the content, without aligned data for supervision. Existing seq2seq methods face three challenges: 1) the transfer is weakly interpretable, 2) generated outputs struggle in content preservation, and 3) the trade-off between content and style is intractable. To address these challenges, we propose a hierarchical reinforced sequence operation method, named Point-Then-Operate (PTO), which consists of a high-level agent that proposes operation positions and a low-level agent that alters the sentence. We provide comprehensive training objectives to control the fluency, style, and content of the outputs and a mask-based inference algorithm that allows for multi-step revision based on the single-step trained agents. Experimental results on two text style transfer datasets show that our method significantly outperforms recent methods and effectively addresses the aforementioned challenges.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP", "Text Generation", "Text Style Transfer" ]
[ 80, 4, 47, 35 ]
SCOPUS_ID:85130792473
A Hierarchical Representation Model Based on Longformer and Transformer for Extractive Summarization
Automatic text summarization is a method used to compress documents while preserving the main ideas of the original text; it includes extractive summarization and abstractive summarization. Extractive text summarization extracts important sentences from the original document to serve as the summary. The document representation method is crucial for the quality of the generated summary. To effectively represent the document, we propose a hierarchical document representation model, Long-Trans-Extr, for extractive summarization, which uses Longformer as the sentence encoder and a Transformer as the document encoder. The advantage of Longformer as the sentence encoder is that the model can take long documents of up to 4,096 tokens as input while adding relatively little computation. The proposed Long-Trans-Extr model is evaluated on three benchmark datasets: CNN (Cable News Network), DailyMail, and the combined CNN/DailyMail. It achieves 43.78 (ROUGE-1) and 39.71 (ROUGE-L) on CNN/DailyMail, and 33.75 (ROUGE-1), 13.11 (ROUGE-2), and 30.44 (ROUGE-L) on the CNN dataset. These are very competitive results, and they show that our model performs especially well on long documents such as those in the CNN corpus.
[ "Language Models", "Semantic Text Processing", "Representation Learning", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 52, 72, 12, 30, 47, 3 ]
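A hedged sketch of the two-level representation described above: per-sentence vectors (assumed to come from a pretrained Longformer sentence encoder; random tensors stand in for them here) are contextualized by a Transformer document encoder and scored for extractive selection. All dimensions and layer counts are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

hid = 768                                   # Longformer-base hidden size
doc_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hid, nhead=8, batch_first=True),
    num_layers=2,
)
scorer = nn.Linear(hid, 1)                  # per-sentence extraction score

sent_vecs = torch.randn(1, 20, hid)         # stand-in for 20 encoded sentences
scores = scorer(doc_encoder(sent_vecs)).squeeze(-1)   # (1, 20)
summary_idx = scores.topk(3).indices        # take 3 sentences as the summary
```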
SCOPUS_ID:85105727255
A Hierarchical Sequence-To-Sequence Model for Korean POS Tagging
Part-of-speech (POS) tagging is a fundamental task in natural language processing. Korean POS tagging consists of two subtasks: morphological analysis and POS tagging. In recent years, scholars have tended to use the seq2seq model to solve this problem. The full context of a sentence is considered in these seq2seq-based Korean POS tagging methods. However, Korean morphological analysis relies more on local contextual information, and in many cases there exists a one-to-one matching between morpheme surface form and base form. To make better use of these characteristics, we propose a hierarchical seq2seq model. In our model, the low-level Bi-LSTM encodes the syllable sequence, whereas the high-level Bi-LSTM models the context information of the whole sentence, and the decoder generates the morpheme base form syllables as well as the POS tags. To improve the accuracy of the morpheme base form recovery, we introduce a convolution layer and an attention mechanism into our model. The experimental results on the Sejong corpus show that our model outperforms strong baseline systems in both morpheme-level F1-score and eojeol-level accuracy, achieving state-of-the-art performance.
[ "Language Models", "Semantic Text Processing", "Morphology", "Syntactic Text Processing", "Tagging" ]
[ 52, 72, 73, 15, 63 ]
SCOPUS_ID:85045733609
A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS)
Recent advances in neural network architectures and training algorithms have shown the effectiveness of representation learning. Neural-network-based models generate better representations than traditional ones and are able to automatically learn distributed representations for sentences and documents. To this end, we propose a novel model that addresses several issues not adequately handled by previously proposed models, such as the memory problem and incorporating knowledge of document structure. Our model uses a hierarchical structured self-attention mechanism to create the sentence and document embeddings. This architecture mirrors the hierarchical structure of the document and in turn enables us to obtain better feature representations. The attention mechanism provides an extra source of information to guide the summary extraction. The new model treats the summarization task as a classification problem in which the model computes the respective probabilities of sentence-summary membership. The model predictions are based on several features such as information content, salience, novelty, and positional representation. The proposed model was evaluated on two well-known datasets, CNN/Daily Mail and DUC 2002. The experimental results show that our model outperforms the current extractive state of the art by a considerable margin.
[ "Semantic Text Processing", "Representation Learning", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 72, 12, 30, 47, 3 ]
http://arxiv.org/abs/2003.13841v1
A Hierarchical Transformer for Unsupervised Parsing
The underlying structure of natural language is hierarchical; words combine into phrases, which in turn form clauses. An awareness of this hierarchical structure can aid machine learning models in performing many linguistic tasks. However, most such models just process text sequentially, and no bias towards learning hierarchical structure is encoded into their architecture. In this paper, we extend the recent transformer model (Vaswani et al., 2017) by enabling it to learn hierarchical representations. To achieve this, we adapt the ordering mechanism introduced in Shen et al., 2018, to the self-attention module of the transformer architecture. We train our new model on language modelling and then apply it to the task of unsupervised parsing. We achieve reasonable results on the freely available subset of the WSJ10 dataset with an F1-score of about 50%.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 4 ]
http://arxiv.org/abs/2012.14781v1
A Hierarchical Transformer with Speaker Modeling for Emotion Recognition in Conversation
Emotion Recognition in Conversation (ERC) is a more challenging task than conventional text emotion recognition. It can be regarded as a personalized and interactive emotion recognition task, which should consider not only the semantic information of the text but also the influence of speakers. Current methods model speakers' interactions by building a relation between every two speakers. However, this fine-grained but complicated modeling is computationally expensive, hard to extend, and can only consider local context. To address this problem, we simplify the complicated modeling to a binary version: Intra-Speaker and Inter-Speaker dependencies, without identifying every unique speaker relative to the targeted speaker. To better achieve this simplified interaction modeling in the Transformer, which has an excellent ability to handle long-distance dependencies, we design three types of masks and utilize them in three independent Transformer blocks. The designed masks respectively model conventional context, Intra-Speaker dependency, and Inter-Speaker dependency. Furthermore, the different speaker-aware information extracted by the Transformer blocks contributes diversely to the prediction, and therefore we utilize an attention mechanism to weight them automatically. Experiments on two ERC datasets indicate that our model is effective in achieving better performance.
[ "Language Models", "Semantic Text Processing", "Natural Language Interfaces", "Sentiment Analysis", "Emotion Analysis", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 11, 78, 61, 38 ]
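A small sketch of the binary speaker-dependency masks the abstract above describes: from the speaker id of each utterance, build boolean attention masks for the conventional-context, Intra-Speaker, and Inter-Speaker Transformer blocks. The convention True = attention allowed, and keeping self-attention in the Inter-Speaker mask, are illustrative assumptions rather than the paper's exact definitions:

```python
import torch

def speaker_masks(speaker_ids):
    s = torch.tensor(speaker_ids)
    n = len(s)
    context = torch.ones(n, n, dtype=torch.bool)    # everyone attends to everyone
    same = s.unsqueeze(0) == s.unsqueeze(1)         # same-speaker positions
    intra = same                                    # Intra-Speaker dependency
    inter = ~same | torch.eye(n, dtype=torch.bool)  # other speakers, plus self
    return context, intra, inter

# Five utterances by two speakers:
ctx, intra, inter = speaker_masks([0, 1, 0, 1, 1])
```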
SCOPUS_ID:78049527922
A Hierarchical visual model for video object summarization
We propose a novel method for removing irrelevant frames from a video given user-provided frame-level labeling for a very small number of frames. We first hypothesize a number of windows which possibly contain the object of interest, and then determine which window(s) truly contain the object of interest. Our method enjoys several favorable properties. First, compared to approaches where a single descriptor is used to describe a whole frame, each window's feature descriptor has the chance of genuinely describing the object of interest; hence it is less affected by background clutter. Second, by considering the temporal continuity of a video instead of treating frames as independent, we can hypothesize the location of the windows more accurately. Third, by infusing prior knowledge into the patch-level model, we can precisely follow the trajectory of the object of interest. This allows us to largely reduce the number of windows and hence reduce the chance of overfitting the data during learning. We demonstrate the effectiveness of the method by comparing it to several other semi-supervised learning approaches on challenging video clips. © 2010 IEEE.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Summarization", "Text Generation", "Multimodality" ]
[ 20, 3, 30, 47, 74 ]
SCOPUS_ID:84994092411
A Hierarchical word sequence language model
Most language models used for natural language processing are continuous. However, the assumption behind such models is too simple to cope with the data sparsity problem. Although many useful smoothing techniques have been developed to estimate unseen sequences, it is still important to make full use of the contextual information in the training data. In this paper, we propose a hierarchical word sequence language model to relieve the data sparsity problem. Experiments verified the effectiveness of our model.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
https://aclanthology.org//W19-3510/
A Hierarchically-Labeled Portuguese Hate Speech Dataset
Over the past years, the amount of online offensive speech has been growing steadily. To cope with it successfully, machine learning methods are applied. However, ML-based techniques require sufficiently large annotated datasets. In recent years, different datasets were published, mainly for English. In this paper, we present a new dataset for Portuguese, which has not been in focus so far. The dataset is composed of 5,668 tweets. For its annotation, we defined two different schemes used by annotators with different levels of expertise. First, non-experts annotated the tweets with binary labels ('hate' vs. 'no-hate'). Second, expert annotators classified the tweets following a fine-grained hierarchical multiple-label scheme with 81 hate speech categories in total. The inter-annotator agreement varied from category to category, reflecting the insight that some types of hate speech are more subtle than others and that their detection depends on personal perception. This hierarchical annotation scheme is the main contribution of the presented work, as it facilitates the identification of different types of hate speech and their intersections. To demonstrate the usefulness of our dataset, we carried out a baseline classification experiment with pre-trained word embeddings and an LSTM on the binary-labeled data, with a state-of-the-art outcome.
[ "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 17, 4 ]
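A minimal sketch of the kind of baseline the abstract above mentions (pre-trained word embeddings plus an LSTM for the binary task); the embedding matrix, sizes, and architecture details here are stand-ins, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class HateSpeechBaseline(nn.Module):
    def __init__(self, pretrained_emb):              # (vocab, dim) tensor
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)
        self.lstm = nn.LSTM(pretrained_emb.size(1), 128, batch_first=True)
        self.out = nn.Linear(128, 2)                 # 'hate' vs. 'no-hate'

    def forward(self, token_ids):                    # (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(token_ids))
        return self.out(h.squeeze(0))                # (batch, 2) logits

model = HateSpeechBaseline(torch.randn(5000, 300))   # stand-in embeddings
logits = model(torch.randint(0, 5000, (4, 30)))      # 4 tweets, 30 tokens
```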
SCOPUS_ID:85112111487
A Hierarchy of Interests: Discursive Practices on the Value of Particle and High-Energy Physics
Current science policy emphasizes practical outcomes. In this article, I explore how a fundamental research community addresses the value of research, an area that has received little attention. In the wake of the discovery of the Higgs boson, I analyse how particle physicists interpret the values of their research in interviews and a strategic document. The result indicates a hierarchy of interests that coordinates different values of particle physics in discourse: the status of scientific and cultural value is higher than that of societal and material value. This finding implies that value propositions are inseparable from the articulation of interests, and qualitative discourse analysis can approach a combined understanding of the two. In science policy studies, there are not yet sufficient studies on how scientists appraise different values of research. The hierarchical discursive practice on values sheds light on a culture different from policy trends.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85040053208
A Hierarchy-to-Sequence Attentional Neural Machine Translation Model
Although sequence-to-sequence attentional neural machine translation (NMT) has achieved great progress recently, it is confronted with two challenges: learning optimal model parameters for long parallel sentences and effectively exploiting different scopes of context. In this paper, partially inspired by the idea of segmenting a long sentence into short clauses, each of which can be easily translated by NMT, we propose a hierarchy-to-sequence attentional NMT model to handle these two challenges. Our encoder takes the segmented clause sequence as input and explores a hierarchical neural network structure to model words, clauses, and sentences at different levels, particularly with two layers of recurrent neural networks modeling semantic compositionality at the word and clause levels. Correspondingly, the decoder sequentially translates segmented clauses and simultaneously applies two types of attention models to capture inter-clause and intra-clause context for translation prediction. In this way, we can not only improve parameter learning, but also better exploit different scopes of context for translation. Experimental results on Chinese-English and English-German translation demonstrate the superiority of the proposed model over the conventional NMT model.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
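A minimal sketch of the hierarchical encoding idea: a word-level RNN summarizes each clause, and a clause-level RNN runs over the resulting clause vectors; the two outputs would then back the intraclause and interclause attention models mentioned in the abstract. All sizes are assumptions, and the decoder is omitted.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Two-level compositionality: words -> clause vectors -> sentence."""
    def __init__(self, vocab=10000, emb=256, hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.word_rnn = nn.GRU(emb, hid, batch_first=True)
        self.clause_rnn = nn.GRU(hid, hid, batch_first=True)

    def forward(self, clauses):                     # (n_clauses, max_words)
        word_states, h = self.word_rnn(self.emb(clauses))
        clause_vecs = h[-1].unsqueeze(0)            # (1, n_clauses, hid)
        clause_states, _ = self.clause_rnn(clause_vecs)
        # word_states would feed intraclause attention, clause_states
        # interclause attention, at each decoding step.
        return word_states, clause_states.squeeze(0)

enc = HierarchicalEncoder()
segmented_sentence = torch.randint(1, 10000, (3, 12))  # 3 clauses, 12 tokens each
word_ctx, clause_ctx = enc(segmented_sentence)
print(word_ctx.shape, clause_ctx.shape)
```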
SCOPUS_ID:85135075440
A High-Efficiency Knowledge Distillation Image Caption Technology
Image captioning is widely considered in machine learning applications. Its purpose is to describe a given picture in text accurately. Current systems use the Encoder-Decoder architecture from deep learning. To further increase the semantic information transmitted through the feature representation after distillation, this paper proposes a knowledge distillation framework that improves on the teacher's results by extracting features at different semantic levels from different fields of view, adopting label normalization in the loss function, and handling unmatched image-sentence pairs, all in order to make the process more efficient. Experimental results show that this knowledge distillation architecture can strengthen the semantic information transmitted through the feature representation after distillation, train a more efficient model on less data, and obtain a higher accuracy rate.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Green & Sustainable NLP", "Captioning", "Text Generation", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 20, 52, 72, 68, 39, 47, 4, 74 ]
SCOPUS_ID:85126812721
A High-Precision Method for Segmentation and Recognition of Shopping Mall Plans
Most studies on map segmentation and recognition are focused on architectural floor plans, while there are very few analyses of shopping mall plans. The objective of the work is to accurately segment and recognize the shopping mall plan, obtaining location and semantic information for each room via segmentation and recognition. This work can be used in other applications such as indoor robot navigation, building area and location analysis, and three-dimensional reconstruction. First, we identify and match the catalog of a mall floor plan to obtain matching text, and then we use the two-stage region growth method we proposed to segment the preprocessed floor plan. The room number is then obtained by sending each segmented room section to an OCR (optical character recognition) system for identification. Finally, the system retrieves the matching text to match the room number in order to obtain the room name, and outputs the needed room location and semantic information. It is considered a successful detection when a room region can be successfully segmented and identified. The proposed method is evaluated on a dataset including 1340 rooms. Experimental results show that the accuracy of room segmentation is 92.54%, and the accuracy of room recognition is 90.56%. The total detection accuracy is 83.81%.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
https://aclanthology.org//W19-5212/
A High-Quality Multilingual Dataset for Structured Documentation Translation
This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss trade-offs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
http://arxiv.org/abs/2208.04243v1
A High-Quality and Large-Scale Dataset for English-Vietnamese Speech Translation
In this paper, we introduce a high-quality and large-scale benchmark dataset for English-Vietnamese speech translation with 508 audio hours, consisting of 331K triplets of (sentence-lengthed audio, English source transcript sentence, Vietnamese target subtitle sentence). We also conduct empirical experiments using strong baselines and find that the traditional "Cascaded" approach still outperforms the modern "End-to-End" approach. To the best of our knowledge, this is the first large-scale English-Vietnamese speech translation study. We hope both our publicly available dataset and study can serve as a starting point for future research and applications on English-Vietnamese speech translation. Our dataset is available at https://github.com/VinAIResearch/PhoST
[ "Machine Translation", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Multilinguality" ]
[ 51, 70, 74, 47, 0 ]
SCOPUS_ID:70349733195
A High-speed word level finite field multiplier in F_{2^m} using redundant representation
In this paper, a high-speed word level finite field multiplier in F_{2^m} using redundant representation is proposed. For the class of fields for which a type I optimal normal basis exists, the new architecture has significantly higher speed compared to previously proposed architectures using either normal basis or redundant representation, at the expense of moderately higher area complexity. One of the unique features of the proposed multiplier is that the critical path delay is a function of neither the field size nor the word size. It is shown that the new multiplier outperforms all the other multipliers in comparison when considering the product of area and delay as a measure of performance. VLSI implementation of the proposed multiplier in a 0.18-μm complementary metal-oxide-semiconductor (CMOS) process is also presented. © 2006 IEEE.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
SCOPUS_ID:85105719699
A Hindi Image Caption Generation Framework Using Deep Learning
Image captioning is the process of generating a textual description of an image that aims to describe the salient parts of the given image. It is an important problem, as it involves computer vision and natural language processing, where computer vision is used for understanding images, and natural language processing is used for language modeling. A lot of work has been done on image captioning for the English language. In this article, we have developed a model for image captioning in the Hindi language. Hindi is the official language of India, and it is the fourth most spoken language in the world, spoken in India and South Asia. To the best of our knowledge, this is the first attempt to generate image captions in the Hindi language. A dataset is manually created by translating the well known MSCOCO dataset from English to Hindi. Finally, different types of attention-based architectures are developed for image captioning in the Hindi language. These attention mechanisms are new for the Hindi language, as they have never been used for Hindi before. The obtained results of the proposed model are compared with several baselines in terms of BLEU scores, and the results show that our model performs better than the others. Manual evaluation of the obtained captions in terms of adequacy and fluency also reveals the effectiveness of our proposed approach. Availability of resources: The code of the article is available at https://github.com/santosh1821cs03/Image_Captioning_Hindi_Language; the dataset will be made available at http://www.iitp.ac.in/g1/4ai-nlp-ml/resources.html.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 39, 47, 74 ]
SCOPUS_ID:84980383359
A Hindi Question Answering System using Machine Learning approach
A Question Answering (QA) System is essentially an Information Retrieval (IR) system in which a query is posed to the system and it retrieves the correct or closest results to the specific question asked in natural language. It is one of the outgrowths of Natural Language Interface to Database (NLIDB) research. The paper discusses the implementation of a Hindi language QA system developed using a Machine Learning approach. The implemented QA system is divided into three phases: accepting the natural language (NL) query, where the input query is read, preprocessed and tokenized; next the feature extraction (FE) phase, where specific feature vectors are identified from the results of the previous phase; and finally the classification phase, where the Naïve Bayes classifier has been used along with the knowledge base already stored in the system. This paper shows that the concepts of similarity and classification provide better results than the use of the 'equals' concept in determining the overall accuracy of finding relevant answers to the specific questions asked by the user.
[ "Text Classification", "Question Answering", "Natural Language Interfaces", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 27, 11, 24, 3 ]
http://arxiv.org/abs/1211.2741v1
A Hindi Speech Actuated Computer Interface for Web Search
Aiming at increased system simplicity and flexibility, an audio-evoked system was developed by integrating a simplified headphone and user-friendly software design. This paper describes a Hindi Speech Actuated Computer Interface for Web search (HSACIWS), which accepts spoken queries in the Hindi language and provides the search results on the screen. This system recognizes spoken queries by large vocabulary continuous speech recognition (LVCSR), retrieves relevant documents by text retrieval, and provides the search results on the Web by the integration of the Web and voice systems. The LVCSR in this system showed adequate performance levels for speech, with acoustic and language models derived from a query corpus with target contents.
[ "Speech & Audio in NLP", "Information Retrieval", "Multimodality" ]
[ 70, 24, 74 ]
SCOPUS_ID:79952061440
A Hindi question answering system for E-learning documents
To empower the general mass through access to information and knowledge, organized efforts are being made to develop relevant content in local languages and provide local language capabilities to utility software. We have developed a Question Answering (QA) System for Hindi documents that would be relevant for masses using Hindi as primary language of education. The user should be able to access information from E-learning documents in a user friendly way, that is by questioning the system in their native language Hindi and the system will return the intended answer (also in Hindi) by searching in context from the repository of Hindi documents. The language constructs, query structure, common words, etc. are completely different in Hindi as compared to English. A novel strategy, in addition to conventional search and NLP techniques, was used to construct the Hindi QA system. The focus is on context based retrieval of information. For this purpose we implemented a Hindi search engine that works on locality-based similarity heuristics to retrieve relevant passages from the collection. It also incorporates language analysis modules like stemmer and morphological analyzer as well as self constructed lexical database of synonyms. The experimental results over corpus of two important domains of agriculture and science show effectiveness of our approach.
[ "Passage Retrieval", "Natural Language Interfaces", "Question Answering", "Information Retrieval" ]
[ 66, 11, 27, 24 ]
SCOPUS_ID:84920873456
A History of English: From Proto-Indo-European to Proto-Germanic
This volume traces the prehistory of English from Proto-Indo-European, its earliest reconstructable ancestor, to Proto-Germanic, the latest ancestor shared by all the Germanic languages. It begins with a grammatical sketch of Proto-Indo-European, then discusses in detail the linguistic changes - especially in phonology and morphology - that occurred in the development to Proto-Germanic. The final chapter presents a grammatical sketch of Proto-Germanic. This is the first volume of a linguistic history of English. It is written for fellow-linguists who are not specialists in historical linguistics, especially for theoretical linguists. Its primary purpose is to provide accurate information about linguistic changes in an accessible conceptual framework. A secondary purpose is to begin the compilation of a reliable corpus of phonological and morphological changes to improve the empirical basis of the understanding of historical phonology and morphology.
[ "Phonology", "Syntactic Text Processing", "Morphology" ]
[ 6, 15, 73 ]
SCOPUS_ID:84922269198
A History of Psycholinguistics: The Pre-Chomskyan Era
How do we manage to speak and understand language? How do children acquire these skills and how does the brain support them? These psycholinguistic issues have been studied for more than two centuries. Though many Psycholinguists tend to consider their history as beginning with the Chomskyan 'cognitive revolution' of the late 1950s/1960s, the history of empirical psycholinguistics actually goes back to the end of the eighteenth century. This book tells the fascinating history of the doctors, pedagogues, linguists, and psychologists who created this discipline, looking at how they made their important discoveries about the language regions in the brain, about the high-speed accessing of words in speaking and listening, on the child's invention of syntax, on the disruption of language in aphasic patients and so much more. Psycholinguistics has four historical roots, which, by the end of the nineteenth century, had merged. By then, the discipline, usually called the psychology of language, was established. The first root was comparative linguistics, which raised the issue of the psychological origins of language. The second root was the study of language in the brain, with Franz Gall as the pioneer and the Broca and Wernicke discoveries as major landmarks. The third root was the diary approach to child development, which emerged from Rousseau's Émile. The fourth root was the experimental laboratory approach to speech and language processing, which originated from Franciscus Donders' mental chronometry. Wilhelm Wundt unified these four approaches in his monumental Die Sprache of 1900. These four perspectives of psycholinguistics continued into the twentieth century but in quite divergent frameworks. There was German consciousness and thought psychology, Swiss/French and Prague/Viennese structuralism, Russian and American behaviorism, and almost aggressive holism in aphasiology. As well as reviewing all these perspectives, the book looks at the deep disruption of the field during the Third Reich and its optimistic, multidisciplinary re-emergence during the 1950s with the mathematical theory of communication as a major impetus.
[ "Psycholinguistics", "Linguistics & Cognitive NLP" ]
[ 77, 48 ]
SCOPUS_ID:85049331038
A Holistic Approach for Recognition of Complete Urdu Ligatures Using Hidden Markov Models
Optical Character Recognition (OCR) is one of the continuously explored problems. Presently, commercial character recognizers are available reporting near-100% recognition rates on text in a number of scripts. Despite these advancements, OCR systems have yet to mature for cursive scripts like Urdu. This study presents a holistic technique for recognition of Urdu text in the Nastaliq font using 'complete' ligatures as recognition units. The term 'complete' refers to a partial word including its main body and secondary components (dots and diacritic marks). The Discrete Wavelet Transform (DWT) is employed as the feature extractor, while a separate Hidden Markov Model (HMM) is trained for each ligature considered in our study. More than 2000 frequently used unique Urdu ligatures from the standard CLE (Center of Language Engineering) dataset are considered in our evaluations. The system yields a promising accuracy of 88.87% on more than 10,000 partial words.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
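A hedged sketch of per-class HMM recognition in the spirit of the abstract above: one GaussianHMM per ligature, DWT coefficients as frame features, and classification by maximum log-likelihood. The data here is synthetic, and the frame size, number of states, and ligature ids are assumptions.

```python
import numpy as np
import pywt
from hmmlearn import hmm

def dwt_features(column_strip):
    """Single-level Haar DWT of one sliding strip of the ligature image."""
    approx, detail = pywt.dwt(column_strip, "haar")
    return np.concatenate([approx, detail])

rng = np.random.default_rng(0)

def fake_sample():
    # Pretend frames: each ligature sample is a sequence of 16-dim DWT vectors.
    return np.stack([dwt_features(rng.normal(size=16)) for _ in range(20)])

models = {}
for ligature in ["lig_A", "lig_B"]:                # hypothetical ligature ids
    train = np.vstack([fake_sample() for _ in range(5)])
    m = hmm.GaussianHMM(n_components=4, n_iter=20)
    m.fit(train, lengths=[20] * 5)                 # 5 sequences of 20 frames
    models[ligature] = m

test = fake_sample()
pred = max(models, key=lambda k: models[k].score(test))
print("recognized as:", pred)                      # highest log-likelihood wins
```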
https://aclanthology.org//2022.bigscience-1.8/
A Holistic Assessment of the Carbon Footprint of Noor, a Very Large Arabic Language Model
As ever larger language models grow more ubiquitous, it is crucial to consider their environmental impact. Characterised by extreme size and resource use, recent generations of models have been criticised for their voracious appetite for compute, and thus significant carbon footprint. Although reporting of carbon impact has grown more common in machine learning papers, this reporting is usually limited to compute resources used strictly for training. In this work, we propose a holistic assessment of the footprint of an extreme-scale language model, Noor. Noor is an ongoing project aiming to develop the largest multi-task Arabic language models–with up to 13B parameters–leveraging zero-shot generalisation to enable a wide range of downstream tasks via natural language instructions. We assess the total carbon bill of the entire project: starting with data collection and storage costs, including research and development budgets, pretraining costs, future serving estimates, and other exogenous costs necessary for this international cooperation. Notably, we find that inference costs and exogenous factors can have a significant impact on total budget. Finally, we discuss pathways to reduce the carbon footprint of extreme-scale models.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
http://arxiv.org/abs/2301.10606v1
A Holistic Cascade System, benchmark, and Human Evaluation Protocol for Expressive Speech-to-Speech Translation
Expressive speech-to-speech translation (S2ST) aims to transfer prosodic attributes of source speech to target speech while maintaining translation accuracy. Existing research in expressive S2ST is limited, typically focusing on a single expressivity aspect at a time. Likewise, this research area lacks standard evaluation protocols and well-curated benchmark datasets. In this work, we propose a holistic cascade system for expressive S2ST, combining multiple prosody transfer techniques previously considered only in isolation. We curate a benchmark expressivity test set in the TV series domain and explore a second dataset in the audiobook domain. Finally, we present a human evaluation protocol to assess multiple expressive dimensions across speech pairs. Experimental results indicate that bilingual annotators can assess the quality of expressive preservation in S2ST systems, and the holistic modeling approach outperforms single-aspect systems. Audio samples can be accessed through our demo webpage: https://facebookresearch.github.io/speech_translation/cascade_expressive_s2st.
[ "Machine Translation", "Speech & Audio in NLP", "Multimodality", "Text Generation", "Multilinguality" ]
[ 51, 70, 74, 47, 0 ]
http://arxiv.org/abs/1911.01248v1
A Holistic Natural Language Generation Framework for the Semantic Web
With the ever-growing generation of data for the Semantic Web comes an increasing demand for this data to be made available to non-semantic Web experts. One way of achieving this goal is to translate the languages of the Semantic Web into natural language. We present LD2NL, a framework for verbalizing the three key languages of the Semantic Web, i.e., RDF, OWL, and SPARQL. Our framework is based on a bottom-up approach to verbalization. We evaluated LD2NL in an open survey with 86 persons. Our results suggest that our framework can generate verbalizations that are close to natural languages and that can be easily understood by non-experts. Therewith, it enables non-domain experts to interpret Semantic Web data with more than 91% of the accuracy of domain experts.
[ "Text Generation" ]
[ 47 ]
SCOPUS_ID:85062890603
A Holistic Ranking Scheme for Apps
App stores or application distribution platforms allow users to present their sentiments about apps in the form of ratings and reviews. However, selecting the "best one" from available apps that offer similar functionality is a difficult task - especially if the selection process only uses the average star rating of the apps. To address this challenge, we have introduced a trust-based selection and ranking system for similar apps by combining the programmatic view ("internal view") and the sentiments based on user reviews ("external view"). The rankings based on the average star ratings are compared with the rankings generated by our approach. We empirically evaluate our approach by using publicly available apps from the Google Play Store. For this study, we have chosen a dataset of 250 apps with 114,480 reviews in total from the top 5 different categories - of which we focused our experiments on 90 apps that have at least 1000 reviews. Our experiments indicate that the proposed holistic ranking that encompasses both the internal and external views is a better alternative than any ranking that focuses only on the internal or external view.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85062916097
A Home Service-Oriented Question Answering System with High Accuracy and Stability
With the development of deep learning, neural network-based (NN-based) methods have been applied widely in question answering (QA) and achieved significant progress. Although an NN-based QA system can obtain better performance and save manual effort, the system is likely to suffer attacks from external perturbations due to its black-box character. Focusing on the area of home service, we present an innovative method for constructing an NN-based QA system. With our method, the accuracy can be further increased, and the stability can be enhanced at the same time. Inspired by observing the process of performing home services, the tool information (tool names and tool sequences) is integrated with question terms as a way of extending the question representation. The conception of attribution (word importance) is introduced to gauge word importance, since NN-based models can be easily affected by uninformative question terms. In order to optimize the model parameters effectively, reinforcement learning is employed, and factors of both accuracy and stability are regarded as rules in designing rewards. A few state-of-the-art methods are adopted to evaluate the effectiveness of the proposed method. The experimental results demonstrate that the model's ability to produce effective answers in QA can be further improved with our method, and the model's stability under perturbations can be enhanced with our method.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:85056634887
A Homomorphic Property of the Cryptosystems Based on Word Problem
There are many cryptosystems in the literature based on formal language theory. Some of them are public key cryptosystems and others are symmetric key cryptosystems. Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted result which, when decrypted, matches the result of operations performed on the plaintext. In this paper, we discuss a homomorphic property of public key cryptosystems based on the word problem. An encryption scheme is probabilistic if, when the same message is encrypted several times, different ciphertexts are obtained. A homomorphic property of public key encryption schemes based on word problems, along with the probabilistic property, is used in this paper for constructing an electronic voting scheme.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85065761157
A Hotel Review Corpus for Argument Mining
With the development of the network, the study of user reviews has become more important in academia and industry, because user reviews gradually influence the reputation of products and services. Argument mining has recently become a hot topic, and it is currently at the center of attention of the text mining research community. We can dig deeply into the information contained in user reviews with argument mining technology. This paper builds a corpus of hotel reviews and presents a novel scheme to model arguments, their components and their relations in hotel reviews in English. In order to capture the structure of argumentative discourse, the annotation scheme includes the annotation of Major Claim, Claim, Premise, Background and Recommendation as well as Support and Attack relations. The sentiment polarity of argument components comprises Positive, Negative and Neutral. We conduct a manual annotation study with 300 annotators on 1427 hotel reviews. The final corpus retains 85 hotel reviews selected according to inter-rater agreement, and it will encourage future study in argument recognition.
[ "Argument Mining", "Reasoning" ]
[ 60, 8 ]
SCOPUS_ID:85129742944
A Human Quality Text to Speech System for Sinhala
This paper proposes an approach to implementing a Text to Speech system for the Sinhala language using the MaryTTS framework. In this project, a set of rules for mapping text to sound were identified, and a unit selection mechanism was employed. The datasets used for this study were gathered from newspaper articles, and the corresponding sentences were recorded by a professional speaker. A user-level evaluation was conducted with 20 candidates, where the intelligibility and the naturalness of the developed Sinhala TTS system received an approximate score of 70%, and the overall speech quality approximately 60%.
[ "Syntactic Text Processing", "Phonology", "Speech & Audio in NLP", "Multimodality" ]
[ 15, 6, 70, 74 ]
http://arxiv.org/abs/2303.06944v1
A Human Subject Study of Named Entity Recognition (NER) in Conversational Music Recommendation Queries
We conducted a human subject study of named entity recognition on a noisy corpus of conversational music recommendation queries, with many irregular and novel named entities. We evaluated human NER linguistic behaviour in these challenging conditions and compared it with the most common NER systems nowadays, fine-tuned transformers. Our goal was to learn about the task to guide the design of better evaluation methods and NER algorithms. The results showed that NER in our context was quite hard for both humans and algorithms under a strict evaluation schema; humans had higher precision, while the model had higher recall because of entity exposure, especially during pre-training; and entity types had different error patterns (e.g. frequent typing errors for artists). The released corpus goes beyond predefined frames of interaction and can support future work in conversational music recommendation.
[ "Information Extraction & Text Mining", "Speech & Audio in NLP", "Natural Language Interfaces", "Named Entity Recognition", "Dialogue Systems & Conversational Agents", "Multimodality" ]
[ 3, 70, 11, 34, 38, 74 ]
http://arxiv.org/abs/1912.00667v1
A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection
Microblogging platforms such as Twitter are increasingly being used in event detection. Existing approaches mainly use machine learning models and rely on event-related keywords to collect the data for model training. These approaches make strong assumptions on the distribution of the relevant micro-posts containing the keyword -- referred to as the expectation of the distribution -- and use it as a posterior regularization parameter during model training. Such approaches are, however, limited as they fail to reliably estimate the informativeness of a keyword and its expectation for model training. This paper introduces a Human-AI loop approach to jointly discover informative keywords for model training while estimating their expectation. Our approach iteratively leverages the crowd to estimate both keyword specific expectation and the disagreement between the crowd and the model in order to discover new keywords that are most beneficial for model training. These keywords and their expectation not only improve the resulting performance but also make the model training process more transparent. We empirically demonstrate the merits of our approach, both in terms of accuracy and interpretability, on multiple real-world datasets and show that our approach improves the state of the art by 24.3%.
[ "Event Extraction", "Information Extraction & Text Mining" ]
[ 31, 3 ]
SCOPUS_ID:85115649390
A Human-Human Interaction-Driven Framework to Address Societal Issues
The scientific contribution of this paper is a multilayered Human-Human Interaction driven framework that aims to connect the needs of different sectors of the society to provide a long-term, viable, robust, and implementable solution for addressing multiple societal, economic, and humanitarian issues related to the increasing population of the world. ‘Connecting dots’ here involves issues of three major constituents of the society: the loneliness in the rapidly increasing elderly population, the increasing housing needs of low-income families, and caregiver shortage. The proposed framework would facilitate mutually beneficial, sustainable, equitable, and long-term solutions, based on several factors, to address these global societal challenges. The development of this framework involved integrating the latest advancements from Human-Human Interaction, Big Data, Information Retrieval, and Natural Language Processing. The results presented and discussed uphold the significance, relevance, and potential of this framework for addressing these above-mentioned societal issues associated with the increasing global population.
[ "Information Retrieval" ]
[ 24 ]
SCOPUS_ID:85099575695
A Human-Machine Interaction Scheme Based on Background Knowledge in 6G-Enabled IoT Environment
6G-Enabled Internet of Things (IoT) is about to open a new era of Internet of Everything (IoE). It creates favorable conditions for new application services. The human-machine dialogue system, one of the most important forms of human-machine interaction, is expected to replace mobile applications in the future. This article proposes a dialogue generation scheme named background knowledge-aware dialogue generation model with pretrained encoders (BKADGPE). Dialogue generation, which takes the context as input and response as output, is a sequence-to-sequence (Seq2Seq) task. Instead of only generating the response based on the previous sequence of utterances, background knowledge-aware dialogue generation is also relying on background knowledge documents. This is because people often communicate based on their background knowledge. This article divides it into two tasks: 1) a knowledge selection task and 2) a response generation task. One of the latest language pretraining models, a lite bidirectional encoder representations from transformers (ALBERT), is applied as the encoder. In the knowledge selection task, ALBERT adds the linear layer and softmax layer to predict the content-related knowledge span. In the response generation task, the ALBERT after fine-tuning through the knowledge selection task adds the left-context-only transformer with a copy mechanism to incorporate background knowledge span into the generated response. Empirical studies on the HOLL-E dataset show that the result of BKADGPE is better than the related works.
[ "Language Models", "Semantic Text Processing", "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 14, 11, 47, 38 ]
SCOPUS_ID:85147330123
A Human-like Interactive Chatbot Framework for Vietnamese Banking Domain
In recent years, the application of chatbots evolved rapidly in numerous fields and received increasing attention in the academic and industrial communities. In this paper, we present a novel chatbot framework based on machine learning and deep learning approaches. Our framework not only answers domain questions but also offers three primary features of a human-like interactive chatbot: (1) Conversation tracking, (2) Recommendation, and (3) Asking again. Furthermore, we integrate a feature for adding accents to non-accented sentences using a Transformer-based architecture. Based on the experimental results and deployment in production for the banking domain, we demonstrated that our framework is stable and meets specific requirements (e.g., computational resources, response time, performance, user experience). With flexibility and adaptation, our proposed framework can be developed and deployed for other domains or business contexts.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85123603731
A Human-machine Cooperation Protocol for Machine Translation Output Edit Annotation
We report on a study exploring automatic edit annotation in a post-editing corpus with a new method for computing edit types. We examine how edit types associate with quality scores assigned to the machine translation output and the post-edited texts. Finally, we account for shortcomings in our method and point out edit types worth leveraging.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85037085460
A Hungarian sentiment corpus manually annotated at aspect level
In this paper we present a Hungarian sentiment corpus manually annotated at aspect level. Our corpus consists of Hungarian opinion texts written about different types of products. The main aim of creating the corpus was to produce an appropriate database providing possibilities for developing text mining software tools. The corpus is a unique Hungarian database: to the best of our knowledge, no digitized Hungarian sentiment corpus that is annotated on the level of fragments and targets has been made so far. In addition, many language elements of the corpus, relevant from the point of view of sentiment analysis, got distinct types of tags in the annotation. In this paper, on the one hand, we present the method of annotation, and we discuss the difficulties concerning text annotation process. On the other hand, we provide some quantitative and qualitative data on the corpus. We conclude with a description of the applicability of the corpus.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85137262944
A Hybrid AI Model for Improving COVID-19 Sentiment Analysis in Social Networks
The recent COVID-19 (novel coronavirus disease) pandemic induced a deep polarization among regional as well as global communities. The sentiments regarding the pandemic and its impact on lifestyle and economy, often expressed via social networks, are regarded as critical metrics for capturing such polarization and formulating appropriate intervention by the relevant authorities. While there exist a myriad of Natural Language Processing (NLP) models for mining social media data, we demonstrate the shortcomings of the individual models in this paper, and explore how to improve the COVID-19 sentiment analysis in social media network data via two hybrid predictive models based on a Long-Short-Term-Memory (LSTM)-based autoencoder and a Convolutional Neural Network (CNN) model coupled with a bi-directional LSTM. Through extensive experiments on the recently acquired Twitter dataset, we compare the COVID-19 sentiments exhibited in the USA and Canada using our proposed hybrid predictive models and demonstrate their superiority over individual Artificial Intelligence (AI) models.
[ "Language Models", "Semantic Text Processing", "Sentiment Analysis" ]
[ 52, 72, 78 ]
SCOPUS_ID:85062785864
A Hybrid Algorithm for Text Classification Based on CNN-BLSTM with Attention
We propose an effective text classification framework that hybridizes character-level and word-level features with different weights through concatenation, based on a Convolutional Neural Network and bidirectional long short-term memory with attention (BACNN). The first step in Chinese natural language processing is word segmentation or character segmentation. However, due to the different semantic relations in Chinese, Chinese sentences usually have several possible word segmentations, which leads to the problem of word segmentation ambiguity. Although Chinese character segmentation is not ambiguous, its meaning is not accurate and rich enough. Moreover, in different situations, characters and words differ in importance. Therefore, to overcome the above problems, we propose hybridizing different weights of word-level and character-level features to let them compensate for each other's shortcomings. The experimental results indicate that our proposed method outperforms simple word-level or character-level features in classification performance.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Syntactic Text Processing", "Text Segmentation", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 15, 21, 36, 3 ]
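One plausible reading of the weighted character/word fusion in the abstract above, sketched in PyTorch: a CNN branch over characters, a BiLSTM-with-attention branch over words, and a weighted concatenation of the two. The fusion weight alpha, all dimensions, and the five-class output are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BACNNSketch(nn.Module):
    def __init__(self, char_vocab=3000, word_vocab=20000, dim=128, alpha=0.5):
        super().__init__()
        self.alpha = alpha                          # character/word fusion weight
        self.char_emb = nn.Embedding(char_vocab, dim, padding_idx=0)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.word_emb = nn.Embedding(word_vocab, dim, padding_idx=0)
        self.bilstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(dim, 1)
        self.cls = nn.Linear(2 * dim, 5)            # e.g. 5 text categories

    def forward(self, chars, words):
        # Character branch: conv over char embeddings, max-pool over time.
        c = F.relu(self.conv(self.char_emb(chars).transpose(1, 2))).max(dim=2).values
        # Word branch: BiLSTM states pooled by a learned attention.
        h, _ = self.bilstm(self.word_emb(words))
        a = torch.softmax(self.attn(h), dim=1)
        w = (a * h).sum(dim=1)
        fused = torch.cat([self.alpha * c, (1 - self.alpha) * w], dim=1)
        return self.cls(fused)

m = BACNNSketch()
logits = m(torch.randint(1, 3000, (2, 40)), torch.randint(1, 20000, (2, 20)))
print(logits.shape)                                 # torch.Size([2, 5])
```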
http://arxiv.org/abs/1702.01587v1
A Hybrid Approach For Hindi-English Machine Translation
In this paper, an extended combined approach of phrase-based statistical machine translation (SMT), example-based MT (EBMT) and rule-based MT (RBMT) is proposed to develop a novel hybrid data-driven MT system capable of outperforming the baseline SMT, EBMT and RBMT systems from which it is derived. In short, the proposed hybrid MT process is guided by the rule-based MT after getting a set of partial candidate translations provided by the EBMT and SMT subsystems. Previous works have shown that EBMT systems are capable of outperforming phrase-based SMT systems, and the RBMT approach has the strength of generating structurally and morphologically more accurate results. This hybrid approach increases the fluency, accuracy and grammatical precision, which improves the quality of a machine translation system. A comparison of the proposed hybrid machine translation (HMT) model with renowned translators, i.e. Google, BING and Babylonian, is also presented, which shows that the proposed model works better on sentences with ambiguity, as well as those comprising idioms, than the others.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85077954382
A Hybrid Approach Handwritten Character Recognition for Mizo using Artificial Neural Network
In the past decade we have seen rapid advancement in object recognition; however, Mizo Handwritten Character Recognition (MHCR) remains an untapped field. In this study a handwritten dataset is collected from 20 different writers, each contributing 456 Mizo characters. In total, 20 × 456 = 9120 characters are used for testing the proposed system. In this recognition process, the challenging factor is that Mizo handwriting contains vowel characters made up of multiple isolated blobs (pixels), such as a circumflex on top of a vowel character. This makes segmentation of each individual character difficult and challenging. Therefore, to implement MHCR, a hybrid character segmentation approach using bounding boxes and morphological dilation is applied, which merges the isolated blobs of a Mizo character into a single entity. A hybrid feature extraction approach using a combination of zoning and topological features is implemented. These features are used for classification and recognition. To evaluate the performance of the MHCR model, an experiment is carried out using 4 different types of Artificial Neural Network architecture. Each architecture is compared and analysed. The Back Propagation Neural Network has the highest accuracy, with a recognition rate of 98%. The proposed hybrid technique will help in building an automatic MHCR system for practical applications.
[ "Text Segmentation", "Syntactic Text Processing", "Information Extraction & Text Mining" ]
[ 21, 15, 3 ]
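The blob-merging step the abstract describes (bounding boxes plus morphological dilation) can be illustrated with SciPy's image morphology: dilate the binary character image so a detached diacritic blob connects to its base glyph, then take bounding boxes of the connected components as character units. The toy image and the structuring-element size are assumptions; a real system would tune the dilation to the stroke width.

```python
import numpy as np
from scipy.ndimage import binary_dilation, find_objects, label

img = np.zeros((12, 10), dtype=bool)
img[1, 4] = True                    # isolated diacritic blob above the glyph
img[4:10, 3:6] = True               # main character body

# Two rounds of 3x3 dilation bridge the gap between blob and body.
merged = binary_dilation(img, structure=np.ones((3, 3)), iterations=2)
components, n = label(merged)
print(n, "component(s)")            # 1: diacritic now merged with its base
for sl in find_objects(components):
    print("bounding box rows", sl[0], "cols", sl[1])
```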
SCOPUS_ID:85135012138
A Hybrid Approach Towards Machine Translation System for English–Hindi and Vice Versa
With rapid progress in technology and the amount of data in the public domain, machine translation and data science have made remarkable strides. In this paper, we discuss our specific use case of developing a machine translation system for English to Hindi and Hindi to English translation. For this system, we have used the daily proceedings of the Lok Sabha as data and developed an NMT-based machine translation system on top of an already available rule-based machine translation system. The developed system has been evaluated using the bilingual evaluation understudy (BLEU) metric as well as human evaluation metrics of comprehensibility and fluency. In machine translation (MT), there is a trend of measuring post-editing time, and thus we have also evaluated our system by measuring post-editing time using an open-source tool.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/1912.00127v3
A Hybrid Approach Towards Two Stage Bengali Question Classification Utilizing Smart Data Balancing Technique
Question classification (QC) is the primary step of a Question Answering (QA) system. A Question Classification (QC) system classifies questions into particular classes so that the Question Answering (QA) system can provide correct answers to the questions. Our system categorizes factoid-type questions asked in natural language after extracting features of the questions. We present a two-stage QC system for Bengali. It utilizes a one-dimensional convolutional neural network for classifying questions into coarse classes in the first stage. Word2vec representations of the existing words of the question corpus have been constructed and used for assisting the 1D CNN. A smart data balancing technique has been employed to give the data-hungry convolutional neural network the advantage of a greater number of effective samples to learn from. For each coarse class, a separate Stochastic Gradient Descent (SGD) based classifier has been used in order to differentiate among the finer classes within that coarse class. The TF-IDF representation of each word has been used as the feature for the SGD classifiers implemented as part of the second-stage classification. Experiments show the effectiveness of our proposed method for Bengali question classification.
[ "Text Classification", "Question Answering", "Natural Language Interfaces", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 27, 11, 24, 3 ]
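A skeletal version of the two-stage routing described above, with one loud substitution: the paper's first stage is a 1D CNN over word2vec features, whereas this sketch uses a TF-IDF + SGD model for both stages for brevity. The tiny English question set and the class names are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

coarse_data = [("who wrote this book", "PERSON"), ("where is dhaka", "LOCATION"),
               ("who painted it", "PERSON"), ("where is the river", "LOCATION")]
fine_data = {"PERSON": [("who wrote this book", "author"), ("who painted it", "artist")],
             "LOCATION": [("where is dhaka", "city"), ("where is the river", "nature")]}

def fit(pairs):
    x, y = zip(*pairs)
    return make_pipeline(TfidfVectorizer(), SGDClassifier(random_state=0)).fit(x, y)

coarse = fit(coarse_data)                       # stage 1: coarse class
fine = {c: fit(pairs) for c, pairs in fine_data.items()}  # stage 2: per-class

q = "who wrote the poem"
c = coarse.predict([q])[0]
print(c, "->", fine[c].predict([q])[0])         # route coarse, then refine
```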
SCOPUS_ID:85056478283
A Hybrid Approach Using Topic Modeling and Class-Association Rule Mining for Text Classification: The Case of Malware Detection
We propose a novel general-purpose hybrid method comprising topic modeling and Class Association Rule Mining (CARM) for text classification in tandem. While topic modeling performs dimension reduction, the association rule mining aspect is handled by the Apriori and Frequent Pattern (FP)-growth algorithms, separately. In order to illustrate the effectiveness of the proposed method, malware prediction using two publicly available datasets of API calls has been performed. The proposed model has generated highly accurate class association rules and Area Under the Curve (AUC) values compared to the extant models in the literature. With the help of a statistical significance test, it is concluded that the performances of both proposed hybrid models, i.e., topic modeling with FP-growth and with Apriori, are the same.
[ "Topic Modeling", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 9, 24, 36, 3 ]
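The shape of the hybrid pipeline, under loud assumptions: LDA reduces documents (here, toy API-call strings) to topic activations, which are binarized and mined together with class labels for class association rules via mlxtend's Apriori. The thresholds and the mini-corpus are invented; the paper works on much larger API-call datasets.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["open read write close", "connect send recv close",
        "open write write close", "connect recv send send"]
labels = ["benign", "malware", "benign", "malware"]

# Step 1: dimension reduction - documents become topic-activation vectors.
dtm = CountVectorizer().fit_transform(docs)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(dtm)

# Step 2: binarize topics, append class labels, and mine itemsets.
items = pd.DataFrame({"topic_0": theta[:, 0] > 0.5, "topic_1": theta[:, 1] > 0.5,
                      "malware": [l == "malware" for l in labels],
                      "benign": [l == "benign" for l in labels]})
rules = association_rules(apriori(items, min_support=0.25, use_colnames=True),
                          metric="confidence", min_threshold=0.8)
# Keep only class association rules, i.e. rules whose consequent is a class.
print(rules[rules["consequents"].apply(lambda c: c == {"malware"} or c == {"benign"})])
```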
SCOPUS_ID:85044342793
A Hybrid Approach for Arabic Text Summarization Using Domain Knowledge and Genetic Algorithms
Text summarization is the process of producing a shorter version of a specific text. Automatic summarization techniques have been applied to various domains such as the medical, political, news, and legal domains, proving that adapting domain-relevant features can improve summarization performance. Despite the existence of plenty of research work on domain-based summarization in English and other languages, there is a lack of such work in Arabic due to the shortage of existing knowledge bases. In this paper, a hybrid, single-document text summarization approach (abbreviated as ASDKGA) is presented. The approach incorporates domain knowledge, statistical features, and genetic algorithms to extract important points of Arabic political documents. The ASDKGA approach is tested on two corpora, the KALIMAT corpus and the Essex Arabic Summaries Corpus (EASC). The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) framework was used to compare the summaries automatically generated by the ASDKGA approach with summaries generated by humans. Also, the approach is compared against three other Arabic text summarization approaches. The ASDKGA approach demonstrated promising results when summarizing Arabic political documents, with an average F-measure of 0.605 at a compression ratio of 40%.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
http://arxiv.org/abs/2004.08673v1
A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention
The Web has become the main platform where people express their opinions about entities of interest and their associated aspects. Aspect-Based Sentiment Analysis (ABSA) aims to automatically compute the sentiment towards these aspects from opinionated text. In this paper we extend the state-of-the-art Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) method in two directions. First we replace the non-contextual word embeddings with deep contextual word embeddings in order to better cope with the word semantics in a given text. Second, we use hierarchical attention by adding an extra attention layer to the HAABSA high-level representations in order to increase the method flexibility in modeling the input data. Using two standard datasets (SemEval 2015 and SemEval 2016) we show that the proposed extensions improve the accuracy of the built model for ABSA.
[ "Representation Learning", "Semantic Text Processing", "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 12, 72, 23, 78 ]
SCOPUS_ID:85149636440
A Hybrid Approach for Aspect-based Sentiment Analysis: A Case Study of Hotel Reviews
This study presents a method of aspect-based sentiment analysis for customer reviews related to hotels. The considered hotel aspects are staff attentiveness, room cleanliness, value for money and convenience of location. The proposed method consists of two main components. The first component assembles relevant sentences for each hotel aspect into aspect clusters using BM25. We developed a corpus of keywords called the Keywords of Hotel Aspect (KoHA) Corpus, and the keywords of each aspect were used as queries to assemble the relevant sentences of each hotel aspect into clusters. Finally, customer review sentences in each cluster were classified into positive and negative classes using sentiment classifiers. Two algorithms, Support Vector Machines (SVM) with a linear and an RBF kernel, and a Convolutional Neural Network (CNN), were applied to develop the sentiment classifier models. The model based on SVM with a linear kernel returned better results than the other models, with an AUC score of 0.87. Therefore, this model was chosen for the sentiment classification stage. The proposed method was evaluated using recall, precision and F1, with satisfactory results of 0.85, 0.87 and 0.86, respectively. Our proposed method provides an overview of customer feelings based on score, and also provides reasons why customers liked or disliked each aspect of the hotel. The best model from the proposed method was compared with a state-of-the-art model. The results show that our method increased recall, precision, and F1 scores by 2.44%, 2.50% and 1.84%, respectively.
[ "Text Classification", "Text Clustering", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 29, 23, 78, 24, 3 ]
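A hedged sketch of the two components described above: BM25 (via the rank_bm25 package) routes review sentences into aspect clusters using aspect-keyword queries, and a linear SVM then assigns sentiment within each cluster. The keywords, sentences, and training pairs are invented stand-ins for the paper's KoHA corpus and labeled data.

```python
from rank_bm25 import BM25Okapi
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = ["the staff was friendly and helpful", "the room was dirty",
             "great location near the station", "too expensive for the value"]
aspects = {"staff": ["staff", "service"], "cleanliness": ["room", "clean", "dirty"],
           "location": ["location", "near"], "value": ["price", "expensive", "value"]}

# Component 1: BM25 routes each sentence to the aspects whose keywords it matches.
bm25 = BM25Okapi([s.split() for s in sentences])
clusters = {}
for aspect, keywords in aspects.items():
    scores = bm25.get_scores(keywords)
    clusters[aspect] = [s for s, sc in zip(sentences, scores) if sc > 0]

# Component 2: a linear SVM labels routed sentences positive or negative.
train_x = ["friendly helpful great", "dirty expensive bad", "clean nice", "awful rude"]
train_y = ["pos", "neg", "pos", "neg"]
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(train_x, train_y)

for aspect, members in clusters.items():
    print(aspect, [(s, clf.predict([s])[0]) for s in members])
```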
SCOPUS_ID:85146939481
A Hybrid Approach for Auto-Correcting Grammatical Errors Generated by Non-Native Arabic Speakers
Spelling correction is among the most substantial Natural Language Processing (NLP) tasks, used as a pre- or post-processing step in many other tasks such as Optical Character Recognition (OCR). Many challenges may arise while attempting to implement correctors, including correcting real-word errors, in which the wrong word is one of the language's vocabulary items but its appearance in the context is senseless. Non-native speakers who learn Arabic as a second language may make various spelling errors, especially grammatical errors involving feminine-masculine and definite-indefinite agreement, which are not common among native Arabic speakers; thus, there is a need for a spell corrector that can handle such errors, as the available spell correctors are not efficient enough to work with their mistakes. The proposed approach employs a rule-based system and the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model to implement an Arabic grammatical auto-corrector for non-native speakers using the Qatar Arabic Language Bank (QALB) corpus. The corpus comprises 622 sentences containing grammatical and spelling errors generated by non-native speakers. The proposed approach enhances previous works by more than 17%, with an F1-score of 45.68%. The rules, which are the main contribution, handled about 30% of the errors, and the AraBERT model corrected the rest.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85104673439
A Hybrid Approach for Automatic Extractive Summarization
In recent times, there have been many works on automatic text summarization, as it has become a very intriguing topic in natural language processing. A summary should be concise, delivering all the important facts of a document. State-of-the-art extractive text summarizers use sentence ranking in various ways to extract significant summary sentences. In this paper, a hybrid approach is introduced for single-document extractive summarization. A combination of approaches, such as sentence ranking based on key phrases and sentiment analysis, is proposed. Moreover, the work combines another approach that picks summary sentences based on their interconnection with other sentences in the text to obtain a better result. Through empirical experiments, the proposed approach has been found to generate better summaries than similar existing systems.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
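An illustrative composite scorer in the spirit of the abstract above: keyphrase hits, sentiment strength (here via a trivial lexicon stand-in), and interconnection with the rest of the text, summed per sentence. The weights and the toy lexicon are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, keyphrases, k=2, w=(0.4, 0.2, 0.4)):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    lexicon = {"good", "bad", "excellent", "poor"}   # stand-in sentiment cues
    scores = []
    for i, s in enumerate(sentences):
        kp = sum(p in s.lower() for p in keyphrases)          # keyphrase hits
        senti = len(set(s.lower().split()) & lexicon)         # sentiment strength
        inter = sim[i].sum() - 1.0                            # links to other sentences
        scores.append(w[0] * kp + w[1] * senti + w[2] * inter)
    top = sorted(np.argsort(scores)[-k:])                     # keep original order
    return [sentences[i] for i in top]

doc = ["The model is excellent on long texts.", "It was trained on news data.",
       "Ranking uses keyphrases and sentence links.", "The weather was nice."]
print(summarize(doc, keyphrases=["model", "keyphrase", "rank"]))
```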
SCOPUS_ID:85142201011
A Hybrid Approach for Creating Knowledge Graphs: Recognizing Emerging Technologies in Dutch Companies
[ "Knowledge Representation", "Structured Data in NLP", "Semantic Text Processing", "Multimodality" ]
[ 18, 50, 72, 74 ]
SCOPUS_ID:84988890405
A Hybrid Approach for Drug Abuse Events Extraction from Twitter
Since their emergence, social media have become a reliable source of social events, which has attracted the interest of the research community to extract them for many business requirements. However, unlike formal sources like news articles, exploiting social data for event extraction is much harder owing to the complex character of social text. Many approaches, ranging from linguistic techniques to learning algorithms, have been proposed to accomplish this task. Nevertheless, achieved results are weak given the complexity and completeness of the task. In this paper, we focus on private event extraction from Twitter by tracking digital drug abusers. We propose a hybrid approach in which we combine the strengths of linguistic rules and learning techniques, looking for better performance. In fact, we use linguistic rules to build an automatically annotated training set and to extract a set of features as well, to be used in a learning process in order to improve the obtained results. The proposed approach outperforms the baseline by 24.8% thanks to this combination of techniques.
[ "Event Extraction", "Information Extraction & Text Mining" ]
[ 31, 3 ]
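A minimal sketch of the weak-supervision loop the abstract outlines: linguistic rules auto-label tweets, and the resulting silver data trains a classifier that can generalize beyond the rules. The rule patterns and toy tweets are invented for illustration, and the noisy third tweet shows why rule labels alone are imperfect.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

RULES = [re.compile(r"\b(high|stoned)\b"), re.compile(r"\bpopping \w+\b")]

def rule_label(tweet):
    """1 if any drug-related linguistic rule fires, else 0 (noisy labels)."""
    return int(any(r.search(tweet.lower()) for r in RULES))

tweets = ["feeling so high tonight", "popping pills again", "totally stoned right now",
          "coffee then gym", "lovely sunny day", "reading a new book"]
silver = [rule_label(t) for t in tweets]        # rule-derived training labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, silver)                        # learn beyond the literal rules
print(model.predict(["i feel stoned", "time for a walk"]))
```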
SCOPUS_ID:85139786764
A Hybrid Approach for Extractive Summarization of Medical Documents
Text summarization helps us obtain the most significant content from any document, saving time and resources. Much research on automatic summarization has been done with documents of general domains. In recent years, artificial intelligence and machine learning have become more and more integrated with the medical field. As the medical field requires efficiency more than any other field of science, proper summarization of medical documents is important. Some works and studies have been done on this topic, but they have many limitations and restrictions. In this paper, we present a hybrid approach for extractive summarization of medical documents. In this combinational method, we filter the neutral content of a document through sentiment analysis, and summarization is performed using the interconnection and content of sentences and the presence of keyphrases. In evaluation, the introduced method has shown promise with good scores.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85063135772
A Hybrid Approach for French Medical Entity Recognition and Normalization
Medical documents written in natural language are available in electronic form, and they constitute an invaluable source for medical research. This paper describes our system, based on a hybrid approach, for the task of Named Entity Recognition and Normalization of French medical documents using the QUAERO corpus [1]. To evaluate our system, we took part in three subtasks: Entity Normalization, and Named Entity Extraction and Classification, which involved 10 categories including Anatomy, Chemicals & Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology and Procedures. The results on both tasks, Named Entity Recognition and Normalization, demonstrate high performance compared to other methods for French Medical Entity Recognition and Normalization.
[ "Named Entity Recognition", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 34, 24, 36, 3 ]
http://arxiv.org/abs/2011.07403v3
A Hybrid Approach for Improved Low Resource Neural Machine Translation using Monolingual Data
Many language pairs are low resource, meaning the amount and/or quality of available parallel data is not sufficient to train a neural machine translation (NMT) model which can reach an acceptable standard of accuracy. Many works have explored using the readily available monolingual data in either or both of the languages to improve the standard of translation models in low, and even high, resource languages. One of the most successful of such works is the back-translation that utilizes the translations of the target language monolingual data to increase the amount of the training data. The quality of the backward model which is trained on the available parallel data has been shown to determine the performance of the back-translation approach. Despite this, only the forward model is improved on the monolingual target data in standard back-translation. A previous study proposed an iterative back-translation approach for improving both models over several iterations. But unlike in the traditional back-translation, it relied on both the target and source monolingual data. This work, therefore, proposes a novel approach that enables both the backward and forward models to benefit from the monolingual target data through a hybrid of self-learning and back-translation respectively. Experimental results have shown the superiority of the proposed approach over the traditional back-translation method on English-German low resource neural machine translation. We also proposed an iterative self-learning approach that outperforms the iterative back-translation while relying only on the monolingual target data and requiring the training of fewer models.
[ "Low-Resource NLP", "Machine Translation", "Text Generation", "Responsible & Trustworthy NLP", "Multilinguality" ]
[ 80, 51, 47, 4, 0 ]
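A schematic of the training loop implied above, with `train` and `translate` as placeholders for a real NMT toolkit (nothing here names an actual API): standard back-translation improves the forward model on synthetic source data, and the proposed self-learning variant additionally refreshes the backward model from the same monolingual target text.

```python
def back_translate_round(parallel, mono_tgt, train, translate):
    """One round of hybrid back-translation / self-learning.

    parallel: list of (source, target) sentence pairs.
    mono_tgt: monolingual target-language sentences.
    train, translate: caller-supplied hooks into an NMT toolkit.
    """
    # Backward model (target -> source) trained on the parallel data.
    backward = train(src=[t for _, t in parallel], tgt=[s for s, _ in parallel])
    synthetic_src = [translate(backward, t) for t in mono_tgt]

    # Forward model benefits from (synthetic source, real target) pairs...
    forward = train(src=[s for s, _ in parallel] + synthetic_src,
                    tgt=[t for _, t in parallel] + mono_tgt)

    # ...and, in the self-learning variant, the backward model is also
    # improved on its own (real target, synthetic source) pairs.
    backward = train(src=[t for _, t in parallel] + mono_tgt,
                     tgt=[s for s, _ in parallel] + synthetic_src)
    return forward, backward
```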
SCOPUS_ID:85146930883
A Hybrid Approach for Inference between Behavioral Exception API Documentation and Implementations, and Its Applications
Automatically producing behavioral exception (BE) API documentation helps developers correctly use the libraries. The state-of-the-art approaches are either rule-based, which is too restrictive in its applicability, or deep learning (DL)-based, which requires large training dataset. To address that, we propose StatGen, a novel hybrid approach between statistical machine translation (SMT) and tree-structured translation to generate the BE documentation for any code and vice versa. We consider the documentation and source code of an API method as the two abstraction levels of the same intent. StatGen is specifically designed for this two-way inference, and takes advantage of their structures for higher accuracy. We conducted several experiments to evaluate StatGen. We show that it achieves high precision (75% and 75%), and recall (81% and 84%), in inferring BE documentation from source code and vice versa. StatGen achieves higher precision, recall, and BLEU score than the state-of-the-art, DL-based baseline models. We show StatGen's usefulness in two applications. First, we use it to generate the BE documentation for Apache APIs that lack of documentation by learning from the documentation of the equivalent APIs in JDK. 44% of the generated documentation were rated as useful and 42% as somewhat useful. In the second application, we use StatGen to detect the inconsistency between the BE documentation and corresponding implementations of several JDK8 packages.
[ "Programming Languages in NLP", "Machine Translation", "Multimodality", "Text Generation", "Multilinguality" ]
[ 55, 51, 74, 47, 0 ]
SCOPUS_ID:85100513948
A Hybrid Approach for Linguistic Summarization of Time Series
Linguistic summarization is an important step in extracting information from a time series efficiently and effectively in a way that simulates the human perspective. Before performing this process, researchers have suggested using time-series representations to identify trends and then summarizing the characteristics associated with those trends. In this paper, we show how to efficiently adapt and implement a piecewise linear representation of time series to summarize the dynamic characteristics of trends. First, we show how to build the time-series representation using a modified Bottom-Up algorithm. Then we use a set of features to characterize the trends. Based on the protoforms proposed by Yager and the classical Zadeh calculus of linguistically quantified propositions, we derive the linguistic summaries and their measures of quality. Experiments on real data show interesting and promising findings.
[ "Information Extraction & Text Mining", "Green & Sustainable NLP", "Summarization", "Text Generation", "Responsible & Trustworthy NLP" ]
[ 3, 68, 30, 47, 4 ]
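The entry above builds its trend representation with a modified Bottom-Up algorithm. Below is a minimal sketch of the classical Bottom-Up piecewise linear segmentation (not the paper's specific modification), assuming a least-squares line-fit cost and a made-up error threshold; segments are then labeled with a simple trend word as a stand-in for the linguistic summarization step.

```python
import numpy as np

def fit_cost(y):
    """Sum of squared residuals of the best straight-line fit to y."""
    if len(y) < 3:
        return 0.0
    x = np.arange(len(y))
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

def bottom_up(series, max_error):
    """Start from the finest segmentation and repeatedly merge the adjacent
    pair whose merged fit cost is lowest, until it exceeds max_error."""
    n = len(series)
    cuts = list(range(0, n, 2)) + [n]
    segments = list(zip(cuts[:-1], cuts[1:]))
    while len(segments) > 1:
        costs = [fit_cost(series[a:c])
                 for (a, _), (_, c) in zip(segments, segments[1:])]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        segments[i] = (segments[i][0], segments[i + 1][1])
        del segments[i + 1]
    return segments

# Toy series: an increasing line followed by a decreasing one.
ts = np.concatenate([np.linspace(0, 5, 20), np.linspace(5, 1, 15)])
for a, b in bottom_up(ts, max_error=0.5):
    slope = np.polyfit(np.arange(b - a), ts[a:b], 1)[0]
    trend = "increasing" if slope > 0.05 else "decreasing" if slope < -0.05 else "constant"
    print(f"segment [{a},{b}): {trend}")
```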
SCOPUS_ID:85102141770
A Hybrid Approach for Question Retrieval in Community Question Answering
Community Question Answering (CQA) services, such as Yahoo! Answers and WikiAnswers, have become popular with users as one of the central paradigms for satisfying users' information needs. The task of question retrieval aims to answer a query directly by finding the most relevant questions (together with their answers) from an archive of past questions. However, questions are typically short texts, so there is a lexical gap between the queried question and the past questions. Furthermore, the underlying intents of two questions can be very different even if they bear a close lexical resemblance. To alleviate these problems, we present a hybrid approach that blends several language modelling techniques for question retrieval, namely the classic (query-likelihood) language model, the state-of-the-art translation-based language model, and our proposed semantic-based and intent-based language models. The semantics of each candidate question is derived by a probabilistic topic model, which makes use of local and global semantic graphs to capture the hidden interactions among entities (e.g. people, places and concepts) in question-answer pairs. Experiments on two real-world data sets show that our approach significantly outperforms existing ones.
[ "Language Models", "Topic Modeling", "Semantic Text Processing", "Intent Recognition", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 9, 72, 79, 78, 24, 3 ]
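One ingredient of the hybrid above, the classic query-likelihood language model, is easy to sketch. Below is a minimal implementation with Dirichlet smoothing over a toy archive of past questions; the smoothing parameter mu and the toy data are illustrative choices, not values from the paper.

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=2000.0):
    """Log P(query | doc) under a unigram LM with Dirichlet smoothing:
    P(w|d) = (tf(w,d) + mu * P(w|C)) / (|d| + mu)."""
    doc_tf = Counter(doc)
    col_tf = Counter(collection)
    col_len = len(collection)
    score = 0.0
    for w in query:
        p_wc = col_tf[w] / col_len
        if p_wc == 0:
            continue  # word unseen in the whole collection; skip it
        score += math.log((doc_tf[w] + mu * p_wc) / (len(doc) + mu))
    return score

past_questions = [
    "how do i reset my router password".split(),
    "best way to learn python quickly".split(),
    "why does my laptop overheat".split(),
]
collection = [w for q in past_questions for w in q]
query = "reset router password".split()

# Rank archived questions by how likely they are to "generate" the query.
ranked = sorted(past_questions,
                key=lambda d: query_likelihood(query, d, collection),
                reverse=True)
print(" ".join(ranked[0]))
```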
SCOPUS_ID:85115821720
A Hybrid Approach for Stock Market Prediction Using Financial News and Stocktwits
Stock market prediction is a difficult problem that has always attracted researchers from different domains. Recently, several studies using text mining and machine learning methods have been proposed. However, the efficiency of these methods remains highly dependent on the retrieval of relevant information. In this paper, we investigate novel data sources (Stocktwits in combination with financial news) and tackle the problem as a binary classification task (i.e., stock prices moving up or down). To that end, we use a hybrid approach that combines sentiment- and event-based features. We find that the use of Stocktwits data systematically outperforms the sole use of price data in predicting the close prices of 8 companies from the NASDAQ100. We conclude by discussing the limits of these novel data sources and how they could be further investigated.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85142807561
A Hybrid Approach for Text Summarization Using Social Mimic Optimization Algorithm
Every day, millions of Internet users share a lot of information on the web. In this digital era, the exponential growth of data on the web makes it difficult to retrieve the needed information quickly, and text summarization plays a crucial role in addressing this problem. This work introduces a new extractive single-document summarization technique using a hybrid social mimic optimization algorithm. The objective function of the proposed work maximizes the informative score of the summary sentences and the sentence coherence factor. We used three popular benchmark datasets (DUC2002, BBC News, and CNN) for the experimental work, with ROUGE score as the performance evaluation measure, and compared against five state-of-the-art single-document summarization techniques. The performance comparison shows that the proposed summarization technique outperforms the competing approaches.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
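The entry above optimizes an objective combining sentence informativeness and coherence. As a rough illustration, the sketch below maximizes a similar two-part objective with plain random local search as a generic stand-in for the social mimic optimizer (which the paper actually uses); the scoring functions and toy sentences are assumptions for demonstration only.

```python
import random
from collections import Counter

sentences = [
    "Text summarization condenses a document into a short summary.",
    "The objective rewards informative sentences and coherent ordering.",
    "Optimization algorithms search the space of sentence subsets.",
    "Unrelated filler sentence about the weather today.",
    "Coherence is approximated here by word overlap between neighbours.",
]
doc_tf = Counter(w for s in sentences for w in s.lower().split())

def informative(s):
    """Average document-frequency of the sentence's words."""
    words = s.lower().split()
    return sum(doc_tf[w] for w in words) / len(words)

def coherence(a, b):
    """Jaccard word overlap as a crude coherence proxy."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def objective(summary):
    # informativeness of each sentence plus coherence of adjacent pairs
    return (sum(informative(s) for s in summary)
            + sum(coherence(a, b) for a, b in zip(summary, summary[1:])))

def local_search(k=2, iters=200, seed=0):
    rng = random.Random(seed)
    current = rng.sample(sentences, k)
    for _ in range(iters):
        candidate = current[:]
        candidate[rng.randrange(k)] = rng.choice(sentences)
        # accept only valid (duplicate-free) improvements
        if len(set(candidate)) == k and objective(candidate) > objective(current):
            current = candidate
    return current

for s in local_search():
    print(s)
```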
SCOPUS_ID:85145699769
A Hybrid Approach for Web Pages Classification
Currently, the internet is growing at an exponential rate, while a user typically needs only a small fraction of the available data. The immense number of web pages makes discovering the target data more difficult for the user. Therefore, an efficient method to classify this huge amount of data is essential so that web pages can be exploited to their full potential. In this paper, we propose an approach to classify web pages based on their textual content. This approach combines an unsupervised statistical technique (TF-IDF) for keyword extraction with a supervised machine learning approach, namely recurrent neural networks.
[ "Information Retrieval", "Term Extraction", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 1, 36, 3 ]
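A minimal sketch of a two-stage pipeline of the kind the entry above describes: unsupervised TF-IDF keyword extraction followed by a small recurrent classifier over the keyword sequences. The toy pages, labels, top-3 keyword cut-off and hyper-parameters are all assumptions, and the exact architecture is not taken from the paper.

```python
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "python tutorial code examples functions loops",
    "football scores league match goals players",
    "machine learning models training data python",
    "tennis tournament final match champion players",
]
labels = torch.tensor([0, 1, 0, 1])  # 0 = tech, 1 = sport (toy labels)

# Stage 1: keep only each page's top-3 TF-IDF keywords (unsupervised).
vec = TfidfVectorizer()
tfidf = vec.fit_transform(pages).toarray()
vocab = vec.get_feature_names_out()
keywords = [[vocab[j] for j in row.argsort()[-3:]] for row in tfidf]

# Stage 2: feed keyword index sequences to a GRU classifier (supervised).
word2id = {w: i + 1 for i, w in enumerate(vocab)}  # 0 reserved for padding
X = torch.tensor([[word2id[w] for w in kws] for kws in keywords])

class KeywordGRU(nn.Module):
    def __init__(self, vocab_size, dim=16, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, classes)

    def forward(self, x):
        _, h = self.gru(self.emb(x))   # h: (1, batch, dim) final hidden state
        return self.out(h.squeeze(0))

model = KeywordGRU(len(vocab) + 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), labels)
    loss.backward()
    opt.step()
print(model(X).argmax(dim=1))  # predicted classes for the toy pages
```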
https://aclanthology.org//W06-2207/
A Hybrid Approach for the Acquisition of Information Extraction Patterns
[ "Information Extraction & Text Mining" ]
[ 3 ]
SCOPUS_ID:85083455867
A Hybrid Approach for the Sentiment Analysis of Turkish Twitter Data
Social media now plays an important role in influencing people's sentiments. It also helps analyze how people, particularly consumers, feel about a particular topic, product or idea. One of the recent social media platforms that people use to express their thoughts is Twitter. Because Turkish is an agglutinative language, its complexity makes sentiment analysis difficult. In this study, a total of 13K Turkish tweets were collected via the Twitter API and their sentiments analyzed using machine learning classifiers; random forests and support vector machines are the two classifiers adopted. Preprocessing was applied to the obtained data to remove links, numbers, punctuation and meaningless characters. After the preprocessing phase, unsuitable data were removed, leaving 10,500 of the 13K downloaded tweets as the main dataset. The tweets are classified as positive, negative or neutral based on their contents. The main dataset was converted to a stemmed dataset by removing stopwords, applying tokenization and applying stemming, respectively. Portions of 3,000 and 10,500 stemmed tweets, with equal distribution across classes, were identified as the first and second datasets for the testing phase. Experimental results show that while support vector machines perform better at classifying negative and neutral stemmed data, the random forest algorithm performs better at classifying positive stemmed data; a hybrid approach consisting of a hierarchical combination of random forests and support vector machines was therefore developed. Finally, the methodologies were tested on both datasets. While neither support vector machines nor random forests exceeded an accuracy of 77% on the first dataset and 72% on the second, the developed hybrid approach achieved accuracies of up to 86.4% and 82.8% on the first and second datasets, respectively.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
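One plausible reading of the hierarchical hybrid in the entry above is: the random forest first decides "positive vs. rest" (the class it handled best), and an SVM then separates negative from neutral among the rest. The sketch below implements that reading with scikit-learn; the tiny Turkish tweet set, labels and hyper-parameters are illustrative assumptions, not the paper's setup.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

tweets = ["harika bir gun", "bu urun cok kotu", "bugun hava normal",
          "muhtesem film", "berbat hizmet", "siradan bir deneyim"]
labels = ["pos", "neg", "neu", "pos", "neg", "neu"]

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)

# Stage 1: random forest handles the positive class vs. everything else.
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(X, ["pos" if y == "pos" else "rest" for y in labels])

# Stage 2: SVM distinguishes negative from neutral among the non-positives.
rest_idx = [i for i, y in enumerate(labels) if y != "pos"]
svm = SVC(kernel="linear")
svm.fit(X[rest_idx], [labels[i] for i in rest_idx])

def predict(text):
    """Hierarchical prediction: RF first, SVM only if RF says 'rest'."""
    x = vec.transform([text])
    return "pos" if rf.predict(x)[0] == "pos" else svm.predict(x)[0]

print(predict("muhtesem bir gun"))
```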
SCOPUS_ID:85060021028
A Hybrid Approach of Text Summarization Using Latent Semantic Analysis and Deep Learning
In the current Information Technology landscape, an excessive, vast amount of information is available from online resources, but it is not always easy to find relevant and useful information. To address this issue, this paper presents a method for extractive single-document text summarization that combines a deep learning method, Self-Organizing Maps (SOM), which is unsupervised, with Artificial Neural Networks (ANN), which are supervised. The work investigates the effect of adding mapped sentences from the SOM visualization and re-training the inputs on the ANN for ranking the sentences. In each experiment with the hybrid model, a different SOM mapping is added to the ANN as an input vector. The hybrid model uses stochastic gradient descent to update its parameters iteratively and minimize the cost function; in addition, the weights for the input vector are adjusted via back-propagation. The empirical results show that the hybrid model using the mapping clearly provides comprehensive results and improves the F-score by 5% on average on ROUGE-1, ROUGE-2, ROUGE-L and ROUGE-SU4. This method has been applied to documents from the publicly available Opinosis dataset, and the ROUGE toolkit has been used to evaluate the summaries generated by the proposed model and other existing algorithms against human-generated summaries.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
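To illustrate the SOM mapping step used in the entry above, here is a compact 1-D SOM in numpy over TF-IDF sentence vectors. In the paper the mapped sentences feed back into an ANN ranker; here we only compute each sentence's best-matching unit, which would serve as the extra input feature. Map size, schedules and the toy sentences are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "the film had a strong opening scene",
    "critics praised the film for its pacing",
    "the stadium was full for the final match",
    "fans celebrated the match result all night",
]
X = TfidfVectorizer().fit_transform(sentences).toarray()

n_units, dim = 4, X.shape[1]
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(n_units, dim))  # 1-D map of 4 units

epochs = 50
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)              # decaying learning rate
    radius = max(1.0, 2 * (1 - epoch / epochs))  # decaying neighbourhood size
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
        dist = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))       # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)

# Each sentence's map coordinate becomes an extra feature for the ANN ranker.
bmus = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in X]
print(list(zip(range(len(sentences)), bmus)))
```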
SCOPUS_ID:85129231516
A Hybrid Approach to Analyze Cybersecurity News Articles by Utilizing Information Extraction & Sentiment Analysis Methods
Cybersecurity is becoming indispensable for everyone and everything in the era of the Internet of Things (IoT) revolution. Every aspect of human society - political, financial, technological, or cultural - is affected by cyber-attacks or incidents in one way or another. Newspapers are an excellent source that captures this web of cybersecurity. By implementing various NLP techniques such as tf-idf, word embeddings and machine-learning-based sentiment analysis (SA), this research examines the cybersecurity-related articles of 18 major newspapers (English-language online versions) from six countries (three newspapers per country), collected over one year from April 2018 to March 2019. The first objective is to extract the crucial events for each country, which is achieved in the first step, information extraction. The next objective is to find out what kind of sentiments those crucial issues garnered, which is accomplished in the second step, SA. SA of news articles also helps in understanding each nation's mood on critical cybersecurity issues, which can aid decision-makers in charting new policies.
[ "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 3, 78 ]
SCOPUS_ID:85044006536
A Hybrid Approach to Answer Selection in Question Answering Systems
In this paper, we present a hybrid model for answer selection in question answering systems that represents multiple kinds of features, i.e., lexical, word-alignment, and word-embedding features. The model employs convolutional neural networks, multilayer perceptrons, and support vector machines to train the classifiers. We evaluate our model on two popular QA datasets, SemEval-2016 Task 3 and TREC QA. The experimental results show that our system outperforms the top five systems proposed in the SemEval-2016 workshop and also achieves state-of-the-art results on the TREC QA dataset.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]