Dataset columns:
- id: string (20 to 52 characters)
- title: string (3 to 459 characters)
- abstract: string (0 to 12.3k characters)
- classification_labels: list
- numerical_classification_labels: list
SCOPUS_ID:85146435072
A Hybrid Semantic Statistical Query Expansion for Arabic Information Retrieval Systems
Query-document vocabulary mismatch, the lack of query expressiveness for user needs, and the prevalence of short queries are the main issues associated with information retrieval systems. Query Expansion (QE) is one of the well-known alternatives for overcoming these problems. It mainly involves finding synonyms or related words for the query terms. There are several approaches in the query expansion field, such as statistical and semantic approaches; they focus on expanding individual query terms rather than the entire query during the expansion process. Another category of approaches deals with the whole query by using a neural approach based on Pseudo-Relevance Feedback (PRF) documents. In this work, we carried out an ablation study to measure the impact of classical and semantic (word embedding, order, context) query expansion on retrieval performance. The experiments conducted on the Arabic EveTAR dataset reveal that our proposed hybrid approach combining classical (PRF) and transformer-based (AraBERT) expansion is competitive with state-of-the-art methods: the Mean Average Precision (MAP) obtained reaches 0.72. (A minimal PRF sketch follows this record.)
[ "Semantic Text Processing", "Information Retrieval", "Representation Learning" ]
[ 72, 24, 12 ]
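The abstract above (SCOPUS_ID:85146435072) combines classical pseudo-relevance feedback (PRF) with transformer-based expansion. As a rough illustration of the classical PRF half only, here is a minimal Python sketch that expands a query with the highest-weighted TF-IDF terms from the top-ranked feedback documents. The function name, parameters, and toy documents are illustrative assumptions, not the paper's implementation, and the AraBERT component is not reproduced.

```python
# Minimal pseudo-relevance-feedback (PRF) query expansion sketch.
# Assumes a plain TF-IDF retrieval model; the transformer re-ranking
# step described in the paper is not reproduced here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prf_expand(query, documents, top_k_docs=5, top_n_terms=5):
    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform(documents)
    query_vec = vec.transform([query])
    # Rank documents by cosine similarity to the original query.
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    feedback_idx = np.argsort(scores)[::-1][:top_k_docs]
    # Average TF-IDF weights over the pseudo-relevant documents.
    centroid = np.asarray(doc_matrix[feedback_idx].mean(axis=0)).ravel()
    terms = np.array(vec.get_feature_names_out())
    original = set(query.split())
    # Pick the highest-weighted terms not already in the query.
    ranked = [t for t in terms[np.argsort(centroid)[::-1]] if t not in original]
    return query + " " + " ".join(ranked[:top_n_terms])

docs = ["information retrieval with query expansion",
        "semantic query expansion using word embeddings",
        "statistical feedback models for retrieval"]
print(prf_expand("query expansion", docs))
```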
SCOPUS_ID:85130365626
A Hybrid Semantic-Topic Co-encoding Network for Social Emotion Classification
Social emotion classification aims to predict the distribution of readers' emotions evoked by a document (e.g., a news article). Previous work has shown that both semantic and topical information can help improve classification performance. However, many existing topic-based neural models represent the topical feature of a document with only topic probabilities, ignoring the fine-grained semantic features of the terms in each topic. Moreover, traditional RNN-based semantic networks often suffer from slow training. In this paper, we propose a hybrid semantic-topic co-encoding network. It contains a semantics-driven topic encoder that composes topic embeddings, and it also utilizes a forward self-attention network to exploit document semantics. Finally, the semantic and topical features of the document are adaptively integrated through a gate layer, which generates the document representation for social emotion classification. Experimental results on three public datasets show that the proposed model outperforms state-of-the-art approaches in terms of accuracy and average Pearson correlation coefficient. Moreover, the proposed model trains quickly and offers better explainability.
[ "Text Classification", "Sentiment Analysis", "Emotion Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 78, 61, 24, 3 ]
SCOPUS_ID:85093090755
A Hybrid Sentiment Analysis Method
Sentiment analysis has attracted considerable attention in the last few years. Supervised and lexicon-based methods are the two main categories of sentiment analysis. Supervised approaches can achieve excellent performance with sufficient tagged samples, but acquiring sufficient tagged samples is difficult in some cases. Lexicon-based methods can be easily applied to a variety of domains, but a high-quality lexicon is needed, otherwise performance is unsatisfactory. In this paper, a hybrid supervised review sentiment analysis method that takes advantage of both categories is proposed. In the training phase, the lexicon-based method is used to learn, from a small-scale labeled dataset, confidence parameters that determine classifier selection, and a training set is built to train a Naive Bayes sentiment classifier. Finally, a sentiment analysis framework consisting of the lexicon-based sentiment polarity classifier and the learned Naive Bayes classifier is constructed. The optimal hybrid classifier is obtained by selecting the optimal threshold value. Experiments are conducted on four review datasets. (A schematic sketch of the threshold-based classifier selection follows this record.)
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
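The abstract above (SCOPUS_ID:85093090755) selects between a lexicon-based classifier and a Naive Bayes classifier using learned confidence parameters. Below is a minimal, hedged sketch of that selection idea: a lexicon score decides the label when its magnitude clears a threshold, otherwise a trained Naive Bayes model is used. The toy lexicon, data, and simple grid search stand in for the paper's actual confidence-learning procedure.

```python
# Sketch of a hybrid sentiment classifier: the lexicon decides when it is
# confident enough, otherwise a Naive Bayes fallback is used.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "awful": -1.5}  # toy lexicon

def lexicon_score(text):
    return sum(LEXICON.get(tok, 0.0) for tok in text.lower().split())

def hybrid_predict(text, nb_model, threshold):
    score = lexicon_score(text)
    if abs(score) >= threshold:               # lexicon is confident enough
        return "pos" if score > 0 else "neg"
    return nb_model.predict([text])[0]        # fall back to Naive Bayes

train_texts = ["great phone", "awful battery", "good value", "bad screen"]
train_labels = ["pos", "neg", "pos", "neg"]
nb = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_texts, train_labels)

# Choose the threshold that maximises accuracy on a small labelled tuning set.
tune = [("really great camera", "pos"), ("screen is bad", "neg")]
best = max((sum(hybrid_predict(t, nb, th) == y for t, y in tune), th)
           for th in (0.5, 1.0, 1.5))[1]
print("chosen threshold:", best)
```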
SCOPUS_ID:84946555797
A Hybrid Sentiment Lexicon for Social Media Mining
A sentiment lexicon is a crucial resource for opinion mining from social media content. However, standard off-the-shelf lexicons are static and typically do not adapt, in content and context, to a target domain. This limitation adversely affects the effectiveness of sentiment analysis algorithms. In this paper, we introduce the idea of distant supervision to learn a domain-focused lexicon that improves coverage and the sentiment context of terms. We present a weighted strategy to integrate scores from the domain-focused lexicon with the static lexicon to generate a hybrid lexicon. Evaluations of this hybrid lexicon on social media text show superior sentiment classification over either of the individual lexicons. A further comparative study with typical machine learning approaches to sentiment analysis also confirms this position. We also present promising results from our investigations into the transferability of this distant-supervised hybrid lexicon on three different social media platforms.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85101743197
A Hybrid Sequential Model for Text Simplification
Reading habits play an important role in learning different subjects and enhancing the knowledge of students and children. Students often face reading difficulties when they are non-native English learners or suffer from dyslexia. Thus, in the present work, we have built a hybrid sequential model for text simplification (i.e., translating complex English sentences into simple English sentences). Generally, text simplification is treated as a monolingual translation task, in which translation occurs within the same language. In recent trends, encoder-decoder models play an important role in text simplification. However, a difference exists between normal machine translation and text simplification: text simplification can be achieved using other operations such as splitting, merging, and removing complex words. It is very difficult to handle the type of operation required for simplification (i.e., split, delete, or replace) with a plain encoder-decoder model. Our encoder-decoder model is based on two different approaches, one character-based and the other word-based. We also employed a Named Entity Recognizer (NER) to improve the accuracy of our model, because named entities do not change when converting a complex sentence into a simple sentence. For the learning process, we also use a bi-directional encoder-decoder model to improve the result. In the present attempt, we used a well-known text simplification dataset (i.e., PWKP). Our proposed model achieved a 0.82 BLEU score using the NER module and the word-based bi-directional encoder-decoder model.
[ "Language Models", "Paraphrasing", "Machine Translation", "Semantic Text Processing", "Information Extraction & Text Mining", "Named Entity Recognition", "Text Generation", "Multilinguality" ]
[ 52, 32, 51, 72, 3, 34, 47, 0 ]
SCOPUS_ID:85114284006
A Hybrid Siamese Neural Network for Natural Language Inference in Cyber-Physical Systems
Cyber-Physical Systems (CPS), as multi-dimensional complex systems that connect the physical world and the cyber world, have a strong demand for processing large amounts of heterogeneous data. These tasks also include Natural Language Inference (NLI) over text from different sources. However, current research on natural language processing in CPS has not explored this field. Therefore, this study proposes a Siamese network structure that combines stacked residual bidirectional Long Short-Term Memory with an attention mechanism and a Capsule Network for the NLI module in CPS, which is used to infer the relationship between text/language data from different sources. This model serves as the basic semantic understanding module in CPS: it implements NLI tasks and is evaluated in detail on three main NLI benchmarks. Comparative experiments show that the proposed method achieves competitive performance, has a certain generalization ability, and balances performance against the number of trained parameters.
[ "Reasoning", "Textual Inference" ]
[ 8, 22 ]
SCOPUS_ID:85122575847
A Hybrid Similarity Measure for Dynamic Service Discovery and Composition based on Mobile Agents
With the ever-present competition among companies, the prevalence of web services (WSs) is increasing dramatically. This leads to a diversity of similar services and an evolving service landscape, which makes discovering a relevant service during the composition phase a complex task, since most competing companies aim to discover high-quality services with minimum charges in order to increase their number of customers and their profit. Semantic WSs allow dynamic service discovery to be performed through software entities and intelligent agents. However, existing discovery solutions are limited to responding quickly to requests in real time, without considering constraints such as accuracy in the discovery phase and the quality of the similarity evaluation mechanism. They are usually based on a distance-based similarity measure between concepts in the ontology, rather than taking into consideration semantic relationships and the strength of the semantic relationship between concepts in context. In this paper, we propose a novel hybrid semantic similarity method to improve the service discovery process. The hybrid method is applied to an architecture based on mobile agents, where cooperative agents are integrated to facilitate and speed up the discovery process. In the first hybrid component, we combine Latent Semantic Analysis (LSA) with a semantic relatedness measure to avoid term ambiguity and obtain a purely semantic relatedness at the level of the service description. The second component, called IOMATCHING, analyzes the relationships at the level of the service I/O based on subsumption reasoning. Experimental results on a real dataset demonstrate that our solution outperforms state-of-the-art approaches in terms of precision, recall, F-measure, and service discovery time.
[ "Semantic Text Processing", "Semantic Similarity" ]
[ 72, 53 ]
SCOPUS_ID:85078503038
A Hybrid Social Mining Approach for Companies Current Reputation Analysis
This paper presents an approach for company reputation analysis using data mining techniques. It obtains knowledge from the huge amounts of data written about companies and available publicly on the internet. This is done by extracting data from social media, such as Twitter, containing relevant mentions of a company. The data is then injected into the first layer, where it is classified into positive and negative classes using machine learning, specifically an artificial neural network. It takes preprocessed tweets as input and outputs the sentiment of each tweet. The result is the general reputation and perception of the company for a given timeframe. To further understand these results, the analyzed data is transferred to a second layer of analysis that identifies the products, services, and announcements that lead to the positive or negative perception. Using Term Frequency (TF), this produces a ranked list of the most mentioned words in each of the negative and positive classes. This will be valuable for companies to identify points of weakness and strength, advertisement impressions, and the impact of strategic decisions.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:84978056076
A Hybrid Strategy for Chinese Domain-Specific Terminology Extraction
Automatic Term Extraction is an important issue in Natural Language Processing. This paper presents a new approach to terminology extraction that combines machine learning based on cascaded conditional random fields with a corpus-based statistical model. In this approach, low-layer and high-layer conditional random fields (CRFs) are first used to extract simple and compound terminologies, respectively. Then, Domain Relevance (DR) and Domain Consensus (DC) degrees are calculated to obtain the final domain terminologies. Experimental results show that precision, recall, and F-score are 83.29%, 80.75%, and 82.01%, respectively. The comparison with CRFs and MI+T-value shows that the proposed terminology extraction method is effective.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
SCOPUS_ID:85116495348
A Hybrid Supervised/Unsupervised Machine Learning Approach to Classify Web Services
Reusing software is a promising way to reduce software development costs. Nowadays, applications compose available web services to build new software products. In this context, service composition faces the challenge of proper service selection. This paper presents a model for classifying web services. The service dataset has been collected from the well-known public service registry ProgrammableWeb. The results were obtained by breaking service classification into a two-step process. First, web service data pre-processed with Natural Language Processing (NLP) was clustered by the agglomerative hierarchical clustering algorithm. Second, several supervised learning algorithms were applied to determine service categories. The findings show that the hybrid approach combining hierarchical clustering and SVM provides acceptable results in comparison with other unsupervised/supervised combinations. (A schematic two-step sketch follows this record.)
[ "Low-Resource NLP", "Information Extraction & Text Mining", "Text Classification", "Text Clustering", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 80, 3, 36, 29, 24, 4 ]
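The abstract above (SCOPUS_ID:85116495348) classifies web services in two steps: agglomerative hierarchical clustering of pre-processed descriptions, then supervised classification, with SVM performing best. The sketch below shows both stages with scikit-learn on placeholder data; how the paper actually feeds cluster assignments into the supervised step is not specified here, so the example simply runs the two stages in sequence.

```python
# Two-step sketch: unsupervised clustering of TF-IDF vectors, then a
# supervised SVM over the (pre-existing) category labels. The toy
# descriptions and categories are not from the ProgrammableWeb corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import LinearSVC

descriptions = ["send and receive SMS messages",
                "geocoding and map tiles API",
                "payment processing for online stores",
                "route planning and map directions"]
categories = ["messaging", "mapping", "payments", "mapping"]

X = TfidfVectorizer(stop_words="english").fit_transform(descriptions)

# Step 1: agglomerative hierarchical clustering (needs dense input).
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(X.toarray())
print("cluster assignments:", clusters)

# Step 2: supervised classification into service categories.
svm = LinearSVC().fit(X, categories)
print(svm.predict(X[:1]))
```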
SCOPUS_ID:85067393016
A Hybrid System for Chinese Grammatical Error Diagnosis and Correction
This paper introduces the DM NLP team's system for the NLPTEA 2018 shared task on Chinese Grammatical Error Diagnosis (CGED), which can be used to detect and correct grammatical errors in texts written by Chinese as a Foreign Language (CFL) learners. This task aims not only at detecting four types of grammatical errors, including redundant words (R), missing words (M), bad word selection (S), and disordered words (W), but also at recommending corrections for errors of the M and S types. We proposed a hybrid system including four models for this task, organized in two stages: a detection stage and a correction stage. In the detection stage, we first used a BiLSTM-CRF model, along with some handcrafted features, to tag potential errors by sequence labeling. Then we designed three Grammatical Error Correction (GEC) models to generate corrections, which could help tune the detection result. In the correction stage, candidates were generated by the three GEC models and then merged to output the final corrections for the M and S types. Our system reached the highest precision in the correction subtask, which was the most challenging part of this shared task, and ranked in the top 3 on F1 score for error position detection.
[ "Text Error Correction", "Syntactic Text Processing" ]
[ 26, 15 ]
https://aclanthology.org//2020.nlptea-1.9/
A Hybrid System for NLPTEA-2020 CGED Shared Task
This paper introduces our system for the NLPTEA-2020 shared task on CGED, which is able to detect, locate, identify and correct grammatical errors in Chinese writing. The system consists of three components: GED, GEC, and post-processing. GED is an ensemble of multiple BERT-based sequence labeling models for handling GED tasks. GEC performs error correction. We exploit a collection of heterogeneous models, including Seq2Seq, GECToR and a candidate generation module, to obtain correction candidates. Finally, in the post-processing stage, results from GED and GEC are fused to form the final outputs. We tune our models to lean towards optimizing precision, which we believe is more crucial in practice. As a result, among the six tracks in the shared task, our system performs well in the correction tracks: measured in F1 score, we rank first, with the highest precision, in the TOP3 correction track and third in the TOP1 correction track, also with the highest precision. We are among the top 4 to 6 in the other tracks, except for FPR, where we rank 12th. Our system also achieves the highest precision among the top 10 submissions in the IDENTIFICATION and POSITION tracks.
[ "Text Error Correction", "Syntactic Text Processing" ]
[ 26, 15 ]
http://arxiv.org/abs/2102.04506v1
A Hybrid Task-Oriented Dialog System with Domain and Task Adaptive Pretraining
This paper describes our submission for the End-to-end Multi-domain Task Completion Dialog shared task at the 9th Dialog System Technology Challenge (DSTC-9). Participants in the shared task build an end-to-end task completion dialog system which is evaluated by human evaluation and a user simulator based automatic evaluation. Different from traditional pipelined approaches where modules are optimized individually and suffer from cascading failure, we propose an end-to-end dialog system that 1) uses Generative Pretraining 2 (GPT-2) as the backbone to jointly solve Natural Language Understanding, Dialog State Tracking, and Natural Language Generation tasks, 2) adopts Domain and Task Adaptive Pretraining to tailor GPT-2 to the dialog domain before finetuning, 3) utilizes heuristic pre/post-processing rules that greatly simplify the prediction tasks and improve generalizability, and 4) equips a fault tolerance module to correct errors and inappropriate responses. Our proposed method significantly outperforms baselines and ties for first place in the official evaluation. We make our source code publicly available.
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 52, 11, 72, 38 ]
SCOPUS_ID:85141710090
A Hybrid Translation Model for Pidgin English to English Language Translation
The African continent is made up of people with rich and diverse cultures and spoken languages. Despite the diversity, one common point of unification, especially among West African communities, is the spoken Pidgin English language. With developments in web technology and the dominance of English-language web content, this growing population stands disadvantaged in understanding content on the web. To provide a solution, researchers in machine translation from Pidgin English to English have so far leveraged only unsupervised and supervised Neural Machine Translation (NMT) models. In this paper, we propose a hybrid-strategy model that improves the accuracy of a baseline NMT model in translating Pidgin English to English. From the public JW300 dataset, we used 22,047 sentence pairs for training our model, 1,000 for tuning, and 2,520 for testing. The Bilingual Evaluation Understudy (BLEU) score was employed as the evaluation metric. From our findings, our hybrid model outperforms the baseline NMT model with a BLEU score of 1.05 on two-level translation. This indicates that accuracy depends on the level and type of hybrid used. Studies that look at in-depth pre-translation strategies for developing machine translation models remain an open area for Pidgin English translation. (A minimal BLEU-scoring sketch follows this record.)
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
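The Pidgin-to-English translation abstract above evaluates with the BLEU metric. Below is a minimal sketch of computing a corpus BLEU score with the sacrebleu library; the hypothesis and reference sentences are placeholders, not the JW300 test data.

```python
# Minimal BLEU scoring sketch with sacrebleu; sentences are placeholders.
import sacrebleu

hypotheses = ["how are you doing today"]      # system outputs
references = [["how are you today"]]          # one reference stream, aligned

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.2f}")
```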
SCOPUS_ID:85143058204
A Hybrid Video-to-Text Summarization Framework and Algorithm on Cascading Advanced Extractive and Abstractive-based Approaches for Supporting Viewers' Video Navigation and Understanding
In this work, we propose a hybrid video-to-text summarization (VTS) framework that cascades advanced, code-accessible extractive and abstractive (EA) approaches to support viewers' video navigation and understanding. More precisely, the contributions of this paper are three-fold. First, we devise an automated and unified hybrid VTS framework that takes an arbitrary video as input, generates text transcripts from its human dialogues, and then summarizes the transcripts into one short video synopsis. Second, we adapt the binary merge-sort approach to develop an intuitive and heuristic abstractive-based algorithm, with time complexity O(T_L log T_L) and space complexity O(T_L), where T_L is the total number of word tokens in a text, that dynamically and successively splits and merges a long transcript exceeding the input size limitation of an abstractive model and generates one final semantic video synopsis. Finally, as a pilot study, we test the feasibility of the proposed framework and algorithm in preliminary experimental evaluations on three videos of different genres, contents, and lengths. We show that our approach outperforms or matches most of the individual EA methods by 75% in terms of the ROUGE F1-score. (A schematic split-and-merge sketch follows this record.)
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Captioning", "Summarization", "Text Generation", "Multimodality" ]
[ 20, 3, 39, 30, 47, 74 ]
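The video-to-text summarization abstract above splits a long transcript in a merge-sort-like fashion so that each piece fits an abstractive model's input limit. The sketch below illustrates that recursive split-and-merge control flow only; the summarize function is a placeholder (it merely truncates) standing in for any abstractive summarizer, and the token limit is an assumed parameter.

```python
# Split-and-merge control flow for summarizing a transcript that exceeds
# the model's input limit: halve recursively, summarize each half, then
# summarize the concatenated partial summaries.
def summarize(text: str, max_tokens: int) -> str:
    """Placeholder abstractive summarizer: here it just truncates."""
    return " ".join(text.split()[: max_tokens // 4])

def split_merge_summarize(text: str, max_tokens: int) -> str:
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return summarize(text, max_tokens)            # fits: summarize directly
    mid = len(tokens) // 2                            # split like merge sort
    left = split_merge_summarize(" ".join(tokens[:mid]), max_tokens)
    right = split_merge_summarize(" ".join(tokens[mid:]), max_tokens)
    return summarize(left + " " + right, max_tokens)  # merge partial summaries

transcript = "word " * 5000
print(split_merge_summarize(transcript, max_tokens=1024))
```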
http://arxiv.org/abs/1802.09968v2
A Hybrid Word-Character Approach to Abstractive Summarization
Automatic abstractive text summarization is an important and challenging research topic of natural language processing. Among many widely used languages, the Chinese language has a special property that a Chinese character contains rich information comparable to a word. Existing Chinese text summarization methods, either adopt totally character-based or word-based representations, fail to fully exploit the information carried by both representations. To accurately capture the essence of articles, we propose a hybrid word-character approach (HWC) which preserves the advantages of both word-based and character-based representations. We evaluate the advantage of the proposed HWC approach by applying it to two existing methods, and discover that it generates state-of-the-art performance with a margin of 24 ROUGE points on a widely used dataset LCSTS. In addition, we find an issue contained in the LCSTS dataset and offer a script to remove overlapping pairs (a summary and a short text) to create a clean dataset for the community. The proposed HWC approach also generates the best performance on the new, clean LCSTS dataset.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85075244038
A Hybrid and Adaptive Approach for Classification of Indian Stock Market-Related Tweets
Twitter generates an enormous amount of data daily. Various studies over the years have concluded that tweets have a significant impact on predicting and understanding stock price movements. Designing a system to store relevant tweets and extract information for specific stocks and industries is a relevant and unattempted problem for the Indian stock market, which is the eighth largest in terms of market capitalization. As people with diverse backgrounds tweet about many topics simultaneously, it is nontrivial to identify tweets that are relevant to the stock market. Therefore, a critical component of such a system should contain one module for the extraction and storage of tweets and another module for text classification. In the current study, we propose a hybrid approach for text classification that combines lexicon-based and machine learning-based techniques. The proposed scheme handles the class imbalance problem effectively and has an adaptive characteristic: it automatically grows the lexicon both through WordNet and by using machine learning techniques. This system achieves an F1-score of over 98% on the relevant class, compared to 60% achieved using the baseline method over a corpus of 10,000 tweets. The coverage of tweets by lexicons also improves by 8%. (A minimal WordNet lexicon-expansion sketch follows this record.)
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
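The stock-market tweet classification abstract above grows its lexicon adaptively, partly through WordNet. Below is a minimal sketch of that WordNet expansion step using NLTK; the seed terms are illustrative, and the paper's machine-learning-driven lexicon growth is not reproduced.

```python
# Expand a seed lexicon with WordNet synonyms (requires the NLTK WordNet
# corpus; downloaded on first run).
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)

def expand_lexicon(seed_terms):
    expanded = set(seed_terms)
    for term in seed_terms:
        for synset in wordnet.synsets(term):
            for lemma in synset.lemmas():
                expanded.add(lemma.name().replace("_", " ").lower())
    return expanded

print(sorted(expand_lexicon({"bullish", "profit"})))
```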
SCOPUS_ID:85101962545
A Hybrid and Explainable Deep Learning Framework for SAR Images
Deep learning based patch-wise Synthetic Aperture Radar (SAR) image classification usually requires a large number of labeled data for training. Aiming at understanding SAR images with very limited annotation and taking full advantage of complex-valued SAR data, this paper proposes a general and practical framework for quad-, dual-, and single-polarized SAR data. In this framework, two important elements are taken into consideration: image representation and physical scattering properties. Firstly, a convolutional neural network is applied for SAR image representation. Based on time-frequency analysis and polarimetric decomposition, the scattering labels are extracted from complex SAR data with unsupervised deep learning. Then, a bag of scattering topics for a patch is obtained via topic modeling. By assuming that the generated scattering topics can be regarded as the abstract attributes of SAR images, we propose a soft constraint between scattering topics and image representations to refine the network. Finally, a classifier for land cover and land use semantic labels can be learned with only a few annotated samples. The framework is hybrid for the combination of deep neural network and explainable approaches. Experiments are conducted on Gaofen-3 complex SAR data and the results demonstrate the effectiveness of our proposed framework.
[ "Visual Data in NLP", "Topic Modeling", "Explainability & Interpretability in NLP", "Multimodality", "Responsible & Trustworthy NLP", "Information Extraction & Text Mining" ]
[ 20, 9, 81, 74, 4, 3 ]
SCOPUS_ID:84970950210
A Hybrid method of analyzing patents for sustainable technology management in humanoid robot industry
A humanoid, which refers to a robot that resembles a human body, imitates a human's intelligence, behavior, sense, and interaction in order to provide various types of services to human beings. Humanoids have been studied and developed constantly in order to improve their performance. Humanoids were previously developed for simple repetitive or hard work that required significant human power. However, intelligent service robots have been developed actively these days to provide necessary information and enjoyment; these include robots manufactured for home, entertainment, and personal use. It has become generally known that artificial intelligence humanoid technology will significantly benefit civilization. On the other hand, Successful Research and Development (R & D) on humanoids is possible only if they are developed in a proper direction in accordance with changes in markets and society. Therefore, it is necessary to analyze changes in technology markets and society for developing sustainable Management of Technology (MOT) strategies. In this study, patent data related to humanoids are analyzed by various data mining techniques, including topic modeling, cross-impact analysis, association rule mining, and social network analysis, to suggest sustainable strategies and methodologies for MOT.
[ "Responsible & Trustworthy NLP", "Topic Modeling", "Information Extraction & Text Mining", "Green & Sustainable NLP" ]
[ 4, 9, 3, 68 ]
SCOPUS_ID:85125187669
A Hybrid of Rule-based and HMM-based Part-of-Speech Tagger for Indonesian
Aksara is an Indonesian NLP tool that conforms to the Universal Dependencies annotation guidelines. So far, Aksara can perform four tasks: word segmentation, lemmatization, POS tagging, and morphological feature analysis. However, one of its weaknesses is that it has not solved the word-sense disambiguation problem. This work's objective is to build a hybrid of rule-based and Hidden Markov Model (HMM) based POS taggers that utilizes the output of Aksara's rule-based POS tagger and resolves ambiguity using an HMM and the Viterbi algorithm. We use bigram and trigram models to train the HMM. Our hybrid model is evaluated using 10-fold cross-validation and achieves acceptable results, with the trigram model slightly better. The trigram model achieved 86.62% accuracy and an average F1-score of 82.32%, while the bigram model achieved 86.47% accuracy and an average F1-score of 81.55%. The experiments also show that the hybrid rule-based and HMM-based model is better than the HMM-based model alone, by a margin of 2.03% in accuracy. (A minimal Viterbi disambiguation sketch follows this record.)
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
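The Indonesian POS-tagging abstract above disambiguates the rule-based tagger's candidate tags with an HMM and the Viterbi algorithm. Below is a hedged sketch of that disambiguation step: Viterbi decoding restricted to the candidate tag sets supplied per word. The tiny probability tables and example words are placeholders, not trained values from the paper.

```python
# Bigram-HMM Viterbi decoding over rule-supplied candidate tag sets.
import math

TRANS = {("NOUN", "VERB"): 0.4, ("NOUN", "NOUN"): 0.3,
         ("VERB", "NOUN"): 0.5, ("VERB", "VERB"): 0.1}
EMIT = {("NOUN", "buku"): 0.02, ("VERB", "membaca"): 0.03, ("NOUN", "membaca"): 0.001}
START = {"NOUN": 0.6, "VERB": 0.4}

def viterbi(words, candidates):
    """candidates[i] is the tag set the rule-based tagger allows for words[i]."""
    def logp(p): return math.log(p) if p > 0 else float("-inf")
    paths = {t: (logp(START.get(t, 1e-6)) + logp(EMIT.get((t, words[0]), 1e-6)), [t])
             for t in candidates[0]}
    for i in range(1, len(words)):
        new_paths = {}
        for t in candidates[i]:
            # Best predecessor for tag t at position i.
            best_prev, (best_score, best_seq) = max(
                paths.items(),
                key=lambda kv: kv[1][0] + logp(TRANS.get((kv[0], t), 1e-6)))
            score = (best_score + logp(TRANS.get((best_prev, t), 1e-6))
                     + logp(EMIT.get((t, words[i]), 1e-6)))
            new_paths[t] = (score, best_seq + [t])
        paths = new_paths
    return max(paths.values())[1]

print(viterbi(["membaca", "buku"], [{"VERB", "NOUN"}, {"NOUN"}]))
```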
SCOPUS_ID:85102083087
A Hybridized Deep Learning Method for Bengali Image Captioning
An omnipresent and challenging research topic in computer vision is the generation of captions from an input image. Numerous experiments have previously been conducted on image captioning in English, but work on generating captions from images in Bengali is still sparse and in need of refinement; only a few papers so far have addressed image captioning in Bengali. Hence, we propose a standard strategy for Bengali image caption generation on two different sizes of the Flickr8k dataset and on the BanglaLekha dataset, which is the only publicly available Bengali dataset for image captioning. The Bengali captions from our model were compared with Bengali captions generated by other researchers using different architectures. Additionally, we employed a hybrid approach based on InceptionResnetV2 or Xception as the Convolutional Neural Network and a Bidirectional Long Short-Term Memory or Bidirectional Gated Recurrent Unit on the two Bengali datasets. Furthermore, different combinations of word embeddings were also adopted. Lastly, performance was evaluated using the Bilingual Evaluation Understudy (BLEU) metric, showing that the proposed model indeed performed better on the Bengali dataset consisting of 4,000 images and on the BanglaLekha dataset.
[ "Visual Data in NLP", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 39, 47, 74 ]
SCOPUS_ID:85107338758
A Hyperintensional Theory of Intelligent Question Answering in TIL
The paper deals with natural language processing and question answering over large corpora of formalised natural language texts. Our background theory is the system of Transparent Intensional Logic (TIL), which is a partial, hyperintensional, typed λ-calculus. Having a fine-grained analysis of natural language sentences in the form of TIL constructions, we apply Gentzen's system of natural deduction, adjusted for TIL, to answer questions in an 'intelligent' way. This means that our system derives logical consequences entailed by the input sentences rather than merely searching for answers by keywords. The theory of question answering must involve special rules rooted in the rich semantics of a natural language, and the TIL system makes it possible to formalise all the semantically salient features of natural languages in a fine-grained way. In particular, since TIL is a logic of partial functions, it is apt for dealing with non-referring terms and sentences with truth-value gaps. This is important because sentences often come attached with a presupposition that must be true for the sentence to have any truth-value. And since answering is no less important than raising questions, we also propose a method for adequately and unambiguously answering questions with presuppositions. In case the presupposition of a question is not true (because it is either false or 'gappy'), there is no unambiguous direct answer, and an adequate complete answer is instead a negated presupposition. There are two novelties: one is the analysis and answering of Wh-questions that transform into λ-terms referring to α-objects, where α is not the type of a truth-value. The second is the integration of special rules rooted in the semantics of natural language into Gentzen's system of natural deduction, together with a heuristic method of searching for relevant sentences in the labyrinth of input text data, driven by the constituents of a given question.
[ "Linguistic Theories", "Question Answering", "Natural Language Interfaces", "Linguistics & Cognitive NLP", "Reasoning" ]
[ 57, 27, 11, 48, 8 ]
SCOPUS_ID:78649575005
A Hyperlipemia Information Analysis System based on immune algorithm
This paper designs a Hyperlipemia Information Analysis System, which can perform hyperlipemia document classification and information analysis. For document indexing, we propose an improved approach, called Term Frequency, Inverted Document Frequency and Inverted Entropy (TFIDFIE), to compute term weights. In addition, an improved immune algorithm we proposed, called the Clonal Selection Algorithm Based on Antibody Density (CSABAD), is used in this system. According to the clonal selection principle and a density control mechanism, only those cells that have higher affinity and lower density are selected to proliferate. The system obtains better classification performance. In future work, we will research feature selection and data mining for hyperlipemia. © 2010 IEEE.
[ "Indexing", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 69, 24, 36, 3 ]
SCOPUS_ID:80052246594
A Hypothesis About the Biological Basis of Expert Intuition
It is well established that intuition plays an important role in experts' decision making and thinking generally. However, the theories that have been developed at the cognitive level have limits in their explanatory power and lack detailed explanation of the underlying biological mechanisms. In this paper, we bridge this gap by proposing that Hebb's (1949) concept of cell assembly is the biological realization of Simon's (1974) concept of chunking. This view provides mechanisms at the biological level that are consistent with both biological and psychological findings. To further address the limits of previous theories, we introduce emotions as a component of intuition by showing how they modulate the perception-memory interaction. The idea that intuition lies at the crossroads between perception, knowledge, and emotional modulation sheds new light on the phenomena of expertise and intuition. © 2011 American Psychological Association.
[ "Chunking", "Syntactic Text Processing", "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 43, 15, 48, 57 ]
SCOPUS_ID:84937046300
A Iacopone's anthology. The Tresatti collection
In 1617, the Franciscan monk Francesco Tresatti da Lugnano produced an extensive annotated edition of Le poesie spirituali del B. Iacopone da Todi for Nicolò Misserini's printing house in Venice. This article reveals the limitations which emerge from the questionable methods of textual criticism adopted by the monk. In fact, the work seems merely to hide behind the façade of an editorial undertaking, as numerous structural and conceptual elements tend to push the very figure of Iacopone out of the central focus and instead highlight Tresatti in the role of author. This study reveals how the reworking of Iacopone's material, even visually through outlines and conceptual diagrams, seems to adhere to the model offered by Bonaventura da Bagnoregio in Itinerarium mentis in Deum. It also aims to underline the unique qualities of the work with respect to the poetic and linguistic theories in vogue during the editor's lifetime, while also providing new biographical information on Tresatti.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85065412446
A Information Retrieval Based on Question and Answering and NER for Unstructured Information Without Using SQL
In today's world, information in the form of unstructured data is available in abundance, more often than not as natural language text. For any defense establishment, spy data or other sensitive information received may be best utilized when the information can be extracted efficiently and easily. The proposed model is applicable wherever the influx of text-heavy unstructured data is high, such as information from the World Wide Web, documents related to a particular domain, or any other source where the information is in the form of natural language. The proposed Natural Language Information Interpretation and Representation System (NLIIRS) accepts information in the form of natural language text, processes it, and allows the user to retrieve information by posing questions in natural language. The questions asked by the user are answered by NLIIRS in the form of factoid or phrase-based answers. In comparison to conventional question answering systems, the proposed NLIIRS combines the advantages of named entity recognition with a sequential pattern matching based answer search technique. The proposed technique avoids the use of a structured query language (SQL) back-end for information processing, storage, and extraction: the conversion of user queries to SQL statements and the storage of unstructured text in relational tables can both be avoided by using NLIIRS. Using this approach in our novel text processing algorithm, after every execution step the pattern matching and extraction of answers to queries become more concise and faster. The whole system has been designed on the natural language toolkit of Stanford University, which helped us generate part-of-speech tags, tokenize the data, and form tree structures. The novel text processing algorithm utilizes the lemmatizer, stemmer, and ne_chunker to prepare the text for information retrieval via Q&A. An advantage of this system is that it does not need training. The system enables the user to retrieve any information of his or her choice from the available unstructured information.
[ "Programming Languages in NLP", "Structured Data in NLP", "Question Answering", "Named Entity Recognition", "Multimodality", "Natural Language Interfaces", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 55, 50, 27, 34, 74, 11, 24, 3 ]
SCOPUS_ID:85101724948
A Intelligent CNN-BiLSTM Approach for Chinese Sentiment Analysis on Spark
Short-text Chinese sentiment classification has become an important task in the sentiment analysis field. In recent years, deep learning-based methods have been widely used for sentiment classification. However, with the growing complexity of deep learning models, the number of model parameters has increased, and the effectiveness of a model depends largely on the choice of hyperparameter combinations; optimizing model parameters therefore becomes a tricky problem. In this paper, we propose an intelligent neural network sentiment classification approach that uses a particle swarm optimization algorithm to optimize the parameters of a neural network model (CNN-BiLSTM). Meanwhile, in order to improve the computational efficiency of the algorithm, we propose a parallel variant of particle swarm optimization that can be computed iteratively on the Spark distributed platform (big data technology). Experiments on a Chinese sentiment analysis dataset validate the effectiveness of our approach.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Sentiment Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 78, 24, 3 ]
SCOPUS_ID:0342725926
A JAPANESE TEXT-TO-SPEECH SYSTEM BASED ON MULTI-FORM UNITS WITH CONSIDERATION OF FREQUENCY DISTRIBUTION IN JAPANESE
This paper proposes our new text-to-speech (TTS) system, which concatenates large numbers of speech segments to produce very natural and intelligible synthetic speech. One novel point of our system is its new synthesis unit, which has three remarkable characteristics: (1) The synthesis units contain all Japanese syllables together with all possible vowel sequences, so very smooth synthetic speech is produced. (2) Both the preceding and succeeding phoneme environments are considered when speech segments are concatenated, so natural-sounding transients from a vowel to a consonant, which is the only concatenation point with the proposed unit, are present in the synthetic speech. (3) Each unit has various fundamental frequency (F0) contours; therefore, F0 modification rates are very small in any synthesis event, and the F0 modification process causes only minor distortion. To develop a unit database efficiently and effectively, we analyzed 4,850,000 Japanese phrases (breath groups) containing 87,810,000 phonemes and ranked them in order of appearance frequency. Listening tests confirm the high intelligibility and naturalness of speech produced by our new TTS system. It uses the 50,000 highest-frequency units, which cover over 77% of Japanese texts.
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
SCOPUS_ID:0034229902
A JPEG variable quantization method for compound documents
In this paper, we present a JPEG-compliant method for the efficient compression of compound documents using variable quantization. Based on the DCT activity of each 8 x 8 block, our scheme automatically adjusts the quantization scaling factors so that text blocks are compressed at higher quality than image blocks. Results from three different quantization mappings are also reported. © 2000 IEEE.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
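The compound-document compression abstract above (SCOPUS_ID:0034229902) adjusts JPEG quantization scaling factors per 8x8 block according to the block's DCT activity. Below is a rough sketch of that block-activity decision using an AC-energy measure; the threshold, the two scaling factors, and the random test image are illustrative assumptions, not the paper's quantization mappings.

```python
# Classify 8x8 blocks by DCT AC energy: high-activity (text-like) blocks get
# a finer quantization scale, smooth blocks a coarser one.
import numpy as np
from scipy.fft import dctn

def block_scaling_factors(image, threshold=500.0):
    h, w = image.shape
    factors = np.empty((h // 8, w // 8))
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            block = image[i:i+8, j:j+8].astype(float)
            coeffs = dctn(block, norm="ortho")
            ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2  # drop DC term
            factors[i // 8, j // 8] = 0.5 if ac_energy > threshold else 1.5
    return factors

img = np.random.randint(0, 256, size=(64, 64))
print(block_scaling_factors(img))
```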
SCOPUS_ID:85126259121
A Japanese 4-year-old with protracted phonological development: the challenge of coronals
This study examines the phonology of a Japanese four-year-old with mildly protracted phonological development (PPD) as a contribution to a special crosslinguistic issue presenting individual profiles in PPD within the framework of constraint-based nonlinear phonology. Although the child’s word structure and vowels were well-established, certain consonant classes presented challenges. Coronal anterior obstruents often showed posteriorization (backing): dorsal stops replaced coronal stops, and with some exceptions, alveolopalatal affricates replaced anterior fricatives and affricates. The feature [+continuant] was also not yet established: palatal and bilabial fricatives and /h/ were either deleted or replaced with glottal stop; and non-anterior affricates replaced coronal fricatives. If affricates are analyzed as a sequence of [-continuant]-[+continuant], they were possible transitional elements from non-continuants to continuants. The profile culminates with suggestions for intervention based on the nonlinear phonological analysis, consistent with other papers in this special issue.
[ "Phonology", "Syntactic Text Processing" ]
[ 6, 15 ]
https://aclanthology.org//W09-0618/
A Japanese Corpus of Referring Expressions Used in a Situated Collaboration Task
[ "Text Generation" ]
[ 47 ]
SCOPUS_ID:85141884336
A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain
We annotate 35,000 SNS posts with both the writer's subjective sentiment polarity labels and the reader's objective ones to construct a Japanese sentiment analysis dataset. Our dataset includes intensity labels (none, weak, medium, and strong) for each of the eight basic emotions by Plutchik (joy, sadness, anticipation, surprise, anger, fear, disgust, and trust) as well as sentiment polarity labels (strong positive, positive, neutral, negative, and strong negative). Previous studies on emotion analysis have studied the analysis of basic emotions and sentiment polarity independently. In other words, there are few corpora that are annotated with both basic emotions and sentiment polarity. Our dataset is the first large-scale corpus to annotate both of these emotion labels, and from both the writer's and reader's perspectives. In this paper, we analyze the relationship between basic emotion intensity and sentiment polarity on our dataset and report the results of benchmarking sentiment polarity classification.
[ "Text Classification", "Polarity Analysis", "Sentiment Analysis", "Emotion Analysis", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 33, 78, 61, 24, 3 ]
SCOPUS_ID:84885462223
A Japanese OCR post-processing approach based on dictionary matching
This paper describes a dictionary-based post-processing approach for Japanese character recognition. By analyzing experimental data from the OCR process, we find that some segmentation and recognition results do not conform to lexical rules and are generated purely from character shape. If the glyphs of the characters to be recognized are similar to those of other characters, the OCR process easily goes wrong. For these errors we put forward an approach based on Limited Length Segmentation Matching and a Bayesian statistical classifier. With this method, most of these recognition mistakes can be corrected. The experimental results show that this method is an effective way to improve the recognition rate for Japanese characters. © 2013 IEEE.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
https://aclanthology.org//W02-1114/
A Japanese Semantic Network Built on a Pulsed Neural Network with Encoding Associative Concept Dictionaries
[ "Knowledge Representation", "Semantic Text Processing" ]
[ 18, 72 ]
SCOPUS_ID:57849165426
A Japanese language model with quote detection by using surface information
In natural language processing, quotes are an important grammatical category that needs consideration. In this paper, we propose a Japanese language model that includes quotes as a category. Quotes are recognized using surface information and dependencies between words, and are then divided into direct and indirect speech. Finally, we extract the quotes and create a relation between them and the original text. After the text has been analyzed, we obtain a tree structure with all the elements hierarchically categorized. We have experimentally tested the accuracy of the parsing process by creating a prototype system. The results show an overall correct quote detection rate of 67.29%. © 2008 IEEE.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
https://aclanthology.org//2007.mtsummit-papers.63/
A Japanese-English patent parallel corpus
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:79959912368
A Java implementation of a Question Answering System based on conditional knowledge in client-server technology
A conditional schema is a graph-based structure that is able to represent conditional knowledge. This structure was introduced in [1]. The inference mechanism corresponding to conditional schema representations was developed in [2]. A Question Answering System based on conditional knowledge was presented in [3]. In this paper we describe the evolution of that QA system into a client-server application. The conditional schema is defined and exemplified. Since the Question Answering System was described in a previous article, we summarize its functionality only briefly, and the server and the client are presented step by step in two separate sections, each oriented specifically towards the task that the described part of the application performs.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:85062808373
A Javanese Syllabifier Based on its Orthographic System
Automatic syllabification is considered a largely solved problem in high-resource languages. However, it is still badly needed in under-resourced and critical languages such as Javanese. Syllabification is the basic backbone of any task related to the transliteration of Abugida or syllabary scripts, word recognition, and speech synthesis. Due to the lack of datasets and resources, this research applies a Finite State Transducer model to build a syllabifier for Javanese documents written in Latin script. The segmentation rules are based on the orthographic system of the Javanese script. The experiments show that the accuracy of segmenting words into syllables reaches 95.56% for a dataset scraped from Wiki and 97.92% for a dataset taken from the Javanese magazine Djaka Lodang. These satisfying accuracy rates signify that our syllabifier is capable of providing a corpus of Javanese syllables for more complex applications such as transliteration, word boundary prediction, or Optical Character Recognition for Javanese scripts.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:85125350735
A Joint Entity-Relation Extraction Method with Sparse Parameter Sharing Architecture
Existing parameter-sharing joint entity-relation extraction models cannot learn task-specific features for the named entity recognition and relation classification subtasks. To address this problem, this work proposes a sparse parameter-sharing joint entity-relation extraction method. The proposed method incorporates a sparse parameter-sharing architecture into the existing parameter-sharing joint entity-relation extraction model SpERT to learn subnetworks for the two subtasks. The subnetworks inherit and partially share the base network parameters, which provides the model with both implicit feature-interaction learning and task-specific feature learning abilities. Experiments are conducted on the ADE, SciERC, and CoNLL04 datasets. The experimental results show that the proposed method outperforms the baseline model SpERT by up to 1.40% in F1. In addition, the performance on the SciERC and ADE datasets exceeds state-of-the-art results.
[ "Relation Extraction", "Information Extraction & Text Mining" ]
[ 75, 3 ]
SCOPUS_ID:85137176909
A Joint Extraction Strategy for Chinese Medical Text Based on Sequence Tagging
Research on entity and relation extraction in medical text is the basis of constructing medical knowledge graphs. Currently, the mainstream pipelined extraction methods do not consider the connection between entity recognition and relation classification, and they cannot address the problem of overlapping relations among triplets. This paper proposes a joint extraction strategy for entities and relations in Chinese medical text based on sequence tagging, which splits the joint extraction task into two sequence tagging subtasks, namely HE and TRE, establishing the connection between the subtasks through a shared encoding layer and the semantic information of the head entity. The pre-trained language model RoBERTa is incorporated to obtain richer numerical representations of word vectors, and word vectors and part-of-speech vectors are fused as the word-representation input for joint extraction, in combination with a GRU-BiLSTM model that extracts entities and relations directly. Experimental results show that this model achieves a 54.44% F-value on the Chinese medical dataset CMeIE, which outperforms the extraction performance of other pre-trained language models.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning", "Tagging", "Information Extraction & Text Mining" ]
[ 52, 72, 15, 12, 63, 3 ]
SCOPUS_ID:85128929881
A Joint Framework for Explainable Recommendation with Knowledge Reasoning and Graph Representation
With the development of recommendation systems (RSs), researchers are no longer only satisfied with the recommendation results, but also put forward requirements for the recommendation reasons, which helps improve user experience and discover system defects. Recently, some methods develop knowledge graph reasoning via reinforcement learning for explainable recommendation. Different from traditional RSs, these methods generate corresponding paths reasoned from KG to achieve explicit explainability while providing recommended items. But they suffer from a limitation of the fixed representations that are pre-trained on the KG, which leads to a gap between KG representation and explainable recommendation. To tackle this issue, we propose a joint framework for explainable recommendation with knowledge reasoning and graph representation. A sub-graph is constructed from the paths generated through knowledge reasoning and utilized to optimize the KG representations. In this way, knowledge reasoning and graph representation are optimized jointly and form a positive regulation system. Besides, due to more than one candidate in the step of knowledge reasoning, an attention mechanism is also employed to capture the preference. Extensive experiments are conducted on public real-world datasets to show the superior performance of the proposed method. Moreover, the results of the online A/B test on the large-scale Meituan Waimai (MTWM) KG consistently show our method brings benefits to the industry.
[ "Semantic Text Processing", "Structured Data in NLP", "Representation Learning", "Explainability & Interpretability in NLP", "Knowledge Representation", "Knowledge Graph Reasoning", "Responsible & Trustworthy NLP", "Reasoning", "Multimodality" ]
[ 72, 50, 12, 81, 18, 54, 4, 8, 74 ]
SCOPUS_ID:85145349140
A Joint Knowledge Graph Reasoning Method
Facing the massive data generated by edge intelligent interconnection applications in the mobile edge computing (MEC) environment, timely and efficient data mining has become an urgent technical problem to be solved. Knowledge graph reasoning is a promising solution to the above challenges. However, traditional knowledge graph reasoning methods cannot meet MEC's requirements for low latency and limited resources. This paper presents an MEC-oriented knowledge graph reasoning method, gate recursive unit for logic reasoning (GRULR). Specifically, the technique regards logical rules as variables and trains two models in an iterative manner under the MEC architecture, namely a Rule Miner and a Reasoning Evaluator. The two models are deployed in the central cloud and the edge cloud respectively, jointly trained, and mutually enhanced. The Rule Miner generates rule sequences based on a gate recurrent unit (GRU) network and optimizes network parameters by using high-quality rules generated by the Reasoning Evaluator. Experiments show that this method has a good edge reasoning effect and can generate high-quality logic rules and send them to the central cloud server for sharing.
[ "Semantic Text Processing", "Structured Data in NLP", "Knowledge Representation", "Knowledge Graph Reasoning", "Reasoning", "Multimodality" ]
[ 72, 50, 18, 54, 8, 74 ]
SCOPUS_ID:85140453672
A Joint Label-Enhanced Representation Based on Pre-trained Model for Charge Prediction
As one of the important subtasks of legal judgment prediction, charge prediction aims to predict the final charge based on the fact description of a legal case. It can help make legal judgments or provide professional legal guidance for non-professionals. Most existing works focus on predicting charges based only on the fact description of a legal case while ignoring the semantic information of the charge labels. Moreover, suffering from data imbalance in real applications, they are not suitable for predicting few-shot charges due to the lack of training data. To address these issues, we propose a novel legal text representation based on a pre-trained model for charge prediction, named joint label-enhanced representation (JLER), which provides abundant information about charge labels as additional legal knowledge for the pre-trained model to improve charge prediction performance. JLER improves prediction accuracy and interpretability by combining charge label information, enhanced by double-layer attention, with legal text information, and it relieves the impact of data imbalance by fine-tuning the pre-trained model from both the text-feature and charge-label sides. Experimental results on two real-world datasets demonstrate that our proposed model achieves significant and consistent improvements compared to state-of-the-art baselines. Specifically, our model outperforms the baselines by about 13.9% accuracy on few-shot charge prediction. This indicates that the proposed JLER model performs well for charge prediction and is expected to be applicable to other subtasks of legal judgment prediction.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Representation Learning", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 12, 4 ]
http://arxiv.org/abs/2010.11980v1
A Joint Learning Approach based on Self-Distillation for Keyphrase Extraction from Scientific Documents
Keyphrase extraction is the task of extracting a small set of phrases that best describe a document. Most existing benchmark datasets for the task typically have limited numbers of annotated documents, making it challenging to train increasingly complex neural networks. In contrast, digital libraries store millions of scientific articles online, covering a wide range of topics. While a significant portion of these articles contain keyphrases provided by their authors, most other articles lack such kind of annotations. Therefore, to effectively utilize these large amounts of unlabeled articles, we propose a simple and efficient joint learning approach based on the idea of self-distillation. Experimental results show that our approach consistently improves the performance of baseline models for keyphrase extraction. Furthermore, our best models outperform previous methods for the task, achieving new state-of-the-art results on two public benchmarks: Inspec and SemEval-2017.
[ "Language Models", "Information Extraction & Text Mining", "Semantic Text Processing", "Green & Sustainable NLP", "Term Extraction", "Responsible & Trustworthy NLP" ]
[ 52, 3, 72, 68, 1, 4 ]
http://arxiv.org/abs/2204.03208v1
A Joint Learning Approach for Semi-supervised Neural Topic Modeling
Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semi-supervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for datasets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.
[ "Low-Resource NLP", "Topic Modeling", "Information Retrieval", "Responsible & Trustworthy NLP", "Text Classification", "Information Extraction & Text Mining" ]
[ 80, 9, 24, 4, 36, 3 ]
SCOPUS_ID:85128855720
A Joint Learning Method for Biomedical Entity Linking
Biomedical texts contain valuable domain knowledge for biomedical researchers. It is of great significance to make full use of the massive biomedical literature, discover important hidden information and acquire professional knowledge from it. Biomedical entity linking is the identification of a named entity in a biomedical text and the mapping of the mention strings representing that entity to the corresponding concepts in a domain-specialized biomedical knowledge base. However, the biomedical entity linking task usually faces two major challenges: (1) ambiguity in the description of entities by natural language, and (2) heterogeneity between natural language texts and the biomedical knowledge base. Traditional methods based on feature engineering or rule discovery rely on manual feature selection or rule definition, and error propagation may also occur in the pipeline model. Therefore, this work presents an entity linking method combining deep learning and a knowledge base, mining the structural similarity between natural language text and the knowledge base. Biomedical entity recognition and alignment are jointly processed, aiming to automatically acquire the semantic information of biomedical entities and to mine the semantic relationships between biomedical entities through a standard biomedical knowledge base. Experiments show that this method achieves good results in entity recognition and alignment, and significantly improves task accuracy by achieving over 10% performance improvement on the entity linking task.
[ "Knowledge Representation", "Named Entity Recognition", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 18, 34, 72, 3 ]
SCOPUS_ID:85133264484
A Joint Learning Model to Extract Entities and Relations for Chinese Literature Based on Self-Attention
Extracting structured information from massive and heterogeneous text is a hot research topic in the field of natural language processing. It includes two key technologies: named entity recognition (NER) and relation extraction (RE). However, previous NER models consider less about the influence of mutual attention between words in the text on the prediction of entity labels, and there is less research on how to more fully extract sentence information for relational classification. In addition, previous research treats NER and RE as a pipeline of two separated tasks, which neglects the connection between them, and is mainly focused on the English corpus. In this paper, based on the self-attention mechanism, bidirectional long short-term memory (BiLSTM) neural network and conditional random field (CRF) model, we put forth a Chinese NER method based on BiLSTM-Self-Attention-CRF and a RE method based on BiLSTM-Multilevel-Attention in the field of Chinese literature. In particular, considering the relationship between these two tasks in terms of word vector and context feature representation in the neural network model, we put forth a joint learning method for NER and RE tasks based on the same underlying module, which jointly updates the parameters of the shared module during the training of these two tasks. For performance evaluation, we make use of the largest Chinese data set containing these two tasks. Experimental results show that the proposed independently trained NER and RE models achieve better performance than all previous methods, and our joint NER-RE training model outperforms the independently-trained NER and RE model.
[ "Language Models", "Semantic Text Processing", "Relation Extraction", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 52, 72, 75, 34, 3 ]
SCOPUS_ID:85147673109
A Joint Learning Sentiment Analysis Method Incorporating Emoji-Augmentation
Social media is the platform where most people share their opinions, and emojis are also widely used to express moods, emotions, and feelings on social media. There has been much research on emojis and sentiment analysis. However, existing methods mainly face two limitations. First, since deep learning relies on large amounts of labeled data, there are not enough emoji training samples to train effectively. Second, they consider the sentiment of emojis and texts separately, not fully exploring the impact of emojis on the sentiment polarity of texts. In this paper, we propose a joint learning sentiment analysis method incorporating emoji augmentation, and the method has two advantages compared with existing work. First, we optimize the easy data augmentation method so that the newly generated sentences can also preserve the semantic information of emojis, which relieves the problem of insufficient training data with emojis. Second, it fuses emoji and text features to allow the model to better learn the mutual emotional semantics between text and emojis, jointly training emojis and words to obtain sentence representations containing more semantic information from both emojis and text. Our experimental results show that the proposed method can significantly improve performance compared with several baselines on two datasets.
[ "Visual Data in NLP", "Multimodality", "Sentiment Analysis" ]
[ 20, 74, 78 ]
SCOPUS_ID:85063904782
A Joint Model based on CNN-LSTMs in Dialogue Understanding
In task-oriented dialogue systems, intent recognition and slot filling are two key subtasks of the dialogue understanding (DU) module. Considering the strong relationship between intents and slots, this paper proposes an encoder-decoder architecture (using CNN-LSTMs) based on an attention mechanism to jointly model the two subtasks. Meanwhile, this paper also discusses the performance impact of the emitted slot information on intent recognition when modeling jointly. Our proposed model obtains a 1.31% accuracy improvement on intent recognition and a 0.90% gain on slot filling over the baseline model.
[ "Language Models", "Semantic Text Processing", "Semantic Parsing", "Sentiment Analysis", "Intent Recognition", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 40, 78, 79, 11, 38 ]
SCOPUS_ID:85097309159
A Joint Model for Aspect-Category Sentiment Analysis with Shared Sentiment Prediction Layer
Aspect-category sentiment analysis (ACSA) aims to predict the aspect categories mentioned in texts and their corresponding sentiment polarities. Some joint models have been proposed to address this task. Given a text, these joint models detect all the aspect categories mentioned in the text and predict the sentiment polarities toward them at once. Although these joint models obtain promising performances, they train separate parameters for each aspect category and therefore suffer from data deficiency of some aspect categories. To solve this problem, we propose a novel joint model which contains a shared sentiment prediction layer. The shared sentiment prediction layer transfers sentiment knowledge between aspect categories and alleviates the problem caused by data deficiency. Experiments conducted on SemEval-2016 Datasets demonstrate the effectiveness of our model.
[ "Aspect-based Sentiment Analysis", "Sentiment Analysis" ]
[ 23, 78 ]
http://arxiv.org/abs/1911.01678v4
A Joint Model for Definition Extraction with Syntactic Connection and Semantic Consistency
Definition Extraction (DE) is one of the well-known topics in Information Extraction that aims to identify terms and their corresponding definitions in unstructured texts. This task can be formalized either as a sentence classification task (i.e., containing term-definition pairs or not) or a sequential labeling task (i.e., identifying the boundaries of the terms and definitions). The previous works for DE have only focused on one of the two approaches, failing to model the inter-dependencies between the two tasks. In this work, we propose a novel model for DE that simultaneously performs the two tasks in a single framework to benefit from their inter-dependencies. Our model features deep learning architectures to exploit the global structures of the input sentences as well as the semantic consistencies between the terms and the definitions, thereby improving the quality of the representation vectors for DE. Besides the joint inference between sentence classification and sequential labeling, the proposed model is fundamentally different from the prior work for DE in that the prior work has only employed the local structures of the input sentences (i.e., word-to-word relations), and not yet considered the semantic consistencies between terms and definitions. In order to implement these novel ideas, our model presents a multi-task learning framework that employs graph convolutional neural networks and predicts the dependency paths between the terms and the definitions. We also seek to enforce the consistency between the representations of the terms and definitions both globally (i.e., increasing semantic consistency between the representations of the entire sentences and the terms/definitions) and locally (i.e., promoting the similarity between the representations of the terms and the definitions).
[ "Syntactic Text Processing", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 15, 24, 36, 3 ]
http://arxiv.org/abs/2106.03345v1
A Joint Model for Dropped Pronoun Recovery and Conversational Discourse Parsing in Chinese Conversational Speech
In this paper, we present a neural model for joint dropped pronoun recovery (DPR) and conversational discourse parsing (CDP) in Chinese conversational speech. We show that DPR and CDP are closely related, and a joint model benefits both tasks. We refer to our model as DiscProReco, and it first encodes the tokens in each utterance in a conversation with a directed Graph Convolutional Network (GCN). The token states for an utterance are then aggregated to produce a single state for each utterance. The utterance states are then fed into a biaffine classifier to construct a conversational discourse graph. A second (multi-relational) GCN is then applied to the utterance states to produce a discourse relation-augmented representation for the utterances, which is then fused together with the token states in each utterance as input to a dropped pronoun recovery layer. The joint model is trained and evaluated on a new Structure Parsing-enhanced Dropped Pronoun Recovery (SPDPR) dataset that we annotated with both types of information. Experimental results on the SPDPR dataset and other benchmarks show that DiscProReco significantly outperforms the state-of-the-art baselines on both tasks.
[ "Semantic Text Processing", "Structured Data in NLP", "Semantic Parsing", "Speech & Audio in NLP", "Discourse & Pragmatics", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Multimodality" ]
[ 72, 50, 40, 70, 71, 11, 38, 74 ]
SCOPUS_ID:85129664788
A Joint Model for Extracting Latent Aspects and Their Ratings From Online Employee Reviews
Personal descriptions of a company associated with job satisfaction, company culture, and opinions of senior leadership are available on workplace community websites. However, it is almost impossible to read all of the different and possibly even contradictory reviews and make an accurate overall rating. Therefore, extracting aspects or sentiments from online reviews and the corresponding ratings is an important challenge. We collect online anonymous employee reviews from Glassdoor.com, which allows people to evaluate and review the companies they have worked for or are working for. Here, we propose a joint rule-based model which combines the numerical evaluation, reflected in the form of 1–5 stars, and the review text to extract aspects. The model first takes the five aspects with manually screened initial word sets, expands the aspect keyword sets through bootstrapping semi-supervised learning, and then uses latent rating regression to obtain the aspect score and aspect weight to update the corresponding score. Our experimental evaluation shows better results compared with unsupervised learning using latent Dirichlet allocation. The results could not only help companies understand their strengths and weaknesses, but also help job seekers when applying to companies.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85097264282
A Joint Model for Graph-Based Chinese Dependency Parsing
In Chinese dependency parsing, the joint model of word segmentation, POS tagging and dependency parsing has become the mainstream framework because it can eliminate error propagation and share knowledge, and among such models the transition-based model with feature templates has maintained the best performance. Recently, a graph-based joint model [19] of word segmentation and dependency parsing achieved better performance, demonstrating the advantages of graph-based models. However, that work cannot provide POS information for downstream tasks, and the POS tagging task has been shown to be helpful to dependency parsing in research on transition-based models. Therefore, we propose a graph-based joint model for Chinese word segmentation, POS tagging and dependency parsing. We design a character-level POS tagging task and then train it jointly with the model of [19]. We adopt two methods for joining the POS tagging task: one shares parameters, and the other uses a tag attention mechanism, which enables the three tasks to better share intermediate information and improve each other's performance. Experimental results on the Penn Chinese Treebank (CTB5) show that our proposed joint model improves dependency parsing by 0.38% over the model of [19]. Compared with the best transition-based joint model, our model improves by 0.18%, 0.35% and 5.99% in terms of word segmentation, POS tagging and dependency parsing, respectively.
[ "Structured Data in NLP", "Syntactic Text Processing", "Syntactic Parsing", "Tagging", "Text Segmentation", "Multimodality" ]
[ 50, 15, 28, 63, 21, 74 ]
SCOPUS_ID:85130800236
A Joint Model for Hierarchical Nested Information Extraction
During the long-term power construction process, the power dispatching department has accumulated many notification texts related to adjustments of the grid operation mode. There is an urgent need to study named entity recognition techniques to automatically recognize power equipment and operation modes, in order to support automatic verification of the grid operation mode. By analyzing the characteristics of notification texts, a classification method for hierarchical nested named entities is proposed for the first time in the power domain. The entities are divided into two layers with nested relationships, and a corpus of grid operation mode is constructed. We further propose a joint model based on character-word feature fusion and an attention mechanism. The model uses a parameter sharing approach for joint recognition of hierarchical nested entities in the corpus and further introduces an attention mechanism to optimize the feature interaction between hierarchical nested entities. In addition, we concatenate embeddings of characters and words as feature input to obtain richer semantic features. Experimental results show that our model achieves state-of-the-art results. Ultimately, the recognition results can be stored as a standardized verification information chain, providing effective data support for automatic verification of the grid operation mode and ensuring safe and stable operation of the grid.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
http://arxiv.org/abs/1901.01010v2
A Joint Model for Multimodal Document Quality Assessment
The quality of a document is affected by various factors, including grammaticality, readability, stylistics, and expertise depth, making the task of document quality assessment a complex one. In this paper, we explore this task in the context of assessing the quality of Wikipedia articles and academic papers. Observing that the visual rendering of a document can capture implicit quality indicators that are not present in the document text --- such as images, font choices, and visual layout --- we propose a joint model that combines the text content with a visual rendering of the document for document quality assessment. Experimental results over two datasets reveal that textual and visual features are complementary, achieving state-of-the-art results.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85103754875
A Joint Model for Named Entity Recognition with Sentence-Level Entity Type Attentions
Named entity recognition (NER) is a fundamental task in natural language processing, which is typically addressed by neural conditional random field (CRF) models, regarding the task as a sequence labeling problem. Sentence-level information has been shown to be helpful for the task. Equipped with sophisticated neural structures such as the long short-term memory (LSTM) network, implicit sentence-level global information can be fully exploited, and has also been demonstrated effective in previous studies. In this work, we propose a new method for better learning these sentence-level features in an explicit manner. Concretely, we suggest an auxiliary task, namely sentence-level named type prediction (i.e., determining whether a sentence includes a certain kind of named type), to supervise the feature representation learning globally. We conduct experiments on six benchmark datasets in various languages to evaluate our method. The results show that our final model is highly effective, resulting in significant improvements and leading to highly competitive results on all datasets.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
https://aclanthology.org//W09-4503/
A Joint Model for Normalizing Gene and Organism Mentions in Text
[ "Information Extraction & Text Mining" ]
[ 3 ]
http://arxiv.org/abs/1706.01450v1
A Joint Model for Question Answering and Question Generation
We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents. The proposed model uses a sequence-to-sequence framework that encodes the document and generates a question (answer) given an answer (question). Significant improvement in model performance is observed empirically on the SQuAD corpus, confirming our hypothesis that the model benefits from jointly learning to perform both tasks. We believe the joint model's novelty offers a new perspective on machine comprehension beyond architectural engineering, and serves as a first step towards autonomous information seeking.
[ "Question Answering", "Natural Language Interfaces", "Question Generation", "Text Generation" ]
[ 27, 11, 76, 47 ]
SCOPUS_ID:84936930061
A Joint Model for Topic-Sentiment Evolution over Time
Most existing topic models focus either on extracting static topic-sentiment conjunctions or on topic-wise evolution over time, leaving out topic-sentiment dynamics and missing the opportunity to provide a more in-depth analysis of textual data. In this paper, we propose an LDA-based topic model for analyzing topic-sentiment evolution over time by modeling time jointly with topics and sentiments. We derive an inference algorithm based on a Gibbs sampling process. Finally, we present results on review and news datasets showing interpretable trends and a strong correlation with ground truth, in particular for topic-sentiment evolution over time.
[ "Topic Modeling", "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 9, 3, 78 ]
https://aclanthology.org//W16-1603/
A Joint Model for Word Embedding and Word Morphology
This paper presents a joint model for performing unsupervised morphological analysis on words and learning a character-level composition function from morphemes to word embeddings. Our model splits individual words into segments, and weights each segment according to its ability to predict context words. Our morphological analysis is comparable to dedicated morphological analyzers at the task of morpheme boundary recovery, and also performs better than word-based embedding models at the task of syntactic analogy answering. Finally, we show that incorporating morphology explicitly into character-level models helps them produce embeddings for unseen words which correlate better with human judgments.
[ "Representation Learning", "Semantic Text Processing", "Syntactic Text Processing", "Morphology" ]
[ 12, 72, 15, 73 ]
SCOPUS_ID:85085038971
A Joint Model of Named Entity Recognition and Coreference Resolution Based on Hybrid Neural Network
Considering that both named entity recognition and coreference resolution depend on the same context of the entity word, this paper proposes a hybrid neural network model to settle these problems, which contains a named entity recognition (NER) module and a coreference resolution (CR) module. NER and CR share the same bidirectional LSTM encoding layer, which is used to encode each input word by taking into account the context on both sides of the word. The contextual information of entities obtained in the BiLSTM encoding layer is further passed to an FFNN module to improve coreference resolution. Furthermore, by adding domain documents and chapter-level semantic vectors to the FFNN, the coreference resolution algorithm is improved and the coreference resolution model is optimized. Finally, we conduct experiments on a domain dataset to verify the effectiveness of our method. The joint model can effectively improve the accuracy of the coreference resolution task.
[ "Language Models", "Semantic Text Processing", "Named Entity Recognition", "Coreference Resolution", "Information Extraction & Text Mining" ]
[ 52, 72, 34, 13, 3 ]
SCOPUS_ID:85060384403
A Joint Model of Term Extraction and Polarity Classification for Aspect-based Sentiment Analysis
Aspect-based sentiment analysis (ABSA) is a significant task in opinion mining, which aims to extract explicit aspects of an entity along with the sentiment expressed towards these aspects. To achieve this goal, two subtasks are performed: aspect term extraction (ATE) and aspect polarity classification (APC). However, recent work has solved these two subtasks separately or has only focused on either subtask. In addition, the sequential model of two subtasks may cause chain errors from ATE to APC and designing and running two models consumes too many resources. In this paper, we propose a joint model for ABSA that can deal with two subtasks, ATE and APC, simultaneously. The experimental results on two datasets from SemEval 2014 show that our model, which is named MATEPC (Model of Aspect Term Extraction and Polarity Classification), outperforms several baseline models in the ATE task and gives a promising result in the APC task by dealing with ATE and APC at the same time.
[ "Information Retrieval", "Opinion Mining", "Term Extraction", "Polarity Analysis", "Aspect-based Sentiment Analysis", "Sentiment Analysis", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 49, 1, 33, 23, 78, 36, 3 ]
SCOPUS_ID:85123445683
A Joint Model with Multi-Granularity Features of Low-resource Language POS Tagging and Dependency Parsing
The study of part-of-speech tagging and dependency parsing for low-resource languages plays an important role in advancing low-resource natural language processing tasks. For low-resource language word embedding representation, existing work does not make full use of character- and sub-word-level information encoding, resulting in models that cannot use features of different granularities. For this reason, a word embedding representation that integrates multi-granularity features is proposed: different language models are used separately to obtain semantic information at the character, sub-word and word levels, and the three granularities of word embeddings are combined to enrich the semantic information and alleviate the poor performance of the dependency parsing model caused by the scarcity of annotated data. The part-of-speech tagging and dependency parsing models are further jointly trained, so that the models can share knowledge with each other and reduce the propagation of part-of-speech tagging errors into the dependency parsing task. Taking Thai and Vietnamese as the research objects, on the Penn Treebank dataset, the proposed method significantly improves over the baseline model in terms of UAS, LAS, and POS accuracy.
[ "Low-Resource NLP", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning", "Syntactic Parsing", "Tagging", "Responsible & Trustworthy NLP" ]
[ 80, 72, 15, 12, 28, 63, 4 ]
SCOPUS_ID:85054275888
A Joint Multi-Task Learning Framework for Spoken Language Understanding
Spoken language understanding (SLU), which mainly involves intent prediction and slot filling, is a core component of a spoken dialogue system. Usually, intent determination and slot filling are carried out independently. Recently, joint learning of intent determination and slot filling has been proved effective in SLU. In this paper, we propose a novel joint multi-task learning framework for SLU, which predicts user intent and slot labels via a shared LSTM architecture, as well as the next word's part of speech (POS) via a neural language model. The proposed model exploits the correlation among different tasks and takes full advantage of all supervised signals. We conduct experiments on the popular benchmark ATIS dataset, which consists of rich dialogues collected from the real world. The experimental results show that our model achieves state-of-the-art results in terms of several popular metrics.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Semantic Parsing", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 40, 11, 38, 4 ]
http://arxiv.org/abs/1411.5732v1
A Joint Probabilistic Classification Model of Relevant and Irrelevant Sentences in Mathematical Word Problems
Estimating the difficulty level of math word problems is an important task for many educational applications. Identification of relevant and irrelevant sentences in math word problems is an important step for calculating the difficulty levels of such problems. This paper addresses a novel application of text categorization to identify two types of sentences in mathematical word problems, namely relevant and irrelevant sentences. A novel joint probabilistic classification model is proposed to estimate the joint probability of classification decisions for all sentences of a math word problem by utilizing the correlation among all sentences along with the correlation between the question sentence and other sentences, and sentence text. The proposed model is compared with i) a SVM classifier which makes independent classification decisions for individual sentences by only using the sentence text and ii) a novel SVM classifier that considers the correlation between the question sentence and other sentences along with the sentence text. An extensive set of experiments demonstrates the effectiveness of the joint probabilistic classification model for identifying relevant and irrelevant sentences as well as the novel SVM classifier that utilizes the correlation between the question sentence and other sentences. Furthermore, empirical results and analysis show that i) it is highly beneficial not to remove stopwords and ii) utilizing part of speech tagging does not make a significant improvement although it has been shown to be effective for the related task of math word problem type classification.
[ "Text Classification", "Reasoning", "Numerical Reasoning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 8, 5, 24, 3 ]
SCOPUS_ID:84936797525
A Joint Segmentation and Classification Framework for Sentence Level Sentiment Classification
In this paper, we propose a joint segmentation and classification framework for sentence-level sentiment classification. It is widely recognized that phrasal information is crucial for sentiment classification. However, existing sentiment classification algorithms typically split a sentence into a word sequence, which does not effectively handle the inconsistent sentiment polarity between a phrase and the words it contains, such as {'not bad,' 'bad'} and {'a great deal of,' 'great'}. We address this issue by developing a joint framework for sentence-level sentiment classification. It simultaneously generates useful segmentations and predicts sentence-level polarity based on the segmentation results. Specifically, we develop a candidate generation model to produce segmentation candidates of a sentence; a segmentation ranking model to score the usefulness of a segmentation candidate for sentiment classification; and a classification model for predicting the sentiment polarity of a segmentation. We train the joint framework directly from sentences annotated with only sentiment polarity, without using any syntactic or sentiment annotations at the segmentation level. We conduct experiments for sentiment classification on two benchmark datasets: a tweet dataset and a review dataset. Experimental results show that: 1) our method performs comparably with state-of-the-art methods on both datasets; 2) joint modeling of segmentation and classification outperforms pipelined baseline methods in various experimental settings.
[ "Information Retrieval", "Syntactic Text Processing", "Polarity Analysis", "Sentiment Analysis", "Text Segmentation", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 15, 33, 78, 21, 36, 3 ]
SCOPUS_ID:85106008006
A Joint Sentiment-Topic Model for Product Review Analysis of Electronic Goods
Online product reviews play an important role in raising the voice of customers and informing their purchase decisions. However, conventional methods in the area have largely focused on field research and surveys to acquire information about customer preferences. Already existing frameworks largely focus on supervised learning techniques, which are not feasible in all cases as they require a labelled corpus that might be difficult to gather at times. This paper proposes a new framework for review topic modelling, named the joint sentiment topic model. The proposed research work extracts key features of online reviews and presents a detailed analysis of different approaches and techniques, comparing the reliability of the diverse systems among them. Also, this research work attempts to implement topic modelling in order to find how well a customer's occupation and interests match a particular product and accordingly give weightage to their reviews. This idea is an effort to explore different approaches and techniques of topic modelling and provide valuable executive implications for electronic goods.
[ "Topic Modeling", "Information Extraction & Text Mining", "Sentiment Analysis" ]
[ 9, 3, 78 ]
http://arxiv.org/abs/2101.00816v2
A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis
Aspect based sentiment analysis (ABSA) involves three fundamental subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Early works only focused on solving one of these subtasks individually. Some recent work focused on solving a combination of two subtasks, e.g., extracting aspect terms along with sentiment polarities or extracting the aspect and opinion terms pair-wisely. More recently, the triple extraction task has been proposed, i.e., extracting the (aspect term, opinion term, sentiment polarity) triples from a sentence. However, previous approaches fail to solve all subtasks in a unified end-to-end framework. In this paper, we propose a complete solution for ABSA. We construct two machine reading comprehension (MRC) problems and solve all subtasks by joint training two BERT-MRC models with parameters sharing. We conduct experiments on these subtasks, and results on several benchmark datasets demonstrate the effectiveness of our proposed framework, which significantly outperforms existing state-of-the-art methods.
[ "Sentiment Analysis", "Term Extraction", "Aspect-based Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 78, 1, 23, 3 ]
http://arxiv.org/abs/2107.11768v1
A Joint and Domain-Adaptive Approach to Spoken Language Understanding
Spoken Language Understanding (SLU) is composed of two subtasks: intent detection (ID) and slot filling (SF). There are two lines of research on SLU. One jointly tackles these two subtasks to improve their prediction accuracy, and the other focuses on the domain-adaptation ability of one of the subtasks. In this paper, we attempt to bridge these two lines of research and propose a joint and domain adaptive approach to SLU. We formulate SLU as a constrained generation task and utilize a dynamic vocabulary based on domain-specific ontology. We conduct experiments on the ASMixed and MTOD datasets and achieve competitive performance with previous state-of-the-art joint models. Besides, results show that our joint model can be effectively adapted to a new domain.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP" ]
[ 80, 4 ]
SCOPUS_ID:85078863096
A Joint sentence scoring and selection framework for neural extractive document summarization
Extractive document summarization methods aim to extract important sentences to form a summary. Previous works perform this task by first scoring all sentences in the document and then selecting the most informative ones, while we propose to jointly learn the two steps with a novel end-to-end neural network framework. Specifically, the sentences in the input document are represented as real-valued vectors through a neural document encoder. Then the method builds the output summary by extracting important sentences one by one. Different from previous works, the proposed joint sentence scoring and selection framework directly predicts the relative sentence importance score according to both the sentence content and the previously selected sentences. We evaluate the proposed framework with two realizations: a hierarchical recurrent neural network based model, and a pre-training based model that uses BERT as the document encoder. Experiments on two datasets show that the proposed joint framework outperforms the state-of-the-art extractive summarization models which treat sentence scoring and selection as two subtasks.
[ "Language Models", "Semantic Text Processing", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 52, 72, 30, 47, 3 ]
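The joint scoring-and-selection idea in the record above can be illustrated with a short sketch: rather than ranking all sentences once, the selector re-scores the remaining sentences at every step against a running summary state, so importance is relative to what has already been extracted. The code below is a hypothetical, untrained stand-in (the function names, the toy scorer and the vector dimensions are assumptions, not the paper's neural architecture).

```python
# Sketch: greedy extractive selection where the scorer sees the summary so far.
import numpy as np


def select_summary(sent_vecs, scorer, max_sents=3):
    """Pick sentence indices one by one, conditioning each score on prior picks."""
    selected, summary_state = [], np.zeros(sent_vecs.shape[1])
    remaining = list(range(len(sent_vecs)))
    for _ in range(min(max_sents, len(remaining))):
        scores = [scorer(sent_vecs[i], summary_state) for i in remaining]
        best = remaining.pop(int(np.argmax(scores)))
        selected.append(best)
        summary_state += sent_vecs[best]          # fold the pick into the state
    return sorted(selected)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(8, 16))               # stand-in sentence encodings
    toy_scorer = lambda s, state: float(s.sum() - 0.1 * s @ state)
    print(select_summary(vecs, toy_scorer))
```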
SCOPUS_ID:85144015207
A Joint-Training Two-Stage Method for Remote Sensing Image Captioning
Compared with remote sensing image (RSI) captioning methods based on the traditional encoder-decoder model, two-stage RSI captioning methods include an auxiliary remote sensing task to provide prior information, which enables them to generate more accurate descriptions. In previous two-stage RSI captioning methods, however, the image captioning and the auxiliary remote sensing tasks are handled separately, which is time-consuming and ignores mutual interference between tasks. To solve this problem, we propose a novel joint-training two-stage (JTTS) RSI captioning method. We use multilabel classification to provide prior information, and we design a differentiable sampling operator to replace the traditional nondifferentiable sampling operation to index the multilabel classification result. In contrast to previous two-stage RSI captioning methods, our method can implement joint training, and the joint loss allows the error of the generated description to flow into the optimization of the multilabel classification via backpropagation. Specifically, we approximate the Heaviside step function with the steep logistic function to implement a differentiable sampling operator for the multilabel classification. We propose a dynamic contrast loss function for multilabel classification tasks to ensure that a certain margin is maintained between the probabilities of the positive label and the negative label during sampling. We design an attribute-guided decoder to filter the multilabel prior information obtained by the sampling operator to generate more accurate image captions. The results of extensive experiments show that the JTTS method achieves state-of-the-art performance on the RSI captioning dataset (RSICD), the University of California, Merced (UCM)-captions, and the Sydney-captions datasets.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Captioning", "Text Generation", "Text Classification", "Multimodality" ]
[ 20, 52, 72, 24, 3, 39, 47, 36, 74 ]
SCOPUS_ID:85013036619
A Journey of Bounty Hunters: Analyzing the Influence of Reward Systems on StackOverflow Question Response Times
Question and Answering (Q&A) platforms are an important source of information and a first place to go when searching for help. Q&A sites, like StackOverflow (SO), use reward systems to incentivize users to answer fast and accurately. In this paper, we study and predict the response time for those questions on StackOverflow that benefit from an additional incentive through so-called bounties. Shaped by different motivations and rules, these questions behave unlike regular questions. As our key finding, we note that topic-related factors provide much stronger evidence for these questions than previously identified factors. Finally, we compare models based on these features for predicting the response time in the context of bounty questions.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:85122653960
A Judgment Method of Network News Value Orientation Based on Sentiment Analysis
Nowadays, recommendation algorithms are playing an increasingly important role in online news platforms. Current personalized recommendation algorithms aim to find connections between user characteristics and the news to be recommended, so as to achieve accurate recommendations. The goal of a personalized recommendation algorithm is to increase the click-through rate, which faces the problem of excessively catering to user interests, and may even recommend content that does not conform to mainstream values in order to satisfy users' curiosity. Considering that the emotional tendency of news reflects a certain value orientation, we propose a judgment method of network news value orientation based on sentiment analysis: we calculate the objective value representation and the subjective value representation of news through a sentiment analysis model, and analyze these two representations with the Kano model to judge the value orientation. Judging value orientation with the help of a sentiment analysis model can effectively reduce the manual auditing of online news.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85072852226
A Judicial Sentencing Method Based on Fused Deep Neural Networks
Nowadays, the judicial system has found it hard to satisfy the growing judicial needs of the people. Therefore, the introduction of artificial intelligence into the judicial field is an inevitable trend. This paper incorporates deep learning into intelligent judicial sentencing and proposes a comprehensive network fusion model based on massive legal documents. The proposed method combines multiple networks, e.g., recurrent neural networks and convolutional neural networks, in the procedure of sentencing prediction. Specifically, we use text classification and post-classification regression to predict the defendant's conviction, the articles of law related to the case and the prison term. Moreover, we use a simulated gradient descent method to build the fusion model. Experimental results on legal document datasets justify the effectiveness of the proposed method in sentencing prediction. The fused network model outperforms each individual model in terms of higher accuracy and stability when predicting the conviction, law article and prison term.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
https://aclanthology.org//W11-2045/
A Just-in-Time Document Retrieval System for Dialogues or Monologues
[ "Natural Language Interfaces", "Document Retrieval", "Information Retrieval", "Dialogue Systems & Conversational Agents" ]
[ 11, 56, 24, 38 ]
https://aclanthology.org//W18-4410/
A K-Competitive Autoencoder for Aggression Detection in Social Media Text
We present an approach to detect aggression from social media text in this work. A winner-takes-all autoencoder, called Emoti-KATE, is proposed for this purpose. Using a log-normalized, weighted word-count vector at the input dimensions, the autoencoder simulates a competition between neurons in the hidden layer to minimize the reconstruction loss between the input and final output layers. We have evaluated the performance of our system on the datasets provided by the organizers of the TRAC workshop, 2018. Using the encoding generated by Emoti-KATE, a 3-way classification is performed for every social media text in the dataset. Each data point is classified as ‘Overtly Aggressive’, ‘Covertly Aggressive’ or ‘Non-aggressive’. Results show that our (team name: PMRS) proposed method is able to achieve promising results on some of these datasets. In this paper, we have described the effects of introducing a winner-takes-all autoencoder for the task of aggression detection, reported its performance on four different datasets, and analyzed some of its limitations and how to improve its performance in future work.
[ "Language Models", "Semantic Text Processing", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 52, 72, 17, 4 ]
SCOPUS_ID:78650009559
A K-Nearest Neighbor Algorithm based on cluster in text classification
The K-Nearest Neighbor (K-NN) algorithm is an important approach for automatic text classification. In this paper, clustering is applied in order to overcome the disadvantages of the traditional K-NN algorithm. First, clustering is applied to the training set through an improved K-means approach to select the most representative samples as cluster centers. Then the similarity between the testing samples and the central vector of each cluster is computed. A cluster-based K-NN algorithm is presented. The experimental results verify that this classification algorithm is much faster than the traditional K-NN algorithm, and that it can also raise accuracy. © 2010 IEEE.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Text Clustering" ]
[ 3, 24, 36, 29 ]
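For illustration, here is a minimal sketch of the general cluster-then-KNN idea described in the record above: per-class K-means centroids replace the raw training samples, and test vectors are classified by voting over the most similar centroids. The scikit-learn-based implementation, function names and synthetic data below are assumptions for demonstration only, not the paper's improved K-means variant.

```python
# Sketch: cluster-accelerated KNN text classification. Cluster each class's
# training vectors with K-means, keep only the centroids as the reference set,
# then classify test vectors by cosine-similarity KNN over the centroids.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity


def build_centroid_index(X, y, clusters_per_class=3, seed=0):
    """Replace raw training samples with per-class K-means centroids."""
    centroids, labels = [], []
    for cls in np.unique(y):
        X_cls = X[y == cls]
        k = min(clusters_per_class, len(X_cls))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_cls)
        centroids.append(km.cluster_centers_)
        labels.extend([cls] * k)
    return np.vstack(centroids), np.array(labels)


def knn_on_centroids(X_test, centroids, centroid_labels, k=3):
    """Vote among the k most similar centroids for each test vector."""
    sims = cosine_similarity(X_test, centroids)
    preds = []
    for row in sims:
        top = centroid_labels[np.argsort(row)[::-1][:k]]
        vals, counts = np.unique(top, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
    y_train = np.array([0] * 50 + [1] * 50)
    X_test = rng.normal(3, 1, (5, 20))           # stand-ins for TF-IDF document vectors
    C, C_labels = build_centroid_index(X_train, y_train)
    print(knn_on_centroids(X_test, C, C_labels))
```

The speed-up comes from comparing each test vector against a handful of centroids instead of every training document, which is the essence of the cluster-based K-NN idea.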
SCOPUS_ID:84902176387
A K-main routes approach to spatial network activity summarization
Data summarization is an important concept in data mining for finding a compact representation of a dataset. In spatial network activity summarization (SNAS), we are given a spatial network and a collection of activities (e.g., pedestrian fatality reports, crime reports) and the goal is to find k shortest paths that summarize the activities. SNAS is important for applications where observations occur along linear paths such as roadways, train tracks, etc. SNAS is computationally challenging because of the large number of k subsets of shortest paths in a spatial network. Previous work has focused on either geometry or subgraph-based approaches (e.g., only one path), and cannot summarize activities using multiple paths. This paper proposes a K-Main Routes (KMR) approach that discovers k shortest paths to summarize activities. KMR generalizes K-means for network space but uses shortest paths instead of ellipses to summarize activities. To improve performance, KMR uses network Voronoi, divide and conquer, and pruning strategies. We present a case study comparing KMR's network-based output (i.e., shortest paths) to geometry-based outputs (e.g., ellipses) on pedestrian fatality data. Experimental results on synthetic and real data show that KMR with our performance-tuning decisions yields substantial computational savings without reducing summary path coverage. © 1989-2012 IEEE.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85040554388
A K-medoids based clustering scheme with an application to document clustering
Clustering is an important unsupervised data analysis technique, which divides data objects into clusters based on similarity. Clustering has been studied and applied in many different fields, including pattern recognition, data mining, decision science and statistics. Clustering algorithms can be mainly classified as hierarchical and partitional clustering approaches. Partitioning around medoids (PAM) is a partitional clustering algorithm which is less sensitive to outliers but greatly affected by poor initialization of medoids. In this paper, we augment the PAM algorithm with a randomized seeding technique to overcome the problem of poor medoid initialization. The proposed approach (PAM++) is compared with other partitional clustering algorithms, such as K-means and K-means++, on text document clustering benchmarks and evaluated in terms of F-measure. The experimental results indicate that randomized seeding can improve the performance of the PAM algorithm on text document clustering.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
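A small sketch may help make the randomized seeding idea concrete: as in K-means++, the first medoid is chosen uniformly at random, and each subsequent medoid is drawn with probability proportional to its squared distance from the nearest already-chosen medoid. The code below shows only this seeding step (PAM's swap phase is omitted); the function names and data are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: K-means++-style randomized seeding adapted to medoid initialization.
import numpy as np


def seed_medoids(X, k, rng=None):
    rng = np.random.default_rng(rng)
    n = len(X)
    medoid_idx = [rng.integers(n)]                    # first medoid: uniform choice
    for _ in range(1, k):
        # squared distance of every point to its nearest chosen medoid
        d2 = np.min([np.sum((X - X[m]) ** 2, axis=1) for m in medoid_idx], axis=0)
        probs = d2 / d2.sum()                         # far points are more likely
        medoid_idx.append(rng.choice(n, p=probs))
    return np.array(medoid_idx)


if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(200, 10))   # stand-in for document vectors
    print(seed_medoids(X, k=4, rng=1))
```

Because already-chosen medoids have zero distance to themselves, they receive zero probability in later draws, so the selected starting medoids tend to be well spread over the data.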
SCOPUS_ID:84857335867
A K-mixture connective-strength-based approach to automatic text summarisation
This research focuses on developing a hybrid automatic text summarisation approach, KCS, to enhance the quality of summaries. KCS employs the K-mixture probabilistic model to establish term weight distributions in a statistical sense. It further identifies the lexical relations between nouns and nouns, as well as nouns and verbs to derive the connective strength (CS) of nouns. Sentences are ranked and extracted according to the accumulated CS values they contain. We conduct two experiments to justify the proposed approach. The results show that the K-mixture model itself is more conducive to document classification than traditional TFIDF weighting scheme since the best macro F-measure increases from 0.63 to 0.67. It, however, is still no better than the more complex linguistic-based approach that takes noun's CS into consideration. Most importantly, our proposed approach, KCS, performs best among all approaches considered (with the best macro F-measure of 0.8). It implies that KCS can extract more representative sentences from the document and its feasibility in text summarisation applications is thus justified. © 2011 Inderscience Enterprises Ltd.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:84905865743
A K-nearest-neighbour based classifier for securities text categorization
Event-driven investments have gained great importance and popularity. Due to the importance of timely and effective messages for successful investment, the automated categorization of documents into predefined labels has received ever-increasing attention in recent years. This paper implements a new text document classifier by integrating the K-nearest neighbour (KNN) classification approach with the vector space model (VSM). By screening feature items and weighting key items, the proposed classifier turns financial information text into an N-dimensional vector, identifies positive and negative information, and furthermore optimizes the classification. In addition, the classification model constructed by the proposed algorithm can be updated incrementally, and it has great scalability in event-driven securities investment for investors. © (2014) Trans Tech Publications, Switzerland.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:84994257442
A KL divergence and DNN-based approach to voice conversion without parallel training sentences
We extend our recently proposed approach to cross-lingual TTS training to voice conversion, without using parallel training sentences. It employs speaker-independent, deep neural net (SI-DNN) ASR to equalize the difference between the source and target speakers, and Kullback-Leibler divergence (KLD) to convert spectral parameters probabilistically in the phonetic space via the ASR senone posterior probabilities of the two speakers. With or without knowing the transcriptions of the target speaker's training speech, the approach can be either supervised or unsupervised. In the supervised mode, where adequate training data of the target speaker with transcriptions is used to train a GMM-HMM TTS of the target speaker, each frame of the source speaker's input data is mapped to the closest senone in the trained TTS. The mapping is done via the posterior probabilities computed by SI-DNN ASR and minimum-KLD matching. In the unsupervised mode, all training data of the target speaker is first grouped into phonetic clusters, with KLD used as the sole distortion measure. Once the phonetic clusters are trained, each frame of the source speaker's input is then mapped to the mean of the closest phonetic cluster. The final converted speech is generated with the maximum probability trajectory generation algorithm. Both objective and subjective evaluations show the proposed approach can achieve higher speaker similarity and better spectral distortions, when compared with a baseline system based upon our sequential error minimization trained DNN algorithm.
[ "Multilinguality", "Low-Resource NLP", "Cross-Lingual Transfer", "Information Extraction & Text Mining", "Speech & Audio in NLP", "Syntactic Text Processing", "Multimodality", "Text Generation", "Text Clustering", "Phonetics", "Speech Recognition", "Responsible & Trustworthy NLP" ]
[ 0, 80, 19, 3, 70, 15, 74, 47, 29, 64, 10, 4 ]
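The minimum-KLD matching step at the core of the approach above can be sketched compactly: each source frame, represented by its senone posterior vector, is mapped to the target-speaker phonetic cluster whose average posterior is closest in KL divergence. The toy code below is only a schematic of that matching step, with made-up posteriors and cluster means; it is not the paper's ASR model, GMM-HMM TTS, or trajectory-generation pipeline.

```python
# Sketch: minimum-KLD frame-to-cluster matching for posterior-based conversion.
import numpy as np


def kl_divergence(p, q, eps=1e-10):
    """KL(p || q) for two discrete probability distributions."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))


def match_frames(source_posteriors, cluster_posteriors, cluster_means):
    """Map each source frame to the spectral mean of its minimum-KLD target cluster."""
    converted = []
    for p in source_posteriors:
        kls = [kl_divergence(p, q) for q in cluster_posteriors]
        converted.append(cluster_means[int(np.argmin(kls))])
    return np.array(converted)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    post = rng.dirichlet(np.ones(8), size=5)            # 5 source frames over 8 "senones"
    cluster_post = rng.dirichlet(np.ones(8), size=3)    # 3 target-speaker clusters
    cluster_means = rng.normal(size=(3, 13))            # stand-in spectral means
    print(match_frames(post, cluster_post, cluster_means).shape)   # (5, 13)
```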
SCOPUS_ID:84897977516
A KNN based algorithm for text categorization
In the recent decade, the categorization of web texts has received increased attention. The huge amount of textual information available on the web has created a need to find and obtain relevant information for strategically supported decisions. There are many machine learning algorithms dealing with text categorization and classification issues. In this paper, the experiment has been conducted on the k-Nearest Neighbor (KNN) classifier. Because of its simplicity and effectiveness, it is a widely applied method in the fields of machine learning and pattern recognition.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85073248887
A Knowledge Representation Language for Natural Language Processing, Simulation and Reasoning
OntoAgent is an environment that supports the cognitive modeling of societies of intelligent agents that emulate human beings. Like traditional intelligent agents, OntoAgent agents execute the core functionalities of perception, reasoning and action. Unlike most traditional agents, they engage in extensive "translation" functions in order to render perceived inputs into the unambiguous, ontologically-grounded knowledge representation language (KRL) that is used to model their knowledge, memory and reasoning. This paper describes the KRL of OntoAgent with a special focus on the many runtime functions used to translate between perceived inputs and the KRL, as well as to manipulate KRL structures for reasoning and simulation.
[ "Machine Translation", "Semantic Text Processing", "Representation Learning", "Knowledge Representation", "Text Generation", "Reasoning", "Multilinguality" ]
[ 51, 72, 12, 18, 47, 8, 0 ]
SCOPUS_ID:85131257722
A Knowledge/Data Enhanced Method for Joint Event and Temporal Relation Extraction
Understanding temporal relations (TempRels) between events is an important task that could benefit many downstream NLP applications. This task inevitably faces the challenges of both a limited amount of high-quality training data and a very biased distribution of TempRels. These problems will substantially hurt the performance of extraction systems because they are inclined to predict dominant TempRels when training with a limited amount of data. To alleviate those issues, we propose a Knowledge/Data Enhanced method for Event and TempRel Extraction, which integrates the temporal commonsense knowledge, data augmentation and Focal Loss function into one single extraction system. Altogether, these components improve the performance of the system on two public benchmark datasets TB-Dense and MATRES.
[ "Event Extraction", "Relation Extraction", "Information Extraction & Text Mining" ]
[ 31, 75, 3 ]
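Since the record above names the Focal Loss as one of its components, a standard formulation of that loss is sketched below for reference: the cross-entropy term for each example is scaled by (1 - p_t)^gamma so that easy, dominant TempRel classes contribute less to the gradient. This is the generic focal loss (here in PyTorch, with hypothetical argument names), not the authors' exact implementation.

```python
# Sketch: multi-class focal loss for counteracting a biased label distribution.
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """logits: (N, C) unnormalized scores; targets: (N,) gold class indices;
    alpha: optional (C,) tensor of per-class weights."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of gold class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt        # down-weight easy, confident examples
    if alpha is not None:
        loss = loss * alpha[targets]              # optional class re-weighting
    return loss.mean()


# Toy usage with random scores for, say, six temporal relation classes:
# logits = torch.randn(4, 6); targets = torch.tensor([0, 1, 5, 5])
# print(focal_loss(logits, targets))
```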
SCOPUS_ID:85131679253
A Kaleidoscope of I-Positions: Chinese Volunteers’ Enactment of Teacher Identity in Australian Classrooms
This article explores the enactment of teacher identity by Chinese international students volunteering in Australian schools. Dialogical Self Theory offers a theoretical framework for understanding the intrapersonal and interpersonal nature of a teacher’s identity, but lacks an analytical tool for describing self-dialogue. This article addresses this gap by focusing on language-in-use as the lens for investigating the inner dynamics of teacher identity. Descriptive discourse analysis highlights linguistic processes that shed light on self-dialogue, revealing a kaleidoscopic experience of I-positions emerging, receding, shifting and interacting within transitional identities. Findings suggest the dialogical relationships and movements between I-positions distinguish one individual’s transitional identity from those of others. This article contributes to teacher identity research by illuminating idiosyncratic dialogical processes in the experience of international students becoming teachers and posits student volunteer programs as contexts within which to investigate and foster teacher identity construction.
[ "Discourse & Pragmatics", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 71, 11, 72, 38 ]
SCOPUS_ID:84893035237
A Kalman filter based human-computer interactive word segmentation system for ancient Chinese Texts
Previous research showed that a Kalman filter based human-computer interactive Chinese word segmentation algorithm achieves an encouraging effect in reducing user interventions. This paper designs an improved statistical model for ancient Chinese texts and integrates it with the Kalman filter based framework. An online interactive system is presented to segment ancient Chinese corpora. Experiments showed that this approach has an advantage in processing domain-specific text without the support of dictionaries or annotated corpora. Our improved statistical model outperformed the baseline model by 30% in segmentation precision. © Springer-Verlag 2013.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
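For readers unfamiliar with the underlying filter, the standard scalar Kalman predict/update recursion that such interactive frameworks build on is sketched below. This is the textbook form with assumed noise parameters q and r; the paper's actual use of it to fold user corrections into segmentation statistics is more involved.

```python
# Sketch: one scalar Kalman predict + update step (random-walk state model).
def kalman_step(x_est, p_est, z, q=1e-4, r=1e-2):
    """Update the state estimate x_est (with variance p_est) given observation z."""
    # predict
    x_pred, p_pred = x_est, p_est + q
    # update with the new observation
    k_gain = p_pred / (p_pred + r)
    x_new = x_pred + k_gain * (z - x_pred)
    p_new = (1.0 - k_gain) * p_pred
    return x_new, p_new


if __name__ == "__main__":
    x, p = 0.0, 1.0
    for z in [0.9, 1.1, 1.0, 0.95]:      # noisy observations of a value near 1.0
        x, p = kalman_step(x, p, z)
    print(round(x, 3))
```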
SCOPUS_ID:84957801042
A Kansei evaluation approach based on the technique of computing with words
Kansei evaluation plays a vital role in the implementation of Kansei engineering; however, it is difficult to quantitatively evaluate customer preferences of a product's Kansei attributes as such preferences involve human perceptual interpretation with certain subjectivity, uncertainty, and imprecision. An effective Kansei evaluation requires justifying the classification of Kansei attributes extracted from a set of collected Kansei words, establishing priorities for customer preferences of product alternatives with respect to each attribute, and synthesizing the priorities for the evaluated alternatives. Moreover, psychometric Kansei evaluation systems essentially require dealing with Kansei words. This paper presents a Kansei evaluation approach based on the technique of computing with words (CWW). The aims of this study were (1) to classify collected Kansei words into a set of Kansei attributes by using cluster analysis based on fuzzy relations; (2) to model Kansei preferences based on semantic labels for the priority analysis; and (3) to synthesize priority information and rank the order of decision alternatives by means of the linguistic aggregation operation. An empirical study is presented to demonstrate the implementation process and applicability of the proposed Kansei evaluation approach. The theoretical and practical implications of the proposed approach are also discussed.
[ "Information Extraction & Text Mining", "Text Clustering" ]
[ 3, 29 ]
SCOPUS_ID:51749084711
A Kernel for measuring structural semantic similarities
Semantic similarity is nowadays one of the most widely discussed topics in data mining, natural language processing and related research fields, and the semantic similarity between two entities usually comes into focus when tackling such issues. In this paper, however, we adopt the standpoint that semantic similarity is also exhibited within document structures, in addition to the linguistic hierarchies in which entities are embedded. We discuss measurements of such structural semantic similarity. We define semantic content to describe the semantic capacity of a structure and present a kernel for measuring semantic similarities between tree-structured data. After the recursive generation of all matched subtrees, the semantic similarity between two structures is calculated.
[ "Semantic Text Processing", "Semantic Similarity" ]
[ 72, 53 ]
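The record above describes a kernel that compares tree-structured data by recursively matching subtrees. The paper's exact formulation is not given in the abstract, so the following is only an illustrative sketch of such a kernel on small node-labelled trees; the Node class, the matching condition, and the normalisation are assumptions, not the authors' definitions.

from dataclasses import dataclass, field
from typing import List
import math

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def node_kernel(a: Node, b: Node) -> float:
    """Count matched subtrees rooted at a and b (0 if the roots do not match)."""
    if a.label != b.label or len(a.children) != len(b.children):
        return 0.0
    score = 1.0
    for ca, cb in zip(a.children, b.children):
        score *= 1.0 + node_kernel(ca, cb)
    return score

def tree_kernel(t1: Node, t2: Node) -> float:
    """Sum the node kernel over all pairs of nodes from the two trees."""
    def nodes(t):
        stack, out = [t], []
        while stack:
            n = stack.pop()
            out.append(n)
            stack.extend(n.children)
        return out
    return sum(node_kernel(a, b) for a in nodes(t1) for b in nodes(t2))

def structural_similarity(t1: Node, t2: Node) -> float:
    """Normalised kernel value in [0, 1]."""
    return tree_kernel(t1, t2) / math.sqrt(tree_kernel(t1, t1) * tree_kernel(t2, t2))

if __name__ == "__main__":
    doc1 = Node("sec", [Node("title"), Node("para", [Node("sent"), Node("sent")])])
    doc2 = Node("sec", [Node("title"), Node("para", [Node("sent")])])
    print(round(structural_similarity(doc1, doc2), 3))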
http://arxiv.org/abs/2210.05643v2
A Kernel-Based View of Language Model Fine-Tuning
It has become standard to solve NLP tasks by fine-tuning pre-trained language models (LMs), especially in low-data settings. There is minimal theoretical understanding of empirical success, e.g., why fine-tuning a model with $10^8$ or more parameters on a couple dozen training points does not result in overfitting. We investigate whether the Neural Tangent Kernel (NTK) - which originated as a model to study the gradient descent dynamics of infinitely wide networks with suitable random initialization - describes fine-tuning of pre-trained LMs. This study was inspired by the decent performance of NTK for computer vision tasks (Wei et al., 2022). We extend the NTK formalism to Adam and use Tensor Programs (Yang, 2020) to characterize conditions under which the NTK lens may describe fine-tuning updates to pre-trained language models. Extensive experiments on 14 NLP tasks validate our theory and show that formulating the downstream task as a masked word prediction problem through prompting often induces kernel-based dynamics during fine-tuning. Finally, we use this kernel view to propose an explanation for the success of parameter-efficient subspace-based fine-tuning methods.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
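The record above appeals to the (empirical) Neural Tangent Kernel, K(x, x') = <grad_theta f(theta; x), grad_theta f(theta; x')>, evaluated at the pre-trained parameters. The sketch below only illustrates that definition on a tiny two-layer network with finite-difference gradients; the model, input dimensions and gradient method are assumptions, and none of the paper's Tensor Programs analysis or Adam extension is reproduced.

import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer scalar-output network standing in for a "pre-trained model".
W1 = rng.normal(size=(8, 4)) / np.sqrt(4)
w2 = rng.normal(size=8) / np.sqrt(8)
theta = np.concatenate([W1.ravel(), w2])

def f(params: np.ndarray, x: np.ndarray) -> float:
    """Scalar network output for one 4-dimensional input x."""
    W1 = params[:32].reshape(8, 4)
    w2 = params[32:]
    return float(w2 @ np.tanh(W1 @ x))

def grad_f(params: np.ndarray, x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Central finite-difference gradient of f with respect to the parameters."""
    g = np.zeros_like(params)
    for i in range(params.size):
        e = np.zeros_like(params); e[i] = eps
        g[i] = (f(params + e, x) - f(params - e, x)) / (2 * eps)
    return g

def empirical_ntk(params: np.ndarray, X: np.ndarray) -> np.ndarray:
    """K[i, j] = <grad f(x_i), grad f(x_j)> at the given parameters."""
    G = np.stack([grad_f(params, x) for x in X])
    return G @ G.T

if __name__ == "__main__":
    X = rng.normal(size=(5, 4))   # five toy "inputs"
    K = empirical_ntk(theta, X)
    print(np.round(K, 3))         # symmetric positive semi-definite 5x5 kernel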
SCOPUS_ID:85075837624
A Key-Phrase Aware End2end Neural Response Generation Model
Previous Seq2Seq models for chitchat assume that each word in the target sequence has a direct corresponding relationship with words in the source sequence and that all target words are equally important. However, this assumption is invalid, since sometimes only parts of the response are relevant to the message. For models built on this assumption, irrelevant response words might have a negative impact on performance in semantic association modeling, which is a core task for open-domain dialogue modeling. In this work, to address the challenge of semantic association modeling, we automatically recognize key phrases from responses in the training data and then feed this supervision information into an enhanced key-phrase aware seq2seq model for better capability in semantic association modeling. This model consists of an encoder and a two-layer decoder, where the encoder and the first-layer sub-decoder are mainly for learning semantic associations and the second-layer sub-decoder is for response generation. Experimental results show that this model can effectively utilize the key phrase information for semantic association modeling, and it significantly outperforms baseline models in terms of response appropriateness and informativeness.
[ "Dialogue Response Generation", "Language Models", "Semantic Text Processing", "Text Generation" ]
[ 14, 52, 72, 47 ]
SCOPUS_ID:85129349261
A Keylogging Inference Attack on Air-Tapping Keyboards in Virtual Environments
Enabling users to push the physical world's limits, augmented and virtual reality platforms opened a new chapter in perception. Novel immersive experiences resulted in the emergence of new interaction methods for virtual environments, which came with unprecedented security and privacy risks. This paper presents a keylogging inference attack to infer user inputs typed with in-air tapping keyboards. We observe that hands follow specific patterns when typing in the air and exploit this observation to carry out our attack. Starting with three plausible attack scenarios where the adversary obtains the hand trace patterns of the victim, we build a pipeline to reconstruct the user input. Our attack pipeline takes the hand traces of the victim as an input and outputs a set of input inferences ordered from the best to worst. Through various experiments, we showed that our inference attack achieves a pinpoint accuracy ranging from 40% to 87% within at most the top-500 candidate reconstructions. Finally, we discuss countermeasures, while the results presented provide a cautionary tale of the security and privacy risk of the immersive mobile technology.
[ "Ethical NLP", "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 17, 58, 4 ]
SCOPUS_ID:85146420140
A Keyphrase Extraction Method Based on Multi-feature Evaluation and Mask Mechanism
Keyphrase extraction aims to identify phrases in documents that contain core content. However, existing unsupervised keyphrase extraction models are limited to focusing on a single feature, leading to biased results. In response to these problems, the proposed method evaluates keyphrase scores through multiple features: semantic importance, topic diversity, and position. First, a candidate keyphrase is masked out of the document, and the Manhattan distance between the masked document and the original document is calculated as the semantic importance feature. Second, the topic-word distribution of candidate keyphrases is calculated as topic diversity, and the position features are computed. Finally, the phrase importance score is obtained by integrating the three sub-models. Experiments conducted on three academic datasets against six state-of-the-art baseline models show that the method outperforms existing approaches. The results show that evaluating phrase importance from multiple features significantly improves keyphrase extraction performance.
[ "Language Models", "Semantic Text Processing", "Term Extraction", "Information Extraction & Text Mining" ]
[ 52, 72, 1, 3 ]
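The record above scores candidate keyphrases by masking them out of the document and measuring how much the document representation changes, combined with topic and position features. As a rough sketch of that idea only, the snippet below substitutes plain term-count vectors and a simple position prior for the paper's document embeddings and topic model; the weighting constants and helper names are illustrative, not the authors'.

from collections import Counter
import re

def tokens(text: str):
    return re.findall(r"[a-z]+", text.lower())

def manhattan(c1: Counter, c2: Counter) -> float:
    keys = set(c1) | set(c2)
    return float(sum(abs(c1[k] - c2[k]) for k in keys))

def score_candidates(doc: str, candidates, w_sem=1.0, w_pos=1.0):
    """Score each candidate by (a) the change in a term-count vector when the
    candidate is masked out of the document and (b) an early-position prior."""
    words = tokens(doc)
    original = Counter(words)
    scores = {}
    for cand in candidates:
        cand_words = set(tokens(cand))
        # "Masked" document: remove every occurrence of the candidate's words.
        masked = Counter(w for w in words if w not in cand_words)
        semantic = manhattan(original, masked)
        # Position feature: an earlier first occurrence gives a higher score.
        first = min((words.index(w) for w in cand_words if w in words),
                    default=len(words))
        position = 1.0 / (1.0 + first)
        scores[cand] = w_sem * semantic + w_pos * position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    doc = ("Keyphrase extraction identifies phrases with core content. "
           "Unsupervised keyphrase extraction often relies on a single feature.")
    print(score_candidates(doc, ["keyphrase extraction", "single feature", "content"]))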
SCOPUS_ID:85124991818
A Keyword Detection and Context Filtering Method for Document Level Relation Extraction
Relation extraction (RE) is the core link of downstream tasks, such as information retrieval, question answering systems, and knowledge graphs. Most of the current mainstream RE technologies focus on the sentence-level corpus, which has great limitations in practical applications. Moreover, the previously proposed models based on graph neural networks or transformers try to obtain context features from the global text, ignoring the importance of local features. In practice, the relation between entity pairs can usually be inferred just through a few keywords. This paper proposes a keyword detection and context filtering method based on the Self-Attention mechanism for document-level RE. In addition, a Self-Attention Memory (SAM) module in ConvLSTM is introduced to process the document context and capture keyword features. By searching for word embeddings with high cross-attention of entity pairs, we update and record critical local features to enhance the performance of the final classification model. The experimental results on three benchmark datasets (DocRED, CDR, and GBA) show that our model achieves advanced performance within open and specialized domain relationship extraction tasks, with up to 0.87% F1 value improvement compared to the state-of-the-art methods. We have also designed experiments to demonstrate that our model can achieve superior results by its stronger contextual filtering capability compared to other methods.
[ "Language Models", "Semantic Text Processing", "Relation Extraction", "Structured Data in NLP", "Multimodality", "Information Extraction & Text Mining" ]
[ 52, 72, 75, 50, 74, 3 ]
SCOPUS_ID:85124369176
A Keyword Extraction Method for Transportation Industry Standards based on improved TextRank
Facing the large scale and large number of current standards in the transportation industry, efficiently extracting standard keywords to provide professional services is a problem the industry needs to solve. Based on the text characteristics of transportation industry standards, this paper proposes a keyword extraction method built on an improved TextRank that uses the TF-IDF and Word2Vec algorithms. Different weights are then assigned according to factors such as the position, word frequency, semantics and part of speech of terms in the industry standard text, so as to quickly extract more authoritative keywords from industry standards. Experiments show that, compared with the classical TextRank, TF-IDF and Word2Vec algorithms, the proposed method achieves substantial improvements in precision, recall and F-measure on the transportation industry standards dataset.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
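The record above combines TextRank with TF-IDF-, position- and part-of-speech-based weights, but the abstract does not spell out the exact weighting scheme. The sketch below is therefore only one plausible variant: a weighted PageRank over a word co-occurrence graph in which the random-jump distribution is biased toward frequent, early-occurring words. The window size, damping factor and bias formula are assumptions.

import re
from collections import Counter, defaultdict

def weighted_textrank(text: str, window: int = 3, d: float = 0.85,
                      iters: int = 50, top_k: int = 5):
    """TextRank variant where the random-jump distribution is biased by
    term frequency and first position (stand-ins for TF-IDF/position weights)."""
    words = re.findall(r"[a-z]+", text.lower())
    # Co-occurrence edges within a sliding window.
    edges = defaultdict(float)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            if words[j] != w:
                edges[(w, words[j])] += 1.0
                edges[(words[j], w)] += 1.0
    vocab = sorted(set(words))
    freq = Counter(words)
    first = {w: words.index(w) for w in vocab}
    # Node bias: frequent, early words receive a larger share of the jump mass.
    bias = {w: freq[w] * (1.0 / (1.0 + first[w])) for w in vocab}
    z = sum(bias.values())
    bias = {w: b / z for w, b in bias.items()}
    out_weight = defaultdict(float)
    for (u, v), w_uv in edges.items():
        out_weight[u] += w_uv
    score = {w: 1.0 / len(vocab) for w in vocab}
    for _ in range(iters):
        new = {w: (1 - d) * bias[w] for w in vocab}
        for (u, v), w_uv in edges.items():
            new[v] += d * score[u] * w_uv / out_weight[u]
        score = new
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    sample = ("transportation industry standards require efficient keyword "
              "extraction so that industry standards can provide professional services")
    print(weighted_textrank(sample))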
SCOPUS_ID:85024713009
A Keyword Extraction Method for generating a User Profile in the Paper Collection and Sharing System MiDoc
In this paper, we propose a new keyword extraction method for generating a user profile from collected papers without using a large corpus. We assume that a user's interests are expressed in the papers the user collects, and our method extracts keywords that express those interests from these papers. The method can be used to enhance the paper collection and sharing system MiDoc, in which user profiles are automatically constructed with it. We conducted several experiments to show how effectively our method extracts keywords that represent a user's interests, comparing it with existing methods. The results lead to the conclusion that the method can effectively extract such keywords. In this paper, we define a user profile as the set of keywords that express a user's interests.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
SCOPUS_ID:85071856313
A Keyword Extraction Scheme from CQI Based on Graph Centrality
Recently, most universities in Korea have been conducting a lecture evaluation survey every semester. The continuous quality improvement (CQI) report is one of the most popular lecture evaluation services, able to summarize and analyze the meaning of evaluation reports. Since 2016, the education office has allowed the CQI system to upload and analyze CQI reports for all subjects. To improve the school and support students, the school has to run a lecture evaluation after the midterm and final exams every semester. The problem is that producing the report on students' lecture evaluations takes a long time. In this paper, we propose a method for extracting summary keywords from CQI data and representing them with graph tools based on centrality. We expect that this method can efficiently extract the most important related keywords from the large CQI data of each lecture evaluation in order to summarize the report.
[ "Multimodality", "Structured Data in NLP", "Term Extraction", "Information Extraction & Text Mining" ]
[ 74, 50, 1, 3 ]
SCOPUS_ID:84866603493
A Keyword-topic model for contextual advertising
Contextual advertising is a type of online advertising in which the placement of commercial ads within a web page depends on the relevance of the ads to the page content. A common approach to determine relevance is to score the match between ads and the content of the viewed page, for example, by simple keyword or syntactic matching. However, because of the sparseness of advertising language and the lack of context, this approach often leads to the selection of irrelevant ads. In this paper, we propose using topic modeling to improve the relevance of retrieved ads. Unlike existing methods that directly model the content of an ad as a distribution over topics, the proposed method uses a keyword-topic model that associates each keyword provided by the advertiser with a multinomial distribution over topics. Then, an ad with multiple keywords is represented as a mixture of topic distributions associated with those keywords. We empirically evaluated the performance of the proposed method on a set of real ads and web pages. The results show that using the keyword-topic model gives improved accuracy over traditional keyword matching and topic modeling methods that do not include information about keyword-topic association. Further, combining the keyword-topic model with other methods yields an extra increase in ad recommendation accuracy.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
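The record above represents an ad as a mixture of the topic distributions attached to its keywords and scores relevance against the page's topic distribution. The sketch below illustrates only that mixing step, with hand-made toy distributions and cosine similarity; in the paper the per-keyword distributions come from a trained keyword-topic model, and the scoring function may differ.

import math

# Toy keyword -> topic distribution table (would come from a keyword-topic model).
KEYWORD_TOPICS = {
    "running shoes": [0.7, 0.2, 0.1],
    "marathon":      [0.6, 0.3, 0.1],
    "laptop":        [0.05, 0.15, 0.8],
}

def ad_topic_mixture(keywords):
    """Represent an ad as the average of its keywords' topic distributions."""
    dists = [KEYWORD_TOPICS[k] for k in keywords if k in KEYWORD_TOPICS]
    n = len(dists)
    return [sum(col) / n for col in zip(*dists)]

def cosine(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q)))

def rank_ads(page_topics, ads):
    """Rank ads (given as keyword lists) by topical relevance to the page distribution."""
    scored = [(name, cosine(page_topics, ad_topic_mixture(kws)))
              for name, kws in ads.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    page = [0.65, 0.25, 0.10]   # e.g. a page inferred to be mostly about sports
    ads = {"shoe_ad": ["running shoes", "marathon"], "laptop_ad": ["laptop"]}
    print(rank_ads(page, ads))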
SCOPUS_ID:85028028266
A Khmer NER method based on conditional random fields fusing with Khmer entity characteristics constraints
In order to improve the performance of Khmer named entity recognition (NER), this paper proposes a NER method based on a conditional random field (CRF) model fused with constraints derived from Khmer entity characteristics. First, we analyzed Khmer entity characteristics, summarized constraints on these characteristics and introduced them into the CRF; we then solved for the labeling sequence by integer linear programming that integrates the entity characteristic constraints, obtaining a CRF model combined with constraints based on Khmer entity characteristics. A comparative experiment shows that the constrained CRF model performs better than the traditional CRF model on Khmer NER.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
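The record above enforces entity-characteristic constraints on CRF decoding via integer linear programming. The toy sketch below illustrates only the constrained-decoding idea: it brute-forces label sequences that satisfy a BIO-style constraint and picks the highest-scoring one. The constraint, labels and scores are made up for illustration and are not the Khmer-specific characteristics used in the paper, and brute force stands in for the paper's ILP solver.

from itertools import product

LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

def valid(seq):
    """Toy stand-in for entity-characteristic constraints:
    an I- tag must directly follow a B-/I- tag of the same type."""
    prev = "O"
    for lab in seq:
        if lab.startswith("I-") and prev[2:] != lab[2:]:
            return False
        prev = lab
    return True

def score(seq, emissions, transitions):
    """CRF-style score: sum of emission and transition potentials."""
    s = sum(emissions[i][lab] for i, lab in enumerate(seq))
    s += sum(transitions.get((a, b), 0.0) for a, b in zip(seq, seq[1:]))
    return s

def constrained_decode(emissions, transitions):
    """Exhaustive search over label sequences that satisfy the constraints."""
    best, best_s = None, float("-inf")
    for seq in product(LABELS, repeat=len(emissions)):
        if not valid(seq):
            continue
        s = score(seq, emissions, transitions)
        if s > best_s:
            best, best_s = seq, s
    return best, best_s

if __name__ == "__main__":
    # Toy per-token emission scores for a three-token sentence.
    emissions = [
        {"O": 0.1, "B-PER": 1.2, "I-PER": 0.4, "B-LOC": 0.2, "I-LOC": 0.0},
        {"O": 0.2, "B-PER": 0.1, "I-PER": 1.0, "B-LOC": 0.1, "I-LOC": 0.9},
        {"O": 1.1, "B-PER": 0.1, "I-PER": 0.2, "B-LOC": 0.2, "I-LOC": 0.1},
    ]
    transitions = {("B-PER", "I-PER"): 0.8, ("B-LOC", "I-LOC"): 0.8}
    print(constrained_decode(emissions, transitions))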