Dataset fields: id (string, 20-52 chars), title (string, 3-459 chars), abstract (string, 0-12.3k chars), classification_labels (list), numerical_classification_labels (list).
SCOPUS_ID:0033602211
A Chinese phonetic morse code recognition system for person with physical disability
The purpose of this study is to present a new approach to adaptive Morse code recognition for disabled persons whose hand coordination and dexterity are impaired but whose mentality and cognition are at least fair to good. Because maintaining a stable typing rate is a challenge for these users, automatic recognition of their Morse code input is difficult, so a suitable adaptive automatic recognition method is proposed. The new adaptive Morse code recognition method involves three processes: critical value adjustment, element recognition, and character translation. Experimental findings revealed that the proposed method achieves a better recognition rate than alternative methods from the literature.
[ "Phonetics", "Syntactic Text Processing" ]
[ 64, 15 ]
SCOPUS_ID:85082848538
A Chinese question answering system based on GPT
A Chinese question-answering system must select the most appropriate answer from an answer library for the user, given a question posed in natural language. Previous question-answering systems required modeling specific task characteristics and designing multiple modules. This paper is the first to propose using the Generative Pre-trained Transformer (GPT) to implement a Chinese question-answering system. To optimize and improve the model, our Chinese model pays more attention to contextual content and semantic characteristics, and we designed a new method to train it. The model reduces the number of modules in the question-answering system. We evaluate the model on the Document-Based Chinese Question and Answer (DBQA) dataset and achieve a 2.5% improvement in MRR/MAP over the latest lattice convolutional neural networks (Lattice CNNs).
[ "Language Models", "Natural Language Interfaces", "Semantic Text Processing", "Question Answering" ]
[ 52, 11, 72, 27 ]
SCOPUS_ID:84921469922
A Chinese question answering system based on web search
With the rapid development of search engine technology, the massive amount of information on the internet has become increasingly easy to search and use. However, reading the large number of web pages returned by a search engine is hard work for users, so how to obtain answers conveniently and directly has become a recent research focus. In this paper, we put forward a Chinese question answering system that uses real-time web information retrieved by search engines: by inputting a natural language question, users can get an accurate answer. Our system extracts answers in three main steps. The first is question analysis, which extracts the keywords and the type of the question. The second step retrieves relevant pages through web search engines. The last and most important step is answer extraction, which evaluates all extracted candidate answers and returns the one with the highest score as the final answer. In addition to implementing the system, we evaluated its performance on a manually built question-answer dataset, and the results clearly demonstrate its feasibility.
[ "Natural Language Interfaces", "Question Answering", "Information Retrieval" ]
[ 11, 27, 24 ]
SCOPUS_ID:85041212583
A Chinese question answering system for single-relation factoid questions
Aiming at the task of open-domain question answering over a knowledge base in NLPCC 2017, we build a question answering system that automatically finds the relevant entities and predicates for single-relation questions. After a feature-based entity linking component and a word-vector-based candidate predicate generation component, deep convolutional neural networks are used to rerank the entity-predicate pairs, and all intermediate scores are used to choose the final predicted answers. Our approach achieved an F1-score of 47.23% on the test data, which took first place in the NLPCC 2017 Shared Task 5 contest (KBQA sub-task). We also report a series of experiments that can help other developers understand the contribution of every part of our system.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:78649264567
A Chinese question-answering system with question classification and answer clustering
In this paper we propose an approach to Chinese question analysis and answer extraction. A general question analysis process involves keyword extraction and question classification, and question classification plays a crucial role in automatic question answering. To implement question classification, we carried out experiments with Support Vector Machines (SVM) using four kinds of features: words, part of speech (POS), named entities, and semantics. Answer extraction is then converted into a clustering problem. The experimental results show the excellent performance of the proposed approach. ©2010 IEEE.
[ "Information Retrieval", "Question Answering", "Natural Language Interfaces", "Text Clustering", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 27, 11, 29, 36, 3 ]
SCOPUS_ID:85047664579
A Chinese route to sustainability: Postsocialist transitions and the construction of ecological civilization
This article explores the concept of sustainability in a postsocialist context through an analysis of official discourses relating to sustainability in more than 700 articles published in the Chinese-language newspaper People's Daily in 2015. The Chinese conception of sustainability emerges as a top-down model built upon traditional ideologies and Chinese socialist legacies, inclusive of economic growth, environmental sustainability, social justice and quality of life. This Chinese official discourse of sustainability places less emphasis on individuals' rights and more on the state's interests, and is encompassed in the Chinese concept of the “ecological civilization.” This article argues that if we are to build a full picture of the internationalized idea of sustainability we need to adopt a more international approach to thinking about the issue, drawing upon the sustainability-related discourses constructed in different national contexts using local languages and rhetoric.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:78650893033
A Chinese sentence compression method for opinion mining
The Chinese sentences in news articles are usually very long, which sets up obstacles for further opinion mining steps. Sentence compression is the task of producing a brief summary at the sentence level. Conventional compression methods do not distinguish opinionated information from factual information in each sentence. In this paper, we propose a weakly supervised Chinese sentence compression method that aims at eliminating the negligible factual parts and preserving the core opinionated parts of a sentence. No parallel corpus is needed during compression. Experiments involving both automatic evaluations and human subjective evaluations validate that the proposed method is effective in finding the desired parts of long Chinese sentences. © 2010 Springer-Verlag.
[ "Opinion Mining", "Sentiment Analysis" ]
[ 49, 78 ]
SCOPUS_ID:84874186746
A Chinese sentence segmentation approach based on comma
Chinese sentence segmentation is considered a very fundamental step in natural language processing, and a successful solution for sentence boundary detection is key to subsequent NLP tasks such as parsing and machine translation. In this paper, we treat the comma as a sign of a possible sentence boundary and divide its occurrences into two major types, the true end-of-sentence (EOS) and the pseudo (non-EOS). Finally, a system framework for Chinese sentence segmentation based on two-layer classifiers is presented and implemented. Experimental results on Chinese Treebank 6.0 show that our model achieves an overall F-measure of 90.7%, an improvement of 1.5%. © 2013 Springer-Verlag.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:85049689893
A Chinese short text semantic similarity computation model based on stop words and TongyiciCilin
Short text similarity computation plays an important role in natural language processing and can be applied to many tasks. In recent years, much research has obtained important results in natural language processing; although there are good results for English, there has been no major breakthrough for Chinese. Unlike previously proposed methods, we retain the stop words in the word-vector training dataset to account for Chinese characteristics, and we add the TongyiciCilin to the training data of the short text semantic similarity computation model. We compared the effect of the Word2vec and GloVe methods in our model, using a Chinese short text semantic similarity dataset designed by Chinese grammar experts. The results show that the accuracy of the model is improved by 2%-3% by retaining stop words in the word-vector training data and adding the TongyiciCilin to the training data. The accuracy of our model is better than that of the Baidu short text similarity calculation platform on the same test dataset.
[ "Semantic Text Processing", "Semantic Similarity", "Representation Learning" ]
[ 72, 53, 12 ]
SCOPUS_ID:79959995635
A Chinese sign language recognition approach based on multiple sensor information fusion and statistical language model
Accelerometer (ACC) and surface electromyography (SEMG) sensors are two effective portable devices for capturing gestures. In this paper, a multi-sensor information fusion method is proposed to recognize Chinese Sign Language gestures. First, a hierarchical decision tree was constructed for the fusion of ACC and EMG signals to recognize the subwords of Chinese Sign Language (CSL). Then a statistical language model was constructed to detect and correct errors during recognition. For the recognition of 120 CSL subwords and 200 sentences, the average recognition accuracies of our method reached 91% and 84%, respectively. Comparative analysis of the experimental results showed that the statistical language model improved the recognition accuracy on the 120 CSL subwords by 9% and on the 200 sentences by 13%. The results indicate that the proposed method can effectively recognize Chinese sign language.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:0035725784
A Chinese spoken dialog system for blind men
This paper introduces a Chinese spoken dialog system that provides services for blind users so that they can use computers conveniently. The architecture of the dialog system is described briefly, and how each component works is also explained. The key factor in such a dialog system is extracting the intention of a user's utterance so as to make an appropriate response. To achieve this purpose, a case grammar formalism was applied for semantic description, and a robust spoken-language parsing method based on case frames was adopted to obtain the semantic interpretation of the input. The results show that this parsing method can tolerate speech recognition errors and grammatical deviations of spoken language to some extent.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:0242660449
A Chinese spoken dialogue system for train information
In this paper a Chinese spoken dialogue system developed for train information retrieval is presented. After a brief description of the system architecture and the individual modules, a dialogue manager that integrates user plan inference with a topic tree model is proposed. Dialogue strategies based on this mechanism, including consistent information sharing across multiple topics, reliable user response expectation and proper system prompt design, are also presented and explained in detail. Experiments show that the sentence meaning understanding error rate decreased by 23.5% with the guidance of user plan inference. A preliminary subjective evaluation shows that users are interested and willing to talk with the system, although there is still much to be improved.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:78650326202
A Chinese synonyms reduced algorithm based on sememe tree
Question understanding in a Chinese question-answering system generally includes steps such as word segmentation, POS tagging, keyword expansion and information retrieval. The extended keyword set usually contains redundant items, and some of the words and phrases may not be relevant to the question; consequently, information retrieval with the extended keyword set may bring in a large amount of noise and increase the difficulty of answer extraction. This paper explores the use of the distance between vocabulary items in the sememe tree to reduce the keyword set. It analyzes the detailed steps of question understanding and the improved algorithm. Empirical results support the theoretical findings: the proposed algorithm achieves a substantial improvement of 23% on average and removes off-target vocabulary, which further improves the accuracy of question understanding in the subsequent steps. © 2010 IEEE.
[ "Visual Data in NLP", "Question Answering", "Natural Language Interfaces", "Information Retrieval", "Multimodality" ]
[ 20, 27, 11, 24, 74 ]
SCOPUS_ID:34047199491
A Chinese text classification algorithm based on granular computing
Classification is a basic ingredient of knowledge processing, and algorithms play a key role in a classification system. In this paper, granular computing is applied to the domain of Chinese text classification. First, concepts such as information granules, feature granules, decision granules, operations on granules and closeness of granules are defined. Second, based on granular computing, a generating algorithm for building the classification model and an automatic classification algorithm for Chinese text are proposed. Finally, they are illustrated with a real-world example, which shows that the proposed algorithms are useful and effective. Because granular computing aims at reducing the difficulty of problem solving, Chinese text classification based on granular computing has both theoretical meaning and practical value. ©2006 IEEE.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85147668256
A Chinese text classification model based on radicals and character distinctions
Chinese characters are generally correlated with their semantic meanings, and the structure of radicals, in particular, can clearly indicate how characters are related to each other. During the Chinese character simplification movement, several different traditional characters were sometimes merged into one simplified character (a many-to-one mapping), resulting in the phenomenon of one simplified character corresponding to many traditional characters. Compared with simplified characters, traditional characters contain richer structural information, which is also more meaningful for semantic understanding. Traditional approaches to text modelling often overlook the structural content of Chinese characters and the role of human cognitive behaviour in the process of text comprehension. Hence, we propose a Chinese text classification model derived from the construction methods and evolution of Chinese characters. The model consists of two branches, the simplified and the traditional, with an attention module based on radical classification in each branch. Specifically, we first develop a sequential modelling structure to obtain sequence information from Chinese texts. Afterwards, an associated word module using the radical as a medium is designed to filter out keywords with high semantic differentiation among the auxiliary units. An attention module is then implemented to balance the importance of each keyword in a particular context. Our proposed method is evaluated on three datasets to demonstrate its validity and plausibility.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:6344231568
A Chinese text classification model based on vector space and semantic meaning
Given that electronic text materials of many kinds are increasing rapidly, this paper puts forward a model for the automatic classification of electronic text information so that such texts can be managed and used effectively. The model covers a word segmentation algorithm based on a word dictionary and statistics, text preprocessing, the design and collection of feature-word weight functions, the vector space representation of text, latent semantic indexing, and a text clustering algorithm. Experiments have shown that the model achieves satisfactory classification results as well as high computational and storage efficiency.
[ "Semantic Text Processing", "Information Retrieval", "Syntactic Text Processing", "Representation Learning", "Text Clustering", "Text Segmentation", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 15, 12, 29, 21, 36, 3 ]
SCOPUS_ID:84963629074
A Chinese text classification system based on Naive Bayes algorithm
In this paper, addressing the characteristics of Chinese text classification, we use ICTCLAS (the Chinese lexical analysis system of the Chinese Academy of Sciences) for document segmentation, clean the data and filter stop words, and apply information gain and document frequency feature selection algorithms to select document features. On this basis, a text classifier is implemented with the Naive Bayes algorithm, and the system is tested and analysed on the Chinese corpus of Fudan University.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
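The abstract above outlines a standard pipeline (segmentation, stop-word filtering, feature selection, Naive Bayes). As an illustrative sketch only, not the paper's system, the snippet below wires up that kind of pipeline in Python: jieba stands in for ICTCLAS, mutual information stands in for the information-gain scoring, and the two-document corpus and stop-word list are placeholders.

```python
# Illustrative sketch, not the paper's system: Chinese text classification with
# word segmentation, stop-word removal, feature selection, and Naive Bayes.
# jieba stands in for ICTCLAS; the corpus and stop-word list are placeholders.
import jieba
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def segment(text: str) -> str:
    """Tokenize Chinese text into space-separated words."""
    return " ".join(jieba.cut(text))

docs = ["这部电影非常好看", "股市今天大幅下跌"]   # placeholder training corpus
labels = ["entertainment", "finance"]

stop_words = ["今天"]                              # placeholder stop-word list

pipeline = make_pipeline(
    CountVectorizer(stop_words=stop_words),        # bag-of-words features
    SelectKBest(mutual_info_classif, k="all"),     # information-gain-style scoring
                                                   # (a real system would keep only the top-k terms)
    MultinomialNB(),                               # Naive Bayes classifier
)
pipeline.fit([segment(d) for d in docs], labels)
print(pipeline.predict([segment("电影票房再创新高")]))
```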
SCOPUS_ID:85048766214
A Chinese text correction and intention identification method for speech interactive context
ASR (Automatic Speech Recognition) is an important technology in man-machine interaction. Due to the complexity of natural language, environmental interference and other factors, recognition accuracy is low. This paper analyzes typical speech recognition errors and proposes a text correction and intent recognition method based on the phonation principles and language characteristics peculiar to Chinese, together with an improved edit distance method to better calculate the distance between texts. Extensive experiments show that this method improves the text recognition accuracy of the ASR system by 22.9%.
[ "Text Error Correction", "Speech & Audio in NLP", "Syntactic Text Processing", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 26, 70, 15, 47, 10, 74 ]
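The paper builds on an "improved edit distance"; the improvement itself (presumably phonetically weighted costs for Chinese) is not specified in the abstract, so the sketch below only shows the classic Levenshtein dynamic program such a method would start from, with a toy example.

```python
# Minimal sketch of the standard Levenshtein edit distance between two strings.
# The paper's "improved" variant (phonetically weighted substitution costs) is
# not reproduced here; this is only the classic dynamic program.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # dp[i][j] = minimum number of edits to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

print(edit_distance("今天天气", "今天甜气"))  # -> 1 (one character substituted)
```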
SCOPUS_ID:84978128822
A Chinese text paraphrase detection method based on dependency tree
Paraphrase detection is regarded as an important subtask in many natural language processing tasks. For example, in question answering, finding similar relations between questions requires paraphrase detection, and it is also widely used in information retrieval, machine translation, document clustering, etc. Traditional solutions are mainly divided into two types. One is based on bag of words, which only considers the words in the sentences and the similarity between words. The other is based on word embeddings and deep neural networks, which learn from word vectors to sentence vectors in deep models; the deep layers may represent deep information in a sentence such as phrase and syntactic information, but these models may also lose some sentence information. We propose a new method that considers word similarity and also directly uses the dependency relations in sentences. We train our model on a Chinese text corpus. By computing dependency relation similarities and word similarities, we decide whether one sentence is a paraphrase of another.
[ "Paraphrasing", "Syntactic Text Processing", "Syntactic Parsing", "Text Generation" ]
[ 32, 15, 28, 47 ]
SCOPUS_ID:85145347495
A Chinese text similarity algorithm based on Yake and neural network
Traditional text similarity algorithms suffer from the large amount of text data and high complexity. Keywords are a highly concentrated form of the thematic ideas in a text, and extracting them can reduce the complexity of text similarity calculation. Therefore, this paper proposes a Chinese text similarity calculation method that integrates an improved YAKE and a neural network (YANN). To address the problem that the Yet Another Keyword Extractor (YAKE) algorithm is not suitable for Chinese keyword extraction, the keyword candidate stage is modified: a new feature value for each word is first calculated using word span, position, frequency, word context relevance, and the number of distinct sentences it appears in. Next, the keyword score of each candidate word is calculated by combining all the feature values, and the keywords are output in ascending order of score. Finally, the keyword set is fed into a trained word2vec model for vectorization; the keyword vectors obtained from the trained word2vec model are summed and averaged, and the similarity between texts is calculated by cosine similarity. The experimental results show that the method proposed in this paper performs better than other algorithms in Chinese keyword extraction, and the similarity calculation results demonstrate the merit of the method.
[ "Term Extraction", "Information Extraction & Text Mining" ]
[ 1, 3 ]
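The final similarity step described above (average the word2vec vectors of the extracted keywords, then compare texts by cosine similarity) can be sketched as follows. The keyword extraction (improved YAKE) and the trained word2vec model are assumed to exist; a toy embedding dictionary stands in for them here.

```python
# Illustrative sketch of the keyword-vector similarity step: keywords from each
# text are mapped to word vectors, averaged, and compared by cosine similarity.
# The toy embedding dictionary below is a stand-in for a trained word2vec model.
import numpy as np

embedding = {
    "文本": np.array([0.2, 0.7, 0.1]),
    "相似": np.array([0.3, 0.6, 0.2]),
    "新闻": np.array([0.9, 0.1, 0.4]),
}

def text_vector(keywords):
    """Average the vectors of the extracted keywords (ignoring OOV words)."""
    vecs = [embedding[w] for w in keywords if w in embedding]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cosine(text_vector(["文本", "相似"]), text_vector(["文本", "新闻"]))
print(f"similarity = {sim:.3f}")
```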
SCOPUS_ID:80054955018
A Chinese text watermark algorithm based on polyphone
Based on the principle of zero-watermarking, this paper uses the pronunciation of polyphonic characters and the features between pairs of Chinese polyphones to construct a zero-watermarking algorithm, producing a Chinese text watermark algorithm based on polyphones. Simulation results show that the algorithm not only achieves good invisibility and robustness but can also identify attacks that delete, add or replace characters. © 2011 IEEE.
[ "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 58, 4 ]
SCOPUS_ID:54049103655
A Chinese text watermarking based on statistic of phrase frequency
Although tampering and deletion attacks are difficult to overcome, this paper proposes a novel algorithm based on the statistical characteristics of phrase frequency to address them. The algorithm treats the watermark as information entropy and Chinese character codes as partial probability distribution values. As a result, the problem of missing sequence elements caused by deletion attacks is converted into a sorting-order change problem. To approximate the watermark information entropy, two approximation algorithms are presented and their performance is analyzed. Moreover, after being organized in cyclic codes, the remaining probability distribution values are embedded in the cover text. The redundancy of the cyclic code guarantees overall data safety, and its error correction reduces the impact of deletion attacks. © 2008 IEEE.
[ "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 58, 4 ]
SCOPUS_ID:84861911285
A Chinese unsupervised word sense disambiguation method based on semantic vector
Supervised machine learning methods for word sense disambiguation need annotated training corpora; to overcome the data sparseness problem and achieve good disambiguation results, a large-scale annotated corpus must be built, but obtaining such a corpus requires a high manual cost. To address this problem, this paper proposes an unsupervised learning method that requires no manual annotation. First we mine feature words based on PMI (Point-wise Mutual Information) and the Z test, defining v words to describe each sense of a polysemous word, and then we calculate the similarity between the sense words and the contextual features of the polysemous word to determine its correct sense. This paper disambiguates ten typical polysemous words, and the experimental results prove that the method is effective. © 2012 IEEE.
[ "Low-Resource NLP", "Semantic Text Processing", "Word Sense Disambiguation", "Representation Learning", "Responsible & Trustworthy NLP" ]
[ 80, 72, 65, 12, 4 ]
SCOPUS_ID:85145885524
A Chinese verb semantic feature dataset (CVFD)
Language is an advanced cognitive function of humans, and verbs play a crucial role in language. To understand how the human brain represents verbs, it is critical to analyze what knowledge humans have about verbs. Thus, several verb feature datasets have been developed in different languages such as English, Spanish, and German. However, there is still a lack of a dataset of Chinese verbs. In this study, we developed a semantic feature dataset of 1140 Chinese Mandarin verbs (CVFD) with 11 dimensions including verb familiarity, agentive subject, patient, action effector, perceptual modality, instrumentality, emotional valence, action imageability, action complexity, action intensity, and the usage scenario of action. We calculated the semantic features of each verb and the correlation between dimensions. We also compared the difference between action, mental, and other verbs and gave some examples about how to use CVFD to classify verbs according to different dimensions. Finally, we discussed the potential applications of CVFD in the fields of neuroscience, psycholinguistics, cultural differences, and artificial intelligence. All the data can be found at https://osf.io/pv29z/.
[ "Psycholinguistics", "Linguistics & Cognitive NLP" ]
[ 77, 48 ]
SCOPUS_ID:85083235403
A Chinese word segment model for energy literature based on Neural Networks with Electricity User Dictionary
Traditional Chinese word segmentation (CWS) methods are based on supervised machine learning, such as Conditional Random Fields (CRFs) and Maximum Entropy (ME), and mostly rely on manual features derived from local contexts. Currently, most state-of-the-art methods for Chinese word segmentation are based on neural networks, but these neural networks rarely incorporate a user dictionary. We propose an LSTM-based Chinese word segmentation model that can take advantage of a user dictionary. The experiments show that our model performs better than a popular segmentation tool in the electricity domain, and that it achieves better performance when transferred to a new domain using the user dictionary.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:78149305559
A Chinese word segmentation algorithm based on maximum entropy
Automatic word segmentation technology is an important component of modern Chinese information processing and a key technology for Chinese full-text retrieval. This paper presents a Chinese word segmentation algorithm based on maximum entropy. It uses the part-of-speech tags and word frequency tags of a corpus to establish a maximum entropy model based on mutual information as the word segmentation language model. Finally, a binary model was used to test whether expanding the training corpus affects word segmentation accuracy, and the curves relating the size of the training corpus to segmentation accuracy were obtained. © 2010 IEEE.
[ "Syntactic Text Processing", "Tagging", "Text Segmentation", "Information Retrieval" ]
[ 15, 63, 21, 24 ]
SCOPUS_ID:2442649391
A Chinese word segmentation based on language situation in processing ambiguous words
While natural language processing is beneficial to text mining, Chinese word segmentation is an important step in processing Chinese natural language. In this paper, the convergence essence of the segmentation process is analyzed, and a theory of Chinese word segmentation based on the language situation is deduced. Based on this segmentation theory, an algorithm for Chinese word segmentation is presented. Both in theory and in the experimental results, the algorithm is efficient. © 2003 Elsevier Inc. All rights reserved.
[ "Linguistics & Cognitive NLP", "Text Segmentation", "Syntactic Text Processing", "Linguistic Theories" ]
[ 48, 21, 15, 57 ]
SCOPUS_ID:67650706826
A Chinese word segmentation based on machine learning
Unlike English, Chinese has no delimiters between words. Segmenting Chinese text into words is the first step in every kind of Chinese information processing, so Chinese word segmentation is a basic and difficult issue in the field. Traditional word segmentation systems have to establish a dictionary and manually add unknown words that fall outside it. This paper proposes a new Chinese word segmentation model based on machine learning that can automatically establish a dictionary and gradually update and improve it. The four modules of the machine learning model for the Chinese word segmentation system are introduced in detail, and improvements are made to the algorithms of some modules to improve system performance. Tests on closed and open corpora show that the method alleviates the workload of building and maintaining the dictionary and, furthermore, resolves the issues of ambiguity processing and unknown word recognition. © 2009 IEEE.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:85075867455
A Chinese word segmentation scheme based on a deep neural network model
To address the reduced performance of existing segmentation algorithms and programs when processing massive amounts of network text, a Chinese word segmentation scheme based on a deep neural network model is proposed in this paper. The encoder-decoder model (EDM), based on the long short-term memory (LSTM) network, was employed to train the data model, from which a model to perform word segmentation was derived. To improve word segmentation performance, a modification method based on word vectors is further provided. Experimental results on the typical Weibo dataset suggest that the performance of the proposed scheme is significantly improved compared with traditional word segmentation software. In addition, word segmentation precision and F-values after modification by the presented word-vector-based method were slightly improved compared with those without modification. All these facts indicate the effectiveness of the proposed segmentation scheme.
[ "Text Segmentation", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning" ]
[ 21, 72, 15, 12 ]
SCOPUS_ID:84867415265
A Chinese-English patent machine translation system based on the theory of hierarchical network of concepts
Compared with ordinary text, patent text often has more complex sentence structure and more ambiguity from multiple verbs. To deal with these problems, this paper presents a rule-based Chinese-English patent machine translation (MT) system based on the theory of hierarchical network of concepts (HNC). In this system, the whole procedure is divided into three main parts: semantic analysis of the source language, the transitional transformation from the source language to the target language, and generation of the target language. The knowledge base and the rule set are obtained by manually analyzing the semantic features of a training set containing more than 6,000 Chinese patent sentences, and a specific evaluation method is applied in the experiments. © 2012 The Journal of China Universities of Posts and Telecommunications.
[ "Machine Translation", "Linguistic Theories", "Text Generation", "Linguistics & Cognitive NLP", "Multilinguality" ]
[ 51, 57, 47, 48, 0 ]
SCOPUS_ID:67650287651
A Chinese-Japanese lexical machine translation through a Pivot language
The bilingual lexicon is an expensive but critical resource for multilingual applications in natural language processing. This article proposes an integrated framework for building a bilingual lexicon between the Chinese and Japanese languages. Since the language pair Chinese-Japanese does not include English, which is a central language of the world, few large-scale bilingual resources between Chinese and Japanese have been constructed. One solution to alleviate this problem is to build a Chinese-Japanese bilingual lexicon through English as the pivot language. In addition to the pivotal approach, we can make use of the characteristics of Chinese and Japanese languages that use Han characters. We incorporate a translation model obtained from a small Chinese-Japanese lexicon and use the similarity of the hanzi and kanji characters by using the log-linear model. Our experimental results show that the use of the pivotal approach can improve the translation performance over the translation model built from a small Chinese-Japanese lexicon. The results also demonstrate that the similarity between the hanzi and kanji characters provides a positive effect for translating technical terms. © 2009 ACM.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85138071337
A Chinese-Malay Neural Machine Translation Model Based on CA-Transformer and Transfer Learning
Neural machine translation (NMT) has achieved good results in many applications, but it requires large-scale corpora, and high-quality corpora are often difficult to obtain, especially for low-resource languages such as small languages, resulting in poor translation quality. Therefore, a transfer learning method is proposed in which the training parameters from a high-resource Chinese-English corpus are used to initialize the model parameters of the Chinese-Malay and English-Malay models, respectively, so as to address insufficient training and corpus scarcity. Compared with the basic Transformer, this method effectively reduces the number of parameters, speeds up training, and avoids the problem of a pretrained language model that is too large and difficult to optimize, while its performance remains broadly similar to the Transformer's. Compared with multiple models, the translation model in this paper is faster, with better translation quality and a higher BLEU score.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
SCOPUS_ID:85123501475
A Chinese-Thai Cross-language Word Embedding Method Based on Unequal Corpus of Small Dictionaries
This paper proposes a Chinese-Thai cross-language word embedding method based on an unequal corpus and a small dictionary. The method first normalizes the Chinese word vectors and the small dictionary, and obtains by gradient descent an initial value for the orthogonal optimal linear transformation of the small-dictionary words. Clustering is then performed on a large Chinese corpus; with the help of the small dictionary, the Chinese word vector corresponding to each cluster is found, and the mean of each cluster's word vectors and the mean of the corresponding Chinese and Thai word vectors are taken to establish new bilingual word vector correspondences, which are added to the small dictionary so that the Chinese-Thai dictionary can be generalized and expanded. Finally, the generalized Chinese-Thai dictionary is used to perform gradient descent on the cross-language word embedding mapping model to obtain the optimal value.
[ "Language Models", "Semantic Text Processing", "Information Extraction & Text Mining", "Representation Learning", "Text Clustering", "Multilinguality" ]
[ 52, 72, 3, 12, 29, 0 ]
SCOPUS_ID:85074087511
A Chinese-english translation model for mobile terminal system based on neural network
This paper focuses on improving translation results with neural networks. Since different translation models and methods have their own advantages and disadvantages, sentence-level paraphrasing is used to rewrite the translation, treated as a translation task within the same language. In the absence of a large-scale parallel paraphrase corpus, we approximate one using machine translation outputs and the reference translations of the source language. The model is then used to train a paraphrasing system that maps machine translation outputs to reference translations, producing sentence-level paraphrases with consistent semantics. After that, the paraphrase results are added to the translation hypothesis candidates for system combination. Finally, based on translation and paraphrasing, a mobile-application-oriented system is designed and developed to achieve machine translation between Chinese and English.
[ "Paraphrasing", "Machine Translation", "Text Generation", "Multilinguality" ]
[ 32, 51, 47, 0 ]
https://aclanthology.org//2007.mtsummit-papers.13/
A Chinese-to-Chinese statistical machine translation model for mining synonymous simplified-traditional Chinese terms
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
https://aclanthology.org//1995.iwpt-1.10/
A Chunking-and-Raising Partial Parser
Parsing is often seen as a combinatorial problem. It is not due to the properties of the natural languages, but due to the parsing strategies. This paper investigates a Constrained Grammar extracted from a Treebank and applies it in a non-combinatorial partial parser. This parser is a simpler version of a chunking-and-raising parser. The chunking and raising actions can be done in linear time. The short-term goal of this research is to help the development of a partially bracketed corpus, i.e., a simpler version of a treebank. The long-term goal is to provide high level linguistic constraints for many natural language applications.
[ "Syntactic Parsing", "Syntactic Text Processing", "Chunking" ]
[ 28, 15, 43 ]
SCOPUS_ID:85079812969
A Citizen-Centred Sentiment Analysis Towards India’s Critically Endangered Avian and Mammalian Species
Conservation Science (CS) is nowadays a vital area for research and development because of its linkage with multiple domains. The sustainable development goals set out by the United Nations also emphasize the urgent need to protect biodiversity both on land and in water. Citizen participation has come to play a large role in stimulating research in this area, and social media plays a significant role in providing citizen-centric data for analysing multifaceted problems. Much notable research in the past has used social media to detect illegal trade in animal species, assess threat and popularity, identify animals in camera trap pictures, and so on. However, analyses of people's attitudes towards endangered species based on different factors are few in number and have not been covered in a thorough manner. This research comprises a sentiment analysis of tweets concerning the five most critically endangered Indian avian and mammalian species. The study evaluates the variability in sentiment scores and the intensity of the information shared for these species using the Valence Aware Dictionary for sentiment reasoning (VADER) algorithm. The low number of tweets reflects the low popularity of these species among citizens; however, the negative sentiments for even the most popular species signify exasperation at the ineffectiveness of popular flagship programs.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85080896832
A Class Content Summary Method Based on Media-driven Real-time Content Management Framework
In this paper, we present a class content summary method based on a media-driven real-time content management framework. Summarizing class content is important not only for students but also for teachers. The method acquires the teacher's voice in class in real time and generates a summary according to the importance of the class content. To generate a summary, the method not only extracts important keywords from the acquired teacher's voice but also retrieves legacy contents on the web. The method consists of teacher's voice acquisition, important keyword extraction, and sentence retrieval, and it is based on a media-driven real-time content management framework for interconnecting real-world media and legacy media contents. We apply our framework to realize an in-class content summarization system. With this method, it is possible to show a class content summary in real time according to the importance of keywords, and it organizes educational contents for re-use.
[ "Information Extraction & Text Mining", "Speech & Audio in NLP", "Term Extraction", "Information Retrieval", "Multimodality" ]
[ 3, 70, 1, 24, 74 ]
SCOPUS_ID:84865625810
A Class-Feature-Centroid classifier for text categorization
Automated text categorization is an important technique for many web applications, such as document indexing, document filtering, and cataloging web resources. Many different approaches have been proposed for the automated text categorization problem. Among them, centroid-based approaches have the advantages of short training time and testing time due to its computational efficiency. As a result, centroid-based classifiers have been widely used in many web applications. However, the accuracy of centroid-based classifiers is inferior to SVM, mainly because centroids found during construction are far from perfect locations. We design a fast Class-Feature-Centroid (CFC) classifier for multi-class, single-label text categorization. In CFC, a centroid is built from two important class distributions: inter-class term index and inner-class term index. CFC proposes a novel combination of these indices and employs a denormalized cosine measure to calculate the similarity score between a text vector and a centroid. Experiments on the Reuters-21578 corpus and 20-newsgroup email collection show that CFC consistently outperforms the state-of-the-art SVM classifiers on both micro-F1 and macro-F1 scores. Particularly, CFC is more effective and robust than SVM when data is sparse. Copyright is held by the International World Wide Web Conference Committee (IW3C2).
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85124047560
A Classical Approach to Handcrafted Feature Extraction Techniques for Bangla Handwritten Digit Recognition
Bangla handwritten digit recognition is a significant step forward in the development of Bangla OCR. However, the intricate shapes, structural likeness and distinctive composition style of Bangla digits make them relatively challenging to distinguish. Thus, in this paper, we benchmark four rigorous classifiers for recognizing Bangla handwritten digits: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Gradient-Boosted Decision Trees (GBDT), based on three handcrafted feature extraction techniques: Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), and the Gabor filter, on four publicly available Bangla handwritten digit datasets: NumtaDB, CMARTdb, Ekush and BDRW. The handcrafted feature extraction methods are used to extract features from the dataset images, which are then used to train machine learning classifiers to identify Bangla handwritten digits. We further fine-tune the hyperparameters of the classification algorithms to obtain the best recognition performance from these algorithms, and among all the models we employed, HOG features combined with an SVM model (HOG+SVM) attained the best performance metrics across all datasets. The recognition accuracy of the HOG+SVM method on the NumtaDB, CMARTdb, Ekush and BDRW datasets reached 93.32%, 98.08%, 95.68% and 89.68%, respectively, and we compared the model performance with recent state-of-the-art methods.
[ "Visual Data in NLP", "Language Models", "Information Extraction & Text Mining", "Semantic Text Processing", "Information Retrieval", "Text Classification", "Multimodality" ]
[ 20, 52, 3, 72, 24, 36, 74 ]
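As a rough illustration of the best-performing HOG+SVM pipeline reported above (not the authors' code or settings), the sketch below extracts HOG descriptors with scikit-image and trains an SVM with scikit-learn; the random images, labels, and hyperparameters are placeholders.

```python
# Illustrative HOG + SVM sketch with placeholder data: HOG descriptors are
# extracted from grayscale digit images and fed to an SVC. Dataset loading and
# hyperparameter tuning from the paper are omitted; values here are placeholders.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(image):
    """Compute a Histogram-of-Oriented-Gradients descriptor for one image."""
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder data: random 32x32 grayscale "digit" images with dummy labels.
rng = np.random.default_rng(0)
images = rng.random((20, 32, 32))
labels = rng.integers(0, 10, size=20)

X = np.array([hog_features(img) for img in images])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # placeholder hyperparameters
clf.fit(X, labels)
print(clf.predict(X[:3]))
```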
https://aclanthology.org//W09-2807/
A Classification Algorithm for Predicting the Structure of Summaries
[ "Text Classification", "Summarization", "Text Generation", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 30, 47, 24, 3 ]
http://arxiv.org/abs/cs/0009027v1
A Classification Approach to Word Prediction
The eventual goal of a language model is to accurately predict the value of a missing word given its context. We present an approach to word prediction that is based on learning a representation for each word as a function of words and linguistics predicates in its context. This approach raises a few new questions that we address. First, in order to learn good word representations it is necessary to use an expressive representation of the context. We present a way that uses external knowledge to generate expressive context representations, along with a learning method capable of handling the large number of features generated this way that can, potentially, contribute to each prediction. Second, since the number of words ``competing'' for each prediction is large, there is a need to ``focus the attention'' on a smaller subset of these. We exhibit the contribution of a ``focus of attention'' mechanism to the performance of the word predictor. Finally, we describe a large scale experimental study in which the approach presented is shown to yield significant improvements in word prediction tasks.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85123753584
A Classification Based Approach to the Prediction of Song Popularity
Music is a continuously evolving field that has existed for centuries as a form of relaxation and entertainment. The field of music and its industry have grown considerably over the last few decades, and constant efforts are being made to produce hit songs so as to maximize the revenues generated. Online music streaming platforms have come into existence in the last couple of years and have become the most popular way of streaming music. These platforms provide ways to evaluate the popularity of songs through rankings, which are calculated from the number of streams and other factors. While different studies have been carried out to predict the success of songs, this research focuses on using the metadata of songs and performing profanity and sentiment analysis on their lyrics to predict their popularity. Six machine learning algorithms (Random Forest Classifier, SVM, Decision Tree Classifier, K-Nearest Neighbors, Logistic Regression and Naïve Bayes) were compared and the one with the best accuracy was used.
[ "Information Extraction & Text Mining", "Text Classification", "Speech & Audio in NLP", "Sentiment Analysis", "Information Retrieval", "Multimodality" ]
[ 3, 36, 70, 78, 24, 74 ]
SCOPUS_ID:85069817722
A Classification Framework for Online Social Support Using Deep Learning
Health consumers engage in social interactions in online health communities (OHCs) to seek or provide social support. Automatic classification of social support exchanged online is important for both researchers and practitioners of online health communities, especially when a large number of messages are posted on a regular basis. Classification of social support in OHCs provides an efficient way to assess the effectiveness of social interactions in the virtual environment. Most previous studies of online social support classification are based on the “bag-of-words” assumption and have not considered the semantic meaning of the words and terms embedded in online messages. This research proposes a classification framework for online social support using recent developments in word space models and deep learning methods. Specifically, doc2vec models, bag-of-words representations, and linguistic analysis methods are used to extract features from the text messages posted in OHCs for social interaction or social support exchange. A deep learning model is then applied to classify two major types of social support (i.e., informational and emotional support) expressed in OHC reply messages.
[ "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 72, 36, 12, 24, 3 ]
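A minimal sketch of the doc2vec feature-extraction step described above, using gensim: the toy messages and vector size are placeholders, and a logistic regression stands in for the study's deep learning classifier.

```python
# Illustrative doc2vec feature extraction for support-type classification.
# Toy messages and settings are placeholders; logistic regression is a simple
# stand-in for the study's deep learning model.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

messages = [
    ("you can find dosage guidelines on the clinic page", "informational"),
    ("stay strong, we are all here for you", "emotional"),
    ("this article explains the side effects in detail", "informational"),
    ("sending you hugs and positive thoughts", "emotional"),
]

corpus = [TaggedDocument(words=text.split(), tags=[i])
          for i, (text, _) in enumerate(messages)]
model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=40)  # toy settings

X = [model.infer_vector(text.split()) for text, _ in messages]
y = [label for _, label in messages]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([model.infer_vector("thank you for the kind words".split())]))
```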
SCOPUS_ID:85110139767
A Classification Framework of Identifying Major Documents with Search Engine Suggestions and Unsupervised Subtopic Clustering
This paper addresses the problem of automatically recognizing out-of-topic documents in a small set of similar documents that are expected to be on some common topic, with the objective of removing noise documents from the set. A topic-model-based classification framework is proposed for the task of discovering out-of-topic documents. This paper introduces a new concept of annotated search engine suggests, in which whichever search queries were used to search for a page are taken as representations of the content of that page. We adopt word embedding to create distributed representations of words and documents, and perform similarity comparison on search engine suggests. It is shown that search engine suggests can be highly accurate semantic representations of textual content, and our document analysis algorithm, which uses such representations as a relevance measure, gives satisfactory in-topic content filtering performance compared to the baseline technique of topic probability ranking.
[ "Low-Resource NLP", "Topic Modeling", "Information Extraction & Text Mining", "Semantic Text Processing", "Text Classification", "Representation Learning", "Text Clustering", "Information Retrieval", "Responsible & Trustworthy NLP" ]
[ 80, 9, 3, 72, 36, 12, 29, 24, 4 ]
SCOPUS_ID:85140573874
A Classification Model for Road Traffic Incidents on Twitter Data
This study aims to create a classification model for road traffic incidents in Thailand using Twitter data. The challenging issue in our work is dealing with a highly imbalanced dataset of 5 classes. As we surveyed, some prior research addressed this issue with the Markov Chains method; however, using Markov Chains on our dataset gives low performance, so we study undersampling, oversampling, Markov Chains, and Bi-directional Long Short-Term Memory (Bi-LSTM). Using Markov Chains as the baseline, our experiment found that Bi-LSTM improves the F1-score by up to 15.44% over the baseline.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85081093214
A Classification Model for Thai Statement Sentiments by Deep Learning Techniques
At present, many organizations have realized the importance of sentiment analysis of consumer reviews. Positive and negative comments can help evaluate user satisfaction with products and services in order to control and improve their quality. In addition, deep learning techniques are of great interest in current data mining research. Therefore, this research studied deep learning techniques to analyze user reviews and comments in Thai from the TripAdvisor website. User comments in four categories, hotels, restaurants, tourist attractions, and airlines, were collected and tested on a combination of two basic deep learning techniques, the convolutional neural network and long short-term memory. All user comments were divided into individual statements and classified into three groups: positive feelings, negative feelings, and non-expressed feelings or neutrality. The results showed that the best classification model combines three convolutional components with 32, 64, and 128 filters, respectively, each with a kernel size of 2. Moreover, the performance of the proposed classification model was evaluated by accuracy, precision, and recall values, which were higher than 80% for the positive and negative groups, with an F1 score of about 0.8.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
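The abstract reports that the best model combined three convolutions (32/64/128 filters, kernel size 2) with an LSTM. The Keras sketch below is only a schematic reading of that description, with the three convolutions simply stacked; vocabulary size, sequence length, pooling, and the other settings are assumptions, not the paper's configuration.

```python
# Schematic CNN + LSTM sentiment classifier (positive / negative / neutral).
# This is an assumed reading of the abstract, not the paper's exact architecture;
# vocabulary size, sequence length, and pooling are placeholders.
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 100, 3

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(32, kernel_size=2, activation="relu", padding="same"),
    layers.Conv1D(64, kernel_size=2, activation="relu", padding="same"),
    layers.Conv1D(128, kernel_size=2, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```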
SCOPUS_ID:85073897245
A Classification Model of Power Equipment Defect Texts Based on Convolutional Neural Network
A large amount of equipment defect text lies unused in power management systems. According to the features of power equipment defect texts, a classification model for defect texts based on a convolutional neural network is established. First, the features of power equipment defect texts are extracted by analyzing a large number of defect records. Then, following the general process of Chinese text classification and considering the features of defect texts, we establish a classification model for defect texts based on a convolutional neural network. Finally, we develop classification effect evaluation indicators to evaluate the model on one case. Compared with several traditional machine learning classification models, and according to the classification effect evaluation indicators, the proposed defect text classification model significantly reduces the error rate with considerable efficiency.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85063902757
A Classification Retrieval Approach for English Legal Texts
Automatically finding the legal provisions that correspond to a written description of events in English legal texts is a practical and complicated problem. To solve it, this paper designs a classification method for legal texts based on feature words. First, the relationship between legal provisions and feature words is established by taking legal judgment documents as the training corpus, since the relevant legal provisions can be extracted accurately from judgment documents. The feature words of the documents are then computed by TF-IDF, making it easy to establish the correspondence between legal provisions and feature words. The chi-square statistic (CHI) and the position of feature words in the text are introduced as correction factors and integrated with the traditional TF-IDF weight formula, which addresses the distribution of feature words across classes and the insufficient importance given to keywords. The experiments show that the algorithm can extract feature words from a variety of legal texts and classify them into the corresponding legal provisions, demonstrating excellent classification performance on legal texts.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
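The abstract describes weighting TF-IDF features with a chi-square (CHI) correction factor, but does not give the exact formula or the positional factor. The sketch below shows one plausible multiplicative combination on toy legal snippets; it is an illustration of the idea, not the paper's weighting scheme.

```python
# Hedged sketch: combine TF-IDF weights with a chi-square correction factor.
# The paper's exact formula (and its positional factor) is not given in the
# abstract; the multiplicative combination and toy snippets below are assumptions.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_selection import chi2
from sklearn.naive_bayes import MultinomialNB

docs = [
    "the defendant breached the contract and owes damages",
    "the tenant failed to pay rent under the lease agreement",
    "the driver was negligent and caused the traffic accident",
    "the injured party seeks compensation for the collision",
]
labels = ["contract", "contract", "tort", "tort"]

counts = CountVectorizer().fit(docs)
X_counts = counts.transform(docs)
X_tfidf = TfidfTransformer().fit_transform(X_counts)

chi2_scores, _ = chi2(X_counts, labels)               # per-term class association
correction = chi2_scores / chi2_scores.max()          # normalize to [0, 1]
X_weighted = X_tfidf.multiply(correction).tocsr()     # TF-IDF * chi-square factor

clf = MultinomialNB().fit(X_weighted, labels)
print(clf.predict(X_weighted[:1]))
```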
http://arxiv.org/abs/1303.1931v1
A Classification of Adjectives for Polarity Lexicons Enhancement
Subjective language detection is one of the most important challenges in Sentiment Analysis. Because of their weight and frequency in opinionated texts, adjectives are considered a key piece of the opinion extraction process. These subjective units are increasingly collected in polarity lexicons, in which they appear annotated with their prior polarity. However, at the moment, no polarity lexicon takes into account prior polarity variations across domains. This paper shows that a majority of adjectives change their prior polarity value depending on the domain. We propose a distinction between domain-dependent and domain-independent adjectives. Moreover, our analysis leads us to propose a further classification related to degree of subjectivity: constant, mixed and highly subjective adjectives. Following this classification, polarity values will better support Sentiment Analysis.
[ "Information Extraction & Text Mining", "Information Retrieval", "Text Classification", "Sentiment Analysis" ]
[ 3, 24, 36, 78 ]
SCOPUS_ID:0016050052
A Classifier Design Technique for Discrete Variable Pattern Recognition Problems
This paper presents a new computerized technique to aid the designers of pattern classifiers when the measurement variables are discrete and the values form a simple nominal scale (no inherent metric). A theory of “prime events” which applies to patterns with measurements of this type is presented. A procedure for applying the theory of “prime events” and an analysis of the “prime event estimates” is given. To manifest additional characteristics of this technique, an example optical character recognition (OCR) application is discussed. Copyright © 1974 by The Institute of Electrical and Electronics Engineers, Inc.
[ "Visual Data in NLP", "Information Extraction & Text Mining", "Linguistic Theories", "Text Classification", "Linguistics & Cognitive NLP", "Information Retrieval", "Multimodality" ]
[ 20, 3, 57, 36, 48, 24, 74 ]
SCOPUS_ID:85044073956
A Classifier to Identify Soft Skills in a Researcher Textual Description
Find Your Doctor (FYD) aims to become the first job-placement agency in Italy dedicated to PhDs undergoing the transition out of academia. To support the FYD Human Resources team, we started a research project aimed at extracting a set of well-defined soft skills from texts (questionnaires) in which a person describes his/her experience. The final aim of the project is to produce a list of researchers ranked w.r.t. their degree of soft-skill ownership. In the context of this project, this paper presents an approach employing machine learning techniques to classify the researchers' questionnaires w.r.t. a pre-defined soft-skills taxonomy. The paper also presents some preliminary results obtained in the “communication” area of the taxonomy, which are promising and worthy of further research in this direction.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
https://aclanthology.org//W05-1513/
A Classifier-Based Parser with Linear Run-Time Complexity
[ "Text Classification", "Syntactic Text Processing", "Syntactic Parsing", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 15, 28, 24, 3 ]
SCOPUS_ID:85130061316
A Classroom-Centered Study of Third Tone in Mandarin Chinese
Phonological third tone sandhi studies in Mandarin Chinese more often lead to lab-centered research, whereas the incorporation of phonetic tone sandhi studies into phonological analyses sheds light on a classroom-centered study. This incorporation suggests a revised approach to the third tone sandhi from an articulatory perspective. As a result, the study of pitch values and pitch contours of a third tone is taken over by the study of sound positions and jaw/chin movements. The well-known five-level tonal diagram is challenged and replaced by a seven-level tonal diagram with an application of only two forms of pronouncing a third tone: the pseudo third tone and the pseudo second tone. All of this aims to investigate a classroom-centered study of the third tone as an understudied area and to provide practical guidance for Mandarin instructors and learners of Mandarin as a second language.
[ "Phonetics", "Phonology", "Syntactic Text Processing" ]
[ 64, 6, 15 ]
SCOPUS_ID:85061919258
A Cleaning Algorithm for Noiseless Opinion Mining Corpus Construction
This paper presents DyCorC, an extractor and cleaner of web forum content. Its main points are that the process is entirely automatic, language-independent and adaptable to all kinds of forum architectures. The corpus is built according to user queries using expressions or item keywords, as in search engines, and DyCorC then minimizes the boilerplate for subsequent feature-based opinion mining and sentiment analysis, gathering comments and scorings. Such noiseless corpora are usually handmade with the help of crawlers and scrapers, with specific containers devised for each type of forum, which requires considerable work and skill. Our aim is to cut down this preprocessing stage. Our algorithm is compared to state-of-the-art models (Apache Nutch, BootCat, JusText) on a gold-standard corpus we released, and DyCorC offers better-quality noiseless content extraction. The algorithm is based on DOM trees with string distances; seven distances were compared on the reference corpus, and feature-distance was chosen as the best fit.
[ "Opinion Mining", "Sentiment Analysis" ]
[ 49, 78 ]
SCOPUS_ID:84958942086
A Clearer Picture: The Contribution of Visuals and Text to Framing Effects
Visuals in news media help define, or frame issues, but less is known about how they influence opinions and behavior. The authors use an experiment to present image and text exemplars of frames from war and conflict news in isolation and in image-text congruent and incongruent pairs. Results show that, when presented alone, images generate stronger framing effects on opinions and behavioral intentions than text. When images and text are presented together, as in a typical news report, the frame carried by the text influences opinions regardless of the accompanying image, whereas the frame carried by the image drives behavioral intentions irrespective of the linked text. These effects are explained by the salience enhancing and emotional consequences of visuals.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85019653412
A Clearer Vision: Creating and Evolving a Model to Support the Development of Science Teacher Leaders
This paper describes a professional development model for developing science teacher leaders that has evolved and been refined through working with 16 high school chemistry and physics teachers in high-needs schools over the past 3 years. The theoretical framework draws upon Goodwin’s notion of professional vision and Dempsey’s four metaphors to inform an understanding of professional identity consistent with innovation and empowerment in (a) the classroom setting with students and (b) with teachers in collegial environments. Thus, leadership practices and purposes are discussed at two distinct levels and contexts, which interact reflexively. Sociolinguistic discourse analysis of multiple data sources enabled us to identify the professional development features that promoted or hindered the teachers’ growth toward a leadership perspective and disposition. Implications for science teacher renewal and retention as well as limitations of the study and proposed leadership model are also shared and discussed.
[ "Discourse & Pragmatics", "Visual Data in NLP", "Semantic Text Processing", "Multimodality" ]
[ 71, 20, 72, 74 ]
SCOPUS_ID:85098241518
A Click-Through Rate Prediction Algorithm Based on Real-Time Advertising Data Logs
Advertising is one of the ways merchants attract consumers and enhance their influence. In recent years, within the broader picture of smart cities and smart living, the advertising business has shown a trend toward specialization. How to show potential consumers precisely targeted advertisements in real time has always been a tough problem for advertisers. This paper describes an experiment that adapts a logistic regression model and fits it to the data logs that users produce, in order to forecast the probability that a user clicks the real-time advertisement designed for him or her. The results show that, compared with traditional advertising, this adaptation improves the effectiveness of precisely targeted advertising and runs faster than models based on deep learning. One aspect to improve is the comparatively lower forecast accuracy. These results indicate an alternative for small and medium enterprises to lower their advertising costs. (A generic logistic-regression sketch on hashed log features follows this entry.)
[ "Passage Retrieval", "Information Retrieval" ]
[ 66, 24 ]
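The abstract does not state how the logistic regression model was altered, so the following is a generic sketch of logistic-regression click-through prediction on hashed categorical log fields. The field names, hashing dimension, and toy impression records are illustrative assumptions.

```python
# Generic logistic-regression CTR sketch on hashed categorical log fields.
# Field names, hashing dimension and the toy impressions are made up for illustration.
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

log_records = [   # one dict per ad impression, as might be parsed from a data log
    {"ad_id": "a17", "site": "news", "hour": "20", "device": "mobile"},
    {"ad_id": "a17", "site": "shop", "hour": "21", "device": "desktop"},
    {"ad_id": "a42", "site": "news", "hour": "09", "device": "mobile"},
    {"ad_id": "a42", "site": "blog", "hour": "22", "device": "mobile"},
]
clicked = [1, 0, 0, 1]                      # observed click / no-click labels

def to_tokens(record):
    # Encode each categorical field as a "field=value" token for the hasher.
    return [f"{k}={v}" for k, v in record.items()]

hasher = FeatureHasher(n_features=2 ** 10, input_type="string")
X = hasher.transform(to_tokens(r) for r in log_records)

clf = LogisticRegression(max_iter=1000).fit(X, clicked)

new_impression = {"ad_id": "a17", "site": "news", "hour": "21", "device": "mobile"}
p_click = clf.predict_proba(hasher.transform([to_tokens(new_impression)]))[0, 1]
print(f"predicted click probability: {p_click:.3f}")
```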
SCOPUS_ID:85137517815
A Clinical Reasoning-Encoded Case Library Developed through Natural Language Processing
Importance: Case reports that externalize expert diagnostic reasoning are utilized for clinical reasoning instruction but are difficult to search based on symptoms, final diagnosis, or differential diagnosis construction. Computational approaches that uncover how experienced diagnosticians analyze the medical information in a case as they formulate a differential diagnosis can guide educational uses of case reports. Objective: To develop a “reasoning-encoded” case database for advanced clinical reasoning instruction by applying natural language processing (NLP), a sub-field of artificial intelligence, to a large case report library. Design: We collected 2525 cases from the New England Journal of Medicine (NEJM) Clinical Pathological Conference (CPC) from 1965 to 2020 and used NLP to analyze the medical terminology in each case to derive unbiased (not prespecified) categories of analysis used by the clinical discussant. We then analyzed and mapped the degree of category overlap between cases. Results: Our NLP algorithms identified clinically relevant categories that reflected the relationships between medical terms (which included symptoms, signs, test results, pathophysiology, and diagnoses). NLP extracted 43,291 symptoms across 2525 cases and physician-annotated 6532 diagnoses (both primary and related diagnoses). Our unsupervised learning computational approach identified 12 categories of medical terms that characterized the differential diagnosis discussions within individual cases. We used these categories to derive a measure of differential diagnosis similarity between cases and developed a website (universeofcpc.com) to allow visualization and exploration of 55 years of NEJM CPC case series. Conclusions: Applying NLP to curated instances of diagnostic reasoning can provide insight into how expert clinicians correlate and coordinate disease categories and processes when creating a differential diagnosis. Our reasoning-encoded CPC case database can be used by clinician-educators to design a case-based curriculum and by physicians to direct their lifelong learning efforts.
[ "Reasoning" ]
[ 8 ]
https://aclanthology.org//2022.textgraphs-1.4/
A Clique-based Graphical Approach to Detect Interpretable Adjectival Senses in Hungarian
The present paper introduces an ongoing research which aims to detect interpretable adjectival senses from monolingual corpora applying an unsupervised WSI approach. According to our expectations the findings of our investigation are going to contribute to the work of lexicographers, linguists and also facilitate the creation of benchmarks with semantic information for the NLP community. For doing so, we set up four criteria to distinguish between senses. We experiment with a graphical approach to model our criteria and then perform a detailed, linguistically motivated manual evaluation of the results.
[ "Explainability & Interpretability in NLP", "Knowledge Representation", "Semantic Text Processing", "Responsible & Trustworthy NLP" ]
[ 81, 18, 72, 4 ]
http://arxiv.org/abs/2211.00151v2
A Close Look into the Calibration of Pre-trained Language Models
Pre-trained language models (PLMs) achieve remarkable performance on many downstream tasks, but may fail in giving reliable estimates of their predictive uncertainty. Given the lack of a comprehensive understanding of PLMs calibration, we take a close look into this new research problem, aiming to answer two questions: (1) Do PLMs learn to become calibrated in the training process? (2) How effective are existing calibration methods? For the first question, we conduct fine-grained control experiments to study the dynamic change in PLMs' calibration performance in training. We consider six factors as control variables, including dataset difficulty, available training samples, training steps, the number of tunable parameters, model scale, and pretraining. In experiments, we observe a consistent change in calibration performance across six factors. We find that PLMs don't learn to become calibrated in training, evidenced by the continual increase in confidence, no matter the predictions are correct or not. We highlight that our finding presents some contradiction with two established conclusions: (a) Larger PLMs are more calibrated; (b) Pretraining improves model calibration. Next, we study the effectiveness of existing calibration methods in mitigating the overconfidence issue, in both in-distribution and various out-of-distribution settings. Besides unlearnable calibration methods, we adapt two recently proposed learnable methods that directly collect data to train models to have reasonable confidence estimations. Also, we propose extended learnable methods based on existing ones to further improve or maintain PLMs calibration without sacrificing the original task performance. Experimental results show that learnable methods significantly reduce PLMs' confidence in wrong predictions, and our methods exhibit superior performance compared with previous methods.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
SCOPUS_ID:85031667923
A Closeness Index-Based TODIM Method for Hesitant Qualitative Group Decision Making
The purpose of this study is to develop a hesitant trapezoidal fuzzy TODIM (interactive and multi-criteria decision making) with a closeness index-based ranking method to handle hesitant qualitative group decision making problems. First, a novel closeness index-based ranking method is presented to compare the magnitude of hesitant trapezoidal fuzzy numbers (HTrFNs). Based on the developed ranking method, the dominance values of alternatives over others for each expert are calculated. Then, a nonlinear programming model is established to derive the dominance values of alternatives over others for the group and correspondingly the optimal ranking order of alternatives is obtained.
[ "Indexing", "Information Retrieval" ]
[ 69, 24 ]
https://aclanthology.org//D19-6101/
A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification
New conversation topics and functionalities are constantly being added to conversational AI agents like Amazon Alexa and Apple Siri. As data collection and annotation are not scalable and are often costly, only a handful of examples for the new functionalities are available, which results in poor generalization performance. We formulate this as a Few-Shot Integration (FSI) problem where a few examples are used to introduce a new intent. In this paper, we study six feature space data augmentation methods to improve classification performance in the FSI setting in combination with both supervised and unsupervised representation learning methods such as BERT. Through realistic experiments on two public conversational datasets, SNIPS and the Facebook Dialog corpus, we show that data augmentation in feature space provides an effective way to improve intent classification performance in the few-shot setting beyond traditional transfer learning approaches. In particular, we show that (a) upsampling in latent space is a competitive baseline for feature space augmentation, and (b) adding the difference between two examples to a new example is a simple yet effective data augmentation method. (A toy sketch of these two augmentations follows this entry.)
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Sentiment Analysis", "Intent Recognition", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 24, 3, 78, 79, 11, 38, 36, 4 ]
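A toy sketch of the two augmentations singled out in the abstract: upsampling in latent space, and adding the difference between two examples to a third. It operates on pre-computed feature vectors; the encoder that would produce such vectors (e.g., BERT) is assumed and not shown, and the noise scale and sample counts are arbitrary.

```python
# Toy versions of the two augmentations: (a) upsampling in latent space,
# (b) adding the difference between two examples to a third example.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(5, 16))          # 5 few-shot examples, 16-dim latent vectors

# (a) Upsampling in latent space: resample existing vectors with small noise.
idx = rng.integers(0, len(latent), size=20)
upsampled = latent[idx] + 0.01 * rng.normal(size=(20, 16))

# (b) Difference-based augmentation: x_i + (x_j - x_k) for random triples.
i, j, k = rng.integers(0, len(latent), size=(3, 20))
extrapolated = latent[i] + (latent[j] - latent[k])

augmented = np.vstack([latent, upsampled, extrapolated])
print(augmented.shape)                     # (45, 16) synthetic training features
```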
https://aclanthology.org//D19-5410/
A Closer Look at Data Bias in Neural Extractive Summarization Models
In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models. Specifically, we first propose several properties of datasets, which matter for the generalization of summarization models. Then we build the connection between priors residing in datasets and model designs, analyzing how different properties of datasets influence the choices of model structure design and training methods. Finally, by taking a typical dataset as an example, we rethink the process of the model design based on the experience of the above analysis. We demonstrate that when we have a deep understanding of the characteristics of datasets, a simple approach can bring significant improvements to the existing state-of-the-art model.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
http://arxiv.org/abs/2203.05243v1
A Closer Look at Debiased Temporal Sentence Grounding in Videos: Dataset, Metric, and Approach
Temporal Sentence Grounding in Videos (TSGV), which aims to ground a natural language sentence in an untrimmed video, has drawn widespread attention over the past few years. However, recent studies have found that current benchmark datasets may have obvious moment annotation biases, enabling several simple baselines even without training to achieve SOTA performance. In this paper, we take a closer look at existing evaluation protocols, and find both the prevailing dataset and evaluation metrics are the devils that lead to untrustworthy benchmarking. Therefore, we propose to re-organize the two widely-used datasets, making the ground-truth moment distributions different in the training and test splits, i.e., out-of-distribution (OOD) test. Meanwhile, we introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflating evaluation caused by biased datasets. New benchmarking results indicate that our proposed evaluation protocols can better monitor the research progress. Furthermore, we propose a novel causality-based Multi-branch Deconfounding Debiasing (MDD) framework for unbiased moment prediction. Specifically, we design a multi-branch deconfounder to eliminate the effects caused by multiple confounders with causal intervention. In order to help the model better align the semantics between sentence queries and video moments, we enhance the representations during feature encoding. Specifically, for textual information, the query is parsed into several verb-centered phrases to obtain a more fine-grained textual feature. For visual information, the positional information has been decomposed from moment features to enhance representations of moments with diverse locations. Extensive experiments demonstrate that our proposed approach can achieve competitive results among existing SOTA approaches and outperform the base model with great gains.
[ "Visual Data in NLP", "Responsible & Trustworthy NLP", "Robustness in NLP", "Multimodality" ]
[ 20, 4, 58, 74 ]
http://arxiv.org/abs/2106.14282v3
A Closer Look at How Fine-tuning Changes BERT
Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful. The most common approach to use these representations involves fine-tuning them for an end task. Yet, how fine-tuning changes the underlying embedding space is less studied. In this work, we study the English BERT family and use two probing techniques to analyze how fine-tuning changes the space. We hypothesize that fine-tuning affects classification performance by increasing the distances between examples associated with different labels. We confirm this hypothesis with carefully designed experiments on five different NLP tasks. Via these experiments, we also discover an exception to the prevailing wisdom that "fine-tuning always improves performance". Finally, by comparing the representations before and after fine-tuning, we discover that fine-tuning does not introduce arbitrary changes to representations; instead, it adjusts the representations to downstream tasks while largely preserving the original spatial structure of the data points.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
http://arxiv.org/abs/2011.00960v1
A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English
Transformer-based language models achieve high performance on various tasks, but we still lack understanding of the kind of linguistic knowledge they learn and rely on. We evaluate three models (BERT, RoBERTa, and ALBERT), testing their grammatical and semantic knowledge by sentence-level probing, diagnostic cases, and masked prediction tasks. We focus on relative clauses (in American English) as a complex phenomenon needing contextual information and antecedent identification to be resolved. Based on a naturalistic dataset, probing shows that all three models indeed capture linguistic knowledge about grammaticality, achieving high performance. Evaluation on diagnostic cases and masked prediction tasks considering fine-grained linguistic knowledge, however, shows pronounced model-specific weaknesses especially on semantic knowledge, strongly impacting models' performance. Our results highlight the importance of (a)model comparison in evaluation task and (b) building up claims of model performance and the linguistic knowledge they capture beyond purely probing-based evaluations.
[ "Language Models", "Knowledge Representation", "Semantic Text Processing" ]
[ 52, 18, 72 ]
https://aclanthology.org//W19-8622/
A Closer Look at Recent Results of Verb Selection for Data-to-Text NLG
Automatic natural language generation systems need to use the contextually-appropriate verbs when describing different kinds of facts or events, which has triggered research interest on verb selection for data-to-text generation. In this paper, we discuss a few limitations of the current task settings and the evaluation metrics. We also provide two simple, efficient, interpretable baseline approaches for statistical selection of trend verbs, which give a strong performance on both previously used evaluation metrics and our new evaluation.
[ "Data-to-Text Generation", "Text Generation" ]
[ 16, 47 ]
http://arxiv.org/abs/2012.08673v2
A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
Large-scale pre-trained multimodal transformers, such as ViLBERT and UNITER, have propelled the state of the art in vision-and-language (V+L) research to a new level. Although achieving impressive performance on standard tasks, to date, it still remains unclear how robust these pre-trained models are. To investigate, we conduct a host of thorough evaluations on existing pre-trained models over 4 different types of V+L specific model robustness: (i) Linguistic Variation; (ii) Logical Reasoning; (iii) Visual Content Manipulation; and (iv) Answer Distribution Shift. Interestingly, by standard model finetuning, pre-trained V+L models already exhibit better robustness than many task-specific state-of-the-art methods. To further enhance model robustness, we propose Mango, a generic and efficient approach that learns a Multimodal Adversarial Noise GeneratOr in the embedding space to fool pre-trained V+L models. Differing from previous studies focused on one specific type of robustness, Mango is task-agnostic, and enables universal performance lift for pre-trained models over diverse tasks designed to evaluate broad aspects of robustness. Comprehensive experiments demonstrate that Mango achieves new state of the art on 7 out of 9 robustness benchmarks, surpassing existing methods by a significant margin. As the first comprehensive study on V+L robustness, this work puts robustness of pre-trained models into sharper focus, pointing new directions for future study.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Robustness in NLP", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 20, 52, 72, 58, 4, 74 ]
SCOPUS_ID:85136895009
A Cloud Based Sentiment Analysis through Logistic Regression in AWS Platform
The use of Amazon Web Services (AWS) is growing rapidly as more users adopt the technology. It offers a range of functionalities that can be used by large corporations and individuals alike. Sentiment analysis is used to build intelligent systems that study people's opinions and help classify the related emotions. In this work, sentiment analysis is performed on Twitter data using the AWS Elastic Compute Cloud (EC2). The data is routed to EC2 using elastic load balancing. The collected data is first cleaned through preprocessing, and then machine learning-based logistic regression is employed to categorize the sentiments as positive or negative. The proposed machine learning model achieves an accuracy of 94.17%, higher than the other models developed with the existing algorithms. (A local sketch of the preprocessing and logistic-regression step follows this entry.)
[ "Sentiment Analysis" ]
[ 78 ]
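A local sketch of the preprocessing and logistic-regression step described above; the AWS plumbing (EC2 instances, elastic load balancing) is omitted, and the TF-IDF features, cleaning rules, and toy tweets are assumptions rather than the paper's exact setup.

```python
# Preprocessing + logistic-regression sentiment step, run locally on toy tweets.
# TF-IDF features and the cleaning rules are assumptions, not the paper's exact setup.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def clean(tweet):
    tweet = re.sub(r"http\S+|@\w+|#", "", tweet.lower())   # strip URLs, mentions, '#'
    return re.sub(r"[^a-z\s]", " ", tweet)                 # keep letters and spaces only

tweets = ["I love this phone, battery life is great!",
          "Worst service ever, totally disappointed @support",
          "Absolutely fantastic experience, would recommend",
          "Terrible app, keeps crashing http://t.co/x"]
labels = [1, 0, 1, 0]                                       # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(preprocessor=clean), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["the update is great, love it", "this is awful"]))
```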
SCOPUS_ID:85014531079
A Cloud-Based Architecture with embedded Pragmatics Renderer for Ubiquitous and Cloud Manufacturing
The paper presents a Cloud-based architecture for Ubiquitous and Cloud Manufacturing as a multilayer communicational architecture designated as the Communicational Architecture. It is characterised as (a) rich client interfaces (Rich Internet Application) with sufficient interaction to allow user agility and competence, (b) multimodal, for multiple client device classes support and (c) communicational to allow pragmatics, where human-to-human real interaction is completely supported. The main innovative part of this architecture is sustained by a semiotic framework organised on three main logical levels: (a) device level, which allows the user ‘to use’ pragmatics with the system, (b) application level which results for a set of tools which allows users pragmatics-based interaction and (c) application server level that implements the Pragmatics renderer, a pragmatics supporting engine that supports all pragmatics services. The Pragmatics renderer works as a communication enabler, and consists of a set of integrated collaboration technology that makes the bridge between the user/devices and the ‘system’. A federated or community cloud is developed using a particular cloud RESTful Application Programming Interface that supports (cloud) services registration, composition and governance (pragmatics services behaves as SaaS in the cloud).
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:84055214173
A Cloud-Based Recommender System - A case study of delicacy recommendation
Delicacy recommendation services are the trend of the future. In this paper, we propose an effective decision support system (DSS), the Cloud-Based Recommender System (CBRS), which provides introductions to and commentaries on delicacies and restaurants together with relevant recommendations. CBRS uses a web content retrieval agent (WCRA) and multiple document summarization (MDS) technology to generate summaries of the commentaries. Finally, CBRS combines cloud computing with MDS to provide delicacy recommendation services. © 2011 Published by Elsevier Ltd.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
SCOPUS_ID:85075992701
A Cloud-Hosted MapReduce Architecture for Syntactic Parsing
Syntactic parsing is a time-consuming task in natural language processing, particularly where a large number of text files are being processed. Parsing algorithms are conventionally designed to operate on a single machine in a sequential fashion and, as a consequence, fail to benefit from the high-performance and parallel computing resources available on the cloud. We designed and implemented a scalable cloud-based architecture supporting parallel and distributed syntactic parsing for large datasets. The main architecture consists of a syntactic parser (constituency and dependency parsing) and a MapReduce framework running on clusters of machines. The resulting cloud-based MapReduce parser builds a map in which syntactic trees of the same input file share the same key and are collected into a single file containing the sentences along with their corresponding trees. Our experimental evaluation shows that the architecture scales well with regard to the number of processing nodes and the number of cores per node. In the fastest tested cloud-based setup, the proposed design runs 7 times faster than a local setup. In summary, this study takes an important step toward providing and evaluating a cloud-hosted solution for efficient syntactic parsing of natural-language datasets consisting of a large number of files.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
SCOPUS_ID:85144859316
A Cloud-Native Web Application for Assisted Metadata Generation and Retrieval: THESPIAN-NER †
Within the context of the Competence Centre for the Conservation of Cultural Heritage (4CH) project, the design and deployment of a platform-as-a-service cloud infrastructure for the first European competence centre for cultural heritage (CH) has begun, and some web services have been integrated into the platform. The first integrated service is the INFN-CHNet web application for FAIR storage of scientific analyses on CH: THESPIAN-Mask. It is based on a CIDOC-CRM-compatible ontology, CRMhs, describing the scientific metadata. To ease the process of metadata generation and data injection, another web service has been developed: THESPIAN-NER. It is a tool based on a deep neural network for named entity recognition (NER), enabling users to upload their reports written in Italian and obtain labelled entities. Those entities are used as keywords, either to serve as (semi)automatically generated custom queries for the database, or to fill (part of) the metadata form as a descriptor for the file to be uploaded. The services have been made freely available in the 4CH PaaS cloud platform.
[ "Named Entity Recognition", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 34, 24, 3 ]
SCOPUS_ID:85124559594
A Cloud-based Framework for COVID-19 Media Classification, Information Extraction, and Trends Analysis
The coronavirus COVID-19 pandemic has become the center of concern worldwide and hence the focus of media attention. Checking coronavirus-related news and updates has become a daily routine for everyone, so news processing and analytics are key to harvesting the real value of this massive amount of news. The continuous growth of published news about COVID-19 makes it hard for a variety of audiences to navigate, analyze, and select the most important items (e.g., relevant information about the pandemic, its evolution, the vital precautions, and the necessary interventions). This can be addressed using current and emerging technologies including Cloud computing, Artificial Intelligence (AI) and Deep Learning (DL). In this paper, we propose a framework to analyze the massive amount of public COVID-19 media reports over the Cloud. The framework encompasses four modules, including text preprocessing, deep learning and machine learning-based news information extraction, and recommendation. We conducted experiments to evaluate three modules of our framework, and the results show that combining information derived from the news reports gives policymakers, health authorities, and the public a complete picture of how the virus is proliferating. Analyzing this data swiftly is a powerful tool for providing imperative answers to questions relevant to public health.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85124797103
A Cloud-based Robot System for Long-term Interaction: Principles, Implementation, Lessons Learned
Making the transition to long-term interaction with social-robot systems has been identified as one of the main challenges in human-robot interaction. This article identifies four design principles to address this challenge and applies them in a real-world implementation: cloud-based robot control, a modular design, one common knowledge base for all applications, and hybrid artificial intelligence for decision making and reasoning. The control architecture for this robot includes a common Knowledge-base (ontologies), Data-base, "Hybrid Artificial Brain" (dialogue manager, action selection and explainable AI), Activities Centre (Timeline, Quiz, Break and Sort, Memory, Tip of the Day, ...), Embodied Conversational Agent (ECA, i.e., robot and avatar), and Dashboards (for authoring and monitoring the interaction). Further, the ECA is integrated with an expandable set of (mobile) health applications. The resulting system is a Personal Assistant for a healthy Lifestyle (PAL), which supports diabetic children with self-management and educates them on health-related issues (48 children, aged 6-14, recruited via hospitals in the Netherlands and in Italy). It is capable of autonomous interaction "in the wild" for prolonged periods of time without the need for a "Wizard-of-Oz" (up until 6 months online). PAL is an exemplary system that provides personalised, stable and diverse, long-term human-robot interaction.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
https://aclanthology.org//2021.sigdial-1.29/
A Cloud-based User-Centered Time-Offset Interaction Application
Time-offset interaction applications (TOIA) allow simulating conversations with people who have previously recorded relevant video utterances, which are played in response to their interacting user. TOIAs have great potential for preserving cross-generational and cross-cultural histories, online teaching, simulated interviews, etc. Current TOIAs exist in niche contexts involving high production costs. Democratizing TOIA presents different challenges when creating appropriate pre-recordings, designing different user stories, and creating simple online interfaces for experimentation. We open-source TOIA 2.0, a user-centered time-offset interaction application, and make it available for everyone who wants to interact with people’s pre-recordings, or create their pre-recordings.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
http://arxiv.org/abs/1911.09532v2
A Cluster Ranking Model for Full Anaphora Resolution
Anaphora resolution (coreference) systems designed for the CONLL 2012 dataset typically cannot handle key aspects of the full anaphora resolution task such as the identification of singletons and of certain types of non-referring expressions (e.g., expletives), as these aspects are not annotated in that corpus. However, the recently released dataset for the CRAC 2018 Shared Task can now be used for that purpose. In this paper, we introduce an architecture to simultaneously identify non-referring expressions (including expletives, predicative NPs, and other types) and build coreference chains, including singletons. Our cluster-ranking system uses an attention mechanism to determine the relative importance of the mentions in the same cluster. Additional classifiers are used to identify singletons and non-referring markables. Our contributions are as follows. First of all, we report the first result on the CRAC data using system mentions; our result is 5.8% better than the shared task baseline system, which used gold mentions. Second, we demonstrate that the availability of singleton clusters and non-referring expressions can lead to substantially improved performance on non-singleton clusters as well. Third, we show that despite our model not being designed specifically for the CONLL data, it achieves a score equivalent to that of the state-of-the-art system by Kantor and Globerson (2019) on that dataset.
[ "Coreference Resolution", "Information Extraction & Text Mining", "Text Clustering" ]
[ 13, 3, 29 ]
http://arxiv.org/abs/2106.01183v1
A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space
The representation degeneration problem in Contextual Word Representations (CWRs) hurts the expressiveness of the embedding space by forming an anisotropic cone where even unrelated words have excessively positive correlations. Existing techniques for tackling this issue require a learning process to re-train models with additional objectives and mostly employ a global assessment to study isotropy. Our quantitative analysis over isotropy shows that a local assessment could be more accurate due to the clustered structure of CWRs. Based on this observation, we propose a local cluster-based method to address the degeneration issue in contextual embedding spaces. We show that in clusters containing punctuation and stop words, local dominant directions encode structural information, and removing them can improve CWRs' performance on semantic tasks. Moreover, we find that tense information in verb representations dominates sense semantics. We show that removing dominant directions of verb representations can transform the space to better suit semantic applications. Our experiments demonstrate that the proposed cluster-based method can mitigate the degeneration problem on multiple tasks. (A sketch of the cluster-then-remove-directions procedure follows this entry.)
[ "Representation Learning", "Information Extraction & Text Mining", "Semantic Text Processing", "Text Clustering" ]
[ 12, 3, 72, 29 ]
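A sketch of the cluster-based procedure the abstract describes: cluster the contextual embeddings, then remove each cluster's locally dominant principal directions. The random stand-in embeddings, the number of clusters, and the number of removed directions are assumptions, not the paper's values.

```python
# Cluster contextual embeddings, then null out each cluster's dominant directions.
# The stand-in embeddings, 10 clusters and 3 removed directions are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))      # stand-in for contextual word vectors

n_clusters, n_remove = 10, 3
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)

improved = np.empty_like(embeddings)
for c in range(n_clusters):
    cluster = embeddings[labels == c]
    centered = cluster - cluster.mean(axis=0)
    pca = PCA(n_components=n_remove).fit(centered)
    # Project out the top principal components (the locally dominant directions).
    projection = centered @ pca.components_.T @ pca.components_
    improved[labels == c] = centered - projection

print(improved.shape)                          # isotropy-enhanced embeddings
```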
http://arxiv.org/abs/1406.3287v3
A Clustering Analysis of Tweet Length and its Relation to Sentiment
Sentiment analysis of Twitter data is performed. The researcher has made the following contributions via this paper: (1) an innovative method for deriving sentiment score dictionaries using an existing sentiment dictionary as seed words is explored, and (2) an analysis of clustered tweet sentiment scores based on tweet length is performed.
[ "Information Extraction & Text Mining", "Sentiment Analysis", "Text Clustering" ]
[ 3, 78, 29 ]
SCOPUS_ID:85048492868
A Clustering Based Adaptive Sequence-to-Sequence Model for Dialogue Systems
Dialogue systems that can communicate with people in natural language are widely used in entertainment and language-learning tools. With the development of deep neural networks, Sequence-to-Sequence models have become the mainstream models for conversation generation, the key component of dialogue systems, because they handle tasks such as machine translation and conversation generation in which the lengths of the input and output are not known in advance. However, recent work has found that Sequence-to-Sequence models tend to respond with dull sentences. We propose a clustering-based adaptive Sequence-to-Sequence model to improve the performance of dialogue systems. Unlike previous models that treat all the dialogue data as input to a single model, we cluster the dialogue data and train a separate Sequence-to-Sequence model on each cluster to capture its distinct characteristics. Our experiments show that our models can improve the performance of dialogue systems.
[ "Language Models", "Semantic Text Processing", "Dialogue Response Generation", "Natural Language Interfaces", "Text Clustering", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Extraction & Text Mining" ]
[ 52, 72, 14, 11, 29, 47, 38, 3 ]
SCOPUS_ID:85132825037
A Clustering-Based Method for Detecting Text Area in Videos Recorded with the Aid of a Smartphone
Text detection is a crucial task in image processing and computer vision applications. Several methods have been proposed, but little attention has been given to detecting the text area in video frames recorded with a smartphone. To address this gap, we propose a new method for text detection in video acquired by a smartphone. The method uses the Line Segment Detector (LSD) algorithm together with DBSCAN (Density-Based Spatial Clustering of Applications with Noise) to detect the line segments that belong to the text region. Subsequently, we apply a set of filters to select the four segments that represent the four sides of the text area in each video frame. Experimental results on the ICDAR2015 Smartphone Document Capture Dataset demonstrate that the proposed method performs well and achieves promising detection accuracy. (A rough sketch of the LSD-plus-DBSCAN stage follows this entry.)
[ "Visual Data in NLP", "Multimodality", "Information Extraction & Text Mining", "Text Clustering" ]
[ 20, 74, 3, 29 ]
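A rough sketch of the first two stages described above: line-segment detection with LSD followed by DBSCAN clustering of segment midpoints. The subsequent filtering that selects the four bounding segments is not reproduced, the synthetic frame and DBSCAN parameters are assumptions, and cv2.createLineSegmentDetector is absent from some OpenCV 4.x builds.

```python
# LSD line segments + DBSCAN on their midpoints to localise a candidate text area.
# The synthetic frame and the DBSCAN parameters are assumptions; filtering of the
# four bounding segments is not reproduced. createLineSegmentDetector is absent
# from some OpenCV 4.x builds.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic grayscale frame with a block of text, standing in for a video frame.
frame = np.full((480, 640), 255, np.uint8)
for i, line in enumerate(["Lorem ipsum dolor sit amet", "consectetur adipiscing elit"]):
    cv2.putText(frame, line, (80, 200 + 40 * i), cv2.FONT_HERSHEY_SIMPLEX, 1, 0, 2)

lsd = cv2.createLineSegmentDetector()
segments = lsd.detect(frame)[0].reshape(-1, 4)        # rows of (x1, y1, x2, y2)

midpoints = np.column_stack([(segments[:, 0] + segments[:, 2]) / 2,
                             (segments[:, 1] + segments[:, 3]) / 2])
labels = DBSCAN(eps=25, min_samples=5).fit_predict(midpoints)

core = labels[labels >= 0]
if core.size:                                         # densest cluster = text region
    densest = np.bincount(core).argmax()
    pts = midpoints[labels == densest]
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    print(f"candidate text area: ({x0:.0f}, {y0:.0f}) - ({x1:.0f}, {y1:.0f})")
else:
    print("no dense cluster found")
```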
SCOPUS_ID:85141818969
A Clustering-based Approach for Topic Modeling via Word Network Analysis
This paper presents a clustering-based approach to topic modeling that analyzes word networks by adapting a community detection algorithm. Word networks are constructed with different word representations, and two types of topic assignment are introduced. Topic coherence scores and document clustering results are reported for topic model evaluation. Experimental results show that the approach achieves results comparable to the current best, and that its $C_{cv}$ coherence score improves as the number of most relevant words grows. (A minimal word-network sketch follows this entry.)
[ "Topic Modeling", "Information Extraction & Text Mining", "Text Clustering" ]
[ 9, 3, 29 ]
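A minimal sketch of the word-network idea: build a word co-occurrence graph and treat detected communities as topics. The toy corpus, the document-level co-occurrence window, and the choice of modularity-based community detection are illustrative assumptions; the paper's word representations and its two topic-assignment variants are not reproduced.

```python
# Word co-occurrence graph + modularity-based community detection as "topics".
# The toy corpus and document-level co-occurrence window are illustrative assumptions.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

docs = ["solar panels convert sunlight into electricity",
        "wind turbines generate renewable electricity",
        "the striker scored a late goal in the match",
        "the goalkeeper saved a penalty in the final match"]

G = nx.Graph()
for doc in docs:
    for w1, w2 in itertools.combinations(set(doc.split()), 2):
        weight = G.get_edge_data(w1, w2, {"weight": 0})["weight"] + 1
        G.add_edge(w1, w2, weight=weight)             # accumulate co-occurrence counts

communities = greedy_modularity_communities(G, weight="weight")
for i, topic in enumerate(communities):
    print(f"topic {i}: {sorted(topic)}")              # each community is one topic
```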
http://arxiv.org/abs/2010.03880v3
A Co-Interactive Transformer for Joint Slot Filling and Intent Detection
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. The two tasks are closely related and the information of one task can be utilized in the other task. Previous studies either model the two tasks separately or only consider the single information flow from intent to slot. None of the prior approaches model the bidirectional connection between the two tasks simultaneously. In this paper, we propose a Co-Interactive Transformer to consider the cross-impact between the two tasks. Instead of adopting the self-attention mechanism in vanilla Transformer, we propose a co-interactive module to consider the cross-impact by building a bidirectional connection between the two related tasks. In addition, the proposed co-interactive module can be stacked to incrementally enhance each other with mutual features. The experimental results on two public datasets (SNIPS and ATIS) show that our model achieves the state-of-the-art performance with considerable improvements (+3.4% and +0.9% on overall acc). Extensive experiments empirically verify that our model successfully captures the mutual interaction knowledge.
[ "Language Models", "Semantic Text Processing", "Semantic Parsing", "Intent Recognition", "Sentiment Analysis" ]
[ 52, 72, 40, 79, 78 ]
http://arxiv.org/abs/1806.04068v1
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
[ "Reasoning", "Machine Reading Comprehension" ]
[ 8, 37 ]
http://arxiv.org/abs/2106.00257v1
A Coarse to Fine Question Answering System based on Reinforcement Learning
In this paper, we present a coarse-to-fine question answering (CFQA) system based on reinforcement learning which can efficiently process documents of different lengths by choosing appropriate actions. The system is designed using an actor-critic based deep reinforcement learning model to achieve multi-step question answering. Compared to previous QA models targeting datasets that contain mainly either short or long documents, our multi-step coarse-to-fine model takes the merits from multiple system modules and can handle both short and long documents. The system hence obtains much better accuracy and faster training speed than the current state-of-the-art models. We test our model on four QA datasets, WIKIREADING, WIKIREADING LONG, CNN and SQuAD, and demonstrate 1.3%-1.7% accuracy improvements with 1.5x-3.4x training speed-ups in comparison to the baselines using state-of-the-art models.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:85147999703
A Coarse-to-Fine Text Matching Framework for Customer Service Question Answering
Customer service question answering has recently seen increased interest in NLP due to its potential commercial value. However, existing methods are largely based on Deep Neural Networks (DNNs) that are computationally expensive and memory intensive, which hinders their deployment in many real-world scenarios. In addition, customer service dialogue data is very domain-specific, and it is difficult to achieve high matching accuracy without specific model optimization. In this paper, we propose CFTM, a Coarse-to-Fine Text Matching Framework, which consists of FastText coarse-grained classification and Roformer-sim fine-grained sentence vector matching. This coarse-to-fine structure effectively reduces the number of model parameters and speeds up system inference. We also use the CoSENT loss function to optimize the Roformer-sim model according to the characteristics of customer service dialogue data, which effectively improves the matching accuracy of the framework. We conduct extensive experiments on the CHUZHOU and EIP customer service question datasets from KONKA. The results show that CFTM outperforms baselines across all metrics, achieving a 2.5-point improvement in F1-score and a 30% improvement in inference time, which demonstrates that CFTM delivers higher response accuracy and faster interaction in customer service question answering.
[ "Natural Language Interfaces", "Question Answering", "Dialogue Systems & Conversational Agents" ]
[ 11, 27, 38 ]
SCOPUS_ID:85138795928
A Coarse-to-Fine Training Paradigm for Dialogue Summarization
Pre-trained language models (PLMs) have achieved promising results on dialogue summarization. Previous works mainly encode semantic features from wordy dialogues to help PLMs model dialogues, but extracting those features from the original dialogue text is costly. Besides, the resulting semantic features may be also redundant, which is harmful for PLMs to catch the dialogue’s main idea. Without searching for dispensable features, this paper proposes a coarse-to-fine training paradigm for dialogue summarization. Instead of directly fine-tuning PLMs to obtain complete summaries, this paradigm constructs a coarse-grained summarizer which automatically infers the key information to annotate each dialogue. Further, a fine-grained summarizer would generate detailed summaries based on the annotated dialogues. Moreover, to utilize the knowledge from out-of-domain pre-training, a meta learning mechanism is adopted, which could cooperate with our training paradigm and help the model pre-trained on other domains adapt to the dialogue summarization. Experimental results demonstrate that our method could outperform competitive baselines.
[ "Language Models", "Semantic Text Processing", "Summarization", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Information Extraction & Text Mining" ]
[ 52, 72, 30, 11, 47, 38, 3 ]
http://arxiv.org/abs/2209.14642v1
A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection
Existing fake news detection methods aim to classify a piece of news as true or false and provide veracity explanations, achieving remarkable performances. However, they often tailor automated solutions on manual fact-checked reports, suffering from limited news coverage and debunking delays. When a piece of news has not yet been fact-checked or debunked, certain amounts of relevant raw reports are usually disseminated on various media outlets, containing the wisdom of crowds to verify the news claim and explain its verdict. In this paper, we propose a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection based on such raw reports, alleviating the dependency on fact-checked ones. Specifically, we first utilize a hierarchical encoder for web text representation, and then develop two cascaded selectors to select the most explainable sentences for verdicts on top of the selected top-K reports in a coarse-to-fine manner. Besides, we construct two explainable fake news datasets, which are publicly available. Experimental results demonstrate that our model significantly outperforms state-of-the-art baselines and generates high-quality explanations from diverse evaluation perspectives.
[ "Language Models", "Semantic Text Processing", "Explainability & Interpretability in NLP", "Ethical NLP", "Responsible & Trustworthy NLP", "Reasoning", "Fact & Claim Verification", "Green & Sustainable NLP" ]
[ 52, 72, 81, 17, 4, 8, 46, 68 ]
SCOPUS_ID:85083559812
A Code-Description Representation Learning Model Based on Attention
Code search is to retrieve source code given a query. By deep learning, the existing work embeds source code and its description into a shared vector space; however, this space is so general that each code token is associated with each description term. In this paper, we propose a code-description representation learning model (CDRL) based on attention. This model refines the general shared space into the specific one. In such space, only semantically related code tokens and description terms are associated. The experimental results show that this model could retrieve relevant source code effectively and outperform the state-of-the-art method (e.g., CODEnn and QECK) by 4-8% in terms of precision when the first query result is inspected.
[ "Programming Languages in NLP", "Semantic Text Processing", "Representation Learning", "Information Retrieval", "Multimodality" ]
[ 55, 72, 12, 24, 74 ]
SCOPUS_ID:85125851265
A Code-Diverse Kannada-English Dataset for NLP Based Sentiment Analysis Applications
Due to expanded praxis of social media, there is an elevated interest in the Natural Language Processing (NLP) of textual substance. Code swapping is a ubiquitous paradox in multilingual nation and the social communication shows mixing of a low resourced language with a highly resourced language mostly written in non-native script in the same text. It is essential to refine the code swapped text to support distinctive NLP tasks such as Machine Translation, Automated Conversational Systems and Sentiment Analysis (SA). The preeminent objective of SA is to identify and analyze the attitude, opinion, emotion or the sentiment in the dataset. Though there are multiple systems skilled on mono-dialectal dataset, all of them break down when it comes for code-diverse data because of the heightened intricacy of blending at various standards of text. Nonetheless, there exist a smaller number of assets for modelling such definitive code-mixed data and the Machine Learning or the Deep Learning algorithms enforcing supervised learning approach yield the better results compared to the unsupervised learning. Such datasets are available for Hindi-English, Tamil-English, Malayalam-English, Bengali-English, German-English, Spanish-English, Japanese-English, Arabic-English etc. Though our research is concentrated towards NLP for emotion and sentiment detection of Kannada, a vibrant south Indian language, to start with, we build the first ever platinum standard corpus for NLP applications of code-diverse text in Kannada-English, as there is no such resource in our native language. The performance analysis of our dataset through Krippendorff's Alpha value of 0.89 indicates that it is a benchmark in development of Automatic Sentiment Analysis system for Kannada.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85126716062
A Code-Diverse Tulu-English Dataset For NLP Based Sentiment Analysis Applications
Due to expanded praxis of social media, there is an elevated interest in the Natural Language Processing (NLP) of textual substance. Code swapping is a ubiquitous paradox in multilingual nation and the social communication shows mixing of a low resourced language with a highly resourced language mostly written in non-native script in the same text. It is essential to refine the code swapped text to support distinctive NLP tasks such as Machine Translation, Automated Conversational Systems and Sentiment Analysis (SA). The preeminent objective of SA is to identify and analyze the attitude, opinion, emotion or the sentiment in the dataset. Though there are multiple systems skilled on monodialectal dataset, all of them break down when it comes for code-diverse data because of the heightened intricacy of blending at various standards of text. Nonetheless, there exist a smaller number of assets for modelling such definitive code-mixed data and the Machine Learning or the Deep Learning algorithms enforcing supervised learning approach yield the better results compared to the unsupervised learning. Such datasets are available for Hindi-English, Tamil-English, Malayalam-English, Bengali-English, German-English, Spanish-English, Japanese-English, Arabic-English etc. Though our research is concentrated towards NLP for emotion and sentiment detection of Tulu, a vibrant south Indian language, to start with, we build the first ever platinum standard corpus for NLP applications of code-diverse text in Tulu-English, as there is no such resource in our native language. The performance analysis of our dataset through Krippendorff's Alpha value of 0.9 indicates that it is a benchmark in development of Automatic Sentiment Analysis system for Tulu.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:78149322723
A Cognitive Approach to Clinical Phonology
[ "Phonology", "Syntactic Text Processing" ]
[ 6, 15 ]
SCOPUS_ID:85107492338
A Cognitive Approach to Determining the Effectiveness of Teamwork
The paper considers approaches to the concept of team effectiveness. It proposes cognitive simulation modeling as a tool for assessing personnel in teamwork. Team effectiveness is facilitated by the ability of team members to work together over an extended period. Efficiency also depends on the volume of work performed by the team, the knowledge and skills that can be applied to the work, the appropriateness of the strategies for completing tasks, the available resources, and the availability of suitable tools and equipment.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:85096438199
A Cognitive Approach to Teaching Italian Prepositions to Polish Students
The present work aims to show a variety of methods for teaching Italian prepositions to Polish students. The study is based on cognitive linguistics, or, more specifically, Cognitive Grammar (Langacker, 1987) and one of its key notions, imagery. One of the difficulties Polish students face in learning Italian prepositions is the differences between the two linguistic systems, which express the same relations by means of different lexical structures. An application of cognitive linguistics in teaching can, in many cases, help students understand the sources of these differences and, consequently, learn the uses of prepositions in Italian despite the fact that the corresponding Polish structures frequently do not include prepositions. The most common methods of teaching Italian prepositions, while not always effective, are based on the classification of objects and adverbials which a given preposition introduces. This research aims to present a different means of explaining the link created by a preposition based on human cognitive abilities. The chapter will examine several examples of phrases and sentences containing prepositional expressions that cause considerable difficulties for Polish students. Finally, it will highlight the need to improve the classical methods of teaching prepositions, making an important contribution to the field of language teaching.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:67649565526
A Cognitive Grammar account of time motion 'metaphors': A view from Japanese
This article focuses on motion metaphors of time and complements Moore's (Cognitive Linguistics 17: 199-244, 2006) analysis, which insightfully discusses time metaphors, from the standpoint of Japanese data. The present paper argues that time metaphors should be analyzed in terms of Langacker's (Observations and speculations on subjectivity, John Benjamins, 1985, Cognitive Linguistics, 1: 5-38, 1990a, Concept, Image, and Symbol: The Cognitive Basis of Grammar, Mouton de Gruyter, 1990b) subjective/objective construal if Japanese data are considered. More specifically, the present analysis classifies time metaphors in terms of the ground's subjective/objective construal and depends on whether they are deictic or not. When the ground is placed offstage, a given expression is non-deictic and can have the order meaning. This article emphasizes that the order meaning is produced by the ground's objective construal and that this cognitive ability is crucial for comparing two events or persons. By focusing on the ground's subjective/objective construal, the difference between Japanese mae and saki can be captured. The present paper will show that the application of the Cognitive Grammar approach to Japanese temporal expressions can supplement existing metaphor theories and that the cognitive linguistic theory of subjectivity is a useful tool with respect to capturing the properties of Japanese temporal expressions. © 2009 by Walter de Gruyter GmbH.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 2, 48, 57 ]
SCOPUS_ID:85098232669
A Cognitive Linguistic and Sentiment Analysis of Blogs: Monterosso 2011 Flooding
The aim of this study is to explore the use of web resources in order to trace the discursive strategies enacted to restore the image of a tourist destination. In particular, we analyze the case of Monterosso, which was damaged by a flood in 2011. The innovation of this paper lies in its twofold approach: a linguistic approach within the framework of Discourse Analysis, and a sentiment analysis approach realised through tools available on the Internet and specific procedures we have developed in the R environment. The findings are interesting and encourage us to refine our approach in the future.
[ "Cognitive Modeling", "Semantic Text Processing", "Sentiment Analysis", "Discourse & Pragmatics", "Linguistics & Cognitive NLP" ]
[ 2, 72, 78, 71, 48 ]
SCOPUS_ID:85130338859
A Cognitive Linguistics approach to menstruation as a taboo in Gĩkũyũ
The taboos concerning bodily effluvia are generally motivated by our distaste and concern about pollution. Menstruation, for example, one of the bodily effluvia, is a physiological characteristic of the female human experience. However, cultural and social factors influence the way people conceptualize menstruation and, therefore, the meaning of menstruation may differ across cultures. It is against this background that the chapter is anchored. Since the sanctioning of taboo does not originate in the object itself, but in society (Bobel and Kissling 2011), the study identifies the metaphors of menstruation in Gĩkũyũ and interprets them using Conceptual Metaphor Theory (CMT). To achieve this objective, a purposive sample of 60 speakers of Gĩkũyũ was interviewed. The study collected 29 metaphors of menstruation in Gĩkũyũ. The study also identified five conceptual metaphors of menstruation in Gĩkũyũ: menstruation is a period; menstruation is a visitor; menstruation is an indisposition; menstruation is a colour; and menstruation is a valuable possession. The study concludes that metaphor is an integral component of the way people conceptualize and embody menstruation in Gĩkũyũ.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]
SCOPUS_ID:85056446681
A Cognitive Machine Learning System for Phrases Composition and Semantic Comprehension
Although lexical and syntactic theories for phrase analysis have been well studied in linguistic theory and computational linguistics, semantic synthesis theories for cognitive computing remain a challenge in machine learning and brain-inspired systems. This paper studies theories and mathematical models of machine knowledge learning and semantic comprehension. An Algorithm of Unsupervised Phrase Learning (AUPL) is developed that enables cognitive machines to autonomously learn phrase semantics in the sixth category of machine knowledge learning. A set of experimental results is reported to demonstrate the methodology and algorithm. This work plays a fundamental role in sentence learning, where the semantics of natural languages are reduced to those of phrases and terminal words represented by formal concepts in cognitive systems.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85130743378
A Cognitive Model for Understanding the Takeover in Highly Automated Driving Depending on the Objective Complexity of Non-Driving Related Tasks and the Traffic Environment
The aim of this study is to refine a cognitive model of the takeover in highly automated driving. The focus lies on the impact of objective complexity on the takeover and its resulting outcomes. Complexity consists of various aspects. In this study, objective complexity is divided into the complexity of the non-driving-related task (no task, listening, playing, reading, searching) and the traffic complexity (relevant vehicles in the driving environment). The impact of a non-driving-related task's complexity on the takeover is evaluated against empirical data. Subsequently, the cognitive model is run through situations of different traffic complexities and compared to empirical results. The model can account for the empirical data in most of the objective complexity conditions. Additionally, model predictions are tested for significant variations across the different complexity levels up to the point at which the action decision is made. In more complex traffic conditions, the model predicts longer times for the different processing steps. Altogether, the model can be used to explain cognitive mechanisms in traffic situations of differing complexity.
[ "Cognitive Modeling", "Linguistics & Cognitive NLP" ]
[ 2, 48 ]