Dataset schema:
id: string (20-52 chars)
title: string (3-459 chars)
abstract: string (0-12.3k chars)
classification_labels: list
numerical_classification_labels: list
SCOPUS_ID:85125877137
A Case Study of Visualizing Emotions with Social Media Emotion Analysis – Focused on Media Art Cases –
Background: The amount of information produced, distributed, and stored is increasing exponentially as people produce and share information on the internet, unrestricted by time and space, through smartphones. Most of the information produced on social media is unstructured data used to analyze subjective opinions such as assessments, attitudes, and emotions, and it has a significant impact on management, politics, and industry. While various studies on sentiment analysis are underway owing to growing social demand and awareness of its importance, academic research on sentiment analysis in the design field is still in its early stages due to technical barriers and limited opportunities for designer engagement. The purpose of this study is therefore to analyze visualization cases of sentiment analysis using social media data, identify their features and types, and seek a direction for the visualization of emotion in the design field. Methods: The research methods are divided into four main parts. First, a literature study was conducted for theoretical grounding, covering the concepts of social media and emotion analysis. Second, cases were analyzed using the triangular model for data visualization proposed by Andrew Vande Moere and Helen Purchase (2011). Third, a case analysis of works that visualize emotions using social media was conducted; its theoretical background draws on the evolutionary, neural, psychoanalytic, autonomic, facial-expression, empirical-classification, and developmental approaches proposed by the sociologist Theodore D. Kemper (1987), and the cases were limited to media art works that analyze and visualize emotions. Fourth, a comprehensive conclusion was drawn by classifying the case analyses against the triangular model and proposing a subdivision of that model for the visualization of emotions in the design field. Results: The analysis results were classified into the four areas of the triangular model: visualization practice, visualization studies, visualization exploration, and design, which sits at the center of the other three. First, the visualization practice area is grounded in emotion analysis studies and emphasizes the informative, efficient use of statistical and schematic visual elements. Second, the visualization studies area was analyzed against Kemper's (1987) seven approaches to emotion, revealing connections between theories and visual elements, such as facial-expression approaches paired with emoticons and empirical approaches paired with multidimensional scaling. Third, cases in the visualization exploration area used varied audiovisual elements and, grounded in the artist's personality and subjective interpretation, emphasized novelty and aesthetics over the informational value of the emotion analysis data. Fourth, depending on project purpose, data visualization in the design area is subdivided from the triangular model of Vande Moere and Purchase (2011) into a further triangle within the design area. Conclusions: In this research, the attempt to connect and visualize social media and emotions expresses the process by which emotions are socially constructed.
In particular, attention was paid to basic emotions as an important part of the designer's role in shaping the design area located at the center of the three areas: visualization practice, visualization studies, and visualization exploration. This work has significance as prior research that examines elements from art, research, and business that can be used for visualizing social media emotion analysis in the design field, and that explores the direction and potential of data visualization. It is expected that this research will serve as an opportunity to explore how designers' participation in the field of emotion analysis can contribute in the future.
[ "Visual Data in NLP", "Emotion Analysis", "Multimodality", "Sentiment Analysis" ]
[ 20, 61, 74, 78 ]
SCOPUS_ID:85122318925
A Case Study of the Shortcut Effects in Visual Commonsense Reasoning
Visual reasoning and question answering have gathered attention in recent years. Many datasets and evaluation protocols have been proposed; some have been shown to contain biases that allow models to “cheat” without performing true, generalizable reasoning. A well-known bias is dependence on language priors (the frequency of answers), which results in the model not looking at the image. We discover a new type of bias in the Visual Commonsense Reasoning (VCR) dataset. In particular, we show that most state-of-the-art models exploit text co-occurring between the input (question) and output (answer options) and rely on only a few pieces of information in the candidate options to make a decision. Unfortunately, relying on such superficial evidence makes models very fragile. To measure fragility, we propose two ways to modify the validation data in which a few words in the answer choices are changed without significant changes in meaning. We find that such insignificant changes cause models' performance to degrade significantly. To resolve the issue, we propose a curriculum-based masking approach as a mechanism for more robust training. Our method improves the baseline by requiring it to attend to the answers as a whole, and it is more effective than prior masking strategies.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Commonsense Reasoning", "Reasoning", "Multimodality" ]
[ 20, 52, 72, 62, 8, 74 ]
http://arxiv.org/abs/1910.02930v1
A Case Study on Combining ASR and Visual Features for Generating Instructional Video Captions
Instructional videos get high traffic on video sharing platforms, and prior work suggests that providing time-stamped subtask annotations (e.g., "heat the oil in the pan") improves user experience. However, current automatic annotation methods based on visual features alone perform only slightly better than constant prediction. Taking cues from prior work, we show that performance can be improved significantly by considering automatic speech recognition (ASR) tokens as input. Furthermore, jointly modeling ASR tokens and visual features yields higher performance than training on either modality alone. We find that unstated background information is better explained by visual features, whereas fine-grained distinctions (e.g., "add oil" vs. "add olive oil") are disambiguated more easily via ASR tokens.
[ "Visual Data in NLP", "Captioning", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 20, 39, 70, 47, 10, 74 ]
https://aclanthology.org//W19-3317/
A Case Study on Meaning Representation for Vietnamese
This paper presents a case study on meaning representation for Vietnamese. Having introduced several existing semantic representation schemes for different languages, we select AMR (Abstract Meaning Representation) as the basis for our work on Vietnamese. From it, we define a meaning representation label set by adapting the English schema and taking into account the specific characteristics of Vietnamese.
[ "Knowledge Representation", "Semantic Text Processing", "Representation Learning" ]
[ 18, 72, 12 ]
SCOPUS_ID:85030167717
A Case Study on Moral Disengagement and Rationalization in the Context of Portuguese Bullfighting
Bullfighting is increasingly seen as a contested practice in Portugal. The Portuguese public generally disapproves of the practice, and the Portuguese animal rights movement has dedicated a significant number of its campaigns to protesting against it. Despite this opposition, however, the practice still enjoys legal protection on the grounds of preserving it as a national tradition. This contestation and legality have led bullfighting supporters to actively defend and rationalize the practice. This paper analyses that defence and rationalization through a case study of the quasi-lobbyist Portuguese organization Prótoiro, examined by means of critical discourse analysis and neutralization theory. The article concludes that the analysis of this discourse reveals that Prótoiro and its supporters morally disengage from the harm done to the bull by justifying bullfighting as an ethical activity.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
http://arxiv.org/abs/2105.09702v1
A Case Study on Pros and Cons of Regular Expression Detection and Dependency Parsing for Negation Extraction from German Medical Documents. Technical Report
We describe our work on information extraction from medical documents written in German, especially the detection of negations, using an architecture based on the UIMA pipeline. Building on our previous work on software modules covering medical concepts such as diagnoses and examinations, we employ a version of the NegEx regular expression algorithm with a large set of triggers as a baseline. We show that a significantly smaller trigger set is sufficient to achieve similar results, which reduces adaptation time for new text types. We then examine whether dependency parsing (based on the Stanford CoreNLP model) is a good alternative and describe the potential and shortcomings of both approaches.
[ "Syntactic Parsing", "Syntactic Text Processing", "Information Extraction & Text Mining" ]
[ 28, 15, 3 ]
SCOPUS_ID:85118658114
A Case Study on Social Media Analytics for Malaysia Budget
Malaysian citizens always look forward to the budget announcement presented by the government each year. Because of its direct effect on the economy, citizens' opinions are crucial for understanding what they want and whether the budget satisfies them. Social media analytics can gather netizens' opinions on Twitter and conduct sentiment analysis. Most previous sentiment analysis research uses English-based corpora, but tweets in Malaysia typically mix English and Malay words. Therefore, this study uses a hybrid of corpus-based and support vector machine approaches, with a semantic corpus that combines Malay and English words. A domain-specific corpus on the Malaysia Budget, the budget corpus, is then constructed. Two separate analyses are performed: category classification and sentiment analysis. Overall, most netizens have a positive sentiment about Malaysia's Budget, with 56.28% of the tweets being positive. The majority of netizens focus on social welfare and education, which attract the most tweets. The discussion highlights suggestions for improving the accuracy of this study.
[ "Sentiment Analysis" ]
[ 78 ]
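The record above describes a hybrid corpus-based/SVM sentiment pipeline for code-switched English-Malay tweets. As a minimal sketch of the SVM half only (the budget corpus and the paper's preprocessing are not available here, so the toy tweets and character n-gram features below are assumptions), one could write:

```python
# Minimal sketch of the SVM half of the pipeline described above.
# The budget corpus, the Malay-English semantic corpus, and the paper's
# preprocessing are not available here, so toy tweets stand in for them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical mixed English-Malay tweets with sentiment labels.
tweets = [
    "bajet ini sangat bagus for education",            # positive
    "more funding for social welfare, terima kasih",   # positive
    "cukai naik lagi, this budget is disappointing",   # negative
    "nothing for B40, sangat mengecewakan",            # negative
]
labels = ["positive", "positive", "negative", "negative"]

# Character n-grams cope better with code-switched Malay-English text
# than word tokens alone.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(tweets, labels)
print(model.predict(["bajet bagus untuk pendidikan"]))
```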
https://aclanthology.org//2022.eamt-1.24/
A Case Study on the Importance of Named Entities in a Machine Translation Pipeline for Customer Support Content
This paper describes research developed at Unbabel, a Portuguese machine-translation start-up that combines MT with human post-editing and focuses strictly on customer service content. We aim to contribute to furthering MT quality and good practices by exposing the importance of a continuously developed, robust Named Entity Recognition system compliant with the General Data Protection Regulation (GDPR). Moreover, we have tested semi-automatic strategies that support and enhance the creation of Named Entity gold standards to allow a more seamless implementation of multilingual Named Entity Recognition systems. The project described in this paper is the result of work shared between Unbabel's linguists and Unbabel's AI engineering team, matured over a year. The project should also be taken as a statement of multidisciplinarity, proving and validating the much-needed articulation between the different scientific fields that compose and characterize the area of Natural Language Processing (NLP).
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/2111.10776v3
A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features
A language-agnostic approach to recognizing emotions from speech remains an incomplete and challenging task. In this paper, we performed a step-by-step comparative analysis of Speech Emotion Recognition (SER) in Bangla and English to assess whether distinguishing emotions from speech is independent of language. Six emotions were categorized for this study: happiness, anger, neutrality, sadness, disgust, and fear. We employed three Emotional Speech Sets (ESS); the first two were developed by native Bengali speakers in Bangla and English separately, and the third was a subset of the Toronto Emotional Speech Set (TESS), developed by native English speakers from Canada. We carefully selected language-independent prosodic features, adopted a Support Vector Machine (SVM) model, and conducted three experiments to test our proposition. In the first experiment, we measured the performance of the three speech sets individually; in the second, different ESS pairs were combined to analyze the impact on SER; and in the third, we measured the recognition rate by training and testing the model on different speech sets. Although this study reveals that SER in Bangla and English is mostly language-independent, some disparities were observed when recognizing emotional states such as disgust and fear in the two languages. Moreover, our investigation revealed that non-native speakers convey emotions through speech much as they express themselves in their native tongue.
[ "Emotion Analysis", "Multimodality", "Speech & Audio in NLP", "Sentiment Analysis" ]
[ 61, 74, 70, 78 ]
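The abstract above pairs language-independent prosodic features with an SVM. A rough sketch of one plausible prosodic feature extractor follows; the exact feature set is not given in the abstract, so the pitch/energy statistics and the librosa usage below are assumptions, not the authors' code:

```python
# A rough sketch of language-independent prosodic feature extraction,
# assuming pitch and energy statistics similar in spirit to the paper's
# (whose exact feature set is not listed in the abstract). Requires librosa.
import numpy as np
import librosa

def prosodic_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=None)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)   # frame-level pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]               # frame-level energy
    voiced = f0[np.isfinite(f0)]
    return np.array([
        voiced.mean(), voiced.std(),    # pitch statistics
        rms.mean(), rms.std(),          # energy statistics
        len(y) / sr,                    # utterance duration (s)
    ])

# Feature vectors like these could then feed an SVM, as in the paper:
# from sklearn.svm import SVC; SVC().fit(X, y_emotions)
```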
http://arxiv.org/abs/2110.00866v1
A Case Study to Reveal if an Area of Interest has a Trend in Ongoing Tweets Using Word and Sentence Embeddings
In the field of Natural Language Processing, information extraction from texts has been the objective of many researchers for years. Many different techniques have been applied to reveal the opinion a tweet might express, thus understanding the sentiment of a small piece of writing of up to 280 characters. Beyond determining the sentiment of a tweet, a study can also focus on finding the correlation of tweets with a certain area of interest, which constitutes the purpose of this study. To reveal whether an area of interest has a trend in ongoing tweets, we propose an easily applicable automated methodology in which Daily Mean Similarity Scores, which show the similarity between the daily tweet corpus and target words representing our area of interest, are calculated using a naïve correlation-based technique without training any machine learning model. The Daily Mean Similarity Scores are based mainly on cosine similarity and word/sentence embeddings computed by the Multilingual Universal Sentence Encoder, and they reveal the main opinion stream of the tweets with respect to a given area of interest, showing that an ongoing trend on a specific subject on Twitter can be captured in almost real time with the proposed methodology. We also compared the effectiveness of word versus sentence embeddings in our methodology and found that both give almost the same results, whereas word embeddings require less computation time than sentence embeddings and are thus more efficient. The paper begins with an introduction, followed by background information on the basics, then explains the proposed methodology and finishes by interpreting the results and concluding the findings.
[ "Representation Learning", "Semantic Text Processing", "Sentiment Analysis" ]
[ 12, 72, 78 ]
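The Daily Mean Similarity Score described above reduces to cosine similarity between embeddings. A minimal sketch follows, with a toy random embedder standing in for the Multilingual Universal Sentence Encoder the study uses:

```python
# Minimal sketch of the Daily Mean Similarity Score described above.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def daily_mean_similarity(day_tweets, target_words, embed) -> float:
    """Mean cosine similarity between the day's tweets and the centroid
    of the target words representing the area of interest."""
    target = np.mean([embed(w) for w in target_words], axis=0)
    return float(np.mean([cosine(embed(t), target) for t in day_tweets]))

# Toy stand-in embedder so the sketch runs end to end; the paper uses the
# Multilingual Universal Sentence Encoder instead.
rng = np.random.default_rng(0)
_vocab = {}
def toy_embed(text):
    return sum(_vocab.setdefault(w, rng.normal(size=32)) for w in text.split())

score = daily_mean_similarity(
    ["inflation rising again", "prices up at the market"],
    ["inflation", "economy"],
    toy_embed,
)
print(score)  # tracked day by day, a sustained rise would signal a trend
```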
https://aclanthology.org//W14-4401/
A Case Study: NLG meeting Weather Industry Demand for Quality and Quantity of Textual Weather Forecasts
[ "Text Generation" ]
[ 47 ]
SCOPUS_ID:85136976302
A Case-Based Approach for Content Planning in Data-to-Text Generation
The problem of Data-to-Text Generation (D2T) is usually solved with a modular approach, breaking the generation process into some variant of planning and realisation phases. Traditional methods have been very good at producing high-quality texts but are difficult to build for complex domains and also lack diversity. Current neural systems, on the other hand, offer scalability and diversity but at the expense of accuracy. Case-Based approaches try to mitigate the accuracy-diversity trade-off by providing better accuracy than neural systems and better diversity than traditional systems. However, they still fare poorly against neural systems when measured on content selection and diversity. In this work, a Case-Based approach for content planning in D2T, called CBR-Plan, is proposed; it selects and organises the key components required for producing a summary based on similar previous examples. Extensive experiments demonstrate the effectiveness of the proposed method against a variety of benchmark and baseline systems, ranging from template-based to case-based and neural systems. The experimental results indicate that CBR-Plan selects more relevant and diverse content than the other systems.
[ "Data-to-Text Generation", "Text Generation" ]
[ 16, 47 ]
SCOPUS_ID:85115441488
A Case-Based Approach to Data-to-Text Generation
Traditional Data-to-Text Generation (D2T) systems utilise carefully crafted domain-specific rules and templates to generate high-quality, accurate texts. More recent approaches use neural systems to learn domain rules from the training data and produce very fluent and diverse texts. However, there is a trade-off: rule-based systems produce accurate text that may lack variation, while learning-based systems produce more diverse texts but often with poorer accuracy. In this paper, we propose a Case-Based approach for D2T that mitigates the impact of this trade-off by dynamically selecting templates from the training corpora. In our approach we develop a novel case-alignment-based feature weighting method that is used to build an effective similarity measure. Extensive experimentation is performed on a sports domain dataset. Through extractive evaluation metrics, we demonstrate the benefit of the CBR system over a rule-based baseline and a neural benchmark.
[ "Data-to-Text Generation", "Text Generation" ]
[ 16, 47 ]
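Both case-based D2T records above hinge on retrieving the most similar past case and reusing its plan or template. The sketch below illustrates that retrieval step under invented placeholder features and weights; the papers derive their weights via case alignment:

```python
# Illustrative sketch of case-based template selection for D2T, in the
# spirit of the two abstracts above: retrieve the training case whose
# (weighted) data features are closest, and reuse its template.
# The feature encoding and weights here are invented placeholders.
import numpy as np

case_base = [
    # (feature vector of the game data, template reused for generation)
    (np.array([3.0, 1.0, 0.0]), "{team} cruised to a {margin}-point win."),
    (np.array([1.0, 0.0, 1.0]), "{team} edged out {opponent} in overtime."),
]
weights = np.array([0.5, 0.3, 0.2])  # placeholder feature weights

def retrieve_template(query: np.ndarray) -> str:
    dists = [np.sum(weights * np.abs(query - f)) for f, _ in case_base]
    return case_base[int(np.argmin(dists))][1]

print(retrieve_template(np.array([2.5, 1.0, 0.0])))
```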
SCOPUS_ID:85026640633
A Case-Based Reasoning Approach to Convert Natural Language into First Order Logic
Text is the backbone of the web, and most information and human knowledge is represented in natural language. Every day, a vast amount of textual information is posted on web portals, wikis, and news sites, necessitating automated approaches to analyze and understand its content. In this paper, we present a case-based reasoning approach to transform natural language sentences into first-order logic formulas. The formalization approach relies on the principle that natural language sentences with similar grammatical structures and dependency trees will have similar representations in first-order logic. The approach consists of two main stages. First, a deep analysis of the natural language sentence is performed, in which the proper characteristics are extracted and dependencies are specified. In the second stage, a case-based reasoning approach uses existing knowledge (formalized sentences) to drive the formalization of a new sentence. The similarity between natural language sentences is computed on their dependency trees and is calculated based on the tree edit distance. Then, if needed, a solution is adapted based on rules. Example studies have shown the applicability of the method, and the results on a small number of sentences are very promising.
[ "Reasoning", "Syntactic Parsing", "Syntactic Text Processing" ]
[ 8, 28, 15 ]
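The retrieval step described above, finding the nearest formalized case by tree edit distance over dependency trees, can be sketched with the zss package (one Zhang-Shasha implementation; whether the authors used it is not stated, and the toy trees are invented):

```python
# Sketch of the retrieval step described above: find the formalized case
# whose dependency tree is closest by tree edit distance. The zss package
# stands in for whatever implementation the paper used.
from zss import Node, simple_distance

# Toy dependency trees (node label = word) for two stored cases.
case_trees = {
    "forall x. dog(x) -> barks(x)": Node("barks").addkid(Node("dog")),
    "exists x. cat(x) & sleeps(x)": Node("sleeps").addkid(Node("cat"))
                                                  .addkid(Node("softly")),
}

def nearest_case(query: Node) -> str:
    """Return the FOL formula of the most similar stored sentence."""
    return min(case_trees,
               key=lambda fol: simple_distance(query, case_trees[fol]))

query_tree = Node("barks").addkid(Node("puppy"))
print(nearest_case(query_tree))  # adaptation rules would then adjust predicates
```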
SCOPUS_ID:85050736095
A Case-Based Reasoning Decision-Making Model for Hesitant Fuzzy Linguistic Information
In some complicated decision-making problems, because of time pressure or a lack of necessary information, decision makers (DMs) rarely select optimal alternatives; instead, they acquire satisfactory alternatives by analyzing the correlation between the current decision problem and past similar cases. Case-based reasoning (CBR) is an effective approach to obtaining preferential information for DMs from past successful decision cases. Using the CBR approach, we aim to process hesitant fuzzy linguistic information and to classify and rank alternatives according to past successful decision cases. We first summarize the distance measures for hesitant fuzzy linguistic term sets (HFLTSs) and then propose a new axiomatic definition for them, comparing it with existing distance measures in terms of relationships and properties. Furthermore, based on our proposed distance measure, we propose a CBR decision model for hesitant fuzzy linguistic information to calculate the weights of criteria and the classification thresholds. We then classify and rank the alternatives according to the most satisfactory solution in past successful decision cases. Finally, we consider an example to demonstrate the effectiveness and advantages of the proposed method.
[ "Reasoning", "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 8, 24, 36, 3 ]
http://arxiv.org/abs/2302.03494v7
A Categorical Archive of ChatGPT Failures
Large language models have been demonstrated to be valuable in different fields. ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation by comprehending context and generating appropriate responses. It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries, with fluent and comprehensive answers surpassing prior public chatbots in both security and usefulness. However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study. Eleven categories of failures, including reasoning, factual errors, math, coding, and bias, are presented and discussed. The risks, limitations, and societal implications of ChatGPT are also highlighted. The goal of this study is to assist researchers and developers in enhancing future language models and chatbots.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Natural Language Interfaces", "Dialogue Systems & Conversational Agents", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 11, 38, 24, 3 ]
http://arxiv.org/abs/2211.01290v3
A Categorical Framework for Modeling with Stock and Flow Diagrams
Stock and flow diagrams are already an important tool in epidemiology, but category theory lets us go further and treat these diagrams as mathematical entities in their own right. In this chapter we use communicable disease models created with our software, StockFlow.jl, to explain the benefits of the categorical approach. We first explain the category of stock-flow diagrams and note the clear separation between the syntax of these diagrams and their semantics, demonstrating three examples of semantics already implemented in the software: ODEs, causal loop diagrams, and system structure diagrams. We then turn to two methods for building large stock-flow diagrams from smaller ones in a modular fashion: composition and stratification. Finally, we introduce the open-source ModelCollab software for diagram-based collaborative modeling. The graphical user interface of this web-based software lets modelers take advantage of the ideas discussed here without any knowledge of their categorical foundations.
[ "Text Classification", "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 81, 4, 24, 3 ]
SCOPUS_ID:85075827604
A Category Detection Method for Evidence-Based Medicine
Evidence-Based Medicine (EBM) gathers evidence by analyzing large databases of medical literature and retrieving relevant clinical thematic texts. However, the abstracts of medical articles generally present the themes of clinical practice, populations, research methods, and experimental results in an unstructured manner, making retrieval of medical evidence inefficient. Abstract sentences carry contextual information, with complex semantic and grammatical correlations between them, which makes their classification different from that of independent sentences. This paper proposes a category detection algorithm based on a Hierarchical Multi-connected Network (HMcN), treating category detection for EBM as a sequential sentence classification problem. The algorithm comprises multiple layers: (1) the bottom layer produces a sentence vector by combining a pre-trained language model with a Bidirectional Long Short-Term Memory network (Bi-LSTM) and applies a multi-layer self-attention structure to the sentence vector to capture the internal dependencies of the sentences; (2) the upper layer uses a multi-connected Bi-LSTM model that directly reads the original input sequence to add contextual information from the abstract to the sentence vector; (3) the top layer optimizes the tag sequence with a conditional random field (CRF) model. Extensive experiments on public datasets demonstrate that the HMcN model outperforms state-of-the-art text classification methods in medical category detection, improving F1 by 0.4%-0.9%.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Representation Learning", "Reasoning", "Fact & Claim Verification", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 12, 8, 46, 36, 3 ]
SCOPUS_ID:85124624190
A Category Hybrid Embedding Based Approach for Power Text Hierarchical Classification
Current power-domain text classification methods ignore the latent semantic associations between category labels, which leads to low classification performance; to address this, a hierarchical multi-label power text classification method is proposed. First, a multi-label power text dataset is built using automatic information extraction from unstructured power texts, and the hierarchical structural relationships between categories are constructed by leveraging relevant domain knowledge. Second, a text classification method, HONLSTM-BERT, is proposed based on hybrid embeddings of category structure and label semantics, classifying power texts hierarchically in a top-down manner. Finally, experiments compare the method with several popular text classification methods, and the results show that the proposed HONLSTM-BERT achieves superior classification accuracy and can effectively improve the performance of automatic text classification.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Representation Learning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 12, 24, 3 ]
SCOPUS_ID:85145884856
A Category Theory Framework for Sense Systems
Sense repositories are a key component of many NLP applications that require the identification of word senses, a task known as word sense disambiguation. WordNet synsets form the most prominent repository, but many others exist and over the years these repositories have been mapped to each other. However, there have been no attempts (until now) to provide any theoretical grounding for such mappings, causing inconsistencies and unintuitive results. The present paper draws on category theory to formalise assumptions about mapped repositories that are often left implicit, providing formal grounding for this type of language resource. We introduce notation to represent the mappings and repositories as a category, which we call a sense system; and we propose and motivate four basic and two guiding criteria for such sense systems.
[ "Linguistics & Cognitive NLP", "Semantic Text Processing", "Word Sense Disambiguation", "Linguistic Theories" ]
[ 48, 72, 65, 57 ]
http://arxiv.org/abs/2210.12023v2
A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models
We have recently witnessed a number of impressive results on hard mathematical reasoning problems with language models. At the same time, the robustness of these models has also been called into question; recent works have shown that models can rely on shallow patterns in the problem description when predicting a solution. Building on the idea of behavioral testing, we propose a novel framework that pins down the causal effect of various factors in the input (e.g., the surface form of the problem text, the operands, and the math operators) on the output solution. By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space. We apply our framework to a test bed of bivariate math word problems. Our analysis shows that robustness does not appear to improve continuously as a function of scale, but that the recent LLM, GPT-3-Instruct (175B), achieves a dramatic improvement in both robustness and sensitivity compared to all other GPT variants.
[ "Language Models", "Semantic Text Processing", "Robustness in NLP", "Reasoning", "Numerical Reasoning", "Responsible & Trustworthy NLP" ]
[ 52, 72, 58, 8, 5, 4 ]
http://arxiv.org/abs/1911.10787v1
A Causal Inference Method for Reducing Gender Bias in Word Embedding Relations
Word embedding has become essential for natural language processing as it boosts the empirical performance of various tasks. However, recent research has discovered that gender bias is incorporated in neural word embeddings, and downstream tasks that rely on these biased word vectors also produce gender-biased results. While some word-embedding gender-debiasing methods have been developed, they mainly focus on reducing the bias associated with the gender direction and fail to reduce the gender bias present in word embedding relations. In this paper, we design a causal and simple approach for mitigating gender bias in word vector relations by utilizing the statistical dependency between gender-definition word embeddings and gender-biased word embeddings. Our method attains state-of-the-art results on gender-debiasing tasks, lexical- and sentence-level evaluation tasks, and downstream coreference resolution tasks.
[ "Semantic Text Processing", "Robustness in NLP", "Representation Learning", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 72, 58, 12, 17, 4 ]
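For orientation, the sketch below shows not the paper's causal method but the standard gender-direction projection (hard debiasing) that such relation-level methods respond to: removing each vector's component along an estimated gender direction.

```python
# Not the paper's causal method, but the standard gender-direction
# projection (hard debiasing) that relation-based approaches build on:
# remove each vector's component along a gender direction g.
import numpy as np

def gender_direction(emb: dict) -> np.ndarray:
    g = emb["he"] - emb["she"]          # a simple one-pair estimate;
    return g / np.linalg.norm(g)        # papers typically average many pairs

def debias(v: np.ndarray, g: np.ndarray) -> np.ndarray:
    return v - (v @ g) * g              # project out the gender component

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "doctor"]}
g = gender_direction(emb)
print(abs(debias(emb["doctor"], g) @ g))  # ~0: no residual gender component
```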
http://arxiv.org/abs/2201.09119v1
A Causal Lens for Controllable Text Generation
Controllable text generation concerns two fundamental tasks of wide applications, namely generating text of given attributes (i.e., attribute-conditional generation), and minimally editing existing text to possess desired attributes (i.e., text attribute transfer). Extensive prior work has largely studied the two problems separately, and developed different conditional models which, however, are prone to producing biased text (e.g., various gender stereotypes). This paper proposes to formulate controllable text generation from a principled causal perspective which models the two tasks with a unified framework. A direct advantage of the causal formulation is the use of rich causality tools to mitigate generation biases and improve control. We treat the two tasks as interventional and counterfactual causal inference based on a structural causal model, respectively. We then apply the framework to the challenging practical setting where confounding factors (that induce spurious correlations) are observable only on a small fraction of data. Experiments show significant superiority of the causal approach over previous conditional models for improved control accuracy and reduced bias.
[ "Text Generation" ]
[ 47 ]
http://arxiv.org/abs/1905.08392v1
A Causality-Guided Prediction of the TED Talk Ratings from the Speech-Transcripts using Neural Networks
Automated prediction of public speaking performance enables novel systems for tutoring public speaking skills. We use the largest open repository---TED Talks---to predict the ratings provided by the online viewers. The dataset contains over 2200 talk transcripts and the associated meta information including over 5.5 million ratings from spontaneous visitors to the website. We carefully removed the bias present in the dataset (e.g., the speakers' reputations, popularity gained by publicity, etc.) by modeling the data generating process using a causal diagram. We use a word sequence based recurrent architecture and a dependency tree based recursive architecture as the neural networks for predicting the TED talk ratings. Our neural network models can predict the ratings with an average F-score of 0.77 which largely outperforms the competitive baseline method.
[ "Speech & Audio in NLP", "Multimodality" ]
[ 70, 74 ]
SCOPUS_ID:85127372132
A Centered Convolutional Restricted Boltzmann Machine Optimized by Hybrid Atom Search Arithmetic Optimization Algorithm for Sentimental Analysis
Sentiment analysis uses natural language processing (NLP) to track online conversations and uncover additional information about a subject, business, or theme. Existing machine-learning algorithms are accurate and perform well, but they struggle to reduce computational time and to cope with the noisy, high-dimensional feature space of social media data. To resolve these concerns, this paper introduces Centered Convolutional Restricted Boltzmann Machines (CCRBM), a deep learning technique for user-behavior sentiment analysis. The DBN architecture is selected in this work for its ability to extract in-depth sentiment features, reduce dimensionality, and achieve higher classification accuracy. However, improper parameter settings can lead to non-convergence, large randomness, and weak generalization. To tackle this issue, this work proposes a Hybrid Atom Search Arithmetic Optimization (HASAO) approach, which optimizes DBN parameters such as batch size and decay rate while minimizing DBN issues such as randomness and instability. The performance of the proposed model is analyzed by comparing it with different baseline models, and accuracy above 90% on the nine datasets demonstrates the efficiency of the proposed technique. Compared to existing techniques, the proposed methodology offers improved accuracy and speed.
[ "Information Retrieval", "Sentiment Analysis" ]
[ 24, 78 ]
SCOPUS_ID:85123470669
A Central Opinion Extraction Framework for Boosting Performance on Sentiment Analysis
With the rapid development of the Internet, mining opinions and emotions from the explosive growth of user-generated content is a key field of social media analysis. However, the central opinion of a document, which strongly expresses its essential points and converges its main sentiments, takes diverse forms in practice, such as sequential sentences, a sentence fragment, or an individual sentence. Previous document-level and sentence-level sentiment analysis research fails to handle this situation uniformly. To address this issue, we propose a Central Opinion Extraction (COE) framework to boost performance on sentiment analysis with social media texts. Our framework first extracts a span-level central opinion text, which expresses the essential opinion for sentiment representation within the whole text, and then uses the extracted span to boost the performance of sentiment classifiers. Experimental results on a public dataset show the effectiveness of our framework in boosting performance on the document-level sentiment analysis task.
[ "Opinion Mining", "Sentiment Analysis", "Information Extraction & Text Mining" ]
[ 49, 78, 3 ]
SCOPUS_ID:85034631424
A Centralized Service Discovery Algorithm via Multi-Stage Semantic Service Matching in Internet of Things
In recent years, the number of services in the Internet of Things (IoT) has increased rapidly, and service discovery in IoT has become more difficult at large registration scales. In traditional matching methods, all matching parameters for services had to be calculated together to obtain good match results, wasting considerable computing resources and time. This paper presents a centralized service discovery algorithm based on multi-stage semantic service matching. It adopts layered filters that consider the various constraint parameters of IoT services, such as service category, input/output (IO), precondition/effect (PE), and quality of experience (QoE), and it obtains proper matching results more efficiently. First, we use the IoT service description language OWL-Siot to describe IoT services and requests uniformly. Then, we propose a four-layer structural model for service discovery, comprising an interactive interface layer, a parsing annotation layer, a service matching layer, and a data semantic layer. We also propose a hybrid service matching degree measurement that synthetically calculates concept logic and semantic similarity for each layer separately. Experimental results show that the method can effectively improve the performance of service discovery.
[ "Semantic Text Processing", "Semantic Similarity" ]
[ 72, 53 ]
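The layered-filter idea above, in which cheap constraints (category, IO, PE, QoE) prune candidates before any expensive semantic scoring, can be sketched as follows; the service fields and thresholds are simplified placeholders for the paper's OWL-Siot descriptions and hybrid matching degree:

```python
# Conceptual sketch of multi-stage (layered) service matching: cheap
# filters run first, so expensive semantic scoring only touches survivors.
from dataclasses import dataclass

@dataclass
class Service:
    category: str
    inputs: frozenset
    outputs: frozenset
    qoe: float  # quality-of-experience score in [0, 1]

def discover(services, req: Service, qoe_min=0.5):
    stage1 = [s for s in services if s.category == req.category]   # category
    stage2 = [s for s in stage1 if req.inputs <= s.inputs          # IO match
                                and req.outputs <= s.outputs]
    stage3 = [s for s in stage2 if s.qoe >= qoe_min]               # QoE
    # final stage: rank survivors (placeholder for semantic similarity)
    return sorted(stage3, key=lambda s: s.qoe, reverse=True)

catalog = [Service("temperature", frozenset({"room"}),
                   frozenset({"celsius"}), 0.9)]
request = Service("temperature", frozenset({"room"}),
                  frozenset({"celsius"}), 0.0)
print(discover(catalog, request))
```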
https://aclanthology.org//W16-6628/
A Challenge Proposal for Narrative Generation Using CNLs
[ "Text Generation" ]
[ 47 ]
http://arxiv.org/abs/1704.07431v5
A Challenge Set Approach to Evaluating Machine Translation
Neural machine translation represents an exciting leap forward in translation quality. But what longstanding weaknesses does it resolve, and which remain? We address these questions with a challenge set approach to translation evaluation and error analysis. A challenge set consists of a small set of sentences, each hand-designed to probe a system's capacity to bridge a particular structural divergence between languages. To exemplify this approach, we present an English-French challenge set, and use it to analyze phrase-based and neural systems. The resulting analysis provides not only a more fine-grained picture of the strengths of neural systems, but also insight into which linguistic phenomena remain out of reach.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/1806.02725v2
A Challenge Set for French --> English Machine Translation
We present a challenge set for French --> English machine translation based on the approach introduced in Isabelle, Cherry and Foster (EMNLP 2017). Such challenge sets are made up of sentences that are expected to be relatively difficult for machines to translate correctly because their most straightforward translations tend to be linguistically divergent. We present here a set of 506 manually constructed French sentences, 307 of which are targeted to the same kinds of structural divergences as in the paper mentioned above. The remaining 199 sentences are designed to test the ability of the systems to correctly translate difficult grammatical words such as prepositions. We report on the results of using this challenge set for testing two different systems, namely Google Translate and DEEPL, each on two different dates (October 2017 and January 2018). All the resulting data are made publicly available.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
SCOPUS_ID:85120946648
A Challenge for Contrastive L1/L2 Corpus Studies: Large Inter- and Intra-Individual Variation Across Morphological, but Not Global Syntactic Categories in Task-Based Corpus Data of a Homogeneous L1 German Group
In this paper, we present corpus data that questions the concept of native speaker homogeneity as it is presumed in many studies using native speakers (L1) as a control group for learner data (L2), especially in corpus contexts. Usage-based research on second and foreign language acquisition often investigates quantitative differences between learners, and usually a group of native speakers serves as a control group, but often without elaborating on differences within this group to the same extent. We examine inter-personal differences using data from two well-controlled German native speaker corpora collected as control groups in the context of second and foreign language research. Our results suggest that certain linguistic aspects vary to an extent in the native speaker data that undermines general statements about quantitative expectations in L1. However, we also find differences between phenomena: while morphological and syntactic sub-classes of verbs and nouns show great variability in their distribution in native speaker writing, other, coarser categories, like parts of speech, or types of syntactic dependencies, behave more predictably and homogeneously. Our results highlight the necessity of accounting for inter-individual variance in native speakers where L1 is used as a target ideal for L2. They also raise theoretical questions concerning a) explanations for the divergence between phenomena, b) the role of frequency distributions of morphosyntactic phenomena in usage-based linguistic frameworks, and c) the notion of the individual adult native speaker as a general representative of the target language in language acquisition studies or language in general.
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
http://arxiv.org/abs/2207.02657v2
A Challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Systems
A challenge on semi-supervised and reinforced task-oriented dialog systems, co-located with the SereTOD workshop at EMNLP 2022.
[ "Low-Resource NLP", "Natural Language Interfaces", "Responsible & Trustworthy NLP", "Dialogue Systems & Conversational Agents" ]
[ 80, 11, 4, 38 ]
https://aclanthology.org//W16-5505/
A Challenge to the Third Hoshi Shinichi Award
[ "Text Generation" ]
[ 47 ]
http://arxiv.org/abs/2303.03840v2
A Challenging Benchmark for Low-Resource Learning
With promising yet saturated results in high-resource settings, low-resource datasets have gradually become popular benchmarks for evaluating the learning ability of advanced neural networks (e.g., BigBench, SuperGLUE). Some models even surpass humans according to benchmark test results. However, we find that there exists a set of hard examples in low-resource settings that challenge neural networks but are not well evaluated, which causes over-estimated performance. We first give a theoretical analysis of the factors that make low-resource learning difficult. This motivates us to propose a challenging benchmark, hardBench, to better evaluate learning ability; it covers 11 datasets, including 3 computer vision (CV) datasets and 8 natural language processing (NLP) datasets. Experiments on a wide range of models show that neural networks, even pre-trained language models, suffer sharp performance drops on our benchmark, demonstrating its effectiveness in evaluating the weaknesses of neural networks. On NLP tasks, we surprisingly find that despite better results on traditional low-resource benchmarks, pre-trained networks do not show performance improvements on our benchmark. These results demonstrate that there is still a large robustness gap between existing models and human-level performance.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 4 ]
SCOPUS_ID:85133024446
A Chaotic Antlion Optimization Algorithm for Text Feature Selection
Text classification is one of the important technologies in the field of text data mining. Feature selection, a key step in text classification tasks, is used to process high-dimensional feature sets and directly affects the final classification performance. At present, the most widely used text feature selection methods calculate the importance of each feature for classification through an evaluation function and then select the most important feature subsets that meet the quantitative requirements. However, ignoring the correlation between features and the effect of their combinations in this way may not guarantee the best classification result. Therefore, this paper proposes a chaotic antlion feature selection algorithm (CAFSA) to solve this problem. The main contributions are: (1) a chaotic antlion algorithm (CAA) based on a quasi-opposition learning mechanism and a chaos strategy, compared with four other algorithms on 11 benchmark functions, where it achieved faster convergence and the highest optimization accuracy; (2) a study of the performance of CAFSA (using CAA for feature selection) with different learning models, including a decision tree, Naive Bayes, and an SVM classifier; (3) a comparison of CAFSA with eight other feature selection methods on three Chinese datasets. The experimental results show that CAFSA can reduce the number of features and improve the classification accuracy of the classifier, yielding a better classification effect than the other feature selection methods.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
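Two ingredients named in the abstract, a chaotic map and quasi-opposition-based learning, can be sketched as below. The logistic map with r = 4 and the quasi-opposition formulation shown are common choices in the metaheuristics literature, assumed here rather than taken from the paper:

```python
# Sketch of two CAA ingredients named above: a logistic chaotic map for
# population initialization and quasi-opposition-based learning.
# Parameter choices (r = 4, bounds) are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def logistic_map(n: int, x0: float = 0.7, r: float = 4.0) -> np.ndarray:
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)   # chaotic sequence in (0, 1)
        xs[i] = x
    return xs

def quasi_opposite(x, lo, hi):
    """Quasi-opposite point: uniform between the interval centre and the
    opposite point lo + hi - x (one common formulation; variants exist)."""
    centre, opposite = (lo + hi) / 2.0, lo + hi - x
    return rng.uniform(np.minimum(centre, opposite),
                       np.maximum(centre, opposite))

lo, hi = 0.0, 1.0
population = lo + (hi - lo) * logistic_map(10)   # chaos-initialized candidates
population_qo = quasi_opposite(population, lo, hi)
# An antlion-style optimizer would then keep the fitter of each pair.
```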
SCOPUS_ID:85107916382
A Character Flow Framework for Multi-Oriented Scene Text Detection
Scene text detection plays a significant role in various applications, such as object recognition, document management, and visual navigation. Instance segmentation based methods have been widely used in existing research because of their advantages in handling multi-oriented text. However, a large number of non-text pixels appear in the labels during model training, leading to text mis-segmentation. In this paper, we propose a novel multi-oriented scene text detection framework comprising two main modules: character instance segmentation (one instance corresponds to one character) and character flow construction (one character flow corresponds to one word). We use a feature pyramid network (FPN) to predict character and non-character instances with arbitrary orientations. A joint network of FPN and bidirectional long short-term memory (BLSTM) is developed to explore the context among isolated characters, which are finally grouped into character flows. Extensive experiments on the ICDAR2013, ICDAR2015, MSRA-TD500, and MLT datasets demonstrate the effectiveness of our approach, with F-measures of 92.62%, 88.02%, 83.69%, and 77.81%, respectively.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
SCOPUS_ID:85128030972
A Character String-Based Stemming for Morphologically Derivative Languages
Morphologically derivative languages form words by fusing stems and suffixes; extracting stems is important for cross-lingual alignment and knowledge transfer. Because phonetic harmony and disharmony occur when linguistic particles combine, both phonetic and morphological changes need to be analyzed. This paper proposes a multilingual stemming method that learns morpho-phonetic changes automatically based on character embeddings and sequence modeling. First, sentence-level character feature embeddings are used as input, and a BiLSTM model captures the forward and backward context sequences; an attention mechanism is added for weight learning, and global feature information is extracted to capture stem and affix boundaries. Finally, a CRF model learns additional information from the sequence features to describe the context more effectively. To verify the effectiveness of this model, it is compared with traditional models on two datasets covering three derivative languages: Uyghur, Kazakh, and Kirghiz. The experimental results show that the proposed model achieves the best stemming performance on multilingual sentence-level datasets, leading to more effective stemming. The model also outperforms other traditional models, takes full account of the data characteristics, and offers clear advantages with less human intervention.
[ "Semantic Text Processing", "Morphology", "Syntactic Text Processing", "Representation Learning", "Phonetics", "Cross-Lingual Transfer", "Multilinguality" ]
[ 72, 73, 15, 12, 64, 19, 0 ]
SCOPUS_ID:85073232408
A Character-Enhanced Chinese Word Embedding Model
Distributed word representation has demonstrated its advantages in many natural language processing tasks, such as named entity recognition, entity relation extraction, and text classification. Traditional one-hot word representation represents a word as a high-dimensional, sparse vector. Distributed word representation instead represents a word as a low-dimensional, dense vector, which is more suitable as input to deep neural networks, and it can express semantic relatedness and syntactic regularities between words. Word embedding is a distributed word representation technique that is very popular and useful in many natural language processing tasks. Recently, more and more research has focused on learning word embeddings with internal morphological knowledge, such as characters, sub-words, and other kinds of morphological information. For example, Chinese characters contain rich semantic information related to the words they compose; thus, characters can help improve the representation of words. In this paper, we present a character-enhanced Chinese word embedding model (CCWE). In the model, we train character and word embeddings simultaneously in two parallel tasks, with a framework based on Skip-Gram. We evaluate CCWE on word similarity, analogical reasoning, text classification, and named entity recognition. The results demonstrate that our model learns better Chinese word and character embeddings than other baseline models.
[ "Semantic Text Processing", "Information Retrieval", "Morphology", "Syntactic Text Processing", "Representation Learning", "Named Entity Recognition", "Text Classification", "Information Extraction & Text Mining" ]
[ 72, 24, 73, 15, 12, 34, 36, 3 ]
http://arxiv.org/abs/1903.02642v1
A Character-Level Approach to the Text Normalization Problem Based on a New Causal Encoder
Text normalization is a ubiquitous process that appears as the first step of many Natural Language Processing problems. However, previous Deep Learning approaches have suffered from so-called silly errors, which are undetectable on unsupervised frameworks, making those models unsuitable for deployment. In this work, we make use of an attention-based encoder-decoder architecture that overcomes these undetectable errors by using a fine-grained character-level approach rather than a word-level one. Furthermore, our new general-purpose encoder based on causal convolutions, called Causal Feature Extractor (CFE), is introduced and compared to other common encoders. The experimental results show the feasibility of this encoder, which leverages the attention mechanisms the most and obtains better results in terms of accuracy, number of parameters and convergence time. While our method results in a slightly worse initial accuracy (92.74%), errors can be automatically detected and, thus, more readily solved, obtaining a more robust model for deployment. Furthermore, there is still plenty of room for future improvements that will push even further these advantages.
[ "Language Models", "Text Normalization", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 52, 59, 72, 15 ]
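The Causal Feature Extractor above is built on causal convolutions. A minimal PyTorch sketch of the core operation, a left-padded 1-D convolution so position t never sees later characters, follows; channel sizes and kernel width are illustrative:

```python
# Minimal sketch of a causal 1-D convolution of the kind the Causal
# Feature Extractor is built from: left-pad so position t never sees
# characters after t. Channel sizes and kernel width are illustrative.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int):
        super().__init__()
        self.pad = k - 1                       # pad only on the left
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

x = torch.randn(2, 32, 100)                    # batch of char embeddings
y = CausalConv1d(32, 64, k=5)(x)
print(y.shape)                                 # (2, 64, 100): length preserved
```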
SCOPUS_ID:85078924636
A Character-Level BiLSTM-CRF Model with Multi-Representations for Chinese Event Detection
Using the word as the basic unit may undermine a Chinese event detection model's performance because of inaccurate word boundaries generated by segmentation tools. Besides, word embeddings are context-independent and cannot handle the polysemy of event triggers, which may prevent the desired performance. To address these issues, we propose a BiLSTM-CRF (Bidirectional Long Short-Term Memory Conditional Random Field) model using contextualized representations, which treats event detection as a character-level sequence labeling problem and uses contextualized representations to disambiguate event triggers. Experiments show that our proposed method sets a new state of the art, suggesting that Chinese characters can replace words for the Chinese event detection task. Besides, using contextualized representations reduces false positives, verifying that this kind of representation can remedy the weakness of the word embedding technique. Based on these results, we believe character-level models are worth exploring further.
[ "Language Models", "Semantic Text Processing", "Representation Learning", "Event Extraction", "Information Extraction & Text Mining" ]
[ 52, 72, 12, 31, 3 ]
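A bare-bones character-level tagger in the spirit of the model above is sketched below; for brevity, a per-character softmax replaces the paper's CRF layer (a package such as pytorch-crf could supply one), and static character embeddings stand in for the contextualized representations:

```python
# Bare-bones character-level BiLSTM trigger tagger in the spirit of the
# model above; the CRF layer and contextualized embeddings are omitted.
import torch
import torch.nn as nn

class CharBiLSTMTagger(nn.Module):
    def __init__(self, n_chars: int, n_tags: int, d_emb=64, d_hid=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, d_emb)
        self.lstm = nn.LSTM(d_emb, d_hid, bidirectional=True,
                            batch_first=True)
        self.out = nn.Linear(2 * d_hid, n_tags)  # BIO-style trigger tags

    def forward(self, chars: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.emb(chars))
        return self.out(h)                       # (batch, seq, n_tags)

model = CharBiLSTMTagger(n_chars=5000, n_tags=5)
logits = model(torch.randint(0, 5000, (2, 30)))  # two 30-character sentences
print(logits.argmax(-1).shape)                   # predicted tag per character
```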
http://arxiv.org/abs/1603.06147v4
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
Existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder-decoder with a subword-level encoder and a character-level decoder on four language pairs (En-Cs, En-De, En-Ru, and En-Fi) using the parallel corpora from WMT'15. Our experiments show that the models with a character-level decoder outperform those with a subword-level decoder on all four language pairs. Furthermore, ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De, and En-Fi and perform comparably on En-Ru.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
SCOPUS_ID:85064555247
A Character-Level Deep Lifelong Learning Model for Named Entity Recognition in Vietnamese Text
Lifelong Machine Learning (LML) is a continuous learning process in which the knowledge learned from previous tasks is accumulated in a knowledge base and then used to support future learning tasks, for which only a few samples may exist. However, there are few studies on LML based on deep neural networks for Named Entity Recognition (NER), especially in Vietnamese. We propose DeepLML-NER, a lifelong learning model based on deep learning methods with a CRF layer, for NER in Vietnamese text. DeepLML-NER includes an algorithm to extract knowledge of the “prefix-features” of named entities in previous domains. The model then uses the knowledge stored in the knowledge base to solve a new NER task. The effectiveness of the model was demonstrated in in-domain and cross-domain experiments, achieving promising results.
[ "Language Models", "Semantic Text Processing", "Knowledge Representation", "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 52, 72, 18, 34, 3 ]
http://arxiv.org/abs/2205.14522v2
A Character-Level Length-Control Algorithm for Non-Autoregressive Sentence Summarization
Sentence summarization aims at compressing a long sentence into a short one that keeps the main gist, and has extensive real-world applications such as headline generation. In previous work, researchers have developed various approaches to improve the ROUGE score, which is the main evaluation metric for summarization, whereas controlling the summary length has not drawn much attention. In our work, we address a new problem of explicit character-level length control for summarization, and propose a dynamic programming algorithm based on the Connectionist Temporal Classification (CTC) model. Results show that our approach not only achieves higher ROUGE scores but also yields more complete sentences.
[ "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 30, 47, 3 ]
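The record above describes a dynamic programming algorithm for character-level length control on top of a CTC model. The sketch below illustrates the general idea under a simplifying assumption: each frame either emits its best non-blank token or a blank, and CTC's rule of merging adjacent repeats is ignored. It is an illustration of length-controlled decoding, not the paper's algorithm.

```python
import numpy as np

def length_controlled_decode(logp, target_len, blank=0):
    """Pick, per frame, either blank (emit nothing) or the best non-blank
    token (emit one character), maximizing total log-prob subject to
    emitting exactly `target_len` characters.
    NOTE: real CTC also collapses adjacent repeats; that rule is ignored
    here to keep the dynamic program short."""
    T, V = logp.shape
    best_tok = logp[:, 1:].argmax(axis=1) + 1      # best non-blank per frame
    best_val = logp[np.arange(T), best_tok]
    NEG = -1e18
    dp = np.full((T + 1, target_len + 1), NEG)
    choice = np.zeros((T + 1, target_len + 1), dtype=int)   # 0=blank, 1=emit
    dp[0, 0] = 0.0
    for t in range(1, T + 1):
        for k in range(0, min(t, target_len) + 1):
            stay = dp[t - 1, k] + logp[t - 1, blank]
            emit = dp[t - 1, k - 1] + best_val[t - 1] if k > 0 else NEG
            dp[t, k], choice[t, k] = max((stay, 0), (emit, 1))
    # Backtrack the emitted tokens.
    out, k = [], target_len
    for t in range(T, 0, -1):
        if choice[t, k]:
            out.append(int(best_tok[t - 1]))
            k -= 1
    return out[::-1], dp[T, target_len]

logp = np.log(np.random.dirichlet(np.ones(30), size=50))   # 50 frames, vocab 30
tokens, score = length_controlled_decode(logp, target_len=12)
print(len(tokens), score)
```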
SCOPUS_ID:85055682657
A Character-Level Method for Text Classification
We propose a language model mixing a CNN (Convolutional Neural Network) with a bi-RNN (Bi-directional Recurrent Neural Network) to classify text at the character level. Unlike word-level models, the character-level model avoids the problem of unregistered words and improves the robustness of the text representation. The model augments the data through different convolution filters of the CNN, and the bi-RNN then captures contextual information in both directions to classify the text. The results show that this model performs better than common CNN and LSTM (long short-term memory) classification methods.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
http://arxiv.org/abs/1612.03266v1
A Character-Word Compositional Neural Language Model for Finnish
Inspired by recent research, we explore ways to model the highly morphological Finnish language at the level of characters while maintaining the performance of word-level models. We propose a new Character-to-Word-to-Character (C2W2C) compositional language model that uses characters as input and output while still internally processing word level embeddings. Our preliminary experiments, using the Finnish Europarl V7 corpus, indicate that C2W2C can respond well to the challenges of morphologically rich languages such as high out of vocabulary rates, the prediction of novel words, and growing vocabulary size. Notably, the model is able to correctly score inflectional forms that are not present in the training data and sample grammatically and semantically correct Finnish sentences character by character.
[ "Language Models", "Semantic Text Processing", "Syntactic Text Processing", "Morphology" ]
[ 52, 72, 15, 73 ]
SCOPUS_ID:85125037810
A Character-Word Graph Attention Networks for Chinese Text Classification
Text classification is an important task in natural language processing. Unlike English, Chinese text has two representations, character-level and word-level: the former carries rich connotations and the latter carries specific meanings. Previous research often simply concatenates the two levels of features with little processing and fails to explore the affiliation relationship between Chinese characters and words. In this paper, we propose a character-word graph attention network (CW-GAT) to explore the interactive information between characters and words for Chinese text classification. A graph attention network is adopted to capture the context of sentences and the interaction between characters and words. Extensive experiments on six real Chinese text datasets show that the proposed model outperforms the latest baseline methods.
[ "Structured Data in NLP", "Text Classification", "Multimodality", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 50, 36, 74, 24, 3 ]
https://aclanthology.org//W02-1117/
A Character-net Based Chinese Text Segmentation Method
[ "Knowledge Representation", "Text Segmentation", "Semantic Text Processing", "Syntactic Text Processing" ]
[ 18, 21, 72, 15 ]
SCOPUS_ID:85124904965
A Characterization of Dante Alighieri: An NLP approach to the Divine Comedy
The main goal of this research is to offer a new perspective on Dante Alighieri and his most famous work, The Divine Comedy, based on natural language processing. We seek to provide rigorous evidence to enhance literary analysis of The Divine Comedy. We utilized sentiment analysis, text classification and topic modeling. We analyzed both the original Divine Comedy written in Italian and the Divine Comedy translated into English. We also used natural language processing to compare the Divine Comedy to a variety of Shakespeare's plays and poems.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
SCOPUS_ID:85137086794
A Characterization of Word-Usage of Students Using Part-of-Speech Information
The goal of the study presented in this paper is to understand the attitudes and viewpoints of university students toward learning. We have been investigating text data obtained as answers to a term-end questionnaire in a class, and have found some interesting facts about students' attitudes toward learning. This paper investigates the texts further by using part-of-speech (POS) information, so that we can extract features different from those obtained in our former studies. We develop several distance measures between students so that we can view the features through different methods of measurement. As a result, we can identify students who differ from their peers in their usage of POS.
[ "Information Extraction & Text Mining", "Multimodality" ]
[ 3, 74 ]
http://arxiv.org/abs/1808.07214v2
A Characterwise Windowed Approach to Hebrew Morphological Segmentation
This paper presents a novel approach to the segmentation of orthographic word forms in contemporary Hebrew, focusing purely on splitting without carrying out morphological analysis or disambiguation. Casting the analysis task as character-wise binary classification and using adjacent character and word-based lexicon-lookup features, this approach achieves over 98% accuracy on the benchmark SPMRL shared task data for Hebrew, and 97% accuracy on a new out of domain Wikipedia dataset, an improvement of ~4% and 5% over previous state of the art performance.
[ "Syntactic Text Processing", "Morphology" ]
[ 15, 73 ]
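The Hebrew segmentation record above casts splitting as character-wise binary classification with adjacent-character features. A minimal scikit-learn sketch of that casting follows; the transliterated toy words, their split points, and the feature template are invented stand-ins, and the lexicon-lookup features the paper adds are omitted.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def char_window_features(word, i, window=2):
    """Features for position i: the surrounding characters in a fixed
    window. A real system would add lexicon-lookup features as the
    paper describes; those are omitted here."""
    feats = {}
    for off in range(-window, window + 1):
        j = i + off
        feats[f"c[{off}]"] = word[j] if 0 <= j < len(word) else "<pad>"
    return feats

# Toy training data: (word, split-point positions), purely illustrative.
train = [("hamelekh", {2}), ("vehayeled", {2, 5})]
X = [char_window_features(w, i) for w, _ in train for i in range(len(w))]
y = [int(i in cuts) for w, cuts in train for i in range(len(w))]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
word = "hamelekh"
cuts = [i for i in range(len(word))
        if clf.predict([char_window_features(word, i)])[0] == 1]
print(cuts)
```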
http://arxiv.org/abs/cmp-lg/9605017v1
A Chart Generator for Shake and Bake Machine Translation
A generation algorithm based on an active chart parsing algorithm is introduced which can be used in conjunction with a Shake and Bake machine translation system. A concise Prolog implementation of the algorithm is provided, and some performance comparisons with a shift-reduce based algorithm are given which show the chart generator is much more efficient for generating all possible sentences from an input specification.
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/cs/0209002v1
A Chart-Parsing Algorithm for Efficient Semantic Analysis
In some contexts, well-formed natural language cannot be expected as input to information or communication systems. In these contexts, the use of grammar-independent input (sequences of uninflected semantic units, e.g. language-independent icons) can be an answer to the users' needs. A semantic analysis can be performed, based on lexical semantic knowledge: it is equivalent to a dependency analysis with no syntactic or morphological clues. However, this requires that an intelligent system be able to interpret this input with reasonable accuracy and in reasonable time. Here we propose a method allowing a purely semantic-based analysis of sequences of semantic units. It uses an algorithm inspired by the idea of ``chart parsing'' known in Natural Language Processing, which stores intermediate parsing results in order to bring the calculation time down. In comparison with declarative logic programming, where the calculation time, left to a Prolog engine, is hyperexponential, this method brings the calculation time down to polynomial time, where the order depends on the valency of the predicates.
[ "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 4, 68 ]
SCOPUS_ID:85115716676
A Chatbot Solution for Self-Reading Energy Consumption via Chatting Applications
To mitigate financial loss and follow the recommended sanitary measures during the COVID-19 pandemic, self-reading, a method in which consumers read and report their own energy consumption, has been presented as an efficient alternative for power companies. In this context, this work presents a solution for self-reading via a chatbot in chatting applications. The solution is under development as part of a research and development (R&D) project. It is integrated with an image processing method that automatically reads the energy consumption and recognizes the identification code of a meter for validation purposes, and all processes utilize cognitive services from the IBM Watson platform to recognize intentions in the dialog with consumers. The dataset used to validate the proposed self-reading method contains examples of analog and digital meters used by the Equatorial Energy group. Preliminary results showed accuracies of 77.20% and 84.30%, respectively, for the recognition of complete reading sequences and identification codes on digital meters, and accuracies of 89% and 95.20% on analog meters. Considering both meter types, the method obtains a per-digit accuracy of 97%. The proposed method was also evaluated on the public UFPR-AMR dataset and achieves a result comparable to the state of the art.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85123303359
A Chatbot as a Support System for Educational Institutions
The constant advance of technology in recent years has led several educational institutions to implement various technological tools to improve their educational services; however, many of these programs have failed to meet the needs they were intended to address. For this reason, the use of a chatbot emerges as a solution: it enables interactive communication, performs a series of tasks in a novel and attractive way, and finds information efficiently. This article therefore aims to provide a chatbot that serves as a support system in educational centers, giving answers in a short time and remaining accessible at any time. The methodology was based on a review of articles published within the last 5 years on the identification of technologies in the architecture of a chatbot that serves as a support system in educational centers. Of the 70 articles collected and analyzed, 22 provided the information necessary for the research topic. The results agree on using a web platform for the user interface component, the Dialogflow platform for the dialog management component, and Firebase for the database component when developing the architecture of a chatbot.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85087638546
A Chatbot based on Deep Neural Network and Public Cloud Services with TJBot Interface
This paper describes an adaptation of the IBM TJBot as a chatbot interface. TJBot is an open-source project designed to make artificial intelligence services available in a user-friendly way; it was primarily developed to be used with IBM Watson services. The adaptation was done in three steps. In the first step, a deep neural network (DNN) based chat was designed, and three DNNs differing in their training sets were built for the experiments. The second step joined the DNN-based chat with the IBM Watson Speech-To-Text and Text-To-Speech services. IBM Watson provides these services for a limited number of languages, which does not include Slovak; Google cloud services fill this gap quite well, which led to the replacement of the IBM Watson services by the Google services. As a result, the chatbot is able to communicate in multiple languages, including Slovak: any non-English conversation is translated to English and vice versa by the Google translate service. The modified chatbot was tested in chats with randomly selected users.
[ "Machine Translation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents", "Multilinguality" ]
[ 51, 11, 47, 38, 0 ]
SCOPUS_ID:85115441331
A Chatbot for Recipe Recommendation and Preference Modeling
This paper describes the main steps and challenges in building a chatbot for a nutritional recommendation system addressed to the elderly population. We identified four main components: Natural Language Understanding (NLU), Dialogue Management, Preference Modeling, and Ingredient Matching and Extraction. To address the specific challenges of a chatbot for this domain, we tested transformer-based models in the development of both the NLU component and the Dialogue Management component. Moreover, we explored word embeddings and nutritional knowledge bases combined with sentiment analysis for user preference modeling. The sentiment analysis algorithms used to model food preferences were shown to correctly match the real feelings of the users. Each of these components was evaluated individually using appropriate metrics. Moreover, the developed chatbot was successfully tested by users, and their opinions were recorded by means of usability and user experience questionnaires. The results of the usability tests show that the components were well integrated, and the scores obtained were higher than the benchmark values for both the System Usability and the User Experience Questionnaires.
[ "Semantic Text Processing", "Sentiment Analysis", "Natural Language Interfaces", "Knowledge Representation", "Dialogue Systems & Conversational Agents" ]
[ 72, 78, 11, 18, 38 ]
SCOPUS_ID:85122673642
A Chatbot to Support Basic Students Questions
Chatbots are tools that use artificial intelligence to simulate a human conversation. They can be used for different applications, such as providing customer service within an e-commerce site or answering FAQs (Frequently Asked Questions). This work proposes the development of a chatbot to help students of a Brazilian public university search for information related to the university's administrative processes and general questions about their courses. The developed system is able to classify the intention of a question with high accuracy and provide answers across a wide range of topics.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
SCOPUS_ID:85116858854
A Chatbot to promote Students Mental Health through Emotion Recognition
The objective of this paper is to develop a chatbot that promotes students' mental health through emotion recognition. Nowadays, students face many mental health issues due to various reasons, such as pandemic lockdowns, peer pressure, social media bullying, academic stress, loneliness, and sexual harassment, which prevent them from progressing well in life, both emotionally and academically. Many students are also unable to receive proper guidance from experienced and knowledgeable people to solve their personal issues. Therefore, a human-friendly chatbot that can help students get the right guidance for their issues at the right time is much needed; such chatbots can play a vital role in reducing the number of suicides caused by depression and stress. The chatbot, named Maxx, helps students solve or prevent mental health issues in their day-to-day lives. Maxx converses with the student, understands his or her present emotional state or mental health issue (if any), identifies the cause of that emotion or issue, and provides the right guidance based on the reason identified. Maxx uses technologies such as DialogFlow for Natural Language Processing (NLP), Flutter for app development, and Google Cloud Platform (GCP) for data storage and security.
[ "Natural Language Interfaces", "Ethical NLP", "Sentiment Analysis", "Responsible & Trustworthy NLP", "Emotion Analysis", "Reasoning", "Dialogue Systems & Conversational Agents" ]
[ 11, 17, 78, 4, 61, 8, 38 ]
SCOPUS_ID:85119006966
A Chatterbot Based on Genetic Algorithm: Preliminary Results
Chatterbots are programs that simulate an intelligent conversation with people. They are commonly used for customer service, product suggestions, e-commerce, travel and vacations, queries, and complaints. Although some works have presented valuable studies using technologies including evolutionary computing, artificial intelligence, machine learning, and natural language processing, creating chatterbots with a low rate of grammatical errors and good user satisfaction is still a challenging task. Therefore, this work introduces a preliminary study for the development of a GA-based chatterbot that generates intelligent dialogues with a low rate of grammatical errors and a strong sense of responsiveness, thereby boosting the satisfaction of the individuals who interact with it. Preliminary results show that the proposed GA-based chatterbot yields 69% “Good” responses for typical conversations regarding orders and receipts in a cafeteria.
[ "Natural Language Interfaces", "Programming Languages in NLP", "Multimodality", "Dialogue Systems & Conversational Agents" ]
[ 11, 55, 74, 38 ]
SCOPUS_ID:85063628888
A Chi-Square Statistics Based Feature Selection Method in Text Classification
Text classification refers to the process of automatically determining text categories based on text content in a given classification system. It mainly includes several steps, such as word segmentation, feature selection, weight calculation and classification performance evaluation. Among them, feature selection is a key step that strongly affects the classification accuracy: it indicates the relevance of text contents and helps classify the text better. Text classification is a very important module in text processing and is widely applied in areas like spam filtering, news classification, sentiment classification, and part-of-speech tagging. This paper proposes a method for extracting feature words based on chi-square statistics. Because feature words that appear together or separately may differ in importance across situations, we classify texts using both single words and word pairs as features at the same time. Based on our method, we performed experiments using the classical Naive Bayes and Support Vector Machine classification algorithms. The efficiency of our method was demonstrated by the comparison and analysis of the experimental results.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
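The chi-square feature selection pipeline described above maps naturally onto scikit-learn. The sketch below is a toy version under assumed data: whitespace stands in for a Chinese word segmenter, and the paper's "single word and double words" features correspond to the unigram + bigram range.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy pre-segmented corpus with two invented categories.
docs = ["engine oil leak repair", "vaccine trial shows promise",
        "brake pads worn out", "new drug lowers fever"]
labels = [0, 1, 0, 1]                      # 0 = autos, 1 = medicine

pipe = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # single-word and two-word features
    SelectKBest(chi2, k=8),                # keep the 8 highest-chi2 features
    MultinomialNB(),
)
pipe.fit(docs, labels)
print(pipe.predict(["drug trial fever"]))
```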
SCOPUS_ID:85055536286
A Childhood Disease Database Based on Word Segmentation Technology: Research and Practice
Most Chinese electronic medical records (EMRs), text mining over which enables the compilation of specific disease databases, are stored as unstructured text, and the natural language of the medical field is distinct from general Chinese. Word segmentation of EMRs, pattern matching, and feature mining methods were used to identify disease characteristics in case histories, and a full-scale, comprehensive medical database was compiled. After constructing a semantic analysis model based on clinical natural language processing, this study performs pattern matching and disease feature mining, completing the mining and analysis of diseases and realizing a semantic parsing and analysis system for diseases.
[ "Semantic Parsing", "Semantic Text Processing", "Syntactic Text Processing", "Text Segmentation" ]
[ 40, 72, 15, 21 ]
SCOPUS_ID:85147881687
A Chinese BERT-Based Dual-Channel Named Entity Recognition Method for Solid Rocket Engines
For Chinese data on solid rocket engines, traditional named entity recognition cannot learn both character features and contextual sequence-related information from the input text, and there is a lack of research on the advantages of dual-channel networks. To address this problem, this paper proposes a BERT-based dual-channel named entity recognition model for solid rocket engines. This model uses a BERT pre-trained language model to encode individual characters, obtaining a vector representation corresponding to each character. The dual-channel network consists of a CNN and a BiLSTM, using the convolutional layer for feature extraction and the BiLSTM layer to extract sequential and sequence-related information from the text. The experimental results show that the proposed model achieves good results in the named entity recognition task on the solid rocket engine dataset: the accuracy, recall and F1-score were 85.40%, 87.70% and 86.53%, respectively, all higher than the results of the comparison models.
[ "Language Models", "Named Entity Recognition", "Semantic Text Processing", "Information Extraction & Text Mining" ]
[ 52, 34, 72, 3 ]
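A rough sketch of the dual-channel architecture described above, assuming the publicly available bert-base-chinese checkpoint (the paper's exact configuration is not specified here): BERT encodes characters, a CNN channel and a BiLSTM channel process the encodings in parallel, and their concatenation is projected to tag scores. A CRF output layer, as in the paper, is replaced by a plain linear layer for brevity.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class DualChannelNER(nn.Module):
    """BERT character encodings fed to two parallel channels (CNN for
    local features, BiLSTM for sequence context); their outputs are
    concatenated and projected to per-character tag scores."""
    def __init__(self, num_tags, hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        d = self.bert.config.hidden_size
        self.cnn = nn.Conv1d(d, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(d, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.out = nn.Linear(hidden * 2, num_tags)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        c = torch.relu(self.cnn(h.transpose(1, 2))).transpose(1, 2)
        s, _ = self.lstm(h)
        return self.out(torch.cat([c, s], dim=-1))   # (batch, seq, num_tags)

tok = BertTokenizerFast.from_pretrained("bert-base-chinese")
batch = tok(["固体火箭发动机点火成功"], return_tensors="pt")
model = DualChannelNER(num_tags=7)
print(model(batch["input_ids"], batch["attention_mask"]).shape)
```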
SCOPUS_ID:85142282434
A Chinese Business License Text Detection Algorithm Based On Multi-Scale Features
In practice, text detection is needed for document image recognition, where the images contain long text, large text, and dense areas of small text. The Connectionist Text Proposal Network (CTPN) is a classical model for text detection, but it struggles to detect dense areas of small text. To overcome this challenge, a text detection model based on CTPN is proposed in this paper. The proposed model includes the following components: a residual network (ResNet50) and a Feature Pyramid Network (FPN) extract feature layers with both high-level semantic information and shallow detail information; a Bi-directional Long Short-Term Memory (BiLSTM) network augments the representation of context information in the multi-scale feature layers; text boxes are predicted on each scale's feature layer, effectively detecting text areas at various scales; and the ground-truth bounding box of each text box is matched to the most appropriate anchors using a centralized approach, with the bounding box of each text line obtained by a post-processing method for text line construction. Our experiments focus on text detection for Chinese business licenses. The experimental results show that the proposed model is more effective than the CTPN, generating a higher F-score while using only one third of the training data required by the CTPN. Furthermore, the proposed model works well for images with long text, large text and dense small text areas simultaneously, for which the CTPN fails.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
SCOPUS_ID:85103837068
A Chinese Character-Level and Word-Level Complementary Text Classification Method
Text classification is a basic but important task in many natural language processing applications. The mainstream classification methods now mostly use deep learning technology, which shows good accuracy and stability in English text classification. Different from English text, Chinese text classification involves choosing the granularity of feature description in text decomposition. The two commonly used feature granularities are word-level features and character-level features: the former incurs semantic loss in the process of word segmentation, while the latter cannot exploit the higher-level semantic features in pre-trained word vectors. We propose a method to fuse word-level and character-level information with an attention mechanism. We train CWC-Net, which combines the features so that the embedded information of characters and words is complementary, improving the network's semantic understanding of Chinese text and reducing semantic loss. Comparative experiments on four Chinese text datasets, involving topic classification and emotion analysis, show that our model is more accurate than traditional models that rely only on word-level or character-level features, verifying the effectiveness of fusing word-level and character-level features for improving model capability.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
http://arxiv.org/abs/2004.08825v1
A Chinese Corpus for Fine-grained Entity Typing
Fine-grained entity typing is a challenging task with wide applications. However, most existing datasets for this task are in English. In this paper, we introduce a corpus for Chinese fine-grained entity typing that contains 4,800 mentions manually labeled through crowdsourcing. Each mention is annotated with free-form entity types. To make our dataset useful in more possible scenarios, we also categorize all the fine-grained types into 10 general types. Finally, we conduct experiments with some neural models whose structures are typical in fine-grained entity typing and show how well they perform on our dataset. We also show the possibility of improving Chinese fine-grained entity typing through cross-lingual transfer learning.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:85107683710
A Chinese Dataset for Exploring Financial Numeral Attributes
Existing datasets are mostly composed of official documents, statements, news articles, and so forth; so far, little attention has been paid to the numerals in financial social media comments. Therefore, this paper presents CFinNumAttr, a Chinese financial numeral attribute dataset built by annotating stock reviews and comments collected from a social networking platform. We conduct several experiments on the CFinNumAttr dataset with state-of-the-art methods to discover the importance of financial numeral attributes. The experimental results show that the numeral attributes in social reviews and comments contain rich semantic information, and that the numeral clue extraction and attribute classification tasks can greatly improve financial text understanding.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85128661832
A Chinese Document-level Event Extraction Method based on ERNIE
Event extraction is a key technology in natural language processing and has strong application prospects for extracting knowledge from unstructured data. Current event extraction techniques are mainly sentence-based, which suffers from incomplete coverage of extracted events and ambiguity in event classification. In this paper, we propose the ERNIE-BiGRU-CRF model for document-level event extraction, which encodes paragraph text with semantic enhancement from the ERNIE pre-trained language model, feeds it into a bidirectional gated recurrent network for feature extraction, and finally obtains the annotated sequence through a CRF layer. We perform event extraction on the Baidu financial-domain document-level event extraction dataset using a sequence-labeling trigger extraction model and a sequence-labeling event element extraction model; the results show that the final model's F1 value is 5.45 percentage points higher than the baseline model.
[ "Event Extraction", "Information Extraction & Text Mining" ]
[ 31, 3 ]
SCOPUS_ID:85048253826
A Chinese Drama Rehearsal System Based on Phonetic Matching and Augmented Reality
Antiphonal singing is a common and important form of expression in Chinese drama. It requires two actors to perform in turns, with their actions, expressions and gestures corresponding to one another. In this paper, the authors present a Chinese drama rehearsal system based on phonetic matching and augmented reality to help traditional Chinese drama actors and enthusiasts rehearse and experience drama in a more immersive and realistic way.
[ "Phonetics", "Syntactic Text Processing" ]
[ 64, 15 ]
SCOPUS_ID:85073262697
A Chinese Event Relation Extraction Model Based on BERT
Relation extraction and event extraction are important subtasks of information extraction. Accurately identifying the relations and events in Chinese text can help improve the performance of tasks such as graph construction and risk conduction. Different from traditional methods, this paper proposes a joint model to extract entities and events from text, and introduces the concept of an event relation to discover the potential relations between the arguments of events and the relations between two or more events. We conduct experiments on a financial dataset; the results show that the new model is 4%-6% higher in F1 score than existing event extraction models, and that the proposed event relation is meaningful and practical.
[ "Language Models", "Semantic Text Processing", "Relation Extraction", "Event Extraction", "Information Extraction & Text Mining" ]
[ 52, 72, 75, 31, 3 ]
SCOPUS_ID:85149933667
A Chinese Few-Shot Text Classification Method Utilizing Improved Prompt Learning and Unlabeled Data
Insufficient labeled samples and low generalization performance have become significant natural language processing problems, drawing considerable attention to few-shot text classification (FSTC). Advances in prompt learning have significantly improved the performance of FSTC. However, prompt learning methods typically require a pre-trained language model and the tokens of its vocabulary list for model training, and different language models have different token coding structures, making it impractical to build effective Chinese prompt learning methods from previous approaches designed for English. In addition, most current prompt learning methods do not make use of existing unlabeled data, often leading to unsatisfactory performance in real-world applications. To address these limitations, we propose a novel Chinese FSTC method called CIPLUD that combines an improved prompt learning method with existing unlabeled data for the classification of small amounts of Chinese text. We used a Chinese pre-trained language model to build two modules: the Multiple Masks Optimization-based Prompt Learning (MMOPL) module and the One-Class Support Vector Machine-based Unlabeled Data Leveraging (OCSVM-UDL) module. The former generates prompt prefixes with multiple masks and constructs suitable prompt templates for Chinese labels, optimizing the random token combination problem during label prediction with joint probability and length constraints. The latter establishes an OCSVM model in the trained text vector space and selects reasonable pseudo-label data for each category from a large amount of unlabeled data. After selecting the pseudo-label data, we mix them with the previous few-shot annotated data to obtain new training data, and then repeat the steps of the two modules as an iterative semi-supervised optimization process. The experimental results on four Chinese FSTC benchmark datasets demonstrate that our proposed solution outperformed other prompt learning methods, with an average accuracy improvement of 2.3%.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 24, 3, 36, 4 ]
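The OCSVM-UDL module described above fits a one-class SVM per category and harvests pseudo-labels from unlabeled data. A minimal scikit-learn sketch of that selection step follows, with random vectors standing in for the sentence embeddings a Chinese pre-trained language model would produce.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-in sentence embeddings: a few labeled examples per class plus a
# pool of unlabeled ones.
labeled = {0: rng.normal(0, 1, (8, 32)), 1: rng.normal(5, 1, (8, 32))}
unlabeled = rng.normal(0, 3, (200, 32))

pseudo = {}
for cls, X in labeled.items():
    ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X)
    inlier = ocsvm.predict(unlabeled) == 1     # +1 = inside the class region
    pseudo[cls] = unlabeled[inlier]
    print(f"class {cls}: {inlier.sum()} pseudo-labeled examples selected")
# `pseudo` would be mixed with the few-shot data for the next training round.
```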
SCOPUS_ID:85129007993
A Chinese Grammatical Error Correction Method Based on Iterative Training and Sequence Tagging
Chinese grammatical error correction (GEC) is under continuous development and improvement, and this is a challenging task in the field of natural language processing due to the high complexity and flexibility of Chinese grammar. Nowadays, the iterative sequence tagging approach is widely applied to Chinese GEC tasks because it has a faster inference speed than sequence generation approaches. However, the training phase of the iterative sequence tagging approach uses sentences for only one round, while the inference phase is an iterative process. This makes the model focus only on the current sentence’s current error correction results rather than considering the results after multiple rounds of correction. In order to address this problem of mismatch between the training and inference processes, we propose a Chinese GEC method based on iterative training and sequence tagging (CGEC-IT). First, in the iterative training phase, we dynamically generate the target tags for each round by using the final target sentences and the input sentences of the current round. The final loss is the average of each round’s loss. Next, by adding conditional random fields for sequence labeling, we ensure that the model pays more attention to the overall labeling results. In addition, we use the focal loss to solve the problem of category imbalance caused by the fact that most words in text error correction do not need error correction. Furthermore, the experiments on NLPCC 2018 Task 2 show that our method outperforms prior work by up to 2% on the F0.5 score, which verifies the efficiency of iterative training on the Chinese GEC model.
[ "Text Error Correction", "Tagging", "Syntactic Text Processing" ]
[ 26, 63, 15 ]
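The record above uses the focal loss to counter the imbalance between "keep" and "edit" tags. Below is a standard multi-class focal loss in PyTorch (the common formulation, not necessarily the paper's exact variant).

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Standard multi-class focal loss: down-weights easy
    (high-confidence) examples so rare 'edit' tags are not swamped
    by the dominant 'keep' tag."""
    logp = F.log_softmax(logits, dim=-1)
    logp_t = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_t = logp_t.exp()
    return (-((1 - p_t) ** gamma) * logp_t).mean()

logits = torch.randn(16, 5, requires_grad=True)   # 16 tokens, 5 edit tags
targets = torch.randint(0, 5, (16,))
focal_loss(logits, targets).backward()
```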
SCOPUS_ID:85119196306
A Chinese Knowledge Base Question Answering System
This paper presents HAO-Interaction, a question answering system that exploits knowledge-based question answering (KBQA) technology to quickly obtain an answer path for the input question, and then a creative text generation mechanism to produce the final answer text. The system also displays the answer path on the user interface to facilitate user understanding. Unlike other KBQA systems, HAO-Interaction allows users to incorporate an organizational graph database while accessing all system functionalities. In addition, the answer generation solution implemented in the system does not require any training data. HAO-Interaction maintains low response latency while ensuring high user satisfaction; its effectiveness has been verified by analyzing thousands of user reviews collected by the system.
[ "Semantic Text Processing", "Question Answering", "Natural Language Interfaces", "Knowledge Representation", "Text Generation" ]
[ 72, 27, 11, 18, 47 ]
SCOPUS_ID:85145776691
A Chinese L2 Learners' Dynamic Vocabulary Growth Network Model Based on Graph Deep Learning
This paper regards the vocabulary networks mastered by Chinese second-language (L2) learners at different levels as subgraphs of a Chinese word co-occurrence network, embeds these subgraphs with graph deep learning techniques such as the TSPMiner model and the Order Embedding algorithm, and builds a dynamic vocabulary growth network model for the learners. The model can predict nodes and the links between them and simulate the growth process of a learner's vocabulary, so as to offer guidance to learners; with this model, a smooth, efficient, and dynamically adaptive vocabulary learning process becomes possible on learning platforms. The model was verified through a questionnaire and subsequent data analysis: participating Chinese teachers showed great consistency with the model's recommended word learning sequences.
[ "Multimodality", "Structured Data in NLP", "Semantic Text Processing", "Representation Learning" ]
[ 74, 50, 72, 12 ]
SCOPUS_ID:85113578050
A Chinese Machine Reading Comprehension Dataset Automatic Generated Based on Knowledge Graph
Machine reading comprehension (MRC) is a typical natural language processing (NLP) task that has developed rapidly in the last few years. Various reading comprehension datasets have been built to support MRC studies. However, large-scale, high-quality datasets are rare due to the high complexity and huge labor cost of making such a dataset. Besides, most reading comprehension datasets are in English, and Chinese datasets are insufficient. In this paper, we propose an automatic method for MRC dataset generation and build the largest Chinese medical reading comprehension dataset to date, named CMedRC. Our dataset contains 17k questions generated by our automatic method, plus some seed questions. We obtain the corresponding answers from a medical knowledge graph and manually check all of them. Finally, we test BiLSTM and BERT-based pre-trained language models (PLMs) on our dataset and propose a baseline for subsequent studies. Results show that the automatic MRC dataset generation method is valuable for future model improvements.
[ "Semantic Text Processing", "Structured Data in NLP", "Knowledge Representation", "Multimodality", "Text Generation", "Reasoning", "Machine Reading Comprehension" ]
[ 72, 50, 18, 74, 47, 8, 37 ]
SCOPUS_ID:85136155456
A Chinese Multi-modal Relation Extraction Model for Internet Security of Finance
As the base of the whole economy and society, internet security of finance directly affects the overall development of the country. With the development of the Internet, it is essential to effectively extract the relations between financial entities from internet financial intelligence and build a financial security knowledge graph, which lays the foundation for monitoring internet security of finance. For relation extraction from Chinese internet financial intelligence, existing models are all based on single-modal text semantics and ignore Chinese pictographic semantics, even though the shape and structure of Chinese characters contain useful semantics. In addition, the pictographic semantic fusion methods for Chinese text also need improvement for better performance. To address these shortcomings, we propose a Chinese Multi-modal Relation Extraction model (CMRE), which improves relation extraction on Chinese internet financial intelligence. In CMRE, we extract pictographic semantics based on Chinese character shape and structure, and we design a novel multi-modal semantic fusion module based on an improved Transformer to effectively fuse the textual and pictographic semantics. We design experiments on a Chinese literature dataset (Sanwen) to test the relation extraction capability of CMRE, and we employ CMRE to extract relations between financial entities on an internet financial intelligence dataset (FinRE) for comparison with other baseline models.
[ "Multimodality", "Relation Extraction", "Information Extraction & Text Mining" ]
[ 74, 75, 3 ]
http://arxiv.org/abs/2111.06086v1
A Chinese Multi-type Complex Questions Answering Dataset over Wikidata
Complex Knowledge Base Question Answering is a popular area of research in the past decade. Recent public datasets have led to encouraging results in this field, but are mostly limited to English and only involve a small number of question types and relations, hindering research in more realistic settings and in languages other than English. In addition, few state-of-the-art KBQA models are trained on Wikidata, one of the most popular real-world knowledge bases. We propose CLC-QuAD, the first large scale complex Chinese semantic parsing dataset over Wikidata to address these challenges. Together with the dataset, we present a text-to-SPARQL baseline model, which can effectively answer multi-type complex questions, such as factual questions, dual intent questions, boolean questions, and counting questions, with Wikidata as the background knowledge. We finally analyze the performance of SOTA KBQA models on this dataset and identify the challenges facing Chinese KBQA.
[ "Natural Language Interfaces", "Knowledge Representation", "Semantic Text Processing", "Question Answering" ]
[ 11, 18, 72, 27 ]
SCOPUS_ID:85149946762
A Chinese Named Entity Recognition Method Based on ERNIE-BiLSTM-CRF for Food Safety Domain
Food safety is closely related to human health. Named entity recognition technology can be used to extract named entities related to food safety, and building a regulatory knowledge graph for the food safety domain can help the relevant authorities regulate food safety issues and mitigate the hazards caused by food safety problems. However, there is no publicly available named entity recognition dataset in the food safety domain. On the other hand, the non-standardized Chinese short texts generated from user comments on the web contain rich implicit information that can help identify named entities in specific domains (e.g., the food safety domain) where the corpus is scarce. Therefore, in this paper, named entities related to food safety are extracted from these unstandardized web texts. Existing Chinese named entity recognition methods, however, are mainly designed for standardized texts, while the unstandardized texts pose the following problems: (1) their corpus size is small; (2) they contain various new words, wrong words and noise; and (3) they do not follow strict syntactic rules. These problems make the recognition of Chinese named entities in online texts more challenging. This paper therefore proposes the ERNIE-Adv-BiLSTM-Att-CRF model to improve the recognition of food safety domain entities in unstandardized texts. Specifically, adversarial training is added to model training as a regularization method to alleviate the influence of noise, while self-attention is added to the BiLSTM-CRF model to capture features that significantly impact entity classification and to improve its accuracy. We conduct experiments on the public Weibo NER dataset and the self-built food-domain dataset Food. The experimental results show that our model achieves a state-of-the-art F1 value of 72.64% on the public dataset and a good F1 value of 69.68% on the self-built dataset, verifying the validity and reasonableness of our model. In addition, the paper further analyses the impact of various components and settings on the model. The study has practical implications for the field of food safety.
[ "Language Models", "Semantic Text Processing", "Information Retrieval", "Robustness in NLP", "Named Entity Recognition", "Responsible & Trustworthy NLP", "Text Classification", "Information Extraction & Text Mining" ]
[ 52, 72, 24, 58, 34, 4, 36, 3 ]
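The abstract above adds adversarial training as a regularization against noisy text. A common way to do this for NER models is FGM-style perturbation of the embedding weights; the sketch below shows that generic technique under the assumption that it matches the paper's setup, with the model, loss and optimizer left as placeholders.

```python
import torch

class FGM:
    """Fast Gradient Method on the embedding weights: after the normal
    backward pass, nudge the embeddings along the loss gradient, compute
    an adversarial loss, then restore the originals."""
    def __init__(self, model, emb_name="embedding", epsilon=1.0):
        self.model, self.emb_name, self.epsilon = model, emb_name, epsilon
        self.backup = {}

    def attack(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = p.grad.norm()
                if norm != 0:
                    p.data.add_(self.epsilon * p.grad / norm)

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# Usage inside a training step (model, loss_fn, batch, optimizer assumed):
#   loss = loss_fn(model(batch)); loss.backward()
#   fgm.attack(); loss_fn(model(batch)).backward(); fgm.restore()
#   optimizer.step(); optimizer.zero_grad()
```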
SCOPUS_ID:85137913982
A Chinese Named Entity Recognition Method Based on Fusion of Character and Word Features
Named entity recognition is an upstream task in natural language processing and the basis for many downstream tasks. To enhance the effectiveness of named entity recognition, a character vector with semantic features is obtained using BERT as the underlying encoder, after which contextual features of the text sequence are obtained via a BiLSTM. In the Chinese named entity recognition task, words and characters are equally important to the text, so a FLAT network is embedded to fuse word and character features. The network uses a clever relative position encoding to preserve the location information of the input tokens, and generates potential word and character vectors that are added to the model for training. Experimental results show an increase in F1 values of 1.86% and 1.47% on the Resume and self-annotated news corpus datasets, respectively.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
SCOPUS_ID:57849140825
A Chinese QA system based on lexical chain
Most Chinese question answering systems do not make full use of semantic information. This paper introduces the idea of building a QA system based on lexical chains: the approach builds lexical chains between the question and the candidate answers, and then selects the most exact answer by comparing the chain connectivity. In the process, HowNet is used to analyze the semantic roles of the key components in the question sentence. This method exploits semantic information at a deeper level and improves the performance of the QA system.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:85051560802
A Chinese Question Answering System in Medical Domain
Question answering systems offer a friendly interface for human beings to interact with massive amounts of online information. It is time-consuming for users to retrieve useful medical information with search engines from among massive numbers of websites. We therefore build a Chinese Question Answering System in the Medical Domain (CQASMD) to provide useful medical information for users. A large medical knowledge base with more than 300 thousand medical terms and their descriptions is first constructed to store the structured medical knowledge data, and it is classified with the FastText model. Furthermore, a Word2Vec model is adopted to capture the semantic meanings of words, and the questions and answers are processed with sentence embeddings to capture semantic context information. Users' questions are first classified and processed into sentence vectors, and a matching algorithm is adopted to match the most similar stored question. After querying the constructed medical knowledge base, the corresponding answers are returned to users. The architecture and flowchart of CQASMD are presented; the system will play an important role in self-diagnosis and treatment of disease.
[ "Semantic Text Processing", "Question Answering", "Representation Learning", "Natural Language Interfaces", "Knowledge Representation" ]
[ 72, 27, 12, 11, 18 ]
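The matching step described above, averaging word vectors into sentence vectors and retrieving the most similar stored question, can be sketched with gensim as follows. The toy English question bank and the tiny Word2Vec model are stand-ins for the system's segmented Chinese medical data; the FastText classification stage is omitted.

```python
import numpy as np
from gensim.models import Word2Vec

# Toy pre-segmented question bank paired with canned answers.
qa_pairs = [
    (["what", "causes", "flu"], "Influenza viruses cause the flu."),
    (["how", "to", "treat", "fever"], "Rest, fluids, and antipyretics."),
    (["symptoms", "of", "diabetes"], "Thirst, fatigue, frequent urination."),
]
w2v = Word2Vec([q for q, _ in qa_pairs], vector_size=50, min_count=1, seed=1)

def sent_vec(tokens):
    """Average the word vectors to get a crude sentence embedding."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

def answer(query_tokens):
    # Cosine similarity between the query vector and each stored question.
    q = sent_vec(query_tokens)
    sims = [np.dot(q, sent_vec(t)) /
            (np.linalg.norm(q) * np.linalg.norm(sent_vec(t)) + 1e-9)
            for t, _ in qa_pairs]
    return qa_pairs[int(np.argmax(sims))][1]

print(answer(["treat", "fever"]))
```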
SCOPUS_ID:78651444185
A Chinese Question Answering system using web service on restricted domain
Nowadays, the knowledge base of a question answering system usually stores question-answer pairs or text content extracted from web pages. In recent decades, in order to improve work efficiency, many enterprises have made large investments in ERP (Enterprise Resource Planning) and MIS (Management Information System) systems, which have accumulated a great deal of useful business knowledge, the majority of it in relational databases. Therefore, how to intelligently retrieve knowledge from relational databases with natural language is of practical significance. To deal with this situation, drawing on the idea of Box Computing, this paper presents a question answering technology based on web services. Considering the features of web services, the paper matches question patterns to web services; based on the similarity between user questions and question patterns, it is able to provide an accurate data service for answering. The paper covers two types of web services, web APIs (Application Programming Interfaces) and web APPs (Applications), and reusing developed web services shortens development time.
[ "Natural Language Interfaces", "Question Answering" ]
[ 11, 27 ]
SCOPUS_ID:85146964458
A Chinese Short Text Classification Method Based on TF-IDF and Gradient Boosting Decision Tree
To solve the problems of feature extraction and semantic sparsity in Chinese short text classification, this paper uses the TF-IDF algorithm to extract category keywords and uses the set of category keywords as the feature set for short text classification. Next, the weight of each keyword feature is obtained by calculating the maximum similarity between the category keywords and each word in the text. Based on the weighted keyword feature vector set, the short text is represented as a vector. Finally, we use the GBDT algorithm to train the classifier for short text classification and carry out experiments to verify the effectiveness of this method in improving classification performance.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
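A compressed scikit-learn sketch of the TF-IDF plus GBDT pipeline described above, skipping the paper's intermediate keyword-similarity weighting step; the four pre-segmented toy texts and their two categories are invented for illustration.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Pre-segmented short texts (whitespace stands in for a Chinese word
# segmenter); labels are two toy categories.
texts = ["股票 上涨 市场 利好", "球队 夺冠 比赛 精彩",
         "基金 收益 投资 风险", "球员 转会 联赛 进球"]
labels = [0, 1, 0, 1]                        # 0 = finance, 1 = sports

pipe = make_pipeline(
    TfidfVectorizer(token_pattern=r"[^ ]+"),  # keep each segmented word
    GradientBoostingClassifier(n_estimators=50),
)
pipe.fit(texts, labels)
print(pipe.predict(["市场 投资 利好"]))
```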
SCOPUS_ID:85124418307
A Chinese Speech Recognition System Based on Fusion Network Structure
The purpose of an automatic speech recognition system is to convert speech into recognizable text. Chinese is a language in which words with the same pronunciation but different written forms have different meanings, and there has been relatively little research on Chinese speech recognition. Therefore, we propose a Chinese automatic speech recognition system based on the fusion network RRAINet and an end-to-end acoustic model + language model structure. We treat the speech signal as a visual problem and use Mel spectrograms and the SpecAugment method to preprocess the data. The model is trained with the Connectionist Temporal Classification (CTC) criterion and decoded with a greedy algorithm, converting speech signals into Chinese characters. Experiments show that the model's phoneme error rates are 12.56% and 12.38% on the dev and test sets of Free ST (ST-CMDS-20170001_1-OS), and its word error rates are 18.79% and 18.74%, about 5% lower than the baseline VGG-CTC model.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 47, 10, 74 ]
http://arxiv.org/abs/2210.13823v1
A Chinese Spelling Check Framework Based on Reverse Contrastive Learning
Chinese spelling check is a task to detect and correct spelling mistakes in Chinese text. Existing research aims to enhance the text representation and use multi-source information to improve the detection and correction capabilities of models, but pays little attention to improving their ability to distinguish between confusable words. Contrastive learning, whose aim is to minimize the distance in representation space between similar sample pairs, has recently become a dominant technique in natural language processing. Inspired by contrastive learning, we present a novel framework for Chinese spelling checking, which consists of three modules: language representation, spelling check and reverse contrastive learning. Specifically, we propose a reverse contrastive learning strategy that explicitly forces the model to minimize the agreement between similar examples, namely, phonetically and visually confusable characters. Experimental results show that our framework is model-agnostic and can be combined with existing Chinese spelling check models to yield state-of-the-art performance.
[ "Language Models", "Text Error Correction", "Semantic Text Processing", "Syntactic Text Processing", "Representation Learning" ]
[ 52, 26, 72, 15, 12 ]
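The reverse contrastive strategy above pushes apart the representations of confusable characters instead of pulling similar pairs together. A minimal PyTorch sketch of one plausible hinge-style formulation follows; the loss form and margin are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def reverse_contrastive_loss(anchor, confusable, margin=1.0):
    """Push the representation of each character AWAY from its confusable
    counterpart (the reverse of ordinary contrastive learning, which pulls
    similar pairs together). Hinge form: penalize pairs whose cosine
    similarity exceeds (1 - margin)."""
    sim = F.cosine_similarity(anchor, confusable, dim=-1)
    return F.relu(sim - (1.0 - margin)).mean()

# Toy usage: embeddings of characters and their phonetically/visually
# confusable partners (e.g., 晴/睛), here just random stand-ins.
h_char = torch.randn(8, 128, requires_grad=True)
h_confusable = torch.randn(8, 128)
loss = reverse_contrastive_loss(h_char, h_confusable, margin=0.5)
loss.backward()
```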
SCOPUS_ID:85063913820
A Chinese Text Classifier Based on Strong Class Feature Selection and Bayesian Algorithm
To improve the efficiency and accuracy of Chinese text categorization, this paper presents a new Chinese text classifier in which the word segmentation system is based on forward scanning of the corpus, and the word frequency statistics method differs between the training stage and the test stage. A novel feature selection method is then proposed based on word frequency, mutual information and classificatory information, and finally a fast Bayesian classifier is designed. Experiments prove that this classifier is simple and effective.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
SCOPUS_ID:85124961195
A Chinese Text Classification Method Based on BERT and Convolutional Neural Network
Text classification has always been an important task in natural language processing, and in recent years it has been widely used in emotion analysis, intention recognition, intelligent question answering and other fields. In this paper, word vectors are generated with the BERT model and fused with text features extracted by a Convolutional Neural Network (CNN) to obtain more effective features for Chinese text classification. Experiments conducted on a public dataset, compared with recent text classification models, show that the BERT+CNN model can accurately classify Chinese text, effectively prevent overfitting, and generalize well.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
http://arxiv.org/abs/2010.14784v2
A Chinese Text Classification Method With Low Hardware Requirement Based on Improved Model Concatenation
In order to improve the accuracy of Chinese text classification models with low hardware requirements, an improved concatenation-based model is designed in this paper: a concatenation of 5 different sub-models, including TextCNN, LSTM, and Bi-LSTM. Compared with the existing ensemble learning method, this model's accuracy on a text classification task is 2% higher, while its hardware requirements are much lower than those of BERT-based models.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 24, 3 ]
SCOPUS_ID:85097582751
A Chinese Textual Entailment Recognition Method Incorporating Semantic Role and Self-Attention
Recognizing textual entailment aims to infer the logical relationship between two given sentences. In this paper, we integrate the deep semantic information of sentences with the Transformer encoder by constructing an SRL-Attention fusion module, which effectively improves the ability of the self-attention mechanism to capture sentence semantics. Furthermore, to address the small scale and high noise of the dataset, we use a large-scale pre-trained language model to improve recognition performance on the small-scale dataset. Experimental results show that the accuracy of our model reaches 80.28% on the CNLI dataset, which was released as the Chinese textual entailment recognition evaluation corpus at the 17th China National Conference on Computational Linguistics.
[ "Language Models", "Reasoning", "Semantic Text Processing", "Textual Inference" ]
[ 52, 8, 72, 22 ]
SCOPUS_ID:85060846058
A Chinese Word Segmentation Model for Energy Literature Based on Conditional Random Fields
Chinese word segmentation is one of the foundational and core tasks of Chinese natural language processing. Although some achievements have been made for Chinese word segmentation systems in general domains, they fall far short of practical requirements in the energy domain. We focus on a Chinese word segmentation standard and segmentation technology for the energy domain, covering 13,283 basic energy terms. This paper first proposes a conditional random field segmentation model; then the character features, character type features and conditional entropy features which influence segmentation performance are chosen and described. Finally, the proposed model is tested on a dataset of State Grid energy literature and compared with current word segmentation tools, such as the Harbin Institute of Technology's Language Technology Platform and Tsinghua's THU Lexical Analyzer for Chinese. The best F1 value of the proposed model is 0.8319.
[ "Text Segmentation", "Syntactic Text Processing" ]
[ 21, 15 ]
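A CRF segmentation model with character and character-type features, as described above, can be sketched with the sklearn-crfsuite package. The single BMES-tagged toy sentence and the feature template are illustrative; the paper's conditional-entropy features are omitted.

```python
import sklearn_crfsuite

def char_features(sent, i):
    """Character and character-bigram features around position i, loosely
    following the character/character-type features the paper describes."""
    feats = {"c0": sent[i],
             "c-1": sent[i - 1] if i > 0 else "<s>",
             "c+1": sent[i + 1] if i < len(sent) - 1 else "</s>"}
    feats["c-1c0"] = feats["c-1"] + feats["c0"]
    feats["is_digit"] = sent[i].isdigit()     # crude character-type feature
    return feats

# Toy BMES-tagged training sentence: 特高压 / 输电 ("UHV" / "transmission").
sents = [list("特高压输电")]
tags = [["B", "M", "E", "B", "E"]]
X = [[char_features(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X))
```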
SCOPUS_ID:85096580641
A Chinese corpus for fine-grained entity typing
Fine-grained entity typing is a challenging task with wide applications. However, most existing datasets for this task are in English. In this paper, we introduce a corpus for Chinese fine-grained entity typing that contains 4,800 mentions manually labeled through crowdsourcing. Each mention is annotated with free-form entity types. To make our dataset useful in more possible scenarios, we also categorize all the fine-grained types into 10 general types. Finally, we conduct experiments with some neural models whose structures are typical in fine-grained entity typing and show how well they perform on our dataset. We also show the possibility of improving Chinese fine-grained entity typing through cross-lingual transfer learning.
[ "Multilinguality", "Named Entity Recognition", "Cross-Lingual Transfer", "Information Extraction & Text Mining" ]
[ 0, 34, 19, 3 ]
SCOPUS_ID:85097393336
A Chinese doctoral student's experience of L2 English academic writing in Australia: Negotiating practices and identities
Increasing numbers of international students are choosing to study abroad at English speaking universities, where their L2 English academic writing is assessed alongside local students’ L1 writing. Research has investigated the difficulties these student writers encounter, the deficits in their L2 academic writing products, and the strategies used to address these deficits. However, there has been little investigation into the experiences of L2 student writers of academic texts in particular international contexts, and the identity work associated with negotiating linguistic, cultural and institutional dimensions of these experiences. This narrative-based, qualitative case study addresses this gap in the literature by investigating the L2 English academic writing experiences of one Chinese student in an Australian university over the four years of her PhD candidature. Drawing on Bakhtinian dialogic theories of language and identity, in association with Gee's theorising of identity, the authors show how the challenges experienced by the Chinese student both constrained and facilitated her writing practices. An ongoing process of negotiating tensions in her writer identity mediated these practices, surfacing differing dimensions of the student's bilingual, transcultural, researcher-writer identity. The study offers recommendations for how students, academic supervisors and institutions can support this nuanced process of negotiation.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85104657381
A Chinese document parsing and code recognition system using Regex and SVM
We design a document parsing and code recognition system to support the informatization and automation of office work. Its key function is to parse document content, identify the equipment codes within it, and thereby extract valuable information from the document. This allows staff to use documents more effectively and to retrieve required documents faster. The proposed system is based on regular expressions (regex) and a support vector machine (SVM). Specifically, regex is used to extract suspected codes from documents, and a binary classifier trained with an SVM then judges whether each suspected code is a real code. Results show that the system can be effectively applied in engineering practice with high availability. Most importantly, it can improve the work efficiency of staff.
[ "Programming Languages in NLP", "Text Classification", "Multimodality", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 55, 36, 74, 24, 3 ]
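The two-stage pipeline this abstract describes (regex proposal, then SVM verification) is easy to sketch. The code pattern, the toy candidates, and the character n-gram features below are invented for illustration; the paper does not specify its actual code format or features.

```python
# Sketch of the two-stage pipeline: a regex proposes candidate equipment
# codes, and a binary SVM accepts or rejects each candidate.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

CODE_RE = re.compile(r"[A-Z]{2,4}-\d{3,6}")  # hypothetical code shape

# Toy labeled candidates: 1 = real equipment code, 0 = false positive.
candidates = ["TRF-10023", "ABC-123", "FIG-2021", "SWG-55012"]
labels = [1, 0, 0, 1]

# Character n-grams as simple features for the binary classifier.
clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                    SVC(kernel="linear"))
clf.fit(candidates, labels)

document = "Replace unit TRF-10023 as shown in FIG-2021."
for match in CODE_RE.findall(document):   # stage 1: regex proposal
    if clf.predict([match])[0] == 1:      # stage 2: SVM verification
        print("equipment code:", match)
```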
SCOPUS_ID:84957030825
A Chinese event argument inference approach based on entity semantics and event relevance
Currently, Chinese argument extraction relies mainly on feature engineering, which cannot exploit the internal relationships between event mentions in the same document. To address this issue, this paper learns from the training set the probability that an entity fills a specific role, together with the relationships among events, and uses Markov Logic Networks to infer additional arguments. Experimental results on the ACE 2005 Chinese corpus show that our approach significantly outperforms the baseline, with improvements of 8.6% and 8.2% in argument identification and role determination, respectively.
[ "Argument Mining", "Reasoning" ]
[ 60, 8 ]
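A full Markov Logic Network requires a dedicated inference engine, so the sketch below only shows the first ingredient the abstract mentions: estimating, from training data, the probability that an entity of a given type fills a given event role. The (entity type, event type, role) triples are invented; such probabilities would serve as soft evidence for the MLN.

```python
# Sketch of role-prior estimation from a training set: P(role | entity
# type, event type) by simple counting. MLN inference itself is omitted.
from collections import Counter, defaultdict

# Toy training triples: (entity_type, event_type, role).
train = [("PER", "Attack", "Attacker"), ("PER", "Attack", "Target"),
         ("PER", "Attack", "Attacker"), ("ORG", "Attack", "Target")]

counts = defaultdict(Counter)
for ent, ev, role in train:
    counts[(ent, ev)][role] += 1

def role_prob(entity_type, event_type, role):
    c = counts[(entity_type, event_type)]
    total = sum(c.values())
    return c[role] / total if total else 0.0

# P(role=Attacker | PER entity in an Attack event) from the toy counts:
print(role_prob("PER", "Attack", "Attacker"))  # 2/3
```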
SCOPUS_ID:84951736203
A Chinese framework of semantic taxonomy and description: Preliminary experimental evaluation using web information extraction
The Chinese Framework of Semantic Taxonomy and Description (FSTD) is a linguistic resource that stores lexical and predicate-argument semantics about events or states in Chinese text, developed with the application of knowledge acquisition from Chinese text in mind. In this paper we build a web information extraction system, called NkiExtractor, to evaluate FSTD experimentally. We use two metrics: grammar coverage measures whether there is a semantic category of FSTD that corresponds to an event description in text, and extraction precision measures whether the correct predicate-argument structure can be extracted from text. Experimental results show that FSTD is a fairly comprehensive and effective resource for knowledge acquisition. We also discuss future work for expanding FSTD and improving the extraction precision of NkiExtractor.
[ "Information Extraction & Text Mining" ]
[ 3 ]
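The two evaluation metrics defined in this abstract reduce to simple ratios, sketched below. The category names, event descriptions, and extraction results are invented placeholders; they only illustrate how grammar coverage and extraction precision would be computed.

```python
# Sketch of the two metrics: grammar coverage (is there an FSTD category
# for each event description?) and extraction precision (how many extracted
# predicate-argument structures are correct?). All data here is invented.
fstd_categories = {"purchase", "employment", "movement"}  # toy FSTD slice

event_descriptions = ["purchase", "movement", "merger", "purchase"]
covered = sum(1 for e in event_descriptions if e in fstd_categories)
coverage = covered / len(event_descriptions)

extracted = [("buy", ("company A", "company B")),
             ("hire", ("company A", "Ms. Li")),
             ("move", ("truck", "warehouse"))]
gold = {("buy", ("company A", "company B")),
        ("move", ("truck", "warehouse"))}
precision = sum(1 for x in extracted if x in gold) / len(extracted)

print(f"grammar coverage = {coverage:.2f}, "
      f"extraction precision = {precision:.2f}")
```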
SCOPUS_ID:85039444569
A Chinese handwriting word segmentation method via faster R-CNN
Segmenting a Chinese handwritten document image into individual words is an essential step for character recognition. Conventional methods typically rely on hand-crafted feature extraction followed by a classification algorithm. However, because handwritten word features vary greatly from writer to writer, this is a difficult task. To avoid this problem, we use an object detection method, Faster R-CNN: words are treated as objects to detect, so no manual feature engineering is required. Experimental results on the HIT-MW database show that our method achieves favorable performance.
[ "Text Segmentation", "Syntactic Text Processing", "Information Extraction & Text Mining" ]
[ 21, 15, 3 ]
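The "words as detected objects" idea maps directly onto a stock detection model. Below is a minimal sketch using torchvision's Faster R-CNN with two classes (background vs. "word"); a random tensor stands in for a document page, since the HIT-MW data is not reproduced here, and no training is performed.

```python
# Sketch of word detection with a standard torchvision Faster R-CNN,
# with the box head replaced for a single "word" class.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# weights=None / weights_backbone=None: random init, no download needed.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
page = [torch.rand(3, 400, 600)]        # one fake document page
with torch.no_grad():
    detections = model(page)[0]         # dict of boxes, labels, scores
print(detections["boxes"].shape)        # each box is one candidate word
```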
SCOPUS_ID:82155192186
A Chinese intelligent question answering system based on domain ontology and sentence templates
With the development of network technology, distance education is becoming increasingly important, and an intelligent question answering (QA) system is an important part of a remote learning system. Many mature English question answering systems already exist; owing to the complexity of the Chinese language and technical limitations in processing it, however, no mature Chinese QA system has yet been deployed. This paper designs and implements a Chinese intelligent question answering system based on a domain ontology and sentence templates, tailored to the characteristics of Chinese. The system packages a question template, a semantic template, and an answer model into a sentence template; it adopts automatic word segmentation and sentence-template matching to understand users' questions, and it uses a domain ontology as a knowledge base that provides domain vocabulary for question analysis and knowledge retrieval for answer generation. The system answers questions posed in restricted natural language and returns a precise, concise answer to the user.
[ "Semantic Text Processing", "Question Answering", "Syntactic Text Processing", "Knowledge Representation", "Natural Language Interfaces", "Text Segmentation" ]
[ 72, 27, 15, 18, 11, 21 ]
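To make sentence-template matching concrete, here is a minimal sketch: a question pattern binds a slot, and the bound term is looked up in a toy "ontology" to fill the answer pattern. The template, the Chinese question pattern, and the single ontology fact are all invented; the paper's actual templates, segmentation step, and ontology are far richer.

```python
# Sketch of sentence-template matching for template-based QA: each template
# bundles a question pattern with an answer pattern; a toy dict plays the
# role of the domain ontology.
import re

templates = [
    # "What is <term>?" -> "<term>: <definition>"
    {"question": re.compile(r"什么是(?P<term>.+?)？?$"),
     "answer": "{term}：{definition}"},
]

ontology = {"本体": "对领域概念及其关系的形式化描述"}  # toy knowledge base

def answer(question):
    for t in templates:
        m = t["question"].match(question)
        if m and m.group("term") in ontology:
            term = m.group("term")
            return t["answer"].format(term=term,
                                      definition=ontology[term])
    return "未能匹配问句模板"  # "no sentence template matched"

print(answer("什么是本体？"))  # "What is an ontology?"
```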
SCOPUS_ID:33749408738
A Chinese mobile phone input method based on the dynamic and self-study language model
This paper briefly introduces a Chinese digital input method named CKCDIM (CKC Digital Input Method), applies it to the Symbian OS as an example, and proposes an input method framework that adopts a client/server architecture for handheld computers. To improve the performance of CKCDIM, the paper puts forward a dynamic and self-study language model built on a general language model and a user language model, and proposes two indexes, the average number of pressed keys (ANPK) and the hit rate of first characters (HRFC), to measure the performance of the input method. It also brings forward a modified Church-Gale smoothing method to reduce the size of the general language model to fit the constraints of a mobile phone. Finally, experiments show that the dynamic and self-study language model is stable and improves the performance of CKCDIM.
[ "Language Models", "Semantic Text Processing" ]
[ 52, 72 ]
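The two indexes proposed in this abstract are simple ratios over logged typing sessions, sketched below. The session tuples are invented for illustration; the paper derives these values from real input-method logs.

```python
# Sketch of the two performance indexes: average number of pressed keys
# (ANPK) per committed character, and hit rate of first characters (HRFC),
# i.e. how often the intended character is the top-ranked candidate.
# Each toy session: (keys_pressed, chars_committed, first_candidate_hits).
sessions = [(12, 6, 5), (20, 8, 6), (9, 5, 5)]

total_keys = sum(k for k, _, _ in sessions)
total_chars = sum(c for _, c, _ in sessions)
total_hits = sum(h for _, _, h in sessions)

anpk = total_keys / total_chars  # lower is better: fewer keys per character
hrfc = total_hits / total_chars  # higher is better: first candidate correct
print(f"ANPK = {anpk:.2f}, HRFC = {hrfc:.2%}")
```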
SCOPUS_ID:85105392230
A Chinese named entity recognition method combined with relative position information
Named entity recognition is one of the important tasks of natural language processing, helping people select entity information from massive text data. Researchers have tried different methods, including machine learning and deep learning, and have made great progress on English datasets. However, Chinese named entity recognition remains difficult because of the complexity of the semantic environment and the variety of word-formation patterns. To address this problem, this paper proposes a multi-head attention mechanism with relative position information: differences in the relative position encoding between characters at different positions are used to extract features from the full sentence, compensating for the Lattice-LSTM model's weak attention to full-sentence information and its consequent difficulty with complex sentences. Experiments on the Chinese Weibo, Resume, OntoNotes 4.0, and MSRA datasets evaluate the model with respect to sentence complexity and data volume, and recognition performance improves on all four. Finally, a better combination of hyperparameters further improves results on the four datasets.
[ "Named Entity Recognition", "Information Extraction & Text Mining" ]
[ 34, 3 ]
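The core mechanism this abstract adds on top of Lattice-LSTM is relative-position-aware attention. The sketch below implements a simplified stand-in: multi-head self-attention with a learned bias per clipped relative distance. This is not the paper's exact formulation; dimensions and the clipping distance are assumptions.

```python
# Sketch of multi-head self-attention with a learned relative-position
# bias, letting each character attend over the full sentence.
import torch
import torch.nn as nn

class RelPosSelfAttention(nn.Module):
    def __init__(self, dim=64, heads=4, max_dist=8):
        super().__init__()
        self.heads, self.dk = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.max_dist = max_dist
        # One learned scalar bias per head per clipped relative distance.
        self.rel_bias = nn.Embedding(2 * max_dist + 1, heads)

    def forward(self, x):                       # x: (batch, seq, dim)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.heads, self.dk).transpose(1, 2)
        k = k.view(b, n, self.heads, self.dk).transpose(1, 2)
        v = v.view(b, n, self.heads, self.dk).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.dk ** 0.5
        # Relative distances i-j, clipped to [-max_dist, max_dist].
        pos = torch.arange(n)
        rel = (pos[:, None] - pos[None, :]).clamp(-self.max_dist,
                                                  self.max_dist)
        bias = self.rel_bias(rel + self.max_dist).permute(2, 0, 1)  # (h,n,n)
        attn = (scores + bias).softmax(dim=-1)
        return (attn @ v).transpose(1, 2).reshape(b, n, -1)

out = RelPosSelfAttention()(torch.randn(2, 10, 64))
print(out.shape)  # per-character features carrying full-sentence context
```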
SCOPUS_ID:79959606210
A Chinese online document clustering algorithm based on hidden sentiment vector
The core idea of clustering is the division of data into groups of similar objects. Some clustering algorithms, such as k-means and UPGMA, have proven effective for document clustering. However, few document clustering algorithms pay attention to hidden sentiment, a very important feature of documents. This paper presents an improved k-means algorithm (HSK-Means) based on a hidden sentiment vector for Chinese document clustering. Chinese dependency grammar rules are used to extract document features and evaluate hidden sentiment. A new method for selecting initial cluster centroids, based on a similarity-ranking mechanism, is also proposed to improve clustering accuracy. Experimental results on real online document sets show that HSK-Means outperforms classic document clustering algorithms.
[ "Information Extraction & Text Mining", "Sentiment Analysis", "Text Clustering" ]
[ 3, 78, 29 ]
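A rough sketch of the HSK-Means idea follows: append a sentiment score to each document vector, then run k-means with initial centroids chosen by a similarity rank. The vectors, the sentiment scores, and the specific rank heuristic (picking the least-similar documents as seeds) are invented stand-ins; the paper derives sentiment from dependency grammar rules and defines its own rank mechanism.

```python
# Sketch of sentiment-augmented k-means with rank-based centroid seeding.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

doc_vecs = np.random.rand(20, 10)              # toy tf-idf-like vectors
sentiment = np.random.uniform(-1, 1, (20, 1))  # toy hidden sentiment scores
X = np.hstack([doc_vecs, sentiment])           # sentiment-augmented features

# Rank documents by mean similarity to all others; the least-similar
# documents are spread out, so they make reasonable initial centroids.
k = 3
mean_sim = cosine_similarity(X).mean(axis=1)
seeds = X[np.argsort(mean_sim)[:k]]

km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)
print(km.labels_)
```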
SCOPUS_ID:34047242845
A Chinese person name recognition system based on agent-based HMM position tagging model
This paper proposes an Agent-based HMM Position Tagging (AHPT) model for Chinese person name recognition. The model unifies unknown word identification and person name recognition as a single tagging task. Based on context patterns, a special name table, and position-dependent information, the model integrates both internal information and surrounding contextual clues for named entity recognition (NER) under the HMM. Experiments show a recall of 95.11% and a precision of 94.02%. The results indicate that a multi-agent framework can substantially improve HMM performance on person name recognition.
[ "Tagging", "Syntactic Text Processing" ]
[ 63, 15 ]
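The HMM position-tagging core of this record can be illustrated with a small Viterbi decoder over name-position states. The states, the hand-set probabilities, and the fake emission scores below are all invented for illustration; the paper's multi-agent integration of context patterns and name tables is not reproduced.

```python
# Sketch of HMM position tagging for person names: characters are emitted
# by position states (B/I/E of a name vs. O), and Viterbi recovers the
# best state sequence in log space.
import numpy as np

states = ["B", "I", "E", "O"]                 # name-position tags
start = np.log([0.2, 0.01, 0.01, 0.78])
trans = np.log([[0.01, 0.50, 0.48, 0.01],     # B -> mostly I or E
                [0.01, 0.30, 0.68, 0.01],     # I -> I or E
                [0.20, 0.01, 0.01, 0.78],     # E -> O or a new B
                [0.20, 0.01, 0.01, 0.78]])    # O -> O or a new B

def viterbi(emissions):                # emissions: (T, n_states) log-probs
    T, n = emissions.shape
    dp = start + emissions[0]
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + trans     # score of every prev->curr transition
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + emissions[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):      # follow backpointers to the start
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# Fake per-character emission scores for a 4-character sentence.
emis = np.log(np.array([[0.60, 0.10, 0.10, 0.20],
                        [0.10, 0.50, 0.30, 0.10],
                        [0.10, 0.20, 0.60, 0.10],
                        [0.05, 0.05, 0.10, 0.80]]))
print(viterbi(emis))  # e.g. ['B', 'I', 'E', 'O']
```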