id | title | abstract | classification_labels | numerical_classification_labels
---|---|---|---|---
SCOPUS_ID:84894432914
|
A Bracketed Grid account of the Italian endecasillabo meter
|
This paper offers a generative account of the Italian endecasillabo meter, based on a revision of the Bracketed Grid Theory put forth in Fabb and Halle (2008). The aim is to define a single set of rules which are valid for each possible endecasillabo line, regardless of author and epoch. To do so, the paper analyzes a chronologically wide-ranging selection of examples. After a critical overview of previous analyses of this meter, including Piera's proposal within Fabb and Halle's (2008), a new analysis is developed that accommodates both the whole set of Italian data and the theoretical problems affecting the Bracketed Grid Theory in its application to the endecasillabo. This new analysis proposes that (i) Bracketed Gridline 1 must be built by a ternary grouping rule; and (ii) designated limited bits of prosodic information must be visible to the metrical rules. (i) is formally implemented in a new algorithm, which simplifies the scansion rules by reducing the possible underlying patterns of the endecasillabo to two. (ii) solves the cases of ambiguous pattern attribution. The combination of (i-ii), finally, explains why non-canonical forms are possible but a minority in the corpus. This brings out a number of consequences for Fabb and Halle (2008), and for the generative theory of poetic meter in general. © 2014 Elsevier B.V.
|
[
"Linguistics & Cognitive NLP",
"Phonology",
"Syntactic Text Processing",
"Linguistic Theories"
] |
[
48,
6,
15,
57
] |
http://arxiv.org/abs/2301.02809v1
|
A Brain-inspired Memory Transformation based Differentiable Neural Computer for Reasoning-based Question Answering
|
Reasoning and question answering, a basic cognitive function for humans, is nevertheless a great challenge for current artificial intelligence. Although the Differentiable Neural Computer (DNC) model could solve such problems to a certain extent, its development is still limited by high algorithmic complexity, slow convergence speed, and poor test robustness. Inspired by the learning and memory mechanism of the brain, this paper proposes a Memory Transformation based Differentiable Neural Computer (MT-DNC) model. MT-DNC incorporates working memory and long-term memory into DNC, and realizes the autonomous transformation of acquired experience between working memory and long-term memory, thereby helping to effectively extract acquired knowledge to improve reasoning ability. Experimental results on the bAbI question answering task demonstrate that our proposed method achieves superior performance and faster convergence compared to other existing DNN and DNC models. Ablation studies also indicate that the memory transformation from working memory to long-term memory plays an essential role in improving the robustness and stability of reasoning. This work explores how brain-inspired memory transformation can be integrated and applied to complex intelligent dialogue and reasoning systems.
|
[
"Language Models",
"Semantic Text Processing",
"Question Answering",
"Robustness in NLP",
"Natural Language Interfaces",
"Reasoning",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
27,
58,
11,
8,
4
] |
SCOPUS_ID:85098657036
|
A Brazilian Portuguese Moral Foundations Dictionary for Fake News classification
|
The Moral Foundations Theory defines foundations to explain human moral reasoning and its role in the decision-making process, including how information is perceived and interpreted. A problem related to aspects of moral values that is currently gaining notoriety is the spread of false information known as "Fake News". Natural language processing techniques are being used in social sciences studies to deal with the Fake News detection task. This work introduces and details the development of MFD-BR, a Brazilian Portuguese lexicon based on the Moral Foundations Theory, designed to measure Moral Sentiment in texts. It also contributes to Fake News detection strategies by assessing the difference in moral dimensions to distinguish between texts from reliable sources and texts originating from low-reputation sources (as considered by fact-checking agencies).
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Linguistics & Cognitive NLP",
"Linguistic Theories",
"Ethical NLP",
"Sentiment Analysis",
"Reasoning",
"Fact & Claim Verification",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
3,
24,
48,
57,
17,
78,
8,
46,
36,
4
] |
SCOPUS_ID:85083336402
|
A Brazilian Portuguese Real-Time Voice Recognition to deal with sensitive data
|
Speech recognition is generally performed centrally by a few cloud providers. Such an approach is not suitable for financial and medical institutions because sensitive data cannot be provided openly to third parties. This work proposes a simple chatbot for Brazilian Portuguese real-time speech recognition, with a well-defined purpose, a specific vocabulary, and a secure approach that does not expose sensitive data, for use on mobile devices. The proposed system achieved a word error rate of 2.38% on a speech recognition task.
|
[
"Text Generation",
"Speech Recognition",
"Speech & Audio in NLP",
"Multimodality"
] |
[
47,
10,
70,
74
] |
SCOPUS_ID:85149280294
|
A Brief History of Deep Learning-Based Text Generation
|
A dynamic domain in Artificial Intelligence research, Natural Language Generation centres on the automatic generation of realistic text. To help navigate this vast and swiftly developing body of work, the study provides a concise overview of noteworthy stages in the history of text generation. To this end, the paper describes deep learning models for a broad audience, focusing on traditional, convolutional, recurrent and generative adversarial networks, as well as the transformer architecture.
|
[
"Language Models",
"Semantic Text Processing",
"Text Generation"
] |
[
52,
72,
47
] |
SCOPUS_ID:85128677313
|
A Brief Introduction of the Text Classification Methods
|
Text classification is the task of categorizing text data into different groups. Text classification has been applied in various domains, including news filtering and organization, document organization and retrieval, opinion mining, and email classification and spam filtering. However, in the era of Big Data, in which text data is generated every second, it is almost impossible to classify text manually. This paper briefly discusses common text classification methods with greater emphasis on DL classification models, CNN and RNN. CNN is good at extracting word patterns from the text; classification is done by detecting particular words' presence and location. RNN converts the text into word embedding vectors and processes them as sequential data, making a classification through a series of computations based on the word sequence. This paper also analyzes several research papers which further explore the nature of CNN and RNN classification models. Finally, it addresses the problems that current text classification models face.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
https://aclanthology.org//W18-6601/
|
A Brief Introduction to Natural Language Generation within Computational Creativity
|
[
"Text Generation"
] |
[
47
] |
|
SCOPUS_ID:85146488980
|
A Brief Overview of Universal Sentence Representation Methods: A Linguistic View
|
How to transfer the semantic information in a sentence to a computable numerical embedding form is a fundamental problem in natural language processing. An informative universal sentence embedding can greatly promote subsequent natural language processing tasks. However, unlike universal word embeddings, a widely accepted general-purpose sentence embedding technique has not been developed. This survey summarizes the current universal sentence-embedding methods, categorizes them into four groups from a linguistic view, and ultimately analyzes their reported performance. Sentence embeddings trained from words in a bottom-up manner are observed to have different, nearly opposite, performance patterns in downstream tasks compared to those trained from logical relationships between sentences. By comparing differences of training schemes in and between groups, we analyze possible essential reasons for different performance patterns. We additionally collect incentive strategies handling sentences from other models and propose potentially inspiring future research directions.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
http://arxiv.org/abs/1406.2903v2
|
A Brief State of the Art for Ontology Authoring
|
One of the main challenges for building the Semantic Web is ontology authoring. Controlled Natural Languages (CNLs) offer a user-friendly means for non-experts to author ontologies. This paper provides a snapshot of the state of the art for the core CNLs for ontology authoring and reviews their respective evaluations.
|
[
"Knowledge Representation",
"Semantic Text Processing"
] |
[
18,
72
] |
SCOPUS_ID:85104983499
|
A Brief Study on Approaches for Extractive Summarization
|
Driven by cutting-edge technological advancements, data is to this century what oil was to the past one. Today, our world is shaped by the gathering and spread of enormous amounts of information. With such a large amount of data circulating in the digital space, there is a need to develop Artificial Intelligence algorithms that can automatically shorten longer texts and deliver accurate summaries that fluently convey the intended messages. This paper puts forth a brief survey of five major extractive methods of text summarization: TF-IDF, clustering, neural network, fuzzy logic, and graph-based approaches. A comparison of the five approaches is also presented.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
https://aclanthology.org//2021.sigdial-1.49/
|
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
|
Neural models trained for next-utterance generation in a dialogue task learn to mimic the n-gram sequences in the training set with training objectives like negative log-likelihood (NLL) or cross-entropy. Such commonly used training objectives do not foster generating alternate responses to a context. However, the effects of minimizing an alternate training objective that encourages a model to generate an alternate response and score it on semantic similarity have not been well studied. We hypothesize that a language generation model can improve its diversity by learning to generate alternate text during training and minimizing a semantic loss as an auxiliary objective. We explore this idea on two data sets of different sizes on the task of next-utterance generation in goal-oriented dialogues. We make two observations: (1) minimizing a semantic objective improved diversity in responses on the smaller data set (Frames) but was only as good as minimizing the NLL on the larger data set (MultiWoZ); (2) large language model embeddings can be more useful as a semantic loss objective than as initialization for token embeddings.
|
[
"Natural Language Interfaces",
"Semantic Text Processing",
"Dialogue Systems & Conversational Agents",
"Representation Learning"
] |
[
11,
72,
38,
12
] |
http://arxiv.org/abs/2009.12721v1
|
A Brief Survey and Comparative Study of Recent Development of Pronoun Coreference Resolution
|
Pronoun Coreference Resolution (PCR) is the task of resolving pronominal expressions to all mentions they refer to. Compared with the general coreference resolution task, the main challenge of PCR is the coreference relation prediction rather than the mention detection. As one important natural language understanding (NLU) component, pronoun resolution is crucial for many downstream tasks and still challenging for existing models, which motivates us to survey existing approaches and think about how to do better. In this survey, we first introduce representative datasets and models for the ordinary pronoun coreference resolution task. Then we focus on recent progress on hard pronoun coreference resolution problems (e.g., Winograd Schema Challenge) to analyze how well current models can understand commonsense. We conduct extensive experiments to show that even though current models are achieving good performance on the standard evaluation set, they are still not ready to be used in real applications (e.g., all SOTA models struggle on correctly resolving pronouns to infrequent objects). All experiment codes are available at https://github.com/HKUST-KnowComp/PCR.
|
[
"Coreference Resolution",
"Information Extraction & Text Mining"
] |
[
13,
3
] |
SCOPUS_ID:85146120381
|
A Brief Survey for Fake News Detection via Deep Learning Models
|
Social networks have become indispensable in people's lives. Despite the conveniences brought by social networks, the fake news on those online platforms also induces negative impacts and losses for users. With the development of deep learning technologies, detecting fake news in a data-driven manner has attracted great attention. In this paper, we give a brief survey that discusses the recent development of deep learning methods in fake news detection. Compared with previous surveys, we focus on the different data structures instead of the models they used to process those data. We give a new taxonomy that categorizes current models into the following three parts: models that formulate fake news detection as text classification, models that formulate fake news detection as graph classification, and models that formulate fake news detection as hybrid classification. The advantages and drawbacks of those methods are also discussed.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Ethical NLP",
"Reasoning",
"Fact & Claim Verification",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
3,
24,
17,
8,
46,
36,
4
] |
http://arxiv.org/abs/1905.05395v3
|
A Brief Survey of Multilingual Neural Machine Translation
|
We present a survey on multilingual neural machine translation (MNMT), which has gained a lot of traction in recent years. MNMT has been useful in improving translation quality as a result of knowledge transfer. MNMT is more promising and interesting than its statistical machine translation counterpart because end-to-end modeling and distributed representations open new avenues. Many approaches have been proposed in order to exploit multilingual parallel corpora for improving translation quality. However, the lack of a comprehensive survey makes it difficult to determine which approaches are promising and hence deserve further exploration. In this paper, we present an in-depth survey of existing literature on MNMT. We categorize various approaches based on the resource scenarios as well as underlying modeling principles. We hope this paper will serve as a starting point for researchers and engineers interested in MNMT.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85067831993
|
A Brief Survey of Relation Extraction Based on Distant Supervision
|
As a core task and important part of Information Extraction, Entity Relation Extraction can realize the identification of the semantic relation between entity pairs, and it plays an important role in the semantic understanding of sentences and the construction of entity knowledge bases. It has the potential of employing distant supervision methods, end-to-end models and other deep learning models with the creation of large datasets. In this review, we compare the contributions and defects of the various models that have been used for the task, to help guide the path ahead.
|
[
"Relation Extraction",
"Information Extraction & Text Mining"
] |
[
75,
3
] |
http://arxiv.org/abs/1707.02919v2
|
A Brief Survey of Text Mining: Classification, Clustering and Extraction Techniques
|
The amount of text that is generated every day is increasing dramatically. This tremendous volume of mostly unstructured text cannot be simply processed and perceived by computers. Therefore, efficient and effective techniques and algorithms are required to discover useful patterns. Text mining is the task of extracting meaningful information from text, and it has gained significant attention in recent years. In this paper, we describe several of the most fundamental text mining tasks and techniques, including text pre-processing, classification and clustering. Additionally, we briefly explain text mining in the biomedical and health care domains.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Text Clustering"
] |
[
3,
24,
36,
29
] |
SCOPUS_ID:85144366055
|
A Brief Survey of Textual Dialogue Corpora
|
Several dialogue corpora are currently available for research purposes, but they still fall short for the growing interest in the development of dialogue systems with their own specific requirements. In order to help those requiring such a corpus, this paper surveys a range of available options, in terms of aspects like speakers, size, languages, collection, annotations, and domains. Some trends are identified and possible approaches for the creation of new corpora are also discussed.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85104615509
|
A Brief Survey of Word Embedding and Its Recent Development
|
Learning effective representations of text words has long been a research focus in natural language processing and other machine learning tasks. In many early tasks, a text word is often represented by a one-hot vector in a discrete manner. Such a solution is not only restricted by the curse of dimensionality, but also unable to reflect the semantic relationships between words. Recent developments focus on the learning of low-dimensional and continuous vector representations of text words, known as word embeddings, which can be easily applied to downstream tasks such as machine translation, natural language inference, semantic analysis and so on. In this paper, we introduce the development of word embedding, describe the representative methods, and report its recent research trends. This paper can provide a quick guide for understanding the principle of word embedding and its development.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85123781470
|
A Brief Survey of text-driven image generation and manipulation
|
Text-to-image synthesis refers to the translation of textual descriptions to images with similar semantic meaning. This process mainly relies on the correlation of text and image to find the best alignment between them. So, the recent development of generative models, especially GANs, using natural language as a condition paves a new path for research. In this paper, we review the most notable developments in the field of text-to-image synthesis. In contrast to others, we review new methods of GAN-based text-to-image synthesis comparatively, considering two meaningful divisions: generation and manipulation. Additionally, there is a summary table of the characteristics of all these methods.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85002938839
|
A Brief Tutorial on How to Extract Information from User-Generated Content (UGC)
|
In this brief tutorial, we provide an overview of investigating text-based user-generated content for information that is relevant in the corporate context. We structure the overall process along three stages: collection, analysis, and visualization. Corresponding to the stages we outline challenges and basic techniques to extract information of different levels of granularity.
|
[
"Information Extraction & Text Mining"
] |
[
3
] |
https://aclanthology.org//W89-0244/
|
A Broad-Coverage Natural Language Analysis System
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
|
SCOPUS_ID:85104874218
|
A Building Topical 2-Gram Model: Discovering and Visualizing the Topics Using Frequent Pattern Mining
|
The current decade is witnessing a visible escalation in publications in almost all domains. In turn, the Reviewer Assignment Problem (RAP) is attracting the attention of researchers for assigning a set of received papers to the most appropriate experts for fair and accurate reviews. The process of assigning a reviewer to a paper includes building a topic model of the paper as one of the core tasks. In this paper, we present an efficient technique to build a topic model for papers that helps in precisely assigning the papers to the appropriate track, which in turn supports assigning papers to appropriate experts. We believe that for extracting meaningful topics, it is essential to process relevant sections of a paper with appropriate weightage. 145 papers submitted to the international conference Interspeech 2019 are used for testing the proposed work. The perplexity of the LDA model is measured and yields a low perplexity. The experimental results demonstrate that papers are correctly categorized into appropriate tracks. Further, the work can assist conference organizers with analyses such as visualizing the topical distribution of papers within and across the tracks.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85099882847
|
A Burmese Dependency Parsing Method Based on Transfer Learning
|
Dependency parsing is a fundamental task in natural language processing (NLP). Burmese is a low-resource language with a special language structure; therefore, Burmese dependency parsing suffers from an extreme lack of high-quality data and inaccurate semantic representation. We propose a Burmese dependency parsing model based on transfer learning; our method generates partially accurate Burmese dependency parsing data by constructing the relationship of English-Burmese. Burmese is embedded at the syllable and word level to obtain accurate bilingual English-Burmese word vector representations. To verify the effectiveness of our method, during the training process we fuse the dependency parsing data of Burmese and English, which transfers the dependency arcs and POS tagging of English to Burmese. The experimental results show that our proposed method achieves a UAS of 44.10% and a LAS of 30.01% on a Burmese dataset.
|
[
"Language Models",
"Semantic Text Processing",
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
52,
72,
28,
15
] |
SCOPUS_ID:85063463541
|
A Business Card Reader Application for iOS devices based on Tesseract
|
As the accessibility of high-resolution smartphone cameras has increased along with improved computational speed, it is now convenient to build Business Card Readers on mobile phones. The project aims to design and develop a Business Card Reader (BCR) application for iOS devices, using an open-source OCR engine, Tesseract. The system's accuracy was tested and evaluated using a dataset of 55 digital business cards obtained from an online repository. The accuracy of the system was up to 74% in terms of both text recognition and data detection. A comparative analysis was carried out against a commercial business card reader application, and our application performed reasonably well.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85076842892
|
A Business Reputation Methodology Using Social Network Analysis
|
Nowadays, every facet of people’s lifestyle is impacted by the continuous use of technologies and in particular the Internet. Social interactions have been radically changed since new technologies have removed communication barriers. The obstacles of time and space have been overcome, letting people from different places and cultures communicate. People tend to become part of a dense social network, where the distribution of information becomes an almost immediate process. This is the reason why companies, public institutions and business activities have opted for social networks as a communication medium. As a consequence, people seem to incorporate information gained from social networks into their decision-making processes. In this paper we analyze how what is said on a social network could influence people’s decisions. As the social network, we consider the YELP community, a user-generated content platform based on ‘word of mouth’, in which users can share their opinions about news, products, communities, and businesses. We propose a methodology for review analysis with the aim of computing business attractiveness, combining user sentiment and business reputation. Our results show that review analysis should be performed because it may provide useful information to monitor how public opinion of a business changes over time.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85147914362
|
A Business Workflow For Providing Open-Domain Question Answering Reader Systems on The Wikipedia Dataset
|
In a variety of sectors, we observe an emerging need for responding to user questions in a fast and efficient manner. We argue that addressing this need by developing question answering reader system applications will lead to several benefits: a) the density of call centers is reduced, and b) time is saved by getting answers through the application instead of going to the company itself. Examples of these applications, such as search engines, help users find answers to their questions in documents containing important information, such as legal documents. These applications can be applied in digital banking, electronic commerce, and legal documents. In this study, we investigate the design of a business workflow that can provide answers to questions through documents containing important information, such as legal documents. We examine open-domain reader systems and propose a business workflow for them. We implement a prototype application on the dataset to investigate the usability of the proposed business workflow. We discuss the prototype's implementation details and share its evaluation results. The results show that the T5-based model provides better results in open-domain reader systems.
|
[
"Natural Language Interfaces",
"Question Answering"
] |
[
11,
27
] |
http://arxiv.org/abs/1809.08386v1
|
A Byte-sized Approach to Named Entity Recognition
|
In biomedical literature, it is common for entity boundaries to not align with word boundaries. Therefore, effective identification of entity spans requires approaches capable of considering tokens that are smaller than words. We introduce a novel, subword approach for named entity recognition (NER) that uses byte-pair encodings (BPE) in combination with convolutional and recurrent neural networks to produce byte-level tags of entities. We present experimental results on several standard biomedical datasets, namely the BioCreative VI Bio-ID, JNLPBA, and GENETAG datasets. We demonstrate competitive performance while bypassing the specialized domain expertise needed to create biomedical text tokenization rules.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85090388469
|
A C-BiLSTM approach to classify construction accident reports
|
The construction sector is widely recognized as having the most hazardous working environment among the various business sectors, and many research studies have focused on injury prevention strategies for use on construction sites. The risk-based theory emphasizes the analysis of accident causes extracted from accident reports to understand, predict, and prevent the occurrence of construction accidents. The first step in the analysis is to classify the incidents from a massive number of reports into different cause categories, a task which is usually performed on a manual basis by domain experts. The research described in this paper proposes a convolutional bidirectional long short-term memory (C-BiLSTM)-based method to automatically classify construction accident reports. The proposed approach was applied on a dataset of construction accident narratives obtained from the Occupational Safety and Health Administration website, and the results indicate that this model performs better than some of the classic machine learning models commonly used in classification tasks, including support vector machine (SVM), naïve Bayes (NB), and logistic regression (LR). The results of this study can help safety managers to develop risk management strategies.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
http://arxiv.org/abs/1511.08630v2
|
A C-LSTM Neural Network for Text Classification
|
Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes a CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
12,
24,
3
] |
SCOPUS_ID:85078030788
|
A C-LSTM with word embedding model for news text classification
|
Traditional text classification methods are based on statistics and feature selection and do not perform well in processing large-scale corpora. In recent years, with the rapid development of deep learning and artificial neural networks, many scholars have used them to solve text classification problems and achieved good results. Common text classification neural network models include textCNN, LSTM, and C-LSTM. Using a specific model can obtain more accurate features but ignores the context information. This paper proposes a C-LSTM with word embedding model to deal with this problem. Experiments show that the model proposed in this paper has great advantages in Chinese news text classification.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Representation Learning",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
12,
24,
3
] |
SCOPUS_ID:85027437137
|
A C4.5 algorithm for english emotional classification
|
Solutions for sentiment analysis are very important and helpful for many researchers and applications. A new model is proposed in this paper for English document-level sentiment classification. In this research, we propose a new model using the C4.5 decision tree algorithm to classify sentiment (positive, negative, neutral) for English documents. Our English training data set has 140,000 English sentences, including 70,000 positive sentences and 70,000 negative sentences. We use the C4.5 algorithm on the 70,000 English positive sentences to generate a decision tree, from which many association rules of the positive polarity are created. We also use the C4.5 algorithm on the 70,000 English negative sentences to generate a decision tree, from which many association rules of the negative polarity are created. The sentiment of an English document is classified based on the association rules of the positive polarity and the negative polarity. Our English testing data set has 25,000 English documents, including 12,500 positive reviews and 12,500 negative reviews. We have tested our new model on this testing data set and achieved 60.3% accuracy of sentiment classification.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:84901687300
|
A CAPTCHA scheme based on the identification of character locations
|
CAPTCHAs are a standard security mechanism used on many websites to protect online services against abuse by automated programs, or bots. The purpose of a CAPTCHA is to distinguish whether an online transaction is being carried out by a human or a bot. Unfortunately, to date many existing CAPTCHA schemes have been found to be vulnerable to automated attacks. It is widely accepted that state-of-the-art in text-based CAPTCHA design requires that a CAPTCHA be resistant against segmentation. In this paper, we examine CAPTCHA usability issues and current segmentation techniques that have been used to attack various CAPTCHA schemes. We then introduce the design of a new CAPTCHA scheme that was designed based on these usability and segmentation considerations. Our goal was to also design a text-based CAPTCHA scheme that can easily be used on increasingly pervasive touch-screen devices, without the need for keyboard input. This paper also examines the usability and robustness of the proposed CAPTCHA scheme. © 2014 Springer International Publishing.
|
[
"Syntactic Text Processing",
"Text Segmentation",
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
15,
21,
58,
4
] |
SCOPUS_ID:84874277234
|
A CAPTCHA with tips related to alphabets upper or lower case
|
Due to the ubiquity of web browsers and the dramatic development of dynamic web pages, web applications have become the most popular computing model for providing services over the Internet. Unfortunately, this type of system is vulnerable to attacks issued by automated programs. CAPTCHA, a challenge-response test, is the most widely used mechanism to protect web application systems from attacks issued by computer programs. Considering the maintenance cost, most CAPTCHA systems employ the text-based challenge, in which a string is represented as an image. A user is classified as a human by a text-based CAPTCHA if the user can respond to a challenge with the exact string shown in the image. One drawback of a text-based CAPTCHA system is that the string can possibly be recognized by a computer program equipped with an OCR function. Obfuscating text images can strengthen the robustness of CAPTCHAs, but it also degrades the usability of the systems. In this paper, we propose a new text-based CAPTCHA mechanism in which each challenge is associated with a tip providing humans enough information to recognize the alphabets in a highly distorted text image. Therefore, our system provides a balance between the effectiveness and human success rates of text-based CAPTCHAs. © 2012 IEEE.
|
[
"Visual Data in NLP",
"Programming Languages in NLP",
"Robustness in NLP",
"Multimodality",
"Responsible & Trustworthy NLP"
] |
[
20,
55,
58,
74,
4
] |
SCOPUS_ID:85124373539
|
A CASE STUDY ON PICTURE BOOK APPLICATION FOR CHILDREN AS SEMIOTIC TECHNOLOGY IN REPRESENTING ASIAN IDENTITIES
|
Digital book applications have emerged as new formats of picture books, with an integration of digitally mediated resources to construct meaning. These picture book apps are written artefacts that contain cultural messages and values about the world that children live in. Dually, as semiotic artefacts, they consist of meaning potentials through the concept of interactivity along with other technologically created multimodal resources. These resources form the semiotic surfaces of picture book apps and thus the picture book app is classified as Semiotic Technology. Picture book apps as semiotic technology are instrumental in changing the way in which current young readers engage with stories. The multimodal aspect is amplified in this digital text by the concept of interactivity. In recognition of the impact of these semiotic artefacts on users, this case study proposes a model based on a semiotic technology approach (media dimension) and the concept of interactivity, to interpret meaning making in digital picture book apps for children. Through a qualitative approach that has employed purposeful sampling, a digital picture book app, Green Riding Hood, has been selected as a case study for this article to illustrate how Asian identities are represented through interactive features. It is found that, through semiotic meaning-making captured in interactivity defining features, South Asian (Indian) identity markers like ethnicity, religion, age, and gender have been represented as multicultural or diversity awareness conduits to users. Through main values like unity (friendship) and health and fitness (including environmentalism), this digitized children’s literature thus is an ideal mirror, window, and door to society. This approach on digitized picture books is deemed important to be understood and recognized because types of messages from children’s literatures that get across through lenses of children can impact their path of identity realization and understanding in an increasingly digital world.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85134786399
|
A CBR for integrating sentiment and stress analysis for guiding users on social network sites
|
This work presents a Case-Based Reasoning (CBR) module that integrates sentiment and stress analysis on text and keystroke dynamics data with context information of users interacting on Social Network Sites (SNSs). The context information used in this work is the history of positive or negative messages of the user, and the topics being discussed on the SNSs. The CBR module uses this data to generate useful feedback for users, providing them with warnings if it detects potential future negative repercussions caused by the interaction of the users in the system. We aim to help create a safer and more satisfactory experience for users on SNSs or in other social environments. In a set of experiments, we compare the effectiveness of the CBR module to the effectiveness of different affective state detection methods. We compare the capacity to detect cases of messages that would generate future problems or negative repercussions on the SNS. For this purpose, we use messages generated in a private SNS, called Pesedia. In the experiments in the laboratory, the CBR module managed to outperform the other proposed analyzers in almost every case. The CBR module was fine-tuned to explore its performance when populating the case base with different configurations.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85070220139
|
A CCD based machine vision system for real-time text detection
|
Text detection and recognition is a hot topic in computer vision, which is considered to be the further development of traditional optical character recognition (OCR) technology. With the rapid development of machine vision systems and the wide application of deep learning algorithms, text recognition has achieved excellent performance. In contrast, detecting text blocks in complex natural scenes is still a challenging task. At present, many advanced natural scene text detection algorithms have been proposed, but most of them run slowly due to the complexity of the detection pipeline and cannot be applied to industrial scenes. In this paper, we propose a CCD based machine vision system for real-time text detection in invoice images. In this system, we applied optimizations from several aspects, including the optical system, the hardware architecture, and the deep learning algorithm, to improve the speed performance of the machine vision system. The experimental data confirm that the optimization methods can significantly improve the running speed of the machine vision system and enable it to meet the real-time text detection requirements of industrial scenarios.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
https://aclanthology.org//W12-5302/
|
A CCG-based Approach to Fine-Grained Sentiment Analysis
|
[
"Sentiment Analysis"
] |
[
78
] |
|
SCOPUS_ID:84900403629
|
A CDA representation of the May 31, 2010 Gaza-bound aid flotilla raid: Portrayal of the events and actors
|
News media as both a site and a process of social interaction and ideological construction (van Dijk 1993) play a unique role and carry a signifying power in structuring social thinking and disseminating social knowledge on issues related to national or international agendas, and in representing events in particular ways (Fairclough 1995). Through a comparative analysis of 30 articles from four newspapers on the events of the May 31, 2010 Gaza-bound aid flotilla raid and their aftermath, the present study examines the discursive properties of the articles in the process of construction of the events and representation of their participants through in and out-group identity. Using van Dijk's (1991) approach to news analysis and drawing on the analytical framework of transitivity and lexical cohesion proposed in Halliday (1994), the study investigates the representation of the events and social actors. The results reveal the links between choices of certain discourse strategies realized in certain linguistic forms and the role of ideologies and power relations underlying such forms and strategies. © John Benjamins Publishing Company.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Representation Learning"
] |
[
71,
72,
12
] |
SCOPUS_ID:85026675242
|
A CDT-styled end-to-end Chinese discourse parser
|
Discourse parsing is a challenging task and plays a critical role in discourse analysis. Since the release of the Rhetorical Structure Theory Discourse Treebank and the Penn Discourse Treebank, the research on English discourse parsing has attracted increasing attention and achieved considerable success in recent years. At the same time, some preliminary research on certain subtasks about discourse parsing for other languages, such as Chinese, has been conducted. In this article, we present an end-to-end Chinese discourse parser with the Connective-Driven Dependency Tree scheme, which consists of multiple components in a pipeline architecture, such as the elementary discourse unit (EDU) detector, discourse relation recognizer, discourse parse tree generator, and attribution labeler. In particular, the attribution labeler determines two attributions (i.e., sense and centering) for every nonterminal node (i.e., discourse relation) in the discourse parse trees. Systematically, our parser detects all EDUs in a free text, generates the discourse parse tree in a bottom-up way, and determines the sense and centering attributions for all nonterminal nodes by traversing the discourse parse tree. Comprehensive evaluation on the Connective-Driven Dependency Treebank corpus from both component-wise and error-cascading perspectives is conducted to illustrate how each component performs in isolation, and how the pipeline performs with error propagation. Finally, it shows that our end-to-end Chinese discourse parser achieves an overall F1 score of 20% with full automation.
|
[
"Semantic Text Processing",
"Semantic Parsing",
"Syntactic Text Processing",
"Discourse & Pragmatics",
"Syntactic Parsing"
] |
[
72,
40,
15,
71,
28
] |
SCOPUS_ID:85148895038
|
A CENTURY OF HAPPINESS IN CHILDREN'S LITERATURE (1920-2020): A STALINIST CANON AND ITS LONG TERM CONSEQUENCES
|
In the Stalin era, the category of happiness was sharply politicized, and a new canon of representation of Soviet happiness was established in literature and cinema. The article presents empirical data that permits a quantitative evaluation of the scale and nature of the Stalinist transformation of the happiness narrative in a single genre - realistic children's prose. The corpus of 19th-20th century Russian prose for children and youth (Detcorpus) served as the source of data. The scale of changes was assessed by measuring the frequency of the lexemes 'happiness' and 'happy' in the corpus. Semantic transformations were assessed based on changes in the contexts of the use of these lexemes, measured using diachronic word embeddings. The results of the study partially confirm the findings of previous studies and raise new questions. In particular, the number of mentions of happiness dropped sharply in children's literature in the 1920s. It can be assumed that the “cancellation” of happiness was the first stage in the formation of the Stalinist canon. The data also provides evidence that, starting in the Stalin period, emotions became much more relevant in the representation of happiness in children's literature. This process could also be linked to the formation of the Stalinist canon.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:84958538703
|
A CFS-based feature weighting approach to naive bayes text classifiers
|
Recent work in supervised learning has shown that naive Bayes text classifiers with strong assumptions of independence among features, such as multinomial naive Bayes (MNB), complement naive Bayes (CNB) and the one-versus-all-but-one model (OVA), have achieved remarkable classification performance. This fact raises the question of whether a naive Bayes text classifier with less restrictive assumptions can perform even better. Responding to this question, we firstly evaluate the correlation-based feature selection (CFS) approach in this paper and find that it performs even worse than the original versions. Then, we propose a CFS-based feature weighting approach to these naive Bayes text classifiers. We call our feature weighted versions FWMNB, FWCNB and FWOVA respectively. Our proposed approach weakens the strong assumptions of independence among features by weighting the correlated features. The experimental results on a large suite of benchmark datasets show that our feature weighted versions significantly outperform the original versions in terms of classification accuracy. © 2014 Springer International Publishing Switzerland.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85131263063
|
A CHARACTER-LEVEL SPAN-BASED MODEL FOR MANDARIN PROSODIC STRUCTURE PREDICTION
|
The accuracy of prosodic structure prediction is crucial to the naturalness of synthesized speech in Mandarin text-to-speech systems, but it is currently limited by the widely-used sequence-to-sequence framework and error accumulation from previous word segmentation results. In this paper, we propose a span-based Mandarin prosodic structure prediction model to obtain an optimal prosodic structure tree, which can be converted to the corresponding prosodic label sequence. Without the prerequisite of word segmentation, rich linguistic features are provided by a Chinese character-level BERT and sent to an encoder with a self-attention architecture. On top of this, span representation and label scoring are used to describe all possible prosodic structure trees, each of which has its corresponding score. To find the optimal tree with the highest score for a given sentence, a bottom-up CKY-style algorithm is further used. The proposed method can predict prosodic labels of different levels at the same time and accomplish the process directly from Chinese characters in an end-to-end manner. Experiment results on two real-world datasets demonstrate the excellent performance of our span-based method over all sequence-to-sequence baseline approaches.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Syntactic Text Processing",
"Text Segmentation",
"Multimodality"
] |
[
52,
72,
70,
15,
21,
74
] |
http://arxiv.org/abs/2110.07137v1
|
A CLIP-Enhanced Method for Video-Language Understanding
|
This technical report summarizes our method for the Video-And-Language Understanding Evaluation (VALUE) challenge (https://value-benchmark.github.io/challenge\_2021.html). We propose a CLIP-Enhanced method to incorporate the image-text pretrained knowledge into downstream video-text tasks. Combined with several other improved designs, our method outperforms the state-of-the-art by $2.4\%$ ($57.58$ to $60.00$) Meta-Ave score on VALUE benchmark.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85081688143
|
A CLSTM-TMN for marketing intention detection
|
In recent years, neural network-based models from machine learning and deep learning have achieved excellent results in text classification. In research on marketing intention detection, classification measures are adopted to identify news with marketing intent. However, most current news appears in the form of dialogs, and it is challenging to find potential relevance between news sentences to determine the latent semantics. To address this issue, this paper proposes a CLSTM-based topic memory network (called CLSTM-TMN for short) for marketing intention detection. A ReLU-Neuro Topic Model (RNTM) is proposed: a hidden layer is constructed to efficiently capture the subject document representation, and latent variables are applied to enhance the granularity of topic model learning. We have changed the structure of the current Neural Topic Model (NTM) to add a CLSTM classifier. This method is a new combination that ensembles both long short-term memory (LSTM) and a convolutional neural network (CNN). The CLSTM structure has the ability to find relationships in a sequence of text input and to extract local and dense features through convolution operations. The effectiveness of the method for marketing intention detection is illustrated in the experiments. Our detection model shows a more significant improvement in F1 (7%) than other compared models.
|
[
"Language Models",
"Topic Modeling",
"Semantic Text Processing",
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
9,
72,
24,
36,
3
] |
SCOPUS_ID:84983670765
|
A CMAC-based scheme for determining membership with classification of text strings
|
Membership determination of text strings has been an important procedure for analyzing textual data of a tremendous amount, especially when time is a crucial factor. Bloom filter has been a well-known approach for dealing with such a problem because of its succinct structure and simple determination procedure. As determination of membership with classification is becoming increasingly desirable, parallel Bloom filters are often implemented for facilitating the additional classification requirement. The parallel Bloom filters, however, tend to produce additional false-positive errors since membership determination must be performed on each of the parallel layers. We propose a scheme based on CMAC, a neural network mapping, which only requires a single-layer calculation to simultaneously obtain information of both the membership and classification. A hash function specifically designed for text strings is also proposed. The proposed scheme could effectively reduce false-positive errors by converging the range of membership acceptance to the minimum for each class during the neural network mapping. Simulation results show that the proposed scheme committed significantly less errors than the benchmark, parallel Bloom filters, with limited and identical memory usage at different classification levels.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
SCOPUS_ID:85140720269
|
A CNN-Based Born-Again TSK Fuzzy Classifier Integrating Soft Label Information and Knowledge Distillation
|
This paper proposes a CNN-based born-again Takagi-Sugeno-Kang (TSK) fuzzy classifier denoted as CNNBaTSK. CNNBaTSK achieves the following distinctive characteristics: 1) CNNBaTSK provides a new perspective of knowledge distillation with a non-iterative learning method (least learning machine with knowledge distillation, LLM-KD) to solve the consequent parameters of fuzzy rule, where consequent parameters are trained jointly on the ground-truth label loss, knowledge distillation loss, and regularization term; 2) with the inherent advantage of the fuzzy rule, CNNBaTSK has the capability to express the dark knowledge acquired from the CNN in an interpretable manner. Specifically, the dark knowledge (soft label information) is partitioned into five fixed antecedent fuzzy spaces. The centers of each soft label information in different fuzzy rules are {0, 0.25, 0.5, 0.75, 1}, which may have corresponding linguistic explanations: {very low, low, medium, high, very high}. For the consequent part of the fuzzy rule, the original features are employed to train the consequent parameters that ensure the direct interpretability in the original feature space. The experimental results on the benchmark datasets and the CHB-MIT EEG dataset demonstrate that CNNBaTSK can simultaneously improve the classification performance and model interpretability.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Information Extraction & Text Mining",
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP",
"Text Classification",
"Green & Sustainable NLP"
] |
[
52,
72,
24,
3,
81,
4,
36,
68
] |
SCOPUS_ID:85060020728
|
A CNN-Based approach to detecting text from images of whiteboards and handwritten notes
|
Detecting handwritten text from images of whiteboards and handwritten notes is an important yet under-researched topic. In this paper, we propose a convolutional neural network (CNN) based approach to address this problem. First, to detect text instances of different scales, a feature pyramid network is adopted as a backbone network to extract three feature maps of different scales from a given input image, where a scale-specific detection module is attached to each feature map. Then, for a pixel on each feature map, a detection module is used to predict whether there exists a text instance at its corresponding location in the input image. For positive prediction, the bounding box of the detected text segment and the links between the concerned pixel and its 8 neighbors on the feature map are predicted simultaneously. Based on the linkage information, text segments extracted from each feature map are grouped into text-lines respectively and wrongly grouped text-lines are separated by a graph-based text-line segmentation method. Finally, detection results from three different feature maps are aggregated by a skewed non-maximum suppression algorithm. Our proposed approach has achieved superior results on a testing set consisting of 285 natural scene images of whiteboards and handwritten notes.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85073276861
|
A CNN-BiLSTM Model for Document-Level Sentiment Analysis
|
Document-level sentiment analysis is a challenging task given the large size of the text, which leads to an abundance of words and opinions, at times contradictory, in the same document. This analysis is particularly useful in analyzing press articles and blog posts about a particular product or company, and it requires a high concentration, especially when the topic being discussed is sensitive. Nevertheless, most existing models and techniques are designed to process short text from social networks and collaborative platforms. In this paper, we propose a combination of Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) models, with Doc2vec embedding, suitable for opinion analysis in long texts. The CNN-BiLSTM model is compared with CNN, LSTM, BiLSTM and CNN-LSTM models with Word2vec/Doc2vec embeddings. The Doc2vec with CNN-BiLSTM model was applied on French newspapers articles and outperformed the other models with 90.66% accuracy.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis",
"Representation Learning"
] |
[
52,
72,
78,
12
] |
SCOPUS_ID:85079838755
|
A CNN-LSTM network with attention approach for learning universal sentence representation in embedded system
|
Models for obtaining universal sentence representations are getting larger and larger, making them unsuitable for small embedded systems. The paper presents an extended encoder-decoder model with an attention mechanism for learning distributed sentence representations. The CNN encoder can be extracted and applied to other downstream NLP tasks on small embedded systems. Inspired by the linguistic features of word embeddings, the different dimensions of the sentence representation can be aligned to specific linguistic features. The decoder, which decodes one word at a time, focuses on a partial dimension of the sentence representation encoded into a fixed-length vector, where CNN is more effective than LSTM, especially on devices with limited computing power. Moreover, the expanded LSTM with an attention mechanism serves as the decoder to learn multiple tasks: reconstructing the original sentence and predicting the next sentence. The model was trained on an extensive collection of novels to learn the sentence representation encoder. Finally, the small-scale CNN encoder obtained encouraging results on several benchmark datasets and multiple tasks.
|
[
"Language Models",
"Semantic Text Processing",
"Representation Learning"
] |
[
52,
72,
12
] |
SCOPUS_ID:85126951190
|
A CNN-RNN Based Fake News Detection Model Using Deep Learning
|
False news has become widespread in the last decade in political, economic, and social dimensions. This has been aided by the deep entrenchment of social media networking in these dimensions. Facebook and Twitter have been known to influence the behavior of people significantly. People rely on news/information posted on their favorite social media sites to make purchase decisions. Also, news posted on mainstream and social media platforms has a significant impact on a particular country's economic stability and social tranquility. Therefore, there is a need to develop a deception-detection system that evaluates the news to avoid the repercussions resulting from the rapid dispersion of fake news on social media platforms and other online platforms. To achieve this, the proposed system uses the preprocessing stage results to assign specific vectors to words. Each vector assigned to a word represents an intrinsic characteristic of the word. The resulting word vectors are then applied to RNN models before proceeding to the LSTM model. The output of the LSTM is used to determine whether the news article/piece is fake or otherwise.
|
[
"Language Models",
"Semantic Text Processing",
"Ethical NLP",
"Reasoning",
"Fact & Claim Verification",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
17,
8,
46,
4
] |
SCOPUS_ID:85070822006
|
A CNN-based feature extraction scheme for patent analysis
|
Traditional patent analysis is largely based on probabilistic analysis methods. In this paper, we propose a novel scheme to extract features of patent text based on an improved Convolutional Neural Network (CNN) for text processing. It integrates text mining techniques and Vector Space Modeling (VSM) for discovering the abstract 'topics' that occur in a collection of patent documents, and maps the original data to a dataset in a high-dimensional vector space. We also build structured data from the patent text for further patent analysis. Finally, various experiments are conducted to evaluate the performance of the proposed scheme.
|
[
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Representation Learning"
] |
[
3,
72,
12
] |
http://arxiv.org/abs/1907.10210v1
|
A CNN-based tool for automatic tongue contour tracking in ultrasound images
|
For speech research, ultrasound tongue imaging provides a non-invasive means for visualizing tongue position and movement during articulation. Extracting tongue contours from ultrasound images is a basic step in analyzing ultrasound data, but this task often requires non-trivial manual annotation. This study presents an open source tool for fully automatic tracking of tongue contours in ultrasound frames using neural network based methods. We have implemented and systematically compared two convolutional neural networks, U-Net and DenseU-Net, under different conditions. Though both models can perform automatic contour tracking with comparable accuracy, the DenseU-Net architecture seems more generalizable across test datasets while U-Net has a faster extraction speed. Our comparison also shows that the choice of loss function and data augmentation have a greater effect on tracking performance in this task. This publicly available segmentation tool shows considerable promise for the automated tongue contour annotation of ultrasound images in speech research.
|
[
"Visual Data in NLP",
"Speech & Audio in NLP",
"Multimodality"
] |
[
20,
70,
74
] |
SCOPUS_ID:85121795531
|
A CNN-transformer hybrid approach for decoding visual neural activity into text
|
Background and Objective: Most studies used neural activities evoked by linguistic stimuli such as phrases or sentences to decode the language structure. However, compared to linguistic stimuli, it is more common for the human brain to perceive the outside world through non-linguistic stimuli such as natural images, so only relying on linguistic stimuli cannot fully understand the information perceived by the human brain. To address this, an end-to-end mapping model between visual neural activities evoked by non-linguistic stimuli and visual contents is demanded. Methods: Inspired by the success of the Transformer network in neural machine translation and the convolutional neural network (CNN) in computer vision, here a CNN-Transformer hybrid language decoding model is constructed in an end-to-end fashion to decode functional magnetic resonance imaging (fMRI) signals evoked by natural images into descriptive texts about the visual stimuli. Specifically, this model first encodes a semantic sequence extracted by a two-layer 1D CNN from the multi-time visual neural activity into a multi-level abstract representation, then decodes this representation, step by step, into an English sentence. Results: Experimental results show that the decoded texts are semantically consistent with the corresponding ground truth annotations. Additionally, by varying the encoding and decoding layers and modifying the original positional encoding of the Transformer, we found that a specific architecture of the Transformer is required in this work. Conclusions: The study results indicate that the proposed model can decode the visual neural activities evoked by natural images into descriptive text about the visual stimuli in the form of sentences. Hence, it may be considered as a potential computer-aided tool for neuroscientists to understand the neural mechanism of visual information processing in the human brain in the future.
|
[
"Language Models",
"Multimodality",
"Semantic Text Processing",
"Visual Data in NLP"
] |
[
52,
74,
72,
20
] |
SCOPUS_ID:85129563160
|
A COGNITIVE APPROACH IN ONOMASTICS: SOME NOTES ON METAPHORICAL PLACE-NAMES
|
In the framework of cognitive linguistics, metaphor is considered as a general cognitive mechanism that plays a fundamental role in human thinking and understanding, in the creation of our social, cultural, and psychological reality, while also impacting our language use. One of its special types, the image metaphor, also appears in name-giving and serves as the basis of a specific toponym type. With reference primarily to Hungarian toponymic corpus and, sporadically, to other languages, this paper provides a brief description of the mechanism of image metaphor and an overview of typical target concepts and source domains of metaphorical toponyms and of the features of the geographical objects that constitute the basis for metaphorical mappings. It is shown that metaphorical naming may be based on transonymization and involve enantiosemy that helps create irony. The fact that metaphorical place-names are most frequently documented when collecting unofficial toponymy has a significant impact on their status in onomastic research. Being most often microtoponyms, metaphorical place-names display clear quantitative variation from one area to another, which can be explained by the weight the metaphorical naming patterns have among the speakers living in different regions, which, in turn, may influence their readiness to use those or similar patterns for naming new objects. The author also suggests that despite their numerical scarcity in historical sources, metaphorical toponyms constitute a long-standing ancient class of names, and outlines some perspectives for the further study.
|
[
"Visual Data in NLP",
"Cognitive Modeling",
"Linguistics & Cognitive NLP",
"Multimodality"
] |
[
20,
2,
48,
74
] |
SCOPUS_ID:85127982573
|
A COMPARATIVE ANALYSIS OF THE LINGUISTIC COMPLEXITY OF GRADE 12 ENGLISH HOME LANGUAGE AND ENGLISH FIRST ADDITIONAL LANGUAGE EXAMINATION PAPERS
|
It is expected that English Home Language (Eng HL), as a subject, is more complex than English First Additional Language (Eng FAL). This article aims to uncover the reality of this expectation by comparatively investigating the linguistic complexity of texts used for reading comprehension and summaries in the final school exit examinations. The Coh-Metrix online platform was used to analyse a combined total of 24 Grade 12 final examination texts for Eng HL and Eng FAL ranging from 2008 to 2019. Five main indices relating to the word level, sentence, readability, lexical diversity and referential cohesion linguistic complexity were explored. The findings illustrated that the linguistic complexities of the texts used for reading comprehension and summary writing in the two subjects differ significantly, with Eng HL being more linguistically complex than Eng FAL texts. Furthermore, the Flesch-Kincaid Grade Level measure indicates the Eng FAL texts as two grades below the overall grade for Eng HL texts. Nonetheless, the linguistic complexity measures used in this article confirm the expectation that texts used in Eng HL reading comprehension and summary writing are more complex than those used in Eng FAL.
|
[
"Reasoning",
"Semantic Text Processing",
"Text Complexity",
"Machine Reading Comprehension"
] |
[
8,
72,
42,
37
] |
SCOPUS_ID:0009586820
|
A COMPARATIVE STUDY BETWEEN POLYCLASS AND MULTICLASS LANGUAGE MODELS
|
In this work, we introduce the concept of Multiclass for language modeling and compare it to the Polyclass model. The originality of the Multiclass is its capability to parse a string of classes/tags into variable-length independent sequences. A few experimental tests were carried out on a class corpus extracted from the automatically labeled French «Le Monde» word corpus. This corpus contains a set of 43 million words. In our experiments, Multiclass models outperform first-order Polyclass but are slightly outperformed by second-order Polyclass.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85125633839
|
A CONCEPTUAL MODEL FOR DECISION SUPPORT SYSTEMS USING ASPECT BASED SENTIMENT ANALYSIS
|
Sentiment analysis and opinion mining is the most explored area of research in which opinions are analysed for better decisions and recommendations. Currently, this field is supported by many decision support systems that not only depend on the preferences of the decision support system (DSS) designer but are also affected by public thoughts and opinions. Potential users, along with their opinions, also play a vital role in the decision-making process. Opinion exploration using sentiment analysis is the process of analysing facts, sentiments and opinions that are expressed on different social media forums in the form of tweets or reviews. In this research work, a decision process and sentiment analysis have been amalgamated to strengthen the functionality of a traditional DSS for a better decision-making process from reviews on open forums. The proposed system comprises data extraction from social media reviews, pre-processing using natural language processing (NLP), aspect extraction using a part-of-speech (POS) tagger and normalized Google distance (NGD), aspect optimization using a genetic algorithm (GA), and finally polarity estimation of each sentiment expressed in the review using SentiWordNet. The experimental results demonstrate the superiority of the proposed work: adding sentiment and opinion information yields a better decision-making process than existing state-of-the-art techniques.
|
[
"Sentiment Analysis",
"Aspect-based Sentiment Analysis",
"Information Extraction & Text Mining"
] |
[
78,
23,
3
] |
SCOPUS_ID:0344236118
|
A CONSOLIDATED LANGUAGE MODEL FOR SPEECH RECOGNITION
|
Hybrid speech recognition systems using both bigram and grammar models yield improved performance compared with the use of either model alone, but performance is suboptimal because the grammar is abandoned for sentences that fail to parse overall. By merging bigrams (in general n-grams) and grammars into a single framework we aim to combine the advantages of both, in particular structural capacity and trainability, in a robust recognition system. A substring parser allows whatever grammar structure is present to participate in scoring the candidate sentences. Extended bigrams using an information criterion capture remote dependencies and reduce perplexity. The first version of a consolidated model using these methods is described.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
SCOPUS_ID:85117287067
|
A COPRA based algorithm for subject division
|
[Purpose/Significance] Online encyclopedias such as Wikipedia include a large number of concepts. However, in such encyclopedias, there are no clear divisions between one concept and another, between concepts and disciplines, or between one discipline and another. This makes it difficult for junior researchers in a given discipline to obtain domain-relevant knowledge systematically and efficiently. [Method/Process] To obtain information in a specific subject area and organize knowledge better, a new algorithm is designed for subject division. Specifically, a topic text network is built with a classical topic model, and approaches from complex network analysis are introduced for subject division. Then, the overlapping community label propagation algorithm is improved to identify the boundaries of different subject divisions. [Results/Conclusions] Finally, 300 Wikipedia entry texts were investigated as samples to evaluate the effectiveness of the proposed algorithm. Several categories of experiments were conducted to analyze the community structure of the entry network and the complexity of the subject division, which helps to provide a corpus for building a subject knowledge base.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85034647767
|
A CRF based machine learning approach for biomedical named entity recognition
|
The amount of biomedical textual information available on the web keeps growing. Given the size of the biomedical literature and databases, it is very difficult to extract the specific information that users are interested in. It is nearly impossible for humans to process all these data, and it is even difficult for computers to extract the information since it is not stored in a structured format. Identifying named entities and classifying them can help in extracting useful information from unstructured text documents. This paper presents a new method of utilizing biomedical knowledge, both by exact matching against a disease dictionary and by adding a semantic concept feature through UMLS semantic type filtering, in order to improve machine learning based recognition of human disease named entities. By engineering the concept semantic type into the feature set, we demonstrate the importance of domain knowledge for machine learning based disease NER. The background knowledge enriches the representation of the named entity and helps to disambiguate terms in context, thereby improving overall NER performance.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85049373010
|
A CRF-based stacking model with meta-features for named entity recognition
|
Named Entity Recognition (NER) is a challenging task in Natural Language Processing. Recently, machine learning based methods are widely used for the NER task and outperform traditional handcrafted rule based methods. As an alternative way to handle the NER task, stacking, which combines a set of classifiers into one classifier, has not been well explored for the NER task. In this paper, we propose a stacking model for the NER task. We extend the original stacking model from both model and feature aspects. We use Conditional Random Fields as the level-1 classifier, and we also apply meta-features from global aspect and local aspect of the level-0 classifiers and tokens in our model. In the experiments, our model achieves the state-of-the-art performance on the CoNLL 2003 Shared task.
|
[
"Named Entity Recognition",
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
34,
24,
36,
3
] |
SCOPUS_ID:85041440638
|
A CRFs-based approach empowered with word representation features to learning biomedical named entities from medical text
|
Targeting the identification of specific types of entities, biomedical named entity recognition is a fundamental task of biomedical text processing. This paper presents a CRFs-based approach to learning disease entities by identifying their boundaries in texts. Two types of word representation features are proposed and used, including word embedding features and cluster-based features. In addition, an external disease dictionary feature is also explored in the learning process. Based on a publicly available NCBI disease corpus, we evaluate the performance of the CRFs-based model with the combination of these word representation features. The results show that using these features can significantly improve BNER performance, with an increase of 24.7% in F1 measure, demonstrating the effectiveness of the proposed features and the feature-empowered approach.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Representation Learning"
] |
[
34,
3,
72,
12
] |
SCOPUS_ID:85140308570
|
A CRITICAL DISCURSIVE ANALYSIS ON THE REPRESENTATION OF WOMEN AND ABORTION IN REVISTA AZMINA
|
In this paper we investigate the discursive representations of abortion and women who had abortions within the journalistic practice. Among the various facets of the subject, in this paper we will discuss the legal or decriminalized practice in Brazil. For this, we selected seven texts from Revista AzMina. We undertook a theoretical-methodological analysis based on Critical Discourse Analysis (CDA) (FAIRCLOUGH, 2001, 2003, 2013). The results show that abortion is represented as a field of legal dispute, it is sometimes legal, sometimes illegal and completely depends on decisions that go beyond those that women themselves can make. AzMina’s reports aim either to provide guidance on rights, or to humanize the theme by presenting it from the voices of women involved in this practice.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Representation Learning"
] |
[
71,
72,
12
] |
SCOPUS_ID:84904067583
|
A CUDA implementation of the Continuous Space Language Model
|
The training phase of the Continuous Space Language Model (CSLM) was implemented in the NVIDIA hardware/software architecture Compute Unified Device Architecture (CUDA). A detailed explanation of the CSLM algorithm is provided. Implementation was accomplished using a combination of CUBLAS library routines, NVIDIA NPP functions, and CUDA kernel calls on three different CUDA enabled devices of varying compute capability, and a time savings over the traditional CPU approach was demonstrated. The efficiency of the CUDA version of the open source implementation is analyzed and compared to that using the Intel Math Kernel Libraries (MKL) on a variety of CUDA enabled and multi-core CPU platforms. It is demonstrated that a substantial performance benefit can be obtained using CUDA, even with nonoptimal code. Techniques for optimizing performance are then provided. Furthermore, an analysis is performed to determine the conditions in which the performance of CUDA exceeds that of the multi-core MKL realization. © 2013 Springer Science+Business Media New York.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85078940909
|
A CURE algorithm for Vietnamese sentiment classification in a parallel environment
|
Solutions to process big data are imperative and beneficial for numerous fields of research and commercial applications. Thus, a new model has been proposed in this paper to be used for big data set sentiment classification in the Cloudera parallel network environment. Clustering Using Representatives (CURE), combined with Hadoop MAP (M)/REDUCE (R) in Cloudera, a parallel network system, was used for 20,000 documents in a Vietnamese testing data set. The testing data set included 10,000 positive Vietnamese documents and 10,000 negative ones. After testing our new model on the data set, a 62.92% accuracy rate of sentiment classification was achieved. Although our data set is small, this proposed model is able to process millions of Vietnamese documents, in addition to data in other languages, to shorten the execution time in the distributed environment.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85045282794
|
A CWTM Model of Topic Extraction for Short Text
|
The topic model is designed to find potential topics in massive micro-blog data. On the one hand, the extraction of potential topics contributes to subsequent analysis. On the other hand, because of the particularity of the data, we cannot deal with it directly with traditional topic model algorithms. In the field of data mining, although traditional text topic mining has been widely studied, short texts like micro-blogs have the distinctive characteristics of network language and emerging novel words. Owing to the short messages, the sparsity of the data and their incomplete descriptions, topics of micro-blogs cannot be extracted efficiently. In this paper, we propose a simple, fast, and effective topic model for short texts, named the couple-word topic model (CWTM). Based on the Dirichlet Multinomial Mixture (DMM) model, it leverages couple-word co-occurrence, instead of the traditional word co-occurrence, to help distill better topics over short texts. The method can alleviate data sparseness problems and improve the performance of the model, and the Gibbs sampling algorithm is adopted to derive the parameters. Through extensive experiments on two real-world short text collections, we find that CWTM achieves comparable or better topic representations than traditional topic models.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
https://aclanthology.org//W14-4011/
|
A CYK+ Variant for SCFG Decoding Without a Dot Chart
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:0025446887
|
A Cache-Based Natural Language Model for Speech Recognition
|
Speech recognition systems must often decide between competing ways of breaking up the acoustic input into strings of words. Since the possible strings may be acoustically similar, a language model is required; given a word string, the model returns its linguistic probability. This paper discusses several Markov language models. Subsequently, we present a new kind of language model which reflects short-term patterns of word use by means of a “cache component,” analogous to “cache memory” in hardware terminology. The model also contains a “3g-gram component” of the traditional type. The combined model and a pure 3g-gram model were tested on samples drawn from the LOB (Lancaster-Oslo/Bergen) corpus of English text. We discuss the relative performance of the two models, and make suggestions for future improvements. © 1990 IEEE
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
SCOPUS_ID:85108905924
|
A Call Center System based on Expert Systems for the Acquisition of Agricultural Knowledge Transferred from Text-to-Speech in China
|
There is rich knowledge in expert systems that can be used to solve practical problems, but its promotion and application must rely on information facilities. The application of both computers and the Internet for Chinese farmers are not common, which leads to restrictions on the promotion and application of expert systems in rural areas of China. On the other hand, the existing call centers lack a professional knowledge base and the method of automatically calling the knowledge base in real-time, which makes it difficult to meet the needs of users wanting to obtain knowledge in a timely manner. To address these problems, a call center embedded in an expert system inference algorithm and knowledge base for farmers to obtain agricultural knowledge through mobile phones or fixed-line telephones was established. By studying the event-condition-action-based (ECA-based) database triggering model, remote method invocation-based (RMI-based) communication and iterative dichotomiser 3 algorithm-based (ID3-based) parameter extraction, the cohesion between the call center and the expert system was realized. The agricultural knowledge audio acquisition model was then coupled with the call center and the expert system was constructed, allowing farmers to acquire agricultural knowledge through mobile phones or fixed phones with fast responses. When used for cotton disease diagnosis, it can achieve a high diagnostic success rate (above 75%) when at least three disease symptoms are input into the expert system via the voice call, which provides an effective channel for Chinese farmers to obtain agricultural knowledge. It presents good application prospects in China, where 5G technology is currently developing rapidly.
|
[
"Knowledge Representation",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Multimodality"
] |
[
18,
72,
70,
74
] |
https://aclanthology.org//W18-6319/
|
A Call for Clarity in Reporting BLEU Scores
|
The field of machine translation faces an under-recognized problem because of inconsistency in the reporting of scores from its dominant metric. Although people refer to “the” BLEU score, BLEU is in fact a parameterized metric whose values can vary wildly with changes to these parameters. These parameters are often not reported or are hard to find, and consequently, BLEU scores between papers cannot be directly compared. I quantify this variation, finding differences as high as 1.8 between commonly used configurations. The main culprit is different tokenization and normalization schemes applied to the reference. Pointing to the success of the parsing community, I suggest machine translation researchers settle upon the BLEU scheme used by the annual Conference on Machine Translation (WMT), which does not allow for user-supplied reference processing, and provide a new tool, SACREBLEU, to facilitate this.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
http://arxiv.org/abs/2004.14958v1
|
A Call for More Rigor in Unsupervised Cross-lingual Learning
|
We review motivations, definition, approaches, and methodology for unsupervised cross-lingual learning and call for a more rigorous position in each of them. An existing rationale for such research is based on the lack of parallel data for many of the world's languages. However, we argue that a scenario without any parallel data and abundant monolingual data is unrealistic in practice. We also discuss different training signals that have been used in previous work, which depart from the pure unsupervised setting. We then describe common methodological issues in tuning and evaluation of unsupervised cross-lingual models and present best practices. Finally, we provide a unified outlook for different types of research in this area (i.e., cross-lingual word embeddings, deep multilingual pretraining, and unsupervised machine translation) and argue for comparable evaluation of these models.
|
[
"Multilinguality",
"Low-Resource NLP",
"Cross-Lingual Transfer",
"Responsible & Trustworthy NLP"
] |
[
0,
80,
19,
4
] |
http://arxiv.org/abs/1905.10453v2
|
A Call for Prudent Choice of Subword Merge Operations in Neural Machine Translation
|
Most neural machine translation systems are built upon subword units extracted by methods such as Byte-Pair Encoding (BPE) or wordpiece. However, the choice of number of merge operations is generally made by following existing recipes. In this paper, we conduct a systematic exploration on different numbers of BPE merge operations to understand how it interacts with the model architecture, the strategy to build vocabularies and the language pair. Our exploration could provide guidance for selecting proper BPE configurations in the future. Most prominently: we show that for LSTM-based architectures, it is necessary to experiment with a wide range of different BPE operations as there is no typical optimal BPE configuration, whereas for Transformer architectures, smaller BPE size tends to be a typically optimal choice. We urge the community to make prudent choices with subword merge operations, as our experiments indicate that a sub-optimal BPE configuration alone could easily reduce the system performance by 3-4 BLEU points.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85145881776
|
A Cancel Culture Corpus through the lens of Natural Language Processing
|
Cancel Culture as an Internet phenomenon has been previously explored from a social and legal science perspective. This paper demonstrates how Natural Language Processing tasks can be derived from this previous work, outlining techniques for how cancel culture can be measured, identified and evaluated. As part of this paper, we introduce a first cancel culture data set of over 2.3 million tweets and a framework to enlarge it further. We provide a detailed analysis of this data set and propose a set of features, based on various models including sentiment analysis and emotion detection, that can help characterize cancel culture.
|
[
"Emotion Analysis",
"Sentiment Analysis"
] |
[
61,
78
] |
https://aclanthology.org//1999.mtsummit-1.71/
|
A Cantonese-English machine translation system PolyU-MT-99
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
http://arxiv.org/abs/1808.04122v3
|
A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization
|
In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are reconstructed into corresponding capsules which are then routed to another capsule to produce a continuous vector. The length of this vector is used to measure the plausibility score of the triple. Our proposed CapsE obtains better performance than previous state-of-the-art embedding models for knowledge graph completion on two benchmark datasets WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17.
|
[
"Semantic Text Processing",
"Structured Data in NLP",
"Representation Learning",
"Knowledge Representation",
"Information Retrieval",
"Multimodality"
] |
[
72,
50,
12,
18,
24,
74
] |
http://arxiv.org/abs/1804.04266v2
|
A Capsule Network-based Embedding Model for Search Personalization
|
Search personalization aims to tailor search results to each specific user based on the user's personal interests and preferences (i.e., the user profile). Recent research approaches search personalization by modelling the potential 3-way relationship between the submitted query, the user and the search results (i.e., documents). That relationship is then used to personalize the search results to that user. In this paper, we introduce a novel embedding model based on the capsule network, which is a recent breakthrough in deep learning, to model the 3-way relationships for search personalization. In the model, each user (submitted query or returned document) is embedded by a vector in the same vector space. The 3-way relationship is described as a triple of (query, user, document) which is then modeled as a 3-column matrix containing the three embedding vectors. After that, the 3-column matrix is fed into a deep learning architecture to re-rank the search results returned by a basis ranker. Experimental results on query logs from a commercial web search engine show that our model achieves better performance than the basis ranker as well as strong search personalization baselines.
|
[
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning"
] |
[
72,
24,
12
] |
http://arxiv.org/abs/1911.04822v2
|
A Capsule Network-based Model for Learning Node Embeddings
|
In this paper, we focus on learning low-dimensional embeddings for nodes in graph-structured data. To achieve this, we propose Caps2NE -- a new unsupervised embedding model leveraging a network of two capsule layers. Caps2NE induces a routing process to aggregate feature vectors of context neighbors of a given target node at the first capsule layer, then feed these features into the second capsule layer to infer a plausible embedding for the target node. Experimental results show that our proposed Caps2NE obtains state-of-the-art performances on benchmark datasets for the node classification task. Our code is available at: \url{https://github.com/daiquocnguyen/Caps2NE}.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85076057926
|
A Cartographic Fiction: Béroalde de Verville, Le Voyage des princes fortunez
|
Maurice Bouguereau hired Gabriel Tavernier to help him complete an atlas designed and destined for Henry IV of Navarre. The atlas would bring before the eyes of the King the sum of the provinces, regions, cities and rivers and, no less, a sense of the nation. Henry would have at his behest an instrument vital for tactical maneuvers to counter the massive forces of the Holy League. Were he to win the wars and ascend to the throne, the same object could facilitate taxation, administration, commerce, and management of the nation. The atlas would remind its first and best reader that its maps, born of the "theater" of war and strife, were political objects. Recalling the encyclopedic project that Abraham Ortelius had launched with his sumptuous Theatrum orbis terrarum, the book would show the King the sum, substance and virtue of his kingdom. To enhance and launch the atlas, Bouguereau obtained prefatory matter praising its virtue and giving good cause and reason to the local magistrates and dignitaries. Among these writers of praise was Béroalde de Verville, whose cartographic fiction, Le Voyage des princes fortunez (1610), a novel of more than 700 pages, is the main focus of this essay, especially in terms of the relation between image and text.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
http://arxiv.org/abs/2010.03722v1
|
A Cascade Approach to Neural Abstractive Summarization with Content Selection and Fusion
|
We present an empirical study in favor of a cascade architecture to neural text summarization. Summarization practices vary widely but few other than news summarization can provide a sufficient amount of training data enough to meet the requirement of end-to-end neural abstractive systems which perform content selection and surface realization jointly to generate abstracts. Such systems also pose a challenge to summarization evaluation, as they force content selection to be evaluated along with text generation, yet evaluation of the latter remains an unsolved problem. In this paper, we present empirical results showing that the performance of a cascaded pipeline that separately identifies important content pieces and stitches them together into a coherent text is comparable to or outranks that of end-to-end systems, whereas a pipeline architecture allows for flexible content selection. We finally discuss how we can take advantage of a cascaded pipeline in neural text summarization and shed light on important directions for future research.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85128613061
|
A Cascade Binary Tagging Joint Extraction Method
|
Extracting entities and relations from unstructured text has become a significant task in natural language processing, especially for knowledge graphs. However, traditional relation extraction methods process this task in a pipelined manner, i.e., identifying entities first and then recognizing their relations, and rarely consider the relevance and independence of the two subtasks. To solve the problem of error accumulation, we present a joint learning method that casts the task as a tagging problem. Meanwhile, to understand the semantics of sentences, we introduce pre-trained models such as BERT into our approach. To solve the problem of entity overlap, we replace softmax with a sigmoid activation function. In the language and intelligent technology competition, experimental results show that our model outperforms the baseline model.
|
[
"Tagging",
"Relation Extraction",
"Syntactic Text Processing",
"Information Extraction & Text Mining"
] |
[
63,
75,
15,
3
] |
http://arxiv.org/abs/2207.01672v1
|
A Cascade Model for Argument Mining in Japanese Political Discussions: the QA Lab-PoliInfo-3 Case Study
|
The rVRAIN team tackled the Budget Argument Mining (BAM) task, consisting of a combination of classification and information retrieval sub-tasks. For the argument classification (AC), the team achieved its best performing results with a five-class BERT-based cascade model complemented with some handcrafted rules. The rules were used to determine if the expression was monetary or not. Then, each monetary expression was classified as a premise or as a conclusion in the first level of the cascade model. Finally, each premise was classified into the three premise classes, and each conclusion into the two conclusion classes. For the information retrieval (i.e., relation ID detection or RID), our best results were achieved by a combination of a BERT-based binary classifier, and the cosine similarity of pairs consisting of the monetary expression and budget dense embeddings.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Question Answering",
"Natural Language Interfaces",
"Argument Mining",
"Reasoning",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
27,
11,
60,
8,
36,
3
] |
https://aclanthology.org//W19-4502/
|
A Cascade Model for Proposition Extraction in Argumentation
|
We present a model to tackle a fundamental but understudied problem in computational argumentation: proposition extraction. Propositions are the basic units of an argument and the primary building blocks of most argument mining systems. However, they are usually substituted by argumentative discourse units obtained via surface-level text segmentation, which may yield text segments that lack semantic information necessary for subsequent argument mining processes. In contrast, our cascade model aims to extract complete propositions by handling anaphora resolution, text segmentation, reported speech, questions, imperatives, missing subjects, and revision. We formulate each task as a computational problem and test various models using a corpus of the 2016 U.S. presidential debates. We show promising performance for some tasks and discuss main challenges in proposition extraction.
|
[
"Syntactic Text Processing",
"Argument Mining",
"Reasoning",
"Text Segmentation",
"Information Extraction & Text Mining"
] |
[
15,
60,
8,
21,
3
] |
SCOPUS_ID:85104199857
|
A Cascaded Unsupervised Model for PoS Tagging
|
Part of speech (PoS) tagging is one of the fundamental syntactic tasks in Natural Language Processing, as it assigns a syntactic category to each word within a given sentence or context (such as noun, verb, adjective, etc.). Those syntactic categories could be used to further analyze the sentence-level syntax (e.g., dependency parsing) and thereby extract the meaning of the sentence (e.g., semantic parsing). Various methods have been proposed for learning PoS tags in an unsupervised setting without using any annotated corpora. One of the widely used methods for the tagging problem is log-linear models. Initialization of the parameters in a log-linear model is very crucial for the inference. Different initialization techniques have been used so far. In this work, we present a log-linear model for PoS tagging that uses another fully unsupervised Bayesian model to initialize the parameters of the model in a cascaded framework. Therefore, we transfer some knowledge between two different unsupervised models to leverage the PoS tagging results, where a log-linear model benefits from a Bayesian model's expertise. We present results for Turkish as a morphologically rich language and for English as a comparably morphologically poor language in a fully unsupervised framework. The results show that our framework outperforms other unsupervised models proposed for PoS tagging.
|
[
"Low-Resource NLP",
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Responsible & Trustworthy NLP"
] |
[
80,
73,
15,
63,
4
] |
SCOPUS_ID:85121918723
|
A Case Knowledge Extraction Method for Chinese Legal Texts
|
For judicial practitioners and ordinary citizens, it is very difficult to accurately find relevant jurisprudence or case information in a large amount of text. Therefore, it is necessary to extract information that people are generally concerned with from existing semi-structured or unstructured legal texts, store it in a database and form domain knowledge maps, thereby making it easier for users to search for and quickly obtain the required information. In view of the relatively fixed format structure and rigorous specification of legal texts, this paper mainly adopts strategies based on dictionaries and specific rules, supplemented by conditional random fields, so as to achieve higher accuracy and effectiveness of information extraction. First, we define legal entity types and named entities. For the actual needs in this field, 9 types of named entities, including the name of the person, the name of the case, and the case number, were defined, manual labeling was carried out, and then the conditional random field model was used to identify the named entities. Then, the case knowledge is extracted according to the judgment structure and framework drawn up for the different types of documents and cases. Finally, the rule-based method is used to extract the routine information of the case, and dependency trigrams are extracted from the "identified facts" part by using the dependency syntax analysis method.
|
[
"Information Extraction & Text Mining"
] |
[
3
] |
SCOPUS_ID:85149104320
|
A Case Study Analysis of Modifications of Argumentation Structures as a Result of Translation from German into Czech
|
The paper will focus on the analysis of selected linguistic markers of argumentation structures in Czech and German. On the basis of corpus-based analysis, I work with the assumption that argumentation structures are one of the parameters of equivalence in translation. The theoretical starting point for this analysis is the hypothesis that the linguistic form of arguments has a significant impact on their identification and potential. In my paper, I will pursue the following specific questions: 1) What are the linguistic markers of argument strength/weakness in German and in Czech? 2) How do the mutual relationships between structure and linguistic outcome change as a result of the translation? 3) Might the effects resulting from the translation of the argumentation structures be interpreted as processes of explicitation and implicitation? 4) What are the advantages and disadvantages of working with a parallel corpus as a basis for the analysis of the translation of local argumentative structures? Since the structures of argumentation are one of the elementary fundamentals of a text, issues connected to their translation represent one of the central research interests in Translation Studies.
|
[
"Machine Translation",
"Text Generation",
"Argument Mining",
"Reasoning",
"Multilinguality"
] |
[
51,
47,
60,
8,
0
] |
https://aclanthology.org//W13-2126/
|
A Case Study Towards Turkish Paraphrase Alignment
|
[
"Paraphrasing",
"Text Generation"
] |
[
32,
47
] |
|
http://arxiv.org/abs/2111.02259v2
|
A Case Study and Qualitative Analysis of Simple Cross-Lingual Opinion Mining
|
User-generated content from social media is produced in many languages, making it technically challenging to compare the discussed themes from one domain across different cultures and regions. It is relevant for domains in a globalized world, such as market research, where people from two nations and markets might have different requirements for a product. We propose a simple, modern, and effective method for building a single topic model with sentiment analysis capable of covering multiple languages simultaneously, based on a pre-trained state-of-the-art deep neural network for natural language understanding. To demonstrate its feasibility, we apply the model to newspaper articles and user comments of a specific domain, i.e., organic food products and related consumption behavior. The themes match across languages. Additionally, we obtain a high proportion of stable and domain-relevant topics, a meaningful relation between topics and their respective textual contents, and an interpretable representation for social media documents. Marketing can potentially benefit from our method, since it provides an easy-to-use means of addressing specific customer interests from different market regions around the globe. For reproducibility, we provide the code, data, and results of our study.
|
[
"Multilinguality",
"Opinion Mining",
"Cross-Lingual Transfer",
"Sentiment Analysis"
] |
[
0,
49,
19,
78
] |
SCOPUS_ID:85146197091
|
A Case Study and Qualitative Analysis of Simple Cross-lingual Opinion Mining
|
User-generated content from social media is produced in many languages, making it technically challenging to compare the discussed themes from one domain across different cultures and regions. It is relevant for domains in a globalized world, such as market research, where people from two nations and markets might have different requirements for a product. We propose a simple, modern, and effective method for building a single topic model with sentiment analysis capable of covering multiple languages simultaneously, based on a pre-trained state-of-the-art deep neural network for natural language understanding. To demonstrate its feasibility, we apply the model to newspaper articles and user comments of a specific domain, i.e., organic food products and related consumption behavior. The themes match across languages. Additionally, we obtain a high proportion of stable and domain-relevant topics, a meaningful relation between topics and their respective textual contents, and an interpretable representation for social media documents. Marketing can potentially benefit from our method, since it provides an easy-to-use means of addressing specific customer interests from different market regions around the globe. For reproducibility, we provide the code, data, and results of our study.
|
[
"Multilinguality",
"Topic Modeling",
"Opinion Mining",
"Sentiment Analysis",
"Cross-Lingual Transfer",
"Information Extraction & Text Mining"
] |
[
0,
9,
49,
78,
19,
3
] |
http://arxiv.org/abs/2302.01842v1
|
A Case Study for Compliance as Code with Graphs and Language Models: Public release of the Regulatory Knowledge Graph
|
The paper presents a study on using language models to automate the construction of an executable Knowledge Graph (KG) for compliance. The paper focuses on Abu Dhabi Global Market regulations and taxonomy, and involves manually tagging a portion of the regulations and training BERT-based models, which are then applied to the rest of the corpus. Coreference resolution and syntax analysis were used to parse the relationships between the tagged entities and to form a KG stored in a Neo4j database. The paper states that the use of machine learning models released by regulators to automate the interpretation of rules is a vital step towards compliance automation, demonstrates the concept of querying with Cypher, and states that the produced sub-graphs combined with Graph Neural Networks (GNN) will achieve expandability in judgment automation systems. The graph is open-sourced on GitHub to provide structured data for future advancements in the field.
|
[
"Language Models",
"Programming Languages in NLP",
"Semantic Text Processing",
"Structured Data in NLP",
"Knowledge Representation",
"Multimodality"
] |
[
52,
55,
72,
50,
18,
74
] |
SCOPUS_ID:84988835204
|
A Case Study in Big Data Analytics: Exploring Twitter Sentiment Analysis and the Weather
|
The age of social media is upon us. People around the world use social media tools such as Twitter to broadcast their moods, opinions, and status. It is possible to gauge the sentiment of people by analyzing tweets using machine-learning approaches. The question that is explored in this chapter is whether the weather impacts human emotion. In some countries this might be through rain, whilst in others it might be through excessive heat. Tackling this question requires a Big Data processing infrastructure that scales to millions of people and their changing moods over time and correlates them with extensive disaggregated weather data. This chapter explores this relationship through a cloud-based Big Data solution. It provides a practical demonstration of how Big Data technologies and infrastructures can be developed and delivered where nuances and correlations between combinations of large-scale and heterogeneous data can be discovered. A front end for visualizing the resultant analyses is also provided. The work is explored across the cities of Australia; however, the solution is generic and can be explored in other contexts and regions.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85094924793
|
A Case Study in Comparative Speech-to-Text Libraries for Use in Transcript Generation for Online Education Recordings
|
With a proliferation of Cloud based Speech-to-Text services it can be difficult to decide where to start and how to make use of these technologies. These include the major Cloud providers as well as several Open Source Speech-to-Text projects available. We desired to investigate a sample of the available libraries and their attributes relating to the recording artifacts that are the by-product of Online Education. The fact that so many resources are available means that the computing and technical barriers for applying speech recognition algorithms have decreased to the point of being a non-factor in the decision to use Speech-to-Text services. New barriers such as price, compute time, and access to the services' source code (software freedom) can be factored into the decision of which platform to use. This case study provides a beginning to developing a test-suite and guide to compare Speech-to-Text libraries and their out-of-the-box accuracy. Our initial test suite employed two models: 1) a Cloud model employing AWS S3 using AWS Transcribe, 2) an on-premises Open Source model that relies on Mozilla's DeepSpeech[1]. We present our findings and recommendations based on the criteria discovered. In order to deliver this test-suite, we also conducted research into the latest web development technologies with emphasis on security. This was done to produce a reliable and secure development process and to provide open access to this proof of concept for further testing and development.
|
[
"Text Generation",
"Speech & Audio in NLP",
"Speech Recognition",
"Multimodality"
] |
[
47,
70,
10,
74
] |
https://aclanthology.org//W98-0510/
|
A Case Study in Implementing Dependency-Based Grammars
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
|
SCOPUS_ID:85091979134
|
A Case Study in Multi-Emotion Classification via Twitter
|
Social media platforms such as Twitter continuously generate tremendous quantities of valuable knowledge about users' perspectives on our global societies. Sentiment analysis plays a vital role in taking advantage of these different perspectives for applications such as political votes, business domains, and financial risks. Most traditional approaches in sentiment analysis predict a single attitude from users' tweets. This is not quite a correct approach, due to the multiple implied feelings in users' tweets towards a specific topic, person, or event. This research presents a hybrid machine learning approach that can predict multiple feelings in the same tweet. It applies two methods: binary relevance based on four machine learning algorithms, in addition to convolutional neural networks. The tweets are preprocessed and converted into feature vectors. Word embeddings, emotion lexicons, and frequency distribution probability are used to extract features from the input tweets. The paper finally presents a case study of two experiments to show the multi-emotion prediction classifiers' workflow on real tweets. The dataset used is from SemEval-2018 Task E-c. The binary relevance method achieves a Hamming score of 0.53, and the convolutional neural network method a score of 0.54.
|
[
"Text Classification",
"Sentiment Analysis",
"Emotion Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
78,
61,
24,
3
] |
https://aclanthology.org//1997.iwpt-1.24/
|
A Case Study in Optimizing Parsing Schemata by Disambiguation Filters
|
Disambiguation methods for context-free grammars enable concise specification of programming languages by ambiguous grammars. A disambiguation filter is a function that selects a subset from the set of possible parse trees for an ambiguous sentence. The framework of filters provides a declarative description of disambiguation methods independent of parsing. Although filters can be implemented straightforwardly as functions that prune the parse forest produced by some generalized parser, this can be too inefficient for practical applications. In this paper the optimization of parsing schemata, a framework for high-level description of parsing algorithms, by disambiguation filters is considered in order to find efficient parsing algorithms for declaratively specified disambiguation methods. As a case study the optimization of the parsing schema of Earley’s parsing algorithm by two filters is investigated. The main result is a technique for generation of efficient LR-like parsers for ambiguous grammars disambiguated by means of priorities.
|
[
"Responsible & Trustworthy NLP",
"Syntactic Parsing",
"Syntactic Text Processing",
"Green & Sustainable NLP"
] |
[
4,
28,
15,
68
] |
http://arxiv.org/abs/1408.5427v1
|
A Case Study in Text Mining: Interpreting Twitter Data From World Cup Tweets
|
Cluster analysis is a field of data analysis that extracts underlying patterns in data. One application of cluster analysis is in text-mining, the analysis of large collections of text to find similarities between documents. We used a collection of about 30,000 tweets extracted from Twitter just before the World Cup started. A common problem with real world text data is the presence of linguistic noise. In our case it would be extraneous tweets that are unrelated to dominant themes. To combat this problem, we created an algorithm that combined the DBSCAN algorithm and a consensus matrix. This way we are left with the tweets that are related to those dominant themes. We then used cluster analysis to find those topics that the tweets describe. We clustered the tweets using k-means, a commonly used clustering algorithm, and Non-Negative Matrix Factorization (NMF) and compared the results. The two algorithms gave similar results, but NMF proved to be faster and provided more easily interpreted results. We explored our results using two visualization tools, Gephi and Wordle.
|
[
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP",
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
81,
4,
3,
29
] |
SCOPUS_ID:85140580922
|
A Case Study of Big Data Technology on Chinese Phonology
|
Chinese, as a language, has developed for thousands of years. Not only have the shape and meaning of the characters changed, but the pronunciation has also changed a lot. In order to deeply reveal the evolution law of Chinese language and pronunciation, and to realize innovation and breakthroughs in Chinese phonology research methods, we use big data technology to compare and analyze the materials of ancient Chinese pronunciation, 'Pingshui Rhyme', with those of modern Chinese pronunciation, 'Chinese New Rhyme'. The experimental results are basically consistent with those of traditional research methods. Moreover, the big data technology is efficient and accurate, which is conducive to discovering deep rules in the study of Chinese language and phonetics.
|
[
"Phonetics",
"Phonology",
"Syntactic Text Processing"
] |
[
64,
6,
15
] |
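One way to make the comparison in the abstract above concrete is to measure, over character pairs, how often rhyming in the ancient system agrees with rhyming in the modern system. The sketch below does this for a handful of characters; the modern group labels are placeholders rather than faithful rhyme-book data, and the real study works at corpus scale.

```python
# Toy sketch: pairwise agreement between an ancient (Pingshui) rhyme grouping
# and a modern grouping. The tiny dictionaries are illustrative placeholders.
from itertools import combinations

pingshui = {"东": "一东", "风": "一东", "中": "一东", "天": "一先", "年": "一先"}
modern   = {"东": "ong部", "风": "ong部", "中": "ong部", "天": "an部", "年": "an部"}

chars = sorted(set(pingshui) & set(modern))
agree = total = 0
for a, b in combinations(chars, 2):
    same_old = pingshui[a] == pingshui[b]   # do a and b rhyme in the ancient system?
    same_new = modern[a] == modern[b]       # do they rhyme in the modern system?
    total += 1
    agree += (same_old == same_new)

print(f"pairwise agreement between the two rhyme systems: {agree}/{total}")
```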
http://arxiv.org/abs/2210.17452v1
|
A Case Study of Chinese Sentiment Analysis on Social Media Reviews Based on LSTM
|
Network public opinion analysis combines natural language processing (NLP) with public opinion monitoring and is crucial for tracking public mood and trends. It can therefore help identify and address potential or emerging social problems. This study aims to analyze Chinese sentiment in social media reviews using a long short-term memory (LSTM) model. The dataset was obtained from Sina Weibo using a web crawler and was cleaned with Pandas. First, Chinese comments regarding the legal sentencing in the Tangshan attack and the Jiang Ge case were segmented and vectorized. Then, a binary LSTM model was trained and tested. Finally, sentiment analysis results were obtained by analyzing the comments with the LSTM model. The accuracy of the proposed model reaches approximately 92%.
|
[
"Language Models",
"Semantic Text Processing",
"Sentiment Analysis"
] |
[
52,
72,
78
] |
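A minimal sketch of the setup described above (word segmentation, integer encoding, a binary LSTM with a sigmoid output), written with jieba and Keras. The two toy comments, the labels, and all hyperparameters are placeholders; the study trained on crawled Weibo comments cleaned with Pandas.

```python
import jieba
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

texts  = ["这个判决太让人愤怒了", "支持法院的公正判决"]
labels = np.array([0, 1])                       # 0 = negative, 1 = positive (toy labels)

# Segment with jieba, then integer-encode and pad to a fixed length.
segmented = [" ".join(jieba.lcut(t)) for t in texts]
tok = Tokenizer(num_words=5000)
tok.fit_on_texts(segmented)
X = pad_sequences(tok.texts_to_sequences(segmented), maxlen=20)

# Binary LSTM classifier: embedding -> LSTM -> sigmoid output.
model = Sequential([
    Embedding(input_dim=5000, output_dim=64),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, batch_size=2, verbose=0)
```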
http://arxiv.org/abs/2106.14332v1
|
A Case Study of LLVM-Based Analysis for Optimizing SIMD Code Generation
|
This paper presents a methodology for using LLVM-based tools to tune the DCA++ (dynamical cluster approximation) application that targets the new ARM A64FX processor. The goal is to describe the changes required for the new architecture and to generate efficient single instruction/multiple data (SIMD) instructions that target the new Scalable Vector Extension instruction set. During manual tuning, the authors used the LLVM tools to improve code parallelization by using OpenMP SIMD, refactored the code and applied transformations that enabled SIMD optimizations, and ensured that the correct libraries were used to achieve optimal performance. By applying these code changes, code speed was increased by 1.98X and 78 GFlops were achieved on the A64FX processor. The authors aim to automate parts of these efforts in the OpenMP Advisor tool, which is built on top of existing and newly introduced LLVM tooling.
|
[
"Programming Languages in NLP",
"Code Generation",
"Text Generation",
"Multimodality"
] |
[
55,
44,
47,
74
] |
https://aclanthology.org//2020.webnlg-1.1/
|
A Case Study of NLG from Multimedia Data Sources: Generating Architectural Landmark Descriptions
|
In this paper, we present a pipeline system that generates architectural landmark descriptions using textual, visual and structured data. The pipeline comprises five main components: (i) a textual analysis component, which extracts information from Wikipedia pages; (ii) a visual analysis component, which extracts information from copyright-free images; (iii) a retrieval component, which gathers relevant (property, subject, object) triples from DBpedia; (iv) a fusion component, which stores the contents from the different modalities in a Knowledge Base (KB) and resolves the conflicts that stem from using different sources of information; (v) an NLG component, which verbalises the resulting contents of the KB. We show that thanks to the addition of other modalities, we can make the verbalisation of DBpedia triples more relevant and/or inspirational.
|
[
"Visual Data in NLP",
"Text Generation",
"Multimodality"
] |
[
20,
47,
74
] |
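A hedged sketch of just the retrieval and verbalisation steps (components iii and v of the pipeline above): query the public DBpedia SPARQL endpoint for a couple of properties of a landmark and render them with a trivial template. The query, the chosen landmark, and the template are illustrative only, not the paper's actual components, and the selected properties may simply return no rows for a given resource.

```python
import requests

ENDPOINT = "https://dbpedia.org/sparql"
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?p ?o WHERE {
  <http://dbpedia.org/resource/Eiffel_Tower> ?p ?o .
  FILTER (?p IN (dbo:architect, dbo:location))
} LIMIT 10
"""

# Retrieval: fetch (property, object) pairs for the landmark from DBpedia.
resp = requests.get(ENDPOINT,
                    params={"query": query,
                            "format": "application/sparql-results+json"},
                    timeout=30)
rows = resp.json()["results"]["bindings"]

def verbalise(prop_uri, obj):
    """NLG step reduced to a single template: 'Its <property> is <object>.'"""
    prop = prop_uri.rsplit("/", 1)[-1]                  # e.g. "architect"
    name = obj.rsplit("/", 1)[-1].replace("_", " ")
    return f"Its {prop} is {name}."

for row in rows:
    print(verbalise(row["p"]["value"], row["o"]["value"]))
```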
https://aclanthology.org//2020.sigdial-1.11/
|
A Case Study of User Communication Styles with Customer Service Agents versus Intelligent Virtual Agents
|
We investigate differences in user communication with live chat agents versus a commercial Intelligent Virtual Agent (IVA). This case study compares the two types of interactions in the same domain for the same company filling the same purposes. We compared 16,794 human-to-human conversations and 27,674 conversations with the IVA. Of those IVA conversations, 8,324 escalated to human live chat agents. We then investigated how human-to-human communication strategies change when users first communicate with an IVA in the same conversation thread. We measured quantity, quality, and diversity of language, and analyzed complexity using numerous features. We find that while the complexity of language did not significantly change between modes, the quantity and some quality metrics did vary significantly. This fair comparison provides unique insight into how humans interact with commercial IVAs and how IVA and chatbot designers might better curate training data when automating customer service tasks.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
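The quantity and diversity measurements mentioned above can be approximated with very simple statistics, for example tokens per conversation and type-token ratio. The sketch below computes them for two invented condition samples; the study's actual feature set is far richer, and the example texts are placeholders rather than real chat data.

```python
from statistics import mean

human_chats = ["hi I need help with my bill it seems way too high this month",
               "can you explain the extra charge please"]
iva_chats   = ["bill too high", "explain charge"]

def metrics(conversations):
    """Mean token count (quantity) and mean type-token ratio (lexical diversity)."""
    per_conv = []
    for text in conversations:
        tokens = text.lower().split()
        ttr = len(set(tokens)) / len(tokens)
        per_conv.append((len(tokens), ttr))
    lengths, ttrs = zip(*per_conv)
    return {"mean_tokens": mean(lengths), "mean_ttr": round(mean(ttrs), 3)}

print("human-to-human:", metrics(human_chats))
print("to IVA:        ", metrics(iva_chats))
```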