id (string, length 20–52) | title (string, length 3–459) | abstract (string, length 0–12.3k) | classification_labels (list) | numerical_classification_labels (list)
---|---|---|---|---|
http://arxiv.org/abs/1902.01069v2
|
A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization
|
We present SQLova, the first Natural-language-to-SQL (NL2SQL) model to achieve human performance on the WikiSQL dataset. We revisit and discuss diverse popular methods in the NL2SQL literature, take full advantage of BERT (Devlin et al., 2018) through an effective table contextualization method, and coherently combine them, outperforming the previous state of the art by 8.2% and 2.5% in logical form and execution accuracy, respectively. We particularly note that BERT with a seq2seq decoder leads to poor performance on the task, indicating the importance of careful design when using such large pretrained models. We also provide a comprehensive analysis of the dataset and our model, which can be helpful for designing future NL2SQL datasets and models. We especially show that our model's performance is near the upper bound in WikiSQL, where we observe that a large portion of the evaluation errors are due to wrong annotations, and our model already exceeds human performance by 1.3% in execution accuracy.
|
[
"Language Models",
"Programming Languages in NLP",
"Semantic Text Processing",
"Structured Data in NLP",
"Multimodality"
] |
[
52,
55,
72,
50,
74
] |
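The SQLova abstract above credits much of the gain to table-aware contextualization: encoding the question jointly with the table headers so BERT can ground column references. A minimal sketch of that kind of input serialization using the Hugging Face `transformers` API (the `[SEP]`-joined layout and the example question are illustrative assumptions, not SQLova's exact scheme):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

question = "What are the points of the South Korea player?"
headers = ["Player", "Country", "Points", "Events"]

# Serialize the question and the table headers into one sequence so
# the encoder can attend across both (table contextualization).
text = question + " [SEP] " + " [SEP] ".join(headers)
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# One contextualized vector per token; downstream heads would score
# header spans for SELECT columns and WHERE conditions.
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```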
SCOPUS_ID:85127900323
|
A Comprehensive Guideline for Bengali Sentiment Annotation
|
Sentiment Analysis (SA) is a Natural Language Processing (NLP) and Information Extraction (IE) task that primarily aims to obtain the writer's feelings, expressed as positive or negative, by analyzing a large number of documents. SA is also widely studied in the fields of data mining, web mining, text mining, and information retrieval. The fundamental task in sentiment analysis is to classify the polarity of a given content as Positive, Negative, or Neutral. Although extensive research has been conducted in this area of computational linguistics, most of the work has been carried out in the context of the English language. However, Bengali sentiment expression has varying degrees of sentiment labels, which can be plausibly distinct from the English language. Therefore, sentiment assessment for the Bengali language undeniably needs to be developed and executed properly. In sentiment analysis, the prediction potential of an automatic model is completely dependent on the quality of dataset annotation. Bengali sentiment annotation is a challenging task due to the diversified structures (syntax) of the language and its different degrees of innate sentiment (i.e., weakly and strongly positive/negative sentiments). Thus, in this article, we propose a novel and precise guideline for researchers, linguistic experts, and referees to annotate Bengali sentences immaculately, with a view to building effective datasets for automatic sentiment prediction.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85090098766
|
A Comprehensive Methodology for Evaluating Conversation-Based Interfaces to Relational Databases (C-BIRDs)
|
Evaluation can be defined as a process of determining the significance of a research output. This is usually done by devising a well-structured study of this output using one or more evaluation measures in which a careful inspection is performed. This paper presents a review of evaluation techniques for Conversational Agents (CAs) and Natural Language Interfaces to Databases (NLIDBs). It then introduces the customized evaluation methodology developed for Conversation-Based Interfaces to Relational Databases (C-BIRDs). The evaluation methodology created has been divided into two groups of measures. The first is based on quantitative measures, including two measures: task success and dialogue length. The second group is based on a number of qualitative measures, including: prototype ease of use, naturalness of system responses, positive/negative emotion, appearance, text on screen, organization of information, and error message clarity. The devised methodology is then elaborated with a discussion and recommendations on the sample size, the experimental setup, and the scaling, in order to provide a comprehensive evaluation methodology for C-BIRDs. In conclusion, the evaluation methodology created is a better way of identifying the strengths and weaknesses of C-BIRDs than the use of single-measure evaluations.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85112759496
|
A Comprehensive Neural Network Model for Colorization and Captioning of Images Along with IoT Deployment
|
The convergence of artificial neural networks and the internet of things (IoT) has gained popularity in the field of computer science research. In this work, an efficient neural network model for the image colorization problem is proposed, along with a method for deploying these models to remote systems using IoT deployment tools. Further, this work proposes two convolutional neural network models, namely the Alpha model and the Beta model, for colorizing grayscale images. An efficient combination of the models is proposed and analyzed such that the loss is minimized to ~0.005. Next, an efficient model for image captioning is proposed based on the bi-directional long short-term memory (LSTM) model. Finally, the work discusses the merits and demerits of deploying the neural network models to remote systems using the AWS Greengrass and Docker IoT environments.
|
[
"Visual Data in NLP",
"Green & Sustainable NLP",
"Captioning",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multimodality"
] |
[
20,
68,
39,
47,
4,
74
] |
SCOPUS_ID:85086138147
|
A Comprehensive Pipeline for Complex Text-to-Image Synthesis
|
Synthesizing a complex scene image with multiple objects and background according to a text description is a challenging problem. It requires solving several difficult tasks across the fields of natural language processing and computer vision. We model it as a combination of semantic entity recognition, object retrieval and recombination, and objects' status optimization. To reach a satisfactory result, we propose a comprehensive pipeline to convert the input text to its visual counterpart. The pipeline includes text processing, foreground object and background scene retrieval, image synthesis using constrained MCMC, and post-processing. Firstly, we roughly divide the objects parsed from the input text into foreground objects and background scenes. Secondly, we retrieve the required foreground objects from a foreground object dataset segmented from the Microsoft COCO dataset, and retrieve an appropriate background scene image from a background image dataset extracted from the Internet. Thirdly, in order to ensure the rationality of the foreground objects' positions and sizes in the image synthesis step, we design a cost function and use the Markov Chain Monte Carlo (MCMC) method as the optimizer to solve this constrained layout problem. Finally, to make the image look natural and harmonious, we further use Poisson-based and relighting-based methods to blend foreground objects and the background scene image in the post-processing step. The synthesized results and comparison results based on the Microsoft COCO dataset show that our method outperforms some of the state-of-the-art methods based on generative adversarial networks (GANs) in the visual quality of generated scene images.
|
[
"Visual Data in NLP",
"Information Retrieval",
"Multimodality"
] |
[
20,
24,
74
] |
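The pipeline above solves object layout as a constrained optimization: a cost function scored by MCMC. A toy Metropolis-style sketch of that idea with invented cost terms and proposal moves (the paper's actual position/size constraints are not reproduced here):

```python
import math
import random

def cost(layout):
    # Illustrative cost: penalize objects drifting from the image
    # center and overlapping one another (stand-ins for the paper's
    # position/size rationality constraints).
    c = 0.0
    for (x, y) in layout:
        c += (x - 0.5) ** 2 + (y - 0.5) ** 2
    for i in range(len(layout)):
        for j in range(i + 1, len(layout)):
            dx = layout[i][0] - layout[j][0]
            dy = layout[i][1] - layout[j][1]
            c += max(0.0, 0.1 - math.hypot(dx, dy))  # overlap penalty
    return c

def mcmc_layout(n_objects=3, steps=5000, temp=0.05):
    layout = [(random.random(), random.random()) for _ in range(n_objects)]
    best, best_c = layout, cost(layout)
    for _ in range(steps):
        # Propose a small random move of one object, clamped to [0, 1].
        k = random.randrange(n_objects)
        x, y = layout[k]
        cand = list(layout)
        cand[k] = (min(1.0, max(0.0, x + random.gauss(0, 0.05))),
                   min(1.0, max(0.0, y + random.gauss(0, 0.05))))
        delta = cost(cand) - cost(layout)
        # Metropolis acceptance: always take improvements, sometimes
        # accept worse layouts to escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            layout = cand
            if cost(layout) < best_c:
                best, best_c = layout, cost(layout)
    return best

print(mcmc_layout())
```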
SCOPUS_ID:85127520490
|
A Comprehensive Review of Arabic Text Summarization
|
The explosion of online and offline data has changed how we gather, evaluate, and understand data. It is frequently difficult and time-consuming to comprehend large text documents and extract crucial information from them. Text summarization techniques address these problems by compressing long texts while retaining their essential contents. These techniques rely on the fast delivery of filtered, high-quality content to their users. Due to the massive amounts of data generated by technology and various sources, automated text summarization of large-scale data is challenging. There are three types of automatic text summarization techniques: extractive, abstractive, and hybrid. Regardless of technique, the generated summaries are a long way from the summaries produced by human experts. Although Arabic is a widely spoken language that is frequently used for content sharing on the web, Arabic text summarization is limited and still immature because of several problems, including the Arabic language's morphological structure, the variety of dialects, and the lack of adequate data sources. This paper reviews text summarization approaches and recent deep learning models for them. Additionally, it reviews existing datasets for these approaches, along with their characteristics and limitations. The most often used metrics for summarization quality evaluation are ROUGE-1, ROUGE-2, ROUGE-L, and BLEU. The challenges encountered by Arabic text summarization methods and approaches, and the solutions proposed in each approach, are analyzed. Many Arabic text summarization methods have problems, such as the lack of golden tokens during testing, out-of-vocabulary (OOV) words, repeated summary sentences, the lack of standard systematic methodologies and architectures, and the complexity of the Arabic language. Finally, providing the required corpora, improving evaluation using semantic representations, addressing the limits of ROUGE metrics in abstractive text summarization, and adopting recent deep learning models in Arabic summarization studies are essential demands.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85123590146
|
A Comprehensive Review of Investor Sentiment Analysis in Stock Price Forecasting
|
Sentiment analysis technologies have a strong impact on financial markets. In recent years there has been increasing interest in analyzing the sentiment of investors. The objective of this paper is to evaluate the current state of the art and synthesize the published literature related to financial sentiment analysis, especially investor sentiment for the prediction of stock prices. Starting from this overview, the paper answers the questions of how, and to what extent, research on investor sentiment analysis and stock price trend forecasting in the financial markets has developed, and which tools are used for these purposes, an area that remains largely unexplored. This paper represents a comprehensive literature-based study of investor sentiment analytics and of machine learning applied to analyzing investor sentiment, its influence on the stock market, and the prediction of stock prices.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85132039681
|
A Comprehensive Review of Stock Price Prediction Using
|
Purpose. In various studies, sentiment analysis is identified as an essential part of stock price behavior prediction. The availability of news and social media networks, and the rapid development of natural language processing methods, have resulted in better forecasting performance. However, there is a lack of a comprehensive framework and review paper addressing the advantages and challenges of this very timely topic. Design/methodology/approach. This paper aims to promote the existing literature in this field by focusing on different aspects of previous studies and presenting an explicit picture of their components. We, furthermore, compare each system with the rest and identify their main differentiating factors. This paper summarizes and systematizes, in a systematic review, studies that seek to predict stock prices based on text mining and sentiment analysis.
|
[
"Sentiment Analysis"
] |
[
78
] |
http://arxiv.org/abs/2207.02160v1
|
A Comprehensive Review of Visual-Textual Sentiment Analysis from Social Media Networks
|
Social media networks have become a significant aspect of people's lives, serving as a platform for their ideas, opinions and emotions. Consequently, automated sentiment analysis (SA) is critical for recognising people's feelings in ways that other information sources cannot. The analysis of these feelings revealed various applications, including brand evaluations, YouTube film reviews and healthcare applications. As social media continues to develop, people post a massive amount of information in different forms, including text, photos, audio and video. Thus, traditional SA algorithms have become limited, as they do not consider the expressiveness of other modalities. By including such characteristics from various material sources, these multimodal data streams provide new opportunities for optimising the expected results beyond text-based SA. Our study focuses on the forefront field of multimodal SA, which examines visual and textual data posted on social media networks. Many people are more likely to utilise this information to express themselves on these platforms. To serve as a resource for academics in this rapidly growing field, we introduce a comprehensive overview of textual and visual SA, including data pre-processing, feature extraction techniques, sentiment benchmark datasets, and the efficacy of multiple classification methodologies suited to each field. We also provide a brief introduction of the most frequently utilised data fusion strategies and a summary of existing research on visual-textual SA. Finally, we highlight the most significant challenges and investigate several important sentiment applications.
|
[
"Visual Data in NLP",
"Sentiment Analysis",
"Multimodality"
] |
[
20,
78,
74
] |
http://arxiv.org/abs/2103.14785v3
|
A Comprehensive Review of the Video-to-Text Problem
|
Research in the Vision and Language area encompasses challenging topics that seek to connect visual and textual information. When the visual information is related to videos, this takes us into Video-Text Research, which includes several challenging tasks such as video question answering, video summarization with natural language, and video-to-text and text-to-video conversion. This paper reviews the video-to-text problem, in which the goal is to associate an input video with its textual description. This association can be mainly made by retrieving the most relevant descriptions from a corpus or generating a new one given a context video. These two ways represent essential tasks for Computer Vision and Natural Language Processing communities, called text retrieval from video task and video captioning/description task. These two tasks are substantially more complex than predicting or retrieving a single sentence from an image. The spatiotemporal information present in videos introduces diversity and complexity regarding the visual content and the structure of associated language descriptions. This review categorizes and describes the state-of-the-art techniques for the video-to-text problem. It covers the main video-to-text methods and the ways to evaluate their performance. We analyze twenty-six benchmark datasets, showing their drawbacks and strengths for the problem requirements. We also show the progress that researchers have made on each dataset, we cover the challenges in the field, and we discuss future research directions.
|
[
"Visual Data in NLP",
"Captioning",
"Text Generation",
"Information Retrieval",
"Multimodality"
] |
[
20,
39,
47,
24,
74
] |
SCOPUS_ID:85135848496
|
A Comprehensive Review on Automatic Image Captioning Using Deep Learning
|
Image captioning is the process of producing a sentence description from an image. It consists of two components: an image-based model and a language-based model. The image-based model extracts the features of the input image; the features and objects obtained from the image-based model are translated into a natural sentence by the language-based model. Image captioning is a challenging task, as it has to determine the objects in the image and also capture and express their attributes and relationships in natural language. Deep learning-based techniques pave the way for handling the challenges and complexities of image captioning. The goal is to present a literature review of machine learning and deep learning-based image captioning techniques and to discuss their performance, strengths, and weaknesses. This paper also compares deep learning-based techniques, along with image and language models, with respect to evaluation metrics like BLEU-1, BLEU-2, and METEOR on the Flickr8k and MSCOCO datasets.
|
[
"Visual Data in NLP",
"Captioning",
"Text Generation",
"Multimodality"
] |
[
20,
39,
47,
74
] |
SCOPUS_ID:85120045642
|
A Comprehensive Review on Fake News Detection with Deep Learning
|
A prominent issue of the present time is that organizations from different domains are struggling to obtain effective solutions for detecting online fake news. It is quite challenging to distinguish fake information on the internet, as it is often written to deceive users. Compared with many machine learning techniques, deep learning-based techniques are capable of detecting fake news more accurately. Previous review papers were based on data mining and machine learning techniques, scarcely exploring deep learning techniques for fake news detection. However, emerging deep learning-based approaches such as Attention, Generative Adversarial Networks, and Bidirectional Encoder Representations from Transformers are absent from previous surveys. This study attempts to investigate advanced and state-of-the-art fake news detection mechanisms in depth. We begin by highlighting the consequences of fake news. Then, we proceed with a discussion of the datasets used in previous research and their NLP techniques. A comprehensive overview of deep learning-based techniques is presented, organizing representative methods into various categories. The prominent evaluation metrics in fake news detection are also discussed. Finally, we suggest further recommendations for improving fake news detection mechanisms in future research directions.
|
[
"Reasoning",
"Fact & Claim Verification",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
8,
46,
17,
4
] |
SCOPUS_ID:85149665064
|
A Comprehensive Review on Image Captioning Using Deep Learning
|
Our brain is capable of annotating or classifying any image that emerges in front of us. What about computers, though? How can a computer process an image and identify it with a caption that is both relevant and accurate? This appeared unachievable a few years ago, but with the advancement of Computer Vision and Deep Learning algorithms, as well as the availability of appropriate datasets and AI models, building a relevant caption generator for an image is becoming easier. Caption generation is also becoming a booming business around the world, with numerous data annotation companies making billions. This image caption generation process transforms a series of pixels into a series of words. Image captioning can be thought of as an end-to-end sequence-to-sequence challenge. To achieve this goal, it is necessary to process both the words and the visuals. In this paper, we also review the feature vectors obtained by using recurrent neural networks for the language component and convolutional neural networks for the image component.
|
[
"Visual Data in NLP",
"Captioning",
"Text Generation",
"Multimodality"
] |
[
20,
39,
47,
74
] |
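Both captioning reviews above describe the standard encoder-decoder recipe: a convolutional network encodes the image into a feature vector, and a recurrent network decodes it into words. A minimal PyTorch sketch of that architecture (layer sizes and vocabulary are illustrative assumptions):

```python
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Image side: project CNN features (e.g., from a pretrained
        # ResNet) into the decoder's embedding space.
        self.img_proj = nn.Linear(feat_dim, embed_dim)
        # Language side: embed previous words and decode with an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, captions):
        # Prepend the image feature as the first "token" of the sequence.
        img = self.img_proj(img_feats).unsqueeze(1)      # (B, 1, E)
        words = self.embed(captions)                     # (B, T, E)
        seq = torch.cat([img, words], dim=1)             # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                          # word logits

model = CaptionModel()
feats = torch.randn(2, 2048)             # stand-in for CNN features
caps = torch.randint(0, 10000, (2, 12))  # stand-in token ids
print(model(feats, caps).shape)          # torch.Size([2, 13, 10000])
```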
http://arxiv.org/abs/2011.14752v1
|
A Comprehensive Review on Recent Methods and Challenges of Video Description
|
Video description involves the generation of natural language descriptions of actions, events, and objects in a video. Video description has various applications: filling the gap between language and vision for visually impaired people, generating automatic title suggestions based on content, content-based browsing of videos, and video-guided machine translation [86], etc. In the past decade, several works have been done in this field in terms of approaches/methods for video description, evaluation metrics, and datasets. For analyzing the progress in the video description task, a comprehensive survey is needed that covers all the phases of video description approaches with a special focus on recent deep learning approaches. In this work, we report a comprehensive survey on the phases of video description approaches, the datasets for video description, evaluation metrics, open competitions for motivating research on video description, open challenges in this field, and future research directions. In this survey, we cover the state-of-the-art approaches proposed for each dataset with their pros and cons. For the growth of this research domain, the availability of numerous benchmark datasets is a basic need. Further, we categorize all the datasets into two classes: open-domain datasets and domain-specific datasets. From our survey, we observe that work in this field is in fast-paced development, since the task of video description falls at the intersection of computer vision and natural language processing. Still, work on video description is far from the saturation stage due to various challenges, such as redundancy from similar frames (which affects the quality of visual features), the availability of datasets containing more diverse content, and the availability of an effective evaluation metric.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85145773557
|
A Comprehensive Review on Speaker Recognition
|
Speech is the most natural mode of human communication. In addition to the exchange of thoughts and ideas, speech is considered to be useful for extracting a lot of other information like language identity, gender, age, emotion, and cognitive behavior. It is also known to contain speaker identity information besides other useful information. The task of recognizing the identity of an individual from the para-linguistic cues present in his or her speech signal is known as speaker recognition. It finds numerous applications across different fields like biometrics, forensics, and access control systems. Research in this field has been carried out over several decades, focusing on various aspects like features, modeling techniques, and scoring. The significant advancements in the fields of machine learning and deep learning in recent times have developed renewed interest among researchers in this area. This chapter presents a comprehensive literature review on speaker recognition, with emphasis on the text-dependent case wherein a predefined text is used for authentication purposes. It discusses feature extraction and modeling techniques from the earliest to the newest. It also surveys the different deep learning architectures that have resulted in state-of-the-art systems, having received impetus from the availability of increased data and high computational power.
|
[
"Multimodality",
"Speech & Audio in NLP",
"Information Extraction & Text Mining"
] |
[
74,
70,
3
] |
http://arxiv.org/abs/2109.10118v1
|
A Comprehensive Review on Summarizing Financial News Using Deep Learning
|
Investors make investment decisions depending on several factors such as fundamental analysis, technical analysis, and quantitative analysis. Another factor on which investors can base investment decisions is sentiment analysis of news headlines, which is the sole purpose of this study. Natural Language Processing techniques are typically used to deal with such a large amount of data and get valuable information out of it. NLP algorithms convert raw text into numerical representations that machines can easily understand and interpret. This conversion can be done using various embedding techniques. In this research, the embedding techniques used are BoW, TF-IDF, Word2Vec, BERT, GloVe, and FastText, which are then fed to deep learning models such as RNN and LSTM. This work aims to evaluate these models' performance to choose the most robust model for identifying the significant factors influencing the prediction. During this research, it was expected that Deep Learning would be applied to get the desired results or achieve better accuracy than the state of the art. The models are compared on their outputs to determine which one performs better.
|
[
"Semantic Text Processing",
"Representation Learning",
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
72,
12,
30,
47,
3
] |
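The study above first converts headlines into numeric vectors with embedding techniques such as BoW and TF-IDF before feeding a deep model. A small sketch of that first stage using scikit-learn's TF-IDF vectorizer (the example headlines are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented example headlines standing in for a financial news corpus.
headlines = [
    "Shares rally as quarterly earnings beat expectations",
    "Regulator opens probe, stock slides in early trading",
    "Company announces dividend increase and buyback plan",
]

# TF-IDF turns raw text into numeric vectors that a downstream
# RNN/LSTM (or any classifier) can consume.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(headlines)

print(X.shape)                          # (3, vocabulary size)
print(vectorizer.get_feature_names_out()[:5])
```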
SCOPUS_ID:85119015084
|
A Comprehensive Review on Text to Indian Sign Language Translation Systems
|
Language is the primary means of communication used by every individual. It is a tool to express ideas and emotions. It shapes thoughts and carries meanings. Indian Sign Language (ISL), used by the Deaf community in India, does have linguistic constituents and structural properties. Natural language processing is the area of computer science and linguistics dealing with the relationship between computers and human language. It processes data through lexical analysis, syntax analysis, semantic analysis, discourse processing, and pragmatic analysis. In determining the meaning of a sentence, it is critical to analyze the syntactic structure. In this paper, current computer sign language translators are considered and their pros and cons are identified and discussed. The general approaches followed by the systems are discussed. A new approach for the construction of sign language output is proposed, resulting in an increase in the accuracy of the system in translating input phrases.
|
[
"Text Generation",
"Machine Translation",
"Syntactic Text Processing",
"Multilinguality"
] |
[
47,
51,
15,
0
] |
SCOPUS_ID:85126431218
|
A Comprehensive Study and Detailed Review on Hate Speech Classification : A Systematic Analysis
|
Hate speech is about making insults, threats, or stereotypes towards a person or a group of people because of characteristics such as origin, race, gender, religion, or disabilities. Modern society uses social networking websites for sharing thoughts and emotions; however, sometimes this can lead to hate speech. Hate speech is a severe issue, as it may lead to repulsive outcomes. It is a critical element of social media, and there are various types of hate speech due to the ambiguity of sentences. Traditional methods give significant accuracy for classifying hate speech, and in recent years state-of-the-art methods have succeeded in detecting and recognizing it. In this paper, we investigate the various research works that have been accomplished so far for hate speech classification.
|
[
"Text Classification",
"Ethical NLP",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
17,
4,
24,
3
] |
http://arxiv.org/abs/1606.06871v2
|
A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition
|
We present a comprehensive study of deep bidirectional long short-term memory (LSTM) recurrent neural network (RNN) based acoustic models for automatic speech recognition (ASR). We study the effect of size and depth and train models of up to 8 layers. We investigate the training aspect and study different variants of optimization methods, batching, truncated backpropagation, different regularization techniques such as dropout and $L_2$ regularization, and different gradient clipping variants. The major part of the experimental analysis was performed on the Quaero corpus. Additional experiments were also performed on the Switchboard corpus. Our best LSTM model has a relative improvement in word error rate of over 14% compared to our best feed-forward neural network (FFNN) baseline on the Quaero task. On this task, we get our best result with an 8-layer bidirectional LSTM, and we show that a pretraining scheme with layer-wise construction helps for deep LSTMs. Finally, we compare the training computation time of many of the presented experiments in relation to recognition performance. All the experiments were done with RETURNN, the RWTH extensible training framework for universal recurrent neural networks, in combination with RASR, the RWTH ASR toolkit.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
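The study above varies depth, dropout, $L_2$ regularization, and gradient clipping for bidirectional LSTM acoustic models. A compact PyTorch sketch showing where each of those training knobs lives (sizes and targets are illustrative, not the paper's Quaero configuration, and RETURNN/RASR are not used here):

```python
import torch
import torch.nn as nn

# An 8-layer bidirectional LSTM over acoustic feature frames,
# with dropout between layers, as studied in the paper.
acoustic = nn.LSTM(input_size=40, hidden_size=512, num_layers=8,
                   bidirectional=True, dropout=0.2, batch_first=True)
classifier = nn.Linear(2 * 512, 4000)  # e.g., tied-triphone state targets

# L2 regularization enters through the optimizer's weight decay.
params = list(acoustic.parameters()) + list(classifier.parameters())
opt = torch.optim.SGD(params, lr=1e-3, weight_decay=1e-5)

x = torch.randn(4, 100, 40)                 # (batch, frames, features)
targets = torch.randint(0, 4000, (4, 100))  # per-frame state labels

hidden, _ = acoustic(x)
loss = nn.functional.cross_entropy(
    classifier(hidden).transpose(1, 2), targets)
loss.backward()

# Gradient clipping, another of the studied training variants.
torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)
opt.step()
print(float(loss))
```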
http://arxiv.org/abs/2212.12799v1
|
A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models
|
Objective. Chemical named entity recognition (NER) models have the potential to impact a wide range of downstream tasks, from identifying adverse drug reactions to general pharmacoepidemiology. However, it is unknown whether these models work the same for everyone. Performance disparities can potentially cause harm rather than the intended good. Hence, in this paper, we measure gender-related performance disparities of chemical NER systems. Materials and Methods. We develop a framework to measure gender bias in chemical NER models using synthetic data and a newly annotated dataset of over 92,405 words with self-identified gender information from Reddit. We applied and evaluated state-of-the-art biomedical NER models. Results. Our findings indicate that chemical NER models are biased. The results of the bias tests on the synthetic dataset and the real-world data reveal multiple fairness issues. For example, for synthetic data, we find that female-related names are generally classified as chemicals, particularly in datasets containing many brand names rather than standard ones. For both datasets, we find consistent fairness issues resulting in substantial performance disparities between female- and male-related data. Discussion. Our study highlights the issue of biases in chemical NER models. For example, we find that many systems cannot detect contraceptives (e.g., birth control). Conclusion. Chemical NER models are biased and can be harmful to female-related groups. Therefore, practitioners should carefully consider the potential biases of these models and take steps to mitigate them.
|
[
"Responsible & Trustworthy NLP",
"Named Entity Recognition",
"Ethical NLP",
"Information Extraction & Text Mining"
] |
[
4,
34,
17,
3
] |
SCOPUS_ID:85111347104
|
A Comprehensive Study of Machine Translation Tools and Evaluation Metrics
|
In this article, the ideas of statistical and neural machine translation approaches are explored. Various machine translation tools and machine translation evaluation metrics are also investigated. Nowadays, machine translation plays a key role in societies where different languages are spoken, as it removes the language barrier and digital divide by providing access to all information in a local language a person can understand. Machine translation has gone through different phases in its evolution, with different approaches followed in each phase, some requiring an enormous amount of parallel corpus, which is considered a crucial element of machine translation. In the proposed system, several parameters are examined to carry out the analysis of various translation tools, and evaluation metrics available for assessing the quality of machine translation are also covered.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85136206191
|
A Comprehensive Study of Optical Character Recognition
|
In recent decades, character recognition has become one of the most important research topics for computer vision researchers and scientists. One of the major techniques for character recognition is optical character recognition (OCR), which in recent years has played a vital role in the development of various methodologies for recognizing characters from the alphabets of several languages. Currently, OCR technology is utilized by most document-scanning applications to make documents readable for users, for example Google Translate, which translates text from one language to another. However, the rate of accuracy and the time taken to perform the task are still problems. This paper presents a brief description of OCR technology, the timeline of its development, some major applications of this technology, and its future perspective in our daily life. Moreover, this article provides an overview of this fascinating research topic for early-stage researchers in computer vision.
|
[
"Visual Data in NLP",
"Machine Translation",
"Multimodality",
"Text Generation",
"Multilinguality"
] |
[
20,
51,
74,
47,
0
] |
SCOPUS_ID:85131120411
|
A Comprehensive Study of Open-Source Libraries for Named Entity Recognition on Handwritten Historical Documents
|
In this paper, we propose an evaluation of several state-of-the-art open-source natural language processing (NLP) libraries for named entity recognition (NER) on handwritten historical documents: spaCy, Stanza and Flair. The comparison is carried out on three low-resource multilingual datasets of handwritten historical documents: HOME (a multilingual corpus of medieval charters), Balsac (a corpus of parish records from Quebec), and Esposalles (a corpus of marriage records in Catalan). We study the impact of the document recognition processes (text line detection and handwriting recognition) on the performance of the NER. We show that current off-the-shelf NER libraries yield state-of-the-art results, even on low-resource languages or multilingual documents using multilingual models. We show, in an end-to-end evaluation, that text line detection errors have a greater impact than handwriting recognition errors. Finally, we also report state-of-the-art results on the public Esposalles dataset.
|
[
"Multilinguality",
"Low-Resource NLP",
"Information Extraction & Text Mining",
"Named Entity Recognition",
"Responsible & Trustworthy NLP"
] |
[
0,
80,
3,
34,
4
] |
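The comparison above runs off-the-shelf NER libraries such as spaCy, Stanza, and Flair over recognized text. A minimal example of the spaCy side of such a pipeline (the sentence is an invented stand-in for a line of recognized handwriting; the HOME/Balsac/Esposalles corpora would need multilingual or domain-adapted models, and `en_core_web_sm` must be installed):

```python
import spacy

# Off-the-shelf English pipeline; swap in a multilingual or
# domain-adapted model for historical documents.
nlp = spacy.load("en_core_web_sm")

# Invented example standing in for a recognized text line.
doc = nlp("Jean Tremblay married Marie Gagnon in Quebec in 1784.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g., PERSON, GPE, DATE
```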
http://arxiv.org/abs/2303.08302v2
|
A Comprehensive Study on Post-Training Quantization for Large Language Models
|
Post-training quantization (PTQ) has recently been shown to be a promising method to reduce memory consumption and/or compute cost for large language models. However, a comprehensive study of the effect of different quantization schemes, different model families, different PTQ methods, different quantization bit precisions, etc., is still missing. In this work, we provide an extensive study of these components over tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and PTQ methods (instead of naive round-to-nearest quantization) are necessary to achieve good accuracy and (2) higher bit precision (e.g., 5 bits) with coarse-grained quantization is more powerful than lower bit precision (e.g., 4 bits) with very fine-grained quantization (whose effective bit precision is similar to 5 bits). We also present recommendations on how to utilize quantization for LLMs of different sizes, and leave suggestions of future opportunities and system work that are not resolved in this work.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
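The first finding above contrasts naive round-to-nearest quantization with fine-grained (per-group) schemes. A small NumPy sketch of symmetric round-to-nearest quantization at per-tensor versus per-group granularity (the bit width and group size are illustrative assumptions):

```python
import numpy as np

def quantize_rtn(w, bits=4):
    # Symmetric round-to-nearest: one scale for the whole tensor.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

def quantize_grouped(w, bits=4, group=8):
    # Fine-grained variant: a separate scale per group of weights,
    # which tracks local magnitudes and usually lowers error.
    w = w.reshape(-1, group)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    return (np.round(w / scale).clip(-qmax, qmax) * scale).ravel()

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
for name, q in [("per-tensor", quantize_rtn(w)),
                ("per-group ", quantize_grouped(w))]:
    print(name, "MSE:", float(np.mean((w - q) ** 2)))
```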
SCOPUS_ID:85132339575
|
A Comprehensive Survey for Non-Intrusive Load Monitoring
|
Energy saving and efficiency are as important as benefiting from new energy sources to supply the globally increasing energy demand. Energy demand and resources for energy saving should be managed effectively. Therefore, electrical loads need to be monitored and controlled. Demand-side energy management plays a vital role in achieving this objective. Energy management systems schedule an optimal operation program for these loads by obtaining more accurate and precise residential and commercial load information. Different intelligent measurement applications and machine learning algorithms have been proposed for the measurement and control of electrical devices/loads used in buildings. Of these, non-intrusive load monitoring (NILM) is widely used to monitor loads and gather precise information about devices without affecting consumers. NILM is a load monitoring method that uses a total power or current signal taken from a single point in residential and commercial buildings. Therefore, its installation and maintenance costs are low compared to other load monitoring methods. This method consists of signal processing and machine learning processes such as event detection (optional), feature extraction, and device identification after the total power or current signal is acquired. Up to now, many techniques have been proposed for each process in the literature. In this paper, techniques used in NILM systems are classified and a comprehensive review is presented.
|
[
"Event Extraction",
"Information Extraction & Text Mining"
] |
[
31,
3
] |
http://arxiv.org/abs/2303.04226v1
|
A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT
|
Recently, ChatGPT, along with DALL-E-2 and Codex, has been gaining significant attention from society. As a result, many individuals have become interested in related resources and are seeking to uncover the background and secrets behind its impressive performance. In fact, ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC), which involves the creation of digital content, such as images, music, and natural language, through AI models. The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace. AIGC is achieved by extracting and understanding intent information from instructions provided by humans, and generating the content according to its knowledge and the intent information. In recent years, large-scale models have become increasingly important in AIGC as they provide better intent extraction and, thus, improved generation results. With the growth of data and model size, the distribution that a model can learn becomes more comprehensive and closer to reality, leading to more realistic and high-quality content generation. This survey provides a comprehensive review of the history of generative models and their basic components, and of recent advances in AIGC from the perspectives of unimodal and multimodal interaction. From the perspective of unimodality, we introduce the generation tasks and related models for text and image. From the perspective of multimodality, we introduce the cross-applications between the modalities mentioned above. Finally, we discuss the existing open problems and future challenges in AIGC.
|
[
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Multimodality"
] |
[
20,
52,
72,
11,
38,
74
] |
SCOPUS_ID:85122074392
|
A Comprehensive Survey of Grammatical Error Correction
|
Grammatical error correction (GEC) is an important application of natural language processing techniques, and GEC systems are a very important kind of intelligent system that has long been explored in both academic and industrial communities. The past decade has witnessed significant progress in GEC owing to the increasing popularity of machine learning and deep learning. However, there has been no survey that untangles the large amount of research work and progress in this field. We present the first survey of GEC, a comprehensive retrospective of the literature in this area. We first give the definition of the GEC task and introduce the public datasets and data annotation schema. After that, we discuss six kinds of basic approaches, six commonly applied performance-boosting techniques for GEC systems, and three data augmentation methods. Since GEC is typically viewed as a sister task of Machine Translation (MT), we put more emphasis on the statistical machine translation (SMT)-based and neural machine translation (NMT)-based approaches, for the sake of their importance. Similarly, some performance-boosting techniques adapted from MT have been successfully combined with GEC systems to enhance final performance. More importantly, after introducing evaluation in GEC, we make an in-depth analysis based on empirical results, covering GEC approaches and GEC systems, for a clearer picture of progress in GEC, with error type analysis and system recapitulation clearly presented. Finally, we discuss five prospective directions for future GEC research.
|
[
"Text Error Correction",
"Machine Translation",
"Syntactic Text Processing",
"Text Generation",
"Multilinguality"
] |
[
26,
51,
15,
47,
0
] |
http://arxiv.org/abs/2001.01115v2
|
A Comprehensive Survey of Multilingual Neural Machine Translation
|
We present a survey on multilingual neural machine translation (MNMT), which has gained a lot of traction in recent years. MNMT has been useful in improving translation quality as a result of translation knowledge transfer (transfer learning). MNMT is more promising and interesting than its statistical machine translation counterpart because end-to-end modeling and distributed representations open new avenues for research on machine translation. Many approaches have been proposed in order to exploit multilingual parallel corpora for improving translation quality. However, the lack of a comprehensive survey makes it difficult to determine which approaches are promising and hence deserve further exploration. In this paper, we present an in-depth survey of the existing literature on MNMT. We first categorize various approaches based on their central use case and then further categorize them based on resource scenarios, underlying modeling principles, core issues, and challenges. Wherever possible, we address the strengths and weaknesses of several techniques by comparing them with each other. We also discuss the future directions that MNMT research might take. This paper is aimed at both beginners and experts in NMT. We hope this paper will serve as a starting point as well as a source of new ideas for researchers and engineers interested in MNMT.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
http://arxiv.org/abs/2208.05757v1
|
A Comprehensive Survey of Natural Language Generation Advances from the Perspective of Digital Deception
|
In recent years there has been substantial growth in the capabilities of systems designed to generate text that mimics the fluency and coherence of human language. Building on this, there has been considerable research examining the potential uses of these natural language generators (NLG) across a wide number of tasks. The increasing capability of powerful text generators to mimic human writing convincingly raises the potential for deception and other forms of dangerous misuse. As these systems improve, and it becomes ever harder to distinguish between human-written and machine-generated text, malicious actors could leverage these powerful NLG systems for a wide variety of ends, including the creation of fake news and misinformation, the generation of fake online product reviews, or via chatbots as a means of convincing users to divulge private information. In this paper, we provide an overview of the NLG field via the identification and examination of 119 survey-like papers focused on NLG research. From these identified papers, we outline a proposed high-level taxonomy of the central concepts that constitute NLG, including the methods used to develop generalised NLG systems, the means by which these systems are evaluated, and the popular NLG tasks and subtasks that exist. In turn, we provide an overview and discussion of each of these items with respect to current research and offer an examination of the potential roles of NLG in deception and detection systems to counteract these threats. Moreover, we discuss the broader challenges of NLG, including the risks of bias that are often exhibited by existing text generation systems. This work offers a broad overview of the field of NLG with respect to its potential for misuse, aiming to provide a high-level understanding of this rapidly developing area of research.
|
[
"Text Generation"
] |
[
47
] |
SCOPUS_ID:85125292362
|
A Comprehensive Survey of Sentiment Analysis Based on User Opinion
|
In this modern era, online shopping is getting a lot of attention. Thousands of reviews are available from customers on different social media platforms, which makes it difficult for a user to make a purchasing decision. For a better understanding of user opinion, sentiment analysis (also known as opinion mining) is conducted, which has a major effect on the purchasing decision of the user. Opinion mining is defined in terms of entities, emotions, and textual relationships. User opinions on e-commerce websites or social media apps have a huge impact on product stakeholders. Over the past decades, researchers, the public sector, and the service industry have carried out opinion mining to extract and examine community sentiments and opinions. This paper presents a survey of recent studies conducted on sentiment analysis based on user opinion through machine learning techniques (focusing on supervised, semi-supervised, reinforcement, and unsupervised learning) and deep learning techniques (focusing on CNN, RNN, and LSTM), and provides the background knowledge.
|
[
"Opinion Mining",
"Sentiment Analysis"
] |
[
49,
78
] |
http://arxiv.org/abs/2006.04611v1
|
A Comprehensive Survey on Aspect Based Sentiment Analysis
|
Aspect Based Sentiment Analysis (ABSA) is the sub-field of Natural Language Processing that deals with essentially splitting data into aspects and finally extracting the sentiment information. ABSA is known to provide more information about the context than general sentiment analysis. In this study, our aim is to explore the various methodologies practiced while performing ABSA and to provide a comparative study. This survey paper discusses various solutions in depth, gives a comparison between them, and is conveniently divided into sections to give a holistic view of the process.
|
[
"Aspect-based Sentiment Analysis",
"Sentiment Analysis"
] |
[
23,
78
] |
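The abstract above defines ABSA as splitting text into aspects and extracting the sentiment for each one. A deliberately tiny lexicon-and-window sketch of that idea (the lexicons, window size, and example sentence are invented; real ABSA systems use trained models):

```python
# Toy lexicon-based ABSA: find aspect terms, then score the sentiment
# words in a small window around each one.
POS = {"great", "excellent", "fast", "friendly"}
NEG = {"slow", "terrible", "rude", "bland"}
ASPECTS = {"battery", "screen", "service", "food"}

def absa(sentence):
    tokens = sentence.lower().replace(",", " ").split()
    results = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            # Look at a small window around the aspect term.
            window = tokens[max(0, i - 2): i + 3]
            score = (sum(w in POS for w in window)
                     - sum(w in NEG for w in window))
            results[tok] = ("positive" if score > 0
                            else "negative" if score < 0 else "neutral")
    return results

print(absa("The battery is great but the service was terrible"))
# {'battery': 'positive', 'service': 'negative'}
```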
http://arxiv.org/abs/1607.06215v1
|
A Comprehensive Survey on Cross-modal Retrieval
|
In recent years, cross-modal retrieval has drawn much attention due to the rapid growth of multimodal data. It takes one type of data as the query to retrieve relevant data of another type. For example, a user can use a text to retrieve relevant pictures or videos. Since the query and its retrieved results can be of different modalities, how to measure the content similarity between different modalities of data remains a challenge. Various methods have been proposed to deal with such a problem. In this paper, we first review a number of representative methods for cross-modal retrieval and classify them into two main groups: 1) real-valued representation learning, and 2) binary representation learning. Real-valued representation learning methods aim to learn real-valued common representations for different modalities of data. To speed up the cross-modal retrieval, a number of binary representation learning methods are proposed to map different modalities of data into a common Hamming space. Then, we introduce several multimodal datasets in the community, and show the experimental results on two commonly used multimodal datasets. The comparison reveals the characteristic of different kinds of cross-modal retrieval methods, which is expected to benefit both practical applications and future research. Finally, we discuss open problems and future research directions.
|
[
"Multimodality",
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning"
] |
[
74,
72,
24,
12
] |
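The survey above divides cross-modal retrieval into real-valued and binary (Hamming-space) representation learning, the latter chosen for speed. A tiny NumPy sketch of the retrieval step once binary codes exist (random codes stand in for learned hash codes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for learned binary codes in a shared Hamming space:
# one query (e.g., a text) and a gallery of items (e.g., images).
query = rng.integers(0, 2, 64, dtype=np.uint8)
gallery = rng.integers(0, 2, (1000, 64), dtype=np.uint8)

# Hamming distance = number of differing bits.
dists = np.count_nonzero(gallery != query, axis=1)

# Retrieve the 5 nearest cross-modal neighbours.
top5 = np.argsort(dists)[:5]
print(top5, dists[top5])
```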
SCOPUS_ID:85064552109
|
A Comprehensive Survey on Extractive and Abstractive Techniques for Text Summarization
|
Over the years, as technology has advanced, the amount of data generated during simulations and processing has been constantly increasing. Techniques for creating synopses of this massively generated data have been at the forefront of research in recent times. Text summarization is one such aspect of this research, focused on representing the idea of a context in a short form. Efforts have been put into creating systems able to generate effective summaries providing an overview of all the ideas represented by an article. Text summarization techniques can be broadly classified into extractive and abstractive techniques. The paper compares the prevailing systems and their shortcomings, and the combinations of technologies used to achieve improved results. The paper also draws attention to the state-of-the-art standardized datasets used in developing summarization systems. Finally, the paper focuses on the testing parameters and techniques used to assess the efficiency of summarization systems.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
http://arxiv.org/abs/2212.04072v1
|
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches
|
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires disjoint pieces of information across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have focused on this topic in recent years; therefore, it is necessary and worthwhile to review the related literature. This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. First, the multi-hop MRC problem definition is introduced; then the 31 models are reviewed in detail with a strong focus on their multi-hop aspects. They are also categorized based on their main techniques. Finally, a fine-grained comprehensive comparison of the models and techniques is presented.
|
[
"Reasoning",
"Machine Reading Comprehension"
] |
[
8,
37
] |
http://arxiv.org/abs/2212.04070v1
|
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Datasets and Metrics
|
Multi-hop machine reading comprehension is a challenging task with the aim of answering a question based on disjoint pieces of information across different passages. Evaluation metrics and datasets are a vital part of multi-hop MRC, because it is not possible to train and evaluate models without them; moreover, the challenges posed by datasets are often an important motivation for improving existing models. Due to the increasing attention to this field, it is necessary and worthwhile to review them in detail. This study presents a comprehensive survey of recent advances in multi-hop MRC evaluation metrics and datasets. First, the multi-hop MRC problem definition is presented; then the evaluation metrics are investigated based on their multi-hop aspects. In addition, 15 multi-hop datasets from 2017 to 2022 are reviewed in detail, and a comprehensive analysis is provided at the end. Finally, open issues in this field are discussed.
|
[
"Reasoning",
"Machine Reading Comprehension"
] |
[
8,
37
] |
SCOPUS_ID:85134571192
|
A Comprehensive Survey on Multilingual Opinion Mining
|
In the current scenario, the use of multimedia and gadgets has increased the usage of social websites and the Internet. Twitter, Facebook, Instagram, Telegram, and WhatsApp are the most commonly used platforms in the Internet community. Sharing reviews, feedback, and personal experiences is among the most common tasks on social media. Such data is available on the Internet in an unorganized and immeasurable manner. Opinion mining can be carried out on such data. Most analyzers work on the analysis of Chinese and English sentiment, but the data available on the Internet is also in other languages, which need to be analyzed. The main purpose of this paper is to discuss the different frameworks, algorithms, opinion mining processes, classification techniques, evaluation methods, and limitations faced by analyzers while carrying out sentiment analysis on different languages.
|
[
"Opinion Mining",
"Sentiment Analysis",
"Multilinguality"
] |
[
49,
78,
0
] |
http://arxiv.org/abs/2302.09419v1
|
A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT
|
Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A pretrained foundation model, such as BERT, GPT-3, MAE, DALL-E, or ChatGPT, is trained on large-scale data, which provides a reasonable parameter initialization for a wide range of downstream applications. The idea of pretraining behind PFMs plays an important role in the application of large models. Different from previous methods that apply convolution and recurrent modules for feature extraction, the generative pre-training (GPT) method applies the Transformer as the feature extractor and is trained on large datasets with an autoregressive paradigm. Similarly, BERT applies Transformers to train on large datasets as a contextual language model. Recently, ChatGPT has shown promising success with large language models, applying an autoregressive language model with zero-shot or few-shot prompting. With the extraordinary success of PFMs, AI has made waves in a variety of fields over the past few years. Considerable methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for an updated survey. This study provides a comprehensive review of recent research advancements, current and future challenges, and opportunities for PFMs in text, image, graph, and other data modalities. We first review the basic components and existing pretraining in natural language processing, computer vision, and graph learning. We then discuss other advanced PFMs for other data modalities and unified PFMs considering data quality and quantity. Besides, we discuss relevant research on the fundamentals of PFMs, including model efficiency and compression, security, and privacy. Finally, we lay out key implications, future research directions, challenges, and open problems.
|
[
"Language Models",
"Semantic Text Processing",
"Structured Data in NLP",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Multimodality"
] |
[
52,
72,
50,
11,
38,
74
] |
SCOPUS_ID:85131329543
|
A Comprehensive Survey on Sentiment Analysis in Twitter Data
|
The literature scrutinizes diverse techniques associated with sentiment analysis of Twitter data. It reviews several research papers and states their significant analyses. Initially, the analysis depicts the various schemes contributed by different papers. Subsequently, the analysis focuses on the various features, and it also analyses the sentiment analysis of Twitter data exploited in each paper. Furthermore, this paper provides a detailed study of the performance measures and the maximum performance achieved in each contribution. Finally, it sets out various research issues that can be useful for researchers to accomplish further research on sentiment analysis of Twitter data.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85126178749
|
A Comprehensive Survey on Topic Modeling in Text Summarization
|
Topic modeling is the statistical model for discovering hidden topics or keywords in a collection of documents. Topic modeling is also considered a probabilistic model for learning, analyzing, and discovering topics from a document collection. The most popular techniques for topic modeling are latent semantic analysis (LSA), probabilistic latent semantic analysis (pLSA), latent Dirichlet allocation (LDA), and the recent deep learning-based lda2vec. LDA is most commonly used in extractive multi-document summarization to determine whether an extracted sentence reflects the concept of the input document. In this paper, we explore various multi-document summarization techniques that use LDA as a topic modeling method for improving final summary coverage and reducing redundancy. Finally, we compared LDA and LSA using the Gensim toolkit, and our experiment results show that LDA outperforms LSA if we increase the number of features considered for sentence selection.
|
[
"Summarization",
"Topic Modeling",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
9,
47,
3
] |
SCOPUS_ID:85131039216
|
A Comprehensive Survey on Various Fully Automatic Machine Translation Evaluation Metrics
|
The fast advancement of machine translation models necessitates the development of accurate evaluation metrics that allow researchers to track progress in text translation. The evaluation of machine translation models is crucial, since its results are exploited to improve the translation models. However, fully automatic evaluation of machine translation models is itself a huge challenge, as human evaluation is expensive, time-consuming, and unreproducible. This paper presents a detailed classification and comprehensive survey of the various fully automatic evaluation metrics used to assess the performance or quality of machine-translated output. For better understanding, these metrics are classified into five categories: lexical, character, semantic, syntactic, and combined semantic & syntactic evaluation metrics. The challenges posed to machine translation evaluation by Statistical Machine Translation and Neural Machine Translation are taken into account, and the advantages, disadvantages, and gaps of each fully automatic machine translation evaluation metric are discussed. The presented study will help machine translation researchers quickly identify the automatic evaluation metrics most appropriate for the improvement or development of their machine translation models, and will give researchers a general understanding of how automatic machine translation evaluation research has evolved.
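By way of illustration of the lexical category of metrics, the snippet below computes sentence-level BLEU with NLTK; the reference/hypothesis pair is invented, and BLEU stands in here for the broader family of lexical metrics the survey classifies.

```python
# Minimal sketch: a lexical evaluation metric (BLEU) via NLTK for one
# hypothesis/reference pair; smoothing avoids zero n-gram counts.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")   # n-gram overlap between output and reference
```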
|
[
"Text Generation",
"Machine Translation",
"Syntactic Text Processing",
"Multilinguality"
] |
[
47,
51,
15,
0
] |
SCOPUS_ID:85141383195
|
A Comprehensive Survey on Visual Question Answering Debias
|
With the rise of multi-modal computing, visual question answering (VQA) has attracted wide attention. Given an image and a question as input, a VQA system answers the question according to the image. However, most models suffer from the language prior problem: they rely excessively on superficial linguistic correlations between questions and answers without considering the image, which is caused by inherent data bias. For example, for a certain type of question (e.g., How many apples are there on the table?), the system will tend to return the answer that appears frequently in the answer space (e.g., Two) rather than answering based on the facts in the image (e.g., Five). Therefore, VQA debiasing is especially important for answering questions correctly. Many methods have been proposed to deal with this problem. We summarize the existing methods into the following three categories: 1) data augmentation, 2) weakening language information, and 3) enhancing image information. These solve the problem from the data and information perspectives, respectively, aiming at higher accuracy and a more robust VQA system.
|
[
"Visual Data in NLP",
"Question Answering",
"Robustness in NLP",
"Natural Language Interfaces",
"Responsible & Trustworthy NLP",
"Multimodality"
] |
[
20,
27,
58,
11,
4,
74
] |
http://arxiv.org/abs/2010.15036v1
|
A Comprehensive Survey on Word Representation Models: From Classical to State-Of-The-Art Word Representation Language Models
|
Word representation has always been an important research area in the history of natural language processing (NLP). Understanding such complex text data is imperative, given that it is rich in information and can be used widely across various applications. In this survey, we explore different word representation models and their power of expression, from the classical to modern-day state-of-the-art word representation language models (LMs). We describe the variety of text representation methods and model designs that have blossomed in the context of NLP, including SOTA LMs. These models can transform large volumes of text into effective vector representations capturing the same semantic information. Further, such representations can be utilized by various machine learning (ML) algorithms for a variety of NLP-related tasks. In the end, this survey briefly discusses the commonly used ML- and DL-based classifiers, evaluation metrics, and the applications of these word embeddings in different NLP tasks.
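For a concrete taste of a classical word representation model, the sketch below trains skip-gram Word2Vec with Gensim on a toy corpus; the corpus, dimensionality, and window size are illustrative assumptions, not the survey's experiments.

```python
# Minimal sketch: learning dense word vectors with Gensim's Word2Vec.
from gensim.models import Word2Vec

sentences = [["word", "representation", "captures", "semantics"],
             ["embeddings", "map", "words", "to", "vectors"],
             ["similar", "words", "get", "similar", "vectors"]]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
vector = model.wv["words"]                     # 50-dim dense representation
print(model.wv.most_similar("words", topn=2))  # nearest neighbors in space
```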
|
[
"Language Models",
"Semantic Text Processing",
"Representation Learning"
] |
[
52,
72,
12
] |
SCOPUS_ID:85115874355
|
A Comprehensive Survey on Word Representation Models: From Classical to State-of-the-Art Word Representation Language Models
|
Word representation has always been an important research area in the history of natural language processing (NLP). Understanding such complex text data is imperative, given that it is rich in information and can be used widely across various applications. In this survey, we explore different word representation models and their power of expression, from the classical to modern-day state-of-the-art word representation language models (LMs). We describe the variety of text representation methods and model designs that have blossomed in the context of NLP, including SOTA LMs. These models can transform large volumes of text into effective vector representations capturing the same semantic information. Further, such representations can be utilized by various machine learning (ML) algorithms for a variety of NLP-related tasks. In the end, this survey briefly discusses the commonly used ML- and DL-based classifiers, evaluation metrics, and the applications of these word embeddings in different NLP tasks.
|
[
"Language Models",
"Semantic Text Processing",
"Representation Learning"
] |
[
52,
72,
12
] |
http://arxiv.org/abs/2204.12753v1
|
A Comprehensive Understanding of Code-mixed Language Semantics using Hierarchical Transformer
|
Being a popular mode of text-based communication in multilingual communities, code-mixing in online social media has become an important subject to study. Learning the semantics and morphology of code-mixed language remains a key challenge, due to the scarcity of data and the unavailability of robust, language-invariant representation learning techniques. Any morphologically-rich language can benefit from character-, subword-, and word-level embeddings, aiding in learning meaningful correlations. In this paper, we explore a hierarchical transformer-based architecture (HIT) to learn the semantics of code-mixed languages. HIT consists of multi-headed self-attention and outer-product attention components to simultaneously comprehend the semantic and syntactic structures of code-mixed texts. We evaluate the proposed method across 6 Indian languages (Bengali, Gujarati, Hindi, Tamil, Telugu, and Malayalam) and Spanish on 9 NLP tasks over 17 datasets. The HIT model outperforms state-of-the-art code-mixed representation learning and multilingual language models on all tasks. We further demonstrate the generalizability of the HIT architecture using masked language modeling-based pre-training, zero-shot learning, and transfer learning approaches. Our empirical results show that the pre-training objectives significantly improve the performance on downstream tasks.
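The multi-headed self-attention component named in the abstract can be sketched as follows; this is not the HIT implementation, just a minimal PyTorch illustration with toy dimensions.

```python
# Minimal sketch: multi-headed self-attention over (sub)word embeddings,
# the core component described above. Inputs and sizes are toy values.
import torch
import torch.nn as nn

d_model, seq_len = 64, 10
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
x = torch.randn(1, seq_len, d_model)   # one code-mixed sentence, embedded
out, weights = attn(x, x, x)           # queries = keys = values: self-attention
print(out.shape, weights.shape)        # (1, 10, 64), (1, 10, 10)
```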
|
[
"Representation Learning",
"Language Models",
"Semantic Text Processing",
"Multilinguality"
] |
[
12,
52,
72,
0
] |
SCOPUS_ID:85148048433
|
A Comprehensive Understanding of Text Region Identification and Localization in Scene Imagery Using DL Practices
|
Semantic interpretation of scene images in our surroundings is an intriguing research domain in the realm of machine vision and pattern recognition. One of the most demanding tasks in this discipline is identifying and localizing textual information in scene images. Deep learning-based algorithms have improved the accuracy and efficiency of scene text detection (STD) systems in recent years, allowing researchers to concentrate on more target-specific concerns such as multi-oriented, arbitrarily generated, and coherent text instances. The study aims to examine and summarize recent research breakthroughs made with deep learning-based approaches. We discuss the problem area and the various roadblocks it entails. The reported techniques based on segmentation and regression schemes are comprehensively studied and analyzed in this work. The feature extraction and prediction frameworks used in the research are then briefly discussed. A categorical study of publicly accessible datasets and evaluation procedures for scene images is provided. Finally, we discuss the domain's possible scopes, which may pique the curiosity of future researchers.
|
[
"Visual Data in NLP",
"Information Extraction & Text Mining",
"Multimodality"
] |
[
20,
3,
74
] |
SCOPUS_ID:85075756669
|
A Comprehensive Verification of Transformer in Text Classification
|
Recently, a self-attention based model named Transformer was proposed in the Neural Machine Translation (NMT) domain; it outperforms RNN-based seq2seq models in most cases and has become the state-of-the-art model for the NMT task. However, some studies find that RNN-based models integrated with Transformer structures can achieve almost the same results as the Transformer on the NMT task. In this paper, following previous research, we further verify the performance of Transformer structures on the text classification task. Starting from an RNN-based model, we gradually add each part of the Transformer block and evaluate its influence on text classification. We carry out experiments on the NLPCC2014 and dmsc_v2 datasets, and the results show that the multi-head attention mechanism and multiple attention layers can improve performance on the text classification task. Furthermore, visualization of the attention weights also illustrates that multi-head attention outperforms the traditional attention mechanism.
|
[
"Language Models",
"Machine Translation",
"Semantic Text Processing",
"Information Retrieval",
"Information Extraction & Text Mining",
"Text Generation",
"Text Classification",
"Multilinguality"
] |
[
52,
51,
72,
24,
3,
47,
36,
0
] |
SCOPUS_ID:85138073291
|
A Comprehensive and Holistic Health Database
|
Health and the initiation, progression, and outcome of disease are the result of multiple environmental factors interacting with individual genetic makeups. Collectively, results from primary clinical research on health and disease represent the most compendious and reliable source of actionable knowledge on strategies to optimize health. However, the dispersal of this information as unstructured data, distributed across millions of documents, is a substantial challenge in bridging the gap between primary research and concrete recommendations for improving health. Described here is the development and implementation of a machine reading pipeline that builds a knowledge graph of causal relationships between a broad range of predictive/modifiable diet and lifestyle factors and health outcomes, extracted from the vast biomedical corpus in the National Library of Medicine.
|
[
"Knowledge Representation",
"Structured Data in NLP",
"Semantic Text Processing",
"Multimodality"
] |
[
18,
50,
72,
74
] |
http://arxiv.org/abs/2205.05974v1
|
A Computational Acquisition Model for Multimodal Word Categorization
|
Recent advances in self-supervised modeling of text and images open new opportunities for computational models of child language acquisition, which is believed to rely heavily on cross-modal signals. However, prior studies have been limited by their reliance on vision models trained on large image datasets annotated with a pre-defined set of depicted object categories. This is (a) not faithful to the information children receive and (b) prohibits the evaluation of such models with respect to category learning tasks, due to the pre-imposed category structure. We address this gap, and present a cognitively-inspired, multimodal acquisition model, trained from image-caption pairs on naturalistic data using cross-modal self-supervision. We show that the model learns word categories and object recognition abilities, and presents trends reminiscent of those reported in the developmental literature. We make our code and trained models public for future reference and use.
|
[
"Visual Data in NLP",
"Text Classification",
"Multimodality",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
20,
36,
74,
24,
3
] |
https://aclanthology.org//W15-0507/
|
A Computational Approach for Generating Toulmin Model Argumentation
|
[
"Argument Mining",
"Reasoning"
] |
[
60,
8
] |
|
SCOPUS_ID:85118186003
|
A Computational Approach for Predicting Individuals' Response Patterns in Human Syllogistic Reasoning
|
One challenge within cognitive psychology on human reasoning is modeling a wide range of tasks within a certain theory. Recently, a meta-study on human syllogistic reasoning has shown that none of the established theories seemed to adequately match the human data. Possible reasons for this sobering result could be that (i) these theories do not account for differences among reasoners and (ii) they presuppose the same assumptions throughout all 64 syllogistic reasoning tasks. In this paper, we will address both aspects by proposing clustering by principle patterns for syllogistic reasoning based on cognitive principles, which have their roots in the literature of cognitive science and philosophy of language. These principles determine how the tasks are formally represented within the weak completion semantics, a logic programming approach that has already been successfully applied for modeling various human reasoning episodes. We will develop a generic cognitive characterization of (i) the reasoners and (ii) the tasks by integrating the results of a machine learning algorithm with underlying cognitive principles. These principles provide a cognitively plausible characterization of the response patterns that cover the population of reasoners. Clustering by principle patterns achieves the highest prediction accuracy compared to the available benchmark models, and gives insights into the differences among (i) the reasoners and (ii) the explaining principles throughout the tasks.
|
[
"Cognitive Modeling",
"Linguistic Theories",
"Text Clustering",
"Linguistics & Cognitive NLP",
"Reasoning",
"Information Extraction & Text Mining"
] |
[
2,
57,
29,
48,
8,
3
] |
https://aclanthology.org//W99-0906/
|
A Computational Approach to Deciphering Unknown Scripts
|
[
"Low-Resource NLP",
"Responsible & Trustworthy NLP"
] |
[
80,
4
] |
|
SCOPUS_ID:84962643302
|
A Computational Cognitive Model Integrating Different Emotion Regulation Strategies
|
In this paper a cognitive model is introduced which integrates a model for emotion generation with models for three different emotion regulation strategies. Given a stressful situation, humans often apply multiple emotion regulation strategies. The presented computational model has been designed based on principles from recent neurological theories based on brain imaging, and psychological and emotion regulation theories. More specifically, the model involves emotion generation and integrates models for the emotion regulation strategies reappraisal, expressive suppression, and situation modification. The model was designed as a dynamical system. Simulation experiments are reported showing the role of the emotion regulation strategies. The simulation results show how a potential stressful situation in principle could lead to emotional strain and how this can be avoided by applying the emotion regulation strategies decreasing the stressful effects.
|
[
"Cognitive Modeling",
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
2,
48,
57
] |
SCOPUS_ID:85034263574
|
A Computational Cognitive Model of Self-monitoring and Decision Making for Desire Regulation
|
Desire regulation can make use of different regulation strategies; this implies an underlying decision making process, which makes use of some form of self-monitoring. The aim of this work is to develop a neurologically inspired computational cognitive model of desire regulation and the underlying self-monitoring and decision making processes. Four desire regulation strategies have been incorporated in this model. Simulation experiments have been performed for the domain of food choice.
|
[
"Cognitive Modeling",
"Linguistics & Cognitive NLP"
] |
[
2,
48
] |
SCOPUS_ID:85078399493
|
A Computational Framework Towards Medical Image Explanation
|
In this paper, a unified computational framework towards medical image explanation is proposed to improve the ability of computers to understand and interpret medical images. It includes four complementary modules: the construction of a Medical Image-Text Joint Embedding (MITE) based on large-scale medical images and related texts; a Medical Image Semantic Association (MISA) mechanism based on the MITE multimodal knowledge representation; a Hierarchical Medical Image Caption (HMIC) module that is visually understandable to radiologists; and a language-independent medical imaging report generation prototype system integrating HMIC with transfer learning. As an initial study of automatic medical image explanation, preliminary experiments were carried out to verify the feasibility of the proposed framework, including the extraction of large-scale medical image-text pairs, semantic concept detection from medical images, and automatic generation of medical imaging reports. However, producing clinically usable medical image interpretations remains a great challenge, and further research is needed to enable machines to explain medical images like a human being.
|
[
"Visual Data in NLP",
"Semantic Text Processing",
"Captioning",
"Representation Learning",
"Explainability & Interpretability in NLP",
"Text Generation",
"Responsible & Trustworthy NLP",
"Multimodality"
] |
[
20,
72,
39,
12,
81,
47,
4,
74
] |
SCOPUS_ID:85118937385
|
A Computational Framework to Analyze the Associations between Symptoms and Cancer Patient Attributes Post Chemotherapy Using EHR Data
|
Patients with cancer, such as breast and colorectal cancer, often experience different symptoms post-chemotherapy. The symptoms could be fatigue, gastrointestinal (nausea, vomiting, lack of appetite), psychoneurological (depressive symptoms, anxiety), or other types. Previous research focused on understanding the symptoms using survey data. In this research, we propose to utilize the data within the Electronic Health Record (EHR). A computational framework is developed that uses a natural language processing (NLP) pipeline to extract clinician-documented symptoms from clinical notes. A clustering method based on symptom severity levels then groups the patients into clusters. Association rule mining is used to analyze the associations between symptoms and patient attributes (smoking history, number of comorbidities, diabetes status, age at diagnosis) in the patient clusters. The results show that the various symptom types and severity levels have different associations between breast and colorectal cancers and across different timeframes post-chemotherapy. The results also show that patients with breast or colorectal cancer who smoke and have severe fatigue likely have severe gastrointestinal symptoms six months after chemotherapy. Our framework can be generalized to analyze symptoms or symptom clusters of other chronic diseases where symptom management is critical.
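A minimal sketch of the association rule mining step might look like the following; the patient records are invented boolean indicators, not EHR data, and mlxtend is one possible library choice rather than the study's tooling.

```python
# Minimal sketch: association rules over symptom/attribute indicators
# with mlxtend's apriori. The four toy "patients" are fabricated.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

records = pd.DataFrame([
    {"smoker": 1, "severe_fatigue": 1, "severe_gi": 1},
    {"smoker": 1, "severe_fatigue": 1, "severe_gi": 1},
    {"smoker": 0, "severe_fatigue": 0, "severe_gi": 0},
    {"smoker": 0, "severe_fatigue": 1, "severe_gi": 0},
]).astype(bool)
frequent = apriori(records, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```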
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
SCOPUS_ID:85133733678
|
A Computational Literature Analysis of Conversational AI Research with a Focus on the Coaching Domain
|
We conduct a computational analysis of the literature on Conversational AI. We identify the trend based on all publications until the year 2020. We then concentrate on the publications for the last five years between 2016 and 2020 to find out the top ten venues and top three journals where research on Conversational AI has been published. Further, using the Latent Dirichlet Allocation (LDA) topic modeling technique, we discover nine important topics discussed in Conversational AI literature and specifically two topics related to the area of coaching. Finally, we detect the key authors who have contributed significantly to Conversational AI research and area(s) related to coaching. We determine the key authors' areas of expertise and how the knowledge is distributed across different regions. Our findings show an increasing trend and thus, an interest in Conversational AI research, predominantly from the authors in Europe.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85135925667
|
A Computational Measure for the Semantic Readability of Segmented Texts
|
In this paper we introduce a computational procedure for measuring the semantic readability of a segmented text. The procedure mainly consists of three steps. First, natural language processing tools and unsupervised machine learning techniques are adopted in order to obtain a vectorized numerical representation for any section or segment of the inputted text. Hence, similar or semantically related text segments are modeled by nearby points in a vector space, then the shortest and longest Hamiltonian paths passing through them are computed. Lastly, the lengths of these paths and that of the original ordering on the segments are combined into an arithmetic expression in order to derive an index, which may be used to gauge the semantic difficulty that a reader is supposed to experience when reading the text. A preliminary experimental study is conducted on seven classic narrative texts written in English, which were obtained from the well-known Gutenberg project. The experimental results appear to be in line with our expectations.
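Under the stated assumptions (brute-force Hamiltonian paths, which is feasible only for a handful of segments, and one plausible way of combining the three path lengths, since the abstract does not give the exact formula), a sketch of the index computation could read:

```python
# Minimal sketch: shortest/longest Hamiltonian paths over segment vectors,
# combined with the authorial ordering into a readability index. The
# embeddings and the final arithmetic expression are assumptions.
from itertools import permutations
import numpy as np

segments = np.random.rand(5, 16)   # 5 segment embeddings (toy stand-ins)

def path_length(order):
    return sum(np.linalg.norm(segments[a] - segments[b])
               for a, b in zip(order, order[1:]))

orders = list(permutations(range(len(segments))))   # brute force: small n only
shortest = min(path_length(o) for o in orders)
longest = max(path_length(o) for o in orders)
original = path_length(tuple(range(len(segments)))) # order as written
index = (original - shortest) / (longest - shortest + 1e-9)
print(f"semantic readability index: {index:.2f}")   # 0 = easiest ordering
```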
|
[
"Low-Resource NLP",
"Responsible & Trustworthy NLP"
] |
[
80,
4
] |
SCOPUS_ID:85077788831
|
A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time
|
This paper presents a computational model for managing an Embodied Conversational Agent's first impressions of warmth and competence towards the user. These impressions are important to manage because they can impact users' perception of the agent and their willingness to continue the interaction with it. The model aims at detecting the user's impression of the agent and producing appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. Users' impressions are recognized using a machine learning approach with facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time with a reinforcement learning algorithm that takes the user's impressions as reward to select the most appropriate combination of verbal and nonverbal behaviours to perform. A user study to test the model in a contextualized interaction with users is also presented. Our hypothesis is that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when the agent does not adapt its behaviour to the user's reactions (i.e., when it randomly selects its behaviours). The study shows a general tendency for the agent to perform better when using our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori beliefs about virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85139522085
|
A Computational Model of General Rule Learning with Unnatural Classes
|
This paper presents the results of a computational model of generalized phonological rule learning (Calamaro and Jarosz, 2012), which is used to model experimental studies on the learning of phonotactic patterns governed by natural and unnatural classes. I focus on two papers with conflicting results on the learnability of natural and unnatural rules. Saffran and Thiessen (2003) find that a phonotactic pattern of positional voicing restrictions governed by a natural class of segments is learned by infants, but a similar pattern governed by an unnatural class is not learned. In contrast, Chambers, Onishi, and Fisher (2003) find that infants can learn a phonotactic pattern governed by an unnatural class of segments. The computational model presented in this paper is able to account for these seemingly conflicting results, explaining both the learnability and unlearnability of rules governed by unnatural classes.
|
[
"Phonology",
"Syntactic Text Processing"
] |
[
6,
15
] |
http://arxiv.org/abs/cmp-lg/9406029v1
|
A Computational Model of Syntactic Processing: Ambiguity Resolution from Interpretation
|
Syntactic ambiguity abounds in natural language, yet humans have no difficulty coping with it. In fact, the process of ambiguity resolution is almost always unconscious. It is not infallible, however, as example 1 demonstrates. 1. The horse raced past the barn fell. This sentence is perfectly grammatical, as is evident when it appears in the following context: 2. Two horses were being shown off to a prospective buyer. One was raced past a meadow, and the other was raced past a barn. ... Grammatical yet unprocessable sentences such as 1 are called `garden-path sentences.' Their existence provides an opportunity to investigate the human sentence processing mechanism by studying how and when it fails. The aim of this thesis is to construct a computational model of language understanding which can predict processing difficulty. The data to be modeled are known examples of garden path and non-garden path sentences, and other results from psycholinguistics. It is widely believed that there are two distinct loci of computation in sentence processing: syntactic parsing and semantic interpretation. One longstanding controversy is which of these two modules bears responsibility for the immediate resolution of ambiguity. My claim is that it is the latter, and that the syntactic processing module is a very simple device which blindly and faithfully constructs all possible analyses for the sentence up to the current point of processing. The interpretive module serves as a filter, occasionally discarding certain of these analyses which it deems less appropriate for the ongoing discourse than their competitors. This document is divided into three parts. The first is introductory, and reviews a selection of proposals from the sentence processing literature. The second part explores a body of data which has been adduced in support of a theory of structural preferences --- one that is inconsistent with the present claim. I show how the current proposal can be specified to account for the available data, and moreover to predict where structural preference theories will go wrong. The third part is a theoretical investigation of how well the proposed architecture can be realized using current conceptions of linguistic competence. In it, I present a parsing algorithm and a meaning-based ambiguity resolution method.
|
[
"Explainability & Interpretability in NLP",
"Syntactic Text Processing",
"Responsible & Trustworthy NLP"
] |
[
81,
15,
4
] |
http://arxiv.org/abs/0812.3070v1
|
A Computational Model to Disentangle Semantic Information Embedded in Word Association Norms
|
Two well-known databases of semantic relationships between pairs of words used in psycholinguistics, feature-based and association-based, are studied as complex networks. We propose an algorithm to disentangle feature based relationships from free association semantic networks. The algorithm uses the rich topology of the free association semantic network to produce a new set of relationships between words similar to those observed in feature production norms.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85137921188
|
A Computational Neural Network Model for College English Grammar Correction
|
In English grammar error correction, errors in the semantic units (words and sentences) inevitably affect subsequent text analysis and semantic understanding, and ultimately reduce the overall performance of practical application systems. Therefore, intelligent detection and correction of word and grammatical errors in English texts is one of the key and difficult problems of natural language processing. This work combines a computational neural model with college grammar error correction to improve correction accuracy. It studies a computational neural model for English grammar error correction based on a neural network named Knowledge and Neural machine translation powered College English Grammar Typo Correction (KNGTC). First, the Recurrent Neural Network is introduced, and the overall structure of the English grammatical error correction neural model is constructed. Moreover, the supervised training of the attention mechanism is discussed, and the experimental environment and experimental data are given. The results show that KNGTC has high accuracy in college English grammar correction: its accuracy on CET-4 and CET-6 writing reaches 82.69%. The model has strong error correction ability, and its optimization can improve students' English grammar level, which has practical value. After years of continuous improvement, English grammar error correction technology has entered a performance bottleneck; the proposed model can break through the limitations of current technology and bring a better user experience. Therefore, it is very valuable to study English grammar error correction models in practical applications.
|
[
"Text Error Correction",
"Syntactic Text Processing"
] |
[
26,
15
] |
https://aclanthology.org//W15-2408/
|
A Computational Study of Cross-situational Lexical Learning of Brazilian Portuguese
|
[
"Cognitive Modeling",
"Linguistics & Cognitive NLP"
] |
[
2,
48
] |
|
SCOPUS_ID:85139590986
|
A Computational System of Psycholinguistic Fuzzy Inference Under Uncertainty
|
Modern intelligent systems of Narrow Artificial Intelligence (NAI) cannot independently and continuously think, reason, cognize, or carry out logical inference under: a) uncertainty, b) temporal change of the situations and objects of the surrounding environment, and c) the absence of human intellect for retraining (reprogramming) these systems. To address this problem of NAI, and in order to create a new reasonable generation of Artificial Intelligence Fuzzy Logic Inference Subsystem (AIFLIS), this paper proposes a conception, model, method, and subsystem for modeling computational psycholinguistic and cognitive fuzzy inference in an identified subject domain, implemented through the following computational subsystems: a) identification and preprocessing of the psycholinguistic, lingual, sound, signal, and other meanings of objects associated with the images of the subject area; and b) situational fuzzy control of computational memory, decision-making, reasoning, thinking, consciousness, awareness, cognition, intuition, wisdom, and others. To implement the AIFLIS functionality, situational control, fuzzy logic, psycholinguistics, data science, and informatics have been applied.
|
[
"Reasoning",
"Psycholinguistics",
"Linguistics & Cognitive NLP"
] |
[
8,
77,
48
] |
SCOPUS_ID:85084048189
|
A Computational Theory for the Emergence of Grammatical Categories in Cortical Dynamics
|
A general agreement in psycholinguistics claims that syntax and meaning are unified precisely and very quickly during online sentence processing. Although several theories have advanced arguments regarding the neurocomputational bases of this phenomenon, we argue that these theories could potentially benefit by including neurophysiological data concerning cortical dynamics constraints in brain tissue. In addition, some theories promote the integration of complex optimization methods in neural tissue. In this paper we attempt to fill these gaps introducing a computational model inspired in the dynamics of cortical tissue. In our modeling approach, proximal afferent dendrites produce stochastic cellular activations, while distal dendritic branches–on the other hand–contribute independently to somatic depolarization by means of dendritic spikes, and finally, prediction failures produce massive firing events preventing formation of sparse distributed representations. The model presented in this paper combines semantic and coarse-grained syntactic constraints for each word in a sentence context until grammatically related word function discrimination emerges spontaneously by the sole correlation of lexical information from different sources without applying complex optimization methods. By means of support vector machine techniques, we show that the sparse activation features returned by our approach are well suited—bootstrapping from the features returned by Word Embedding mechanisms—to accomplish grammatical function classification of individual words in a sentence. In this way we develop a biologically guided computational explanation for linguistically relevant unification processes in cortex which connects psycholinguistics to neurobiological accounts of language. We also claim that the computational hypotheses established in this research could foster future work on biologically-inspired learning algorithms for natural language processing applications.
|
[
"Linguistics & Cognitive NLP",
"Psycholinguistics",
"Linguistic Theories"
] |
[
48,
77,
57
] |
SCOPUS_ID:34248837594
|
A Computational Treatment of Stress in Greek Inflected Forms
|
This paper deals with the treatment of stress in Greek inflectional morphology. First, a morphological processor is presented which is built on the basis of a linguistic analysis of Greek inflected forms. This is followed by a discussion of how stress applies to words and how stress shift phenomena are taken into account by the morphological processor. In Greek, word stress distribution is important because it represents a major difficulty in every attempt to create a morphological processor which can be used by speech recognition systems, machine readable dictionaries, and machine translation projects involving Greek as source or target language.
|
[
"Syntactic Text Processing",
"Morphology"
] |
[
15,
73
] |
SCOPUS_ID:84997272165
|
A Computational and Inferential Method for Analyzing the Semantics of Phrase and Sentence in Vietnamese Question Answering System Model (VietQASM)
|
Based on the Reading Answering System (RAS) of S. T. Pham and D. T. Nguyen (2013), this paper presents a novel computational and inferential method for computing the semantics of phrases and sentences in Vietnamese in order to build the textual knowledge base of the Vietnamese Question Answering System Model (VietQASM). VietQASM is a Vietnamese question answering system model capable of answering many types of questions about series of events and questions having many interrogative objects. VietQASM comprises three main elements: i) a set of semantic models for phrases, ii) a set of semantic models for sentences, and iii) a semantic processing mechanism for analyzing sentences.
|
[
"Natural Language Interfaces",
"Question Answering"
] |
[
11,
27
] |
SCOPUS_ID:85057375508
|
A Computational model for child inferences of word meanings via syntactic categories for different ages and languages
|
Children exploit their morphosyntactic knowledge in order to infer the meanings of words. A recent behavioral study has reported developmental changes in word learning from three to five years of age, with respect to a child's native language. To understand the computational basis of this phenomenon, we propose a model based on a hidden Markov model (HMM). The HMM acquires the syntactic categories of given words as its hidden states, which are associated with observed features. The model then infers the syntactic category of a new word, which facilitates the selection of an appropriate visual feature. We hypothesize that using this model with different numbers of categories can replicate the manner in which children of different ages learn words. We perform simulation experiments in three native-language environments (English, Japanese, and Chinese), which demonstrate that the model produces performance similar to that of the children in each environment. Allowing a larger number of categories means that the model can acquire a sufficient number of obvious categories, which results in the successful inference of visual features for novel words. In addition, cross-linguistic differences originating from the acquisition of language-specific syntactic categories are identified: the syntactic categories learned from English and Chinese corpora rely relatively more on word order, whereas the Japanese-trained model exploits morphological cues to infer syntactic categories.
|
[
"Visual Data in NLP",
"Syntactic Text Processing",
"Multimodality"
] |
[
20,
15,
74
] |
SCOPUS_ID:85146390412
|
A Computational-Augmented Critical Discourse Analysis of Tweets on the Saudi General Entertainment Authority Activities
|
This study used both computational tools in the form of a machine learning predictive model (Support Vector Machine) and a critical discourse analysis model (Van Dijk’s ideological square model) (Van Dijk, 1993, 2008, 2009) to fulfill three objectives: (1) clustering the Saudis’ Twitter-based opinions and sentiments regarding the entertaining and recreational activities run by the Saudi General Entertainment Authority (GEA); (2) offering empirical evidence on how computational linguistic methods could be implemented for offering a reliable conceptual framing of such opinionated big data; and (3) outlining the central themes generating ideologically motivated polarity in Saudi public opinion and the macrostrategies through which this polarity is textually instantiated and actualized. Toward fulfilling these objectives, we designed a purpose-built corpus of 9378 tweets based on five trending hashtags, covering the period between 2020 and 2022. Findings affirmed the efficacy of synergizing the Support Vector Machine model and the ideological square model in clustering and interpreting the target tweets. Based on the output discourse features and thematization of the tweets, two main groups with different ideologically motivated perspectives were identified. This ideological polarity was achieved through the use of two macrostrategies: positive self-presentation and negative other-presentation. These findings may prompt policymakers to reconsider current (mis)practices in order to achieve long-term sustainable development goals.
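As an illustration of the predictive component, the sketch below trains a Support Vector Machine on TF-IDF features of toy tweets; the data, labels, and linear kernel are assumptions, not the study's purpose-built corpus or exact configuration.

```python
# Minimal sketch: an SVM text classifier over TF-IDF features, the kind
# of predictive model used to group opinionated tweets by stance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["great family events this weekend", "these concerts waste money",
          "wonderful festival atmosphere", "entertainment harms our values"]
labels = ["support", "oppose", "support", "oppose"]
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(tweets, labels)
print(model.predict(["amazing shows for everyone"]))  # -> ['support']
```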
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
71,
72,
3,
29
] |
SCOPUS_ID:85123994648
|
A Computer-Assisted Writing Tool for an Extended Variety of Leichte Sprache (Easy-to-Read German)
|
Leichte Sprache (LS; easy-to-read German) defines a variety of German characterized by simplified syntactic constructions and a small vocabulary. It provides barrier-free information for a wide spectrum of people with cognitive impairments, learning difficulties, and/or a low level of literacy in the German language. The levels of difficulty of a range of syntactic constructions were systematically evaluated with LS readers as part of the recent LeiSA project (Bock, 2019). That study identified a number of constructions that were evaluated as being easy to comprehend but which fell beyond the definition of LS. We therefore want to broaden the scope of LS to include further constructions that LS readers can easily manage and that they might find useful for putting their thoughts into words. For constructions not considered in the LeiSA study, we performed a comparative treebank study of constructions attested to in a collection of 245 LS documents from a variety of sources. Employing the treebanks TüBa-D/S (also called VERBMOBIL) and TüBa-D/Z, we compared the frequency of such constructions in those texts with their incidence in spoken and written German sources produced without the explicit goal of facilitating comprehensibility. The resulting extension is called Extended Leichte Sprache (ELS). To date, text in LS has generally been produced by authors proficient in standard German. In order to enable text production by LS readers themselves, we developed a computational linguistic system, dubbed ExtendedEasyTalk. This system supports LS readers in formulating grammatically correct and semantically coherent texts covering constructions in ELS. This paper outlines the principal components: (1) a natural-language paraphrase generator that supports fast and correct text production while taking readership-design aspects into account, and (2) explicit coherence specifications based on Rhetorical Structure Theory (RST) to express the communicative function of sentences. The system's writing-workshop mode controls the options in (1) and (2). Mandatory questions generated by the system aim to teach the user when and how to consider audience-design concepts. Accordingly, users are trained in text production in a similar way to elementary school students, who also tend to omit audience-design cues. Importantly, we illustrate in this paper how to make the dialogues of these components intuitive and easy to use to avoid overtaxing the user. We also report the results of our evaluation of the software with different user groups.
|
[
"Text Generation",
"Paraphrasing",
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
47,
32,
28,
15
] |
http://arxiv.org/abs/2301.00503v3
|
A Concept Knowledge Graph for User Next Intent Prediction at Alipay
|
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. To explicitly characterize user intent, we propose AlipayKG, which is an offline concept knowledge graph in the Life-Service domain modeling the historical behaviors of users, the rich content interacted by users and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.
|
[
"Semantic Text Processing",
"Structured Data in NLP",
"Knowledge Representation",
"Intent Recognition",
"Sentiment Analysis",
"Multimodality"
] |
[
72,
50,
18,
79,
78,
74
] |
http://arxiv.org/abs/1807.02226v3
|
A Concept Specification and Abstraction-based Semantic Representation: Addressing the Barriers to Rule-based Machine Translation
|
Rule-based machine translation is more data efficient than the big data-based machine translation approaches, making it appropriate for languages with low bilingual corpus resources -- i.e., minority languages. However, the rule-based approach has declined in popularity relative to its big data cousins primarily because of the extensive training and labour required to define the language rules. To address this, we present a semantic representation that 1) treats all bits of meaning as individual concepts that 2) modify or further specify one another to build a network that relates entities in space and time. Also, the representation can 3) encapsulate propositions and thereby define concepts in terms of other concepts, supporting the abstraction of underlying linguistic and ontological details. These features afford an exact, yet intuitive semantic representation aimed at handling the great variety in language and reducing labour and training time. The proposed natural language generation, parsing, and translation strategies are also amenable to probabilistic modeling and thus to learning the necessary rules from example data.
|
[
"Machine Translation",
"Semantic Text Processing",
"Representation Learning",
"Text Generation",
"Multilinguality"
] |
[
51,
72,
12,
47,
0
] |
SCOPUS_ID:85015198575
|
A Concept-Based Integer Linear Programming Approach for Single-Document Summarization
|
Automatic single-document summarization is a process that receives a single input document and outputs a condensed version with only the most relevant information. This paper proposes an unsupervised concept-based approach for single-document summarization using Integer Linear Programming (ILP). The approach maximizes the coverage of the important concepts in the summary while avoiding redundancy, and also takes into consideration some readability aspects of the generated summary. A new weighting method that combines both coverage and position of the sentences is proposed to estimate the importance of a concept. Moreover, a weighted distribution strategy that prioritizes sentences at the beginning of the document if they have relevant concepts is investigated. The readability of the generated summaries is improved by including constraints in the ILP model to avoid dangling coreferences and breaks in the normal discourse flow of the document. Experimental results on the DUC 2001-2002 and CNN corpora demonstrate that the proposed approach is competitive with state-of-the-art summarizers in terms of the traditional ROUGE scores.
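A minimal sketch of a concept-coverage ILP of this kind, using the PuLP solver and toy weights (the paper's full model adds position-based weighting and readability constraints not shown here), could be:

```python
# Minimal sketch: pick sentences to maximize the weight of covered
# concepts under a length budget; a concept counts as covered only if
# some selected sentence contains it. All numbers are toy values.
import pulp

weights = {"c1": 3.0, "c2": 2.0, "c3": 1.0}          # concept importance
occurs = {"s1": {"c1", "c2"}, "s2": {"c2", "c3"}, "s3": {"c1"}}
length = {"s1": 12, "s2": 9, "s3": 5}                # words per sentence
budget = 15

prob = pulp.LpProblem("summarization", pulp.LpMaximize)
s = pulp.LpVariable.dicts("s", occurs, cat="Binary")   # sentence selected?
c = pulp.LpVariable.dicts("c", weights, cat="Binary")  # concept covered?
prob += pulp.lpSum(weights[j] * c[j] for j in weights)
prob += pulp.lpSum(length[i] * s[i] for i in occurs) <= budget
for j in weights:   # coverage only via a chosen sentence
    prob += c[j] <= pulp.lpSum(s[i] for i in occurs if j in occurs[i])
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in occurs if s[i].value() == 1])        # chosen sentences
```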
|
[
"Programming Languages in NLP",
"Information Extraction & Text Mining",
"Summarization",
"Text Generation",
"Multimodality"
] |
[
55,
3,
30,
47,
74
] |
SCOPUS_ID:84884310190
|
A Concept-based Approach for Indexing Documents in IR
|
This paper addresses two important problems related to the use of semantics in IR. The first concerns the representation of document semantics and its proper use in retrieval. The second is the integration of semantic-based retrieval with traditional keyword-based retrieval. The proposed approach aims to represent the document content by the best semantic network, called the document semantic core, in two main steps. The first step extracts concepts (mono- and multiword) from a document, driven by an external general-purpose ontology, namely WordNet. The second step builds the best semantic network by achieving a global disambiguation of the extracted concepts with respect to the document. The selected concept senses represent the nodes of the semantic network, while the similarity measure values between them represent the arcs. The resulting scored concept senses are used for conceptual indexing in Information Retrieval.
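To illustrate the kind of WordNet-based sense scoring involved, the sketch below ranks candidate senses by path similarity with NLTK; the word pair and the use of path similarity are illustrative assumptions, not the paper's disambiguation algorithm.

```python
# Minimal sketch: look up WordNet senses and score pairwise sense
# similarity, the kind of measure used to weight semantic-network arcs.
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

bank_senses = wn.synsets("bank")                 # candidate senses of "bank"
money = wn.synset("money.n.01")
best = max(bank_senses, key=lambda s: s.path_similarity(money) or 0)
print(best, best.path_similarity(money))         # most money-like sense
```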
|
[
"Indexing",
"Semantic Text Processing",
"Information Retrieval",
"Representation Learning"
] |
[
69,
72,
24,
12
] |
SCOPUS_ID:84868693927
|
A ConceptLink graph for text structure mining
|
Most text mining methods are based on representing documents using a vector space model, commonly known as a bag-of-words model, where each document is modeled as a linear vector representing the occurrence of independent words in the text corpus. It is well known that with this vector-based representation, important information, such as semantic relationships among concepts, is lost. This paper proposes a novel text representation model called the ConceptLink graph. The ConceptLink graph not only represents the content of the document, but also captures some of its underlying semantic structure in terms of the relationships among concepts. The ConceptLink graph is constructed in two main stages. First, we find a set of concepts by clustering conceptually related terms using the self-organizing map method. Secondly, by mapping each document's content to concepts, we generate a graph of concepts based on their occurrences using a singular value decomposition technique. The ConceptLink graph overcomes the keyword-independence limitation of the vector space model by taking advantage of the implicit concept relationships exhibited in all natural language texts. As an information-rich text representation model, the ConceptLink graph will advance text mining technology beyond feature-based to structure-based knowledge discovery. We illustrate the ConceptLink graph method using samples generated from a benchmark text mining dataset.
|
[
"Multimodality",
"Structured Data in NLP",
"Semantic Text Processing",
"Representation Learning"
] |
[
74,
50,
72,
12
] |
SCOPUS_ID:85097501412
|
A Conceptual Data Modelling Framework for Context-Aware Text Classification
|
Data analytics has an interesting variant that aims to understand an entity's behavior. It is termed diagnostic analytics, and it answers "why"-type questions. "Why"-type questions find their applications in emotion classification, brand analysis, drug review modeling, customer complaint classification, etc. Labeled data form the core of any analytics problem, let alone diagnostic analytics; however, labeled data are not always available. In some cases, it is required to assign labels to unknown entities and understand their behavior. For such scenarios, the proposed model unites topic modeling and text classification techniques. This combined data model will help to solve diagnostic issues and obtain meaningful insights from data by treating the procedure as a classification problem. The proposed model uses Improved Latent Dirichlet Allocation for topic modeling and sentiment analysis to understand an entity's behavior, and represents it as an Improved Multinomial Naïve Bayesian data model to achieve automated classification. The model is tested using the drug review dataset obtained from the UCI repository. The health conditions with their associated drug names were extracted from the reviews, and sentiment scores were assigned. The sentiment scores reflected the behavior of various drugs for a particular health condition and classified them according to their quality. The proposed model's performance is compared with existing baseline models, and it is shown that our model performed better than the other models.
|
[
"Topic Modeling",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
9,
36,
78,
24,
3
] |
SCOPUS_ID:85118979205
|
A Conceptual Enhancement of LSTM Using Knowledge Distillation for Hate Speech Detection
|
Hate speech is increasingly prevalent due to the high rate of remote service usage such as communication, online studies, meetings, dating, etc. With the recent outbreak of COVID-19, there has been an increase in the number of users on different social media platforms. This increase has brought about a rise in issues such as hate speech, among others. This paper provides a detailed account of improving an LSTM used for hate speech detection via knowledge distillation. Knowledge is transferred from the larger network (teacher) to the smaller student network. The teacher was trained for five full epochs, reaching an accuracy of 76.8%; the student network, trained from the teacher for three full epochs, attained an accuracy of 82.6%. Another student model, cloned and trained from scratch for three full epochs instead of from the teacher network, achieved an accuracy of 75.4%.
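A minimal sketch of the teacher-to-student knowledge transfer, using the standard temperature-softened distillation loss (the paper's exact loss and LSTM architectures are not reproduced here; the logits and labels are toys), could be:

```python
# Minimal sketch: knowledge distillation loss -- KL divergence between
# temperature-softened teacher and student distributions, mixed with the
# usual hard-label cross-entropy.
import torch
import torch.nn.functional as F

T, alpha = 2.0, 0.5
teacher_logits = torch.randn(8, 2)            # hate / non-hate scores
student_logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))

soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                F.softmax(teacher_logits / T, dim=1),
                reduction="batchmean") * T * T
hard = F.cross_entropy(student_logits, labels)
loss = alpha * soft + (1 - alpha) * hard      # student learns from both
loss.backward()
```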
|
[
"Language Models",
"Semantic Text Processing",
"Green & Sustainable NLP",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
68,
17,
4
] |
https://aclanthology.org//W14-2625/
|
A Conceptual Framework for Inferring Implicatures
|
[
"Sentiment Analysis"
] |
[
78
] |
|
SCOPUS_ID:85112348865
|
A Conceptual Framework for Malay-English Mixed-language Question Answering System
|
Mixed language has become a current trend, referring to combining two or more languages in either spoken or written form. It has been widely used in social media forums to improve communication and allow a greater range of expression. Current question answering (QA) systems only support monolingual queries, which restricts the ability of multilingual users to interact naturally with the system. In recent years, there has been a rise of interest in multilingual QA systems, where translation models merged with machine learning algorithms for question classification are the commonly used solution. However, the use of words from other languages in a single sentence leads to the problem of being unable to distinguish code-switched from monolingual sentences; it also causes the problem of limited language context being captured when machine translation mistranslates questions. The informal mixed-language representation that disobeys the natural linguistic rules of particular languages poses a challenge for automated QA systems, as the systems need to translate the given questions and extract answers for them. Additionally, the lack of public resources such as chunkers, POS taggers, and WordNet for mixed language, especially for Malay-English, leads to low performance of the translation and classification models. Furthermore, the use of machine learning algorithms in question classification requires a large amount of structured training data to ensure performance. This is impracticable in the Malay-English mixed-language domain, since the availability of mixed-language datasets is still an issue. To solve these problems, we propose a framework that combines enhanced translation models with deep learning, using Convolutional Neural Networks (CNN) for Malay-English mixed-language question classification to generate the best answer. The first part studies the machine translation model, where word-level language identification and text normalization for Malay-English mixed-language questions will be developed. The second part focuses on the deep learning algorithm, where we explore CNN as the classification model to assist with the translated questions and provide the best answer. Thus, in this paper, a framework consisting of an enhanced translation model for Malay-English and an end-to-end mixed-language question answering system for the Malay-English QA system is presented. This research will provide a significant contribution to multilingual forum platforms and also to intelligent QA systems (chatbots).
|
[
"Machine Translation",
"Information Retrieval",
"Information Extraction & Text Mining",
"Code-Switching",
"Text Normalization",
"Question Answering",
"Syntactic Text Processing",
"Natural Language Interfaces",
"Text Generation",
"Text Classification",
"Multilinguality"
] |
[
51,
24,
3,
7,
59,
27,
15,
11,
47,
36,
0
] |
SCOPUS_ID:84962209956
|
A Conceptual Framework of E-Commerce Supervision System Based on Opinion Mining
|
The Internet significantly reshapes the traditional commerce mode, making online business ubiquitous and indispensable. Others' opinions are influential when we make a decision, a process complicated by the tremendous amount of misleading information. Based on opinion mining techniques, we propose a novel conceptual framework designed to exploit the potential of opinion mining in the e-commerce domain. Based on various text corpora crawled from targeted internet sources, we utilize sentiment analysis principles and algorithms to provide convenient assistant services, such as opinion spam detection, opinion search and retrieval, opinion summarization, opinion question answering, and opinion recommendation, to e-commerce participants such as potential customers, manufacturers, third-party traders or retailers, and regulators. Besides, the system introduces novel NLP toolkits that excel at handling Chinese corpora compared with most previous similar work. The system is especially helpful to regulators by providing useful applications that can be utilized to make administrative decisions.
|
[
"Opinion Mining",
"Sentiment Analysis"
] |
[
49,
78
] |
SCOPUS_ID:85113365946
|
A Conceptual Model for real-time Binaural-Room Impulse Responses generation using ANNs in Virtual Environments: State of the Art
|
This work gives an overview of Artificial Neural Network (ANN) approaches applied to BIR generation in the literature and exposes gaps in the academic research. The literature review shows several successful studies using ANN approaches for BIR generation, reducing computational effort by up to 90% with respect to the traditional method. Nevertheless, these approaches are bound to a fixed sound-source/binaural-receptor pair, meaning that they do not take into account dynamic variations in the position of the receptor. In this sense, this work also introduces a conceptual model for a real-time BIR generator that considers a moving binaural receptor using a set of Artificial Neural Networks.
|
[
"Dialogue Response Generation",
"Text Generation"
] |
[
14,
47
] |
http://arxiv.org/abs/cmp-lg/9605025v1
|
A Conceptual Reasoning Approach to Textual Ellipsis
|
We present a hybrid text understanding methodology for the resolution of textual ellipsis. It integrates conceptual criteria (based on the well-formedness and conceptual strength of role chains in a terminological knowledge base) and functional constraints reflecting the utterances' information structure (based on the distinction between context-bound and unbound discourse elements). The methodological framework for text ellipsis resolution is the centering model that has been adapted to these constraints.
|
[
"Reasoning"
] |
[
8
] |
http://arxiv.org/abs/1906.12035v2
|
A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder
|
Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the relations among the multiple heterogeneous segmentation criteria and further improve the performance of each single criterion. Previous work usually regards MCCWS as different tasks, which are learned together under the multi-task learning framework. In this paper, we propose a concise but effective unified model for MCCWS, which is fully-shared for all the criteria. By leveraging the powerful ability of the Transformer encoder, the proposed unified model can segment Chinese text according to a unique criterion-token indicating the output criterion. Besides, the proposed unified model can segment both simplified and traditional Chinese and has an excellent transfer capability. Experiments on eight datasets with different criteria show that our model outperforms our single-criterion baseline model and other multi-criteria models. Source codes of this paper are available on Github https://github.com/acphile/MCCWS.
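A minimal sketch of the criterion-token mechanism this abstract describes: the unified model sees the target criterion only as a special leading token, so one fully shared encoder can segment under any criterion. The token names are illustrative assumptions:

```python
def build_mccws_input(characters, criterion):
    """Prepend a criterion token (e.g. "[PKU]", "[MSR]") so a single
    shared Transformer encoder can segment under any criterion."""
    return [f"[{criterion.upper()}]"] + list(characters)

# The same sentence, routed to two different segmentation criteria:
print(build_mccws_input("下雨天地面积水", "pku"))
print(build_mccws_input("下雨天地面积水", "msr"))
```

Because only the leading token changes, adding a new criterion costs one extra token embedding rather than a new task-specific model.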
|
[
"Language Models",
"Text Segmentation",
"Semantic Text Processing",
"Syntactic Text Processing"
] |
[
52,
21,
72,
15
] |
http://arxiv.org/abs/1108.1966v1
|
A Concise Query Language with Search and Transform Operations for Corpora with Multiple Levels of Annotation
|
The usefulness of annotated corpora is greatly increased if there is an associated tool that allows various kinds of operations to be performed in a simple way. Different kinds of annotation frameworks and many query languages for them have been proposed, including some that deal with multiple layers of annotation. We present here an easy-to-learn query language for a particular kind of annotation framework based on 'threaded trees', which lie somewhere between the complete order of a tree and the anarchy of a graph. Through 'typed' threads, they allow multiple levels of annotation in the same document. Our language has a simple, intuitive, and concise syntax and high expressive power. It not only allows searching for complicated patterns with short queries but also supports data manipulation and the specification of arbitrary return values. Many commonly used tasks that would otherwise require writing programs can be performed with one or more queries. We compare the language with some others and try to evaluate it.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval"
] |
[
52,
72,
24
] |
SCOPUS_ID:85130323481
|
A Concise Review on Automatic Text Summarization
|
Today, data is one of the most important resources humanity has, but understanding such large volumes of text manually is not practically possible, so text summarization was introduced as a problem in natural language processing (NLP). Text summarization is the technique of condensing a long text corpus in a way that does not change its semantics. This paper provides a study of text summarization methods up to Q3 2020. Text summarization methods are broadly classified as abstractive and extractive. More focus is given here to abstractive summarization: a concise review of most text summarization methods to date is provided, along with the evaluation and the advantages and disadvantages of each method. The paper closes with the challenges researchers face in this task and a structured account of the improvements that can be made to each summarization method.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85137804789
|
A Concurrent Intelligent Natural Language Understanding Model for an Automated Inquiry System
|
This work tackles a vital field at the intersection of speech processing and natural language processing: Spoken Language Understanding (SLU). The idea is to understand the essence of machine-directed human speech in order to facilitate its further processing and account for its cognitive impact. The proposed system is CIDIS, a Concurrent Intelligent Model for Dialogue Act Classification, Intent Detection and Slot Filling, which uses a deep concurrent multi-task paradigm to perform the three fundamental tasks of the SLU domain: dialogue act classification, intent detection, and slot filling. Since the model is orchestrated in a multi-task fashion, each task interacts with the others to build a global understanding of the input query. It follows an intelligent encoding strategy that concatenates the query's BERT and CharCNN embeddings to handle the edge cases and ambiguities found in spoken queries. This encoding is passed through a stacked BiLSTM architecture followed by task-specific attention layers. The three supplementary outputs are in turn fed to a final module that generates the expected query response in real time based on the dialogue act, intent, and slots. The developed models are evaluated against standard benchmark datasets such as ATIS, TRAINS, and FRAMES, and the state-of-the-art performances achieved are tabulated.
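A minimal PyTorch sketch of the shared-encoder, three-head multi-task layout described above; the fused input dimension, hidden size, and label inventories are assumptions, and mean pooling stands in for the paper's task-specific attention layers:

```python
import torch
import torch.nn as nn

class CIDISSketch(nn.Module):
    """Multi-task SLU heads over a shared encoding (sketch; dimensions
    and the fused BERT+CharCNN input are assumptions about the paper)."""
    def __init__(self, fused_dim=868, hidden=256,
                 n_acts=10, n_intents=20, n_slots=60):
        super().__init__()
        # Stacked BiLSTM over concatenated BERT + CharCNN embeddings.
        self.encoder = nn.LSTM(fused_dim, hidden, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.act_head = nn.Linear(2 * hidden, n_acts)        # sentence-level
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # sentence-level
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # token-level

    def forward(self, fused_embeddings):           # (batch, seq, fused_dim)
        states, _ = self.encoder(fused_embeddings)
        pooled = states.mean(dim=1)                # stand-in for attention
        return (self.act_head(pooled),             # dialogue act logits
                self.intent_head(pooled),          # intent logits
                self.slot_head(states))            # per-token slot logits
```

In training, the three task losses would be summed so each task informs the shared BiLSTM, which is what lets the tasks "interact" as the abstract describes.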
|
[
"Language Models",
"Low-Resource NLP",
"Semantic Text Processing",
"Information Retrieval",
"Information Extraction & Text Mining",
"Semantic Parsing",
"Speech & Audio in NLP",
"Sentiment Analysis",
"Intent Recognition",
"Multimodality",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
52,
80,
72,
24,
3,
40,
70,
78,
79,
74,
11,
38,
36,
4
] |
http://arxiv.org/abs/2106.10468v1
|
A Condense-then-Select Strategy for Text Summarization
|
Select-then-compress is a popular hybrid framework for text summarization due to its high efficiency. This framework first selects salient sentences and then independently condenses each of the selected sentences into a concise version. However, compressing sentences separately ignores the context information of the document and is therefore prone to deleting salient information. To address this limitation, we propose a novel condense-then-select framework for text summarization. Our framework first concurrently condenses each document sentence. Original document sentences and their compressed versions then become the candidates for extraction. Finally, an extractor utilizes the context information of the document to select candidates and assemble them into a summary. If salient information is deleted during condensing, the extractor can select an original sentence to retain the information. Thus, our framework helps to avoid the loss of salient information while preserving the high efficiency of sentence-level compression. Experimental results on the CNN/DailyMail, DUC-2002, and PubMed datasets demonstrate that our framework outperforms the select-then-compress framework and other strong baselines.
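A minimal sketch of the condense-then-select control flow; `condense` and `score` stand in for the paper's learned compressor and context-aware extractor, and the one-candidate-per-position rule and fixed budget are simplifying assumptions:

```python
def condense_then_select(sentences, condense, score, budget=3):
    """Condense every sentence first, then let an extractor choose among
    originals and compressed versions (sketch of the framework)."""
    candidates = []
    for i, sent in enumerate(sentences):
        candidates.append((i, sent))            # original stays available
        candidates.append((i, condense(sent)))  # compressed alternative
    # Rank candidates using document-level context, keep one per position.
    ranked = sorted(candidates, key=lambda c: score(c[1], sentences),
                    reverse=True)
    chosen, used = [], set()
    for i, text in ranked:
        if i not in used:
            chosen.append((i, text))
            used.add(i)
        if len(chosen) == budget:
            break
    return [text for i, text in sorted(chosen)]  # restore document order
```

The key property the abstract argues for is visible here: if `condense` mangles a salient sentence, its original remains a candidate and can still be selected.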
|
[
"Information Extraction & Text Mining",
"Green & Sustainable NLP",
"Summarization",
"Text Generation",
"Responsible & Trustworthy NLP"
] |
[
3,
68,
30,
47,
4
] |
http://arxiv.org/abs/2108.13303v1
|
A Conditional Cascade Model for Relational Triple Extraction
|
Tagging-based methods are among the mainstream methods for relational triple extraction. However, most of them suffer greatly from the class imbalance issue. Here we propose a novel tagging-based model that addresses this issue from the following two aspects. First, at the model level, we propose a three-step extraction framework that greatly reduces the total number of samples, which implicitly decreases the severity of the mentioned issue. Second, at the intra-model level, we propose a confidence-threshold-based cross entropy loss that can directly neglect some samples in the major classes. We evaluate the proposed model on NYT and WebNLG. Extensive experiments show that it addresses the mentioned issue effectively and achieves state-of-the-art results on both datasets. The source code of our model is available at: https://github.com/neukg/ConCasRTE.
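A minimal sketch of the confidence-threshold cross entropy idea: samples from the major classes that the model already classifies confidently are dropped from the loss, so the minority classes dominate the gradient. The threshold value and the way major classes are passed in are assumptions for illustration, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def confidence_threshold_ce(logits, targets, major_classes, tau=0.95):
    """Cross entropy that neglects already-confident samples of the
    major classes (sketch). `major_classes` is a 1-D tensor of class ids."""
    probs = F.softmax(logits, dim=-1)
    conf = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # p(gold class)
    is_major = torch.isin(targets, major_classes)
    keep = ~(is_major & (conf > tau))          # drop confident major samples
    if keep.sum() == 0:
        return logits.new_zeros(())            # nothing left to learn from
    return F.cross_entropy(logits[keep], targets[keep])

# Usage, with class 0 assumed to be the dominant "no relation" tag:
loss = confidence_threshold_ce(torch.randn(8, 4),
                               torch.randint(0, 4, (8,)),
                               torch.tensor([0]))
```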
|
[
"Relation Extraction",
"Syntactic Text Processing",
"Named Entity Recognition",
"Tagging",
"Information Extraction & Text Mining"
] |
[
75,
15,
34,
63,
3
] |
http://arxiv.org/abs/2106.15760v1
|
A Conditional Splitting Framework for Efficient Constituency Parsing
|
We introduce a generic seq2seq parsing framework that casts constituency parsing problems (syntactic and discourse parsing) into a series of conditional splitting decisions. Our parsing model estimates the conditional probability distribution of possible splitting points in a given text span and supports efficient top-down decoding, which is linear in the number of nodes. The conditional splitting formulation, together with efficient beam-search inference, facilitates structural consistency without relying on expensive structured inference. Crucially, for discourse analysis we show that in our formulation, discourse segmentation can be framed as a special case of parsing, which allows us to perform discourse parsing without requiring segmentation as a prerequisite. Experiments show that our model achieves good results on the standard syntactic parsing tasks both with and without pre-trained representations and rivals state-of-the-art (SoTA) methods that are more computationally expensive than ours. In discourse parsing, our method outperforms the SoTA by a good margin.
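A minimal sketch of the top-down decoding loop, using a greedy split instead of the paper's beam search; `span_scores` stands in for the model's conditional split-point distribution over a span:

```python
def split_top_down(span_scores, left, right, min_len=1):
    """Greedy top-down conditional splitting (sketch): pick the highest
    scoring split point of a span, then recurse on both halves.
    span_scores(l, r) is assumed to return one score per interior
    position l+1 .. r-1 of the span."""
    if right - left <= min_len:
        return (left, right)                       # leaf span
    scores = span_scores(left, right)
    k = left + 1 + max(range(len(scores)), key=scores.__getitem__)
    return ((left, right),
            split_top_down(span_scores, left, k),
            split_top_down(span_scores, k, right))

# Toy scorer that always prefers the middle of the span:
toy = lambda l, r: [-abs(p - (l + r) / 2) for p in range(l + 1, r)]
print(split_top_down(toy, 0, 6))
```

Each node triggers exactly one splitting decision, which is where the linear-in-nodes decoding cost claimed in the abstract comes from.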
|
[
"Semantic Text Processing",
"Semantic Parsing",
"Syntactic Text Processing",
"Discourse & Pragmatics",
"Responsible & Trustworthy NLP",
"Green & Sustainable NLP"
] |
[
72,
40,
15,
71,
4,
68
] |
SCOPUS_ID:85079098988
|
A Configurable Agent to Advance Peers’ Productive Dialogue in MOOCs
|
Chatbot technology can greatly contribute towards the creation of personalized and engaging learning activities. Still, more experimentation is needed on how to integrate and use such agents in real world educational settings and, especially, in large-scale learning environments such as MOOCs. This paper presents the prototype design of a teacher-configurable conversational agent service, aiming to scaffold synchronous collaborative activities in MOOCs. The architecture of the conversational agent system is followed by a pilot evaluation study, which was conducted in the context of postgraduate computer science course on Learning Analytics. The preliminary study findings reveal an overall favorable student opinion as regards the ease of use and user acceptance of the system.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
http://arxiv.org/abs/2107.05876v1
|
A Configurable Multilingual Model is All You Need to Recognize All Languages
|
Multilingual automatic speech recognition (ASR) models have shown great promise in recent years because they simplify model training and deployment. Conventional methods either train a universal multilingual model that uses no language information or train one guided by a 1-hot language ID (LID) vector for the target language. In practice, the user can be prompted to pre-select several languages he or she speaks. The multilingual model without LID cannot fully exploit the language information set by the user, while the multilingual model with LID can handle only one pre-selected language. In this paper, we propose a novel configurable multilingual model (CMM) which is trained only once but can be configured as different models based on users' choices, by extracting language-specific modules together with a universal model from the trained CMM. In particular, a single CMM can be deployed to any user scenario where the users can pre-select any combination of languages. Trained with 75K hours of transcribed anonymized Microsoft multilingual data and evaluated on 10-language test sets, the proposed CMM improves on the universal multilingual model by 26.0%, 16.9%, and 10.4% relative word error reduction when the user selects 1, 2, or 3 languages, respectively. CMM also performs significantly better on code-switching test sets.
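A minimal sketch of the configuration step: after training once, a user-specific model is assembled from the universal parameters plus only the language-specific modules for the user's pre-selected languages. The module representation here is an illustrative assumption, not the paper's actual parameter layout:

```python
def configure_cmm(universal, language_modules, selected_languages):
    """Assemble a deployable model from a trained CMM (sketch): the
    shared universal component plus one module per selected language."""
    return {"universal": universal,
            **{lang: language_modules[lang] for lang in selected_languages}}

# Toy stand-ins for trained components:
modules = {"en": "en-module", "de": "de-module", "zh": "zh-module"}
print(configure_cmm("shared-encoder", modules, ["en", "de"]))
```

The point of the design is that this assembly happens at deployment time, so any combination of pre-selected languages is served from one training run.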
|
[
"Multilinguality"
] |
[
0
] |
http://arxiv.org/abs/2203.00725v3
|
A Conformer Based Acoustic Model for Robust Automatic Speech Recognition
|
This study addresses robust automatic speech recognition (ASR) by introducing a Conformer-based acoustic model. The proposed model builds on the wide residual bi-directional long short-term memory network (WRBN) with utterance-wise dropout and iterative speaker adaptation, but employs a Conformer encoder instead of the recurrent network. The Conformer encoder uses a convolution-augmented attention mechanism for acoustic modeling. The proposed system is evaluated on the monaural ASR task of the CHiME-4 corpus. Coupled with utterance-wise normalization and speaker adaptation, our model achieves $6.25\%$ word error rate, which outperforms WRBN by $8.4\%$ relatively. In addition, the proposed Conformer-based model is $18.3\%$ smaller in model size and reduces total training time by $79.6\%$.
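A minimal sketch of instantiating a Conformer encoder of this kind with torchaudio; the layer count, dimensions, and kernel size are illustrative rather than the paper's configuration, and the call reflects torchaudio's `Conformer` API as I understand it:

```python
import torch
from torchaudio.models import Conformer

# Small Conformer encoder over 80-dim log-mel features (sizes illustrative).
encoder = Conformer(input_dim=80, num_heads=4, ffn_dim=256,
                    num_layers=4, depthwise_conv_kernel_size=31)

feats = torch.randn(2, 400, 80)             # (batch, frames, mel bins)
lengths = torch.tensor([400, 350])          # valid frames per utterance
out, out_lengths = encoder(feats, lengths)  # frame-level acoustic encodings
```

The depthwise convolution inside each block is what gives the "convolution-augmented attention" the abstract refers to; the full system would add utterance-wise normalization and speaker adaptation around this encoder.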
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Robustness in NLP",
"Text Generation",
"Responsible & Trustworthy NLP",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
58,
47,
4,
10,
74
] |
SCOPUS_ID:85145713176
|
A Confounding Discourse Analysis of Vietnamese Sex Workers’ Talk in the City of Kaiyuan, China
|
Background: Vietnamese female sex workers (VFSWs) cross the border into Kaiyuan City, Yunnan Province yearly. However, very little is known about both the health and psychological issues VFSWs experience. The objectives of this study were to explore the dominant discourses that emerged from the VFSWs’ talk. The interviews occurred between May 2018 and June 2018 with 20 VFSWs who worked in Kaiyuan City, China. The English translated transcripts were analyzed using an eclectic feminist method of discourse analysis. Two discourses emerged. First, “Agency when working in Karaoke Bars and other Indoor Venues”, and second, “Negative Impacts on Psychological Well-being and Other Problems from Migration.” As for Discourse 1, the VFSWs positioned themselves as having agency over choosing their clientele as well as agency over what they were willing to negotiate with their clients to establish boundaries of their bodies. As for the Discourse 2, while there was a discourse of agency in their work there was also a contrasting, confounding discourse around the negative impact on psychological well-being and reports of stress as a migrant worker. Discourse 1 and Discourse 2 are confounding. When analyzed together, the discourses suggest that the impacts on psychological well-being may be more related to the migrant status of the women, supporting the notion of systemically influenced agency.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:0032378581
|
A Connectionist Model for Bootstrap Learning of Syllabic Structure
|
We report on a series of experiments with simple recurrent networks (SRNs) solving phoneme prediction in continuous phonemic data. The purpose of the experiments is to investigate whether the network output could function as a source for syllable boundary detection. We show that this is possible, using a generalisation of the network resembling the linguistic sonority principle. We argue that the primary generalisation of the network, that is, the fact that sonority varies in a hat-shaped way across phonemic strings, ending and starting at syllable boundaries, is an indication that sonority might be a major cue in discovering the essential building bricks of language when confronted with unsegmented running speech. The segment which is most directly related to sonority patterns, the syllable, has received considerable attention in psycholinguistics as being an element of natural language that is easily grasped by language learners. The phoneme prediction network presents a simulation of the necessary bootstrap to arrive at the discovery of syllabic segmentation in unsegmented speech, which can be used as a basis for the segmentation of larger structures like words.
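A minimal PyTorch sketch of the kind of simple recurrent network used here for phoneme prediction; the phoneme inventory and hidden size are illustrative. Dips in next-phoneme confidence across the string would then be read as candidate syllable boundaries, in line with the hat-shaped sonority pattern the abstract describes:

```python
import torch
import torch.nn as nn

class PhonemePredictionSRN(nn.Module):
    """Simple recurrent (Elman-style) network predicting the next
    phoneme in continuous phonemic input (sketch)."""
    def __init__(self, n_phonemes=40, hidden=50):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, hidden)
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_phonemes)

    def forward(self, phoneme_ids):            # (batch, seq)
        h, _ = self.rnn(self.embed(phoneme_ids))
        return self.out(h)                     # next-phoneme logits per step
```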
|
[
"Linguistics & Cognitive NLP",
"Speech & Audio in NLP",
"Psycholinguistics",
"Multimodality"
] |
[
48,
70,
77,
74
] |
https://aclanthology.org//W89-0224/
|
A Connectionist Parser Aimed at Spoken Language
|
We describe a connectionist model which learns to parse single sentences from sequential word input. A parse in the connectionist network contains information about role assignment, prepositional attachment, relative clause structure, and subordinate clause structure. The trained network displays several interesting types of behavior. These include predictive ability, tolerance to certain corruptions of input word sequences, and some generalization capability. We report on experiments in which a small number of sentence types have been successfully learned by a network. Work is in progress on a larger database. Application of this type of connectionist model to the area of spoken language processing is discussed.
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
https://aclanthology.org//W90-0103/
|
A Connectionist Treatment of Grammar for Generation: Relying on Emergents
|

|
[
"Text Generation"
] |
[
47
] |
SCOPUS_ID:84891659743
|
A Consensus Based Method for Multi-criteria Group Decision Making Under Uncertain Linguistic Setting
|
A two-stage (a consensus process and a selection process) approach is proposed to solve multi-criteria group decision making problems under an uncertain linguistic environment. Since achieving general consensus is a desirable goal in group decision making, the proposed method first develops a consensus reaching process in order to reach a satisfactory consensus. Based on the partial order of uncertain linguistic variables, the superiority index of one alternative over another for a given criterion and the overall superiority index of one alternative are defined. Then a procedure based on the superiority indices is described to select the best alternative(s). Given the decision makers' desire for a consensus solution, a common framework based on the previous consensus model and the selection process is presented. Finally, a practical application is demonstrated to show the effectiveness of the proposed method. © 2012 Springer Science+Business Media B.V.
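A crude numeric sketch of the selection idea: a pairwise superiority index (here simply the fraction of criteria on which one alternative beats another) aggregated into an overall index per alternative. This replaces the paper's uncertain-linguistic-variable machinery and partial order with plain numbers purely for illustration:

```python
def superiority_index(a_scores, b_scores):
    """Fraction of criteria on which alternative a dominates b (a stand-in
    for the paper's index over uncertain linguistic variables)."""
    wins = sum(1 for a, b in zip(a_scores, b_scores) if a > b)
    return wins / len(a_scores)

def overall_superiority(alternatives):
    """Average pairwise superiority of each alternative over the others."""
    return {name: sum(superiority_index(s, t)
                      for other, t in alternatives.items() if other != name)
                  / (len(alternatives) - 1)
            for name, s in alternatives.items()}

ranking = overall_superiority({"A1": [3, 4, 2], "A2": [2, 5, 1],
                               "A3": [4, 1, 3]})
print(max(ranking, key=ranking.get))   # best alternative under the sketch
```

In the paper this selection step only runs after the consensus process has brought the decision makers' preference relations sufficiently close together.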
|
[
"Indexing",
"Information Retrieval"
] |
[
69,
24
] |
SCOPUS_ID:85024291890
|
A Consideration of Various Theoretical and Clinical Problems Pertaining to “Natural Process Analysis”
|
There have been a considerable number of attempts to apply Natural Phonology, either explicitly or not, to the phonological disorders of children. These attempts, whether they are assessment tests or therapeutic programs, are referred to as “Natural Process Analysis” or “Phonological Process Analysis”. This approach has several defects, however, both theoretical and clinical. In this article I discuss a number of problems pertinent to Natural Process Analysis from the standpoint of linguistics. First, a brief outline of Natural Phonology is provided, followed by an examination based upon phonology. Second, one functionally-disordered system is presented for which Natural Process Analysis fails to give a plausible account, thus illustrating that there is a limitation to this analysis. Finally, on the basis of the discussion, rough guidelines are presented for clinical application of Natural Process Analysis. © 1995, The Japan Society of Logopedics and Phoniatrics. All rights reserved.
|
[
"Phonology",
"Syntactic Text Processing"
] |
[
6,
15
] |
SCOPUS_ID:85115625901
|
A Consolidated Open Knowledge Representation for Multiple Texts
|
We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner. We do so by consolidating OIE extractions using entity and predicate coreference, while modeling information containment between coreferring elements via lexical entailment. We suggest that generating OKR structures can be a useful step in the NLP pipeline, to give semantic applications an easy handle on consolidated information across multiple texts.
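A minimal sketch of the consolidation step: OIE triples whose entities and predicates corefer collapse into a single canonical proposition. The toy lookup stands in for real coreference components, and the lexical-entailment modeling between coreferring elements is omitted:

```python
def consolidate(extractions, entity_cluster, predicate_cluster):
    """Consolidate OIE triples by mapping coreferring entities and
    predicates to cluster representatives (sketch)."""
    merged = set()
    for subj, pred, obj in extractions:
        merged.add((entity_cluster(subj), predicate_cluster(pred),
                    entity_cluster(obj)))
    return merged

# Toy coreference: a lookup with identity fallback.
ents = {"Obama": "Barack Obama", "the president": "Barack Obama"}
triples = [("Obama", "visited", "Paris"),
           ("the president", "visited", "Paris")]
print(consolidate(triples, lambda e: ents.get(e, e), lambda p: p))
# {('Barack Obama', 'visited', 'Paris')} -- two extractions, one fact
```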
|
[
"Semantic Text Processing",
"Representation Learning",
"Open Information Extraction",
"Knowledge Representation",
"Information Extraction & Text Mining"
] |
[
72,
12,
25,
18,
3
] |
http://arxiv.org/abs/1909.10368v1
|
A Consolidated System for Robust Multi-Document Entity Risk Extraction and Taxonomy Augmentation
|
We introduce a hybrid human-automated system that provides scalable entity-risk relation extraction across large data sets. Given an expert-defined keyword taxonomy, entities, and data sources, the system returns text extractions based on bidirectional token distances between entities and keywords, and expands taxonomy coverage with word vector encodings. Our system has a simpler architecture than alerting-focused systems, motivated by high-coverage use cases in the risk mining space such as due diligence activities and intelligence gathering. We provide an overview of the system and expert evaluations for a range of token distances. We demonstrate that single- and multi-sentence distance groups significantly outperform baseline extractions, with shorter single sentences being preferred by analysts. As the taxonomy expands, the amount of relevant information increases and multi-sentence extractions become more preferred, but this is tempered by entity-risk relations becoming more indirect. We discuss the implications of these observations for users, the management of ambiguity and taxonomy expansion, and future system modifications.
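A minimal sketch of the distance-based extraction: report an (entity, keyword) pair whenever the two occur within a token window, together with the enclosing snippet. The window size is an illustrative assumption, and the taxonomy expansion via word-vector neighbours is omitted here:

```python
def risk_extractions(tokens, entities, keywords, max_dist=30):
    """Pairs of (entity, keyword) within max_dist tokens of each other,
    plus the text span between them (sketch; keeps one position per
    token for brevity, so repeated mentions are not handled)."""
    pos = {t: i for i, t in enumerate(tokens)}
    hits = []
    for e in entities:
        for k in keywords:
            if e in pos and k in pos and abs(pos[e] - pos[k]) <= max_dist:
                lo, hi = sorted((pos[e], pos[k]))
                hits.append((e, k, " ".join(tokens[lo:hi + 1])))
    return hits

text = "Acme Corp faces a lawsuit over alleged fraud in its unit".split()
print(risk_extractions(text, ["Acme"], ["lawsuit", "fraud"]))
```

Varying `max_dist` corresponds to the single- versus multi-sentence distance groups whose trade-off the abstract evaluates.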
|
[
"Responsible & Trustworthy NLP",
"Robustness in NLP",
"Information Extraction & Text Mining"
] |
[
4,
58,
3
] |