id (string, length 20-52) | title (string, length 3-459) | abstract (string, length 0-12.3k) | classification_labels (list) | numerical_classification_labels (list) |
---|---|---|---|---|
http://arxiv.org/abs/2109.04593v1
|
A Large-Scale Study of Machine Translation in the Turkic Languages
|
Recent advances in neural machine translation (NMT) have pushed the quality of machine translation systems to the point where they are becoming widely adopted to build competitive systems. However, there is still a large number of languages that are yet to reap the benefits of NMT. In this paper, we provide the first large-scale case study of the practical application of MT in the Turkic language family in order to realize the gains of NMT for Turkic languages under high-resource to extremely low-resource scenarios. In addition to presenting an extensive analysis that identifies the bottlenecks towards building competitive systems to ameliorate data scarcity, our study has several key contributions, including, i) a large parallel corpus covering 22 Turkic languages consisting of common public datasets in combination with new datasets of approximately 2 million parallel sentences, ii) bilingual baselines for 26 language pairs, iii) novel high-quality test sets in three different translation domains and iv) human evaluation scores. All models, scripts, and data will be released to the public.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
https://aclanthology.org//W18-6307/
|
A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation
|
The translation of pronouns presents a special challenge to machine translation to this day, since it often requires context outside the current sentence. Recent work on models that have access to information across sentence boundaries has seen only moderate improvements in terms of automatic evaluation metrics such as BLEU. However, metrics that quantify the overall translation quality are ill-equipped to measure gains from additional context. We argue that a different kind of evaluation is needed to assess how well models translate inter-sentential phenomena such as pronouns. This paper therefore presents a test suite of contrastive translations focused specifically on the translation of pronouns. Furthermore, we perform experiments with several context-aware models. We show that, while gains in BLEU are moderate for those systems, they outperform baselines by a large margin in terms of accuracy on our contrastive test set. Our experiments also show the effectiveness of parameter tying for multi-encoder architectures.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
https://aclanthology.org//W19-5935/
|
A Large-Scale User Study of an Alexa Prize Chatbot: Effect of TTS Dynamism on Perceived Quality of Social Dialog
|
This study tests the effect of cognitive-emotional expression in an Alexa text-to-speech (TTS) voice on users’ experience with a social dialog system. We systematically introduced emotionally expressive interjections (e.g., “Wow!”) and filler words (e.g., “um”, “mhmm”) in an Amazon Alexa Prize socialbot, Gunrock. We tested whether these TTS manipulations improved users’ ratings of their conversation across thousands of real user interactions (n=5,527). Results showed that interjections and fillers each improved users’ holistic ratings, an improvement that further increased if the system used both manipulations. A separate perception experiment corroborated the findings from the user study, with improved social ratings for conversations including interjections; however, no positive effect was observed for fillers, suggesting that the role of the rater in the conversation—as active participant or external listener—is an important factor in assessing social dialogs.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
http://arxiv.org/abs/2103.11528v4
|
A Large-scale Dataset for Hate Speech Detection on Vietnamese Social Media Texts
|
In recent years, Vietnam has witnessed the mass development of social network users on different social platforms such as Facebook, Youtube, Instagram, and Tiktok. On social media, hate speech has become a critical problem for social network users. To address this problem, we introduce the ViHSD - a human-annotated dataset for automatically detecting hate speech on social networks. This dataset contains over 30,000 comments, and each comment in the dataset has one of three labels: CLEAN, OFFENSIVE, or HATE. In addition, we introduce the data creation process for annotating and evaluating the quality of the dataset. Finally, we evaluate the dataset with deep learning models and transformer models.
|
[
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
17,
4
] |
https://aclanthology.org//W03-2902/
|
A Large-scale Inheritance-based Morphological Lexicon for Russian
|
[
"Syntactic Text Processing",
"Morphology"
] |
[
15,
73
] |
|
SCOPUS_ID:85058342190
|
A Large-scale empirical study on linguistic antipatterns affecting APIs
|
The concept of monolithic stand-alone software systems developed completely from scratch has become obsolete, as modern systems nowadays leverage the abundant presence of Application Programming Interfaces (APIs) developed by third parties, which leads on the one hand to accelerated development, but on the other hand introduces potentially fragile dependencies on external resources. In this context, the design of any API strongly influences how developers write code utilizing it. A wrong design decision, like a poorly chosen method name, can lead to a steeper learning curve due to misunderstandings, misuse, and eventually bug-prone code in the client projects using the API. It is not uncommon to find APIs with poorly expressive or misleading names, possibly lacking appropriate documentation. Such issues can manifest in what have been defined in the literature as Linguistic Antipatterns (LAs), i.e., inconsistencies among the naming, documentation, and implementation of a code entity. While previous studies showed the relevance of LAs for software developers, their impact on (developers of) client projects using APIs affected by LAs has not been investigated. This paper fills this gap by presenting a large-scale study conducted on 1.6k releases of popular Maven libraries, 14k open-source Java projects using these libraries, and 4.4k questions related to the investigated APIs asked on Stack Overflow. In particular, we investigate whether developers of client projects have higher chances of introducing bugs when using APIs affected by LAs and if these trigger more questions on Stack Overflow as compared to non-affected APIs.
|
[
"Programming Languages in NLP",
"Multimodality"
] |
[
55,
74
] |
SCOPUS_ID:85136931350
|
A Latent Dirichlet Allocation Technique for Opinion Mining of Online Reviews of Global Chain Hotels
|
The hospitality industry has faced unprecedented challenges with the outbreak of Covid-19, which has changed customers' expectations. Therefore, it is essential to identify customers' new perceptions and expectations that lead to positive and negative opinions towards the service providers. Accordingly, this study aims to perform topic modeling and sentiment analysis on 94,200 online reviews of five global chain hotels in South Asia. Topic modeling, as an unsupervised machine learning text mining technique, can decipher topics from a corpus such as online reviews, online reports, news covers, etc. In this study, the data is extracted from TripAdvisor through web scraping. Topic modeling is performed using Latent Dirichlet Allocation (LDA) on the extracted data set to analyze the key topics mentioned by the customers in the online reviews. The analysis showed that cleanliness, food, staff, and service were the main concerns of the hotel guests. Furthermore, the findings indicated that the main issues impacting the hotel guests were service delays. However, food and services were the keywords with the maximum word count as depicted by topic modeling.
|
[
"Topic Modeling",
"Opinion Mining",
"Information Extraction & Text Mining",
"Sentiment Analysis"
] |
[
9,
49,
3,
78
] |
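As a minimal sketch of the LDA topic-modeling step described in the preceding abstract (not the authors' pipeline), the snippet below fits scikit-learn's LatentDirichletAllocation to a few toy hotel-review strings and prints the top words per topic; the corpus, topic count, and vectorizer settings are illustrative assumptions.

```python
# Illustrative sketch of LDA topic modeling over review-like text (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "room was clean and the staff were friendly",
    "breakfast food was cold and service was slow",
    "great service but the room cleanliness was poor",
    "staff helped quickly, food at the restaurant was excellent",
]

# LDA in scikit-learn expects raw term counts, not TF-IDF.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Read off the top words per topic, mirroring how review themes
# (cleanliness, food, staff, service) would be interpreted.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:5]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```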
http://arxiv.org/abs/1910.13890v3
|
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation
|
Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings.
|
[
"Machine Translation",
"Morphology",
"Syntactic Text Processing",
"Text Generation",
"Multilinguality"
] |
[
51,
73,
15,
47,
0
] |
SCOPUS_ID:44949210319
|
A Latent Semantic Indexing-based approach to multilingual document clustering
|
The creation and deployment of knowledge repositories for managing, sharing, and reusing tacit knowledge within an organization has emerged as a prevalent approach in current knowledge management practices. A knowledge repository typically contains vast amounts of formal knowledge elements, which generally are available as documents. To facilitate users' navigation of documents within a knowledge repository, knowledge maps, often created by document clustering techniques, represent an appealing and promising approach. Various document clustering techniques have been proposed in the literature, but most deal with monolingual documents (i.e., written in the same language). However, as a result of increased globalization and advances in Internet technology, an organization often maintains documents in different languages in its knowledge repositories, which necessitates multilingual document clustering (MLDC) to create organizational knowledge maps. Motivated by the significance of this demand, this study designs a Latent Semantic Indexing (LSI)-based MLDC technique capable of generating knowledge maps (i.e., document clusters) from multilingual documents. The empirical evaluation results show that the proposed LSI-based MLDC technique achieves satisfactory clustering effectiveness, measured by both cluster recall and cluster precision, and is capable of maintaining a good balance between monolingual and cross-lingual clustering effectiveness when clustering a multilingual document corpus. © 2007 Elsevier B.V. All rights reserved.
|
[
"Information Extraction & Text Mining",
"Text Clustering",
"Indexing",
"Cross-Lingual Transfer",
"Information Retrieval",
"Multilinguality"
] |
[
3,
29,
69,
19,
24,
0
] |
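A minimal sketch of the LSI-based clustering idea from the preceding abstract (not the paper's MLDC system): TF-IDF vectors are reduced with truncated SVD and clustered with k-means. The toy documents and cluster count are assumptions; the actual multilingual setting additionally requires training the LSI space on parallel or comparable documents so that terms from different languages map into the same latent space.

```python
# Illustrative LSI-style clustering: TF-IDF -> truncated SVD -> k-means (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "knowledge repositories store and share organizational documents",
    "document clustering groups documents in a knowledge repository",
    "neural networks learn visual features from images",
    "convolutional networks extract image features for vision tasks",
]

tfidf = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsi)
print(labels)  # documents about the same topic should share a cluster id
```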
http://arxiv.org/abs/1502.03520v8
|
A Latent Variable Model Approach to PMI-based Word Embeddings
|
Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
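For context on the PMI-based embeddings the preceding abstract analyzes, the sketch below builds a positive PMI matrix from windowed co-occurrence counts and factorizes it with SVD to obtain low-dimensional word vectors. This is the standard PMI+SVD construction, not the paper's generative model; the toy corpus and window size are assumptions.

```python
# Illustrative PMI + SVD word vectors from a toy corpus.
import numpy as np
from collections import Counter

corpus = "the king rules the kingdom the queen rules the palace".split()
window = 2
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

pair_counts = Counter()
word_counts = Counter(corpus)
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            pair_counts[(w, corpus[j])] += 1

total_pairs = sum(pair_counts.values())
total_words = sum(word_counts.values())
pmi = np.zeros((len(vocab), len(vocab)))
for (w, c), n in pair_counts.items():
    p_wc = n / total_pairs
    p_w = word_counts[w] / total_words
    p_c = word_counts[c] / total_words
    pmi[idx[w], idx[c]] = max(0.0, np.log(p_wc / (p_w * p_c)))  # positive PMI

# Low-rank factorization of the PMI matrix yields embedding-like vectors.
u, s, _ = np.linalg.svd(pmi)
embeddings = u[:, :2] * s[:2]
print(embeddings.shape)  # (|V|, 2)
```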
https://aclanthology.org//W07-2218/
|
A Latent Variable Model for Generative Dependency Parsing
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
|
SCOPUS_ID:85115682482
|
A Latent Variable Model with Hierarchical Structure and GPT-2 for Long Text Generation
|
Variational AutoEncoder (VAE) has made great achievements in the field of text generation. However, current research mainly focuses on short texts, with little attention paid to long texts (more than 20 words). In this paper, we first propose a hidden-variable model based on GPT-2 and a hierarchical structure to generate long text. We use a hierarchical GRU to encode long text and obtain hidden variables. At the same time, to generate text better, we combine the hierarchical structure and GPT-2 in the decoder for the first time. Our model improves Perplexity (PPL), Kullback-Leibler (KL) divergence, Bilingual Evaluation Understudy (BLEU) score, and Self-BLEU. The experiments indicate that the coherence and diversity of sentences generated by our model are better than those of the baseline model.
|
[
"Language Models",
"Semantic Text Processing",
"Text Generation"
] |
[
52,
72,
47
] |
http://arxiv.org/abs/1603.01913v2
|
A Latent Variable Recurrent Neural Network for Discourse Relation Language Models
|
This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences. A recurrent neural network generates individual words, thus reaping the benefits of discriminatively-trained vector representations. The discourse relations are represented with a latent variable, which can be predicted or marginalized, depending on the task. The resulting model can therefore employ a training objective that includes not only discourse relation classification, but also word prediction. As a result, it outperforms state-of-the-art alternatives for two tasks: implicit discourse relation classification in the Penn Discourse Treebank, and dialog act classification in the Switchboard corpus. Furthermore, by marginalizing over latent discourse relations at test time, we obtain a discourse informed language model, which improves over a strong LSTM baseline.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
24,
3
] |
SCOPUS_ID:85068345299
|
A Layered Approach to Automatic Essay Evaluation Using Word-Embedding
|
Automated Essay Evaluation (AEE) uses a set of features to evaluate and score students' essay solutions. Most features, such as lexical similarity, syntax, vocabulary, and shallow content, have been addressed, but evaluating students' essays using the semantics and context of the essay is not addressed well. To address the issues related to semantics and context, we propose a layered approach to AEE which uses neural word embeddings to evaluate student answers semantically, with similarity computed using Word Mover's Distance. We also implemented a plagiarism detection algorithm, using k-shingles and locality-sensitive hashing, to prevent students from submitting someone else's solution as their own. We further implemented an algorithm that penalizes students who try to fool the system by submitting only content-bearing words. The performance of the proposed AEE was evaluated and compared to other state-of-the-art methods qualitatively and quantitatively. The experimental results show that the proposed AEE approach using neural word embeddings achieves a higher level of accuracy compared to other baselines and is promising for evaluating students' essay solutions semantically.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
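A minimal sketch of the Word Mover's Distance similarity step mentioned in the preceding abstract, using gensim's KeyedVectors.wmdistance over pre-trained GloVe vectors; it is not the authors' layered AEE system, and the embedding model name and example sentences are assumptions.

```python
# Illustrative WMD similarity between a reference answer and a student answer.
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-50")  # small pre-trained vectors (downloads once)

reference = "photosynthesis converts light energy into chemical energy".split()
answer = "plants turn sunlight into chemical energy".split()

# Lower distance means the answer is semantically closer to the reference.
distance = kv.wmdistance(reference, answer)
print(f"WMD between reference and answer: {distance:.3f}")
```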
SCOPUS_ID:85139406835
|
A Layered Bridge from Sound to Meaning: Investigating Cross-linguistic Phonosemantic Correspondences
|
The present paper addresses the study of cross-linguistic phonosemantic correspondences within a deep learning framework. An LSTM-based Recurrent Neural Network is trained to associate the phonetic representation of a word, encoded as a sequence of feature vectors, to its corresponding semantic representation in a multilingual and cross-family vector space. The processing network is then tested, without further training, in a language that does not appear in the training set and belongs to a different language family. The performance of the model is evaluated through a comparison with a monolingual and mono-family upper bound and a randomized baseline. After the assessment of the network’s performance, the distribution of phonosemantic properties in the lexicon is inspected in relation to different (psycho)linguistic variables, showing a link between lexical non-arbitrariness and semantic, syntactic, pragmatic, and developmental factors.
|
[
"Psycholinguistics",
"Linguistics & Cognitive NLP"
] |
[
77,
48
] |
http://arxiv.org/abs/0810.1207v1
|
A Layered Grammar Model: Using Tree-Adjoining Grammars to Build a Common Syntactic Kernel for Related Dialects
|
This article describes the design of a common syntactic description for the core grammar of a group of related dialects. The common description does not rely on an abstract sub-linguistic structure like a metagrammar: it consists in a single FS-LTAG where the actual specific language is included as one of the attributes in the set of attribute types defined for the features. When the lang attribute is instantiated, the selected subset of the grammar is equivalent to the grammar of one dialect. When it is not, we have a model of a hybrid multidialectal linguistic system. This principle is used for a group of creole languages of the West-Atlantic area, namely the French-based Creoles of Haiti, Guadeloupe, Martinique and French Guiana.
|
[
"Syntactic Text Processing"
] |
[
15
] |
SCOPUS_ID:85038419616
|
A Layered Language Model based Hybrid Approach to Automatic Full Diacritization of Arabic
|
In this paper we present a system for automatic Arabic text diacritization using three levels of analysis granularity in a layered back-off manner. We build and exploit diacritized language models (LMs) for each of three different levels of granularity: surface form, morphological segmentation into prefix/stem/suffix, and character level. For each of the passes, we use Viterbi search to pick the most probable diacritization per word in the input. We start with the surface-form LM, followed by the morphological level, and finally we leverage the character-level LM. Our system outperforms all of the published systems evaluated against the same training and test data. It achieves a 10.87% WER for complete full diacritization, including lexical and syntactic diacritization, and 3.0% WER for lexical diacritization, ignoring syntactic diacritization.
|
[
"Language Models",
"Semantic Text Processing",
"Syntactic Text Processing",
"Morphology"
] |
[
52,
72,
15,
73
] |
SCOPUS_ID:85014519936
|
A Lean PSS design and evaluation framework supported by KPI monitoring and context sensitivity tools
|
Over the last decade, the Product-Service System (PSS) has been established as a prominent business model which promises sustainability. A great amount of literature has been devoted to PSS issues, but there is fairly limited published work on integrated and easily applicable evaluation methodologies for PSS design, as well as a lack of Lean PSS approaches. Contributing to these directions, the present work introduces a framework for the evaluation and improvement of Lean PSS design using key performance indicators (KPIs), Lean rules, and sentiment analysis, aiming to feed all the stages of the PSS design lifecycle. In the evaluation phase, an appropriate set of KPIs, identified through an intensive literature survey and systematically classified into five main categories (design, manufacturing, customer, environment, and sustainability), is selected from a pool and suggested to the PSS designer via a context-sensitivity analysis (CSA) tool. In the same phase, sentiment analysis is used to identify the polarity of customer opinions regarding the PSS offerings. During the Lean design assistance phase, Lean rules are selected using CSA and suggested to the designer to ensure the minimization of wasteful activities. The enabler for this context awareness is the availability of feedback gathered from manufacturing, shop-floor experts, and the different types of customers (business or final-product consumers), as well as the PSS lifecycle stage which the designer treats. The proposed framework is implemented in a software prototype and applied in a mold-making industrial case study.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85148019776
|
A Learnable Graph Convolutional Neural Network Model for Relation Extraction
|
Relation extraction is the task of extracting the semantic relationships between two named entities in a sentence. The task relies on semantic dependencies relevant to named entities. Recently, graph convolutional neural networks have shown great potential in supporting this task, wherein dependency trees are usually adopted to learn semantic dependencies between entities. However, the requirement of external toolkits to parse sentences poses a problem, owing to their being error-prone. Furthermore, entity relations and parsing structures vary in semantic expressions. Therefore, manually designed rules are required to prune the structure of the dependency trees. This study proposes a novel learnable graph convolutional neural network model (L-GCN) that directly encodes every word of a sentence as a node of a graph neural network. The L-GCN then uses a learnable adjacency matrix to encode dependencies between nodes. The model offers the advantage of automatically learning high-order abstract representations of the semantic dependencies between words. Moreover, a fusion module was designed to aggregate the global and local semantic structure information of sentences. The proposed L-GCN was evaluated on the ACE 2005 English dataset and the Chinese Literature Text Corpus. The experimental results confirmed the effectiveness of L-GCN in learning the semantic dependencies of a relation instance. Moreover, it clearly outperformed previous dependency-tree-based models.
|
[
"Information Extraction & Text Mining",
"Relation Extraction",
"Structured Data in NLP",
"Syntactic Text Processing",
"Syntactic Parsing",
"Multimodality"
] |
[
3,
75,
50,
15,
28,
74
] |
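As an illustration of the core idea in the preceding abstract (a graph convolution whose adjacency matrix is learned rather than derived from a dependency parse), the PyTorch sketch below defines a single layer with a learnable, softmax-normalized adjacency. It is not the published L-GCN; the dimensions, row-softmax normalization, and random inputs are assumptions.

```python
# Illustrative graph-convolution layer with a learnable adjacency matrix.
import torch
import torch.nn as nn

class LearnableGCNLayer(nn.Module):
    def __init__(self, max_len: int, dim: int):
        super().__init__()
        # One learnable score per word pair; a row-softmax turns it into
        # soft "dependencies" between nodes, learned end-to-end.
        self.adj_logits = nn.Parameter(torch.zeros(max_len, max_len))
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):                 # x: (batch, max_len, dim) word encodings
        adj = torch.softmax(self.adj_logits, dim=-1)
        return torch.relu(self.linear(adj @ x))

layer = LearnableGCNLayer(max_len=10, dim=32)
words = torch.randn(2, 10, 32)            # e.g. BiLSTM/BERT outputs for 2 sentences
out = layer(words)
print(out.shape)                           # torch.Size([2, 10, 32])
```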
SCOPUS_ID:85139265579
|
A Learned Label Modulates Object Representations in 10-Month-Old Infants
|
Despite substantial evidence for a bidirectional relationship between language and representation, the roots of this relationship in infancy are not known. The current study explores the possibility that labels may affect object representations at the earliest stages of language acquisition. We asked parents to play with their 10-month-old infants with two novel toys for three minutes, every day for a week, teaching infants a novel word for one toy but not the other. After a week infants participated in a familiarization task in which they saw each object for 8 trials in silence, followed by a test trial consisting of both objects accompanied by the trained word. Infants exhibited a faster decline in looking times to the previously unlabeled object. These data speak to the current debate over the status of labels in human cognition, supporting accounts in which labels are an integral part of representation.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:84861894794
|
A Learning to Rank framework applied to text-image retrieval
|
We present a framework based on a Learning to Rank setting for a text-image retrieval task. In Information Retrieval, the goal is to compute the similarity between a document and a user query. In the context of text-image retrieval, where several similarities exist, human intervention is often needed to decide how to combine them. With the Learning to Rank approach, on the other hand, the combination of the similarities is done automatically. Learning to Rank is a paradigm where the learnt objective function is able to produce a ranked list of images when a user query is given. These score functions are generally a combination of similarities between a document and a query. In the past, Learning to Rank algorithms were successfully applied to text retrieval, where they outperformed baselines such as BM25 or TF-IDF. This inspired us to apply our state-of-the-art algorithm, called OWPC (Usunier et al. 2009), to the text-image retrieval task. At this time, no benchmarks are available; therefore, we present a framework for building one. The empirical validation of this algorithm is done on the dataset constructed through comparison of typical text-image retrieval similarities. In both cases, visual only and text and visual, our algorithm performs better than a simple baseline. © 2011 Springer Science+Business Media, LLC.
|
[
"Visual Data in NLP",
"Information Retrieval",
"Multimodality"
] |
[
20,
24,
74
] |
SCOPUS_ID:85138728985
|
A Learning-to-Rank Approach for Spare Parts Consumption in the Repair Process
|
The repair process of devices is an important part of the business of many original equipment manufacturers. The consumption of spare parts, during the repair process, is driven by the defects found during inspection of the devices, and these parts are a big part of the costs in the repair process. But current Supply Chain Control Tower solutions do not provide support for the automatic check of spare parts consumption in the repair process. In this paper, we investigate a multi-label classification problem and present a learning-to-rank approach, where we simulate the passage of time while training hundreds of Logistic Regression Machine Learning models to provide an automatic check in the consumption of spare parts. The results show that the trained models can achieve a mean NDCG@20 score of 81% when ranking the expected parts, while also marking a low volume of 10% of the consumed parts for alert generation. We briefly discuss how these marked parts can be aggregated and combined with additional data to generate more fine-grained alerts.
|
[
"Passage Retrieval",
"Information Retrieval"
] |
[
66,
24
] |
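Since the preceding abstract reports a mean NDCG@20 of 81%, the sketch below shows how NDCG@k is computed for one ranked list of spare parts with binary relevance (1 = part actually consumed). The ranking and relevance labels are made up; this is the standard metric, not the paper's evaluation code.

```python
# Illustrative NDCG@k for a single ranked list with binary relevance.
import numpy as np

def ndcg_at_k(relevances, k):
    """relevances: relevance of the items in predicted rank order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.sum() == 0:
        return 0.0
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * (1.0 / np.log2(np.arange(2, ideal.size + 2)))).sum())
    return dcg / idcg

# Ranked parts where positions 1, 3 and 5 were actually consumed.
print(round(ndcg_at_k([1, 0, 1, 0, 1, 0, 0, 0], k=20), 3))
```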
https://aclanthology.org//1997.iwpt-1.20/
|
A Left-to-right Tagger for Word Graphs
|
An algorithm is presented for tagging input word graphs and producing output tag graphs that are to be subjected to further syntactic processing. It is based on an extension of the basic HMM equations for tagging an input word string that allows it to handle word-graph input, where each arc has been assigned a probability. The scenario is that of some word-graph source, e.g., an acoustic speech recognizer, producing the arcs of a word graph, and the tagger will in turn produce output arcs, labelled with tags and assigned probabilities. The processing is done entirely left-to-right, and the output tag graph is constructed using a minimum of lookahead, facilitating real-time processing.
|
[
"Structured Data in NLP",
"Syntactic Text Processing",
"Syntactic Parsing",
"Tagging",
"Multimodality"
] |
[
50,
15,
28,
63,
74
] |
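The preceding abstract extends the basic HMM tagging equations to word-graph input; as background, the sketch below shows plain first-order HMM Viterbi decoding over a word string, which is the case those equations generalize. The tag set and probabilities are toy values, not taken from the paper.

```python
# Illustrative first-order HMM Viterbi tagging over a word string (toy model).
import math

states = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {
    "DET":  {"DET": 0.05, "NOUN": 0.90, "VERB": 0.05},
    "NOUN": {"DET": 0.10, "NOUN": 0.30, "VERB": 0.60},
    "VERB": {"DET": 0.50, "NOUN": 0.40, "VERB": 0.10},
}
emit = {
    "DET":  {"the": 0.9, "dog": 0.0, "barks": 0.0},
    "NOUN": {"the": 0.0, "dog": 0.8, "barks": 0.2},
    "VERB": {"the": 0.0, "dog": 0.1, "barks": 0.9},
}

def viterbi(words):
    # delta[s]: best log-probability of any tag sequence ending in state s
    delta = {s: math.log(start[s] * emit[s][words[0]] + 1e-12) for s in states}
    backpointers = []
    for w in words[1:]:
        new_delta, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: delta[p] + math.log(trans[p][s] + 1e-12))
            new_delta[s] = delta[best_prev] + math.log(trans[best_prev][s] * emit[s][w] + 1e-12)
            ptr[s] = best_prev
        delta, backpointers = new_delta, backpointers + [ptr]
    # Recover the best path by following back-pointers from the best final state.
    tag = max(states, key=delta.get)
    path = [tag]
    for ptr in reversed(backpointers):
        tag = ptr[tag]
        path.append(tag)
    return list(reversed(path))

print(viterbi(["the", "dog", "barks"]))  # expected: ['DET', 'NOUN', 'VERB']
```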
http://arxiv.org/abs/2004.03422v3
|
A Legal Approach to Hate Speech: Operationalizing the EU's Legal Framework against the Expression of Hatred as an NLP Task
|
We propose a 'legal approach' to hate speech detection by operationalization of the decision as to whether a post is subject to criminal law into an NLP task. Comparing existing regulatory regimes for hate speech, we base our investigation on the European Union's framework as it provides a widely applicable legal minimum standard. Accurately judging whether a post is punishable or not usually requires legal training. We show that, by breaking the legal assessment down into a series of simpler sub-decisions, even laypersons can annotate consistently. Based on a newly annotated dataset, our experiments show that directly learning an automated model of punishable content is challenging. However, learning the two sub-tasks of `target group' and `targeting conduct' instead of an end-to-end approach to punishability yields better results. Overall, our method also provides decisions that are more transparent than those of end-to-end models, which is a crucial point in legal decision-making.
|
[
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
17,
4
] |
SCOPUS_ID:85126392579
|
A Legal Question Answering System Based on BERT
|
With the development of artificial intelligence technology, intelligent question-answering systems in general domains have been widely accepted by people. However, the development of intelligent question-answering systems in restricted domains is not yet satisfactory. Moreover, due to the diversification of Chinese expressions, matching user input questions with prior questions is very important. This paper proposes a scheme to obtain question vector representations based on the BERT model. In addition, the Milvus vector search engine is used in this paper, which can not only store vector representations but also calculate vector similarity. Finally, we return the answer through the database. When the threshold value of our proposed scheme is 0.2, the recall rate reaches 86%, and the mismatch rate reaches 84%. The results verify that the system has relatively good performance.
|
[
"Language Models",
"Semantic Text Processing",
"Question Answering",
"Representation Learning",
"Natural Language Interfaces"
] |
[
52,
72,
27,
12,
11
] |
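A minimal sketch of the retrieval step the preceding abstract describes: encode a user question and a bank of prior questions with a BERT encoder and pick the most similar prior question by cosine similarity. The model name, mean pooling, English example questions, and the plain NumPy search (standing in for the Milvus engine the paper uses) are all assumptions for illustration.

```python
# Illustrative BERT question encoding + nearest-neighbour lookup (toy question bank).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (n, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    vectors = (hidden * mask).sum(1) / mask.sum(1)          # mean pooling over tokens
    return torch.nn.functional.normalize(vectors, dim=-1).numpy()

prior_questions = [
    "Can a landlord keep the security deposit without reason?",
    "How do I terminate an employment contract early?",
]
user_question = "My landlord refuses to return my deposit, what can I do?"

bank = encode(prior_questions)
query = encode([user_question])[0]
similarities = bank @ query          # cosine similarity, since vectors are unit-length
best = int(np.argmax(similarities))
print(prior_questions[best], float(similarities[best]))
```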
SCOPUS_ID:85103445551
|
A Legal Question Answering Ontology-Based System
|
Question-answering systems (QASs) aim to provide a relevant and concise answer to questions asked in natural language by a user. In this article, we describe our method of developing a question-answering system operating in the legal domain in Morocco, which mostly uses the French and Arabic languages, and sometimes English. Its purpose is to give relevant and concise answers to questions in the legal domain, stated in natural language by a user, without the user having to go through the legal documents to find an answer. The implementation of the proposed system is based on three processes: the first consists of modeling the legal domain knowledge with an ontology that is both (i) independent of the language and (ii) capable of supporting several languages. The second consists of extracting the RDF triplet components from the user's question. The third consists of reformulating the question as one or more SPARQL queries with which we can query the ontology and thus retrieve the appropriate answer to the question asked by the user.
|
[
"Natural Language Interfaces",
"Knowledge Representation",
"Semantic Text Processing",
"Question Answering"
] |
[
11,
18,
72,
27
] |
http://arxiv.org/abs/1403.5596v1
|
A Lemma Based Evaluator for Semitic Language Text Summarization Systems
|
Matching texts in highly inflected languages such as Arabic by a simple stemming strategy is unlikely to perform well. In this paper, we present an automatic text matching technique for inflectional languages, using Arabic as the test case. The system is an extension of the ROUGE test in which texts are matched at the token lemma level. The experimental results show an enhancement in detecting similarities between different sentences having the same semantics but written in different lexical forms.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
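To illustrate the lemma-level matching idea in the preceding abstract, the sketch below computes a ROUGE-1-style recall over lemmas instead of surface tokens. The tiny English lemma dictionary stands in for a real Arabic lemmatizer and is purely an assumption for the example.

```python
# Illustrative ROUGE-1-style recall computed over lemmas rather than surface tokens.
from collections import Counter

lemma_map = {"cats": "cat", "cat": "cat", "sat": "sit", "sitting": "sit",
             "mats": "mat", "mat": "mat", "the": "the", "on": "on", "a": "a"}

def lemmas(text):
    return [lemma_map.get(tok, tok) for tok in text.lower().split()]

def rouge1_recall(reference, candidate):
    ref, cand = Counter(lemmas(reference)), Counter(lemmas(candidate))
    overlap = sum(min(n, cand[tok]) for tok, n in ref.items())
    return overlap / max(1, sum(ref.values()))

# Same meaning, different surface forms: lemma-level matching still scores the overlap.
print(rouge1_recall("the cat sat on the mat", "a cat sitting on mats"))
```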
SCOPUS_ID:85130464116
|
A Lemmatizer for Low-resource Languages: WSD and Its Role in the Assamese Language
|
The morphological variations of highly inflected languages that appear in a text impede computer processing and root-word determination tasks while extracting an abstract. As a remedy to this difficulty, a lemmatization algorithm is developed, and its effectiveness is evaluated for Word Sense Disambiguation (WSD). Having observed its usefulness, the lemmatizer is considered for developing Natural Language Processing tools for languages rich in morphological variations. Assamese, spoken by over 14 million people in the North-Eastern region of India, is one of many highly inflected Indian languages. In the present work, after a detailed study of the possible transformations through which surface words are created from lemmas, we have designed an Assamese lemmatizer in such a manner that suitable reverse transformations can be employed on a surface word to derive the corresponding lemma. We have observed that the lemmatizer is competent to deal with inflectional and derivational morphology in Assamese. It was evaluated on various Assamese articles extracted from the Assamese Corpus, consisting of 50,000 surface words (excluding proper nouns), and the result of 82% accuracy is quite encouraging, as Assamese is a low-resource language and no prior research has been done on lemmatization of Assamese words. Given this result, the lemmatizer is then evaluated for Assamese WSD. For this purpose, 10 highly polysemous Assamese words are taken into account for sense disambiguation. We have also considered several WSD systems and observed that the lemmatizer enhances the effectiveness of all of them, and the improvement is statistically significant.
|
[
"Low-Resource NLP",
"Semantic Text Processing",
"Word Sense Disambiguation",
"Morphology",
"Syntactic Text Processing",
"Responsible & Trustworthy NLP"
] |
[
80,
72,
65,
73,
15,
4
] |
http://arxiv.org/abs/2212.10554v1
|
A Length-Extrapolatable Transformer
|
Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define attention resolution as an indicator of extrapolation. Then we propose two designs to improve the above metric of Transformers. Specifically, we introduce a relative position embedding to explicitly maximize attention resolution. Moreover, we use blockwise causal attention during inference for better resolution. We evaluate different Transformer variants with language modeling. Experimental results show that our model achieves strong performance in both interpolation and extrapolation settings. The code will be available at https://aka.ms/LeX-Transformer.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85113797511
|
A Levenshtein distance based implementation of lost character prediction, imputation and tagging in Malayalam palm leaf bundles
|
Character prediction, which predicts the next probable character given a character or a sequence of characters, is one of the significant tasks in Natural Language Processing. These tasks are most relevant for predicting characters written in old documents that may be lost due to aging, insect bites, fungus, etc. From the early stages of NLP research, numerous frameworks have been proposed for prediction tasks in various languages. There has been significant progress in this field in many languages, but not in the Malayalam language. For many reasons there is a lot of missing content, which makes these repositories unsuitable for exploitation. So, in this paper, a Levenshtein distance based model, which is independent of the grammatical structure of the language, is developed for lost character prediction and tagging in old Malayalam palm leaf manuscript bundles. The model is able to give an accuracy of 81.02% for the palm leaf manuscript bundles considered in the experiment.
|
[
"Tagging",
"Syntactic Text Processing"
] |
[
63,
15
] |
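The preceding abstract's model is built around Levenshtein distance; the sketch below shows the standard dynamic-programming implementation and how it can rank dictionary candidates for a word with a lost character. The candidate words are illustrative, not from the paper.

```python
# Illustrative dynamic-programming Levenshtein distance and candidate ranking.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A damaged word (missing character) is closest to its intact dictionary entry.
candidates = ["manuscript", "manuscripts", "transcript"]
damaged = "manscript"
print(min(candidates, key=lambda w: levenshtein(damaged, w)))  # "manuscript"
```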
SCOPUS_ID:85146237305
|
A Levenshtein distance-based method for word segmentation in corpus augmentation of geoscience texts
|
For geoscience text, rich domain corpora have become the basis of improving model performance in word segmentation. However, the lack of domain-specific corpora with annotated labels has become a major obstacle to professional information mining in geoscience fields. In this paper, we propose a corpus augmentation method based on Levenshtein distance. Using this technique, a geoscience dictionary of 20,137 words was collected and constructed by crawling the keywords from published papers in the China National Knowledge Infrastructure (CNKI). The dictionary was further used as the main source of synonyms to enrich the geoscience corpus according to the Levenshtein distance between words. Finally, a Chinese word segmentation model combining BERT, a bidirectional gated recurrent unit (Bi-GRU), and conditional random fields (CRF) was implemented. A geoscience corpus composed of long, complex, domain-specific vocabulary was selected to test the proposed word segmentation framework. CNN-LSTM, Bi-LSTM-CRF, and Bi-GRU-CRF models were all selected to evaluate the effects of the Levenshtein data augmentation technique. Experimental results show that the proposed methods achieve a significant performance improvement of more than 10%. The approach has great potential for natural language processing tasks like named entity recognition and relation extraction.
|
[
"Language Models",
"Text Segmentation",
"Semantic Text Processing",
"Syntactic Text Processing"
] |
[
52,
21,
72,
15
] |
SCOPUS_ID:85118139881
|
A Lexical Analysis Algorithm for the Translation System of Germany and China Under Information Technology Education
|
This paper proposes a rule-based lexical analysis algorithm for German-Chinese machine translation. The algorithm can not only effectively restore the original morphemes of various inflected words, but also provide useful part-of-speech and other grammatical features for the subsequent parsing mechanism in the system. Through the part-of-speech information associated with specific morphological changes, we can extract only the part-of-speech information in the original dictionary definition of the inflected word and its corresponding dictionary entry, so as to facilitate the analysis and processing of the word.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
https://aclanthology.org//W97-1105/
|
A Lexical Database Tool for Quantitative Phonological Research
|
[
"Phonology",
"Syntactic Text Processing"
] |
[
6,
15
] |
|
SCOPUS_ID:85031745039
|
A Lexical Updating Algorithm for Sentiment Analysis on Chinese Movie Reviews
|
With the prevalence of the Internet, sentiment analysis has gained popularity around the world. Researchers have made use of various kinds of online documents, such as commodity reviews and movie reviews, as training samples to train models and classifiers that can infer the underlying emotion in new ones. Douban is a Chinese online community where users share personal reviews to express their feelings about movies. We utilized these Chinese movie reviews to train our lexicon-based model. Yet many words in a ready-made lexicon do not agree with movie reviews in a specific domain, which means the original lexicon requires updating to gain higher accuracy. In this paper we introduce a lexical updating algorithm based on a widely used lexicon. After several rounds of updating, this lexicon is capable of classifying sentiment in movie reviews. The experimental results show that our model using the updated lexicon achieves better performance than the original lexicon-based model.
|
[
"Sentiment Analysis"
] |
[
78
] |
http://arxiv.org/abs/1909.08349v1
|
A Lexical, Syntactic, and Semantic Perspective for Understanding Style in Text
|
With a growing interest in modeling inherent subjectivity in natural language, we present a linguistically-motivated process to understand and analyze the writing style of individuals from three perspectives: lexical, syntactic, and semantic. We discuss the stylistically expressive elements within each of these levels and use existing methods to quantify the linguistic intuitions related to some of these elements. We show that such a multi-level analysis is useful for developing a well-knit understanding of style - which is independent of the natural language task at hand, and also demonstrate its value in solving three downstream tasks: authors' style analysis, authorship attribution, and emotion prediction. We conduct experiments on a variety of datasets, comprising texts from social networking sites, user reviews, legal documents, literary books, and newswire. The results on the aforementioned tasks and datasets illustrate that such a multi-level understanding of style, which has been largely ignored in recent works, models style-related subjectivity in text and can be leveraged to improve performance on multiple downstream tasks both qualitatively and quantitatively.
|
[
"Syntactic Text Processing"
] |
[
15
] |
SCOPUS_ID:85142877389
|
A Lexicon Enhanced Collaborative Network for targeted financial sentiment analysis
|
The increasing interest around emotions in online texts creates demand for financial sentiment analysis. Previous studies mainly focus on coarse-grained document-/sentence-level sentiment analysis, which ignores the different sentiment polarities of various targets (e.g., company entities) in a sentence. To fill the gap, from a fine-grained target-level perspective, we propose a novel Lexicon Enhanced Collaborative Network (LECN) for targeted sentiment analysis (TSA) in financial texts. In general, the model designs a unified and collaborative framework that can capture the associations of targets and sentiment cues to enhance the overall performance of TSA. Moreover, the model dynamically incorporates sentiment lexicons to guide the sentiment classification, which cultivates the model's ability to understand financial expressions. In addition, the model introduces a message selective-passing mechanism to adaptively control the information flow between two tasks, thereby improving the collaborative effects. To verify the effectiveness of LECN, we conduct experiments on four financial datasets, including SemEval 2017 Task 5 subset 1, SemEval 2017 Task 5 subset 2, FiQA 2018 Task 1, and Financial PhraseBank. Results show that LECN achieves improvements over the state-of-the-art baseline of 1.66 p.p., 1.47 p.p., 1.94 p.p., and 1.88 p.p. in terms of F1-score. A series of further analyses also indicates that LECN has a better capacity for comprehending domain-specific expressions and can achieve a mutually beneficial effect between tasks.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85058012934
|
A Lexicon Generation Method for Aspect-Based Opinion Mining
|
Opinion mining is the task of analyzing written text about an entity to determine the expressed sentiment toward that entity. This paper presents a lexicon generation approach to determine the polarity of the aspects described in reviews written on e-commerce websites. The method we present here is an extension of a previous method, FBSA, which rapidly calculates scores for words in reviews. That method was designed for tweet-level problems. The main contribution of this paper is altering the score generation algorithm of the aforementioned method and adding new features to adapt it to aspect-based problems. The results show that our method produces a lexicon that performs better than reputed lexicons such as AFINN and Bing Liu's lexicon on the SemEval-2014 dataset, by a minimum margin of 4.2 percentage points in the laptop domain and 1.2 percentage points in the restaurant domain on F-measure.
|
[
"Opinion Mining",
"Sentiment Analysis"
] |
[
49,
78
] |
SCOPUS_ID:84863429211
|
A Lexicon based sentiment analysis retrieval system for tourism domain
|
Sentiment analysis has been extensively investigated in recent years, mainly for the English language. Currently, existing approaches can be split into two main groups: methods based on the combination of lexical resources and Natural Language Processing (NLP) techniques, and machine learning approaches. This paper introduces the use of lexical databases for sentiment analysis of user reviews in Spanish for the accommodation and food and beverage sectors. A global sentiment score is calculated based on the negative and positive words that appear in the review, using the aforementioned lexicon database. The algorithm has been tested with short online user reviews acquired from TripAdvisor.
|
[
"Information Retrieval",
"Sentiment Analysis"
] |
[
24,
78
] |
http://arxiv.org/abs/cmp-lg/9705011v1
|
A Lexicon for Underspecified Semantic Tagging
|
The paper defends the notion that semantic tagging should be viewed as more than disambiguation between senses. Instead, semantic tagging should be a first step in the interpretation process by assigning each lexical item a representation of all of its systematically related senses, from which further semantic processing steps can derive discourse dependent interpretations. This leads to a new type of semantic lexicon (CoreLex) that supports underspecified semantic tagging through a design based on systematic polysemous classes and a class-based acquisition of lexical knowledge for specific domains.
|
[
"Tagging",
"Syntactic Text Processing"
] |
[
63,
15
] |
SCOPUS_ID:85060036724
|
A Lexicon-Based Sentiment Analysis for Amazon Web Review
|
The Internet has developed rapidly over time, and so has e-commerce; amazon.com is one example. Amazon.com is one of the largest e-commerce sites in the world, providing various goods that can be accessed over the Internet. A review feature is provided so that amazon.com can learn the various responses from consumers. However, amazon.com has difficulty summarizing the many kinds of reviews as positive or negative. The main purpose of using natural language processing here is to help amazon.com understand consumer responses in order to improve service quality. In this research we use a dataset obtained from the UCI Machine Learning Repository containing 1,000 records (478 negative and 522 positive), combine a variety of classification methods for comparison, and add a lexicon technique to the preprocessing stage to improve its quality. The results show that K-Nearest Neighbor with the lexicon technique achieves the highest accuracy at 92.67%, followed by SVM with the lexicon at 91.33%, and finally Decision Tree at 82% accuracy.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:84959463902
|
A Lexicon-Grammar Based Methodology for Ontology Population for e-Health Applications
|
Nowadays, the need for well-structured ontologies in the medical domain is rising, especially due to the significant support these ontologies bring to a number of groundbreaking applications, such as intelligent medical diagnosis systems and decision-support systems. Indeed, the considerable production of clinical data belonging to restricted subdomains has stressed the need for efficient methodologies to automatically process enormous amounts of unstructured, domain-specific information in order to make use of the knowledge these data provide. In this work, we propose a lexicon-grammar based methodology for efficient information extraction and retrieval on unstructured medical records in order to enrich a simple ontology descriptive of such documents. We describe the NLP methodology for extracting RDF triples from unstructured medical records, and show how an existing ontology built by a domain expert can be populated with the set of triples and then enriched through its linking to external resources.
|
[
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Green & Sustainable NLP",
"Knowledge Representation",
"Responsible & Trustworthy NLP"
] |
[
3,
72,
68,
18,
4
] |
SCOPUS_ID:38349085674
|
A Lexicon-Guided LSI Method for Semantic News Video Retrieval
|
Many researchers try to utilize the semantic information extracted from visual features to directly realize semantic video retrieval or to supplement automated speech recognition (ASR) text retrieval. But bridging the gap between low-level visual features and semantic content is still a challenging task. In this paper, we study how to effectively use Latent Semantic Indexing (LSI) to improve semantic video retrieval through the ASR texts. The basic LSI method has been shown to be effective in traditional text retrieval and in noisy ASR text retrieval. In this paper, we further use lexicon-guided semantic clustering to effectively remove the noise introduced by a news video's additional content, and use cluster-based LSI to automatically mine the semantic structure underlying the term expressions. Tests on the TRECVID 2005 dataset show that these two enhancements achieve 21.3% and 6.9% improvements in performance over the traditional vector-space model (VSM) and the basic LSI, respectively. © Springer-Verlag Berlin Heidelberg 2007.
|
[
"Visual Data in NLP",
"Information Extraction & Text Mining",
"Speech & Audio in NLP",
"Text Generation",
"Text Clustering",
"Speech Recognition",
"Information Retrieval",
"Multimodality"
] |
[
20,
3,
70,
47,
29,
10,
24,
74
] |
SCOPUS_ID:85029415301
|
A Lexicon-based text classification model to analyse and predict sentiments from online reviews
|
The ubiquity of the Internet has given a multitude of people a way to connect with each other beyond space and time. This has led to the colossal usage, and hence popularity, of online media as a platform to share opinions, exchange ideas, raise questions, show contempt, and post many other sorts of reviews. Thus the web is transforming into a huge repository of textual data, which can be classified to fathom the sentiment and emotional state of an online user. There are many text classification algorithms proposed by different researchers that take in text from online media and, after text preprocessing, mining, and classification, predict the sentiment of a user. In this paper, we propose a Lexicon-based text classification algorithm that is used to analyze and predict a user's sentiment polarity, viz. positive, negative, and neutral, from online reviews. Our algorithm differs from other Lexicon-based algorithms in that it uses the three degrees of comparison, viz. positive, comparative, and superlative, for each of the positive and negative sentiment words. Further, we have used negation words to show how the accuracy of the system can be improved.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
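A minimal sketch of the kind of lexicon-based scoring with degrees of comparison and negation handling that the preceding abstract describes; the word lists, weights, and flip-on-negation rule are made-up placeholders, not the authors' algorithm.

```python
# Illustrative lexicon-based polarity scoring with weighted degrees and negation.
POSITIVE = {"good": 1.0, "better": 1.5, "best": 2.0, "great": 1.5}
NEGATIVE = {"bad": -1.0, "worse": -1.5, "worst": -2.0, "poor": -1.0}
NEGATORS = {"not", "never", "no"}

def polarity(review: str) -> str:
    score, negate = 0.0, False
    for tok in review.lower().split():
        if tok in NEGATORS:
            negate = True            # flip the polarity of the next sentiment word
            continue
        weight = POSITIVE.get(tok, 0.0) + NEGATIVE.get(tok, 0.0)
        if weight:
            score += -weight if negate else weight
            negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("the camera is not good but the battery is the best"))  # positive
```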
http://arxiv.org/abs/2205.00716v1
|
A Library Perspective on Nearly-Unsupervised Information Extraction Workflows in Digital Libraries
|
Information extraction can support novel and effective access paths for digital libraries. Nevertheless, designing reliable extraction workflows can be cost-intensive in practice. On the one hand, suitable extraction methods rely on domain-specific training data. On the other hand, unsupervised and open extraction methods usually produce non-canonicalized extraction results. This paper tackles the question of how digital libraries can handle such extractions and whether their quality is sufficient in practice. We focus on unsupervised extraction workflows by analyzing them in case studies in the domains of encyclopedias (Wikipedia), pharmacy and political sciences. We report on opportunities and limitations. Finally, we discuss best practices for unsupervised extraction workflows.
|
[
"Low-Resource NLP",
"Responsible & Trustworthy NLP",
"Information Extraction & Text Mining"
] |
[
80,
4,
3
] |
SCOPUS_ID:85018319146
|
A Lifelong Learning Topic Model Structured Using Latent Embeddings
|
We propose a latent-embedding-structured lifelong learning topic model, called the LLT model, to discover coherent topics from a corpus. Specifically, we exploit latent word embeddings to structure our model and mine word correlation knowledge to assist in topic modeling. During each learning iteration, our model learns new word embeddings based on the topics generated in the previous learning iteration. Experimental results demonstrate that our LLT model is able to generate more coherent topics than state-of-the-art methods.
|
[
"Topic Modeling",
"Information Extraction & Text Mining",
"Semantic Text Processing",
"Representation Learning"
] |
[
9,
3,
72,
12
] |
SCOPUS_ID:85082305505
|
A Lifelong Sentiment Classification Framework Based on a Close Domain Lifelong Topic Modeling Method
|
In lifelong machine learning, determining the hypotheses related to the current task is very valuable, since it reduces the space in which to look for the knowledge patterns that support solving the current task. However, there are few studies of this problem. In this paper, we propose definitions for measuring which domains are close to the current domain, and a lifelong sentiment classification method that uses these close domains for topic modeling of the current domain. Experimental results on sentiment datasets of product reviews from Amazon.com show the promising performance of the system and the effectiveness of our approach.
|
[
"Topic Modeling",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
9,
36,
78,
24,
3
] |
SCOPUS_ID:85107269921
|
A Light Arabic POS Tagger Using a Hybrid Approach
|
Part-of-speech (POS) tagging is the task of computationally determining which POS of a word is activated by its use in a particular context. It is a useful preprocessing tool in many natural language processing (NLP) applications. In this paper, we present a new Arabic POS tagger based on the combination of two main modules: a first-order Markov model and a decision tree model. These two modules improve on existing POS taggers by adding the possibility of tagging unknown words. The tag set used for this POS tagger is an elementary tag set composed of 4 tags {noun, verb, particle, punctuation}, which is sufficient for some NLP applications and greatly helps increase the accuracy. The POS tagger has been trained on the NEMLAR corpus. The experimental results demonstrate its efficiency, with an overall accuracy of 98% for the full system.
|
[
"Tagging",
"Syntactic Text Processing"
] |
[
63,
15
] |
SCOPUS_ID:85146937090
|
A Light Bug Triage Framework for Applying Large Pre-trained Language Model
|
Assigning appropriate developers to bugs is one of the main challenges in bug triage. Demand for automatic bug triage is increasing in industry, as manual bug triage is labor-intensive and time-consuming in large projects. The key to the bug triage task is extracting semantic information from a bug report. In recent years, large Pre-trained Language Models (PLMs), including BERT [4], have achieved dramatic progress in the natural language processing (NLP) domain. However, applying large PLMs to the bug triage task for extracting semantic information has several challenges. In this paper, we address these challenges and propose a novel framework for bug triage named LBT-P, standing for Light Bug Triage framework with a Pre-trained language model. It compresses a large PLM into small and fast models using knowledge distillation techniques and also prevents catastrophic forgetting of the PLM by introducing knowledge preservation fine-tuning. We also develop a new loss function exploiting representations of earlier layers as well as deeper layers in order to handle the overthinking problem. We demonstrate our proposed framework on a real-world private dataset and three public real-world datasets [11]: Google Chromium, Mozilla Core, and Mozilla Firefox. The results of the experiments show the superiority of LBT-P.
|
[
"Language Models",
"Responsible & Trustworthy NLP",
"Semantic Text Processing",
"Green & Sustainable NLP"
] |
[
52,
4,
72,
68
] |
http://arxiv.org/abs/1509.05517v1
|
A Light Sliding-Window Part-of-Speech Tagger for the Apertium Free/Open-Source Machine Translation Platform
|
This paper describes a free/open-source implementation of the light sliding-window (LSW) part-of-speech tagger for the Apertium free/open-source machine translation platform. Firstly, the mechanism and training process of the tagger are reviewed, and a new method for incorporating linguistic rules is proposed. Secondly, experiments are conducted to compare the performances of the tagger under different window settings, with or without Apertium-style "forbid" rules, with or without Constraint Grammar, and also with respect to the traditional HMM tagger in Apertium.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85111081238
|
A Light Transfer Model for Chinese Named Entity Recognition for Specialty Domain
|
Named entity recognition (NER) for a specialty domain is a challenging task, since the labels are specific and there are not sufficient labelled data for training. In this paper, we propose a simple but effective method, named the Light Transfer NER model (LTN), to tackle this problem. Different from most traditional methods that fine-tune the network or reconstruct its probing layer, we design an additional part over a general NER network for the new labels in the specific task. In this way, on the one hand, we can reuse the knowledge learned in the general NER task as much as possible, from the granular elements for combining inputs to the higher-level embedding of outputs. On the other hand, the model can be easily adapted to the domain-specific NER task without reconstruction. We also adopt a linear combination on each dimension of the input feature vectors instead of using vector concatenation, which reduces the parameters in the forward levels of the network by about half and makes the transfer light. We compare our model with other state-of-the-art NER models on real datasets with different quantities of labelled data. The experimental results show that our model is consistently superior to the baseline methods in both effectiveness and efficiency, especially in the case of low-resource data for a specialty domain.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
SCOPUS_ID:85131131629
|
A Light Transformer-Based Architecture for Handwritten Text Recognition
|
Transformer models have been showing ground-breaking results in the domain of natural language processing. More recently, they have started to gain interest in many other fields, such as computer vision. Traditional Transformer models typically require a significant amount of training data to achieve satisfactory results. However, in the domain of handwritten text recognition, annotated data acquisition remains costly, resulting in small datasets compared to those commonly used to train a Transformer-based model. Hence, training Transformer models able to transcribe handwritten text from images remains challenging. We propose a light encoder-decoder Transformer-based architecture for handwritten text recognition, containing a small number of parameters compared to traditional Transformer architectures. We trained our architecture using a hybrid loss, combining the well-known connectionist temporal classification with the cross-entropy. Experiments are conducted on the well-known IAM dataset with and without the use of additional synthetic data. We show that our network reaches state-of-the-art results in both cases, compared with other, larger Transformer-based models.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85073156362
|
A Light Weight Text Extraction Technique for Hand-Held Device
|
Automated systems for understanding display boards are finding many applications useful in guiding tourists, assisting the visually challenged and also in providing location-aware information. Such systems require an automated method to detect and extract text prior to further image analysis. In this paper, a new approach that uses zonewise profile features to identify and segment text regions from low resolution images of display boards captured from mobile phone cameras is presented. The method computes zonewise profile features on every 40 × 40 pixel image block and identifies potential text blocks using newly defined discriminant functions. Further, a merging algorithm is used to merge text blocks to obtain text regions. The method is implemented using the Android software development kit and experimented on a Sony Xperia™ Z C6603/C6602 mobile. The proposed methodology is evaluated on 3240 low resolution images of display boards captured from 2 and/or 5 mega pixel cameras on mobile phones at various pixel sizes 240 × 320, 480 × 640 and 960 × 1280, and reports an average processing time of 13 s and a detection rate of 95.5%. The proposed method is found to be robust and insensitive to variations in size and style of font, thickness and spacing between characters.
|
[
"Visual Data in NLP",
"Information Extraction & Text Mining",
"Multimodality"
] |
[
20,
3,
74
] |
SCOPUS_ID:85117760045
|
A Light-Weight Text Summarization System for Fast Access to Medical Evidence
|
As the volume of published medical research continues to grow rapidly, staying up-to-date with the best-available research evidence regarding specific topics is becoming an increasingly challenging problem for medical experts and researchers. The current COVID19 pandemic is a good example of a topic on which research evidence is rapidly evolving. Automatic query-focused text summarization approaches may help researchers to swiftly review research evidence by presenting salient and query-relevant information from newly-published articles in a condensed manner. Typical medical text summarization approaches require domain knowledge, and the performances of such systems rely on resource-heavy medical domain-specific knowledge sources and pre-processing methods (e.g., text classification) for deriving semantic information. Consequently, these systems are often difficult to speedily customize, extend, or deploy in low-resource settings, and they are often operationally slow. In this paper, we propose a fast and simple extractive summarization approach that can be easily deployed and run, and may thus aid medical experts and researchers obtain fast access to the latest research evidence. At runtime, our system utilizes similarity measurements derived from pre-trained medical domain-specific word embeddings in addition to simple features, rather than computationally-expensive pre-processing and resource-heavy knowledge bases. Automatic evaluation using ROUGE—a summary evaluation tool—on a public dataset for evidence-based medicine shows that our system's performance, despite the simple implementation, is statistically comparable with the state-of-the-art. Extrinsic manual evaluation based on recently-released COVID19 articles demonstrates that the summarizer performance is close to human agreement, which is generally low, for extractive summarization.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
http://arxiv.org/abs/2108.07493v1
|
A Light-weight contextual spelling correction model for customizing transducer-based speech recognition systems
|
It's challenging to customize transducer-based automatic speech recognition (ASR) system with context information which is dynamic and unavailable during model training. In this work, we introduce a light-weight contextual spelling correction model to correct context-related recognition errors in transducer-based ASR systems. We incorporate the context information into the spelling correction model with a shared context encoder and use a filtering algorithm to handle large-size context lists. Experiments show that the model improves baseline ASR model performance with about 50% relative word error rate reduction, which also significantly outperforms the baseline method such as contextual LM biasing. The model also shows excellent performance for out-of-vocabulary terms not seen during training.
|
[
"Text Generation",
"Speech & Audio in NLP",
"Speech Recognition",
"Multimodality"
] |
[
47,
70,
10,
74
] |
SCOPUS_ID:85139796691
|
A Lightweight CNN-Based Pothole Detection Model for Embedded Systems Using Knowledge Distillation
|
Recent breakthroughs in computer vision have led to the invention of several intelligent systems in different sectors. In transportation, this advancement has led to the possibility of autonomous vehicles. This recent technology relies heavily on wireless sensors and deep learning. For an autonomous vehicle to navigate safely on highways, the vehicle needs equipment to aid in detecting road anomalies such as potholes ahead of time. The massive improvement in computer vision models such as deep convolutional neural networks (DCNN) or vision transformers (ViT) has resulted in many success stories and tremendous breakthroughs in object detection tasks; this has enabled the use of such models in different application areas. However, many of the reported results are theoretical and unrealistic in real life. These models are usually very large; they are trained on high-performance computers or in cloud computing environments with GPUs, which challenges their usage on edge devices. To obtain a light model that can fit into embedded devices, the model size has to be reduced significantly without degrading performance. Therefore, this paper proposes a lightweight pothole detection model for embedded devices. The model achieves a state-of-the-art accuracy of 98%, with the number of parameters reduced by more than 70% compared with a deep CNN model; the model can be trained and deployed efficiently on embedded devices such as smartphones.
|
[
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Green & Sustainable NLP",
"Responsible & Trustworthy NLP",
"Multimodality"
] |
[
20,
52,
72,
68,
4,
74
] |
SCOPUS_ID:85094840409
|
A Lightweight Chinese Character Recognition Model for Elementary Level Hanzi Learning Application
|
The Chinese language is widely spoken and written by a quarter of the earth's population. Its usage has recently increased due to the rise of China as a new world power in trade and economy. This attracts new learners of Chinese, and Chinese is often taught as early as elementary school in countries such as Indonesia, which regard Chinese as a new foreign trade and social language. However, without proper and continuous exercise, mastering Chinese, especially written Chinese, is a big challenge. Previous studies have proposed and affirmed the use of information technology as a learning aid for studying Chinese. They show positive results, but leave out the writing exercise component. This research proposes a modest Optical Character Recognition (OCR) model applicable to aid the learning of writing Chinese characters, also known as Hanzi, at the elementary education level. The goal is not just functionality: due to its modesty, the model should be applicable under a broader range of conditions, on a wider range of devices, and by a wider range of programmers. Experimental results show that, for the defined environment, the model gives an acceptable accuracy of 95% in recognising handwritten Chinese characters. However, if it is to be applied to a more complex set of characters and writing styles, the statistical features used should be replaced and improved.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
https://aclanthology.org//W11-2102/
|
A Lightweight Evaluation Framework for Machine Translation Reordering
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:85144394341
|
A Lightweight Hybrid Scheme for Hiding Text Messages in Colour Images Using LSB, Lah Transform and Chaotic Techniques
|
Data security can involve embedding hidden images, text, audio, or video files within other media to prevent hackers from stealing encrypted data. Existing mechanisms suffer from a high risk of security breaches or large computational costs, however. The method proposed in this work incorporates low-complexity encryption and steganography mechanisms to enhance security during transmission while lowering computational complexity. In message encryption, it is recommended that text file data slicing in binary representation, to achieve different lengths of string, be conducted before text file data masking based on the lightweight Lucas series and mod function to ensure the retrieval of text messages is impossible. The steganography algorithm starts by generating a random key stream using a hybrid of two low-complexity chaotic maps, the Tent map and the Ikeda map. By finding a position vector parallel to the input image vector, these keys are used based on the previously generated position vector to randomly select input image data and create four vectors that can be later used as input for the Lah transform. In this paper, we present an approach for hiding encrypted text files using LSB colour image steganography by applying a low-complexity XOR operation to the most significant bits in 24-bit colour cover images. It is necessary to perform inverse Lah transformation to recover the image pixels and ensure that invisible data cannot be retrieved in a particular sequence. Evaluation of the quality of the resulting stego-images and comparison with other ways of performing encryption and message concealment shows that the stego-image has a higher PSNR, a lower MSE, and an SSIM value close to one, illustrating the suitability of the proposed method. It is also considered lightweight in terms of having lower computational overhead.
|
[
"Visual Data in NLP",
"Semantic Text Processing",
"Representation Learning",
"Information Retrieval",
"Multimodality"
] |
[
20,
72,
12,
24,
74
] |
SCOPUS_ID:85118539328
|
A Lightweight Multi-Scale Crossmodal Text-Image Retrieval Method in Remote Sensing
|
Remote sensing (RS) crossmodal text-image retrieval has become a research hotspot in recent years for its application in semantic localization. However, since multiple inferences on slices are demanded in semantic localization, designing a crossmodal retrieval model with less computation but good performance becomes an urgent and challenging task. In this article, considering the characteristics of multi-scale and target redundancy in RS, a concise but effective crossmodal retrieval model (LW-MCR) is designed. The proposed model incorporates multi-scale information and dynamically filters out redundant features when encoding RS images, while text features are obtained via lightweight group convolution. To improve the retrieval performance of LW-MCR, we come up with a novel hidden supervised optimization method based on knowledge distillation. This method enables the proposed model to acquire the dark knowledge of the multi-level layers and representation layers in the teacher network, which significantly improves the accuracy of our lightweight model. Finally, on the basis of contrastive learning, we present a method employing unlabeled data to further boost the performance of the RS retrieval model. The experimental results on four RS image-text datasets demonstrate the efficiency of LW-MCR in RS crossmodal retrieval (RSCR) tasks. We have released some code for semantic localization and made it openly accessible at https://github.com/xiaoyuan1996/retrievalSystem.
|
[
"Visual Data in NLP",
"Green & Sustainable NLP",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Multimodality"
] |
[
20,
68,
4,
24,
74
] |
SCOPUS_ID:85149135110
|
A Lightweight Named Entity Recognition Method for Chinese Power Equipment Defect Text
|
During the operation and maintenance of power equipment, a large amount of text data is accumulated, and it is of great importance to mine valuable information and evaluate the operation status of the equipment. Among them, named entity recognition technology is a key prerequisite for downstream tasks. However, with the development of natural language processing technology, while improving the accuracy of entity recognition, the existing models are gradually unable to meet the requirements of time and equipment cost for model training in practice. In this paper, we propose a low-cost ALBERT-BiLSTM-CRF-based named entity recognition model applicable to power equipment defective text. The model achieves an F1 score of 92.47% in entity recognition in the power domain, outperforming the benchmark BERT model performance in terms of time cost and effect.
|
[
"Language Models",
"Named Entity Recognition",
"Semantic Text Processing",
"Information Extraction & Text Mining"
] |
[
52,
34,
72,
3
] |
http://arxiv.org/abs/1705.07008v1
|
A Lightweight Regression Method to Infer Psycholinguistic Properties for Brazilian Portuguese
|
Psycholinguistic properties of words have been used in various approaches to Natural Language Processing tasks, such as text simplification and readability assessment. Most of these properties are subjective, involving costly and time-consuming surveys to be gathered. Recent approaches use the limited datasets of psycholinguistic properties to extend them automatically to large lexicons. However, some of the resources used by such approaches are not available to most languages. This study presents a method to infer psycholinguistic properties for Brazilian Portuguese (BP) using regressors built with a light set of features usually available for less resourced languages: word length, frequency lists, lexical databases composed of school dictionaries and word embedding models. The correlations between the properties inferred are close to those obtained by related works. The resulting resource contains 26,874 words in BP annotated with concreteness, age of acquisition, imageability and subjective frequency.
|
[
"Psycholinguistics",
"Linguistics & Cognitive NLP"
] |
[
77,
48
] |
SCOPUS_ID:85146654645
|
A Lightweight Sentiment Analysis Framework for a Micro-Intelligent Terminal
|
Sentiment analysis aims to mine polarity features in the text, which can empower intelligent terminals to recognize opinions and further enhance interaction capabilities with customers. Considerable progress has been made using recurrent neural networks or pre-trained models to learn semantic representations. However, recently published models with complex structures require increasing computational resources to reach state-of-the-art (SOTA) performance. It is still a significant challenge to deploy these models to run on micro-intelligent terminals with limited computing power and memory. This paper proposes a lightweight and efficient framework based on hybrid multi-grained embedding on sentiment analysis (MC-GGRU). The gated recurrent unit model is designed to incorporate a global attention structure that allows contextual representations to be learned from unstructured text using word tokens. In addition, a multi-grained feature layer can further enrich sentence representation features with implicit semantics from characters. Through hybrid multi-grained representation, MC-GGRU achieves high inference performance with a shallow structure. The experimental results of five public datasets show that our method achieves SOTA for sentiment classification with a trade-off between accuracy and speed.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85122643174
|
A Lightweight Visual Question Answering Model based on Semantic Similarity
|
The key to visual question answering is to learn the semantic alignment of image objects and question words. Typical methods use the attention mechanism to achieve this goal. However, calculating the attention weights of image objects and question keywords requires an attention function, which usually involves a large number of parameters. Focusing on this issue, this paper proposes a lightweight visual question answering model based on semantic similarity. Firstly, the image features and question features are mapped to a common visual-semantic space, and a multi-modal semantic similarity matrix is constructed using cosine similarity. Then, the multi-level potential semantic space is further explored by using a multi-channel convolutional neural network to map the semantic similarity matrix into two different attention distributions. Finally, the joint representation of image and text is learned through multimodal fusion and fed into the classifier to predict the correct answer. Co-attention is thus achieved by the proposed method with very few parameters. The experimental results show that the proposed model can effectively learn multimodal semantic alignment with a small number of parameters and achieve performance competitive with or better than state-of-the-art methods on the VQA v2.0 dataset.
|
[
"Visual Data in NLP",
"Semantic Text Processing",
"Question Answering",
"Semantic Similarity",
"Natural Language Interfaces",
"Multimodality"
] |
[
20,
72,
27,
53,
11,
74
] |
https://aclanthology.org//2022.trac-1.8/
|
A Lightweight Yet Robust Approach to Textual Anomaly Detection
|
Highly imbalanced textual datasets continue to pose a challenge for supervised learning models. However, viewing such imbalanced text data as an anomaly detection (AD) problem has advantages for certain tasks such as detecting hate speech, or inappropriate and/or offensive language in large social media feeds. There the unwanted content tends to be both rare and non-uniform with respect to its thematic character, and better fits the definition of an anomaly than a class. Several recent approaches to textual AD use transformer models, achieving good results but with trade-offs in pre-training and inflexibility with respect to new domains. In this paper we compare two linear models within the NMF family, which also have a recent history in textual AD. We introduce a new approach based on an alternative regularization of the NMF objective. Our results surpass other linear AD models and are on par with deep models, performing comparably well even in very small outlier concentrations.
|
[
"Ethical NLP",
"Robustness in NLP",
"Responsible & Trustworthy NLP"
] |
[
17,
58,
4
] |
SCOPUS_ID:85147217804
|
A Lightweight and Accurate Spatial-Temporal Transformer for Traffic Forecasting
|
We study the forecasting problem for traffic with dynamic, possibly periodical, and joint spatial-temporal dependency between regions. Given the aggregated inflow and outflow traffic of regions in a city from time slots 0 to $t - 1$, we predict the traffic at time $t$ for any region. Prior arts in the area often considered the spatial and temporal dependencies in a decoupled manner, or were rather computationally intensive in training with a large number of hyper-parameters which needed tuning. We propose ST-TIS, a novel, lightweight and accurate Spatial-Temporal Transformer with information fusion and region sampling for traffic forecasting. ST-TIS extends the canonical Transformer with information fusion and region sampling. The information fusion module captures the complex spatial-temporal dependency between regions. The region sampling module is to improve the efficiency and prediction accuracy, cutting the computation complexity for dependency learning from $O(n^{2})$ to $O(n\sqrt{n})$, where $n$ is the number of regions. With far fewer parameters than state-of-the-art deep learning models, ST-TIS's offline training is significantly faster in terms of tuning and computation (with a reduction of up to $90\%$ on training time and network parameters). Notwithstanding such training efficiency, extensive experiments show that ST-TIS is substantially more accurate in online prediction than state-of-the-art approaches (with an average improvement of $9.5\%$ on RMSE, and $12.4\%$ on MAPE compared to STDN and DSAN).
|
[
"Language Models",
"Semantic Text Processing",
"Responsible & Trustworthy NLP",
"Reasoning",
"Numerical Reasoning",
"Green & Sustainable NLP"
] |
[
52,
72,
4,
8,
5,
68
] |
http://arxiv.org/abs/2201.03655v1
|
A Likelihood Ratio based Domain Adaptation Method for E2E Models
|
End-to-end (E2E) automatic speech recognition models like Recurrent Neural Networks Transducer (RNN-T) are becoming a popular choice for streaming ASR applications like voice assistants. While E2E models are very effective at learning representation of the training data they are trained on, their accuracy on unseen domains remains a challenging problem. Additionally, these models require paired audio and text training data, are computationally expensive and are difficult to adapt towards the fast evolving nature of conversational speech. In this work, we explore a contextual biasing approach using likelihood-ratio that leverages text data sources to adapt RNN-T model to new domains and entities. We show that this method is effective in improving rare words recognition, and results in a relative improvement of 10% in 1-best word error rate (WER) and 10% in n-best Oracle WER (n=8) on multiple out-of-domain datasets without any degradation on a general dataset. We also show that complementing the contextual biasing adaptation with adaptation of a second-pass rescoring model gives additive WER improvements.
|
[
"Low-Resource NLP",
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Multimodality",
"Responsible & Trustworthy NLP"
] |
[
80,
52,
72,
70,
74,
4
] |
http://arxiv.org/abs/1103.4090v2
|
A Linear Classifier Based on Entity Recognition Tools and a Statistical Approach to Method Extraction in the Protein-Protein Interaction Literature
|
We participated in the Article Classification and the Interaction Method subtasks (ACT and IMT, respectively) of the Protein-Protein Interaction task of the BioCreative III Challenge. For the ACT, we pursued extensive testing of available Named Entity Recognition and dictionary tools, and used the most promising ones to extend our Variable Trigonometric Threshold linear classifier. For the IMT, we experimented with a primarily statistical approach, as opposed to employing a deeper natural language processing strategy. Finally, we also studied the benefits of integrating the method extraction approach that we have used for the IMT into the ACT pipeline. For the ACT, our linear article classifier leads to a ranking and classification performance significantly higher than all the reported submissions. For the IMT, our results are comparable to those of other systems, which took very different approaches. For the ACT, we show that the use of named entity recognition tools leads to a substantial improvement in the ranking and classification of articles relevant to protein-protein interaction. Thus, we show that our substantially expanded linear classifier is a very competitive classifier in this domain. Moreover, this classifier produces interpretable surfaces that can be understood as "rules" for human understanding of the classification. In terms of the IMT task, in contrast to other participants, our approach focused on identifying sentences that are likely to bear evidence for the application of a PPI detection method, rather than on classifying a document as relevant to a method. As BioCreative III did not perform an evaluation of the evidence provided by the system, we have conducted a separate assessment; the evaluators agree that our tool is indeed effective in detecting relevant evidence for PPI detection methods.
|
[
"Named Entity Recognition",
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
34,
24,
36,
3
] |
SCOPUS_ID:85124380168
|
A Linear Sub-Structure with Co-Variance Shift for Image Captioning
|
Automatic description of images has attracted many researchers in the field of computer vision, since image captioning is an artificial intelligence task that connects with Natural Language Processing. Exact generation of captions for images is necessary but suffers from the gradient diminishing problem; LSTM can overcome this problem by fusing local and global characteristics of image and text, generating sequenced word predictions for accurate image captioning. We consider the Flickr 8k dataset, which consists of texts as descriptions of images. The use of GLoVe embeddings helps with word representation, considering the global and local features of images, and Euclidean distance is used to understand the relationship between words in vector space. The Inception V3 architecture, pretrained on ImageNet, is used to extract image features of different objects in scenes. We propose a Linear Sub-Structure that helps to generate a sequenced order of words for captioning by understanding the relationship between words. For extracting image features, co-variance shift is considered, which mainly concentrates on the moving parts of the image, to generate an accurate description of the image and maintain a semantic visual grammar relationship between the predicted text and the image as the caption. The proposed model is evaluated with the BLEU score and achieves state-of-the-art results in our work compared with others, with greater than 81% accuracy.
|
[
"Visual Data in NLP",
"Captioning",
"Text Generation",
"Multimodality"
] |
[
20,
39,
47,
74
] |
SCOPUS_ID:85121215396
|
A Linguistic Analysis Metric in Detecting Ransomware Cyber-attacks
|
Originating and striking from anywhere, cyberattacks have become ever more sophisticated in our modern society and users are forced to adopt increasingly good and vigilant practices to protect from them. Among these, ransomware remains a major cyber-attack whose major threat to end users (disrupted operations, restricted files, scrambled sensitive data, financial demands, etc.) does not particularly lie in number but in severity. In this study we explore the possibility of real-time detection of ransomware source through a linguistic analysis that examines machine translation relative to the Levenshtein Distance and may thereby provide important indications as to attacker's language of origin. Specifically, the aim of our research is to advance a metric to assist in determining whether an external ransom text is an indicator of either a human- or a machine-generated cyber-attack. Our proposed method works its argument on a set of Eastern European languages but is applicable to a large(r) range of languages and/or probabilistic patterns, being characterized by usage of limited resources and scalability properties.
|
[
"Multilinguality",
"Machine Translation",
"Robustness in NLP",
"Text Generation",
"Responsible & Trustworthy NLP"
] |
[
0,
51,
58,
47,
4
] |
http://arxiv.org/abs/2010.03127v1
|
A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions
|
Recent models achieve promising results in visually grounded dialogues. However, existing datasets often contain undesirable biases and lack sophisticated linguistic analyses, which make it difficult to understand how well current models recognize their precise linguistic structures. To address this problem, we make two design choices: first, we focus on the OneCommon Corpus (Udagawa and Aizawa, 2019, 2020), a simple yet challenging common grounding dataset which contains minimal bias by design. Second, we analyze their linguistic structures based on spatial expressions and provide comprehensive and reliable annotation for 600 dialogues. We show that our annotation captures important linguistic structures including predicate-argument structure, modification and ellipsis. In our experiments, we assess the model's understanding of these structures through reference resolution. We demonstrate that our annotation can reveal both the strengths and weaknesses of baseline models in essential levels of detail. Overall, we propose a novel framework and resource for investigating fine-grained language understanding in visually grounded dialogues.
|
[
"Natural Language Interfaces",
"Visual Data in NLP",
"Multimodality",
"Dialogue Systems & Conversational Agents"
] |
[
11,
20,
74,
38
] |
SCOPUS_ID:84943788537
|
A Linguistic Analysis of the Modern Greek Dekapentasyllavo Meter
|
Dekapentasyllavo (DPS), the dominant poetic meter in the Modern Greek poetic tradition since several centuries, has barely received any attention by modern linguistic theories. Basing our discussion on the analysis of several dimotiká tragoúdia (folk songs), we seek to understand the structure underlying the meter. Our investigation reveals which patterns are frequently attested, which are less frequent and those which are (virtually) inexistent. DPS verifies the oft-cited L-R asymmetry in verselines (cf. Ryan 2013), which renders L-edges looser than the stricter R-edges. It also tolerates stress lapses much more than stress clashes. Our ensuing account captures this distribution by referring to, primarily, the relation of phonological phrasing to counting of metrical positions and, secondarily, to rhythm. These components are then integrated within a formal analysis along the lines of the Bracketed Grid Theory (Fabb & Halle 2008). We conclude by outlining how DPS poses a challenge for theories of poetic meter and by contemplating its contribution to the field.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
SCOPUS_ID:85063999123
|
A Linguistic Intuitionistic Cloud Decision Support Model with Sentiment Analysis for Product Selection in E-commerce
|
Online product reviews significantly impact the online purchase decisions of consumers. However, extant decision support models have neglected the randomness and fuzziness of online reviews and the interrelationships among product features. This study presents an integrated decision support model that can help customers discover desirable products online. This proposed model encompasses three modules: information acquisition, information transformation, and integration model. We use the information acquisition module to gather linguistic intuitionistic fuzzy information in each review through sentiment analysis. We also apply the information transformation module to convert the linguistic intuitionistic fuzzy information into linguistic intuitionistic normal clouds (LINCs). The integration module is employed to obtain the overall LINCs for each product. A ranked list of alternative products is determined. A case study on Taobao.com is then provided to illustrate the effectiveness and feasibility of the proposal, along with sensitivity and comparison analyses, to verify its stability and superiority. Finally, conclusions and future research directions are suggested.
|
[
"Sentiment Analysis"
] |
[
78
] |
https://aclanthology.org//W07-0601/
|
A Linguistic Investigation into Unsupervised DOP
|
[
"Low-Resource NLP",
"Cognitive Modeling",
"Linguistics & Cognitive NLP",
"Responsible & Trustworthy NLP"
] |
[
80,
2,
48,
4
] |
|
http://arxiv.org/abs/2210.10434v1
|
A Linguistic Investigation of Machine Learning based Contradiction Detection Models: An Empirical Analysis and Future Perspectives
|
We analyze two Natural Language Inference data sets with respect to their linguistic features. The goal is to identify those syntactic and semantic properties that are particularly hard to comprehend for a machine learning model. To this end, we also investigate the differences between a crowd-sourced, machine-translated data set (SNLI) and a collection of text pairs from internet sources. Our main findings are that the model has difficulty recognizing the semantic importance of prepositions and verbs, emphasizing the importance of linguistically aware pre-training tasks. Furthermore, it often does not comprehend antonyms and homonyms, especially if these depend on the context. Incomplete sentences are another problem, as well as longer paragraphs and rare words or phrases. The study shows that automated language understanding requires a more informed approach, utilizing as much external knowledge as possible throughout the training process.
|
[
"Reasoning",
"Textual Inference"
] |
[
8,
22
] |
SCOPUS_ID:84946635025
|
A Linguistic Law of Constancy: II
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
|
http://arxiv.org/abs/1210.0252v2
|
A Linguistic Model for Terminology Extraction based Conditional Random Fields
|
In this paper, we show the possibility of using a linear Conditional Random Fields (CRF) for terminology extraction from a specialized text corpus.
|
[
"Information Extraction & Text Mining"
] |
[
3
] |
SCOPUS_ID:85116163317
|
A Linguistic System for Predicting Sentiment in Arabic Tweets
|
The term sentiment analysis is considered very important in our current era, especially with the widespread use of social media, as it helps understand people's feelings, behavior and opinions about a specific behavior or entity, individuals, organizations and any related topic. Recently, with the development of machine learning, there have been many studies concerned with analyzing sentiment, but most of this research is concerned with the English language more than other languages. This paper proposes a model for working with Standard Arabic and some other Arabic dialects such as Levantine, Egyptian, and Gulf. Working with the Arabic language poses several challenges due to the complex structure of the language, the large number of dialects used and the lack of associated resources. The data collected was divided into positive, negative, and neutral. Several algorithms were used to predict sentiment in Arabic texts, such as Naive Bayes classifiers (NB), Support Vector Machine (SVM), Random Forest classifiers, and BERT (Bidirectional Encoder Representations from Transformers). The results obtained are very encouraging, especially with the BERT model, which gave very accurate results during testing, reaching an accuracy of more than 83%.
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Sentiment Analysis",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
78,
24,
3
] |
SCOPUS_ID:0016591229
|
A Linguistic and Cognitive Perspective on Retardation
|
The idea of critical readiness periods for both language learning and cognitive development was reviewed and related to mental subnormalities. A basic model of a synthesized theory of language and cognition was presented, and a theoretical “Basal Person” was constructed to illustrate a new perspective on retardation. The general conclusions reached were that (a) the basal person represents the lowest normal level of cognitive linguistic development, and (b) retardation can be defined in terms of impeded or disrupted development during critical readiness periods.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
SCOPUS_ID:0742271586
|
A Linguistic and Neuropsychological Approach to Remediation in a German Case of Developmental Dysgraphia
|
There have been relatively few single case studies concerned with the remediation of spelling deficits among developmental impairments. Among these there have been a small number that targeted specific components of the spelling process and used linguistic theories as the theoretical underpinning for the development of remediation procedures. This single case study examines remediation of writing skills and aims at evaluating two different lexically based intervention methods, one of which used Optimality Theory as its basis. We applied a rule-based remediation and an intervention method using whole-word forms to a child with selective impairments in the lexical-graphemic components. The investigation was done with words in which phoneme-grapheme correspondences in word-final position change due to voicing neutralization. The individual exhibited a method- and item-specific effect with respect to the rule-based method. In addition, a transfer effect to untreated items and a generalization effect to untrained but related tasks was observed. The absence of a method-specific and a generalization effect for the whole-word form intervention and the success of the rule-based method are determined by the specific cognitive component(s) that constitute the source of the deficit and the appropriateness of Optimality Theory to address this particular deficit.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
SCOPUS_ID:85097355567
|
A Linguistic-Based Method that Combines Polarity, Emotion and Grammatical Characteristics to Detect Fake News in Portuguese
|
In the last decades, the dissemination of News through digital media has increased the information accessibility previously offered by traditional channels. Despite their benefits, digital media have exacerbated an old problem: the spread of Fake News (i.e., false News intentionally published). Faced with this scenario, linguistic approaches to automatic Fake News detection use information that can be directly extracted from the News' text. Several methods based on these approaches use grammatical classification and sentiment analysis over News written in Portuguese. However, as far as it was possible to observe in the related literature, these methods are limited to the identification of the polarity of sentiment (i.e., positive, neutral or negative) existing in the text. Although polarity classification is an effective method for a wide range of natural language processing applications, it does not address language nuances (e.g., emotions such as anger, sadness, etc.) that can provide evidence that a text contains false information. Hence, this study proposes an extended method that, in addition to grammatical classification and polarity-based sentiment analysis, also uses the analysis of emotions to detect Fake News written in Portuguese. The extended method showed promising results on experimental data, obtaining accuracy greater than 92%. On average, the proposed method outperformed polarity- and grammatical-classification-based methods by 1.4 percentage points.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Polarity Analysis",
"Ethical NLP",
"Sentiment Analysis",
"Emotion Analysis",
"Reasoning",
"Fact & Claim Verification",
"Text Classification",
"Responsible & Trustworthy NLP"
] |
[
3,
24,
33,
17,
78,
61,
8,
46,
36,
4
] |
https://aclanthology.org//W15-2915/
|
A Linguistically Informed Convolutional Neural Network
|
[
"Sentiment Analysis"
] |
[
78
] |
|
http://arxiv.org/abs/cmp-lg/9807008v1
|
A Linguistically Interpreted Corpus of German Newspaper Text
|
In this paper, we report on the development of an annotation scheme and annotation tools for unrestricted German text. Our representation format is based on argument structure, but also permits the extraction of other kinds of representations. We discuss several methodological issues and the analysis of some phenomena. Additional focus is on the tools developed in our project and their applications.
|
[
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP"
] |
[
81,
4
] |
SCOPUS_ID:85144394194
|
A Linguistically Motivated Test Suite to Semi-Automatically Evaluate German-English Machine Translation Output
|
This paper presents a fine-grained test suite for the language pair German-English. The test suite is based on a number of linguistically motivated categories and phenomena and the semi-automatic evaluation is carried out with regular expressions. We describe the creation and implementation of the test suite in detail, providing a full list of all categories and phenomena. Furthermore, we present various exemplary applications of our test suite that have been implemented in the past years, like contributions to the Conference on Machine Translation, the usage of the test suite and MT outputs for quality estimation, and the expansion of the test suite to the language pair Portuguese-English. We describe how we tracked the development of the performance of various MT systems over the years with the help of the test suite and which categories and phenomena are prone to resulting in MT errors. For the first time, we also make a large part of our test suite publicly available to the research community.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:84965047545
|
A Linked Data Driven Semantic Model for Interpreting English Queries in question answering system
|
This paper introduces the Linked Data Driven Semantic Model for Interpreting English Queries (SMIQ), which is proposed to interpret the semantics of English queries used in our linked data based question answering system. SMIQ models the meanings of English queries using a linked data driven approach, interprets the semantic representation of English queries into linked data queries, and connects with many linked data sources to query useful data for the question answering system. SMIQ is evaluated on the keywords of the test questions of Task 1 of QALD-4. The three evaluation scores <Recall, Precision, F-Measure> of SMIQ are <0.90, 0.82, 0.85>.
|
[
"Explainability & Interpretability in NLP",
"Natural Language Interfaces",
"Question Answering",
"Responsible & Trustworthy NLP"
] |
[
81,
11,
27,
4
] |
SCOPUS_ID:85129008912
|
A Lite Romanian BERT:ALR-BERT
|
Large-scale pre-trained language representation and its promising performance in various downstream applications have become an area of interest in the field of natural language processing (NLP). There has been huge interest in further increasing the model’s size in order to outperform the best previously obtained performances. However, at some point, increasing the model’s parameters may lead to reaching its saturation point due to the limited capacity of GPU/TPU. In addition to this, such models are mostly available in English or a shared multilingual structure. Hence, in this paper, we propose a lite BERT trained on a large corpus solely in the Romanian language, which we called “A Lite Romanian BERT (ALR-BERT)”. Based on comprehensive empirical results, ALR-BERT produces models that scale far better than the original Romanian BERT. Alongside presenting the performance on downstream tasks, we detail the analysis of the training process and its parameters. We also intend to distribute our code and model as an open source together with the downstream task.
|
[
"Language Models",
"Semantic Text Processing"
] |
[
52,
72
] |
SCOPUS_ID:85135185977
|
A Literature Analysis of Consumer Privacy Protection in Augmented Reality Applications in Creative and Cultural Industries: A Text Mining Study
|
Digital reality technologies (such as AR, VR, and MR) have recently become a key component of promoting creative and cultural industries (CCIs) worldwide to transform static cultural heritage exhibits into more engaging, entertaining, and immersive experiences. These technologies present an exciting example of studying how consumers would respond to the potential invasion of privacy due to these technologies. This literature review study mainly focuses on one essential branch of CCIs: museums and their applications of digital reality technologies. Because many of these location-based AR applications by museums are inherently sensitive to users' locational information, there is also a rising concern about the potential infringement of personal privacy (RQ1). A thorough examination of the existing literature on how consumers respond to privacy concerns related to museums' AR applications will help uncover how scholars have approached and studied these crucial issues in the literature (RQ2). Unlike traditional literature review analyses, we employed text mining of 715 articles retrieved from the Business Source Complete and Engineering Village (E.I.) databases to answer our two research questions. Our study found that research on privacy and user(s)/visitor(s) has dramatically increased since 2017, echoing the rising concerns about other privacy-invasive technologies. Most notably, key phrases extracted from the literature corpus include “security and privacy,” “privacy and security,” “privacy risks,” “privacy concerns,” “privacy issues,” “user privacy,” “location privacy,” “privacy protection,” and “privacy preserving,” which are most pertinent to the rapid implementation of AR technology in the museum sector. Discussions and implications are provided.
|
[
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
17,
4
] |
SCOPUS_ID:85061902189
|
A Literature Review in Preprocessing for Sentiment Analysis for Brazilian Portuguese Social Media
|
Online Social Networks have been increasingly adopted by web users interested in sharing their opinions and thoughts about restaurants, bars, and products they have visited or bought. This scenario enables new analyses to companies and institutions that seek information on how their audience perceives them, and which aspects should be improved. One technique widely used in this type of study is Sentiment Analysis (SA), which allows the automatic mining of opinions on a particular topic. However, this approach faces challenges in social networks, due to the informal nature of the posts and the lack of attention to the grammatical rules found on user-generated content. In this context, this paper presents a literature review about methods and techniques used in the preprocessing of social media data for SA, in the context of Brazilian Portuguese. The results highlight some gaps in the literature and research possibilities, mainly to increase the accuracy of analyses for those platforms.
|
[
"Sentiment Analysis"
] |
[
78
] |
SCOPUS_ID:85084367562
|
A Literature Review of Gene Function Prediction by Modeling Gene Ontology
|
Annotating the functional properties of gene products, i.e., RNAs and proteins, is a fundamental task in biology. The Gene Ontology database (GO) was developed to systematically describe the functional properties of gene products across species, and to facilitate the computational prediction of gene function. As GO is routinely updated, it serves as the gold standard and main knowledge source in functional genomics. Many gene function prediction methods making use of GO have been proposed. But no literature review has summarized these methods and the possibilities for future efforts from the perspective of GO. To bridge this gap, we review the existing methods with an emphasis on recent solutions. First, we introduce the conventions of GO and the widely adopted evaluation metrics for gene function prediction. Next, we summarize current methods of gene function prediction that apply GO in different ways, such as using hierarchical or flat inter-relationships between GO terms, compressing massive GO terms and quantifying semantic similarities. Although many efforts have improved performance by harnessing GO, we conclude that there remain many largely overlooked but important topics for future research.
|
[
"Knowledge Representation",
"Semantic Text Processing",
"Semantic Similarity"
] |
[
18,
72,
53
] |
SCOPUS_ID:85115204432
|
A Literature Review on Smart Technologies and Logistics
|
The emergence of smart technologies has brought substantial changes in logistics. Hence, understanding smart technologies applied in logistics has become critical for practitioners and scholars to make smart technologies better empower logistics activities. Because research on this issue is new and largely fragmented, it will be theoretically essential to evaluate what has been studied and derive meaningful insights through a literature review. In this study, we conduct a mixed-method literature review of smart technologies in logistics. We classify these studies by topic modeling and identify important research domains and methods. More importantly, we draw upon the task-technology fit theory and logistics activities process to propose a multi-level theoretical framework in smart technologies in logistics for understanding the current status in research. We believe that this framework can provide a valuable basis for future logistics research.
|
[
"Topic Modeling",
"Information Extraction & Text Mining"
] |
[
9,
3
] |
SCOPUS_ID:85103481950
|
A Literature Review on Text Classification and Sentiment Analysis Approaches
|
Sentiment analysis is an important branch of text classification, and the related systems are usually applied to the perception of user emotion and public opinion monitoring. By comparison, text classification can be applied to more fields than sentiment analysis. In terms of system architecture, as with text classification, a complete classification system mainly contains data acquisition, data pre-processing, feature extraction, a classification algorithm and result output. A Web crawler is usually used in the first step; URL links, hashtags and non-Chinese text should be removed in the second step. In feature extraction, IG, TF-IDF and Word2vec are usually used. Then, SVM, Naive Bayes, KNN or neural network algorithms are usually used as the classifier. Furthermore, as a system that can run automatically, a sentiment analysis system should be able to extract significant features from the corpus and accurately analyze the emotional polarity of the text corpus. At present, improvement of related systems focuses on three aspects: data acquisition, feature extraction and the classifier algorithm.
|
[
"Information Retrieval",
"Sentiment Analysis",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
78,
36,
3
] |
SCOPUS_ID:85059990413
|
A Literature Study on Different Multi-Document Summarization Techniques
|
Currently, the amount of information on the World Wide Web is growing exponentially. Consequently, extracting relevant and significant information from this enormous volume of data has become a challenging problem. Recently, text summarization has been seen as one solution for extracting relevant information from large documents. Based on the number of documents considered for summarization, the summarization task is classified as single-document or multi-document summarization. Compared with single-document summarization, multi-document summarization is even more challenging for researchers aiming to produce a correct summary from multiple documents. In this paper, we first discuss the concept of multi-document summarization and then present an in-depth analysis of the various methodologies that fall under multi-document summarization. The paper also provides insights into the advantages of, and issues with, the existing methods. This would be especially helpful for researchers working in the field of text data mining, who can use this information to create new or hybrid approaches to multi-document summarization.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
http://arxiv.org/abs/2201.06657v1
|
A Literature Survey of Recent Advances in Chatbots
|
Chatbots are intelligent conversational computer systems designed to mimic human conversation to enable automated online guidance and support. The increased benefits of chatbots led to their wide adoption by many industries in order to provide virtual assistance to customers. Chatbots utilise methods and algorithms from two Artificial Intelligence domains: Natural Language Processing and Machine Learning. However, there are many challenges and limitations in their application. In this survey we review recent advances on chatbots, where Artificial Intelligence and Natural Language processing are used. We highlight the main challenges and limitations of current work and make recommendations for future research investigation.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85101785937
|
A Literature Survey on Biomedical Named Entity Recognition
|
The importance of information extraction is well known due to its conceptual simplicity and potential usefulness, and a domain-specific task makes the process more tractable than others. Biomedical named entity recognition is one such active research area; it identifies biomedical entities and serves as a support system for downstream tasks such as knowledge base construction, knowledge discovery, etc. The key challenge behind biomedical named entity recognition lies in feature and method selection, owing to the higher complexity of the related entities. Existing research has shown promising results, but correctly identifying a chunk of text is an important task, as it contains many important details which need to be analyzed to make sense of it. This survey attempts to provide important insights into the biomedical named entity recognition task to help the biomedical research community.
|
[
"Named Entity Recognition",
"Information Extraction & Text Mining"
] |
[
34,
3
] |
http://arxiv.org/abs/1908.08983v1
|
A Little Annotation does a Lot of Good: A Study in Bootstrapping Low-resource Named Entity Recognizers
|
Most state-of-the-art models for named entity recognition (NER) rely on the availability of large amounts of labeled data, making them challenging to extend to new, lower-resourced languages. However, there are now several proposed approaches involving either cross-lingual transfer learning, which learns from other highly resourced languages, or active learning, which efficiently selects effective training data based on model predictions. This paper poses the question: given this recent progress, and limited human annotation, what is the most effective method for efficiently creating high-quality entity recognizers in under-resourced languages? Based on extensive experimentation using both simulated and real human annotation, we find a dual-strategy approach best, starting with a cross-lingual transferred model, then performing targeted annotation of only uncertain entity spans in the target language, minimizing annotator effort. Results demonstrate that cross-lingual transfer is a powerful tool when very little data can be annotated, but an entity-targeted annotation strategy can achieve competitive accuracy quickly, with just one-tenth of training data.
|
[
"Low-Resource NLP",
"Green & Sustainable NLP",
"Responsible & Trustworthy NLP",
"Cross-Lingual Transfer",
"Multilinguality"
] |
[
80,
68,
4,
19,
0
] |
https://aclanthology.org//2020.sustainlp-1.14/
|
A Little Bit Is Worse Than None: Ranking with Limited Training Data
|
Researchers have proposed simple yet effective techniques for the retrieval problem based on using BERT as a relevance classifier to rerank initial candidates from keyword search. In this work, we tackle the challenge of fine-tuning these models for specific domains in a data and computationally efficient manner. Typically, researchers fine-tune models using corpus-specific labeled data from sources such as TREC. We first answer the question: How much data of this type do we need? Recognizing that the most computationally efficient training is no training, we explore zero-shot ranking using BERT models that have already been fine-tuned with the large MS MARCO passage retrieval dataset. We arrive at the surprising and novel finding that “some” labeled in-domain data can be worse than none at all.
|
[
"Language Models",
"Semantic Text Processing",
"Green & Sustainable NLP",
"Information Retrieval",
"Responsible & Trustworthy NLP"
] |
[
52,
72,
68,
24,
4
] |
SCOPUS_ID:85026778039
|
A Little Less Conversation, a Little More Action: Illustrations of the Mediated Discourse Analysis Method
|
This article provides an introduction into the innovative use of the methodological approach of Mediated Discourse Analysis (MDA) and illustrates this with examples from an interventionist insider action research study. An overview of the method, including its foundation and association with the analysis of practice and how it can be situated within a reflexive ethnographic and critical realist stance, is presented. It offers samples of findings and analysis for each of the different aspects of method, structured by a set of heuristic questions, as well as an example showing the possibilities of theory development. The article constructs and shows an analytical pathway for HRD researchers to use MDA and concludes with a discussion about the advantages of utilizing MDA, in terms of theory and practice, as well as the practical issues in conducting an MDA study. The implication for the HRD research community is that MDA is a new, innovative, and germane approach for analyzing HRD practice within organizational settings.
|
[
"Semantic Text Processing",
"Linguistics & Cognitive NLP",
"Linguistic Theories",
"Discourse & Pragmatics",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
72,
48,
57,
71,
11,
38
] |
https://aclanthology.org//W19-4214/
|
A Little Linguistics Goes a Long Way: Unsupervised Segmentation with Limited Language Specific Guidance
|
We present de-lexical segmentation, a linguistically motivated alternative to greedy or other unsupervised methods, requiring only minimal language-specific input. Our technique involves creating a small grammar of closed-class affixes which can be written in a few hours. The grammar overgenerates analyses for word forms attested in a raw corpus, which are disambiguated based on features of the linguistic base proposed for each form. Extending the grammar to cover orthographic, morpho-syntactic or lexical variation is simple, making it an ideal solution for challenging corpora with noisy, dialect-inconsistent, or otherwise non-standard content. In two evaluations, we consistently outperform competitive unsupervised baselines and approach the performance of state-of-the-art supervised models trained on large amounts of data, providing evidence for the value of linguistic input during preprocessing.
|
[
"Low-Resource NLP",
"Syntactic Text Processing",
"Responsible & Trustworthy NLP"
] |
[
80,
15,
4
] |
http://arxiv.org/abs/2102.06551v2
|
A Little Pretraining Goes a Long Way: A Case Study on Dependency Parsing Task for Low-resource Morphologically Rich Languages
|
Neural dependency parsing has achieved remarkable performance for many domains and languages. The bottleneck of massive labeled data limits the effectiveness of these approaches for low-resource languages. In this work, we focus on dependency parsing for morphologically rich languages (MRLs) in a low-resource setting. Although morphological information is essential for the dependency parsing task, morphological disambiguation and the lack of powerful analyzers pose challenges in obtaining this information for MRLs. To address these challenges, we propose simple auxiliary tasks for pretraining. We perform experiments on 10 MRLs in low-resource settings to measure the efficacy of our proposed pretraining method and observe an average absolute gain of 2 points (UAS) and 3.6 points (LAS). Code and data available at: https://github.com/jivnesh/LCM
|
[
"Low-Resource NLP",
"Language Models",
"Semantic Text Processing",
"Morphology",
"Syntactic Text Processing",
"Syntactic Parsing",
"Responsible & Trustworthy NLP"
] |
[
80,
52,
72,
73,
15,
28,
4
] |
SCOPUS_ID:85136691380
|
A Local Self-Attention Sentence Model for Answer Selection Task in CQA Systems
|
Current evidence indicates that the semantic representation of question and answer sentences is better generated by deep neural network-based sentence models than traditional methods in community answer selection tasks. In particular, as a widely recognized language model, the self-attention model computes the similarity between a specific word and the whole set of words in the same sentence and generates a new semantic representation through the similarity-weighted summation of the semantic representations of all the words. However, the self-attention operation considers all the signals with a weighted sum operation, which disperses the distribution of attention and may result in overlooking the relation of neighboring signals. This issue becomes serious when applying the self-attention model to online community question answering platforms because of the varied length of the user-generated questions and answers. To address this problem, we introduce an attention mechanism enhanced local self-attention (LSA), which restricts the range of original self-attention by a local window mechanism, thereby scaling linearly when increasing the sequence length. Furthermore, we propose stacking multiple LSA layers to model the relationship of multiscale n-gram features. It captures the word-to-word relationship in the first layer and then captures the chunk-to-chunk (such as lexical n-gram phrases) relationship in its deeper layers. We also test the effectiveness of the proposed model by applying the learned representation through the LSA model to a Siamese and a classification network in community question answer selection tasks. Experiments on the public datasets show that the proposed LSA achieves good performance.
|
[
"Language Models",
"Semantic Text Processing",
"Question Answering",
"Representation Learning",
"Natural Language Interfaces",
"Reasoning",
"Numerical Reasoning"
] |
[
52,
72,
27,
12,
11,
8,
5
] |
SCOPUS_ID:85136118748
|
A Local and Global Context Focus Multilingual Learning Model for Aspect-Based Sentiment Analysis
|
Aspect-Based Sentiment Analysis (ABSA) aims to predict the sentiment polarity of different aspects in a sentence or document, which is a fine-grained task of natural language processing. Most of the existing work focuses on the correlation between aspect sentiment polarity and local context. The important deep correlations between global context and aspect sentiment polarity have not received enough attention. Besides, there are few studies on Chinese ABSA tasks and multilingual ABSA tasks. Based on the local context focus mechanism, we propose a multilingual learning model based on the interactive learning of local and global context focus, namely LGCF. Compared with the existing models, this model can effectively learn the correlation between local context and target aspects and the correlation between global context and target aspects simultaneously. In addition, the model can effectively analyze both Chinese and English reviews. Experiments conducted on three Chinese benchmark datasets (Camera, Phone and Car) and six English benchmark datasets (Laptop14, Restaurant14, Restaurant16, Twitter, Tshirt and Television) demonstrate that LGCF has achieved compelling performance and efficiency improvements compared with several existing state-of-the-art models. Moreover, the ablation experiment results also verify the effectiveness of each component in LGCF.
|
[
"Polarity Analysis",
"Sentiment Analysis",
"Aspect-based Sentiment Analysis",
"Multilinguality"
] |
[
33,
78,
23,
0
] |