id | title | abstract | classification_labels | numerical_classification_labels
---|---|---|---|---
SCOPUS_ID:85083032551
|
A Deep-Learning Approach to Optical Character Recognition for Uighur Language
|
Optical Character Recognition (OCR) for the Uighur language is a difficult problem and an open research field because of the cursive script of Uighur text, even in printed form. Except for a few differences and modifications, Uighur characters share the characteristics of Arabic characters, so the obstacles and challenges of Arabic OCR research also apply to Uighur OCR research. The functional core of Uighur OCR, as of Arabic OCR, consists of two stages: segmentation and classification. This paper proposes an approach for obtaining segmentation points in the segmentation stage and applies a deep learning classifier with three-block characters as the recognition unit in the classification stage. The experimental results show that the word segmentation method reaches 95.68%, the segmentation point approach 94.74%, and the deep learning approach with the three-block-character unit 99.33%, while an identical approach with a single-character unit reaches 91.98% within five epochs.
|
[
"Visual Data in NLP",
"Information Extraction & Text Mining",
"Information Retrieval",
"Syntactic Text Processing",
"Text Segmentation",
"Text Classification",
"Multimodality"
] |
[
20,
3,
24,
15,
21,
36,
74
] |
SCOPUS_ID:85107570417
|
A Deeper Analysis of AOI Coverage in Code Reading
|
The proportion of areas of interest (AOIs) that are covered by gaze is employed as a metric to compare the reading of natural-language text and source code, as well as novice and expert programmers' code reading behavior. Two levels of abstraction are considered for AOIs: lines and elements. AOI coverage is significantly higher on natural-language text than on code, so a detailed account is provided of the areas that are skipped. Between novice and expert programmers, overall AOI coverage is comparable. However, segmenting the stimuli into meaningful components revealed that the two groups distribute their gaze differently and partly look at different AOIs. Thus, while programming expertise does not strongly influence AOI coverage quantitatively, it does so qualitatively.
|
[
"Programming Languages in NLP",
"Multimodality"
] |
[
55,
74
] |
http://arxiv.org/abs/1804.05972v1
|
A Deeper Look into Dependency-Based Word Embeddings
|
We investigate the effect of various dependency-based word embeddings on distinguishing between functional and domain similarity, word similarity rankings, and two downstream tasks in English. Variations include word embeddings trained using context windows from Stanford and Universal dependencies at several levels of enhancement (ranging from unlabeled, to Enhanced++ dependencies). Results are compared to basic linear contexts and evaluated on several datasets. We found that embeddings trained with Universal and Stanford dependency contexts excel at different tasks, and that enhanced dependencies often improve performance.
|
[
"Semantic Text Processing",
"Representation Learning"
] |
[
72,
12
] |
SCOPUS_ID:85149742381
|
A Definitive Survey of How to Use Unsupervised Text Classifiers
|
A tremendous number of discussions, reviews, and postings are placed on online communities every day, greatly expanding the global dataset. To learn what consumers think of a particular service or brand, one must now sift through vast volumes of data. The majority of assessments are written in English; however, as both electronics and human comprehension advance, there is an increasing amount of Gujarati content available online. Additionally, understanding viewpoints about a single item is crucial to Indian-language trend analysis because we treat all viewpoints as entirely valid. We utilised the Punjabi collection of basic news items from various news outlets to increase the precision of our classifications. DNN and other classifiers, such as Random Forest, SVMs, and Ridge Regressors, were used to explore the accuracy of the classifiers.
|
[
"Low-Resource NLP",
"Text Classification",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
80,
36,
4,
24,
3
] |
https://aclanthology.org//W14-4334/
|
A Demonstration of Dialogue Processing in SimSensei Kiosk
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
https://aclanthology.org//W12-1618/
|
A Demonstration of Incremental Speech Understanding and Confidence Estimation in a Virtual Human Dialogue System
|
[
"Natural Language Interfaces",
"Multimodality",
"Speech & Audio in NLP",
"Dialogue Systems & Conversational Agents"
] |
[
11,
74,
70,
38
] |
|
SCOPUS_ID:0032221562
|
A Demonstration of Research Methodologies Used in Psycholinguistics
|
In this article, I describe software that presents classic experiments in psycholinguistics. I use this software for my Research in the Psychology of Language course, but it also has applicability to methods courses or survey courses in psycholinguistics and cognitive psychology.
|
[
"Psycholinguistics",
"Linguistics & Cognitive NLP"
] |
[
77,
48
] |
SCOPUS_ID:84940565682
|
A Demonstration of the MARKOS License Analyser
|
The MARKOS license analyser is an innovative application of the latest version of the Carneades argumentation system, for helping software developers to analyse open source license compatibility issues.
|
[
"Argument Mining",
"Reasoning"
] |
[
60,
8
] |
SCOPUS_ID:85130959768
|
A Denoising Method for Distant Supervised Relation Extraction Based on Deep Clustering
|
Distant supervision for relation extraction uses an external knowledge base as a supervision signal to automatically label a corpus, and has attracted increasing attention. However, this method relies on the idealized hypothesis that all instances containing the same entity pair express the same relation, which introduces a lot of noisy data and degrades classifier training. To address the noise problem of distantly supervised datasets, we propose a denoising method based on deep clustering. First, the deep clustering method is used to train text features and clustering centers, and clustering quality is improved by using information from masked entity pairs. Then, high-quality training samples are obtained by removing the noisy data, thus achieving the denoising effect. Experimental results show that the proposed model effectively improves relation extraction performance compared with traditional denoising methods.
|
[
"Relation Extraction",
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
75,
3,
29
] |
SCOPUS_ID:85059956979
|
A Dense Vector Representation for Relation Tuple Similarity
|
Open Information Extraction (Open IE), which has been extensively studied as a new paradigm for unrestricted information extraction, produces relation tuples that serve as intermediate structures in several natural language processing tasks, one of which is question answering. In this paper, we investigate ways to learn vector representations of Open IE relation tuples using various approaches, ranging from simple vector composition to more advanced methods, such as the recursive autoencoder (RAE). The quality of the vector representations was evaluated through experiments on a relation tuple similarity task. While the results show that a simple linear combination (i.e., averaging the vectors of the words participating in the tuple) outperforms the other methods, including RAE, RAE has its own advantage in cases where the similarity criterion is characterized by an individual element of the tuple, which the simple linear combination is unable to identify.
|
[
"Representation Learning",
"Open Information Extraction",
"Semantic Text Processing",
"Information Extraction & Text Mining"
] |
[
12,
25,
72,
3
] |
http://arxiv.org/abs/2203.13953v1
|
A Densely Connected Criss-Cross Attention Network for Document-level Relation Extraction
|
Document-level relation extraction (RE) aims to identify relations between two entities in a given document. Compared with its sentence-level counterpart, document-level RE requires complex reasoning. Previous research normally completed reasoning through information propagation on the mention-level or entity-level document-graph, but rarely considered reasoning at the entity-pair-level. In this paper, we propose a novel model, called Densely Connected Criss-Cross Attention Network (Dense-CCNet), for document-level RE, which can complete logical reasoning at the entity-pair-level. Specifically, the Dense-CCNet performs entity-pair-level logical reasoning through the Criss-Cross Attention (CCA), which can collect contextual information in horizontal and vertical directions on the entity-pair matrix to enhance the corresponding entity-pair representation. In addition, we densely connect multiple layers of the CCA to simultaneously capture the features of single-hop and multi-hop logical reasoning. We evaluate our Dense-CCNet model on three public document-level RE datasets, DocRED, CDR, and GDA. Experimental results demonstrate that our model achieves state-of-the-art performance on these three datasets.
|
[
"Relation Extraction",
"Reasoning",
"Information Extraction & Text Mining"
] |
[
75,
8,
3
] |
SCOPUS_ID:85096510913
|
A Densely Connected Encoder Stack Approach for Multi-type Legal Machine Reading Comprehension
|
Legal machine reading comprehension (MRC) is becoming increasingly important as the number of legal documents rapidly grows. Currently, the main approach to MRC is the deep neural network based model, which learns multi-level semantic information at different granularities layer by layer, converting the original data from shallow features into abstract features. Owing to the excessively abstract semantic features learned at the top layers and the large loss of shallow features, the current approach can still be strengthened when applied to the legal field. To solve this problem, this paper proposes a Densely Connected Encoder Stack approach for multi-type legal MRC, which easily obtains multi-scale semantic features. A novel loss function, named multi-type loss, is designed to enhance legal MRC performance. In addition, our approach includes a bidirectional recurrent convolutional layer to learn local features and assist in answering general questions, and several fully connected layers to preserve position features and make predictions. Both extensive experiments and ablation studies on the largest Chinese legal dataset demonstrate the effectiveness of our approach. Finally, our approach achieves an F1 of 0.817 on the CJRC dataset and 83.4 on the SQuAD 2.0 dev set.
|
[
"Language Models",
"Semantic Text Processing",
"Question Answering",
"Natural Language Interfaces",
"Reasoning",
"Machine Reading Comprehension"
] |
[
52,
72,
27,
11,
8,
37
] |
SCOPUS_ID:85089431107
|
A Densely Connected Transformer for Machine Translation
|
Recent work has shown that networks for natural language processing (NLP) tasks can be made deeper as well as more accurate by applying an attention mechanism with residual connections. However, the original features of the training data are lost after multiple operations in a deep network. We propose a new model structure, the Densely Connected Transformer (DCT), based on the Transformer, to solve this problem. We connect each encoder/decoder layer to the other layers, and sum the outputs of all previous layers as the input of the next layer. Our model encourages feature reuse and improves information flow between layers. We apply this model to machine translation (MT) tasks and evaluate it on the IWSLT 2016 German-English task. The experimental results show that our model obtains higher BLEU scores than the basic Transformer after training for 20 epochs, and converges faster than the basic Transformer.
|
[
"Language Models",
"Machine Translation",
"Semantic Text Processing",
"Text Generation",
"Multilinguality"
] |
[
52,
51,
72,
47,
0
] |
SCOPUS_ID:85133140038
|
A Density Based k-Means Initialization Scheme
|
In this paper we present the results of several versions of a new initialization scheme for the k-Means algorithm. k-Means is probably the most fundamental clustering algorithm, with applications in many fields, such as signal processing, image colour segmentation, and Web data management. The initialization process of the algorithm is of great interest, raising two big challenges. The first is to find out what k, the number of clusters, is. The second is to determine the initial k seeds. We mainly focus here on the latter. Our approach is heuristic, hence profound mathematical arguments are not presented. We rely mainly on criteria such as density, Euclidean distance, and Mardia's multivariate kurtosis statistic. In order to test the quality of our results, a few cluster validity measures other than the commonly used Sum of Squared Errors (SSE) are applied, which we believe are suitable for evaluation purposes.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
SCOPUS_ID:85081619018
|
A Density Ratio Approach to Language Model Fusion in End-To-End Automatic Speech Recognition
|
This article describes a density ratio approach to integrating external Language Models (LMs) into end-to-end models for Automatic Speech Recognition (ASR). Applied to a Recurrent Neural Network Transducer (RNN-T) ASR model trained on a given domain, a matched in-domain RNN-LM, and a target-domain RNN-LM, the proposed method uses Bayes' Rule to define RNN-T posteriors for the target domain, in a manner directly analogous to the classic hybrid model for ASR based on Deep Neural Networks (DNNs) or LSTMs in the Hidden Markov Model (HMM) framework (Bourlard & Morgan, 1994). The proposed approach is evaluated in cross-domain and limited-data scenarios, for which a significant amount of target-domain text data is used for LM training, but only limited (or no) {audio, transcript} training data pairs are used to train the RNN-T. Specifically, an RNN-T model trained on paired audio-transcript data from YouTube is evaluated for its ability to generalize to Voice Search data. The Density Ratio method was found to consistently outperform the dominant approach to LM and end-to-end ASR integration, Shallow Fusion.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
https://aclanthology.org//W11-1009/
|
A Dependency Based Statistical Translation Model
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
https://aclanthology.org//W19-3321/
|
A Dependency Structure Annotation for Modality
|
This paper presents an annotation scheme for modality that employs a dependency structure. Events and sources (here, conceivers) are represented as nodes and epistemic strength relations characterize the edges. The epistemic strength values are largely based on Saurí and Pustejovsky’s (2009) FactBank, while the dependency structure mirrors Zhang and Xue’s (2018b) approach to temporal relations. Six documents containing 377 events have been annotated by two expert annotators with high levels of agreement.
|
[
"Knowledge Representation",
"Semantic Text Processing"
] |
[
18,
72
] |
http://arxiv.org/abs/2004.01951v1
|
A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis
|
The aspect-based sentiment analysis (ABSA) task remains a long-standing challenge, which aims to extract the aspect term and then identify its sentiment orientation. In previous approaches, the explicit syntactic structure of a sentence, which reflects the syntax properties of natural language and hence is intuitively crucial for aspect term extraction and sentiment recognition, is typically neglected or insufficiently modeled. In this paper, we thus propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA. This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn). Additionally, we design a simple yet effective message-passing mechanism to ensure that our model learns from multiple related tasks in a multi-task learning framework. Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach, which significantly outperforms existing state-of-the-art methods. Besides, we achieve further improvements by using BERT as an additional feature extractor.
|
[
"Language Models",
"Low-Resource NLP",
"Semantic Text Processing",
"Syntactic Text Processing",
"Knowledge Representation",
"Aspect-based Sentiment Analysis",
"Sentiment Analysis",
"Responsible & Trustworthy NLP"
] |
[
52,
80,
72,
15,
18,
23,
78,
4
] |
https://aclanthology.org//2021.depling-1.1/
|
A Dependency Treebank for Classical Arabic Poetry
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
|
https://aclanthology.org//W07-0706/
|
A Dependency Treelet String Correspondence Model for Statistical Machine Translation
|
[
"Machine Translation",
"Syntactic Text Processing",
"Syntactic Parsing",
"Text Generation",
"Multilinguality"
] |
[
51,
15,
28,
47,
0
] |
|
http://arxiv.org/abs/1507.04646v1
|
A Dependency-Based Neural Network for Relation Classification
|
Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information sources. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-the-art results.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
http://arxiv.org/abs/1702.04510v1
|
A Dependency-Based Neural Reordering Model for Statistical Machine Translation
|
In machine translation (MT) that involves translating between two languages with significant differences in word order, determining the correct word order of translated words is a major challenge. The dependency parse tree of a source sentence can help to determine the correct word order of the translated words. In this paper, we present a novel reordering approach utilizing a neural network and dependency-based embeddings to predict whether the translations of two source words linked by a dependency relation should remain in the same order or should be swapped in the translated sentence. Experiments on Chinese-to-English translation show that our approach yields a statistically significant improvement of 0.57 BLEU point on benchmark NIST test sets, compared to our prior state-of-the-art statistical MT system that uses sparse dependency-based reordering features.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
https://aclanthology.org//W89-0246/
|
A Dependency-Based Parser for Topic and Focus
|
[
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
28,
15
] |
|
https://aclanthology.org//W13-2259/
|
A Dependency-Constrained Hierarchical Model with Moses
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:85123416606
|
A Dependency-Guided Character-Based Slot Filling Model for Chinese Spoken Language Understanding
|
The joint models for intent detection and slot tagging have taken the state of the art of spoken language understanding (SLU) to a new level. However, the presence of rarely seen or unseen mentions degrades model performance. Earlier research showed that sequence labeling tasks can benefit from the use of dependency tree structure for inferring the existence of slot tags. In Chinese spoken language understanding, common slot filling models are character-based, hence word-level dependency tree structure cannot be integrated into the model directly. In this paper, we propose a dependency-guided character-based slot filling (DCSF) model, which provides a concise way to resolve the conflict of incorporating word-level dependency tree structure into a character-level model in Chinese. Our DCSF model can integrate dependency tree information into the character-level model while preserving word-level context and segmentation information by modeling different types of relationships between Chinese characters in the utterance. Experimental results on the public benchmark corpora SMP-ECDT and CrossWOZ show that our model outperforms the compared models with a great improvement, especially in low-resource and unseen-slot-mention scenarios.
|
[
"Semantic Text Processing",
"Semantic Parsing",
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[
72,
40,
28,
15
] |
SCOPUS_ID:85083666155
|
A Depth Study on Suicidal Thoughts in the Online Social Networks
|
Online social networks act as platforms for users to communicate with one another and to share their feelings online. A few categories of social media users utilize the platform to post aggressive content. Such aggressive content can be identified automatically by data mining algorithms built on machine learning principles. Standard machine learning approaches work with training, validation, and testing phases; features such as part-of-speech tags, frequencies of insults, and sentiment have been considered for the emotion traits collected from Facebook data, which raises several challenges for system performance. In order to tackle these issues, the various techniques employed in the literature are discussed in depth. In this paper, we undertake a detailed survey of techniques employed to detect suicide-oriented traits, integrating sentiment analysis, non-negative matrix factorization, and generalized linear regression to analyze the connection between emotional traits and suicide risk; the synthetic minority over-sampling technique is used to extract information from a large collection of data. The ID3, C4.5, Apriori algorithm, association rule mining, and naïve Bayes models have been used to predict which people with suicidal ideation will repeatedly attempt suicide. These techniques incorporate linguistic features to gauge the strength of the association with suicide. In this study, more meaningful insight about suicidal thoughts has been gathered.
|
[
"Emotion Analysis",
"Sentiment Analysis"
] |
[
61,
78
] |
SCOPUS_ID:77956980144
|
A Description of Automath and Some Aspects of its Language Theory
|
This note presents a self-contained introduction to Automath, a formal definition, and an overview of the language theory. Thus it can serve as an introduction to the papers [van Benthem Jutting 73] and [Zandleven 73 (E.1)]. Among the various Automath languages, this paper concentrates on the original version AUT-68 (because of its relative simplicity) and one extension, AUT-QE (in which most texts have been written thus far). The contents are: 1 Introductory remarks. 2 Informal description of AUT-68. 3 Mathematics in Automath: propositions and types. 4 Extension of AUT-68 to AUT-QE. 5 A formal definition of AUT-QE. 6 Some remarks on language theory.
|
[
"Reasoning",
"Numerical Reasoning",
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
8,
5,
48,
57
] |
SCOPUS_ID:0035599537
|
A Description of Phonetic, Acoustic, and Physiological Changes Associated with Improved Intelligibility in a Speaker with Spastic Dysarthria
|
Spastic dysarthria is a motor speech disorder produced by bilateral damage to the direct (pyramidal) and indirect (extrapyramidal) activation pathways of the central nervous system. This case report describes the recovery of an individual with severe spastic dysarthria and illustrates the close relationship between intelligibility measures and acoustic and physiological parameters. Detailed phonetic feature analyses combined with acoustic and physiological information helped to clarify (a) the loci of the intelligibility deficit, (b) the features of deviant speech whose improvement would lead to the greatest gains with treatment, and (c) the changes contributing to improvement in intelligibility observed over a 30-month treatment/recovery period. Though auditory-perceptual analysis remains the foundation of day-to-day dysarthria assessment, this case illustrates the potential for instrumental assessment to (a) supplement perceptual assessment techniques, (b) parse speech subsystem deficits, and (c) track the effects of interventions.
|
[
"Phonetics",
"Speech & Audio in NLP",
"Syntactic Text Processing",
"Multimodality"
] |
[
64,
70,
15,
74
] |
https://aclanthology.org//W13-2253/
|
A Description of Tunable Machine Translation Evaluation Systems in WMT13 Metrics Task
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
SCOPUS_ID:85091298226
|
A Design Engineering Approach for Quantitatively Exploring Context-Aware Sentence Retrieval for Nonspeaking Individuals with Motor Disabilities
|
Nonspeaking individuals with motor disabilities typically have very low communication rates. This paper proposes a design engineering approach for quantitatively exploring context-aware sentence retrieval as a promising complementary input interface, working in tandem with a word-prediction keyboard. We motivate the need for complementary design engineering methodology in the design of augmentative and alternative communication and explain how such methods can be used to gain additional design insights. We then study the theoretical performance envelopes of a context-aware sentence retrieval system, identifying potential keystroke savings as a function of the parameters of the subsystems, such as the accuracy of the underlying auto-complete word prediction algorithm and the accuracy of sensed context information under varying assumptions. We find that context-aware sentence retrieval has the potential to provide users with considerable improvements in keystroke savings under reasonable parameter assumptions of the underlying subsystems. This highlights how complementary design engineering methods can reveal additional insights into design for augmentative and alternative communication.
|
[
"Information Retrieval"
] |
[
24
] |
SCOPUS_ID:85132147787
|
A Design for Safety (DFS) Semantic Framework Development Based on Natural Language Processing (NLP) for Automated Compliance Checking Using BIM: The Case of China
|
For design for safety (DFS), automated compliance checking methods have received extensive attention. Although many research efforts have indicated the potential of BIM and ontologies for automated compliance checking, an efficient methodology is still required for the interoperability and semantic representation of data from different sources. Therefore, a natural language processing (NLP)-based semantic framework is proposed in this paper, which implements rule-based automated compliance checking for building information modeling (BIM) at the design stage. Semantically rich information can be extracted from safety regulations by NLP methods and analyzed to generate conceptual classes and individuals of the ontology and to provide a corpus basis for rule classification. The BIM data were extracted from Revit to a spreadsheet using the Dynamo tool and then mapped to the ontology using the Cellfie tool. The interoperability of data from different sources was much improved through the isomorphism of information in the semantic-integration framework, allowing data extracted from safety regulations to be processed by the Semantic Web Rule Language so that automated compliance checking can be carried out on design documents. The practicability and scientific feasibility of the proposed framework were verified through a 95.21% recall and a 90.63% precision in compliance checking in a case study in China. Compared with traditional compliance checking methods, the proposed framework had high efficiency, response speed, data interoperability, and interaction.
|
[
"Responsible & Trustworthy NLP",
"Knowledge Representation",
"Semantic Text Processing",
"Green & Sustainable NLP"
] |
[
4,
18,
72,
68
] |
SCOPUS_ID:85135748656
|
A Design of Parallel Content-Defined Chunking System Using Non-Hashing Algorithms on FPGA
|
Content-defined chunking (CDC) is a common method in many applications such as data deduplication and data synchronization. In recent years, new CDC algorithms using non-hashing methods have been developed, with positive results. However, most of these algorithms are designed for single-threaded computation on microprocessors. After analyzing some popular CDC algorithms, we observed that algorithms using the basic sliding-window protocol are more feasible to process in parallel. In this work, we propose a new parallel chunking method aimed at hardware implementation. Additionally, we used the PCI algorithm, which does not include hash functions, to implement a multi-threaded chunking system on FPGA devices. By exploiting the strengths of FPGAs, our proposed design achieves not only high computational speed but also great scalability.
|
[
"Syntactic Text Processing",
"Chunking"
] |
[
15,
43
] |
SCOPUS_ID:0018492539
|
A Designer/Verifier's Assistant
|
Since developing and maintaining formally verified programs is an incremental activity, one is not only faced with the problem of constructing specifications, programs, and proofs, but also with the complex problem of determining what previous work remains valid following incremental changes. A system that reasons about changes must build a detailed model of each development and be able to apply its knowledge, the same kind of knowledge an expert would have, to integrate new or changed information into an existing model. This paper describes a working computer program called the designer/verifier's assistant, which is the initial prototype of such a system. The assistant embodies a unified theory of how to reason about changes to a design or verification. This theory also serves as the basis for answering questions about the effects of hypothesized changes and for making proposals on how to proceed with the development in an orderly fashion. Excerpts from a sample session are used to illustrate the key ideas.
|
[
"Programming Languages in NLP",
"Linguistic Theories",
"Question Answering",
"Natural Language Interfaces",
"Linguistics & Cognitive NLP",
"Reasoning",
"Multimodality"
] |
[
55,
57,
27,
11,
48,
8,
74
] |
SCOPUS_ID:85123322763
|
A Detailed Review on Text Extraction Using Optical Character Recognition
|
There exist businesses and applications that generate huge amounts of data, in any form, to be processed and stored on a daily basis. Quick search through this enormous data is an implicit requirement for dealing with the large number of documents generated. Documents are being digitized in all possible fields, as collecting the required data from these documents manually is very time-consuming as well as tedious. Using OCR, a huge amount of effort has been saved in creating, processing, and saving scanned documents. It proves to be very efficient due to its use in a variety of applications in the healthcare, education, banking, and insurance industries. Sufficient research exists describing methods for converting the data residing in documents into machine-readable form. This paper gives a detailed overview of general extraction methods for different types of documents containing different forms of data, and in addition we survey various OCR platforms. The current study is expected to advance OCR research, providing better understanding and assisting researchers in determining which method is ideal for OCR.
|
[
"Visual Data in NLP",
"Multimodality",
"Information Extraction & Text Mining"
] |
[
20,
74,
3
] |
http://arxiv.org/abs/2001.00842v1
|
A Deterministic plus Stochastic Model of the Residual Signal for Improved Parametric Speech Synthesis
|
Speech generated by parametric synthesizers generally suffers from a typical buzziness, similar to what was encountered in old LPC-like vocoders. In order to alleviate this problem, a more suitable modeling of the excitation should be adopted. For this, we hereby propose an adaptation of the Deterministic plus Stochastic Model (DSM) for the residual. In this model, the excitation is divided into two distinct spectral bands delimited by the maximum voiced frequency. The deterministic part concerns the low-frequency contents and consists of a decomposition of pitch-synchronous residual frames on an orthonormal basis obtained by Principal Component Analysis. The stochastic component is a high-pass filtered noise whose time structure is modulated by an energy envelope, similarly to what is done in the Harmonic plus Noise Model (HNM). The proposed residual model is integrated within an HMM-based speech synthesizer and is compared to the traditional excitation through a subjective test. Results show a significant improvement for both male and female voices. In addition, the proposed model requires little computational load and memory, which is essential for its integration in commercial applications.
|
[
"Speech & Audio in NLP",
"Multimodality"
] |
[
70,
74
] |
SCOPUS_ID:85119453136
|
A Development of Multi-Language Interactive Device using Artificial Intelligence Technology for Visual Impairment Person
|
The lack of braille reference books in most public buildings, especially public places like libraries and museums, is a crucial issue: visually impaired or blind people cannot access the information that people with normal vision can. Therefore, a multi-language reading device for the visually impaired was designed and built to overcome the limited availability of reference books in public places. Research on currently available products was carried out to develop a better reading device. The device improves on a previous project that focused on a single language, which is not suitable for public places. It takes a picture of the book using a 5 MP Pi camera; the Google Vision API extracts the text, and the Google Translation API detects the language and translates it into the desired language based on push-button input from the user. Google Text-to-Speech then converts the text to speech, and the device reads it out loud through an audio output such as a speaker or headphones. Several tests were performed to evaluate the functionality and accuracy of the reading device: functionality, performance, and usability tests. The reading device passed most of the tests and obtained a score of 91.7/100, an excellent (A) rating.
|
[
"Visual Data in NLP",
"Machine Translation",
"Speech & Audio in NLP",
"Multimodality",
"Text Generation",
"Multilinguality"
] |
[
20,
51,
70,
74,
47,
0
] |
SCOPUS_ID:85043581822
|
A Development of Participatory Sensing System for Foreign Visitors in PBL
|
In this project, we design a system to realize user-participatory sensing. This system collects, organizes, and visualizes data such as the problems foreign visitors encounter in a city, thereby allowing city officials to use that information when making decisions on various city policies. Furthermore, foreigners can be supported by volunteers from city officials through exchanges of posts and comments in the application. One major problem is that many volunteers are not good at English. To alleviate this language barrier, we adopted a machine translation service and verified whether the service can serve as a communication method between application users.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85130005092
|
A Developmental Approach to Assessing and Treating Agrammatic Aphasia
|
Purpose: There is mounting evidence that the agrammatism that defines Broca’s aphasia can be explained in processing terms. However, the extant approach simply describes agrammatism as disparate deficits in a static, mature system. This tutorial aims to motivate and outline a developmental alternative. This alternative is processability theory (PT), a root-to-apex theory of language development, with its origins in the field of second language acquisition, which can connect the findings of aphasia research. Method: This tutorial critically reviews research on agrammatism as a language deficit, a representational deficit, and a processing phenomenon. Given evidence from research applying PT to language disorders, this tutorial outlines PT’s multidimensional architecture of language processing. Using an emergence (onset) criterion, PT predicts fixed developmental stages in word order (syntax) and inflection (morphology) and individual differences in the timing of syntax and morphology. To link PT to agrammatism, this theory’s applications to diagnosis and teaching are overviewed, and a case study of five individuals with moderate agrammatism is presented. Results: Analysis showed that all individuals were positioned in the early PT stages and differed in their timing of syntax and morphology consistent with theoretical predictions. Conclusions: Evidence from the case study suggests that, although agrammatism results from neural damage and associated language loss, the processing procedures necessary for relearning remain and can be exploited for recovery. A program of diagnosis and intervention is proposed, and future research directions are discussed.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories",
"Syntactic Text Processing",
"Morphology"
] |
[
48,
57,
15,
73
] |
SCOPUS_ID:85146962775
|
A Device for Automatic Conversion of Speech to Text and Braille for Visually and Hearing Impaired Persons
|
This paper describes the implementation of a prototype device that allows individuals dealing with both visual and hearing impairment to communicate. This is carried out with the help of a speech input system and a braille decoder. The input is processed through a machine learning model which recognizes the spoken words, converts them to text, capitalizes each letter in the word, and sends the respective ASCII code to a custom-designed decoder through the Raspberry Pi 4B module. The decoder is designed using logic gates which are in turn implemented using n-type and p-type MOSFETs. The proposed device produces output in the form of braille codes corresponding to each letter, implemented using LEDs.
|
[
"Visual Data in NLP",
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
20,
52,
72,
70,
47,
10,
74
] |
SCOPUS_ID:85137834252
|
A Diachronic Study of Code-Switching Patterns in the Language of a Third Culture Filipino Kid in Korea
|
Code-switching has been of immense interest in bilingualism for decades, and most previous studies present different code-switching functions in the language of bilinguals. However, a diachronic exploration of code-switching patterns in young polyglots’ production is a road less ventured. The present study follows the three-year language development of a Filipino third culture kid (living in a culture other than their parents’) in Korea from when he was 5;5 to 8;5 years old. Discourse analyses and hours of ethnographic observation through audio/video recordings expose a substantial shift of code-switching patterns across the three stages of language development. Significant changes can be observed explicitly in code-switching as referential function, addressee specification, and cross-cultural solidarity. The current investigation proposes that there is a diachronic change in the patterns of code-switching when a child’s new language develops, and the results resonate with the argument that code-switching is used for increasingly sophisticated purposes to manifest multicompetence, behavior transformation, and identity change when a certain level of communicative fluency is reached. Finally, the study provides useful insights toward a cross-cultural understanding of the dynamic interplay of code-switching and multicultural kids’ language in a pluralistic community.
|
[
"Code-Switching",
"Multilinguality"
] |
[
7,
0
] |
SCOPUS_ID:85096908631
|
A Diachronic Study of Rhythm in Shakespeare Performance
|
How is the performance of Shakespeare’s texts made special? Does each performance have an independent relationship to the text, or are performances related to each other? This article presents findings from a corpus analysis of recordings spanning eighty-five years from 1930 to 2015. Several factors changed scholarly views on Shakespeare (both text and performance) in the twentieth-century, ultimately tipping the balance away from the meter of the text, toward the meaning of the text. The results of a corpus study of 61 recordings contribute to our knowledge of how Shakespeare’s plays are differentiated from ordinary speech not only by the text, but also through performance, as well as how Shakespearean performance is evolving over time. Three aspects of speech timing are analyzed computationally, specifically tempo, rhythm, and pauses. Evidence suggests that tempo has decreased while rhythmic contrast and amount of pause has increased. The role of meter as well as enjambment and caesura are addressed. The results are consistent with the conclusion that the meter of Shakespeare’s verse may not have had a large influence on the spoken rhythm of professional performance for nearly a century.
|
[
"Speech & Audio in NLP",
"Multimodality"
] |
[
70,
74
] |
https://aclanthology.org//W12-5606/
|
A Diagnostic Evaluation Approach Targeting MT Systems for Indian Languages
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
|
http://arxiv.org/abs/2009.13295v1
|
A Diagnostic Study of Explainability Techniques for Text Classification
|
Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural complexity. Efforts to make the rationales behind the models' predictions transparent have inspired an abundance of new explainability techniques. Provided with an already trained model, they compute saliency scores for the words of an input instance. However, there exists no definitive guide on (i) how to choose such a technique given a particular application task and model architecture, and (ii) the benefits and drawbacks of using each such technique. In this paper, we develop a comprehensive list of diagnostic properties for evaluating existing explainability techniques. We then employ the proposed list to compare a set of diverse explainability techniques on downstream text classification tasks and neural network architectures. We also compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones. Overall, we find that the gradient-based explanations perform best across tasks and model architectures, and we present further insights into the properties of the reviewed explainability techniques.
|
[
"Text Classification",
"Explainability & Interpretability in NLP",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
36,
81,
4,
24,
3
] |
SCOPUS_ID:85125466546
|
A Dialectic of Race Discourses: The Presence/Absence of Mixed Race at the State, Institution, and Civil Society and Voluntary and Community Sector Levels in the United Kingdom
|
For the twenty years that mixed race has been on the United Kingdom (UK) censuses, the main story of mixed race in the UK remains one notable for its nominal presence and widespread absence in national discourses on race and ethnicity, racialisation, and racisms. The article explores reasons for this through connecting the continued presence/absence of mixed race in public discursive spheres to the role that White supremacy continues to play at systemic, structural, and institutional levels within UK society. As technologies of White supremacy, the article argues that continued marginalisation of mixed race has a direct connection to systemic, structural, and institutional aspects of race, racialisation, and racisms. Using three case studies, the article will use race-critical analyses to examine the ways that mixed race is present and—more often—absent at three societal levels: the state, institution, and civil society and voluntary and community sector. The paper will conclude by exploring key broad consequences for the persistent and common presence/absence of mixed race within race and racisms discourses as a technology of political power. Working in tandem, the paper exposes that presence/absence continues to affect mixed race people—and all racialised people—living in and under White supremacy.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
https://aclanthology.org//W02-0223/
|
A Dialog Architecture for Military Story Capture
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
SCOPUS_ID:85088306648
|
A Dialog-Based Tutoring System for Project-Based Learning in Information Systems Education
|
This chapter discusses the design of a dialog-based intelligent tutoring system for the domain of Business Information Systems education. The system is designed to help students work on group projects, maintain their motivation, and provide subtle hints for self-directed discovery. We analyze the domain of Business Information Systems—which we find to be “ill-defined” in the sense that e.g. multiple conflicting solutions may exist and be acceptable for a given task. Based on an extensive collection of requirements derived from previous work, we propose a solution that helps both groups find solutions and individuals reflect on these solutions. This combination ensures that not only the group’s result is valid, but also that all group members reach the defined learning goals. We show how the complexity of the domain can be captured in a rather simple way via constraint-based engineering and how machine learning can help map student utterances to these constraints. We demonstrate the intended working principles of the system with some example dialogs and some first thoughts about backend implementation principles.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85141126551
|
A Dialogflow-Based Chatbot for Karnataka Tourism
|
Imagine an invisible robot living within the Internet asking you questions. A Chatbot is computer software developed to simulate communication with human users over the Internet. A Chatbot is a conversational agent that engages users in natural language interactions. The most basic communication model is a database of questions and replies, together with the background history of the dialogs and the name of the related communication issue. With so many applications available now, Chatbots are becoming increasingly important for research and practice. The fundamental methodologies and technologies behind a tourist Chatbot allow individuals to converse textually with the goal of booking hotels, arranging excursions, and asking about interesting places to visit. The Chatbot developed here using Dialogflow is built around the Karnataka Tourism website: users can get information about the tourist places of Karnataka and about services such as transportation and accommodation; a sign-in page helps the user book hotels and resorts; and the user can contact the organizer at any time to sort out queries. The website Chatbot is there to help the user with places, food, modes of transportation, etc.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
https://aclanthology.org//W97-0402/
|
A Dialogue Analysis Model with Statistical Speech Act Processing for Dialogue Machine Translation
|
[
"Machine Translation",
"Speech & Audio in NLP",
"Multimodality",
"Natural Language Interfaces",
"Text Generation",
"Dialogue Systems & Conversational Agents",
"Multilinguality"
] |
[
51,
70,
74,
11,
47,
38,
0
] |
|
SCOPUS_ID:85147258731
|
A Dialogue System Making Humor by Nori-Tsukkomi
|
To foster friendly relationships between users and dialogue systems, this study extends related research and proposes a method to generate misheard words and nori-tsukkomi text that takes the context of the user's utterances into account.
|
[
"Commonsense Reasoning",
"Natural Language Interfaces",
"Reasoning",
"Dialogue Systems & Conversational Agents"
] |
[
62,
11,
8,
38
] |
SCOPUS_ID:85060285430
|
A Dialogue System Recommending Query Sentences in Consideration of User Interest
|
We design and develop a dialogue system with collaborative filtering (CF) that analyzes user histories and weighted information. For users who do not know what to ask, the system recommends query sentences that users should ask, taking user interest into consideration. We aim to apply the system to the public relations activities of universities.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85140582343
|
A Dialogue System That Models User Opinions Based on Information Content
|
When designing rule-based dialogue systems, the need for the creation of an elaborate design by the designer is a challenge. One way to reduce the cost of creating content is to generate utterances from data collected in an objective and reproducible manner. This study focuses on rule-based dialogue systems using survey data and, more specifically, on opinion dialogue in which the system models the user. In the field of opinion dialogue, there has been little study on the topic of transition methods for modeling users while maintaining their motivation to engage in dialogue. To model them, we adopted information content. Our contribution includes the design of a rule-based dialogue system that does not require an elaborate design. We also reported an appropriate topic transition method based on information content. This is confirmed by the influence of the user’s personality characteristics. The content of the questions gives the user a sense of the system’s intention to understand them. We also reported the possibility that the system’s rational intention contributes to the user’s motivation to engage in dialogue with the system.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85016146810
|
A Dialogue System for Evaluating Explanations
|
This chapter presents a theory of explanation by building a dialectical system that has speech act rules that define the kinds of moves allowed, such as putting forward an argument, requesting an explanation and offering an explanation. Pre and post-condition rules for the speech acts determine when a particular speech act can be put forward as a move in the dialogue, and what type of move or moves must follow it. This chapter offers a dialogue structure with three stages, an opening stage, an explanation stage and a closing stage, and shows how an explanation dialogue can shift to other types of dialogue known in argumentation studies such as persuasion dialogue and deliberation dialogue. Such shifts can go from argumentation to explanation and back again. The problem of evaluating explanations is solved by extending the hybrid system of (Bex, Arguments, stories and criminal evidence: a formal hybrid theory. Springer, Dordrecht, 2011) which combines explanations and arguments to include a method of testing stories called examination dialogue. In this type of dialogue an explanation can be probed and tested by arguments. The result is a method of evaluating explanations.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories",
"Speech & Audio in NLP",
"Explainability & Interpretability in NLP",
"Multimodality",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Responsible & Trustworthy NLP"
] |
[
48,
57,
70,
81,
74,
11,
38,
4
] |
SCOPUS_ID:85070783379
|
A Dialogue manager for task-oriented agents based on dialogue building-blocks and generic cognitive processing
|
This paper introduces a novel dialogue manager, called DAISY, for intelligent virtual agents. The proposed approach is based on two central concepts: (i) dialogue building-blocks, which offer a systematic approach to the representation and implementation of human-agent dialogue, and (ii) cognitive processing in the form of sequences of simple, generic cognitive actions. The generic nature of the cognitive actions makes it possible to represent a large variety of cognitive processing (e.g., retrieving and manipulating memory content) by using a rather small set of such actions. DAISY is illustrated by means of a specific example, namely an agent acting as an information system for travel and tourist information. The example highlights the usefulness of the systematic approach offered by DAISY.
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
SCOPUS_ID:85127733671
|
A Dialogue-Based Interface for Active Learning of Activities of Daily Living
|
While Human Activity Recognition (HAR) systems may benefit from Active Learning (AL) by allowing users to self-annotate their Activities of Daily Living (ADLs), many proposed methods for collecting such annotations are for short-term data collection campaigns for specific datasets. We present a reusable dialogue-based approach to user interaction for active learning in HAR systems, which utilises a dataset of natural language descriptions of common activities (which we make publicly available) and semantic similarity measures. Our approach involves system-initiated dialogue, including follow-up questions to reduce ambiguity in user responses where appropriate. We apply our work to an existing CASAS dataset in an active learning scenario, to demonstrate our work in context, in which a natural language interface provides knowledge that can help interpret other multi-modal sensor data. We provide results highlighting the potential of our dialogue- and semantic similarity-based approach. We evaluate our work: (i) technically, as an effective way to seek users' input for active learning of ADLs; and (ii) qualitatively, through a user study in which users were asked to use our approach and an established method, and to subsequently compare the two. Results show the potential of our approach as a user-friendly mechanism for annotation of sensor data as part of an active learning system.
|
[
"Low-Resource NLP",
"Semantic Text Processing",
"Semantic Similarity",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Responsible & Trustworthy NLP"
] |
[
80,
72,
53,
11,
38,
4
] |
SCOPUS_ID:85109823452
|
A Dialogue-Based System with Photo and Storytelling for Older Adults: Toward Daily Cognitive Training
|
As the elderly population grows worldwide, living a healthy and full life as an older adult is becoming a topic of great interest. One key factor and severe challenge to maintaining quality of life in older adults is cognitive decline. Assistive robots for helping older adults have been proposed to solve issues such as social isolation and dependent living. Only a few studies have reported the positive effects of dialogue robots on cognitive function, but conversation is being discussed as a promising intervention that includes various cognitive tasks. Existing dialogue-robot studies have reported on placing dialogue robots in elderly homes and allowing them to interact with residents. However, it is difficult to reproduce these experiments since the participants' characteristics influence experimental conditions, especially at home. Besides, most dialogue systems are not designed to set experimental conditions without on-site support. This study proposes a novel design method that uses a dialogue-based robot system for cognitive training at home. We define challenges and the requirements to meet them in order to realize cognitive function training through daily communication. Those requirements are designed to satisfy detailed conditions such as duration of dialogue, frequency, and starting time without on-site support. Our system displays photos and gives original stories to provide contexts for dialogue that help the robot maintain a conversation for each story. The system then schedules dialogue sessions along with the participant's plan. The robot prompts the user to ask a question and then responds to the question by changing its facial expression. This question-answering procedure continues for a specific duration (4 min). To verify our design method's effectiveness and implementation, we conducted three user studies by recruiting 35 elderly participants, performing prototype-, laboratory-, and home-based experiments. Through these experiments, we evaluated current datasets, user experience, and feasibility for home use. We report on and discuss the older adults' attitudes toward the robot and the number of turns during dialogues. We also classify the types of utterances and identify user needs. Herein, we outline the findings of this study, describing the system's essential characteristics for experiments toward daily cognitive training, and explain further feature requests.
|
[
"Natural Language Interfaces",
"Question Answering",
"Dialogue Systems & Conversational Agents"
] |
[
11,
27,
38
] |
SCOPUS_ID:85099216134
|
A Dialogue-System Using a Qur'anic Ontology
|
A dialogue system is a medium of communication and interaction between humans and computers that exchanges questions and answers in natural language. Its performance depends on the ability to analyze the question, good management of the keywords used to query the knowledge base, and the use of an efficient knowledge base. However, several problems in developing dialogue systems stem from the use of traditional databases. In this project, we propose a dialogue system based on a Qur'anic ontology. This system allows easy access to Qur'anic information through SPARQL queries over a pre-existing domain ontology. The ontology covers Qur'anic chapters and verses, as well as each word of the Qur'an and its root and lemma.
|
[
"Natural Language Interfaces",
"Knowledge Representation",
"Semantic Text Processing",
"Dialogue Systems & Conversational Agents"
] |
[
11,
18,
72,
38
] |
http://arxiv.org/abs/2107.05866v1
|
A Dialogue-based Information Extraction System for Medical Insurance Assessment
|
In the Chinese medical insurance industry, the assessor's role is essential and requires significant effort to converse with the claimant. This is a highly professional job that involves many parts, such as identifying personal information, collecting related evidence, and making a final insurance report. Due to the coronavirus (COVID-19) pandemic, the previously offline insurance assessment has had to be conducted online. However, for junior assessors who often lack practical experience, it is not easy to quickly handle such a complex online procedure, yet this is important as the insurance company needs to decide how much compensation the claimant should receive based on the assessor's feedback. In order to improve assessors' work efficiency and speed up the overall procedure, in this paper we propose a dialogue-based information extraction system that integrates advanced NLP technologies for medical insurance assessment. With the assistance of our system, the average time cost of the procedure is reduced from 55 minutes to 35 minutes, and the total human-resource cost is cut by 30% compared with the previous offline procedure. To date, the system has served thousands of online claim cases.
|
[
"Natural Language Interfaces",
"Information Extraction & Text Mining",
"Dialogue Systems & Conversational Agents"
] |
[
11,
3,
38
] |
SCOPUS_ID:85102618390
|
A Differentiable Generative Adversarial Network for Open Domain Dialogue
|
This work presents a novel methodology for training open-domain neural dialogue systems within the framework of Generative Adversarial Networks using gradient-based optimization methods. We avoid the non-differentiability of text-generating networks by approximating the word vector corresponding to each generated token via a top-k softmax. We show that a weighted average of the word vectors of the most probable tokens, computed from the probabilities resulting from the top-k softmax, leads to a good approximation of the word vector of the generated token. Finally, we demonstrate through a human evaluation process that training a neural dialogue system via adversarial learning with this method successfully discourages it from producing generic responses; instead, it tends to produce more informative and varied ones.
|
[
"Semantic Text Processing",
"Robustness in NLP",
"Representation Learning",
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents",
"Responsible & Trustworthy NLP"
] |
[
72,
58,
12,
11,
38,
4
] |
SCOPUS_ID:85124745616
|
A Differentiable Language Model Adversarial Attack on Text Classifiers
|
Transformer models play a crucial role in state-of-the-art solutions to problems arising in the field of natural language processing (NLP). They have billions of parameters and are typically treated as black boxes. The robustness of huge Transformer-based models for NLP is an important question due to their wide adoption. One way to understand and improve the robustness of these models is to explore an adversarial attack scenario: check whether a small perturbation of an input, invisible to a human eye, can fool a model. Due to the discrete nature of textual data, gradient-based adversarial methods, widely used in computer vision, are not applicable per se. The standard strategy to overcome this issue is to develop token-level transformations, which do not take the whole sentence into account. The semantic meaning and grammatical correctness of the sentence are often lost in such approaches. In this paper, we propose a new black-box sentence-level attack. Our method fine-tunes a pre-trained language model to generate adversarial examples. The proposed differentiable loss function depends on a substitute classifier score and an approximate edit distance computed via a deep learning model. We show that the proposed attack outperforms competitors on a diverse set of NLP problems for both computed metrics and human evaluation. Moreover, due to the usage of the fine-tuned language model, the generated adversarial examples are hard to detect, so current models are not robust. Hence, it is difficult to defend against the proposed attack, which is not the case for others. Our attack demonstrates the highest decrease in classification accuracy on all datasets (on AG News: 0.95 without attack, 0.89 under the SamplingFool attack, 0.82 under the DILMA attack).
|
[
"Language Models",
"Semantic Text Processing",
"Text Classification",
"Robustness in NLP",
"Responsible & Trustworthy NLP",
"Information Retrieval",
"Information Extraction & Text Mining"
] |
[
52,
72,
36,
58,
4,
24,
3
] |
SCOPUS_ID:84934652744
|
A Differential Interactive Analysis of Language Teaching and Learning
|
Since many basic principles of first language acquisition and environmental input have been clarified by research of the last decade, more differentiated questions are explored in the present study. The overall goal is to analyze language teaching and learning as it transpires during the course of verbal interactions in the home. This encompasses three major interrelated aims: (a) An extensive taxonomy of maternal language “teaching techniques” and filial language “learning strategies” is established, incorporating the research efforts of the last two decades. (b) Markov chain models, based upon transitional probabilities, are employed for the description of sequential dependencies. With these sequential dependencies between teaching techniques and learning strategies, causal relationships can be explored in single-subject observations. (c) Finally, to counteract past nomothetic tendencies and to demonstrate the value of the present approach for differential instructional/learning analyses, two dyads are compared in their idiosyncratic and differentially successful interactions. The children of these two dyads were approximately matched in mean length of utterance, which ranged between 1.5 and 4.0 morphemes. The ages of the children ranged between 18 and 35 months during the period they were observed. The implications of the presented results for contrasting theories of language acquisition are discussed. © 1985, Taylor & Francis Group, LLC. All rights reserved.
|
[
"Linguistics & Cognitive NLP",
"Linguistic Theories"
] |
[
48,
57
] |
https://aclanthology.org//W03-1104/
|
A Differential LSI Method for Document Classification
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
|
http://arxiv.org/abs/2002.08241v1
|
A Differential-form Pullback Programming Language for Higher-order Reverse-mode Automatic Differentiation
|
Building on the observation that reverse-mode automatic differentiation (AD) -- a generalisation of backpropagation -- can naturally be expressed as pullbacks of differential 1-forms, we design a simple higher-order programming language with a first-class differential operator, and present a reduction strategy which exactly simulates reverse-mode AD. We justify our reduction strategy by interpreting our language in any differential $\lambda$-category that satisfies the Hahn-Banach Separation Theorem, and show that the reduction strategy precisely captures reverse-mode AD in a truly higher-order setting.
|
[
"Programming Languages in NLP",
"Multimodality"
] |
[
55,
74
] |
https://aclanthology.org//2020.privatenlp-1.2/
|
A Differentially Private Text Perturbation Method Using Regularized Mahalanobis Metric
|
Balancing the privacy-utility tradeoff is a crucial requirement of many practical machine learning systems that deal with sensitive customer data. A popular approach for privacy- preserving text analysis is noise injection, in which text data is first mapped into a continuous embedding space, perturbed by sampling a spherical noise from an appropriate distribution, and then projected back to the discrete vocabulary space. While this allows the perturbation to admit the required metric differential privacy, often the utility of downstream tasks modeled on this perturbed data is low because the spherical noise does not account for the variability in the density around different words in the embedding space. In particular, words in a sparse region are likely unchanged even when the noise scale is large. In this paper, we propose a text perturbation mechanism based on a carefully designed regularized variant of the Mahalanobis metric to overcome this problem. For any given noise scale, this metric adds an elliptical noise to account for the covariance structure in the embedding space. This heterogeneity in the noise scale along different directions helps ensure that the words in the sparse region have sufficient likelihood of replacement without sacrificing the overall utility. We provide a text-perturbation algorithm based on this metric and formally prove its privacy guarantees. Additionally, we empirically show that our mechanism improves the privacy statistics to achieve the same level of utility as compared to the state-of-the-art Laplace mechanism.
|
[
"Responsible & Trustworthy NLP",
"Semantic Text Processing",
"Ethical NLP",
"Representation Learning"
] |
[
4,
72,
17,
12
] |
http://arxiv.org/abs/2010.11947v1
|
A Differentially Private Text Perturbation Method Using a Regularized Mahalanobis Metric
|
Balancing the privacy-utility tradeoff is a crucial requirement of many practical machine learning systems that deal with sensitive customer data. A popular approach for privacy-preserving text analysis is noise injection, in which text data is first mapped into a continuous embedding space, perturbed by sampling a spherical noise from an appropriate distribution, and then projected back to the discrete vocabulary space. While this allows the perturbation to admit the required metric differential privacy, often the utility of downstream tasks modeled on this perturbed data is low because the spherical noise does not account for the variability in the density around different words in the embedding space. In particular, words in a sparse region are likely unchanged even when the noise scale is large. In this paper, we propose a text perturbation mechanism based on a carefully designed regularized variant of the Mahalanobis metric to overcome this problem. For any given noise scale, this metric adds an elliptical noise to account for the covariance structure in the embedding space. This heterogeneity in the noise scale along different directions helps ensure that the words in the sparse region have sufficient likelihood of replacement without sacrificing the overall utility. We provide a text-perturbation algorithm based on this metric and formally prove its privacy guarantees. Additionally, we empirically show that our mechanism improves the privacy statistics to achieve the same level of utility as compared to the state-of-the-art Laplace mechanism.
|
[
"Responsible & Trustworthy NLP",
"Semantic Text Processing",
"Ethical NLP",
"Representation Learning"
] |
[
4,
72,
17,
12
] |
SCOPUS_ID:85087923606
|
A Digital Nudge to Counter Confirmation Bias
|
Fake news is increasingly an issue on social media platforms. In this work, rather than detect misinformation, we propose the use of nudges to help steer internet users into fact checking the news they read online. We discuss two types of nudging strategies, by presentation and by information. We present the tool BalancedView, a proof-of-concept that shows news stories relevant to a tweet. The method presents the user with a selection of articles from a range of reputable news sources providing alternative opinions from the whole political spectrum, with these alternative articles identified as matching the original one by a combination of natural language processing and search. The results of an initial user study of BalancedView suggest that nudging by information may change the behavior of users towards that of informed news readers.
|
[
"Reasoning",
"Fact & Claim Verification",
"Ethical NLP",
"Responsible & Trustworthy NLP"
] |
[
8,
46,
17,
4
] |
SCOPUS_ID:85131950510
|
A Digitization Pipeline for Mixed-Typed Documents Using Machine Learning and Optical Character Recognition
|
Although digitization is advancing rapidly, a large amount of data processed by companies is in printed format. Technologies such as Optical Character Recognition (OCR) support the transformation of printed text into machine-readable content. However, OCR struggles when data on documents is highly unstructured and includes non-text objects. This, e.g., applies to documents such as medical prescriptions. Leveraging Design Science Research (DSR), we propose a flexible processing pipeline that can deal with character recognition on the one hand and object detection on the other hand. To do so, we derive Design Requirements (DR) in cooperation with a practitioner doing prescription billing in the healthcare domain. We then developed a prototype blueprint that is applicable to similar problem formulations. Overall, we contribute to research and practice in multiple ways. First, we provide evidence for selected OCR methods provided by previous research. Second, we design a machine-learning-based digitization pipeline for printed documents containing both text and non-text objects in the context of medical prescriptions. Third, we derive a nascent design pattern for this type of document digitization. These patterns are the foundation for further research and can support the development of innovative information systems leading to more efficient decision making and thus to economic resource usage.
|
[
"Visual Data in NLP",
"Multimodality"
] |
[
20,
74
] |
SCOPUS_ID:85095688942
|
A Direct Regression Scene Text Detector with Position-Sensitive Segmentation
|
Direct regression methods have demonstrated their success on various multi-oriented benchmarks for scene text detection due to the high recall rate for small targets and the direct regression for text boxes. However, too many false positive candidates and inaccurate position regression still limit the performance of these methods. In this paper, we propose an end-to-end method by introducing position-sensitive segmentation into the direct regression method to overcome these shortcomings. We generate the ground truth of position-sensitive segmentation maps based on the information of text boxes so that the position-sensitive segmentation module can be trained synchronously with the direct regression module. Besides, more information about the relative position of text is provided for the network through the training of position-sensitive segmentation maps, which improves the expressiveness of the network. We also introduce a spatial pyramid of position-sensitive segmentation into the proposed method, considering the huge differences in sizes and aspect ratios of scene texts, and we propose position-sensitive COI (Corner area of Interest) pooling to speed up the inference. Experiments on the datasets ICDAR2015, MLT-17 and COCO-Text demonstrate that the proposed method has a comparable performance with state-of-the-art methods while it is more efficient. We also provide abundant ablation experiments to demonstrate the effectiveness of these improvements in our proposed method.
|
[
"Text Segmentation",
"Syntactic Text Processing"
] |
[
21,
15
] |
SCOPUS_ID:84907033074
|
A Dirichlet multinomial mixture model-based approach for short text clustering
|
Short text clustering has become an increasingly important task with the popularity of social media like Twitter, Google+, and Facebook. It is a challenging problem due to its sparse, high-dimensional, and large-volume characteristics. In this paper, we proposed a collapsed Gibbs Sampling algorithm for the Dirichlet Multinomial Mixture model for short text clustering (abbr. to GSDMM). We found that GSDMM can infer the number of clusters automatically with a good balance between the completeness and homogeneity of the clustering results, and is fast to converge. GSDMM can also cope with the sparse and high-dimensional problem of short texts, and can obtain the representative words of each cluster. Our extensive experimental study shows that GSDMM can achieve significantly better performance than three other clustering models. © 2014 ACM.
|
[
"Information Extraction & Text Mining",
"Text Clustering"
] |
[
3,
29
] |
SCOPUS_ID:85034569196
|
A Discourse Analysis Based Approach to Automatic Information Identification in Chinese Legal Texts
|
Alternative Dispute Resolution (ADR) is increasingly advocated as an alternative to litigation nowadays. Being one of the possible forms of ADR, eMediation aims to inform the parties of their desired information in legal texts relevant to the current case. A fundamental requirement to achieve this is that information in legal texts should be automatically identified. As a preliminary investigation, this study puts forward a discourse analysis based approach to recognize and present legal information with respect to users’ commands. First, we make use of the 15 information categories proposed by Discourse Information Theory to describe legal information. Next, through a corpus-based analysis of the hierarchical structure of legal text and the tripartite structure of the clause, we formulate a series of processing rules. Finally, an experiment is conducted to examine the efficacy of our approach. Experimental results show that our approach can reach a satisfying accuracy. Moreover, the approach may also provide some insights into statistics-based Natural Language Processing (NLP) techniques.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85129096833
|
A Discourse Analysis of 40 Years Rural Development in China
|
Since the reform and opening-up policy of 1978, rural areas in China have experienced significant changes in spatial, social, economic, and environmental development. In this research, we aim to explore the changes in the discourses on rural development over the past 40 years. This can help to understand how problems are framed and why certain strategies are adopted at different times. We employ a quantitative approach and analyze keywords from 32,657 Chinese publications on rural development from 1981 to 2020. From the results, we distinguish eight development paradigms, including “household responsibility system”, “rural commodity economy”, “social market economy”, “sustainable development”, “Sannong”, “building a new socialist countryside”, “beautiful countryside”, and “rural revitalization”. We also interpret the discursive shifts in three aspects, i.e., actors, places, and activities. We argue that the key characteristic of current rural development discourse is the duality, which emerges between agricultural and non-agricultural industries, economic growth and environmental conservation, urban and rural development, top-down and bottom-up approaches, and modernist and postmodernist discourses.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85090440308
|
A Discourse Analysis of Quotidian Expressions of Nationalism during the COVID-19 Pandemic in Chinese Cyberspace
|
By conducting discourse analysis on quotidian expressions of nationalism of Chinese netizens and analyzing their “Liking” behavior, this article tries to inductively explore during the COVID-19 pandemic what and how Chinese netizens say about nationalism. This article finds that during the pandemic, Chinese netizens show a confident and rational but confrontational and xenophobic posture in their quotidian discourses. They value reasoning and deliberation in their expressions of nationalist discourses. In the quotidian discourses, they maintain a confident tone when comparing China’s performance with other countries during the pandemic, but show vigilance and even hostile sentiments toward external provocations.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85123503094
|
A Discourse Analysis of the Conflicting Implications of Terrorism: the Iranian and U.S. Perspectives
|
There are many approaches in analyzing the prolonged Iran–US impasse. We can taxonomize them into objective and subjective perspectives. We can explain Iran–US tension for realistic and geopolitical reasons. But discourse analysis is a subjective approach that maintains social facts are constructed in a discursive way by social players. This article aims to provide a discursive overview of how the definition of terrorism has been influenced by divergent discourses, as well as by the conflicting political interests of Iran and the US. In the discursive approach, as anti-foundationalism maintains, social phenomena and social concepts like terrorism lack a fixed essence or meaning. The present article applies the term discourse analysis mostly in Foucauldian philosophy and other like-minded political scientists in the deconstruction of the relationship between power and knowledge. This research concludes that defining and determining the instances of terrorism is a discursive action by Iran and the United States, so it explains the subjective reasons why there has been a dichotomy between Iran and the US in characterizing terrorism or ‘resistance movements’ in the Middle East. Therefore, subjective reasons as much as objective ones play a major role in the Tehran–Washington discord.
|
[
"Semantic Text Processing",
"Discourse & Pragmatics",
"Explainability & Interpretability in NLP",
"Reasoning",
"Responsible & Trustworthy NLP"
] |
[
72,
71,
81,
8,
4
] |
SCOPUS_ID:85084644803
|
A Discourse Analysis of the Istanbul Convention
|
The author analyzes the case of ratification of the Istanbul Convention in Croatia in 2018. After introductory explanations of the basic theoretical and methodological conditions of the analysis, the first part of the article analyzes the text of the Convention. The second part of the article analyzes its political reception in Croatian public at the time before, during and after its ratification. In doing so, it inductively establishes the discourses that both clashed and collaborated, on the basis of repetitive similarities and differences in the utterances that different actors made in the public media. After the discursive framework of the debate is analyzed, the institutional and non-institutional aspects of the ratification process that took place within these framework are presented. Emphasis is placed on the formation of a discursive coalition between advocates of the dominant discourse on violence against women and the discourse on family tradition that, with the Interpretive statement, enabled the ratification of the Convention.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:85082112581
|
A Discourse Analysis on Betel Nut Chewing in Hunan Province, China
|
Betel nut chewing has become prevalent in Hunan Province, China. There are different voices over its health risks. In spite of this, the local government has not taken any effective measures to control its expansion. It is necessary to reveal the concern of interests and public health behind such voices. This study used qualitative and quantitative methods to investigate the dispute over the health risks of betel nut chewing. The different voices over the risks demonstrate the tension of power, interests and public health among the government, institution, business, media and medical elites. Discursive practices of these institutions and individuals are associated with the exercise of power and expression of interests. With the deep concern about its cancerogenicity, majority of the public hold a negative attitude, and agree that the related industry should be controlled. Faced with conflicting perspectives, the government has the responsibility to clarify the issue and express an official stance. Measures should be taken to protect public health.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Multimodality"
] |
[
71,
72,
70,
74
] |
SCOPUS_ID:85040344980
|
A Discourse Analysis: One Caregiver’s Voice in End-of-Life Care
|
Informal family caregivers make a significant contribution to the U.S. health care system, and the need for caregivers will likely increase. Gaining deeper insights into the caregiver experience will provide essential knowledge needed to support the future caregiver workforce delivering care. Discourse analysis is a viable approach in analyzing textual caregiver data that focuses on the end-of-life caregiving experience. The purpose of this study was to conduct an in-depth discourse analytic examination of 13 hours of caregiver interview data, which reveal the multiplicity of shifting stances and perceptions of one caregiver in the midst of end-of-life care, specifically with regard to his perceptions of self (caregiver) and other (care recipient). By isolating a specific but limited set of reference terms used throughout the discourse, we gained systematic glimpses into the mind and perceptions of this single caregiver in relation to his role as caregiver for his terminally ill wife.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Multimodality"
] |
[
71,
72,
70,
74
] |
SCOPUS_ID:84930367701
|
A Discourse Analytic Approach to Video Analysis of Teaching: Aligning Desired Identities With Practice
|
The authors present findings from a qualitative study of an experience that supports teacher candidates to use discourse analysis and positioning theory to analyze videos of their practice during student teaching. The research relies on the theoretical concept that learning to teach is an identity process. In particular, teachers construct and enact their identities during moment-to-moment interactions with students, colleagues, and parents. Using case study methods for data generation and analysis, the authors demonstrate how one participant used the analytic tools to trace whether and how she enacted her preferred teacher identities (facilitator and advocate) during student teaching. Implications suggest that using discourse analytic frameworks to analyze videos of instruction is a generative strategy for developing candidates’ interactional awareness that impacts student learning and the nature of classroom talk. Overall, these tools support novice teachers with the difficult task of becoming the teacher they desire to be.
|
[
"Discourse & Pragmatics",
"Visual Data in NLP",
"Semantic Text Processing",
"Multimodality"
] |
[
71,
20,
72,
74
] |
https://aclanthology.org//W07-1428/
|
A Discourse Commitment-Based Framework for Recognizing Textual Entailment
|
[
"Reasoning",
"Textual Inference"
] |
[
8,
22
] |
|
SCOPUS_ID:85067383146
|
A Discourse Construction Grammar Approach to Discourse Analysis: Microblog Parody and Instant Messaging
|
On the basis of Goldberg's (1995) Construction Grammar (CxG) and Östman's (2005) Construction Discourse perspectives, and by incorporating the theories of genre, register and cohesion from Systemic Functional Grammar, this research attempts to set up a construction grammar framework for discourse analysis, namely the discourse construction grammar (dcg) model. With dcg, we see a discourse first as an overarching abstract discourse construction, which consists of and integrates a number of ever smaller schematic constructions. Moreover, in order to account for the nexion of clauses into sentences and sentences into cross-sentential discourse chunks from a dcg perspective, this paper also resorts to clause conjunct construction and inter-sentential conjunction construction conceptions. Alongside establishing our dcg model, we have analyzed a trendy microblog templatic parody as well as a piece of dialogic instant messaging to exemplify our multi-layered and multi-faceted construction treatment of a piece of discourse.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
https://aclanthology.org//W19-2708/
|
A Discourse Signal Annotation System for RST Trees
|
This paper presents a new system for open-ended discourse relation signal annotation in the framework of Rhetorical Structure Theory (RST), implemented on top of an online tool for RST annotation. We discuss existing projects annotating textual signals of discourse relations, which have so far not allowed simultaneously structuring and annotating words signaling hierarchical discourse trees, and demonstrate the design and applications of our interface by extending existing RST annotations in the freely available GUM corpus.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
SCOPUS_ID:84992426206
|
A Discourse on Discourse Studies
|
Discourse analysis can be said to have evolved out of the desire linguists had to move beyond the sentence and be able to analyze all kinds of texts, from conversations to advertisements and texts with written or spoken language, images, video and music. As it evolved, some ideologically centered discourse analysts developed what is called CDA: Critical Discourse Analysis and then, to deal with the complex nature of mass mediated texts, MultiModal discourse analysis and, for some, Critical Multimodal Discourse Analysis. Two examples of how discourse analysis can be used are offered: the first is a discussion of the way language in speed dating shapes decision making by speed daters. The second is an analysis of a Fidji perfume advertisement.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing",
"Multimodality"
] |
[
71,
72,
74
] |
http://arxiv.org/abs/1804.05685v2
|
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
|
Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
https://aclanthology.org//W10-4202/
|
A Discourse-Aware Graph-Based Content-Selection Framework
|
[
"Structured Data in NLP",
"Text Generation",
"Multimodality"
] |
[
50,
47,
74
] |
|
http://arxiv.org/abs/1711.07010v5
|
A Discourse-Level Named Entity Recognition and Relation Extraction Dataset for Chinese Literature Text
|
Named Entity Recognition and Relation Extraction for Chinese literature text is regarded as a highly difficult problem, partially because of the lack of tagging sets. In this paper, we build a discourse-level dataset from hundreds of Chinese literature articles for improving this task. To build a high quality dataset, we propose two tagging methods to solve the problem of data inconsistency, including a heuristic tagging method and a machine auxiliary tagging method. Based on this corpus, we also introduce several widely used models to conduct experiments. Experimental results not only show the usefulness of the proposed dataset, but also provide baselines for further research. The dataset is available at https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset
|
[
"Relation Extraction",
"Syntactic Text Processing",
"Named Entity Recognition",
"Tagging",
"Information Extraction & Text Mining"
] |
[
75,
15,
34,
63,
3
] |
http://arxiv.org/abs/0911.1516v2
|
A Discourse-based Approach in Text-based Machine Translation
|
This paper presents a theoretical research based approach to ellipsis resolution in machine translation. The formula of discourse is applied in order to resolve ellipses. The validity of the discourse formula is analyzed by applying it to the real world text, i.e., newspaper fragments. The source text is converted into mono-sentential discourses where complex discourses require further dissection either directly into primitive discourses or first into compound discourses and later into primitive ones. The procedure of dissection needs further improvement, i.e., discovering as many primitive discourse forms as possible. An attempt has been made to investigate new primitive discourses or patterns from the given text.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
http://arxiv.org/abs/1911.09845v1
|
A Discrete CVAE for Response Generation on Short-Text Conversation
|
Neural conversation models such as encoder-decoder models tend to generate bland and generic responses. Some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional log-likelihood on a continuous latent variable. With different sampled latent variables, the model is expected to generate diverse responses. Although the CVAE-based models have shown tremendous potential, their improvement in generating high-quality responses is still unsatisfactory. In this paper, we introduce a discrete latent variable with an explicit semantic meaning to improve the CVAE on short-text conversation. A major advantage of our model is that we can exploit the semantic distance between the latent variables to maintain good diversity between the sampled latent variables. Accordingly, we propose a two-stage sampling approach to enable efficient diverse variable selection from a large latent space assumed in the short-text conversation task. Experimental results indicate that our model outperforms various kinds of generation models under both automatic and human evaluations and generates more diverse and informative responses.
|
[
"Language Models",
"Semantic Text Processing",
"Dialogue Response Generation",
"Natural Language Interfaces",
"Text Generation",
"Dialogue Systems & Conversational Agents"
] |
[
52,
72,
14,
11,
47,
38
] |
http://arxiv.org/abs/1909.04849v1
|
A Discrete Hard EM Approach for Weakly Supervised Question Answering
|
Many question answering (QA) tasks only provide weak supervision for how the answer should be computed. For example, TriviaQA answers are entities that can be mentioned multiple times in supporting documents, while DROP answers can be computed by deriving many different equations from numbers in the reference text. In this paper, we show it is possible to convert such tasks into discrete latent variable learning problems with a precomputed, task-specific set of possible "solutions" (e.g. different mentions or equations) that contains one correct option. We then develop a hard EM learning scheme that computes gradients relative to the most likely solution at each update. Despite its simplicity, we show that this approach significantly outperforms previous methods on six QA tasks, including absolute gains of 2--10%, and achieves the state-of-the-art on five of them. Using hard updates instead of maximizing marginal likelihood is key to these results as it encourages the model to find the one correct answer, which we show through detailed qualitative analysis.
|
[
"Low-Resource NLP",
"Natural Language Interfaces",
"Question Answering",
"Responsible & Trustworthy NLP"
] |
[
80,
11,
27,
4
] |
SCOPUS_ID:85073933302
|
A Discriminative Approach to Sentiment Classification
|
Due to the explosive growth of user-generated content, understanding opinions (such as reviews on products) generated by Internet users is important for optimizing business decisions. To achieve such understanding, this paper investigates a discriminative approach to classifying opinions according to sentiment. The discriminative approach builds a model with prior knowledge of the categorization information in order to extract meaningful features from unstructured texts. The prior knowledge includes ratio factors to reinforce terms’ sentiment polarity by using TF-IDF, short for term frequency-inverse document frequency. Experimental results with four datasets show the proposed approach is very competitive compared with some of the previous works.
|
[
"Information Extraction & Text Mining",
"Information Retrieval",
"Text Classification",
"Sentiment Analysis"
] |
[
3,
24,
36,
78
] |
SCOPUS_ID:85091022316
|
A Discriminative Convolutional Neural Network with Context-Aware Attention
|
Feature representation and feature extraction are two crucial procedures in text mining. Convolutional Neural Networks (CNNs) have shown overwhelming success for text-mining tasks, since they are capable of efficiently extracting n-gram features from source data. However, the vanilla CNN has its own weaknesses in feature representation and feature extraction. A certain number of filters in a CNN are inevitably duplicated and thus hinder discriminative representation of a given text. In addition, most existing CNN models extract features in a fixed way (i.e., max pooling) that either limits the CNN to a local optimum or ignores the relation between all features, making it unable to learn contextual n-gram features adaptively. In this article, we propose a discriminative CNN with context-aware attention to solve the challenges of the vanilla CNN. Specifically, our model encourages discrimination across different filters by maximizing their earth mover distances, and estimates the salience of feature candidates by considering the relation between context features. We carefully validate our findings against baselines on five benchmark classification datasets and two summarization datasets. The results of the experiments verify the competitive performance of our proposed model.
|
[
"Information Extraction & Text Mining"
] |
[
3
] |
SCOPUS_ID:85127443323
|
A Discriminative Deep Neural Network for Text Classification
|
Text classification plays an important role in natural language processing. It has been widely applied to sentiment analysis, stance detection and fake news detection. Although previous work on text classification has made great progress in recent years, these methods do not consider discriminativeness, and this omission degrades the performance of text classification. Hence, this paper proposes a novel CNN-based method that incorporates discriminative power by adding an extra regularization term. We conduct experiments on three datasets, and the results demonstrate that our model is superior to other popular text classification methods.
|
[
"Information Retrieval",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
24,
36,
3
] |
http://arxiv.org/abs/2106.11292v1
|
A Discriminative Entity-Aware Language Model for Virtual Assistants
|
High-quality automatic speech recognition (ASR) is essential for virtual assistants (VAs) to work well. However, ASR often performs poorly on VA requests containing named entities. In this work, we start from the observation that many ASR errors on named entities are inconsistent with real-world knowledge. We extend previous discriminative n-gram language modeling approaches to incorporate real-world knowledge from a Knowledge Graph (KG), using features that capture entity type-entity and entity-entity relationships. We apply our model through an efficient lattice rescoring process, achieving relative sentence error rate reductions of more than 25% on some synthesized test sets covering less popular entities, with minimal degradation on a uniformly sampled VA test set.
|
[
"Language Models",
"Semantic Text Processing",
"Speech & Audio in NLP",
"Text Generation",
"Speech Recognition",
"Multimodality"
] |
[
52,
72,
70,
47,
10,
74
] |
http://arxiv.org/abs/1808.09334v2
|
A Discriminative Latent-Variable Model for Bilingual Lexicon Induction
|
We introduce a novel discriminative latent-variable model for the task of bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a state-of-the-art embedding-based approach. To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical improvements on six language pairs under two metrics and show that the prior theoretically and empirically helps to mitigate the hubness problem. We also demonstrate how previous work may be viewed as a similarly fashioned latent-variable model, albeit with a different prior.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
http://arxiv.org/abs/1909.00444v1
|
A Discriminative Neural Model for Cross-Lingual Word Alignment
|
We introduce a novel discriminative word alignment model, which we integrate into a Transformer-based machine translation model. In experiments based on a small number of labeled examples (~1.7K-5K sentences) we evaluate its performance intrinsically on both English-Chinese and English-Arabic alignment, where we achieve major improvements over unsupervised baselines (11-27 F1). We evaluate the model extrinsically on data projection for Chinese NER, showing that our alignments lead to higher performance when used to project NER tags from English to Chinese. Finally, we perform an ablation analysis and an annotation experiment that jointly support the utility and feasibility of future manual alignment elicitation.
|
[
"Cross-Lingual Transfer",
"Multilinguality"
] |
[
19,
0
] |
SCOPUS_ID:85136209844
|
A Discursive Approach to Analyzing the Social Construction of Exercise During Pregnancy
|
While research has demonstrated that exercise is healthy for pregnant women (Ramirez-Velez et al., 2017), many pregnant women do not meet medical recommendations and hesitate to engage in exercise. This may be related to the dominant discourses circulating in society and popular media. For this study, I selected a sample of top-selling pregnancy books to explore the attitudes, beliefs, and ideas circulating in these texts surrounding exercise during pregnancy. I conducted a discourse analysis to deconstruct the meaning of the language used and the advice given. Throughout the analysis, a postmodern feminist epistemology is employed to consider the implications this discourse may have on a pregnant woman. I discovered evidence within the books that represents the current social constructions which may contribute to the lack of participation in exercise amongst pregnant women.
|
[
"Discourse & Pragmatics",
"Semantic Text Processing"
] |
[
71,
72
] |
https://aclanthology.org//W15-4608/
|
A Discursive Grid Approach to Model Local Coherence in Multi-document Summaries
|
[
"Natural Language Interfaces",
"Dialogue Systems & Conversational Agents"
] |
[
11,
38
] |
|
http://arxiv.org/abs/2106.06292v2
|
A Discussion on Building Practical NLP Leaderboards: The Case of Machine Translation
|
Recent advances in AI and ML applications have benefited from rapid progress in NLP research. Leaderboards have emerged as a popular mechanism to track and accelerate progress in NLP through competitive model development. While this has increased interest and participation, the over-reliance on single, accuracy-based metrics has shifted focus away from other metrics that might be equally pertinent in real-world contexts. In this paper, we offer a preliminary discussion of the risks associated with focusing exclusively on accuracy metrics and draw on recent discussions to highlight prescriptive suggestions on how to develop more practical and effective leaderboards that can better reflect the real-world utility of models.
|
[
"Machine Translation",
"Text Generation",
"Multilinguality"
] |
[
51,
47,
0
] |
SCOPUS_ID:85097828799
|
A Discussion on Various Methods in Automatic Abstractive Text Summarization
|
Automatic abstractive text summarization (ATS) is important both technically, for optimizing data storage and transmission requirements, and practically, for offering busy people a quick view of the information they need. This perspective and applicability have motivated various researchers to develop efficient models that consistently produce understandable, meaningful, and short summaries. The main aim of this discussion is to review the literature available in the domain, to understand researchers' perspectives on the development of such models, and to provide a platform for researchers in the field to formulate new development strategies. Some of the papers addressed here employ neural networks for ATS alongside graph-based methods, showing significant differences in the perspectives considered while developing the models and in the flexibility and complexity involved in implementing, experimenting with, and evaluating them.
|
[
"Summarization",
"Text Generation",
"Information Extraction & Text Mining"
] |
[
30,
47,
3
] |
SCOPUS_ID:85132272831
|
A Disease Identification Algorithm for Medical Crowdfunding Campaigns: Validation Study
|
Background: Web-based crowdfunding has become a popular method to raise money for medical expenses, and there is growing research interest in this topic. However, crowdfunding data are largely composed of unstructured text, thereby posing many challenges for researchers hoping to answer questions about specific medical conditions. Previous studies have used methods that either failed to address major challenges or were poorly scalable to large sample sizes. To enable further research on this emerging funding mechanism in health care, better methods are needed. Objective: We sought to validate an algorithm for identifying 11 disease categories in web-based medical crowdfunding campaigns. We hypothesized that a disease identification algorithm combining a named entity recognition (NER) model and word search approach could identify disease categories with high precision and accuracy. Such an algorithm would facilitate further research using these data. Methods: Web scraping was used to collect data on medical crowdfunding campaigns from GoFundMe (GoFundMe Inc). Using pretrained NER and entity resolution models from Spark NLP for Healthcare in combination with targeted keyword searches, we constructed an algorithm to identify conditions in the campaign descriptions, translate conditions to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes, and predict the presence or absence of 11 disease categories in the campaigns. The classification performance of the algorithm was evaluated against 400 manually labeled campaigns. Results: We collected data on 89,645 crowdfunding campaigns through web scraping. The interrater reliability for detecting the presence of broad disease categories in the campaign descriptions was high (Cohen κ: range 0.69-0.96). The NER and entity resolution models identified 6594 unique (276,020 total) ICD-10-CM codes among all of the crowdfunding campaigns in our sample. Through our word search, we identified 3261 additional campaigns for which a medical condition was not otherwise detected with the NER model. When averaged across all disease categories and weighted by the number of campaigns that mentioned each disease category, the algorithm demonstrated an overall precision of 0.83 (range 0.48-0.97), a recall of 0.77 (range 0.42-0.98), an F1 score of 0.78 (range 0.56-0.96), and an accuracy of 95% (range 90%-98%). Conclusions: A disease identification algorithm combining pretrained natural language processing models and ICD-10-CM code–based disease categorization was able to detect 11 disease categories in medical crowdfunding campaigns with high precision and accuracy.
|
[
"Language Models",
"Semantic Text Processing",
"Information Retrieval",
"Named Entity Recognition",
"Text Classification",
"Information Extraction & Text Mining"
] |
[
52,
72,
24,
34,
36,
3
] |
http://arxiv.org/abs/2010.11384v2
|
A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews
|
The flexibility of the inference process in Variational Autoencoders (VAEs) has recently led to revising traditional probabilistic topic models, giving rise to Neural Topic Models (NTMs). Although these approaches have achieved significant results, surprisingly very little work has been done on how to disentangle the latent topics. Existing topic models, when applied to reviews, may extract topics associated with writers' subjective opinions mixed with those related to factual descriptions such as plot summaries in movie and book reviews. It is thus desirable to automatically separate opinion topics from plot/neutral ones, enabling better interpretability. In this paper, we propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones. We conduct an extensive experimental assessment introducing a new collection of movie and book reviews paired with their plots, namely the MOBO dataset, showing an improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models.
|
[
"Topic Modeling",
"Opinion Mining",
"Robustness in NLP",
"Sentiment Analysis",
"Responsible & Trustworthy NLP",
"Information Extraction & Text Mining"
] |
[
9,
49,
58,
78,
4,
3
] |
SCOPUS_ID:1642505638
|
A Distance Based Semantic Search Algorithm for Peer-to-Peer Open Hypermedia Systems
|
We consider the problem of content management in dynamically created collaborative environments. We describe the problem domain with the aid of a collaborative application in Open Hypermedia Systems, which allows individual users to share their link databases, otherwise known as linkbases. The RDF specification is utilised to express and categorise resources stored in a linkbase. This paper describes a semantic search mechanism to discover semantically related resources across such distributed linkbases. Our approach differs from the traditional crawler-based search mechanism since it relies on the clustering of semantically related entities to expedite the search for resources in a randomly created network and uses distance-vector-based heuristics to guide the search. Our experimental results indicate that the algorithm yields high search effectiveness in collaborative environments where changes in content published by each participant are rapid and random.
|
[
"Semantic Search",
"Semantic Text Processing",
"Information Retrieval"
] |
[
41,
72,
24
] |
http://arxiv.org/abs/2204.06584v1
|
A Distant Supervision Corpus for Extracting Biomedical Relationships Between Chemicals, Diseases and Genes
|
We introduce ChemDisGene, a new dataset for training and evaluating multi-class multi-label document-level biomedical relation extraction models. Our dataset contains 80k biomedical research abstracts labeled with mentions of chemicals, diseases, and genes, portions of which human experts labeled with 18 types of biomedical relationships between these entities (intended for evaluation), and the remainder of which (intended for training) has been distantly labeled via the CTD database with approximately 78% accuracy. In comparison to similar preexisting datasets, ours is both substantially larger and cleaner; it also includes annotations linking mentions to their entities. We also provide three baseline deep neural network relation extraction models trained and evaluated on our new dataset.
|
[
"Relation Extraction",
"Information Extraction & Text Mining"
] |
[
75,
3
] |