Columns: id (string, length 20–52) · title (string, length 3–459) · abstract (string, length 0–12.3k) · classification_labels (list) · numerical_classification_labels (list)
SCOPUS_ID:85059873222
A Corpus for Hybrid Question Answering Systems
Question answering has been the focus of much research and many evaluation campaigns, either for text-based systems (for example, the TREC and CLEF evaluation campaigns) or for knowledge-based systems (QALD, BioASQ). Few systems have effectively combined both types of resources and methods in order to exploit the benefits of merging the two kinds of information repositories. The only QA evaluation track that focuses on hybrid QA is QALD, since 2014. As it is a recent task, little annotated data is available (around 150 questions). In this paper, we present a question answering dataset that was constructed to develop and evaluate hybrid question answering systems. In order to create this corpus, we collected several textual corpora and augmented them with entities and relations from a knowledge base by retrieving paths in the knowledge base that allow the questions to be answered. The resulting corpus contains 4,300 question-answer pairs, 1,600 of which have a true link with DBpedia.
[ "Natural Language Interfaces", "Knowledge Representation", "Semantic Text Processing", "Question Answering" ]
[ 11, 18, 72, 27 ]
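The corpus above is built by retrieving knowledge-base paths that connect question entities to answers. As a rough, hypothetical sketch of that kind of lookup (not the authors' pipeline), the following queries DBpedia's public SPARQL endpoint for one- and two-hop predicate paths between an entity and a candidate answer; the endpoint URL is real, but the find_paths helper and the example resources are illustrative only.

```python
import requests

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"

def find_paths(subject_uri: str, answer_uri: str):
    """Retrieve direct and two-hop predicate paths linking a question entity
    to a candidate answer in DBpedia (illustrative only)."""
    query = f"""
    SELECT DISTINCT ?p1 ?mid ?p2 WHERE {{
      {{ <{subject_uri}> ?p1 <{answer_uri}> . }}
      UNION
      {{ <{subject_uri}> ?p1 ?mid . ?mid ?p2 <{answer_uri}> . }}
    }} LIMIT 50
    """
    resp = requests.get(
        DBPEDIA_SPARQL,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        (b["p1"]["value"], b.get("mid", {}).get("value"), b.get("p2", {}).get("value"))
        for b in resp.json()["results"]["bindings"]
    ]

# Example: paths between Barack Obama and Honolulu (his birth place).
print(find_paths("http://dbpedia.org/resource/Barack_Obama",
                 "http://dbpedia.org/resource/Honolulu"))
```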
http://arxiv.org/abs/2005.13962v1
A Corpus for Large-Scale Phonetic Typology
A major hurdle in data-driven research on typology is having sufficient data in many languages to draw meaningful conclusions. We present VoxClamantis v1.0, the first large-scale corpus for phonetic typology, with aligned segments and estimated phoneme-level labels in 690 readings spanning 635 languages, along with acoustic-phonetic measures of vowels and sibilants. Access to such data can greatly facilitate investigation of phonetic typology at a large scale and across many languages. However, it is non-trivial and computationally intensive to obtain such alignments for hundreds of languages, many of which have few to no resources presently available. We describe the methodology to create our corpus, discuss caveats with current methods and their impact on the utility of this data, and illustrate possible research directions through a series of case studies on the 48 highest-quality readings. Our corpus and scripts are publicly available for non-commercial use at https://voxclamantisproject.github.io.
[ "Phonetics", "Typology", "Syntactic Text Processing", "Multilinguality" ]
[ 64, 45, 15, 0 ]
http://arxiv.org/abs/1805.09821v1
A Corpus for Multilingual Document Classification in Eight Languages
Cross-lingual document classification aims at training a document classifier on resources in one language and transferring it to a different language without any additional resources. Several approaches have been proposed in the literature, and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. However, this subset covers only a few languages (English, German, French and Spanish), and almost all published works focus on the transfer between English and German. In addition, we have observed that the class prior distributions differ significantly between the languages. We argue that this complicates the evaluation of multilinguality. In this paper, we propose a new subset of the Reuters corpus with balanced class priors for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover languages which are very different with respect to syntax, morphology, etc. We provide strong baselines for all language transfer directions using multilingual word and sentence embeddings, respectively. Our goal is to offer a freely available framework to evaluate cross-lingual document classification, and we hope by these means to foster research in this important area.
[ "Multilinguality", "Text Classification", "Cross-Lingual Transfer", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 0, 36, 19, 24, 3 ]
http://arxiv.org/abs/1811.00491v3
A Corpus for Reasoning About Natural Language Grounded in Photographs
We introduce a new dataset for joint reasoning about natural language and images, with a focus on semantic diversity, compositionality, and visual reasoning challenges. The data contains 107,292 examples of English sentences paired with web photographs. The task is to determine whether a natural language caption is true about a pair of photographs. We crowdsource the data using sets of visually rich images and a compare-and-contrast task to elicit linguistically diverse language. Qualitative analysis shows the data requires compositional joint reasoning, including about quantities, comparisons, and relations. Evaluation using state-of-the-art visual reasoning methods shows the data presents a strong challenge.
[ "Visual Data in NLP", "Reasoning", "Multimodality" ]
[ 20, 8, 74 ]
http://arxiv.org/abs/2003.13342v1
A Corpus of Controlled Opinionated and Knowledgeable Movie Discussions for Training Neural Conversation Models
Fully data-driven chatbots for non-goal-oriented dialogues are known to suffer from inconsistent behaviour across their turns, stemming from a general difficulty in controlling parameters like their assumed background personality and knowledge of facts. One reason for this is the relative lack of labeled data from which personality consistency and fact usage could be learned together with dialogue behaviour. To address this, we introduce a new labeled dialogue dataset in the domain of movie discussions, where every dialogue is based on pre-specified facts and opinions. We thoroughly validate the collected dialogues for adherence of the participants to their given fact and opinion profiles, and find that the general quality in this respect is high. This process also gives us an additional layer of annotation that is potentially useful for training models. As a baseline, we introduce an end-to-end self-attention decoder model trained on this data and show that it is able to generate opinionated responses that are judged to be natural, knowledgeable and attentive.
[ "Opinion Mining", "Natural Language Interfaces", "Sentiment Analysis", "Dialogue Systems & Conversational Agents" ]
[ 49, 11, 78, 38 ]
http://arxiv.org/abs/1712.02480v1
A Corpus of Deep Argumentative Structures as an Explanation to Argumentative Relations
In this paper, we compose a new task for deep argumentative structure analysis that goes beyond shallow discourse structure analysis. The idea is that argumentative relations can reasonably be represented with a small set of predefined patterns. For example, using value judgment and bipolar causality, we can explain a support relation between two argumentative segments as follows: Segment 1 states that something is good, and Segment 2 states that it is good because it promotes something good when it happens. We are motivated by the following questions: (i) how do we formulate the task?, (ii) can a reasonable pattern set be created?, and (iii) do the patterns work? To examine the task feasibility, we conduct a three-stage, detailed annotation study using 357 argumentative relations from the argumentative microtext corpus, a small, but highly reliable corpus. We report the coverage of explanations captured by our patterns on a test set composed of 270 relations. Our coverage result of 74.6% indicates that argumentative relations can reasonably be explained by our small pattern set. Our agreement result of 85.9% shows that a reasonable inter-annotator agreement can be achieved. To assist with future work in computational argumentation, the annotated corpus is made publicly available.
[ "Explainability & Interpretability in NLP", "Responsible & Trustworthy NLP" ]
[ 81, 4 ]
http://arxiv.org/abs/1805.11869v1
A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection
Social media platforms like Twitter and Facebook have become two of the largest mediums used by people to express their views towards different topics. The generation of such large amounts of user data has made NLP tasks like sentiment analysis and opinion mining much more important. Using sarcasm in texts on social media has become a popular trend lately. Sarcasm reverses the meaning and polarity of what is implied by the text, which poses a challenge for many NLP tasks. The task of sarcasm detection in text is gaining more and more importance for both commercial and security services. We present the first English-Hindi code-mixed dataset of tweets marked for the presence of sarcasm and irony, where each token is also annotated with a language tag. We present a baseline supervised classification system developed using the same dataset, which achieves an average F-score of 78.4 using a random forest classifier and 10-fold cross-validation.
[ "Stylistic Analysis", "Sentiment Analysis" ]
[ 67, 78 ]
https://aclanthology.org//W07-1405/
A Corpus of Fine-Grained Entailment Relations
[ "Reasoning", "Textual Inference" ]
[ 8, 22 ]
SCOPUS_ID:85144337860
A Corpus of German Citizen Contributions in Mobility Planning: Supporting Evaluation Through Multidimensional Classification
Political authorities in democratic countries regularly consult the public in order to allow citizens to voice their ideas and concerns on specific issues. When trying to evaluate the (often large number of) contributions by the public in order to inform decision-making, authorities regularly face challenges due to restricted resources. We identify several tasks whose automated support can help in the evaluation of public participation. These are i) the recognition of arguments, more precisely premises and their conclusions, ii) the assessment of the concreteness of arguments, iii) the detection of textual descriptions of locations in order to assign citizens' ideas to a spatial location, and iv) the thematic categorization of contributions. To enable future research efforts to develop techniques addressing these four tasks, we introduce the CIMT PartEval Corpus, a new publicly-available German-language corpus that includes several thousand citizen contributions from six mobility-related planning processes in five German municipalities. The corpus provides annotations for each of these tasks that have not previously been available in German for the domain of public participation, either at all or in this scope and variety.
[ "Text Classification", "Argument Mining", "Reasoning", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 36, 60, 8, 24, 3 ]
https://aclanthology.org//W11-2703/
A Corpus of Human-written Summaries of Line Graphs
[ "Structured Data in NLP", "Text Generation", "Multimodality" ]
[ 50, 47, 74 ]
SCOPUS_ID:85139876112
A Corpus-Assisted Critical Discourse Analysis of News Construction of the Flint Water Crisis
Background: News media play a critical role in communicating risks and shaping public perceptions of social issues. The ways that news media at different levels construct the Flint water crisis, a multilayered disaster that grew from a local story to a national one, have not been previously explored. Literature review: Despite the well-established role of journalism as a government watchdog, news media do not neutrally mirror every social event. Instead, news reporting, highly mediated by language, is filled with political interests, values, and attitudes. Research questions: 1. How did local/regional and national newspapers construct the Flint water crisis? 2. Are there any similarities and/or differences in local/regional and national news construction of the Flint water crisis? 3. What are the practical implications for media coverage of risks, emergencies, or crises? 4. What are the methodological implications of this study for professional communication research? Methodology: This study integrates corpus linguistics and critical discourse analysis to analyze 1,858 news reports about the Flint water crisis published between 2014 and 2018. I use keywords as a core analytical technique to compare the local/regional and national news coverage. Results: The results show that both local and national news reports overemphasized government activities while downplaying the unofficial voices of Flint residents and community activists. In addition, national newspapers were more likely than local newspapers to use racial cues in describing the Flint community and to associate the crisis with other social problems. Conclusions: This study suggests that news media should provide wide coverage of the affected community's efforts in risk/crisis communication rather than reproducing official messages. News representations should be cautious of strengthening stereotypes or forming negative conceptual associations of traditionally disenfranchised communities.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
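The study above uses keywords (keyness) as its core analytical technique for comparing local/regional and national coverage. Below is a minimal sketch of one standard keyness measure, Dunning's log-likelihood (G2), assuming simple tokenized corpora; the toy token lists are invented, and the choice of statistic is mine rather than necessarily the study's.

```python
import math
from collections import Counter

def keyness(target_tokens, reference_tokens):
    """Dunning log-likelihood (G2) keyness of every word in a target corpus
    relative to a reference corpus; higher scores mark stronger keywords."""
    tgt, ref = Counter(target_tokens), Counter(reference_tokens)
    n_tgt, n_ref = len(target_tokens), len(reference_tokens)
    scores = {}
    for word, o1 in tgt.items():
        o2 = ref.get(word, 0)
        e1 = n_tgt * (o1 + o2) / (n_tgt + n_ref)   # expected count in target
        e2 = n_ref * (o1 + o2) / (n_tgt + n_ref)   # expected count in reference
        g2 = 2 * (o1 * math.log(o1 / e1) + (o2 * math.log(o2 / e2) if o2 else 0))
        scores[word] = g2
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented toy corpora standing in for local vs. national coverage.
local = "water crisis flint residents pipes lead testing".split()
national = "water crisis government policy election president".split()
print(keyness(local, national)[:5])
```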
SCOPUS_ID:85062910525
A Corpus-Assisted Critical Discourse Analysis of “Migrants” and “Migration” in the British Tabloids and Quality Press
The aim of the paper is to study how the problem of mass migration is presented in the British press. The frequencies of occurrence of the words “migrants” and “migration” are analysed in four British press titles: two newspapers represent a conservative bias and two a centre-left political alignment; at the same time, two newspapers exemplify the quality press and two are tabloids. The methodology used in this study follows a corpus-assisted approach to language analysis, which is conducted in a bottom-up fashion, also known as a corpus-driven study (Tognini-Bonelli 2001). The analysis shows that the representation of the problem of migration in the four British newspapers is generally negative, and the negativity revolves mainly around illegal entry, employment and abuse of the social benefit system, which results in frequent social and political exclusion of migrants on economic and legal bases. Secondly, the conservative press focuses on criticising migration and migrants while the labour-oriented press, in particular The Guardian, expresses compassion and sympathy towards migrants. Moreover, quality papers devote much more space to discussing the problem of migration than tabloids.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85147368382
A Corpus-Based Investigation of “Would You Like” and “Would You Mind” Request Expressions’ Collocational Patterns in American Spoken English Discourse
Knowledge of speech acts and their functions is a basic component of pragmatics, and the request speech act plays a crucial part in everyday interactions. This study aimed to investigate whether native speakers of English make any distinction in utilizing the request expressions “would you like” and “would you mind”, their collocations in both spoken and academic contexts, and the functional differences caused by the co-text. To this end, the data was retrieved from the Corpus of Contemporary American English (COCA). The results revealed that such expressions in the spoken corpus were used more frequently in the transactional context with equal status and as interactional-oriented. However, in the academic corpus, the same expressions were used more frequently in the pedagogical context with high-low status and as both interactional-oriented and task-oriented. The expression “would you like” was mostly used to give information, whereas “would you mind” was usually used to request an action. These expressions were not used for the purpose of imposition in either of the two contexts. The study revealed that the collocations did not affect the function of such requests. In fact, it was the collocating words that changed due to the pragmatic functions and the objectives of the speakers. The findings might contribute to an understanding of the variations that matter between the request expressions. Teachers and learners might gain insights into how and when they are used and which collocations are more frequent, so as to focus more carefully on them and make informed and proper decisions within pedagogical settings.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality" ]
[ 71, 72, 70, 74 ]
SCOPUS_ID:85125749684
A Corpus-based Analysis of High School English Textbooks and English University Entrance Exams in Turkey
This study explores the disconnect between the English textbooks studied in high schools (9th-12th grades) and the English tested on Turkish university entrance exams (2010-2019). Using corpus linguistics tools such as AntWordProfiler, TAALED, and the L2 Syntactic Complexity Analyzer (L2SCA), this paper analyzes the lexical diversity and syntactic complexity indices in the sample material. A comparison of official textbooks and complementary materials obtained from the Ministry of National Education against the official university entrance exams demonstrates that: (i) differences in lexical sophistication can be observed between the two corpora, with the exam corpus at a higher level than the textbook corpus; (ii) there is a statistically significant difference between the two corpora in terms of lexical diversity, with the exam corpus showing a significantly higher level of lexical diversity than the textbook corpus; (iii) statistically significant differences also exist between the two corpora regarding the syntactic complexity indices, with the syntactic complexity of the exam corpus higher than that of the textbook corpus. These findings suggest that Turkish high school students taught English with official textbooks have to tackle low-frequency and more sophisticated words at a higher level of syntactic complexity when they take the nationwide exam. This, in turn, creates a negative backwash effect, distorting their approach to the L2 and raising other concerns about the misalignment between the official language education materials and nationwide exams.
[ "Syntactic Text Processing" ]
[ 15 ]
https://aclanthology.org//W10-0205/
A Corpus-based Method for Extracting Paraphrases of Emotion Terms
[ "Emotion Analysis", "Paraphrasing", "Text Generation", "Sentiment Analysis" ]
[ 61, 32, 47, 78 ]
https://aclanthology.org//1995.iwpt-1.26/
A Corpus-based Probabilistic Grammar with Only Two Non-terminals
The availability of large, syntactically-bracketed corpora such as the Penn Tree Bank affords us the opportunity to automatically build or train broad-coverage grammars, and in particular to train probabilistic grammars. A number of recent parsing experiments have also indicated that grammars whose production probabilities are dependent on the context can be more effective than context-free grammars in selecting a correct parse. To make maximal use of context, we have automatically constructed, from the Penn Tree Bank version 2, a grammar in which the symbols S and NP are the only real nonterminals, and the other non-terminals or grammatical nodes are in effect embedded into the right-hand-sides of the S and NP rules. For example, one of the rules extracted from the tree bank would be S -> NP VBX JJ CC VBX NP [1] (where NP is a non-terminal and the other symbols are terminals – part-of-speech tags of the Tree Bank). The most common structure in the Tree Bank associated with this expansion is (S NP (VP (VP VBX (ADJ JJ) CC (VP VBX NP)))) [2]. So if our parser uses rule [1] in parsing a sentence, it will generate structure [2] for the corresponding part of the sentence. Using 94% of the Penn Tree Bank for training, we extracted 32,296 distinct rules (23,386 for S and 8,910 for NP). We also built a smaller version of the grammar based on higher frequency patterns for use as a back-up when the larger grammar is unable to produce a parse due to memory limitation. We applied this parser to 1,989 Wall Street Journal sentences (separate from the training set and with no limit on sentence length). Of the parsed sentences (1,899), the percentage of no-crossing sentences is 33.9%, and Parseval recall and precision are 73.43% and 72.61%.
[ "Syntactic Parsing", "Syntactic Text Processing" ]
[ 28, 15 ]
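The abstract above gives a concrete flattened rule and its associated structure. As a rough reconstruction (not the authors' extraction code) of how such two-non-terminal rules could be read off a bracketed treebank tree, the following keeps S and NP as non-terminals, keeps POS tags of words, and splices every other grammatical node into the right-hand side; the toy tree and the VBX tag follow the abstract's own notation.

```python
from nltk import Tree

KEPT = {"S", "NP"}  # the only non-terminals retained by the flattened grammar

def frontier(node):
    """Flatten a subtree into the RHS of a rule: keep S/NP as non-terminals,
    keep POS tags of words, and splice out every other grammatical node."""
    rhs = []
    for child in node:
        if isinstance(child, Tree):
            label = child.label()
            if label in KEPT:
                rhs.append(label)
            elif len(child) == 1 and isinstance(child[0], str):
                rhs.append(label)            # preterminal: keep the POS tag
            else:
                rhs.extend(frontier(child))  # e.g. VP, ADJP: splice children in
    return rhs

def extract_rules(tree):
    """Collect one flattened rule per S or NP node in a bracketed parse."""
    return [(sub.label(), frontier(sub))
            for sub in tree.subtrees(lambda t: t.label() in KEPT)]

t = Tree.fromstring(
    "(S (NP (DT The) (NN plan)) (VP (VP (VBX is) (ADJP (JJ simple))) "
    "(CC and) (VP (VBX has) (NP (DT a) (NN catch)))))")
for lhs, rhs in extract_rules(t):
    print(lhs, "->", " ".join(rhs))
# prints: S -> NP VBX JJ CC VBX NP (the rule shape cited in the abstract),
# then NP -> DT NN for each NP.
```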
SCOPUS_ID:85126328657
A Corpus‐Based Sentence Classifier for Entity–Relationship Modelling
Automated creation of a conceptual data model based on user requirements expressed in the textual form of a natural language is a challenging research area. The complexity of natural language requires deep insight into the semantics buried in words, expressions, and string patterns. For the purpose of natural language processing, we created a corpus of business descriptions and an adherent lexicon containing all the words in the corpus. Thus, it was possible to define rules for the automatic translation of business descriptions into the entity–relationship (ER) data model. However, since the translation rules could not always lead to accurate translations, we created an additional classification process layer—a classifier which assigns to each input sentence some of the defined ER method classes. The classifier represents a formalized knowledge of the four data modelling experts. This rule‐based classification process is based on the extraction of ER information from a given sentence. After the detailed description, the classification process itself was evaluated and tested using the standard multiclass performance measures: recall, precision and accuracy. The accuracy in the learning phase was 96.77% and in the testing phase 95.79%.
[ "Machine Translation", "Information Extraction & Text Mining", "Information Retrieval", "Text Generation", "Text Classification", "Multilinguality" ]
[ 51, 3, 24, 47, 36, 0 ]
http://arxiv.org/abs/1606.04754v1
A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation
Interlingua-based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder-decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X, Z and Y wherein we are interested in generating sequences in Y starting from information available in X. However, there is no parallel training data available between X and Y, but training data is available between X & Z and Z & Y (as is often the case in many real world applications). Z thus acts as a pivot/bridge. An obvious solution, which is perhaps less elegant but works very well in practice, is to train a two-stage model which first converts from X to Z and then from Z to Y. Instead we explore an interlingua inspired solution which jointly learns to do the following (i) encode X and Z to a common representation and (ii) decode Y from this common representation. We evaluate our model on two tasks: (i) bridge transliteration and (ii) bridge captioning. We report promising results in both these applications and believe that this is a step in the right direction towards truly interlingua-inspired encoder-decoder architectures.
[ "Language Models", "Machine Translation", "Semantic Text Processing", "Text Generation", "Multilinguality" ]
[ 52, 51, 72, 47, 0 ]
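The abstract above describes encoding X and Z into a common representation and decoding Y from it, with Z acting as a pivot. Below is a highly simplified PyTorch sketch written from that description alone: the mean-squared agreement term stands in for the paper's correlational objective, and all dimensions, module names and dummy data are my assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

EMB, HID, VOCAB = 64, 128, 1000  # illustrative sizes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
    def forward(self, ids):                  # ids: (batch, seq_len)
        _, h = self.rnn(self.emb(ids))       # h: (1, batch, HID)
        return h.squeeze(0)                  # common representation

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)
    def forward(self, rep, y_in):             # teacher forcing
        out, _ = self.rnn(self.emb(y_in), rep.unsqueeze(0))
        return self.out(out)                  # (batch, seq_len, VOCAB)

enc_x, enc_z, dec_y = Encoder(), Encoder(), Decoder()
params = list(enc_x.parameters()) + list(enc_z.parameters()) + list(dec_y.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_ids, z_ids_par, z_ids, y_in, y_out):
    """One step: (x_ids, z_ids_par) is an X-Z pair; (z_ids, y_*) is a Z-Y pair."""
    opt.zero_grad()
    # Agreement between enc(X) and enc(Z) on parallel X-Z data.
    corr = torch.mean((enc_x(x_ids) - enc_z(z_ids_par)) ** 2)
    # Reconstruction of Y from the common representation of Z.
    logits = dec_y(enc_z(z_ids), y_in)
    recon = ce(logits.reshape(-1, VOCAB), y_out.reshape(-1))
    loss = recon + corr
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch; at test time Y would be decoded from enc_x(X) directly.
x = torch.randint(0, VOCAB, (4, 7)); zp = torch.randint(0, VOCAB, (4, 9))
z = torch.randint(0, VOCAB, (4, 9)); y = torch.randint(0, VOCAB, (4, 6))
print(train_step(x, zp, z, y[:, :-1], y[:, 1:]))
```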
http://arxiv.org/abs/2012.02221v1
A Correspondence Variational Autoencoder for Unsupervised Acoustic Word Embeddings
We propose a new unsupervised model for mapping a variable-duration speech segment to a fixed-dimensional representation. The resulting acoustic word embeddings can form the basis of search, discovery, and indexing systems for low- and zero-resource languages. Our model, which we refer to as a maximal sampling correspondence variational autoencoder (MCVAE), is a recurrent neural network (RNN) trained with a novel self-supervised correspondence loss that encourages consistency between embeddings of different instances of the same word. Our training scheme improves on previous correspondence training approaches through the use and comparison of multiple samples from the approximate posterior distribution. In the zero-resource setting, the MCVAE can be trained in an unsupervised way, without any ground-truth word pairs, by using the word-like segments discovered via an unsupervised term discovery system. In both this setting and a semi-supervised low-resource setting (with a limited set of ground-truth word pairs), the MCVAE outperforms previous state-of-the-art models, such as Siamese-, CAE- and VAE-based RNNs.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Representation Learning", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 12, 4 ]
SCOPUS_ID:85070412712
A Cost-Effective Audio-Visual Summarizer for Summarization of Presentations and Seminars
Nowadays, more than half the world's population live hand-in-hand with technology. From smart watches to smart phones to smart cities, the embodiment of technology in all of our day-to-day activities no longer looks like a distant dream. Lectures in classrooms have also advanced to the extent of using smart-boards and smart-classrooms; the developments in jotting down notes in such scenarios have, however, not advanced at the same pace. Along similar lines, the target audience of the problem tackled here includes the attendees of any seminar, presentation or lecture, be it students, or the public in general, attending important conferences and talks. More often than not, complete undivided attention proves to be difficult at these seminars as the attendee may be preoccupied with the objective of jotting down pointers and making notes for future reference. It is during this process that several essentials in the speaker's delivery are missed. Keeping this forethought in mind, this paper delves into the implementation of an audio-visual summarizer that achieves the aforementioned goal. With audio evidence on the speaker's delivery, paired with visual images of PowerPoint slides or handwritten material that is presented in the seminars, this device provides a smart solution of summarizing the entire presentation and logging the summary to a remote database server from where it is accessed through a user-end software application. The prototype comprises a Raspberry Pi coupled with a camera and a microphone. The prototype uses a fast RCNN model for text detection, Open Source Computer Vision (OpenCV) for text extraction, Google Speech Recognition and Natural Language Processing concepts for generating the summarized data. The proposed solution is very effective, in terms of feasibility and cost cutting factors. The novelty aspect of the proposed solution lies in the consolidation of ideas of the Internet of Things (IoT) and machine learning, to deliver a product capable of providing a smooth and potent solution to the problem statement.
[ "Visual Data in NLP", "Speech & Audio in NLP", "Summarization", "Multimodality", "Text Generation", "Speech Recognition", "Information Extraction & Text Mining" ]
[ 20, 70, 30, 74, 47, 10, 3 ]
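The prototype above chains speech recognition over the lecture audio with text read off the slides, then summarizes the result. Below is a minimal sketch of such a pipeline with stand-ins for parts the abstract names but does not detail: the SpeechRecognition package's Google recognizer, pytesseract in place of the fast RCNN + OpenCV text stage, and a naive frequency-based extractive summary; the file names are placeholders.

```python
import re
from collections import Counter

import pytesseract
import speech_recognition as sr
from PIL import Image

def transcribe(wav_path):
    """Transcribe a WAV recording with Google's free speech recognition API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)

def read_slide(image_path):
    """OCR a slide image (simple stand-in for the paper's detection stage)."""
    return pytesseract.image_to_string(Image.open(image_path))

def summarize(text, n_sentences=3):
    """Naive extractive summary: pick the sentences with the highest total
    word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
                    reverse=True)
    return " ".join(scored[:n_sentences])

lecture_text = transcribe("lecture_audio.wav") + " " + read_slide("slide_01.png")
print(summarize(lecture_text))
```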
SCOPUS_ID:85091967362
A Cost-Effective OCR Implementation to Prevent Phishing on Mobile Platforms
Phishing is currently defined as a criminal mechanism employing both social engineering and technical subterfuge to gather any useful information such as user personal data or financial account credentials. Many users can recognize this kind of attack from suspicious URL addresses or obvious warning information from browsers, but phishing still accounts for a large proportion of all malicious attacks. Moreover, these warning features will be eliminated if the victim is under a DNS hijacking attack. There is much research about the prevention and evaluation of phishing, on both PC and mobile platforms, but there are still technical challenges to reducing the risk from phishing, especially on mobile platforms. We presented a novel method to prevent phishing attacks by using Optical Character Recognition (OCR) technology in a previous paper. This method not only overcomes the limitations of current preventions, but also provides a high detection accuracy rate. However, whether this method can be implemented ideally on mobile devices needed to be further examined, especially in relation to the challenges of limited resources (power and bandwidth). In this paper, we apply the OCR method on a mobile platform and provide a prototype implementation scheme to determine applicability. Experiments are performed to test the technique under DNS hijacking attacks.
[ "Visual Data in NLP", "Responsible & Trustworthy NLP", "Robustness in NLP", "Multimodality" ]
[ 20, 4, 58, 74 ]
SCOPUS_ID:85118980637
A Cost-Efficient Framework for Scene Text Detection in the Wild
Scene text detection in the wild is a hot research area in the field of computer vision, which has achieved great progress with the aid of deep learning. However, training deep text detection models needs large amounts of annotations such as bounding boxes and quadrangles, which is laborious and expensive. Although synthetic data is easier to acquire, the model trained on this data has a large performance gap relative to that trained on real data because of domain shift. To address this problem, we propose a novel two-stage framework for cost-efficient scene text detection. Specifically, in order to unleash the power of synthetic data, we design an unsupervised domain adaptation scheme consisting of Entropy-aware Global Transfer (EGT) and Text Region Transfer (TRT) to pre-train the model. Furthermore, we utilize minimal actively annotated and enhanced pseudo labeled real samples to fine-tune the model, aiming at saving the annotation cost. In this framework, both the diversity of the synthetic data and the reality of the unlabeled real data are fully exploited. Extensive experiments on various benchmarks show that the proposed framework significantly outperforms the baseline, and achieves desirable performance with even a few labeled real datasets.
[ "Low-Resource NLP", "Responsible & Trustworthy NLP", "Green & Sustainable NLP" ]
[ 80, 4, 68 ]
SCOPUS_ID:85081339796
A Cost-Reducing Partial Labeling Estimator in Text Classification Problem
The paper proposes a new approach to address text classification problems when learning with partial labels is beneficial. Instead of offering each training sample a set of candidate labels, the researchers assign negative-oriented labels to ambiguous training examples if they are unlikely to fall into certain classes. The researchers construct two new maximum likelihood estimators with a self-correction property, and prove that under some conditions the new estimators converge faster. The paper also discusses the advantages of applying one of the new estimators to a fully supervised learning problem. The proposed method has potential applicability in many areas, such as crowd-sourcing, natural language processing and medical image analysis.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
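The abstract does not spell out the two estimators, so the following is only one plausible formulation of learning from negative-oriented labels: each ambiguous example contributes the likelihood of not belonging to the classes it was ruled out of. The loss below and the toy logits are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def negative_label_nll(logits, negative_sets):
    """Negative log-likelihood when each example i is only known NOT to belong
    to the classes in negative_sets[i]: maximize the probability mass left on
    the remaining (candidate) classes."""
    probs = softmax(logits)
    losses = []
    for p, neg in zip(probs, negative_sets):
        allowed = 1.0 - p[list(neg)].sum()       # mass on non-excluded classes
        losses.append(-np.log(max(allowed, 1e-12)))
    return float(np.mean(losses))

# Two toy examples: the first is known not to be class 2, the second not 0 or 1.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
print(negative_label_nll(logits, [{2}, {0, 1}]))
```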
SCOPUS_ID:84976782506
A Course on the Relationship of Formal Language Theory to Automata
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
SCOPUS_ID:85110521259
A Crash Course in Automatic Grammatical Error Correction
[ "Text Error Correction", "Syntactic Text Processing" ]
[ 26, 15 ]
SCOPUS_ID:85141032336
A Crisis Information Dashboard System using Feedback-Based Text Classification of Typhoon-Related Tweets in the Philippines
In this paper, we contribute to the social media analytics literature by incorporating user feedback to improve tweet classification of code-switched data. We integrate this technology into a crisis information dashboard system to consolidate significant information. The instantaneous nature of data obtained from social media makes it an ideal medium in emergency situations. Using a multiclass SVM with the categories (1) Announcement, (2) Casualty and Damage, and (3) Call for Help, our test case involving typhoon Hagupit with a total of 1,690 tweets resulted in a baseline accuracy of 63.238%. In a simulated deployment, 67 mislabeled tweets were corrected by the users, which increased the accuracy by 1%. Future work on this study can include increasing the number of added instances to observe a more significant difference in metrics, and comparing the difference if only corrected mislabeled tweets were added in each iteration of retraining. Multilabel classification can also be considered.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
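A minimal sketch of the feedback loop described above: a TF-IDF plus linear-SVM tweet classifier over the three categories, retrained after users correct mislabeled predictions. The category names follow the abstract; the example tweets are invented placeholders, and scikit-learn is my choice of toolkit since the abstract only specifies a multiclass SVM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled tweets, one per category named in the abstract.
tweets = ["Typhoon signal no. 3 raised over Samar",
          "Houses destroyed and two reported injured in Tacloban",
          "We need rescue boats in Barangay San Jose, water is rising"]
labels = ["Announcement", "Casualty and Damage", "Call for Help"]

# TF-IDF features feeding a linear SVM (one-vs-rest multiclass).
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(tweets, labels)

# Simulated user feedback: a mislabeled tweet gets a corrected label and is
# appended to the training set before retraining.
corrections = [("Evacuation center at the gym is now open", "Announcement")]
tweets += [t for t, _ in corrections]
labels += [y for _, y in corrections]
model.fit(tweets, labels)

print(model.predict(["Family trapped on rooftop, please send help"]))
```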
SCOPUS_ID:85111820803
A Crisis Within the Crisis: Corona Crisis Communication by Mayors in Germany
The German political system is currently challenged by a twofold crisis: On the one hand, the threat caused by the pandemic and its social consequences have to be managed through practical and communicative measures, while the constitutional state and government action in various policy areas (health, education, economy, etc.) are approaching their limits. On the other hand, in recent years political actors who claim to advocate for »the people« and are referred to as »populists« have been questioning whether sufficient representation of citizens, or of a collective conceptualized in racist terms, is guaranteed in established representative democracy. This twofold crisis, which culminates in the protests against governmental measures, also affects mayors, who – ideally – act communicatively in various fields of the democratic constitutional state. This article provides initial insights into a research project devoted to a systematic analysis of communication by, with and about mayors. First, the article examines how mayors can act linguistically in the context of the double crisis on social media and what kind of communication with and among citizens subsequently develops in the comment sections. Second, it asks how the abstract image of mayors in press coverage is shaped and developed in the course of the pandemic. For the purpose of comparison and with regard to interconnections between the two subject areas, methods of linguistic praxeology, analysis of written interaction and text communication, as well as corpus and image linguistics, are combined.
[ "Visual Data in NLP", "Multimodality" ]
[ 20, 74 ]
http://arxiv.org/abs/1909.09428v1
A Critical Analysis of Biased Parsers in Unsupervised Parsing
A series of recent papers has used a parsing algorithm due to Shen et al. (2018) to recover phrase-structure trees based on proxies for "syntactic depth." These proxy depths are obtained from the representations learned by recurrent language models augmented with mechanisms that encourage the (unsupervised) discovery of hierarchical structure latent in natural language sentences. Using the same parser, we show that proxies derived from a conventional LSTM language model produce trees comparably well to the specialized architectures used in previous work. However, we also provide a detailed analysis of the parsing algorithm, showing (1) that it is incomplete---that is, it can recover only a fraction of possible trees---and (2) that it has a marked bias for right-branching structures which results in inflated performance in right-branching languages like English. Our analysis shows that evaluating with biased parsing algorithms can inflate the apparent structural competence of language models.
[ "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Responsible & Trustworthy NLP" ]
[ 80, 52, 72, 4 ]
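The analysis above concerns the greedy parsing procedure of Shen et al. (2018), which recovers an unlabeled binary tree from per-gap "syntactic depth" scores by recursively splitting at the deepest gap. Below is a small sketch of that procedure as I understand it; the proxy depth values are invented for illustration.

```python
def build_tree(words, depths):
    """Greedy top-down split: words is a list of tokens, depths holds one
    score for each of the len(words)-1 gaps between adjacent tokens."""
    if len(words) == 1:
        return words[0]
    if len(words) == 2:
        return (words[0], words[1])
    split = max(range(len(depths)), key=lambda i: depths[i])   # deepest gap
    left = build_tree(words[:split + 1], depths[:split])
    right = build_tree(words[split + 1:], depths[split + 1:])
    return (left, right)

words = ["the", "cat", "sat", "on", "the", "mat"]
depths = [1.0, 3.2, 2.5, 0.7, 0.4]    # invented proxy depths for the 5 gaps
print(build_tree(words, depths))
# (('the', 'cat'), ('sat', ('on', ('the', 'mat'))))
```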
SCOPUS_ID:85128752604
A Critical Analysis of VQA Models and Datasets
A visual question answering (VQA) system draws on both computer vision (CV) and natural language processing (NLP). These systems produce an answer in a given natural language to a natural language question asked about a given image, and therefore need to understand both the image and the semantics of the question. In this article, the limitations of some state-of-the-art VQA models, the datasets used by VQA models, the evaluation metrics for these datasets, and the limitations of the major datasets are discussed. Detailed failure cases of these models are also presented, and we outline some future directions for achieving higher accuracy in answer generation.
[ "Visual Data in NLP", "Natural Language Interfaces", "Question Answering", "Multimodality" ]
[ 20, 11, 27, 74 ]
SCOPUS_ID:85106069424
A Critical Analysis of the American Nurses Association Position Statement on Workplace Violence: Ethical Implications
In 2015, the American Nurses Association issued a position statement on workplace violence. An authoritative, disciplinary position is critically important to inform policies and recommendations addressing this significant issue in nursing. Position statements and policies should reflect disciplinary values. A discourse analysis of this position statement was performed through the lens of nursing ethics. The position statement endorses a zero-tolerance response, which is moralist, punitive, and questionably effective. It problematically presents patient and coworker violence as equivalent. Promotion of this position has the potential to erode public trust and lead us down a path of criminalizing illness behaviors.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 71, 72, 17, 4 ]
SCOPUS_ID:85101279719
A Critical Classroom Study of Language Oppression: Manuel and Malena’s Testimonios, “Sentía como que yo no valía nada.. se reían de mí”
This critical classroom study of language oppression draws from the notion of existing inequalities based on power relations in education research, as addressed in a critical ethnography. This critical classroom study explores the cases of two recent immigrant students, “Manuel” and “Malena,” on the U.S.–Mexican border near El Paso, Texas, who were attending a fifth-grade dual language class at “Border PK-5 Elementary School” (pseudonyms). This school followed a 50/50 dual immersion model from K-fourth grade. By the fifth grade in this school, 70% of the academic time was taught in English and 30% in Spanish. Documented data from observations in the classroom and students’ multimodal testimonios reveal acts of linguistic bullying against the two recent immigrants based on their underdeveloped second language, English, when self-regulated learning was at work in a cooperative learning environment.
[ "Multimodality" ]
[ 74 ]
SCOPUS_ID:85130937527
A Critical Discourse Analysis (CDA) of the strategic plans of Istanbul under different political administrations
Strategic plans are sophisticated public administration tools of local governments, which are not limited to the simple expression of priorities of a city or plans and projects to be conducted in the future. They are more than the sum of some words and expressions, and they are shaped primarily by the political approach or ideology of the administration releasing them. In this article, a comparative Critical Discourse Analysis (CDA) of the last two strategic plans of Istanbul Metropolitan Municipality (IMM) (2015–2019/2020-2024), which were prepared by mayors with different party affiliations, was conducted. It is observed that a participatory democracy discourse has been adopted in the new strategic plan with the predominant use of words such as participation, participatory, transparency, cooperation, inclusive, fair, equal, accountability etc. The rigid shift from government to governance discourse in the strategic plans of IMM is also visible in the non-linguistic practice of the municipality.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85125423125
A Critical Discourse Analysis Approach to Mixed-Gender Friendship in the Saudi Context: The Case of the Twitter Platform
Motivated by the growing popularity of Twitter as a public sphere where both genders interact and discuss and debate issues both formally and informally, this study explores the perspectives of Twitter users regarding mixed-gender friendships and gender segregation in the Saudi context. This study constitutes a wide-ranging investigation that explores how Twitter’s platform has been used in the debate regarding whether mixed-gender friendships should be allowed in the Saudi context. Adopting the tools of critical discourse analysis, the study analyses the detailed discourses and discursive strategies pertaining to mixed-gender friendship in a hashtag thread of 8050 tweets collected in September 2020. Overall, the hashtag describes the public reaction and debate with respect to mixed-gender friendship in Saudi Arabia, simultaneously highlighting the contribution of the evoked tweeted arguments in raising awareness of mixed-gender friendship restrictions imposed under the configuration of religious, social, and political discourses. The data analyzed demonstrates that the differences in arguments concerning mixed-gender friendships within the forms of dissent and protest are in a dialogically supportive relation with dominant political, religious, and social conservative discourses.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85148297942
A Critical Discourse Analysis of Donald Trump’s Concession Speech after the Capitol Riots
In today’s modern context, political tensions are especially high, and globalizing society is changing at a pace like never before. The use of critical discourse analysis could not be more appropriate for understanding how public addresses, such as Trump’s concession speech after the riots, influence public perceptions and either detract from or perpetuate problematic social inequities. In this way, critical discourse analysis conceives language as both influenced by and influencing society in a two-way, dynamic relationship. The following article uses critical discourse analysis to highlight the impact of Trump’s concession speech on the forming and continued evolution of public attitudes (mainly unrest) and social inequities. This article provides an overview of the aim of this analysis, followed by an in-depth theoretical attempt to understand the relevance and application of critical discourse analysis. The article then turns to a critical analysis and interpretation of Trump’s concession speech, according to its language characteristics.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Speech & Audio in NLP", "Multimodality" ]
[ 71, 72, 70, 74 ]
SCOPUS_ID:85131062293
A Critical Discourse Analysis of Early Childhood Education and Care Policy in Korean Newspaper Editorials
This study investigated how newspaper editorials contribute implicitly to what people know and think about by recontextualizing the discourses of Early Childhood Education and Care (ECEC) policy. Drawing from Fairclough's critical discourse analysis, we examined 311 editorials on ECEC policies of two mainstream Korean newspapers, Chosun Ilbo and Joongang Ilbo, published from 1991 to 2016. Findings are that the text of discourses changed to reflect shifts in ECEC policy. The textual elements emphasized or de-emphasized information or opinion about events or situations. By controlling the text contents, editorials participated in the (re)formation of discourses, and these discourses were recontextualized to express ideological positions that they support through the interface with socio-cultural contexts. The recontextualized discourses can influence the socio-cultural context by building public awareness and bringing political reaction. This study suggests that language, including in newspaper editorials, should be analyzed as a social practice through the lens of discourse.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85137029562
A Critical Discourse Analysis of Former President Nelson Mandela's Two State of the Nation Addresses (1994 and 1999)
This study examined how the former South African president, Nelson Mandela, used language to present his political ideologies and the persuasion techniques he used to convince his audience of the state's achievements and challenges when delivering two State of the Nation addresses (1994 and 1999). The study was qualitative in nature and followed a case study design. Two speeches presented by former President Mandela provided the data for the study. Content analysis was used to analyse the data. The results showed that former President Mandela used the restoration of human dignity for all South Africans, freedom of the individual, taking care of the poor, caring for vulnerable groups, overcoming fear, unity, and a better life for all as his ideologies.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:84959330808
A Critical Discourse Analysis of Governance Issues Affecting Public Private Partnership Contracting for Information Systems Implementations: A South African Case Study
Public Private Partnership (PPP) contracts have drawn considerable media interest due to a number of problems such as cost overruns, mismanagement and failure. The purpose of this paper is to critically analyse media discourse relating to the failure of a PPP contract between the South African Department of Labour (DOL) and Siemens Information Services (SIS). The contract pertained to the provision and implementation of Information and Communication Technology (ICT) services for the DOL. The theoretical foundation for this research is Habermas' theory of communicative action which focuses on normative standards for communication and implications of public speech. Our research builds on a growing literature on critical discourse analysis (CDA) that systematically applies Habermas validity claims to empirical research on public communication focused on revealing distortions concerning claims of truth, sincerity, legitimacy and comprehensibility. Our study contributes to understanding issues of public accountability of PPP contracts and extends the reach of critical research into PPP contracting for information systems (IS) services and highlights key challenges of the lack of public sector management competences in securing the public interest in PPP engagements.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Ethical NLP", "Responsible & Trustworthy NLP" ]
[ 71, 72, 17, 4 ]
SCOPUS_ID:85079620348
A Critical Discourse Analysis of Hillary Clinton's (2016) waiver speech
Discourse is a fundamental factor in all worldwide communication and is necessary for speakers to understand the language and its use. In a political world, discourse is an important tool where one's words are the primary means of communicating visions and ideologies and ultimately making people act upon them. In the light of the upcoming American presidential election, it is interesting to place the discourses of Hillary Clinton in the critical discourse analysis framework because political speeches are highly constructed pieces of discourse. Thus, implicit patterns embedded in discourses are expected to be identified as they are written and performed for specific purposes. The waiver has been given little attention from other researchers as a notion of Critical Discourse Analysis. However, this paper is an effort to shed light on this linguistic obscurity. This paper attempts to illuminate it and provide a clear picture of this concept with waiver samples, as shown in Clinton's speeches, with some statistical tools to support the paper's objectives. Hillary Clinton delivered an amazing and encouraging waiver speech to all audiences around the world after her loss in the U.S. presidential election on November 8, 2016. The current paper is a critical discourse analysis of Hillary Clinton's waiver speech. It aims to investigate how this concept is used in critical discourse analysis. It is hypothesised that: (1) the description is used to express vocabulary and grammar in the waiver speech; (2) the interpretation is used to show the propositional and coherent expressions; (3) the explanation is used to explain the ideology and power of the waiver speech. In order to achieve the aims of the research, the following steps are followed: (1) reviewing the literature about waiver speech; (2) reviewing devices such as description, interpretation, and explanation, since they are relevant to the aims of the study; (3) analyzing five texts of Clinton's waiver speech drawing upon the model adopted from Fairclough (1989).
[ "Semantic Text Processing", "Speech & Audio in NLP", "Discourse & Pragmatics", "Explainability & Interpretability in NLP", "Multimodality", "Responsible & Trustworthy NLP" ]
[ 72, 70, 71, 81, 74, 4 ]
SCOPUS_ID:85088131518
A Critical Discourse Analysis of The Sunday Mail’s and The Telegraph’s Representation of Zimbabwe’s 2008 Electoral Violence
Using the Critical Discourse Analysis (CDA) framework of Van Dijk T. A. (2000. Ideology and discourse: A multidisciplinary Introduction. Barcelona: Pompeu Fabra University Press), this article investigates the manifestation of linguistic differences in the discursive strategies of Zimbabwe’s The Sunday Mail’s and the UK’s The Telegraph’s coverage of 2008 electoral violence. A CDA analysis of a sample of 16 news stories based on van Dijk’s framework showed that the different ideological leanings of the two newspapers made them represent the same events on electoral violence in Zimbabwe differently, using two main macro-strategies of positive self-presentation and negative other-presentation. These macro-strategies are realized through other micro-strategies that fall under them, the most important to this research being: lexicalization, consensus, illegality, presupposition and others. The study concludes that the power of language remains a weapon at the disposal of journalists in covering important news, and language use in news should be central in journalism and communication curriculums to help readers interpret journalists’ use of language as key to understanding news.
[ "Discourse & Pragmatics", "Semantic Text Processing", "Representation Learning" ]
[ 71, 72, 12 ]
SCOPUS_ID:85040086157
A Critical Discourse Analysis of Welfare-to-Work Program Managers’ Expectations and Evaluations of Their Clients’ Mothering
Dominant ideologies about poverty in the USA draw on personal responsibility and beliefs that a ‘culture of poverty’ creates and reproduces inequality. As the primary recipients of welfare are single mothers, discourses surrounding welfare are also influenced by dominant ideologies about mothering, namely intensive mothering. Yet, given the centrality of resources to intensive mothering, mothers on welfare are often precluded from enacting this type of parenting. In this paper, I conduct a critical discourse analysis of 69 interviews with Ohio Works First (USA) program managers to examine how welfare program managers talk about and evaluate their clients’ mothering. My findings suggest three themes regarding expectations and evaluations of clients’ mothering: (a) enacting child-centered mothering, (b) breaking out of the ‘culture of poverty’ and (c) (mis)managing childcare.
[ "Discourse & Pragmatics", "Programming Languages in NLP", "Semantic Text Processing", "Multimodality" ]
[ 71, 55, 72, 74 ]
SCOPUS_ID:85150385927
A Critical Discourse Analysis of the National Islam and Foreign Islam in the Australian Press
Recent studies conducted in the UK, US, and Europe have highlighted the major differences regarding coverage of internal (i.e., National) Islam and external (i.e., Foreign) Islam, with foreign Islam covered and viewed as the greater threat. This paper explores the prominent themes of National Islam and Foreign Islam in the editorials of Australian newspapers in the period from January 1, 2016 to March 31, 2017. Employing Teun A. van Dijk’s (b. 1943) ideological square and lexicalisation approaches within the critical discourse analysis paradigm, this study examined editorials from two leading newspapers: “The Australian” and “The Age.” The findings show that both newspapers focused and highlighted conflict, violence, and collectivism regarding Islam and Muslims while covering Foreign Islam, with “The Australian” highlighting the underrepresentation of women as well. On the other hand, when discussing National Islam, “The Age” focused on victimisation and prejudice towards Muslims in Australia and emphasised the need for understanding, harmony, and cohesion. On the contrary, “The Australian” associated National Islam with the same themes associated with Foreign Islam i.e., violence, collectivism, conflict, and women underrepresentation.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:79952574541
A Critical Discourse Analysis of three US municipal wireless network initiatives for enhancing social inclusion
The US has a long history of telecommunications policy aimed at providing equitable access to information and communication services. In this paper we examine the most recent of these efforts, municipal wireless broadband Internet networks. Using three cases (Philadelphia, PA; San Francisco, CA; and Chicago, IL) we examine how social inclusion is expressed in the digital inclusion policy articulated in each municipality's broadband network public rhetoric. Using Critical Discourse Analysis, our findings confirm that the growing use of digital inclusion rhetoric around broadband deployments has brought the social inclusion issue to the forefront, and effectively links discourse and technology with discursive practices and types.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85117185928
A Critical Discourse Analysis to Prevent the Second Wave of COVID-19 Pandemic in Indonesia
The government's choice of a complicated language style in communicating issues or cases of Covid-19 risks creating misunderstandings among the public. The choice of very elitist terms will only reach certain circles. The government's communication strategy during this pandemic certainly creates new problems. One of the impacts arising from the government's elitist communication approach is the emergence of an information gap between the upper middle class and the lower middle class, which has a systemic impact, for example the panic-buying phenomenon ahead of the implementation of Large-Scale Social Restrictions (PSBB). In linguistics, information (discourse) can be framed with topicalization techniques. This technique is a strategy for promoting the information that will be highlighted. The part of the information that carries a higher burden of negative meaning tends not to be highlighted. The issue of the increase in positive cases of Covid-19 therefore does not seem too prominent.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85059903701
A Critical Discourse Approach To Benjamin Martin'S Preface To An Introduction To The English Language And Learning (1754)
The purpose of the present paper is to contribute to the depiction of Martin's role as a grammarian by analysing the preface to his grammar An Introduction to the English Language and Learning (1754). By using a Critical Discourse Analysis approach and a method based on systemic functional grammar, this study intends to describe the discourse structures used in the preface to fulfil its advertising function and persuade the addressee as a potential buyer or user of the grammar. Martin's preface is characterised by a peculiarly exaggerated and aggressive tone and by a strong emphasis on the religious implications of education, all of which confer some distinction to Martin within the discourse community of eighteenth-century grammarians.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85131765829
A Critical Discourse Study on Meghan and Harry’s CBS Primetime Interview
This study focuses on Meghan and Harry’s narratives in the CBS Primetime interview with Oprah Winfrey where they highlighted the issues they faced before moving to America. During the interview, the couple raised several bombshells ranging from the lack of freedom, to Archie’s royal title and security, racism, and the lack of support and guidance from the Royal Family, which negatively portrayed the Royal Family and British tabloids. Using van Dijk’s ideological square model and its discursive strategies as a framework, this study examines how the Duke and Duchess of Sussex linguistically construct the self-other representations that are evident in their interview via critical discourse analysis and a narrative inquiry approach. Findings show that the couple most commonly employed discursive strategies such as victimisation, vagueness, disclaimers, comparisons, evidentiality, hyperbole, history as a lesson, generalisation, pseudo-ignorance, implications, distancing, openness, and polarisation of us versus them. In doing so, they represented themselves as positive, while portraying the British tabloids and the Royal Family as the negative-other. Consequently, the use of language in this interview narrative may legitimise the Duke and Duchess of Sussex while suppressing the Royal Family and British tabloids. This paper is timely as it is only through in-depth analysis of the linguistic features that we are able to unveil ideological presupposition and biases underlying the interview. It also serves to educate the public that there is always more than one side to a story. Therefore, we should avoid having any biases or ideological presupposition towards anyone in any event before the truth is revealed from both sides.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85099845657
A Critical Evaluation of Dyslexia Information on the Internet
The internet is a common source of information for parents, educators, and the general public. However, researchers who analyze the quality of internet sources have found they often contain inaccurate and misleading information. Here, we present an analysis of dyslexia on the internet. Employing disability studies in education (DSE), disability critical race studies (DisCrit), and Bakhtin’s construct of ideological becoming, we examined the credibility of sources, the quality of information, and the discourse in which the information is presented. We found the majority of webpages do not meet basic source credibility criteria, much of the content contradicts or is unsupported by research, and most pages convey information in an authoritative discourse, making it seem irreproachable. Building on the findings, we offer criteria for evaluating dyslexia information and suggestions for research and practice. We focus on the need for less divisive, more collaborative dialogue, along with research among stakeholders with multiple perspectives.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85121389827
A Critical Framework for Examining Sustainability Claims of the Sharing Economy: Exploring the Tensions Within Platform Brand Discourses
The sharing economy represents a market-driven response to the perceived inefficient resource use arising from materialism, and as such, offers the possibility of a more environmentally sustainable form of consumption. However, the sustainability benefits attributed to the sharing economy remain contentious and fraught with paradox. Drawing on a critical discourse analysis of three sharing economy brands (Lime, Rent the Runway and BlaBlaCar) we identify that sustainability discourses compete with claims arising from the espoused benefits of immateriality and platform brands’ desire for rapid growth. We identify and explore three platform brand discourses (disrupting unsustainable leaders, guilt-free choice, and non-commercial appeals) and their associated practices. In doing so we identify that tensions between these discourses and practices give rise to three sustainability-related contradictions: displacement of sustainable alternatives, hidden materiality, and creeping usage. Our findings contribute to our understanding of the sharing economy and its role in sustainability.
[ "Discourse & Pragmatics", "Responsible & Trustworthy NLP", "Semantic Text Processing", "Green & Sustainable NLP" ]
[ 71, 4, 72, 68 ]
SCOPUS_ID:85148957250
A Critical Lens on Health: Key Principles of Critical Discourse Analysis and Its Benefits to Anti-Racism in Population Public Health Research
Critical discourse analysis (CDA) is an interdisciplinary research methodology used to analyze discourse as a form of “social practice”, exploring how meaning is socially constructed. In addition, the methodology draws from the field of critical studies, in which research places deliberate focus on the social and political forces that produce social phenomena as a means to challenge and change societal practices. The purpose of this article is to demonstrate the benefits of CDA to population public health (PPH) research. We will do this by providing a brief overview of CDA and its history and purpose in research and then identifying and discussing three principles that we argue are crucial to successful CDA research: (1) CDA research should contribute to social justice; (2) CDA is strongly based in theory; and (3) CDA draws from constructivist epistemology. A key benefit that CDA brings to PPH research is its critical lens, which aligns with the fundamental goals of PPH, including addressing the social determinants of health and reducing health inequities. Our analysis demonstrates the need for researchers in population public health to strongly consider critical discourse analysis as an approach to understanding the social determinants of health and eliminating health inequities in order to achieve health and wellness for all.
[ "Discourse & Pragmatics", "Semantic Text Processing" ]
[ 71, 72 ]
SCOPUS_ID:85078342071
A Critical Look at Benchmarking Datasets: Problem of finding relationship between sentences
Competitions on many datasets attract the attention of researchers, and thanks to these competitions, better systems are developed. Recently, however, it has been found that algorithms exploit annotation artifacts in some datasets. These findings have called results-oriented assessment into question. It was previously shown that, in a popular benchmark dataset, using the second sentence alone is sufficient to predict the relationship between two sentences. In this study, a Turkish version of the dataset was created and similar findings were obtained for both languages. In addition to the attack methods in the literature, new word-based and sentence-based attacks were applied, and when the sentences were swapped, the predictions were shown to contain logical inconsistencies. For example, if the relationship between two sentences is contradiction, the relationship should not change when the sentences are swapped. However, this was not the case for a significant portion of the samples. As a result, by examining the internal dynamics of the systems, we show that the results-oriented achievements on this dataset are not genuine. Such critical examination should not be confined to this dataset but applied to all popular datasets.
[ "Responsible & Trustworthy NLP", "Reasoning", "Textual Inference", "Robustness in NLP" ]
[ 4, 8, 22, 58 ]
SCOPUS_ID:84890240299
A Critical Methodological Review of Discourse and Conversation Analysis Studies of Family Therapy
Discourse (DA) and conversation (CA) analysis, two qualitative research methods, have been recently suggested as potentially promising for the study of family therapy due to common epistemological adherences and their potential for an in situ study of therapeutic dialog. However, to date, there is no systematic methodological review of the few existing DA and CA studies of family therapy. This study aims at addressing this lack by critically reviewing published DA and CA studies of family therapy on methodological grounds. Twenty-eight articles in total are reviewed in relation to certain methodological axes identified in the relevant literature. These include choice of method, framing of research question(s), data/sampling, type of analysis, epistemological perspective, content/type of knowledge claims, and attendance to criteria for good quality practice. It is argued that the reviewed studies show "glimpses" of the methods' potential for family therapy research despite the identification of certain "shortcomings" regarding their methodological rigor. These include unclearly framed research questions and the predominance of case study designs. They also include inconsistencies between choice of method, stated or unstated epistemological orientations and knowledge claims, and limited attendance to criteria for good quality practice. In conclusion, it is argued that DA and CA can add to the existing quantitative and qualitative methods for family therapy research. They can both offer unique ways for a detailed study of the actual therapeutic dialog, provided that future attempts strive for a methodologically rigorous practice and against their uncritical deployment.
[ "Discourse & Pragmatics", "Natural Language Interfaces", "Semantic Text Processing", "Dialogue Systems & Conversational Agents" ]
[ 71, 11, 72, 38 ]
SCOPUS_ID:85107458167
A Critical Micro-semiotic Analysis of Values Depicted in the Indonesian Ministry of National Education-Endorsed Secondary School English Textbook
While the inclusion of moral education (character education) in English language teaching (ELT) globally receives considerable attention, evaluating ELT textbooks as a moral/character agent remains under-examined since such textbooks are assumed to be value-free (Gebregeorgis MY. Afr Educ Rev 13:119–140, 2016a; Gray J. Appl Linguist 31:714–733, 2010). Informed by critical systemic functional linguistics (Fairclough N, Discourse and social change. Blackwell Publishing, Malden, 1992; Halliday MAK. Language as social semiotic. Edward Arnold, London, 1978; Kress G, van Leeuwen T. Reading images: the grammar of visual design (2nd edn). New York, Routledge, 2006), I contend that language textbooks should be viewed as sociocultural artifacts that feature particular moral values or character virtues. To fill this need, this critical micro-semiotic discourse study examines in what ways values are portrayed in one Indonesian Ministry of National Education-approved secondary school English textbook, which deploys various lexico-grammatical and discursive resources. This critical analysis reveals that visual artifacts and verbal texts of different genres in the textbook represent a myriad of values of which both teachers and students need to become aware. The implication of this study is that both teachers and students need to be equipped with skills in critical thinking and reading as well as in critical language awareness analysis. Both teachers and students should, for instance, have the opportunity to engage critically with textbooks as value agents.
[ "Discourse & Pragmatics", "Visual Data in NLP", "Semantic Text Processing", "Multimodality" ]
[ 71, 20, 72, 74 ]
SCOPUS_ID:85103704058
A Critical Philosophy of Mind
In the modern context the investigation of the human cognitive capacities is a task that falls to the philosophy of mind. The first Critique, on the other hand, does not expressly undertake to provide a comprehensive account, or even a ‘theory’, of all our intellectual abilities and achievements. But if we read the work somewhat against the grain, we can certainly find many elements here that furnish significant contributions in this direction, and specifically to the frequently neglected epistemological task of a transcendental psychology. There is certainly no reason to regard the latter as an entirely ‘imaginary’ science, as Strawson claims in a remarkably categorical manner (1966: 32). The very programme of the first Critique already prescribes a particular direction for the philosophy of mind and encourages us to develop one, not indeed directly, but specifically in the context of a critical investigation of the possibility of knowledge in general. Yet Kant’s emphatic methodological division between a transcendental theory and an empirically verifiable theory must also cast doubt upon a recent attempt to interpret the first Critique as a direct contribution to the field of cognitive science in the contemporary sense (Brook 1994). The decisive problem here is not so much the fact that the modern cognitive sciences necessarily lay beyond his horizon, but that disciplines such as neurophysiology, psycholinguistics and information theory are essentially concerned with empirical rather than transcendental questions. It is therefore no accident that the principal thesis underlying Kant’s philosophy of mind, the thesis of transcendental idealism itself, plays an entirely subsidiary role in Brook’s version of the argument. (From amongst the vast literature concerning contemporary philosophy of mind cf. Beckerman 1999 and Kim 1996; for Kant specifically, cf. Ameriks 2000, Klemme 1996, and Sturma 1985).
[ "Linguistics & Cognitive NLP", "Psycholinguistics", "Linguistic Theories" ]
[ 48, 77, 57 ]
SCOPUS_ID:85103538557
A Critical Reassessment of the Saerens-Latinne-Decaestecker Algorithm for Posterior Probability Adjustment
We critically re-examine the Saerens-Latinne-Decaestecker (SLD) algorithm, a well-known method for estimating class prior probabilities ("priors") and adjusting posterior probabilities ("posteriors") in scenarios characterized by distribution shift, i.e., difference in the distribution of the priors between the training and the unlabelled documents. Given a machine learned classifier and a set of unlabelled documents for which the classifier has returned posterior probabilities and estimates of the prior probabilities, SLD updates them both in an iterative, mutually recursive way, with the goal of making both more accurate; this is of key importance in downstream tasks such as single-label multiclass classification and cost-sensitive text classification. Since its publication, SLD has become the standard algorithm for improving the quality of the posteriors in the presence of distribution shift, and SLD is still considered a top contender when we need to estimate the priors (a task that has become known as "quantification"). However, its real effectiveness in improving the quality of the posteriors has been questioned. We here present the results of systematic experiments conducted on a large, publicly available dataset, across multiple amounts of distribution shift and multiple learners. Our experiments show that SLD improves the quality of the posterior probabilities and of the estimates of the prior probabilities, but only when the number of classes in the classification scheme is very small and the classifier is calibrated. As the number of classes grows, or as we use non-calibrated classifiers, SLD converges more slowly (and often does not converge at all), performance degrades rapidly, and the impact of SLD on the quality of the prior estimates and of the posteriors becomes negative rather than positive.
[ "Information Retrieval", "Text Classification", "Information Extraction & Text Mining" ]
[ 24, 36, 3 ]
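For illustration, the EM-style prior/posterior update commonly attributed to SLD can be sketched in Python as follows. This is a minimal sketch of the standard update rule, not the implementation evaluated in the paper above; the function name, iteration cap, and tolerance are assumptions.

```python
import numpy as np

def sld_adjust(posteriors, train_priors, max_iter=1000, tol=1e-6):
    """EM-style SLD update: re-estimate class priors and rescale posteriors."""
    posteriors = np.asarray(posteriors, dtype=float)     # (n_docs, n_classes)
    train_priors = np.asarray(train_priors, dtype=float)  # (n_classes,)
    priors = train_priors.copy()
    post = posteriors.copy()
    for _ in range(max_iter):
        scaled = posteriors * (priors / train_priors)      # rescale by prior ratio
        post = scaled / scaled.sum(axis=1, keepdims=True)  # renormalize per document
        new_priors = post.mean(axis=0)                     # updated prior estimate
        if np.abs(new_priors - priors).max() < tol:
            break
        priors = new_priors
    return priors, post
```

The loop alternates between rescaling the original classifier posteriors by the ratio of current to training priors and averaging the adjusted posteriors to obtain new prior estimates, stopping when the priors converge.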
SCOPUS_ID:85136941168
A Critical Review on Sentiment Analysis Techniques
Social media and other online forums are platforms that have given people a voice to express their views and opinions on specific subjects. The data collected from such platforms can be useful for extracting valuable insights for many business organizations regarding their products and services. This review paper lists various machine learning, deep learning, and natural language processing approaches that have been used to mine sentiments. In recent years, many deep learning based approaches have been adopted by the research community to analyze sentiments because of the robust performance they have exhibited. Since the introduction of transformer-based models, a major shift has been witnessed in the field of NLP. Further, the paper highlights the need for deeper exploration of pre-trained language models, which can lead to improvements in the analysis of both static and dynamic data streamed over the internet.
[ "Sentiment Analysis" ]
[ 78 ]
http://arxiv.org/abs/1909.13494v1
A Critique of the Smooth Inverse Frequency Sentence Embeddings
We critically review the smooth inverse frequency sentence embedding method of Arora, Liang, and Ma (2017), and show inconsistencies in its setup, derivation, and evaluation.
[ "Semantic Text Processing", "Representation Learning" ]
[ 72, 12 ]
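For context, the smooth inverse frequency method being critiqued computes a probability-weighted average of word vectors per sentence and then removes the common (first singular) component. The sketch below is a minimal illustration of that baseline as commonly described, not the critique itself; the helper names, weighting constant, and the assumption that each sentence has at least one in-vocabulary token are all illustrative choices.

```python
import numpy as np

def sif_embeddings(sentences, word_vectors, word_probs, a=1e-3):
    """Probability-weighted average word vectors, minus the common component."""
    vecs = []
    for tokens in sentences:
        # assumes each sentence contains at least one in-vocabulary token
        pairs = [(a / (a + word_probs.get(w, 1e-6)), word_vectors[w])
                 for w in tokens if w in word_vectors]
        vecs.append(np.mean([wt * v for wt, v in pairs], axis=0))
    X = np.vstack(vecs)
    u = np.linalg.svd(X, full_matrices=False)[2][0]  # first right singular vector
    return X - np.outer(X @ u, u)                    # remove common component
```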
SCOPUS_ID:84864642436
A Croatian weather domain spoken dialog system prototype
Speech and language technologies have already been in use in IT for some time. Because of their great impact and fast growth, it is necessary to introduce these technologies for the Croatian language. In this paper we propose a solution for developing a domain-oriented spoken dialog system for Croatian. We have chosen the weather domain because it has a limited vocabulary, easily accessible data, and high applicability. The Croatian weather dialog system provides information about the weather in different regions of Croatia. The modules of the spoken dialog system perform automatic word recognition, semantic analysis, dialog management, response generation, and text-to-speech synthesis. This is a first attempt to develop such a system for Croatian, and some new approaches are presented.
[ "Natural Language Interfaces", "Multimodality", "Speech & Audio in NLP", "Dialogue Systems & Conversational Agents" ]
[ 11, 74, 70, 38 ]
https://aclanthology.org//W02-1612/
A Cross System Machine Translation
[ "Machine Translation", "Text Generation", "Multilinguality" ]
[ 51, 47, 0 ]
http://arxiv.org/abs/1812.09652v1
A Cross-Architecture Instruction Embedding Model for Natural Language Processing-Inspired Binary Code Analysis
Given a closed-source program, such as most proprietary software and viruses, binary code analysis is indispensable for many tasks, such as code plagiarism detection and malware analysis. Today, source code is very often compiled for various architectures, making cross-architecture binary code analysis increasingly important. A binary, after being disassembled, is expressed in an assembly language. Thus, recent work has started exploring Natural Language Processing (NLP) inspired binary code analysis. In NLP, words are usually represented in high-dimensional vectors (i.e., embeddings) to facilitate further processing, which is one of the most common and critical steps in many NLP tasks. We regard instructions as words in NLP-inspired binary code analysis, and aim to represent instructions as embeddings as well. To facilitate cross-architecture binary code analysis, our goal is that similar instructions, regardless of their architectures, have embeddings close to each other. To this end, we propose a joint learning approach to generating instruction embeddings that capture not only the semantics of instructions within an architecture, but also their semantic relationships across architectures. To the best of our knowledge, this is the first work on building a cross-architecture instruction embedding model. As a showcase, we apply the model to resolving one of the most fundamental problems for binary code similarity comparison---semantics-based basic block comparison, and the solution outperforms the code statistics based approach. It demonstrates that it is promising to apply the model to other cross-architecture binary code analysis tasks.
[ "Programming Languages in NLP", "Multimodality", "Semantic Text Processing", "Representation Learning" ]
[ 55, 74, 72, 12 ]
SCOPUS_ID:85144746032
A Cross-Attention Fusion Based Graph Convolution Auto-Encoder for Open Relation Extraction
Open Relation Extraction (OpenRE) aims at clustering relation instances to extract relation types. By learning relation patterns between named entities, it clusters semantically equivalent patterns into a unified relation cluster. Existing clustering-based OpenRE methods only consider the information of the instance itself, ignoring knowledge of any relations between instances. Therefore, a Cross-Attention Fusion based Graph Convolution Auto-Encoder (CAGCE) method for Open Relation Extraction is proposed. The Auto-Encoder learns the semantic information of the sentence instance itself, and the Graph Convolution Network learns the relational similarity information between sentences. Then, the two heterogeneous representations are crossed and fused layer-by-layer through a cross-attention fusion mechanism. Finally, the fused features are used for clustering to form the relation types. A comparison with baseline models using the FewRel and NYT-FB datasets shows the effectiveness and superiority of the proposed method.
[ "Language Models", "Semantic Text Processing", "Relation Extraction", "Structured Data in NLP", "Multimodality", "Text Clustering", "Information Extraction & Text Mining" ]
[ 52, 72, 75, 50, 74, 29, 3 ]
SCOPUS_ID:85138010092
A Cross-Domain Ontology Semantic Representation Based on NCBI-BlueBERT Embedding
A common but critical task in biological ontology data analysis is to compare the differences between ontologies. Numerous ontology-based semantic-similarity measures have been proposed within specific ontology domains, but cross-domain ontology comparison remains a challenge. An ontology contains the scientific natural language description of the corresponding biological aspect. Therefore, we develop a new method based on the natural language processing (NLP) representation model bidirectional encoder representations from transformers (BERT) for cross-domain semantic representation of biological ontologies. This article uses the BERT model to represent the ontologies at the word level as a set of vectors, facilitating the semantic analysis and comparison of biomedical entities named in an ontology or associated with ontology terms. We evaluated the ability of our method in two experiments: calculating similarities of pair-wise disease ontology and human phenotype ontology terms, and predicting pair-wise protein interactions. The experimental results demonstrated competitive performance. This gives promise to the development of NLP methods in biological data analysis.
[ "Language Models", "Semantic Text Processing", "Semantic Similarity", "Representation Learning", "Knowledge Representation" ]
[ 52, 72, 53, 12, 18 ]
SCOPUS_ID:85147539470
A Cross-Domain Semantic Similarity Measure and Multi-Source Domain Adaptation in Sentiment Analysis
Domain adaptation becomes crucial when there is a lack of labelled data in various domains. The accuracy of traditional machine learning models degrades considerably if they are trained on one domain (the source or training domain) and classify the data of a different domain (the target or test domain). The model needs to be trained on the corresponding domain to improve classification accuracy, but labelling each new domain is a complex and time-consuming task. Hence, domain adaptation techniques are required to address the data-labelling problem. The similarity measure plays a vital role in selecting important pivot features from the target domain that match the source domains. This article introduces an enhanced cross-entropy measure for matching the normalized frequency distributions of different domains and uses it to find an important domain-specific feature set. In addition, the enhanced cross-entropy measure is incorporated into a multi-source domain adaptation model to effectively classify the target domain data. The results show an improvement of 3.66% to 9.09% using our approach.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Semantic Similarity", "Sentiment Analysis", "Text Classification", "Responsible & Trustworthy NLP" ]
[ 52, 80, 72, 24, 3, 53, 78, 36, 4 ]
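For illustration, a plain cross-entropy between the normalized word-frequency distributions of two domains can be computed as sketched below. This shows only the baseline notion of distribution matching, not the paper's enhanced measure; the function name and smoothing constant are assumptions.

```python
import math
from collections import Counter

def cross_entropy(source_tokens, target_tokens, eps=1e-12):
    """Cross entropy between normalized word-frequency distributions of two domains."""
    p, q = Counter(source_tokens), Counter(target_tokens)
    p_total, q_total = sum(p.values()), sum(q.values())
    vocab = set(p) | set(q)
    # lower values indicate that the target distribution explains the source well
    return -sum((p[w] / p_total) * math.log(q[w] / q_total + eps) for w in vocab)
```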
http://arxiv.org/abs/2004.14312v1
A Cross-Genre Ensemble Approach to Robust Reddit Part of Speech Tagging
Part of speech tagging is a fundamental NLP task often regarded as solved for high-resource languages such as English. Current state-of-the-art models have achieved high accuracy, especially on the news domain. However, when these models are applied to other corpora with different genres, and especially user-generated data from the Web, we see substantial drops in performance. In this work, we study how a state-of-the-art tagging model trained on different genres performs on Web content from unfiltered Reddit forum discussions. More specifically, we use data from multiple sources: OntoNotes, a large benchmark corpus with 'well-edited' text, the English Web Treebank with 5 Web genres, and GUM, with 7 further genres other than Reddit. We report the results when training on different splits of the data, tested on Reddit. Our results show that even small amounts of in-domain data can outperform the contribution of data an order of magnitude larger coming from other Web domains. To make progress on out-of-domain tagging, we also evaluate an ensemble approach using multiple single-genre taggers as input features to a meta-classifier. We present state of the art performance on tagging Reddit data, as well as error analysis of the results of these models, and offer a typology of the most common error types among them, broken down by training corpus.
[ "Tagging", "Syntactic Text Processing", "Robustness in NLP", "Responsible & Trustworthy NLP" ]
[ 63, 15, 58, 4 ]
SCOPUS_ID:85078530270
A Cross-Genre Morphological Tagging and Lemmatization of the Russian Poetry: Distinctive Test Sets and Evaluation
The poetic texts pose a challenge to full morphological tagging and lemmatization since the authors seek to extend the vocabulary, employ morphologically and semantically deficient forms, go beyond standard syntactic templates, use non-projective constructions and non-standard word order, among other techniques of the creative language game. In this paper we evaluate a number of probabilistic taggers based on decision trees, CRF and neural network algorithms as well as a state-of-the-art dictionary-based tagger. The taggers were trained on prosaic texts and tested on three poetic samples of different complexity. Firstly, we suggest a method to compile the gold standard datasets for the Russian poetry. Secondly, we focus on the taggers’ performance in the identification of the part of speech tags and lemmas. We reveal what kind of POS classes, paradigm classes and syntactic patterns mostly affect the quality of processing.
[ "Tagging", "Syntactic Text Processing", "Morphology" ]
[ 63, 15, 73 ]
SCOPUS_ID:85093083089
A Cross-Layer Connection Based Approach for Cross-Lingual Open Question Answering
Cross-lingual open-domain question answering (Open-QA) has become an increasingly important topic. Training a monolingual model often requires a large amount of labeled data for supervised training, which makes it difficult to deploy in real applications, especially for low-resource languages. Recently, thanks to the multilingual BERT model, a new task, so-called zero-shot cross-lingual QA, has emerged in this field, i.e., training a model on a resource-rich language and directly testing it on other languages. Current research faces two main problems. The first is that, in the document retrieval stage, directly using a multilingual pretrained model for similarity calculation results in insufficient retrieval accuracy. The second is that, in the answer extraction stage, the answers involve different levels of abstraction relative to the retrieved documents, which needs deeper exploration. This paper puts forward a cross-layer connection based approach for cross-lingual Open-QA. It consists of a Match-Retrieval module and a Connection-Extraction module. The matching network in the retrieval module makes heuristic adjustments and expansions to the learned features to improve retrieval quality. In the answer extraction module, the reuse of deep semantic features is realized at the network structure level through cross-layer connections. Experimental results on a public cross-lingual Open-QA dataset show the superiority of our proposed approach over state-of-the-art methods.
[ "Multilinguality", "Question Answering", "Natural Language Interfaces", "Cross-Lingual Transfer", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 0, 27, 11, 19, 24, 3 ]
SCOPUS_ID:85042176760
A Cross-Lingual Mobile Medical Communication System Prototype for Foreigners and Subjects with Speech, Hearing, and Mental Disabilities Based on Pictograms
People with speech, hearing, or mental impairment require special communication assistance, especially for medical purposes. Automatic solutions for speech recognition and voice synthesis from text are poor fits for communication in the medical domain because they are dependent on error-prone statistical models. Systems dependent on manual text input are insufficient. Recently introduced systems for automatic sign language recognition are dependent on statistical models as well as on image and gesture quality. Such systems remain in early development and are based mostly on minimal hand gestures unsuitable for medical purposes. Furthermore, solutions that rely on the Internet cannot be used after disasters that require humanitarian aid. We propose a high-speed, intuitive, Internet-free, voice-free, and text-free tool suited for emergency medical communication. Our solution is a pictogram-based application that provides easy communication for individuals who have speech or hearing impairment or mental health issues that impair communication, as well as foreigners who do not speak the local language. It provides support and clarification in communication by using intuitive icons and interactive symbols that are easy to use on a mobile device. Such pictogram-based communication can be quite effective and ultimately make people's lives happier, easier, and safer.
[ "Multimodality", "Cross-Lingual Transfer", "Speech & Audio in NLP", "Multilinguality" ]
[ 74, 19, 70, 0 ]
SCOPUS_ID:85126544766
A Cross-Lingual Sentence Similarity Calculation Method With Multifeature Fusion
Cross-language sentence similarity computation is among the focuses of research in natural language processing (NLP). At present, some researchers have introduced fine-grained word and character features to help models understand sentence meanings, but they do not consider coarse-grained prior knowledge at the sentence level. Even if two cross-linguistic sentence pairs have the same meaning, the sentence representations extracted by the baseline approach may have language-specific biases. Considering the above problems, in this paper, we construct a Chinese-Uyghur cross-lingual sentence similarity dataset and propose a method to compute cross-lingual sentence similarity by fusing multiple features. The method is based on the cross-lingual pretraining model XLM-RoBERTa and assists the model in similarity calculation by introducing two coarse-grained prior knowledge features, i.e., sentence sentiment and length features. At the same time, to eliminate possible language-specific biases in the vectors, we whitened the sentence vectors of different languages to ensure that they were all represented under the standard orthogonal basis. Considering that the combination of different vectors has different effects on the final performance of the model, we introduce different vector features for comparison experiments based on the basic feature splicing method. The results show that the absolute value feature of the difference between two vectors can reflect the similarity of two sentences well. The final F1 value of our method reaches 98.97%, which is 19.81% higher than that of the baseline.
[ "Cross-Lingual Transfer", "Multilinguality" ]
[ 19, 0 ]
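For illustration, the whitening step mentioned above (mapping sentence vectors onto a standard orthogonal basis) is commonly realized as mean-centering followed by a covariance-based linear transform. The sketch below shows that generic recipe, not the paper's exact procedure; the epsilon term is an assumption added for numerical stability.

```python
import numpy as np

def whiten(embeddings, eps=1e-9):
    """Zero-center sentence vectors and map them to approximately identity covariance."""
    X = np.asarray(embeddings, dtype=float)        # (n_sentences, dim)
    mu = X.mean(axis=0, keepdims=True)
    cov = np.cov(X - mu, rowvar=False)             # (dim, dim) covariance matrix
    u, s, _ = np.linalg.svd(cov)
    W = u @ np.diag(1.0 / np.sqrt(s + eps))        # whitening transform
    return (X - mu) @ W
```

Applying the same transform to vectors from different languages removes language-specific mean and scale biases before cosine similarity is computed.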
SCOPUS_ID:85017653703
A Cross-Linguistic Perspective on Syntactic Complexity in L2 Development: Syntactic Elaboration and Diversity
Syntactic and linguistic complexity have been studied extensively in applied linguistics as indicators of linguistic performance, development, and proficiency. Recent publications have equally highlighted the reductionist approach taken to syntactic complexity measurement, which often focuses on one or two measures representing complexity at the level of clause-linking or the sentence, but eschews complexity measurement at other syntactic levels, such as the phrase or the clause. Previous approaches have also rarely incorporated measures representing the diversity of syntactic structures in learner productions. Finally, complexity development has rarely been considered from a cross-linguistic perspective, so that many questions pertaining to the cross-linguistic validity of complexity measurement remain. This article reports on an empirical study on syntactic complexity development and introduces a range of syntactic diversity measures alongside frequently used measures of syntactic elaboration. The study analyzed 100 English and 100 French second language oral narratives from adolescent native speakers of Dutch, situated at 4 proficiency levels (beginner–advanced), as well as native speaker benchmark data from each language. The results reveal a gradual process of syntactic elaboration and syntactic diversification in both learner groups, while, especially in French, considerable differences between learners and native speakers reside in the distribution of specific clause types.
[ "Syntactic Text Processing" ]
[ 15 ]
SCOPUS_ID:85129511200
A Cross-Linguistic Study of Foot Metaphor Clusters: A Double Dimensional Perspective of Embodied Cognition and Cultural Entailments
Wang Yin’s proposition of Embodied-Cognitive Linguistics is a localized framework built on embodied philosophy and Cognitive Linguistics, its kernel principle being “embodied cognition.” Within the profile of embodied-cognitive humanism in this field, this paper attempts a cross-linguistic study of foot metaphors based upon the conceptual nature of metaphor and the theoretical construction of contemporary metaphorology. A large number of mapping types as well as image categories of foot metaphors have been identified and organised. The similarities and differences between foot metaphors in Chinese and English are also analyzed in depth through a dual-dimensional theoretical paradigm: embodied cognition from Embodied-Cognitive Linguistics in conjunction with cultural entailments within the scope of contemporary metaphorology. Taken together, these theoretical dimensions provide compelling evidence for the metaphorical mechanism of this specific language usage.
[ "Cognitive Modeling", "Text Clustering", "Linguistics & Cognitive NLP", "Reasoning", "Textual Inference", "Information Extraction & Text Mining" ]
[ 2, 29, 48, 8, 22, 3 ]
SCOPUS_ID:85118464215
A Cross-Linguistic Study of the Non-at-issueness of Exhaustive Inferences
Several constructions have been noted to associate with an exhaustive inference, notably the English it-cleft, the French c’est-cleft, the preverbal focus in Hungarian and the German es-cleft. This inference has long been recognized to differ from exhaustiveness associated with exclusives like English only. While previous literature has attempted to capture this difference by debating whether the exhaustiveness of clefts is semantic or a pragmatic phenomenon, recent studies such as (Velleman et al. 2012, Proceedings of Semantics and Linguistics Theory (SALT) 22, pages 441–460) supplement the debate by proposing that the notion of at-issueness is the culprit of those differences. In light of this notion, this paper reconsiders the results from previous experimental data on Hungarian and German (Onea and Beaver 2011, Proceedings of Semantics and Linguistic Theory (SALT) 19, pages 342–359; Xue and Onea 2011, Proceedings of the ESSLLI 2011 Workshop on Projective Meaning, Ljubljana, Slovenia) and presents new data on English and French, showing that the “Yes, but” test used in these four languages to diagnose the source of the exhaustive inference (semantics vs. pragmatics), in fact diagnoses its status (at-issue vs. non-at-issue). We conclude that the exhaustiveness associated with clefts and cleft-like constructions is not at-issue, or in other words, exhaustiveness it is not the main point of the utterance.
[ "Linguistics & Cognitive NLP", "Linguistic Theories" ]
[ 48, 57 ]
https://aclanthology.org//W11-2807/
A Cross-Linguistic Study on the Production of Multimodal Referring Expressions in Dialogue
[ "Natural Language Interfaces", "Multimodality", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 11, 74, 47, 38 ]
SCOPUS_ID:85126072607
A Cross-Linguistic Validation of the Test for Rating Emotions in Speech: Acoustic Analyses of Emotional Sentences in English, German, and Hebrew
Purpose: The Test for Rating Emotions in Speech (T-RES) has been developed in order to assess the processing of emotions in spoken language. In this tool, spoken sentences, which are composed of emotional content (anger, happiness, sadness, and neutral) in both semantics and prosody in different combinations, are rated by listeners. To date, English, German, and Hebrew versions have been developed, as well as online versions, iT-RES, to adapt to COVID-19 social restrictions. Since the perception of spoken emotions may be affected by linguistic (and cultural) variables, it is important to compare the acoustic characteristics of the stimuli within and between languages. The goal of the current report was to provide cross-linguistic acoustic validation of the T-RES. Method: T-RES sentences in the aforementioned languages were acoustically analyzed in terms of mean F0, F0 range, and speech rate to obtain profiles of acoustic parameters for different emotions. Results: Significant within-language discriminability of prosodic emotions was found, for both mean F0 and speech rate. Similarly, these measures were associated with comparable patterns of prosodic emotions for each of the tested languages and emotional ratings. Conclusions: The results demonstrate the lack of dependence of prosody and semantics within the T-RES stimuli. These findings illustrate the listeners’ ability to clearly distinguish between the different prosodic emotions in each language, providing a cross-linguistic validation of the T-RES and iT-RES.
[ "Emotion Analysis", "Multimodality", "Speech & Audio in NLP", "Sentiment Analysis" ]
[ 61, 74, 70, 78 ]
SCOPUS_ID:85147266891
A Cross-Modal Alignment for Zero-Shot Image Classification
Different from mainstream classification methods based on large amounts of annotated data, we introduce a cross-modal alignment for zero-shot image classification. The key is utilizing the text-attribute query learned from the seen classes to guide local feature responses in unseen classes. First, an encoder is used to align the semantic matching between visual features and their corresponding text attributes. Second, an attention module is used to obtain response maps from feature maps activated by the text-attribute query. Finally, the cosine distance metric is used to measure the matching degree between the text attribute and its corresponding feature response. Experimental results show that the method achieves better performance than existing embedding-based zero-shot learning methods as well as generative methods on the CUB-200-2011 dataset.
[ "Visual Data in NLP", "Low-Resource NLP", "Language Models", "Semantic Text Processing", "Information Retrieval", "Information Extraction & Text Mining", "Responsible & Trustworthy NLP", "Text Classification", "Multimodality" ]
[ 20, 80, 52, 72, 24, 3, 4, 36, 74 ]
SCOPUS_ID:85127909812
A Cross-Modal Attention and Multi-task Learning Based Approach for Multi-modal Sentiment Analysis
With the rapid development of multimodal research, deep multimodal learning models can effectively improve the accuracy of sentiment classification and provide important decision support for many applications, such as product reviews, emotion analysis, and rumor detection. Addressing the problems of feature representation and feature fusion in multimodal sentiment analysis, a model based on multi-task learning and an attention mechanism is proposed. Firstly, a bi-directional LSTM unit is used to extract the intra-modality representation of each single modality. Secondly, the attention mechanism is used to model inter-modality dynamics. Finally, we introduce multi-task learning by predicting sentiment and emotion simultaneously. Experimental results on the CMU-MOSEI dataset show that the proposed model outperforms baselines in terms of accuracy and F1-score.
[ "Language Models", "Low-Resource NLP", "Semantic Text Processing", "Sentiment Analysis", "Responsible & Trustworthy NLP", "Multimodality" ]
[ 52, 80, 72, 78, 4, 74 ]
SCOPUS_ID:85146821177
A Cross-Platform Personalized Recommender System for Connecting E-Commerce and Social Network
In this paper, we build a recommender system for a new study area: social commerce, which combines rich information about social network users and products on an e-commerce platform. The idea behind this recommender system is that a social network contains abundant information about its users which could be exploited to create profiles of the users. For social commerce, the quality of the profiles of potential consumers determines whether the recommender system is a success or a failure. In our work, not only the user’s textual information but also the tags and the relationships between users have been considered in the process of building the user profiling model. A topic model has been adopted in our system, and a feedback mechanism has also been designed in this paper. Then, we apply a collaborative filtering method and a clustering algorithm in order to obtain a high recommendation accuracy. We conduct an empirical analysis based on real data collected on a social network and an e-commerce platform. We find that the social network has an impact on e-commerce, so social commerce could be realized. Simulations show that our topic model has better performance in topic finding, meaning that our profile-building model is suitable for a social commerce recommender system.
[ "Topic Modeling", "Information Extraction & Text Mining" ]
[ 9, 3 ]
https://aclanthology.org//2020.repl4nlp-1.20/
A Cross-Task Analysis of Text Span Representations
Many natural language processing (NLP) tasks involve reasoning with textual spans, including question answering, entity recognition, and coreference resolution. While extensive research has focused on functional architectures for representing words and sentences, there is less work on representing arbitrary spans of text within sentences. In this paper, we conduct a comprehensive empirical evaluation of six span representation methods using eight pretrained language representation models across six tasks, including two tasks that we introduce. We find that, although some simple span representations are fairly reliable across tasks, in general the optimal span representation varies by task, and can also vary within different facets of individual tasks. We also find that the choice of span representation has a bigger impact with a fixed pretrained encoder than with a fine-tuned encoder.
[ "Language Models", "Semantic Text Processing", "Representation Learning" ]
[ 52, 72, 12 ]
https://aclanthology.org//W13-4904/
A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix Detection, Affix Labeling, POS Tagging, and Dependency Parsing
[ "Tagging", "Text Segmentation", "Syntactic Parsing", "Syntactic Text Processing" ]
[ 63, 21, 28, 15 ]
SCOPUS_ID:85117230065
A Cross-cultural Analysis of Tourists’ Perceptions of Airbnb Attributes
This study investigates the attributes that influence tourists from different cultures by analyzing a big data set of online reviews. The findings highlight the differences between Chinese-speaking tourists and English-speaking tourists regarding the four attributes the tourists focus on when choosing an Airbnb: the host, accommodation, location, and price. The results suggest that Chinese-speaking tourists are generally more positive and objective than English-speaking tourists when writing online reviews. The former have a more positive perception of the host but are more selective than the latter regarding the other three attributes. Text mining and sentiment analysis are used to provide guidance to Airbnb hosts to improve their marketing strategies to tourists from different cultures.
[ "Sentiment Analysis" ]
[ 78 ]
SCOPUS_ID:85144432967
A Cross-document Coreference Dataset for Longitudinal Tracking across Radiology Reports
This paper proposes a new cross-document coreference resolution (CDCR) dataset for identifying co-referring radiological findings and medical devices across a patient's radiology reports. Our annotated corpus contains 5872 mentions (findings and devices) spanning 638 MIMIC-III radiology reports across 60 patients, covering multiple imaging modalities and anatomies. There are a total of 2292 mention chains. We describe the annotation process in detail, highlighting the complexities involved in creating a sizable and realistic dataset for radiology CDCR. We apply two baseline methods-string matching and transformer language models (BERT)-to identify cross-report coreferences. Our results indicate the requirement of further model development targeting better understanding of domain language and context to address this challenging and unexplored task. This dataset can serve as a resource to develop more advanced natural language processing CDCR methods in the future. This is one of the first attempts focusing on CDCR in the clinical domain and holds potential in benefiting physicians and clinical research through long-term tracking of radiology findings.
[ "Coreference Resolution", "Information Extraction & Text Mining" ]
[ 13, 3 ]
SCOPUS_ID:85026954216
A Cross-lingual Annotation Projection-based Self-supervision Approach for Open Information Extraction
Open information extraction (IE) is a weakly supervised IE paradigm that aims to extract relation-independent information from large-scale natural language documents without significant annotation efforts. A key challenge for Open IE is to achieve self-supervision, in which the training examples are automatically obtained. Although the feasibility of Open IE systems has been demonstrated for English, utilizing such techniques to build the systems for other languages is problematic because previous self-supervision approaches require language-specific knowledge. To improve the cross-language portability of Open IE systems, this paper presents a self-supervision approach that exploits parallel corpora to obtain training examples for the target language by projecting the annotations onto the source language. The merit of our method is demonstrated using a Korean Open IE system developed without any language-specific knowledge.
[ "Multilinguality", "Open Information Extraction", "Cross-Lingual Transfer", "Information Extraction & Text Mining" ]
[ 0, 25, 19, 3 ]
SCOPUS_ID:85121998368
A Cross-lingual Messenger with Keyword Searchable Phrases for the Travel Domain
We present Qutr (Query Translator), a smart cross-lingual communication application for the travel domain. Qutr is a real-time messaging app that automatically translates conversations while supporting keyword-to-sentence matching. Qutr relies on querying a database that holds commonly used pre-translated travel-domain phrases and phrase templates in different languages with the use of keywords. The query matching supports paraphrases, incomplete keywords and some input spelling errors. The application addresses common cross-lingual communication issues such as translation accuracy, speed, privacy, and personalization.
[ "Machine Translation", "Text Generation", "Cross-Lingual Transfer", "Information Retrieval", "Multilinguality" ]
[ 51, 47, 19, 24, 0 ]
http://arxiv.org/abs/2010.16357v1
A Cross-lingual Natural Language Processing Framework for Infodemic Management
The COVID-19 pandemic has put immense pressure on health systems which are further strained due to the misinformation surrounding it. Under such a situation, providing the right information at the right time is crucial. There is a growing demand for the management of information spread using Artificial Intelligence. Hence, we have exploited the potential of Natural Language Processing for identifying relevant information that needs to be disseminated amongst the masses. In this work, we present a novel Cross-lingual Natural Language Processing framework to provide relevant information by matching daily news with trusted guidelines from the World Health Organization. The proposed pipeline deploys various techniques of NLP such as summarizers, word embeddings, and similarity metrics to provide users with news articles along with a corresponding healthcare guideline. A total of 36 models were evaluated and a combination of LexRank based summarizer on Word2Vec embedding with Word Mover distance metric outperformed all other models. This novel open-source approach can be used as a template for proactive dissemination of relevant healthcare information in the midst of misinformation spread associated with epidemics.
[ "Multilinguality", "Semantic Text Processing", "Representation Learning", "Ethical NLP", "Reasoning", "Fact & Claim Verification", "Cross-Lingual Transfer", "Responsible & Trustworthy NLP" ]
[ 0, 72, 12, 17, 8, 46, 19, 4 ]
SCOPUS_ID:84943278386
A Cross-lingual part-of-speech tagging for Malay language
Cross-lingual annotation projection methods can benefit from rich-resourced languages to improve the performance of Natural Language Processing (NLP) tasks in less-resourced languages. In this research, Malay is used as the less-resourced language and English as the rich-resourced language. The research aims to reduce the deadlock in Malay computational linguistics research, caused by the shortage of Malay tools and annotated corpora, by exploiting state-of-the-art English tools. This paper proposes a cross-lingual annotation projection based on word alignment between two languages with syntactic differences. A word alignment method known as MEWA (Malay-English Word Aligner), which integrates a Dice coefficient and a bigram string similarity measure, is proposed. MEWA is used to automatically induce annotations using a Malay test collection on terrorism and an identified English tool. In the POS annotation projection experiment, the algorithm achieved an accuracy of 79%.
[ "Tagging", "Cross-Lingual Transfer", "Syntactic Text Processing", "Multilinguality" ]
[ 63, 19, 15, 0 ]
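For illustration, the Dice-coefficient component of such a word aligner scores a candidate source-target word pair as twice its sentence-level co-occurrence count divided by the sum of the two words' individual counts. The sketch below covers only this part; the bigram string-similarity measure that MEWA integrates is not reproduced, and all names are assumptions.

```python
from collections import Counter
from itertools import product

def dice_scores(sentence_pairs):
    """Dice coefficient for word pairs co-occurring in aligned sentence pairs."""
    src_count, tgt_count, pair_count = Counter(), Counter(), Counter()
    for src_tokens, tgt_tokens in sentence_pairs:
        src_set, tgt_set = set(src_tokens), set(tgt_tokens)
        src_count.update(src_set)                 # sentences containing each source word
        tgt_count.update(tgt_set)                 # sentences containing each target word
        pair_count.update(product(src_set, tgt_set))  # co-occurrences of word pairs
    return {(s, t): 2.0 * c / (src_count[s] + tgt_count[t])
            for (s, t), c in pair_count.items()}
```

High-scoring pairs can then be kept as candidate translations before any string-similarity filtering.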
SCOPUS_ID:85139508409
A Cross-linguistic Study into the Contribution of Affective Connotation in the Lexico-semantic Representation of Concrete and Abstract Concepts
Words carry affective connotations, but the role of these connotations in the representation of meaning is not well understood. Like other aspects of meaning, connotation might be culture- or language-specific. This study uses a large-scale relatedness judgment task to determine the role of affective connotations in concrete and abstract words in English, Rioplatense Spanish, and Mandarin Chinese. Across languages, word valence, or how positive or negative a word is, was one of the main organizing factors in both concrete and abstract concepts. Moreover, predicted culture-specific affective connotations were reliably found in the similarity space of abstract concepts. A follow-up analysis was conducted to investigate whether distributional semantic representations derived from language, operationalized via word embeddings, similarly encode these connotations. The language models only partly captured the overall similarity structure and the affective connotations shaping it.
[ "Emotion Analysis", "Semantic Text Processing", "Sentiment Analysis", "Representation Learning" ]
[ 61, 72, 78, 12 ]
http://arxiv.org/abs/cs/0309021v1
A Cross-media Retrieval System for Lecture Videos
We propose a cross-media lecture-on-demand system, in which users can selectively view specific segments of lecture videos by submitting text queries. Users can easily formulate queries by using the textbook associated with a target lecture, even if they cannot come up with effective keywords. Our system extracts the audio track from a target lecture video, generates a transcription by large vocabulary continuous speech recognition, and produces a text index. Experimental results showed that by adapting speech recognition to the topic of the lecture, the recognition accuracy increased and the retrieval accuracy was comparable with that obtained by human transcription.
[ "Visual Data in NLP", "Speech & Audio in NLP", "Text Generation", "Speech Recognition", "Information Retrieval", "Multimodality" ]
[ 20, 70, 47, 10, 24, 74 ]
https://aclanthology.org//W19-5942/
A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents
How should conversational agents respond to verbal abuse from the user? To answer this question, we conduct a large-scale crowd-sourced evaluation of abuse response strategies employed by current state-of-the-art systems. Our results show that some strategies, such as “polite refusal”, score highly across the board, while for other strategies demographic factors, such as age, as well as the severity of the preceding abuse influence the user’s perception of which response is appropriate. In addition, we find that most data-driven models lag behind rule-based or commercial systems in terms of their perceived appropriateness.
[ "Natural Language Interfaces", "Dialogue Systems & Conversational Agents" ]
[ 11, 38 ]
http://arxiv.org/abs/2009.10334v2
A Crowdsourced Open-Source Kazakh Speech Corpus and Initial Speech Recognition Baseline
We present an open-source speech corpus for the Kazakh language. The Kazakh speech corpus (KSC) contains around 332 hours of transcribed audio comprising over 153,000 utterances spoken by participants from different regions and age groups, as well as both genders. It was carefully inspected by native Kazakh speakers to ensure high quality. The KSC is the largest publicly available database developed to advance various Kazakh speech and language processing applications. In this paper, we first describe the data collection and preprocessing procedures followed by a description of the database specifications. We also share our experience and challenges faced during the database construction, which might benefit other researchers planning to build a speech corpus for a low-resource language. To demonstrate the reliability of the database, we performed preliminary speech recognition experiments. The experimental results imply that the quality of audio and transcripts is promising (2.8% character error rate and 8.7% word error rate on the test set). To enable experiment reproducibility and ease the corpus usage, we also released an ESPnet recipe for our speech recognition models.
[ "Text Generation", "Speech & Audio in NLP", "Speech Recognition", "Multimodality" ]
[ 47, 70, 10, 74 ]
SCOPUS_ID:85112410985
A Crowdsourcing Based Framework for the Development and Validation of Machine Readable Parallel Corpus for Sign Languages
Sign languages are used by the deaf and mute community of the world. These are gesture based languages where the subjects use hands and facial expressions to perform different gestures. There are hundreds of different sign languages in the world. Furthermore, like natural languages, there exist different dialects for many sign languages. In order to facilitate the deaf community several different repositories of video gestures are available for many sign languages of the world. These video based repositories do not support the development of an automated language translation systems. This research aims to investigate the idea of engaging the deaf community for the development and validation of a parallel corpus for a sign language and its dialects. As a principal contribution, this research presents a framework for building a parallel corpus for sign languages by harnessing the powers of crowdsourcing with editorial manager, thus it engages a diversified set of stakeholders for building and validating a repository in a quality controlled manner. It further presents processes to develop a word-level parallel corpus for different dialects of a sign language; and a process to develop sentence-level translation corpus comprising of source and translated sentences. The proposed framework has been successfully implemented and involved different stakeholders to build corpus. As a result, a word-level parallel corpus comprising of the gestures of almost 700 words of Pakistan Sign Language (PSL) has been developed. While, a sentence-level translation corpus comprising of more than 8000 sentences for different tenses has also been developed for PSL. This sentence-level corpus can be used in developing and evaluating machine translation models for natural to sign language translation and vice-versa. While the machine-readable word level parallel corpus will help in generating avatar based videos for the translated sentences in different dialects of a sign language.
[ "Visual Data in NLP", "Machine Translation", "Multimodality", "Text Generation", "Multilinguality" ]
[ 20, 51, 74, 47, 0 ]
SCOPUS_ID:85127065565
A Cryptocurrency Price Prediction Model Based on Twitter Sentiment Indicators
As cryptocurrencies become increasingly expensive, price prediction methods have been widely studied. As an application of big data in finance, the sentiment tendency of related topics on social platforms is an important indicator for cryptocurrency price prediction methods and has attracted broad attention. However, the accuracy of existing macro-sentiment indicator calculation methods should be further improved. Addressing the problem that price prediction accuracy is not significantly improved by applying existing macro-sentiment indicators, this paper proposes three new public sentiment indicators based on small granularity. A correlational analysis between the indicators and price data is also conducted. By analyzing the degree of sentiment tendency of each comment, the accuracy of the three public sentiment indicators is improved. Specifically, this paper quantifies public sentiment indicators by taking into account the degree of emotional bias of each tweet, which yields sentiment indicators with small granularity. Compared with previous methods, cryptocurrency price prediction accuracy is improved under three deep learning architectures (LSTM, CNN, and GRU) with the use of small-granularity sentiment indicators.
[ "Sentiment Analysis" ]
[ 78 ]
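To make the indicator construction above concrete, here is a minimal Python sketch of a per-tweet, bias-weighted sentiment indicator aggregated by hour. The tiny lexicon, the weighting by polarity magnitude, and the hourly window are illustrative assumptions and do not reproduce the paper's actual indicators.

```python
# Minimal sketch (not the authors' code): each tweet gets a signed polarity,
# is weighted by its degree of emotional bias, and hourly weighted means are
# used as a fine-grained sentiment indicator. Lexicon and weights are toy.
from collections import defaultdict
from datetime import datetime

POS = {"bull", "moon", "up", "gain", "buy"}      # toy lexicon (assumption)
NEG = {"bear", "crash", "down", "loss", "sell"}  # toy lexicon (assumption)

def tweet_polarity(text: str) -> float:
    """Signed polarity in [-1, 1] from lexicon hits; a stand-in scorer."""
    tokens = text.lower().split()
    pos = sum(t in POS for t in tokens)
    neg = sum(t in NEG for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def hourly_indicator(tweets):
    """tweets: iterable of (timestamp, text); returns {hour: weighted mean polarity}."""
    sums, weights = defaultdict(float), defaultdict(float)
    for ts, text in tweets:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        p = tweet_polarity(text)
        w = abs(p)  # weight by degree of emotional bias (assumed interpretation)
        sums[hour] += w * p
        weights[hour] += w
    return {h: sums[h] / weights[h] for h in sums if weights[h] > 0}

if __name__ == "__main__":
    sample = [
        (datetime(2021, 5, 1, 9, 15), "BTC to the moon, time to buy"),
        (datetime(2021, 5, 1, 9, 40), "market crash incoming, sell now"),
        (datetime(2021, 5, 1, 10, 5), "steady gain today"),
    ]
    print(hourly_indicator(sample))
```

The resulting hourly series could then feed a sequence model alongside price data; that downstream step is omitted here.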
SCOPUS_ID:85086581551
A Cue Adaptive Decoder for Controllable Neural Response Generation
In open-domain dialogue systems, dialogue cues such as emotion, persona, and emoji can be incorporated into conversation models to strengthen the semantic relevance of generated responses. Existing neural response generation models either incorporate the dialogue cue into the decoder's initial state or embed the cue indiscriminately into the state of every generated word, which may cause the gradients of the embedded cue to vanish or disturb the semantic relevance of generated words during backpropagation. In this paper, we propose a Cue Adaptive Decoder (CueAD) that aims to dynamically determine the involvement of a cue at each generation step of the decoding. For this purpose, we extend the Gated Recurrent Unit (GRU) network with an adaptive cue representation to facilitate cue incorporation, in which an adaptive gating unit decides when to incorporate cue information so that the cue can provide useful clues for enhancing the semantic relevance of the generated words. Experimental results show that CueAD outperforms state-of-the-art baselines by large margins.
[ "Language Models", "Semantic Text Processing", "Dialogue Response Generation", "Natural Language Interfaces", "Text Generation", "Dialogue Systems & Conversational Agents" ]
[ 52, 72, 14, 11, 47, 38 ]
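A minimal PyTorch sketch of the core idea, an adaptive gate that decides how much cue information enters the decoder state at each step, is given below. The layer sizes, gating formula, and names are assumptions for illustration and are not taken from the published CueAD architecture.

```python
# Sketch of a single decoder step with gated cue injection on top of a GRU
# cell. The specific gating formula here is an assumption, not CueAD itself.
import torch
import torch.nn as nn

class CueGatedGRUDecoderStep(nn.Module):
    def __init__(self, input_size, hidden_size, cue_size):
        super().__init__()
        self.gru_cell = nn.GRUCell(input_size, hidden_size)
        self.cue_proj = nn.Linear(cue_size, hidden_size)
        self.gate = nn.Linear(hidden_size + cue_size, hidden_size)

    def forward(self, x_t, h_prev, cue):
        h_t = self.gru_cell(x_t, h_prev)                      # ordinary GRU update
        g_t = torch.sigmoid(self.gate(torch.cat([h_t, cue], dim=-1)))
        return h_t + g_t * torch.tanh(self.cue_proj(cue))     # gated cue injection

if __name__ == "__main__":
    step = CueGatedGRUDecoderStep(input_size=32, hidden_size=64, cue_size=16)
    x = torch.randn(4, 32)     # batch of word embeddings at step t
    h = torch.zeros(4, 64)     # previous decoder state
    cue = torch.randn(4, 16)   # emotion/persona/emoji cue embedding
    print(step(x, h, cue).shape)  # torch.Size([4, 64])
```

When the gate saturates near zero the step reduces to a plain GRU update, which is the behaviour a cue-adaptive decoder should exhibit for cue-irrelevant words.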
SCOPUS_ID:85066906946
A Culture-Specific “Linguistic Picture of the World” in the Organization of Scientific Discourse in a Foreign Language
This paper reviews the basic concepts descending from what W. von Humboldt introduced as the general Weltanschauung and what later developed into the related notions of the national scientific worldview and the conceptual worldview. A “linguistic picture of the world” shared by a specific language community, as a way of conceptualizing the surrounding world of human activities, reveals itself in pragmatics, i.e. in the use of language by a language user (in our case, the author of a publication and the translator). The material for analysis is a collection of abstracts of research papers in English from different journals. The major purpose of the analysis is to show how the overlap between culture and pragmatics influences the pragmatic potential of the texts of research papers and abstracts, and the pragmatic value of their translations. This includes the relationships between tendencies toward understatement or overstatement, the expression of authors' attitudes, etc. The results of the analysis are of practical importance for Russian-speaking authors writing research papers.
[ "Machine Translation", "Semantic Text Processing", "Discourse & Pragmatics", "Text Generation", "Multilinguality" ]
[ 51, 72, 71, 47, 0 ]
SCOPUS_ID:85102611762
A Curriculum Design Method for New Product Development
An entrepreneurship movement is sweeping the world; new product development (NPD) is the core activity of an organization's competitive strategy, which includes concept design and the successful development of new products that can be launched in the market. However, surprisingly little research has been conducted on what components and knowledge should be embedded in the process of new product development. To orchestrate a timely curriculum, this study analyzes published papers on new product development to shed light on developments and trends within the field. Latent Semantic Analysis (LSA) was applied to extract research topics from scientific articles published between 2015 and 2019, and the results were interpreted using Bloom's Revised Taxonomy. The study identified ten educational objectives for an NPD curriculum from the initial results; the domain knowledge and key components were then derived from the corresponding educational objectives. Consistent with previous findings and others' assumptions, the study found that the scope of the NPD curriculum is still expanding continuously, and the results herein can guide educators in designing a more appropriate curriculum and enhancing students' learning performance.
[ "Information Extraction & Text Mining" ]
[ 3 ]
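As a rough illustration of the method described above, the following sketch runs LSA (truncated SVD over TF-IDF features) to pull topic terms from a handful of toy abstracts; the documents and the number of topics are invented for the example.

```python
# Minimal LSA sketch with scikit-learn: TF-IDF followed by truncated SVD,
# printing the top terms of each latent topic. Toy data, not the study's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

abstracts = [
    "concept design and prototype testing for new product development",
    "market launch strategy and customer feedback analysis",
    "idea generation and concept screening in product innovation",
    "supply chain coordination during product launch",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

lsa = TruncatedSVD(n_components=2, random_state=0)
lsa.fit(X)

terms = tfidf.get_feature_names_out()
for i, component in enumerate(lsa.components_):
    top_terms = [terms[j] for j in component.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top_terms)}")
```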
http://arxiv.org/abs/2210.15147v1
A Curriculum Learning Approach for Multi-domain Text Classification Using Keyword Weight Ranking
Text classification is a classic NLP task, but it has two prominent shortcomings. On the one hand, text classification is deeply domain-dependent: a classifier trained on the corpus of one domain may not perform well in another domain. On the other hand, text classification models require a lot of annotated data for training, and for some domains enough annotated data may not exist. Therefore, it is valuable to investigate how to efficiently utilize text data from different domains to improve model performance across domains. Some multi-domain text classification models are trained by adversarial training to extract features shared among all domains as well as the specific features of each domain. We note that the distinctness of the domain-specific features varies across domains, so in this paper we propose a curriculum learning strategy based on keyword weight ranking to improve the performance of multi-domain text classification models. Experimental results on the Amazon review and FDU-MTL datasets show that our curriculum learning strategy effectively improves the performance of multi-domain text classification models based on adversarial learning and outperforms state-of-the-art methods.
[ "Language Models", "Semantic Text Processing", "Text Classification", "Robustness in NLP", "Responsible & Trustworthy NLP", "Information Retrieval", "Information Extraction & Text Mining" ]
[ 52, 72, 36, 58, 4, 24, 3 ]
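A minimal sketch of one plausible reading of "keyword weight ranking" follows: each domain is scored by the weight of its strongest TF-IDF keywords, and the domains are ordered by that score to schedule curriculum stages. The scoring rule and toy reviews are assumptions, not the authors' exact procedure.

```python
# Sketch: rank domains by how heavily weighted their top TF-IDF keywords are,
# then use that ranking as a curriculum order. Illustrative assumptions only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

domains = {
    "books":   ["a gripping plot and memorable characters", "the prose is dull"],
    "kitchen": ["the blender broke after a week", "sharp knives, sturdy handle"],
    "music":   ["catchy melody and great vocals", "the album feels repetitive"],
}

# Treat each domain as one document so TF-IDF highlights domain-specific words.
corpus = [" ".join(texts) for texts in domains.values()]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(corpus).toarray()

def distinctness(row, k=5):
    """Mean weight of the k strongest keywords; higher = more domain-specific."""
    return float(np.sort(row)[::-1][:k].mean())

scores = {name: distinctness(X[i]) for i, name in enumerate(domains)}
curriculum = sorted(scores, key=scores.get, reverse=True)
print("curriculum order:", curriculum)
```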
SCOPUS_ID:85092693368
A Curriculum Learning Based Approach to Captioning Ultrasound Images
We present a novel curriculum learning approach to train a natural language processing (NLP) based fetal ultrasound image captioning model. Datasets containing medical images and corresponding textual descriptions are relatively rare and hence smaller than datasets of natural images and their captions. This fact inspired us to develop an approach for training a captioning model suitable for small-sized medical data. Our datasets are prepared using real-world ultrasound video along with synchronised and transcribed sonographer speech recordings. We propose a “dual-curriculum” method for the ultrasound image captioning problem, which relies on building and learning from curricula of image and text information. We compare several distance measures for creating the dual curriculum and observe the best performance using the Wasserstein distance for image information and the tf-idf metric for text information. The evaluation results show an improvement in all performance metrics when using curriculum learning over stochastic mini-batch training, both for the individual task of image classification and when using a dual curriculum for image captioning.
[ "Visual Data in NLP", "Language Models", "Semantic Text Processing", "Captioning", "Text Generation", "Multimodality" ]
[ 20, 52, 72, 39, 47, 74 ]
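The sketch below illustrates one way such a dual curriculum could be scored, using the Wasserstein distance between pixel-intensity distributions on the image side and TF-IDF cosine distance on the text side; the toy data, reference choices, and equal weighting are assumptions rather than the paper's exact setup.

```python
# Sketch: combine an image-side and a text-side distance into one difficulty
# score per sample, then sort samples from easy to hard. All data is toy.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

rng = np.random.default_rng(0)
images = [rng.normal(0.5, s, 1024).clip(0, 1) for s in (0.05, 0.15, 0.30)]
captions = ["fetal head measurement", "profile view of the face",
            "cross section of the abdomen"]

# Image difficulty: Wasserstein distance of each empirical intensity
# distribution from the first image's distribution (assumed reference).
img_scores = [wasserstein_distance(images[0], img) for img in images]

# Text difficulty: cosine distance of each caption's TF-IDF vector from the centroid.
tfidf = TfidfVectorizer().fit_transform(captions).toarray()
centroid = tfidf.mean(axis=0, keepdims=True)
txt_scores = cosine_distances(tfidf, centroid).ravel()

difficulty = 0.5 * np.array(img_scores) + 0.5 * txt_scores  # assumed equal weighting
print("curriculum order (easy to hard):", np.argsort(difficulty).tolist())
```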
http://arxiv.org/abs/1606.06864v2
A Curriculum Learning Method for Improved Noise Robustness in Automatic Speech Recognition
The performance of automatic speech recognition systems in noisy environments still leaves room for improvement. Speech enhancement or feature enhancement techniques for increasing the noise robustness of these systems usually add components to the recognition system that need careful optimization. In this work, we propose the use of a relatively simple curriculum training strategy called accordion annealing (ACCAN). It uses a multi-stage training schedule where samples at signal-to-noise ratio (SNR) values as low as 0dB are first added and samples at increasingly higher SNR values are gradually added up to an SNR value of 50dB. We also use a method called per-epoch noise mixing (PEM) that generates noisy training samples online during training and thus enables dynamically changing the SNR of our training data. Both the ACCAN and the PEM methods are evaluated on an end-to-end speech recognition pipeline on the Wall Street Journal corpus. ACCAN decreases the average word error rate (WER) on the 20dB to -10dB SNR range by up to 31.4% when compared to a conventional multi-condition training method.
[ "Language Models", "Semantic Text Processing", "Speech & Audio in NLP", "Robustness in NLP", "Text Generation", "Responsible & Trustworthy NLP", "Speech Recognition", "Multimodality" ]
[ 52, 72, 70, 58, 47, 4, 10, 74 ]
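Below is a small NumPy sketch of per-epoch noise mixing: each epoch, noise is mixed into the clean waveform at an SNR drawn from the current stage's range, with the range widening stage by stage in the spirit of the accordion schedule. The concrete SNR stages and the toy signal are illustrative assumptions.

```python
# Sketch of per-epoch noise mixing (PEM) with a widening SNR range per stage.
# Schedule values and the toy tone are assumptions, not the paper's settings.
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean/noise power ratio equals `snr_db`, then mix."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)  # 1 s toy tone

# Illustrative multi-stage schedule: the sampled SNR range widens stage by stage.
snr_ranges = [(0, 10), (0, 25), (0, 40), (0, 50)]
for epoch, (lo, hi) in enumerate(snr_ranges):
    snr = rng.uniform(lo, hi)                     # fresh per-epoch SNR draw
    noise = rng.normal(0.0, 1.0, size=clean.shape)
    noisy = mix_at_snr(clean, noise, snr)
    print(f"epoch {epoch}: SNR = {snr:.1f} dB, peak = {np.max(np.abs(noisy)):.2f}")
```

Because the noise is drawn afresh every epoch, the model never sees the same noisy rendering of an utterance twice, which is the point of mixing online rather than precomputing a noisy corpus.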
SCOPUS_ID:85105874815
A Custom Word Embedding Model for Clustering of Maintenance Records
Maintenance records of industrial equipment contain rich descriptive information in free-text format, such as involved parts, failure mechanisms, operating conditions, etc. Our objective is to leverage this unstructured textual information to identify groups of similar maintenance jobs. In this article, we use a natural language based approach and propose a novel custom word embedding model, which utilizes two sources of information, first, maintenance records collected from in-field operations and second, industrial taxonomy, to effectively identify clusters. The advantages of our model include combined use of semantic and taxonomic sources of information for clustering, one step/simultaneous training, which enables knowledge sharing between the two information sources and reduces hyperparameters, and no dependence on third-party data. We demonstrate the efficacy of our model for cluster identification using a real-world dataset. The results show that simultaneous incorporation of semantic and taxonomic information enables accurate extraction of contextual insights for improving maintenance decision-making and equipment reliability.
[ "Text Clustering", "Semantic Text Processing", "Information Extraction & Text Mining", "Representation Learning" ]
[ 29, 72, 3, 12 ]
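As a hedged illustration of embedding-based clustering of maintenance text, the sketch below trains an off-the-shelf word2vec model (a stand-in for the paper's custom, taxonomy-aware embedding), averages word vectors per record, and clusters the records with k-means; all data and settings are invented.

```python
# Sketch: word2vec record embeddings + k-means clustering of maintenance text.
# This is a generic stand-in, not the paper's custom embedding model.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

records = [
    "pump seal leak replaced gasket",
    "motor bearing noise lubricated bearing",
    "pump impeller worn replaced impeller",
    "motor overheating cleaned cooling fan",
]
tokenized = [r.split() for r in records]

w2v = Word2Vec(sentences=tokenized, vector_size=32, window=3,
               min_count=1, workers=1, seed=0, epochs=100)

def record_vector(tokens):
    """Average of word vectors; a simple stand-in for richer record embeddings."""
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.stack([record_vector(t) for t in tokenized])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for rec, lab in zip(records, labels):
    print(lab, rec)
```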
http://arxiv.org/abs/1912.11151v1
A Cycle-GAN Approach to Model Natural Perturbations in Speech for ASR Applications
Naturally introduced perturbations in the audio signal, caused by emotional and physical states of the speaker, can significantly degrade the performance of Automatic Speech Recognition (ASR) systems. In this paper, we propose a front-end based on a Cycle-Consistent Generative Adversarial Network (CycleGAN) which transforms naturally perturbed speech into normal speech and hence improves the robustness of an ASR system. The CycleGAN model is trained on non-parallel examples of perturbed and normal speech. Experiments on spontaneous laughter-speech and creaky-speech datasets show that the performance of four different ASR systems improves when using speech obtained from the CycleGAN-based front-end, as compared to directly using the original perturbed speech. Visualization of the features of the laughter-perturbed speech and those generated by the proposed front-end further demonstrates the effectiveness of our approach.
[ "Text Generation", "Speech & Audio in NLP", "Speech Recognition", "Multimodality" ]
[ 47, 70, 10, 74 ]
http://arxiv.org/abs/2010.14891v3
A Cyclic Proof System for HFLN
A cyclic proof system allows us to perform inductive reasoning without explicit inductions. We propose a cyclic proof system for HFLN, which is a higher-order predicate logic with natural numbers and alternating fixed-points. Ours is the first cyclic proof system for a higher-order logic, to our knowledge. Due to the presence of higher-order predicates and alternating fixed-points, our cyclic proof system requires a more delicate global condition on cyclic proofs than the original system of Brotherston and Simpson. We prove the decidability of checking the global condition and soundness of this system, and also prove a restricted form of standard completeness for an infinitary variant of our cyclic proof system. A potential application of our cyclic proof system is semi-automated verification of higher-order programs, based on Kobayashi et al.'s recent work on reductions from program verification to HFLN validity checking.
[ "Programming Languages in NLP", "Reasoning", "Multimodality" ]
[ 55, 8, 74 ]
https://aclanthology.org//W97-1106/
A Czech Morphological Lexicon
In this paper, a treatment of Czech phonological rules in the two-level morphology approach is described. First, the possible phonological alternations in Czech are listed, and then their treatment in a practical application, a Czech morphological lexicon, is described.
[ "Phonology", "Syntactic Text Processing", "Morphology" ]
[ 6, 15, 73 ]
SCOPUS_ID:85126975563
A DAE-based Approach for Improving the Grammaticality of Summaries
While recent neural sequence-to-sequence models have achieved better and better ROUGE performance on the summarization task, little work has emphasized the grammaticality of the generated summaries. This paper proposes a simple method of pre-training the summarizer as a Denoising Autoencoder (DAE) to reduce grammar errors. We apply two types of DAE: one simply reconstructs the input sentences in which randomly sampled tokens are replaced with [MASK] elements; the other recovers words or phrases that are replaced with wrong expressions, a corruption designed for the grammatical error correction task. We evaluate the experimental outputs with three automatic metrics, and the results reveal better grammatical performance while achieving ROUGE scores higher than those of the baseline model.
[ "Text Error Correction", "Syntactic Text Processing", "Summarization", "Text Generation", "Information Extraction & Text Mining" ]
[ 26, 15, 30, 47, 3 ]
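The first corruption type described above can be illustrated with a few lines of Python: tokens are randomly replaced with [MASK] and the original sentence is kept as the reconstruction target. The mask rate and whitespace tokenization are assumptions for the sketch.

```python
# Sketch of DAE-style corruption: mask random tokens, keep the original
# sentence as the target. Mask rate and tokenization are illustrative choices.
import random

def mask_tokens(sentence: str, mask_rate: float = 0.15, seed: int = 0):
    rng = random.Random(seed)
    tokens = sentence.split()
    corrupted = ["[MASK]" if rng.random() < mask_rate else tok for tok in tokens]
    return " ".join(corrupted), sentence  # (noisy input, reconstruction target)

if __name__ == "__main__":
    src, tgt = mask_tokens("the committee has approved the new budget for next year")
    print("input :", src)
    print("target:", tgt)
```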
SCOPUS_ID:85131250007
A Data-Driven Cognitive Salience Model for Objective Perceptual Audio Quality Assessment
Objective audio quality measurement systems often use perceptual models to predict the subjective quality scores of processed signals, as reported in listening tests. Most systems map different metrics of perceived degradation into a single quality score predicting subjective quality. This requires a quality mapping stage that is informed by real listening test data using statistical learning (i.e., a data-driven approach) with distortion metrics as input features. However, the amount of reliable training data is limited in practice, and usually not sufficient for a comprehensive training of large learning models. Models of cognitive effects in objective systems can, however, improve the learning model. Specifically, considering the salience of certain distortion types, they provide additional features to the mapping stage that improve the learning process, especially for limited amounts of training data. We propose a novel data-driven salience model that informs the quality mapping stage by explicitly estimating the cognitive/degradation metric interactions using a salience measure. Systems incorporating the novel salience model are shown to outperform equivalent systems that only use statistical learning to combine cognitive and degradation metrics, as well as other well-known measurement systems, for a representative validation dataset.
[ "Cognitive Modeling", "Speech & Audio in NLP", "Linguistics & Cognitive NLP", "Multimodality" ]
[ 2, 70, 48, 74 ]