text (string, lengths 82–736) | label (int64, 0–1) |
---|---|
negation is well-understood in grammars , the valid ways to form a negation are well-documented---dialogs can succeed without a closed dialog model | 0 |
we use the moses toolkit to train our phrase-based smt models---we use the moses software to train a pbmt model | 1 |
in this paper , we proposed a sentiment aligned topic model ( satm ) for product aspect rating prediction---in this paper , we address the problem of product aspect rating prediction | 1 |
the weights λ m in the log-linear model were trained using minimum error rate training with the news 2009 development set---feature weights were trained with minimum error-rate training on the news-test2008 development set using the dp beam search decoder and the mert implementation of the moses toolkit | 1 |
all word vectors are trained on the skipgram architecture---the vectors are pre-trained using the skipgram model 1 | 1 |
however , due to lexical ambiguity , encoding word meaning with a single vector is problematic---the srilm toolkit was used for training the language models using kneser-ney smoothing | 0 |
conditional random fields are undirected graphical models used for labeling sequential data---conditional random fields are undirected graphical models trained to maximize a conditional probability | 1 |
in fnwm data set , the biggest improvements achieved 55.88 % , 31.11 % and 11.50 % respectively in the three groups of results , followed by smt data set---in fnwm data set , the biggest improvements achieved 55 . 88 % , 31 . 11 % and 11 . 50 % respectively | 1 |
the decoding weights are optimized with minimum error rate training to maximize bleu scores---phrasebased smt models are tuned using minimum error rate training | 1 |
we compare the results of ensemble decoding with a number of baselines for domain adaptation---the morphological analyzer we use represents arabic words with 15 features | 0 |
the topic assignment for each word is irrelevant to all other words---we implement some of these features using the stanford parser | 0 |
in figure 1 we define the position of m4 to be right after m3 ( because “the” is after “held” in leftto-right order on the target side )---we define the position of m4 to be right after m3 ( because “ the ” is after “ held ” in leftto-right order | 1 |
for subtask c , we employed a two step filtering strategy to reduce the noise which taking from unrelated comments---for subtask c , we implemented a two-step strategy to select out the similar questions and filter the unrelated comments | 1 |
henry et al and tulkens et al specifically worked on disambiguation of acronyms---tulkens et al combined word representations and definitions from umls to create concept representations | 1 |
since passage information relevant to question is more helpful to infer the answer in reading comprehension , we apply self-matching based on question-aware representation and gated attention-based recurrent networks---in the sr approach , as described by polifroni , the user has to ask for cheap flights and direct flights separately | 0 |
the topics are determined by using latent dirichlet allocation---to learn the topics we use latent dirichlet allocation | 1 |
semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles---semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts | 1 |
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text | 1 |
framenet is a semantic resource which provides over 1200 semantic frames that comprise words with similar semantic behaviour---the framenet database provides an inventory of semantic frames together with a list of lexical units associated with these frames | 1 |
we use pre-trained embeddings from glove---projected trees are added to a statistical parser to improve parsing quality | 0 |
morphological segmentation aims to divide words into morphemes , meaning-bearing subword units---morphological segmentation aims to divide words into a sequence of standardized segments | 1 |
experimental results show that our approach significantly outperforms the baseline system by up to 1.4 bleu points---experimental results show that our approach significantly improves the translation performance and obtains improvement of 1 . 0 bleu scores | 1 |
we trained two 5-gram language models on the entire target side of the parallel data , with srilm---we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing | 1 |
wordnet is a byproduct of such an analysis---the nodes are concepts ( or synsets as they are called in the wordnet ) | 1 |
a notable component of our extension is that we introduce a training algorithm for learning a hidden unit crf of maaten et al from partially labeled sequences---we train the joint model with the max-loss variant of the mira algorithm , adapted to latent variables | 0 |
several general-purpose off-the-shelf parsers have become widely available---nowadays , many state-of-the-art parsers are based on lexicalized models | 1 |
for the evaluation of the results we use the bleu score---the various smt systems are evaluated using the bleu score | 1 |
distributional semantic models produce vector representations which capture latent meanings hidden in association of words in documents---distributional semantic models are employed to produce semantic representations of words from co-occurrence patterns in texts or documents | 1 |
this paper for the first time applies a state-of-the-art probabilistic model to al with pa for dependency parsing---this is the first work that applies a state-of-the-art probabilistic parsing model to al for dependency parsing | 1 |
multiword expressions are notoriously challenging for nlp , due to their many potential levels of idiosyncrasy , from lexical to semantic and pragmatic to statistical---it is widely acknowledged in the nlp community that multiword expressions are a challenge for many nlp applications , due to their idiosyncratic behaviour at different levels of linguistic description | 1 |
we used a phrase-based smt model as implemented in the moses toolkit---we used moses as the implementation of the baseline smt systems | 1 |
this maximum weighted bipartite matching problem can be solved in otime using the kuhnmunkres algorithm---we present quickview , an nlp-based tweet search platform | 0 |
in this paper , we adopt the widely used sequential model , the hidden markov model ( hmm ) ( cite-p-15-1-5 ) , to classify sentences of a multi-author document according to their authorship---in this paper , a well-known sequential model , hidden markov model ( hmm ) , is used for modelling the sequential patterns of the document in order to describe the authorship | 1 |
in this run , we use a sentence vector derived from word embeddings obtained from word2vec---we use the cnn model with pretrained word embedding for the convolutional layer | 1 |
among others , there are studies in japanese grammatical error correction using statistical machine translation which do not limit the type of errors from the learner---among others , there are studies using phrase-based statistical machine translation , which does not limit the types of grammatical errors made by a learner | 1 |
relation extraction is the task of tagging semantic relations between pairs of entities from free text---relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text | 1 |
we train the model using the adam optimizer with the default hyper parameters---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit | 0 |
lin and pantel describe an algorithm called dirt that automatically learns paraphrase expressions from text---lin and pantel describe an unsupervised algorithm for discovering inference rules from text | 1 |
we used the moses decoder , with default settings , to obtain the translations---we used moses , a phrase-based smt toolkit , for training the translation model | 1 |
the system described in this paper is a combination of a feature-based hierarchical lexicon and word grammar with an extended two-level morphology---the system presented in this paper is a modification of the one published in cite-p-14-1-1 | 1 |
translation quality can be measured in terms of the bleu metric---we measure machine translation performance using the bleu metric | 1 |
we also used word2vec to generate dense word vectors for all word types in our learning corpus---the representations were calculated by multiplying the word2vec vectors for each word , which we found to perform better than addition | 1 |
this paper proposed a novel method for learning probability models of subcategorization preference of verbs---we propose a novel method for learning a probability model of subcategorization preference of verbs | 1 |
a 5-gram language model was built using srilm on the target side of the corresponding training corpus---the language model is trained on the target side of the parallel training corpus using srilm | 1 |
to train our models , which are fully differentiable , we use the adadelta optimizer---we use a minibatch stochastic gradient descent algorithm and adadelta to train each model | 1 |
the alignment improvement results in an improvement of 2.16 bleu score on phrase-based smt system and an improvement of 1.76 bleu score on parsing-based smt system---alignment results in an improvement of 2 . 16 bleu score on a phrase-based smt system and an improvement of 1 . 76 bleu score on a parsing-based smt system | 1 |
in conversational systems , understanding user intent is critical to the success of interaction---identification of user intent also has important implications in building intelligent conversational qa systems | 1 |
in this paper we introduced dkpro wsd , a javaand uima-based framework for word sense disambiguation---in this paper we present dkpro wsd , a freely licensed , general-purpose framework for wsd | 1 |
relation extraction is the task of extracting semantic relationships between entities in text , e.g . to detect an employment relationship between the person larry page and the company google in the following text snippet : google ceo larry page holds a press announcement at its headquarters in new york on may 21 , 2012---relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) | 1 |
in this paper , we propose a generative model that incorporates this distributional prior knowledge---we propose a generative model that incorporates distributional prior knowledge | 1 |
then we use the stanford parser to determine sentence boundaries---first , we use stanford parser to parse our sentences into dependency trees | 1 |
both systems are phrase-based smt models , trained using the moses toolkit---the experiments of the phrase-based smt systems are carried out using the open source moses toolkit | 1 |
we have presented a black box for generating sentential paraphrases : ppdb language packs---we are releasing a black box for generating sentential paraphrases : machine translation language packs | 1 |
for instance , mihalcea et al studied pmi-ir , lsa , and six wordnet-based measures on the text similarity task---mihalcea et al defines a measure of text semantic similarity and evaluates it in an unsupervised paraphrase detector on this data set | 1 |
an english 5-gram language model is trained using kenlm on the gigaword corpus---a 5-gram language model of the target language was trained using kenlm | 1 |
we evaluate translations with bleu and meteor---we evaluate global translation quality with bleu and meteor | 1 |
relation extraction ( re ) is the task of extracting semantic relationships between entities in text---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting | 0 |
for classification , we use simple heuristics by taking the postpositions of the mwe s into account---classification is based on simple heuristics that take the co-occurrence of mwe s with distinct postpositions into account | 1 |
mei et al used random walks over a bipartite graph of queries and urls to find query refinements---recently , mei et al have used the hitting times of nodes in a bipartite graph created from search engine query logs to find related queries | 1 |
gong et al and xiao et al introduce topic-based similarity models to improve smt system---xiao et al propose a topic similarity model for rule selection | 1 |
a 4-grams language model is trained by the srilm toolkit---incorporating external rules or linguistic resources in a deep learning model generally requires substantially adapting the model | 0 |
a good ranking is the one that ranks all good comments above potentiallyuseful and bad ones---a good ranking is the one that the perfectmatch and the relevant questions are both ranked above the irrelevant ones | 1 |
in this paper we present a fully unsupervised wsd system , which only requires wordnet sense inventory and unannotated text---in this paper we present a fully unsupervised word sense disambiguation method that requires only a dictionary and unannotated text | 1 |
snowball is another system that used bootstrapping techniques for extracting relations from unstructured text---with shared parameters , the model is able to learn a general way to act in slots , increasing its scalability to large domains | 0 |
phrase-based statistical machine translation models have achieved significant improvements in translation accuracy over the original ibm word-based model---in recent years , various phrase translation approaches have been shown to outperform word-to-word translation models | 1 |
to deal with this problem , we propose graph merging , a new perspective , for building flexible representations---to deal with this problem , we propose graph merging , a new perspective , for building flexible dependency graphs | 1 |
here we compare our method to an implement of the third-order grand-sibling parser -whose parsing performance on ctb is not reported in koo and collins , and the dynamic programming transition-based parser of huang and sagae---we compare our method to a state-of-the-art graph-based parser as well as a state-of-the-art transition-based parser that uses a beam and the dynamic programming transition-based parser of huang and sagae | 1 |
semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text---semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts | 1 |
this network uses pre-trained word embeddings of 200 dimensions generated using word2vec on the above corpus of food-related tweets---the weights of the embedding layer are initialized using word2vec embeddings trained on 400 million tweets from the acl w-nut share task | 1 |
mead is a centroid based multi document summarizer which generates summaries using cluster centroids produced by topic detection and tracking system---mead is a centroid based multi document summarizer , which generates summaries using cluster centroids produced by topic detection and tracking system | 1 |
we employed the glove as the word embedding for the esim---propbank ( cite-p-17-3-4 ) is the corpus of reference for verb-argument relations | 0 |
we also demonstrate that extracted translations significantly improve the performance of the moses machine translation system---coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities | 0 |
following bahdanau et al , we use bi-directional gated recurrent unit as the encoder---unlike bahdanau et al , we use lstms rather than grus as hidden units | 1 |
jiang et al used a character-based model using perceptron for pos tagging and a log-linear model for re-ranking---jiang et al proposes a cascaded linear model for joint chinese word segmentation and pos tagging | 1 |
the english data representation was done using tokenizer 6 and glove pretrained word vectors---the dimension of word embedding was set to 100 , which was initialized with glove embedding | 1 |
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing | 1 |
knowledge graphs such as freebase , yago and wordnet are among the most widely used resources in nlp applications---knowledge graphs like wordnet , freebase , and dbpedia have become extremely useful resources for many nlp-related applications | 1 |
in this paper , we propose the question condensing networks ( qcn ) to address these problems---in this paper , we propose the question condensing networks ( qcn ) | 1 |
for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing | 1 |
to mitigate overfitting , we apply the dropout method to the inputs and outputs of the network---to prevent overfitting , we apply dropout operators to non-recurrent connections between lstm layers | 1 |
therefore , our future work will examine the new implications of problematic situations and user intent for analytical questions---therefore , this paper investigates the new implications of user intent and problematic situations | 1 |
our implementation of the np-based qa system uses the empire noun phrase finder , which is described in detail in cardie and pierce---a detailed description of the base noun phrase finder and its evaluation can be found in cardie and pierce | 1 |
the decoding weights are optimized with minimum error rate training to maximize bleu scores---the model parameters are trained using minimum error-rate training | 1 |
the srilm toolkit is used to train 5-gram language model---word embeddings are initialized with pretrained glove vectors 1 , and updated during the training | 0 |
the english part consists of texts from the penn treebank -articles from the wall street journal---taking syntactic role of each word with its narrow semantic meaning into account , can be highly relevant | 0 |
crfs are a class of undirected graphical models with exponent distribution---the agenda is a structure that stores a list of constituents for which a derivation has been found but which have not yet been combined with other constituents | 0 |
previous work showed that word clusters derived from an unlabelled dataset can improve the performance of many nlp applications---previous work has shown that unlabeled text can be used to induce unsupervised word clusters that can improve performance of many supervised nlp tasks | 1 |
conditional random fields has shown to be the state-of-the-art supervised machine learning approach for this clinical task---conditional random fields have been widely adopted for this task , and give state-of-the-art results | 1 |
sentiment analysis is a research area in the field of natural language processing---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) | 1 |
in this paper , we describe a fast algorithm for sentence alignment that uses lexical information---in this paper , we describe a fast algorithm for aligning sentences with their translations | 1 |
the overall mt system is evaluated both with and without function guessing on 500 held-out sentences , and the quality of the translation is measured using the bleu metric---sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed | 0 |
word2vec is the method to obtain distributed representations for a word by using neural networks with one hidden layer---word2vec is a language modeling technique that maps words from vocabulary to continuous vectors | 1 |
the bleu score for all the methods is summarised in table 5---we extract syntactic dependencies using stanford parser and use its collapsed dependency format | 0 |
in our experiment , using glpk ’ s branch-and-cut solver took 0.2 seconds to produce optimal ilp solutions for 1000 sentences on a machine with intel core 2 duo cpu and 4gb ram---rhetorical structure theory defines some widely used tools for natural language discourse processing | 0 |
recently , klementiev et al extended the neural probabilistic language model to induce cross-lingual word distributed representations on a set of wordlevel aligned parallel sentences---using bilingual parallel corpora , this paper presents a new method for acquiring collocation translations | 0 |
we build a bilstm-lstm encoder-decoder machine translation system as described in using opennmt---we implement our lstm encoder-decoder model using the opennmt neural machine translation toolkit | 1 |
barzilay and elhadad describe a technique for text summarisation based on lexical chains---gildea and jurafsky applied sp to automatic srl by clustering extracted verb-direct object pairs , resulting in modest improvements | 0 |
we used minimum error rate training to tune the feature weights for maximum bleu on the development set---we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu | 1 |
for the classification task , we use pre-trained glove embedding vectors as lexical features---we also use glove vectors to initialize the word embedding matrix in the caption embedding module | 1 |
our models are based on gaussian processes , a non-parametric kernelised probabilistic framework---our models are based on gaussian processes , a non-parametric probabilistic framework | 1 |