text (string, lengths 82-736)
label (int64, values 0-1)
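The rows that follow form a two-column dump: each "text" entry packs two citation sentences separated by "---", and each "label" entry is a single integer. As a minimal sketch, assuming these rows are exported as a CSV with the two columns above, they could be parsed as below; the file name citation_pairs.csv, the helper name load_pairs, and the reading of label 1 as "the two sentences describe the same cited work" are assumptions, not stated anywhere in this dump.

```python
# Minimal parsing sketch, assuming the rows below are exported as a CSV
# with "text" and "label" columns (the file name is hypothetical).
import csv

def load_pairs(path="citation_pairs.csv"):
    """Yield (sentence_a, sentence_b, label) triples.

    Each "text" value holds two citation sentences joined by "---".
    The label is assumed (not documented here) to be 1 when the pair
    refers to the same cited work and 0 otherwise.
    """
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            left, _, right = row["text"].partition("---")
            yield left.strip(), right.strip(), int(row["label"])
```

Under that assumption, the first row below would come back as the two Socher et al. sentences with label 1.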
socher et al learned compositional vector representations of sentences with a recursive neural network---socher et al present a compositional model based on a recursive neural network
1
dependency tree parsing as the search for the maximum spanning tree in a directed graph was proposed by mcdonald et al---dependency-tree parsing as the search for the maximum spanning tree in a graph was proposed by mcdonald et al
1
the embeddings were trained over the english wikipedia using word2vec---previous work demonstrated that the performance can be improved by using word embeddings learned from large-scale unlabeled data in many nlp tasks both in english and chinese
0
word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd )
1
we use the word2vec framework in the gensim implementation to generate the embedding spaces---we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus
1
character n-grams were the best single feature class in their experiments---feature class improved the overall accuracy of their system
1
for example , chung and gildea have proved that automatic empty category detection has a positive impact on machine translation---for example , chung and gildea reported preliminary work that has shown a positive impact of automatic empty element detection on statistical machine translation
1
coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task — e.g. , mention/markable detection , anaphor identification — and that require substantial implementation efforts---coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity
1
the task is to identify such words among a set of negative polar expressions---the minimum error rate training was used to tune the feature weights
0
we will also explore generating other features measuring the higher-level aspects of the spoken responses---we focus on exploring features to represent the high-level aspect of speech
1
the grapheme-based approach , also known as direct orthographical mapping , which treats transliteration as a statistical machine translation problem under monotonic constraints , has also achieved promising results---we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit
0
we evaluated the system using bleu score on the test set---we used smoothed bleu for benchmarking purposes
1
embeddings , have recently shown to be effective in a wide range of tasks---word embeddings have been used to help to achieve better performance in several nlp tasks
1
we implement the classifiers using the text classification framework dkpro tc which includes all of the abovementioned classifiers---for feature extraction and experimentation , we use the dkpro tc text classification framework
1
all language models were trained using the srilm toolkit---all the language models are built with the sri language modeling toolkit
1
the two baseline methods were implemented using scikit-learn in python---we describe an intuitionistic method for dependency parsing , where a classifier is used to determine whether a pair of words forms a dependency edge
0
part-of-speech tagging is the assignment of syntactic categories ( tags ) to words that occur in the processed text---part-of-speech tagging is a crucial preliminary process in many natural language processing applications
1
we have further divided the dump into pieces of growing size and applied mate 7 for the automatic detection of semantic roles to the varying portions and annotated them with srl information---we applied the mate 7 parser for the automatic detection of semantic roles to the portion of the wikipedia dump annotating it with srl information
1
for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b---also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove
1
for input representation , we used glove word embeddings---we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization
1
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---language modeling is trained using kenlm using 5-grams , with modified kneser-ney smoothing
1
zarrieß and kuhn argued that multiword expressions can be reliably detected in parallel corpora by using dependency-parsed , word-aligned sentences---zarrieß and kuhn argue that multiword expressions can be reliably detected in parallel corpora by using dependency-parsed , word-aligned sentences
1
we use srilm with its default parameters for this purpose---we use 5-grams for all language models implemented using the srilm toolkit
1
table 2 : error reduction in the average f-score me---we used the implementation of random forest in scikitlearn as the classifier
0
chiang and knight give a good introduction to stsgs , which originate from the syntax-directed translation schemes of aho and ullman---chiang gives a good introduction to stsg , which originate from the syntax-directed translation schemes of aho and ullman
1
these word embeddings are learned in advance using a continuous skip-gram model , or other continuous word representation learning methods---these word vectors can be randomly initialized from a uniform distribution , or be pre-trained from text corpus with embedding learning algorithms
1
we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump---to encode the original sentences we used word2vec embeddings pre-trained on google news
1
we use 5-grams for all language models implemented using the srilm toolkit---we used the disambig tool provided by the srilm toolkit
1
we use pre-trained vectors from glove for word-level embeddings---we use pre-trained glove embeddings to represent the words
1
the msa tool we extend is mada -morphological analysis and disambiguation of arabic---we use the morphological analyzer mada to decompose the arabic source
1
a 4-gram language model was trained on the monolingual data by the srilm toolkit---a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language
1
in the experiments reported here we use support vector machines through the svm light package---for instance , chiao and zweigenbaum propose to integrate a reverse translation spotting strategy in order to improve precision
0
the values of the word embeddings matrix e are learned using the neural network model introduced by---the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model
1
therefore , we propose a convolutional neural network ( cnn ) based model which leverages both word-level and character-based representations---in this work , we proposed a convolutional neural network ( cnn ) based approach that combines both word-and character-level representations , for review
1
however , the richer feature representations result in a high-dimensional feature space---in a relatively high-dimensional feature space may suffer from the data sparseness problem
1
in bisk and hockenmaier , we introduced a model that is based on hierarchical dirichlet processes---in our previous work we developed a word-sense induction system based on topic modelling , specifically a hierarchical dirichlet process
1
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing---we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing
1
to estimate the weights λ i in formula , we use the minimum error rate training algorithm , which is widely used for phrasebased smt model training---given a set of question-answer pairs as the development set , we use the minimum error rate training algorithm to tune the feature weights λ m i in our proposed model
1
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus---the language model is trained on the target side of the parallel training corpus using srilm
1
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity---coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue
1
to employ the features described above in an actual classifier , we trained a logistic regression model using the weka toolkit---we presented a complete , correct , terminating extension of earley ' s algorithm that uses restriction
0
a trigram model was built on 20 million words of general newswire text , using the srilm toolkit---a 5-gram language model was built using srilm on the target side of the corresponding training corpus
1
for the mix one , we also train word embeddings of dimension 50 using glove---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset
1
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence---semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences
1
in this paper , we present a survey on taxonomy learning from text corpora---in this paper , we overview recent advances on taxonomy
1
relation extraction is a core task in information extraction and natural language understanding---relation extraction is the task of detecting and classifying relationships between two entities from text
1
when used as the underlying input representation , word vectors have been shown to boost the performance in nlp tasks---most previous research has found only small differences between different techniques for finding clusters
0
the penn discourse tree bank is the largest resource to date that provides a discourse annotated corpus in english---the penn discourse treebank , developed by prasad et al , is currently the largest discourse-annotated corpus , consisting of 2159 wall street journal articles
1
to evaluate the evidence span identification , we calculate f-measure on words , and bleu and rouge---in order to measure translation quality , we use bleu 7 and ter scores
1
we use publicly-available 1 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling---mt systems have proven effective — the models are compelling and show good room for improvement
0
the most common word embeddings used in deep learning are word2vec , glove , and fasttext---one of the most useful neural network techniques for nlp is the word embedding , which learns vector representations of words
1
feng et al proposed a shift-reduce algorithm to add btg constraints to phrase-based models---feng et al use shift-reduce parsing to impose itg constraints on phrase permutation
1
headden iii et al introduce the extended valence grammar and add lexicalization and smoothing---lexical simplification is a subtask of the more general text simplification task which attempts at reducing the cognitive complexity of a text so that it can be ( better ) understood by a larger audience
0
on the other hand , zarrieß and kuhn make use of translational correspondences when identifying multiword expressions---zarrieß and kuhn argue that multiword expressions can be reliably detected in parallel corpora by using dependency-parsed , word-aligned sentences
1
framenet is an expert-built lexical-semantic resource incorporating the theory of frame-semantics---the berkeley framenet is an ongoing project for building a large lexical resource for english with expert annotations based on frame semantics
1
we show that this criterion can be consistently annotated with high agreement , and that it is intuitive enough to be obtained through crowdsourcing---we propose a generic argument reduction criterion , along with an annotation procedure , and show that it can be consistently and intuitively annotated
1
one is a bilexical model , which is a kind of discriminative model , and the other is a generative model---one is the bilexical dependency model and the other is the generative model
1
the purpose of the supertagger is to reduce the search space for the parser---in line with previous work , we let human evaluators judge the grammaticality , simplicity , and meaning preservation of the simplified text
0
for evaluation , caseinsensitive nist bleu is used to measure translation performance---we use case-sensitive bleu-4 to measure the quality of translation result
1
we report case-sensitive bleu and ter as the mt evaluation metrics---we use bleu 2 , ter 3 and meteor 4 , which are the most-widely used mt evaluation metrics
1
we use scikitlearn as machine learning library---we use the skll and scikit-learn toolkits
1
shen et al proposed a tree kernel for ltag derivation trees to focus only on linguistically meaningful structures---system tuning was carried out using both k-best mira and minimum error rate training on the held-out development set
0
in this paper we explore a pos tagging application of neural architectures that can infer word representations from the raw character stream---in this paper , we explored new models that can infer meaningful word representations from the raw character stream
1
the best results were obtained with their new approach---using these representations as features , bansal et al obtained improvements in dependency recovery in the mst parser
0
the word embeddings are initialized by pre-trained glove embeddings 2---we initialize word embeddings with a pre-trained embedding matrix through glove 3
1
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options---we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit
1
we trained a 5-grams language model by the srilm toolkit---we used the sri language modeling toolkit with kneser-ney smoothing
1
however , finding the best string ( e.g. , during decoding ) is then computationally intractable---semantic role labeling was first defined in gildea and jurafsky
0
bunescu and mooney propose a shortest path dependency kernel for relation extraction---the idea of using dependency parse trees for relation extraction in general was studied by bunescu and mooney
1
we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features---the word embeddings are word2vec of dimension 300 pre-trained on google news
1
in this paper we developed an approach to mctest that combines lexical matching with simple linguistic analysis---in this paper we develop a lexical matching method that takes into account multiple context
1
in ( cite-p-16-3-8 ) , lexical features were limited on each single side due to the feature space problem---in ( cite-p-16-5-10 ) , syntactic structures were employed to reorder the source language
1
we automatically produced training data from the penn treebank---we trained and tested the model on data from the penn treebank
1
we applied a supervised machine-learning approach , based on conditional random fields---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept
0
we use skip-gram representation for the training of word2vec tool---we perform pre-training using the skipgram nn architecture available in the word2vec tool
1
hypothesis 1 : metaphorical uses of words tend to convey more emotion than their literal paraphrases in the same context---relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base
0
it is therefore a promising direction to combine the advantages of both nmt and smt---which combines the advantages of nmt and smt efficiently
1
a modified joint source–channel model along with a number of alternatives have been proposed---including the modified joint source-channel model and their evaluation scheme have been proposed
1
synchronous context-free grammars are now widely used in statistical machine translation , with hiero as the preeminent example---probabilistic synchronous grammars are widely used in statistical machine translation and semantic parsing
1
a 4-grams language model is trained by the srilm toolkit---the trigram language model is implemented in the srilm toolkit
1
valitutti et al present an interactive system which generates humorous puns obtained through variation of familiar expressions with word substitution---valitutti et al present an interactive system which generates humorous puns obtained by modifying familiar expressions with word substitution
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp )
1
information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks---information extraction ( ie ) is the task of extracting information from natural language texts to fill a database record following a structure called a template
1
relation extraction is a challenging task in natural language processing---relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base
1
the language model was trained using srilm toolkit---in this paper , we statistically study the correlations among popular memes and their wordings , and generate meme
0
we use the adagrad algorithm to optimize the conditional , marginal log-likelihood of the data---for this , we propose a language model for generating reviews
0
the scfg formalism was repopularized for statistical machine translation by chiang---hierarchical phrase-based translation was proposed by chiang
1
the target language model was a standard ngram language model trained by the sri language modeling toolkit---the language model was generated from the europarl corpus using the sri language modeling toolkit
1
to do this we examine the dataset created for the english lexical substitution task in semeval---to score the participating systems , we use an evaluation scheme which is inspired by the english lexical substitution task in semeval 2007
1
finally , rozovskaya and roth found that a classifier outperformed a language modeling approach on different data , making it unclear which approach is best---we found that a language model does not generally perform as well as a classifier in terms of f 1 , similar to a previous finding from rozovskaya and roth
1
we used nltk wordnet synsets for obtaining the ambiguity of the word---we lemmatise each word using the wordnet nltk lemmatiser
1
we specify a non-stochastic version of the formalism , noting that probabilities may be attached to the rewrite rules exactly as in stochastic cfg---we demonstrate that an lda-based topic modelling approach outperforms a baseline distributional semantic approach and weighted textual matrix
0
morphological analysis is a staple of natural language processing for broad languages---we pre-train the word embedding via word2vec on the whole dataset
0
the hierarchical model is built on a weighted synchronous contextfree grammar---a synchronous context-free grammar is extracted from the alignments
1
wikipedia and wiktionary , which have been applied in computational methods only recently , offer new possibilities to enhance ir---sentence compression is a complex paraphrasing task with information loss involving substitution , deletion , insertion , and reordering operations
0
this is , however , computationally intractable , and it is a usual practice to resort to approximate decoding algorithms---exactly , it is a usual practice to resort to approximate search / decoding algorithms
1
this is an interpretation of negation that is intuitively appealing , formally simple , and computationally no harder than the original rounds-kasper logic---that yields an interpretation that is conceptually simple , motivated by the preservation of monotonicity , and is computationally no harder than the original rounds-kasper logic
1
we discuss examples that suit or challenge our approach---we show examples and solutions that may challenge our approach
1
the statistical significance test is performed using the re-sampling approach---statistical significance is computed using the bootstrap re-sampling approach proposed by koehn
1
we use a cws-oriented model modified from the skip-gram model to derive word embeddings---we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results
0