text: string (lengths 82 to 736)
label: int64 (values 0 or 1)
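Each text entry below holds two sentences joined by "---", and the label (0 or 1) appears to mark whether the two sentences express the same claim. As a minimal sketch, assuming the dump is stored as a JSON Lines file with "text" and "label" fields, the rows could be loaded and split as follows; the file name pairs.jsonl and the helper load_pairs are placeholders, not part of the dataset itself.

```python
import json

def load_pairs(path="pairs.jsonl"):
    """Yield (sentence_a, sentence_b, label) triples from the dump.

    Assumes one JSON object per line with "text" and "label" keys,
    where "text" holds two sentences joined by "---".
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            left, _, right = row["text"].partition("---")
            yield left.strip(), right.strip(), int(row["label"])

if __name__ == "__main__":
    # Print a short preview of each pair and its label.
    for a, b, label in load_pairs():
        print(label, a[:40], "|", b[:40])
```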
as secondary systems we use phrase-based systems equipped with linguistically-oriented modules similar with the ones proposed in---the secondary systems we use in this work are still phrase-based , but equipped with linguistically-oriented modules similar with the ones proposed in
1
in this work , we use vmf as the observational distribution---in this work , we demonstrate the use of the von mises-fisher distribution
1
we propose natural language and speech processing techniques should be used for efficient closed caption production of tv programs---we think the natural language and speech processing technology will be useful for the efficient production of tv programs with closed captions
1
the phrase-based translation experiments reported in this work was performed using the moses 2 toolkit for statistical machine translation---the experiments were performed on english-to-german translation using a standard phrase-based smt system , trained using the moses toolkit , with a 5-gram language model
1
word sense disambiguation is the task of assigning a sense to a word based on the context in which it occurs---combining similarity functions from different resources could further improve the performance
0
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation
1
we perform the mert training to tune the optimal feature weights on the development set---we tune weights by minimizing bleu loss on the dev set through mert and report bleu scores on the test set
1
semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot---semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application
1
mikolov et al , 2013a ) proposes skip-gram and continuous bag-of-words models based on a single-layer network architecture---mikolov et al further proposed continuous bag-of-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors
1
distributed representations for words and sentences have been shown to significantly boost the performance of a nlp system---unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks
1
we apply back-translation method to use monolingual data---we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set
0
commonly used word vectors are word2vec , glove and fasttext---the vectors are given by a word2vec model and a glove model trained on german data
0
to train the model , we adopt the averaged perceptron algorithm with early update , following huang and sagae---in this experiment , we use the same set of feature templates as huang and sagae
1
all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing---in this paper , we explore a new problem of text recap extraction
0
we train the concept identification stage using infinite ramp loss with adagrad---we use the stanford pos tagger to obtain the lemmatized corpora for the parss task
0
our contribution to this discussion is a new , principled sparse coding method that transforms any distributed representation of words into sparse vectors , which can then be transformed into binary vectors ( §2 )---our contribution to this discussion is a new technique that constructs task-independent word vector representations using linguistic knowledge derived from pre-constructed linguistic resources like wordnet ( cite-p-12-3-17 ) , framenet ( cite-p-12-1-2 ) , penn treebank ( cite-p-12-3-14 ) etc
1
a node as , must be added to the tree if it statistically differs from its parent node s---as , must be added to the tree if it statistically differs from its parent node
1
metaphor is a frequently used figure of speech , reflecting common cognitive processes---a metaphor is a literary figure of speech that describes a subject by asserting that it is , on some point of comparison , the same as another otherwise unrelated object
1
in contrast , the language models are comparatively more sensitive to words with a syntactic function---mikolov et al propose word2vec where continuous vector representations of words are trained through continuous bag-of-words and skip-gram models
0
we used the first-stage pcfg parser of charniak and johnson for english and bitpar for german---a language model is a statistical model that gives a probability distribution over possible sequences of words
0
the most limiting property of the algorithm is such that the number of frames and roles must be predefined---that must be predefined – number of frames and number of roles – which is the most limiting property of the algorithm
1
huang et al presented an rnn model that uses document-level context information to construct more accurate word representations---huang et al presented a new neural network architecture which incorporated both local and global document context , and offered an impressive result
1
however , most existing parsers are slow , since they need to deal with a heavy grammar constant---since d-parsing algorithms do not have a grammar constant , typical implementations are significantly faster than c-parsers
1
coreference resolution is the next step on the way towards discourse understanding---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept
1
we trained the initial parser on the ccgbank training set , consisting of 39603 sentences of wall street journal text---we use the penn tree bank , constructed from articles from the wall street journal , as our primary training corpus , with the standard training split of 42068 sentences
1
such a forest is called a dependency tree---a dependency tree is a rooted , directed spanning tree that represents a set of dependencies between words in a sentence
1
in feature-based methods , a diverse set of strategies have been exploited to convert the classification clues into feature vectors---in feature-based methods , a diverse set of strategies is exploited to convert classification clues into feature vectors
1
script knowledge is a form of structured world knowledge that is useful in nlp applications for natural language understanding tasks ( e.g. , ambiguity resolution rahman and ng , 2012 ) , as well as for psycholinguistic models of human language processing , which need to represent event knowledge to model human expectations ( cite-p-15-3-5 , cite-p-15-3-4 ) of upcoming referents and utterances---script knowledge is a body of knowledge that describes a typical sequence of actions people do in a particular situation ( cite-p-7-1-6 )
1
this paper presents a stratified seed sampling strategy based on clustering algorithms for semi-supervised learning---this paper presents a clustering-based stratified seed sampling approach for semi-supervised relation extraction
1
bandyopadhyay et al , 2011 , sentiment analysis , and many other applications---local rank distance has already shown promising results in computational biology dinu et al , 2014 ) and native language identification
0
in this paper , we propose a novel task , zero-shot entity extraction , where the specification of the desired entities is provided as a natural language query---in this paper , we consider a new zero-shot learning task of extracting entities specified by a natural language query ( in place of seeds )
1
the decoding weights were optimized with minimum error rate training---bharati et al has described a constraint based hindi parser by applying the paninian framework
0
the goal is to make use of the in-domain unsegmented data to improve the ultimate performance of word segmentation---works aim to use huge amount of unsegmented data to further improve the performance of an already well-trained supervised model
1
turian et al applied this method to both named entity recognition and text chunking---stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target
0
it is a speech-enhanced version of the why2-atlas tutoring system---in this paper , we first implement the chunking method described in as a strong baseline
0
our experiments use the dependency model with valence---we use the standard generative dependency model with valence
1
we use the method for calculating the accuracy of propbank verbal semantic roles described in the conll-2008 shared task on semantic role labeling---following common practices , we measure the overlap of induced semantic roles and their gold labels on the conll 2008 training data
1
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---for the classification task , we use pre-trained glove embedding vectors as lexical features
1
lodhi et al , 2002 ) first used string kernels with character level features for text categorization---lodhi et al used string kernels to solve the text classification problem
1
support vector machine is a useful technique for data classification---support vector machines are one class of such model
1
hu et al proposes integration of constraints coming in the form of first order logic rules during training of nns---each essay was represented through the sets of features described below , using term frequency and the liblinear scikit-learn implementation of support vector machines with ovr , one vs
0
the proposed method will be incorporated into the tool kit for linguistic knowledge acquisition which we are now developing---knowledge acquisition method proposed in this paper will be incorporated into the tool kit for linguistic knowledge customization which we are now developing
1
we can reduce the time complexity to oby strictly adopting the dp structures in the parsing algorithm of eisner---thanks to the constraints on dependency trees , it is possible to reduce complexity to ofor lexicalized parsing using the spanbased representation proposed by eisner
1
while each of these alternatives has some advantages over soundex , none is adaptable to alternative distance metrics---however , etk has the advantage that it is adaptable to alternative distance metrics
1
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit
1
during the last decade , statistical machine translation systems have evolved from the original word-based approach into phrase-based translation systems---despite their frequent use in topic modeling , we find that stemmers produce no meaningful improvement in likelihood and coherence
0
besides , chinese is a topic-prominent language , the subject is usually covert and the usage of words is relatively flexible---this is because chinese is a pro-drop language ( cite-p-21-3-1 ) that allows the subject to be dropped in more contexts than english does
1
le and mikolov extends the neural network of word embedding to learn the document embedding---le and mikolov introduce paragraph vector to learn document representation from semantics of words
1
garrette et al propose a framework for combining logic and distributional models in which logical form is the primary meaning representation---in the past , our model thoroughly eliminates context windows and can capture the complete history of segmentation
0
we use word2vec as the vector representation of the words in tweets---for smt decoding , we use the moses toolkit with kenlm for language model queries
0
for estimating monolingual word vector models , we use the cbow algorithm as implemented in the word2vec package using a 5-token window---for estimating the monolingual we , we use the cbow algorithm as implemented in the word2vec package using a 5-token window
1
watkinson and manandhar describe an unsupervised approach for learning syntactic ccg lexicons---to keep consistent , we initialize the embedding weight with pre-trained word embeddings
0
sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b )---sentiment analysis is a recent attempt to deal with evaluative aspects of text
1
monroe et al used a single dialect-independent model for segmenting egyptian dialect in addition to msa---monroe et al used a single dialect-independent model for segmenting all arabic dialects including msa
1
collobert and weston propose a unified deep convolutional neural network for different tasks by using a set of task-independent word embeddings together with a set of task-specific word embeddings---for instance , collobert and weston use a multitask network for different nlp tasks and show that the multi-task setting improves generality among shared tasks
1
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus---we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting
1
shen et al , 2008 shen et al , 2009 proposed a string-to-dependency language model to capture long-distance word order---shen et al , 2008 shen et al , 2009 proposed a way to integrate dependency structure into target and source side string on hierarchical phrase rules
1
we present a brief sketch of sgns -the skip-gram embedding model introduced in trained using the negative-sampling procedure presented in---our departure point is the skip-gram neural embedding model introduced in trained using the negative-sampling procedure presented in
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---in preparation for this evaluation we improved our system by 40 % relative compared to our legacy system
0
it consistently improves the quality of the induced bilingual vector space , and consequently , the quality of bilingual lexicons extracted using that vector space---these intervals were computed following the bootstrap technique described in
0
for comparison purposes , we replicated the hiero system as described in ( cite-p-22-1-2 )---for comparison purposes , we replicated the hiero decoder ( cite-p-22-1-2 )
1
we use the maximum entropy model as a classifier---for entity tagging we used a maximum entropy model
1
sentence compression is a paraphrasing task aimed at generating sentences shorter than the given ones , while preserving the essential content---sentence compression is the task of summarizing a sentence while retaining most of the informational content and remaining grammatical
1
for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit---we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit
1
the syntactic relations are obtained using the constituency and dependency parses from the stanford parser---long sentences are removed , and the remaining sentences are pos-tagged and dependency parsed using the pre-trained stanford parser
1
similar to our approach these variants only require soundex mappings of a new language to build transliteration system , but our model does not require explicit mapping between n-gram characters and the ipa symbols instead it learns them automatically using phoneme dictionaries---these variants only require soundex mappings of a new language to build transliteration system , but our model does not require explicit mapping between n-gram characters and the ipa symbols instead
1
for the current task we use the χ 2 -measure as the preferred correlation measure because of its simplicity---for the current task we use the χ 2 -measure as the preferred co-occurrence measure because of its simplicity
1
in this work , we encode semantic features into convolutional layers by initializing them with important n-grams---we select the glove algorithm as a representative example
0
gedigian et al presented a method that discriminates between literal and metaphorical language , using a maximum entropy classifier---gedigian et al trained a maximum entropy classifier to discriminate between literal and metaphorical use
1
rouge is the standard automatic evaluation metric in the summarization community---most work on tls adopts the rouge toolkit that is used for for standard summarization evaluation
1
coreference resolution is a well known clustering task in natural language processing---in our experiment , svms and hm-svm training are carried out with svm struct packages
0
curran and moens found that dramatically increasing the volume of raw input data for distributional similarity tasks increases the accuracy of synonyms extracted---curran and moens have demonstrated that dramatically increasing the quantity of text used to extract contexts significantly improves synonym quality
1
the two datasets are verified through standard co-occurrence and neural network models , showing results comparable to the respective english datasets---two datasets were verified through standard co-occurrence and neural network models , showing results comparable to the respective english datasets
1
unlike most previous work , which has used a reduced set of pos tags , we use all 680 tags in the bultreebank---for this language , which has limited the number of possible tags , we used a very rich tagset of 680 morphosyntactic tags
1
we train the concept identification stage using infinite ramp loss with adagrad---we apply online training , where model parameters are optimized by using adagrad
1
we exploited two complementary types of indicators : self-identification and self-possession of conceptual class ( role ) attributes---we exploit a complementary signal based on characteristic conceptual attributes of a social role , or concept class
1
we use scikit-learn as machine learning library---we used scikit-learn library for all the machine learning models
1
we use coarse gold pos tags and the extended features set of zhang and nivre , without label information---our baseline parser uses the feature set described by zhang and nivre
1
bengio et al proposed a probabilistic neural network language model for word representations---we solve this sequence tagging problem using the mallet implementation of conditional random fields
0
a zero pronoun ( zp ) is a gap in a sentence which refers to an entity that supplies the necessary information for interpreting the gap---a zero pronoun ( zp ) is a gap in a sentence that is found when a phonetically null form is used to refer to a real-world entity
1
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit
1
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit
1
other approaches have focused on identifying argument components and relations and how these relate to essay scores---another line of research has studied the role of argumentative features in predicting the overall essay quality
1
the penn discourse treebank includes annotations of 18,459 explicit and 16,053 implicit discourse relations in texts from the wall street jounal---the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses
1
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing---previous work has focused on congressional debates , company-internal discussions , and debates in online forums
0
sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment---sarcasm is a sophisticated form of communication in which speakers convey their message in an indirect way
1
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---we trained a 4-gram language model on this data with kneser-ney discounting using srilm
1
all techniques are used from the scikit-learn toolkit---the fw feature set consists of 318 english fws from the scikit-learn package
1
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them---semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing
1
through the method , various range of collocations which are frequently used in a specific domain are retrieved automatically---through the method , various kinds of collocations induced by key strings are retrieved
1
we use stanford log-linear part-of-speech tagger to produce pos tags for the english side---we use stanford part-of-speech tagger to automatically detect nouns from text
1
in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks---during the last few years , smt systems have evolved from the original word-based approach to phrase-based translation systems
1
the word embeddings were built from 200 million tweets using the word2vec model---this baseline uses pre-trained word embeddings using word2vec cbow and fasttext
1
goldwater et al showed that incorporating a bigram model of word-to-word dependencies significantly improves word segmentation accuracy in english---goldwater et al showed that modeling dependencies between adjacent words dramatically improves word segmentation accuracy
1
for the specific-requirement scenario , the maximum generated likelihood is used as the objective function---for the diverse-requirement scenario , the conditional value-at-risk ( cvar ) is used as the objective function
1
we implemented the different aes models using scikit-learn---we use the scikit-learn machine learning library to implement the entire pipeline
1
for a labeled dependency parser , we used the mstparser 5 , which achieved top results in the conll 2006 shared task of multilingual dependency parsing---we used the mstparser , which achieved top results in the conll 2006 shared task , as a base dependency parser
1
this paper presents a graph-theoretic model of the acquisition of lexical syntactic representations---in this paper , we have presented a graph-theoretic model of the acquisition of lexical syntactic representations
1
we use a binary cross-entropy loss function , and the adam optimizer---for the loss function , we used the mean square error and adam optimizer
1
our approach revolves around a novel integration between a predictive embedding model and an indian buffet process posterior regularizer---intelligent assistants on mobile devices , such as siri , 1 have recently gained considerable attention as novel applications of dialogue technologies
0