text: string, lengths 82–736 (each entry is a sentence pair joined by "---")
label: int64, 0 or 1 (given on the line after its text entry)
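A minimal parsing sketch follows, assuming the records are stored in a plain-text file exactly as listed below (one text line, then its 0/1 label line); the file name "pairs.txt" and the field names are illustrative assumptions, not part of the dataset.

    # Minimal sketch: load the alternating text/label records into dicts.
    # Assumes the file contains only the record lines (no schema header);
    # "pairs.txt" is a hypothetical file name.
    def load_pairs(path="pairs.txt"):
        with open(path, encoding="utf-8") as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        records = []
        # lines alternate: text, label, text, label, ...
        for text, label in zip(lines[0::2], lines[1::2]):
            # each text entry is two sentences joined by "---"
            first, second = text.split("---", 1)
            records.append({
                "sentence1": first.strip(),
                "sentence2": second.strip(),
                "label": int(label),
            })
        return records

Each returned record then exposes the two sentences and the binary label separately, e.g. load_pairs()[0]["label"].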
the most successful supervised phrase-structure parsers are feature-rich discriminative parsers which heavily depend on an underlying pcfg---the most successful supervised phrase-structure parsers are feature-rich discriminative parsers that heavily depend on an underlying pcfg grammar
1
hiero is a hierarchical phrase-based statistical mt framework that generalizes phrase-based models by permitting phrases with gaps---we propose a model to reason about context-dependent instructional language that displays strong dependencies
0
in this work , we calculated automatic evaluation scores for the translation results using a popular metric called bleu---in addition to these two key indicators , we evaluated the translation quality using an automatic measure , namely bleu score
1
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text
1
in a follow-up study on a larger group of children , gabani et al again used part-of-speech language models in an attempt to characterize the agrammaticality that is associated with language impairment---gabani et al used part-of-speech language models to derive perplexity scores for transcripts of the speech of children with and without language impairment
1
chemspot v1.0 achieved an overall f 1 of 68.1 % on the scai corpus---chemspot achieves an f 1 of 65.5 % for exact matching on the test corpus
1
in this paper , we present a machine learning approach to the identification and resolution of chinese anaphoric zero pronouns---then we train word2vec to represent each entity with a 100-dimensional embedding vector
0
thus we feel providing a method to speed up mcmc inference can have a significant impact---we propose an approximate mcmc framework that facilitates efficient inference
1
our research is conceptually similar to the work in ( cite-p-11-3-11 ) , which induces a “ human-likeness ” criterion---with human judgements , we source data from a dataset collected by the authors in ( cite-p-11-1-0 )
1
therefore , we expand the property context with additional words based on the technique of word embedding---we use srilm toolkits to train two 4-gram language models on the filtered english blog authorship corpus and the xinhua portion of gigaword corpus , respectively
0
in this study , we present a system that generates lexical analogies automatically from text data---in this study , we present a novel system for generating lexical analogies directly from a text corpus
1
huang et al presented an rnn model that uses document-level context information to construct more accurate word representations---state of the art statistical parsers are trained on manually annotated treebanks that are highly expensive to create
0
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit---we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing
1
we used the logistic regression implemented in the scikit-learn library with the default settings---we used the implementation of random forest in scikit-learn as the classifier
1
text summarization is a task to generate a shorter and concise version of a text while preserving the meaning of the original text---generating a condensed version of a passage while preserving its meaning is known as text summarization
1
we use mira to tune the parameters of the system to maximize bleu---we measure the translation quality using a single reference bleu
1
relation extraction is the task of detecting and classifying relationships between two entities from text---as shown in cite-p-23-13-10 this is a well-motivated convention since it avoids splitting up lexical rules to transfer the specifications that must be preserved for different lexical entries
0
trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing---a 4-gram language model was trained on the monolingual data by the srilm toolkit
1
zhang et al is an extension of zhang and clark using online large-margin training and incorporating a large-scale language model---zhang et al improve the ccg approach by zhang and clark by incorporating an n-gram language model
1
in this paper , we extend the popular chain-structured lstm to directed acyclic graph ( dag ) structures , with the aim to endow conventional lstm with the capability of considering compositionality and non-compositionality together---by extending the chain-structured lstm to directed acyclic graphs ( dags ) , with the aim to endow linear-chain lstms with the capability of considering compositionality together with non-compositionality
1
moreover , our method employs predicate inversion and repetition to resolve the problem that japanese has a predicate at the end of a sentence---we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data
0
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing
1
we use mini-batch update and adagrad to optimize the parameter learning---parameters are updated through backpropagation with adagrad for speeding up convergence
1
distributed word representations have been shown to improve the accuracy of ner systems---word embeddings have been used to help to achieve better performance in several nlp tasks
1
in particular , we use a set of analysis-level style markers , i.e. , measures that represent the way in which the text has been processed by the tool---we also use analysis-dependent style markers , that is , measures that represent the way in which the text has been processed
1
figure 1 also shows , in brackets , the augmented annotation described above from hale et al---figure 1 also shows , in brackets , the augmented annotation used by hale et al
1
the aim of this paper is to produce a methodology for analyzing sentiments of selected twitter messages , better known as tweets---the aim of this study is to produce a methodology for analyzing sentiments of selected twitter messages , better known as tweets
1
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---coreference resolution is the process of linking multiple mentions that refer to the same entity
1
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text---relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text
1
gildea and jurafsky classify semantic role assignments using all the annotations in framenet , for example , covering all types of verbal arguments---gildea and jurafsky describe a system that uses completely syntactic features to classify the frame elements in a sentence
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus
1
by imposing a composite ℓ1,∞ regularizer , we obtain structured sparsity , driving entire rows of coefficients to zero---this paper has proposed an incremental parser based on an adjoining operation
0
besides standard features , the phrase-based decoder also uses a maximum entropy phrasal reordering model---the decoder adopts the regular distance distortion model , and also incorporates a maximum entropy based lexicalized phrase reordering model
1
visual question answering ( vqa ) is a well-known and challenging task that requires systems to jointly reason about natural language and vision---visual question answering ( vqa ) is the task of answering natural-language questions about images
1
however , these conclusions contradict yamashita claiming that information structure is not crucial for scrambling---nevertheless , such large bilingual corpora are unavailable for most language pairs in the world , which causes a bottleneck for both the smt and nmt machine translation methods
0
in particular , we think of initializing our embedding matrices with distributed representations that come from a large-scale neural language model---the system was tuned with batch lattice mira
0
we implement logistic regression with scikit-learn and use the lbfgs solver---we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments
1
abstract meaning representation is a popular framework for annotating whole sentence meaning---abstract meaning representation is a compact , readable , whole-sentence semantic annotation
1
the phrase translation probabilities are smoothed with good-turing smoothing---the maximum likelihood estimates are smoothed using good-turing discounting
1
framenet is a widely-used lexical-semantic resource embodying frame semantics---framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm
1
we demonstrate , however , that this has little positive impact in our setting and can even be detrimental---liu et al allow for application of nonsyntactic phrase pairs in their tree-to-string alignment template system
0
we used an l2-regularized logistic regression classifier as implemented in liblinear---we built the svm classifiers using liblinear and applied its l2-regularized support vector regression model
1
the bnnjm uses the current target word as input , so the information about the current target word can be combined with the context word information and processed in hidden layers---the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation
0
in contrast , goldwasser et al proposed a self-supervised approach , which iteratively chose high-confidence parses to retrain the parser---finally , goldwasser et al presented an unsupervised approach of learning a semantic parser by using an em-like retraining loop
1
davidov et al describe a technique that transforms hashtags and smileys in tweets into sentiments---davidov et al studied the use of hashtags and emoticons in sentiment classification
1
parameters were tuned using minimum error rate training---the model parameters are trained using minimum error-rate training
1
the 4-gram language model was trained with the kenlm toolkit on the english side of the training data and the english wikipedia articles---the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime
1
for our baseline we use the moses software to train a phrase based machine translation model---we develop translation models using the phrase-based moses smt system
1
experimental results show that adversarial training substantially improves the performances of target embedding models under various settings---across various settings , this adversarial learning mechanism can significantly improve the performance of some of the most commonly used translation based kge methods
1
sagae and tsujii emulate a single iteration of cotraining by using maxent and svm , selecting the sentences where both models agreed and adding these sentences to the training set---sagae and tsujii co-train two dependency parsers by adding automatically parsed sentences for which the parsers agree to the training data
1
we evaluate against the gold standard dependencies for section 23 , which were extracted from the phrase structure trees using the standard rules by yamada and matsumoto---we extract dependency structures from the penn treebank using the penn2malt extraction tool , 5 which implements the head rules of yamada and matsumoto
1
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus---we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit
1
word embeddings have shown promising results in nlp tasks , such as named entity recognition , sentiment analysis or parsing---distributed word representations have been shown to improve the accuracy of ner systems
1
several structure-based learning algorithms have been proposed so far---there are several structure-based learning algorithms proposed so far
1
we tuned the weights in the log-linear model by optimizing bleu on the tuning dataset , using mert , pro , or mira---we evaluated the models using the wmt data set , computing the ter and bleu scores on the decoded output
1
translation performance is measured using the automatic bleu metric , on one reference translation---relation extraction is the task of detecting and classifying relationships between two entities from text
0
a pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another word , for an intended humorous or rhetorical effect---the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit
0
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting---for the language model , we used srilm with modified kneser-ney smoothing
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing
1
lms based on texts translated from the source language still outperform lms translated from other languages , however---systems that use lms based on manually translated texts significantly outperform lms based on originally written texts
1
we used the srilm software 4 to build language models as well as to calculate cross-entropy based features---we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing
1
we proposed a novel constituent hierarchy predictor based on recurrent neural networks , aiming to capture global sentential information---for improving shift-reduce parsing , we propose a novel neural model to predict the constituent hierarchy
1
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles---an annotation effort demonstrates implicit relations reveal as much as 30 % of meaning
0
we train the parameters of the stages separately using adagrad with the perceptron loss function---niculae and yaneva and niculae used constituency and dependency parsing-based techniques to identify similes in text
0
our model uses non-negative matrix factorization ( nmf ) in order to find latent dimensions---the model uses non-negative matrix factorization in order to find latent dimensions
1
the minimum error rate training was used to tune the feature weights---blitzer et al propose a domain adaptation method that uses the unlabeled target instances to infer a good feature representation , which can be regarded as weighting the features
0
for the machine learning component of our system we use the l2-regularised logistic regression implementation of the liblinear 3 software library---we utilise liblinear-java 3 with the l2-regularised l2-loss linear svm setting for the svm implementation , and snowball 4 for the stemmer
1
part-of-speech ( pos ) tagging is a well studied problem in these fields---part-of-speech ( pos ) tagging is a crucial task for natural language processing ( nlp ) tasks , providing basic information about syntax
1
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context---we train trigram language models on the training set using the sri language modeling toolkit
0
second , we introduce the use of vector space similarity in random walk inference in order to reduce the sparsity of surface forms---second , we have described how to incorporate vector space similarity into random walk inference over knowledge bases , reducing the feature sparsity inherent in using surface forms
1
ng et al exploit category-specific information for multi-document summarization---part of our research addresses the problem of medication detection from informal text
0
given the parameters of ibm model 3 , and a sentence pair math-w-5-1-0-21 , compute the probability math-w-5-1-0-30---given the model parameters and a sentence math-w-2-16-0-10 , determine the most probable translation of math-w-2-16-0-18
1
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 )---sentiment analysis is a ‘ suitcase ’ research problem that requires tackling many nlp subtasks , e.g. , aspect extraction ( cite-p-26-3-15 ) , named entity recognition ( cite-p-26-3-6 ) , concept extraction ( cite-p-26-3-20 ) , sarcasm detection ( cite-p-26-3-16 ) , personality recognition ( cite-p-26-3-7 ) , and more
1
language models were built using the srilm toolkit 16---the srilm toolkit was used to build the trigram mkn smoothed language model
1
the reordering model was trained with the hierarchical , monotone , swap , left to right bidirectional method and conditioned on both the source and target language---the reordering model was trained with the hierarchical , monotone , swap , left to right bidirectional method and conditioned on both the source and target languages
1
and mitchell and lapata propose a model for vector composition , focusing on the different functions that might be used to combine the constituent vectors---mitchell and lapata propose a framework for compositional distributional semantics using a standard term-context vector space word representation
1
in the present paper , however , we have deliberately formulated the general learning axioms of our theory so they do not depend on the robotic framework---in the present paper , however , we have deliberately formulated the general learning axioms of our theory
1
case-insensitive bleu is used to evaluate the translation results---the translation quality is evaluated by case-insensitive bleu-4
1
we used moses as the phrase-based machine translation system---we adapted the moses phrase-based decoder to translate word lattices
1
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text---coreference resolution is a set partitioning problem in which each resulting partition refers to an entity
1
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems---for decoding , we used the state-of-the-art phrasebased smt toolkit moses with default options , except for the distortion limit
1
senseclusters is a freely available system that identifies similar contexts in text---senseclusters is a freely-available open-source system that served as the university of minnesota , duluth entry in the senseval-4 sense induction task
1
moreover , we show that semlms improves the performance of coreference resolution , as well as that of predicting the sense of discourse connectives for both explicit and implicit ones---we show that our semlm helps improve performance on semantic natural language processing tasks such as coreference resolution and discourse parsing
1