text: string (lengths 82 to 736)
label: int64 (values 0 or 1)
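Each record below joins two sentences with "---" in the text field and attaches a binary label. A minimal sketch of how such a dump could be loaded and split for inspection follows, assuming the rows are available as a tab-separated file; the file name pairs.tsv and the column layout are assumptions for illustration, not part of the original data.

```python
# Minimal sketch: load the text/label pairs with pandas.
# "pairs.tsv" is a hypothetical file name for this dump.
import pandas as pd

# One row per record: the joined sentence pair and its integer label.
df = pd.read_csv("pairs.tsv", sep="\t", names=["text", "label"])

# Split the joined field on the "---" separator into its two sentences.
df[["sentence_a", "sentence_b"]] = df["text"].str.split("---", n=1, expand=True)

print(df["label"].value_counts())  # expected values: 0 and 1
print(df.loc[0, ["sentence_a", "sentence_b", "label"]])
```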
the log-linear feature weights are tuned with minimum error rate training on bleu---all the weights of those features are tuned by using minimal error rate training
1
we used the implementation of random forest in scikitlearn as the classifier---we used svm classifier that implements linearsvc from the scikit-learn library
1
such topic models are generally built from a large set of example documents as in , or in one component of---such topic models are generally built from a large set of example documents as in
1
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express---sentiment analysis is a growing research field , especially on web social networks
1
though kernel based methods get rid of the feature selection process , they need elaborately designed kernels and are also computationally expensive---since kernel methods require similarity computation between input samples , they are relatively computationally expensive
1
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset---we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization
1
the performance of the system for all subtasks in both languages shows substantial improvements in spearman correlation scores over the baseline models provided by task 1 organizers , ranging from 0.03 to 0.23---word alignment is the problem of annotating parallel text with translational correspondence
0
furthermore , we train a 5-gram language model using the sri language toolkit---a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit
1
there have been also researchs on taxonomy induction based on wordnet---there have been also researchs on taxonomy induction based on wordnet ,
1
to learn noun vectors , we use a skip-gram model with negative sampling---that regards all the generated questions as negative instances could not improve the accuracy of the qa model
0
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 )---we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit
0
while the former perform best in isolation , the latter present a scalable alternative within joint systems---while bns perform best in isolation , hmms represent a cheap and scalable alternative within the joint framework
1
we learn the noise model parameters using an expectation-maximization approach---coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept
0
the feature weights λ i are trained in concert with the lm weight via minimum error rate training---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm
1
for this task , we use glove pre-trained word embedding trained on common crawl corpus---we used glove vectors trained on common crawl 840b 4 with 300 dimensions as fixed word embeddings
1
log linear models have been proposed to incorporate those features---a variety of log-linear models have been proposed to incorporate these features
1
we obtained a phrase table out of this data using the moses toolkit---we adapted the moses phrase-based decoder to translate word lattices
1
tang et al utilize memory network to store context words and conduct multi-hop attention to get the sentiment representation towards aspects---we used the moses toolkit to train the phrase tables and lexicalized reordering models
0
this measure implements the rand index which has been originally developed to evaluate clustering methods---specifically , we tested the methods word2vec using the gensim word2vec package and pretrained glove word embeddings
0
our goal is to evaluate coreference systems on data that taxes even human coreference---we present a specialized dataset that specifically tests a human ’ s coreference
1
kennedy and inkpen use syntactic analysis to capture language aspects like negation and contextual valence shifters---kennedy and inkpen explore negation shifting by incorporating negation bigrams as additional features into machine learning approaches
1
we learn our word embeddings by using word2vec 3 on unlabeled review data---we used word2vec , a powerful continuous bag-of-words model to train word similarity
1
maximum entropy models 1 have been widely used in many nlp tasks---the maximum entropy statistical framework has been successfully deployed in several nlp tasks
1
we tuned parameters of the smt system using minimum error-rate training---we optimized each system separately using minimum error rate training
1
however , as far as we know , there is no publication available on mining bilingual sentences directly from bilingual web pages---as far as we know , there is no publication available on mining parallel sentences directly from bilingual web pages
1
all of the machine learning was done using scikit-learn---all training was done using the open-source machine learning toolkit scikit-learn 3
1
mintz et al proposed a distant supervision approach for relation extraction using a richfeatured logistic regression model---to alleviate this problem , mintz et al proposed relation extraction in the paradigm of distant supervision
1
the latter embeddings were trained on the english wikipedia dump using word2vec toolkit---the neural embeddings were created using the word2vec software 3 accompanying
1
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review---one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language
1
in this work , we explore the task of acquiring and incorporating external evidence to improve extraction accuracy in domains where the amount of training data is scarce---in this paper , we explore the task of acquiring and incorporating external evidence to improve information extraction accuracy
1
word alignment is a natural language processing task that aims to specify the correspondence between words in two languages ( cite-p-19-1-0 )---word alignment is the problem of annotating parallel text with translational correspondence
1
bordes et al , 2014 ) utilizes subgraph embedding to predict the confidence of candidate answers---to address this problem , we first propose a model to solve normal reading comprehension problems
0
cross-cultural differences and similarities are common in cross-lingual natural language understanding , especially for research in social media---differences and similarities are important in cross-cultural social studies , multilingual sentiment analysis , culturally sensitive machine translation , and many other nlp tasks , especially in social media
1
the proposed approach is shown to be robust against the coverage of kbs and the informality of the used language---proposed approach is shown to be robust against the coverage of kbs and the informality of the used language
1
we evaluate the translation quality using the case-insensitive bleu-4 metric---our baseline system is an standard phrase-based smt system built with moses
0
we obtained both phrase structures and dependency relations for every sentence using the stanford parser---basically , the merging of lexica has two well defined steps
0
automatic and manual evaluation results over meeting , chat and email conversations show that our approach significantly outperforms baselines and previous extractive models---we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems
0
for bi we use 2-gram kenlm models trained on the source training data for each domain---we use kenlm 3 for computing the target language model score
1
in our corpus , about 26 % questions do not need context , 12 % questions need type 1 context , 32 % need type 2 context and 30 % type 3---in our corpus , about 26 % questions do not need context , 12 % questions need type 1 context , 32 % need type 2 context
1
the weights associated to feature functions are optimally combined using the minimum error rate training---collobert et al adapted the original cnn proposed by lecun and bengio for modelling natural language sentences
0
word embeddings for english and hindi have been trained using word2vec 1 tool---the word embeddings were obtained using word2vec 2 tool
1
although single-document summarization is a well-studied task , the nature of multi-document summarization is only beginning to be studied in detail---although single-document summarization is a well-studied task ( see mani and maybury , 1999 for an overview ) , multi-document summarization is only recently being studied closely ( marcu & gerber 2001 )
1
relation extraction is a core task in information extraction and natural language understanding---relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text
1
neural networks , working on top of conventional n-gram models , have been introduced in as a potential means to improve conventional n-gram language models---neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve conventional language models
1
we use skip-gram with negative sampling for obtaining the word embeddings---with a corpus of utterances where we can isolate a single word or phrase that is responsible for the speaker ’ s level of certainty
0
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing
1
we applied a supervised machine-learning approach , based on conditional random fields---in our experiments we use a publicly available implementation of conditional random fields
1
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing---we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting
1
costa-jussà and fonollosa , 2006 ) view the source reordering as a translation task that translate the source language into a reordered source language---costa-jussa and fonollosa considered the source reordering as a translation task which translates the source sentence into reordered source sentence
1
finally , we experiment with adding a 5-gram modified kneser-ney language model during inference using kenlm---for all the systems we train , we build n-gram language model with modified kneserney smoothing using kenlm
1
we implement logistic regression with scikit-learn and use the lbfgs solver---we use the logistic regression implementation of liblinear wrapped by the scikit-learn library
1
we trained the embedding vectors with the word2vec tool on the large unlabeled corpus of clinical texts provided by the task organizers---we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset
1
to automatically evaluate machine translations the machine translation community recently adopted an n-gram co-occurrence scoring procedure bleu---to automatically evaluate machine translations , the machine translation community recently adopted an n-gram co-occurrence scoring procedure bleu
1
we model the sequence of morphological tags using marmot , a pruned higher-order crf---we train a secondorder crf model using marmot , an efficient higher-order crf implementation
1
the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context
1
text segmentation is the task of determining the positions at which topics change in a stream of text---text segmentation is the task of dividing text into segments , such that each segment is topically coherent , and cutoff points indicate a change of topic ( cite-p-15-1-8 , cite-p-15-3-4 , cite-p-15-1-3 )
1
our results show a clear improvement with respect to state-of-the-art systems---we present results that indicate a clear improvement on the state-of-the-art
1
the smt system is implemented using moses and the nmt system is built using the fairseq toolkit---the baseline system is a pbsmt engine built using moses with the default configuration
1
ma and xia used word alignments obtained from parallel data to transfer source language constraints to the target side---ma and xia built a dependency parser by maximizing the likelihood on parallel data and the confidence on unlabeled target language data
1
relation extraction is the task of finding relationships between two entities from text---we trained the l1-regularized logistic regression classifier implemented in liblinear
0
in the full-supervision setting of topic id , the lower-dimensional learned representations converge in performance to the raw representation as the dimension math-w-12-1-1-23 increases---in the full-supervision setting of topic id , the lower-dimensional learned representations converge in performance to the raw representation
1
the bleu score , introduced in , is a highly-adopted method for automatic evaluation of machine translation systems---the bleu metric has deeply rooted in the machine translation community and is used in virtually every paper on machine translation methods
1
numerous studies suggest that translated texts are different from original ones---studies suggests that all translated texts , irrespective of source language , share some so-called translation
1
light et al , medlock and briscoe , medlock , and szarvas ,---neural networks based models like seq2seq architecture are proven to be effective to generate valid responses for a dialogue system
0
all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training---minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set
1
string-based models include string-to-string and string-to-tree---nivre et al present a constrained decoding procedure for arc-eager transition-based parsers
0
we make use of moses toolkit for this paradigm---across a variety of evaluation scenarios , our algorithm consistently outperforms alternative strategies
0
the tokens are fed into an embedding layer which is initialized with glove word-embedding trained with a large twitter corpus---the word embeddings are initialized with the publicly available word vectors trained through glove 5 and updated through back propagation
1
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 )---relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text
1
translation quality is measured in truecase with bleu on the mt08 test sets---the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval
1
pos tagging is performed using the ims tree tagger---the pos tags used in the reordering model are obtained using the treetagger
1
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings---iyyer et al , 2014 ) addresses political ideology detection using recursive neural networks
0
this paper describes limsi ’ s submission to the conll 2017 ud shared task ( cite-p-20-3-5 ) , dedicated to parsing universal dependencies ( cite-p-20-1-10 ) on a wide array of languages---this paper describes limsi ’ s submission to the conll 2017 ud shared task , which is focused on small treebanks , and how to improve low-resourced parsing
1
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm---we use the 300-dimensional skip-gram word embeddings built on the google-news corpus
1
we use corpus-level bleu score to quantitatively evaluate the generated paragraphs---smith and eisner propose quasi-synchronous grammar for cross-lingual parser projection and assume the existence of hundreds of target language annotated sentences
0
we evaluate our method on two wordnetderived subtaxonomies and show that our method leads to the development of concept hierarchies that capture a higher number of correct taxonomic relations in comparison to those generated by current distributional similarity approaches---our evaluation on two wordnet-derived taxonomies shows that the learned taxonomies capture a higher number of correct taxonomic relations compared to those produced by traditional distributional similarity approaches
1
we applied topic modeling , particularly , latent dirichlet allocation to predict the topics expressed by given texts---to measure the importance of the generated questions , we use lda to identify the important sub-topics from the given body of texts
1
named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc---named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval
1
we present our work on using wikipedia as a knowledge source for natural language processing---we presented our previous efforts on using wikipedia as a semantic knowledge source
1
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit
1
ma et al further proposed bidirectional attention mechanism , which also learns the attention weights on aspect words towards the averaged vector of context words---ma et al used multiple sets of attentions , one for modeling the attention of aspect words and one for modeling the attention of context words
1
another advantage of the approach is that it does not need any information about the right number of clusters---an additional advantage of the approach is that it does not need any information about the right number of clusters
1
we use a tokeniser from the cmu twitter tagger extracting only unigrams and bigrams 3 to encode training instances---we used the cmu tokenizer , 3 which is a sub-module of the cmu twitter pos tagger
1
this paper presents a joint model of cr and pa in japanese---this paper presents an entity-centric joint model for japanese
1
named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc---named entity recognition ( ner ) is a well-known problem in nlp which feeds into many other related tasks such as information retrieval ( ir ) and machine translation ( mt ) and more recently social network discovery and opinion mining
1
fothergill and baldwin expected that wsd features -however successful at type specialised classification -would lose their advantage in crosstype classification because of the lack of a common semantics between mwe-types---fothergill and baldwin introduced features for crosstype classification which captured features of the mwe-type , reasoning that similar expressions would have similar propensity for idiomaticity
1
the training data are tagged with pos tags and lemmatized with treetagger---before semantic alignment is carried out , all hypothesis and text terms are lemmatized using treetagger
1
a standard sri 5-gram language model is estimated from monolingual data---a 4-gram language model is trained on the monolingual data by srilm toolkit
1
we use the publicly available 300-dimensional word vectors of mikolov et al , trained on part of the google news dataset---we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news
1
the selection order is similar to that in the competitive linking algorithm---the first algorithm is similar to competitive linking
1
we compared sn models with two different pre-trained word embeddings , using either word2vec or fasttext---the model parameters in word embedding are pretrained using glove
0
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text---sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 )
1
we adopt the greedy feature selection algorithm as described in jiang and ng to pick up positive features empirically and incrementally according to their contributions on the development data---therefore , we adopt the greedy feature selection algorithm as described in jiang and ng to pick up positive features incrementally according to their contributions on the development data
1
in contrast , we are able to add new ( high-quality ) labels for a term with our evidence propagation method---with the most popular patterns , we are not able to extract correct labels for many ( especially rare ) entities
1
for english , we conduct experiments on the general american variant of the combilex data set---for processing large text collections , we revisit the work of cite-p-11-3-5 on using the locality sensitive hash ( lsh ) method of cite-p-11-1-0
0
our word embeddings is initialized with 100-dimensional glove word embeddings---we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word
1
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news---we train a word2vec cbow model on raw 517 , 400 emails from the en-ron email dataset to obtain the word embeddings
1
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text---relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text
1
we use 300 dimension word2vec word embeddings for the experiments---to tackle this issue , we leverage pretrained word embeddings , specifically the 300 dimension glove embeddings trained on 42b tokens of external text corpora
0
srl is a complex task , which is reflected by the algorithms used to address it---srl is the process by which predicates and their arguments are identified and their roles are defined in a sentence
1