text: string (lengths 82 to 736)
label: int64 (values 0 or 1)
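Each record below is one value of the text column (two sentences joined by ---) followed on the next line by its label value; judging from the examples, label 1 marks pairs that express the same claim and label 0 marks unrelated pairs. A parsing sketch follows the listing.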
using a large corpus and human-oriented tests we describe a comprehensive study of word similarity measures and co-occurrence estimates , including variants on corpus size---for word similarity measures , we compare the results of several different measures and frequency estimates to solve human-oriented language tests
1
our model is a structured conditional random field---the resulting model is an instance of a conditional random field
1
the srilm toolkit was used for training the language models using kneser-ney smoothing---the language models were 5-gram models with kneser-ney smoothing built using kenlm
1
we train the word embeddings using the training and development sets of each dataset with the word2vec tool---we adopt pretrained embeddings for word forms with the provided training data by word2vec
1
dependency parsing is a fundamental task for language processing which has been investigated for decades---dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation
1
liu et al focused on the sentence boundary detection task , by making use of conditional random fields---the model weights are automatically tuned using minimum error rate training
0
relation extraction is the task of finding semantic relations between two entities from text---relation extraction is a crucial task in the field of natural language processing ( nlp )
1
we learn the noise model parameters using an expectation-maximization approach---coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set
0
we train our ri model on over 30 million words of the english gigaword corpus using the s-space package---we train our random indexing model on over 30 million words of the english gigaword corpus using the s-space package
1
the present paper focuses on extraction-based single-document summarization---present paper introduces a novel method for single-document text summarization
1
word2vec , glove and fasttext are the most simple and popular word embedding algorithms---to date , the most popular architectures to efficiently estimate these distributed representations are word2vec and glove
1
our smt system is a phrase-based system based on the moses smt toolkit---moses is used as a baseline phrase-based smt system
1
a phrase-based smt system takes a source sentence and produces a translation by segmenting the sentence into phrases and translating those phrases separately---we used the maximum entropy approach 5 as a machine learner for this task
0
we showed improvements in translation quality incorporating these models within a phrase-based smt system---semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information
0
the use of woz data allows us to develop optimal strategies for domains where no working prototype is available---use of woz data allows development of optimal strategies for domains where no working prototype is available
1
coreference resolution is the task of identifying all mentions which refer to the same entity in a document---coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model
1
börschinger et al . ( 2011 ) introduced an approach to grounded language learning based on unsupervised pcfg induction---borschinger et al . ( 2011 ) ’ s approach to reducing the problem of grounded learning of semantic parsers to pcfg induction
1
we show that by using well calibrated probabilities , we can estimate the sense priors more effectively---by using well calibrated probabilities , we are able to estimate the sense priors effectively
1
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer
1
a pattern is a phrasal construct of varying degrees of specificity---a pattern is a sequence of conditions that must hold true for a sequence of terms
1
this study utilized word embeddings to investigate the semantic representations in brain activity as measured by fmri---in this paper , we first study the semantic representation of words in brain activity
1
discrete representations consist of memberships in a hard clustering of words , eg , via kmeans or the brown et al algorithm---the representations are typically either clusters of distributionally similar words , eg , brown et al , or vector representations
1
for all languages except spanish , we used the treetagger with its built-in lemmatizer---for all languages in our dataset , we used treetagger with its built-in lemmatiser
1
our approach , called ‘ iterated reranking ’ ( ir ) , starts with dependency trees generated by an unsupervised parser , and iteratively improves these trees using the richer probability models used in supervised parsing that are in turn trained on these trees---we propose a framework , iterated reranking ( ir ) , where existing supervised parsers are trained without the need of manually annotated data , starting with dependency trees provided by an existing unsupervised parser
1
finkel et al used simulated annealing with gibbs sampling to find a solution in a similar situation---in this paper , we proposed a novel probabilistic generative model to deal with explicit multiple-topic documents
0
we use srilm for training a trigram language model on the english side of the training data---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus
1
neural networks are among the state-of-the-art techniques for language modeling---recurrent neural networks ( rnns ) can also be used for language modeling
1
we used moses with the default configuration for phrase-based translation---for decoding , we used moses with the default options
1
out of the annotated causal links , only 117 caselli and vossen causal relations are indicated by explicit causal cue phrases while the others are implicit---caselli and vossen showed that only 117 annotated causal relations in this dataset are indicated by explicit causal cue phrases while the others are implicit
1
different from their methods , we propose sentence-level attention over multiple instances , which can utilize all informative sentences---as compared to existing neural relation extraction model , our model can make full use of all informative sentences
1
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting---in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization
0
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus---we pre-train the word embedding via word2vec on the whole dataset
1
previous focusing research has not adequately addressed the processing of complex sentences---previous work on focusing has not adequately addressed the processing of complex ( i . e . , multiclausal ) sentences
1
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles---semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 )
1
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp---for the source side we use the pos tags from stanford corenlp mapped to universal pos tags
1
the target fourgram language model was built with the english part of training data using the sri language modeling toolkit---the language model was generated from the europarl corpus using the sri language modeling toolkit
1
we make use of a factorization model in which words , together with their window-based context words and their dependency relations , are linked to latent dimensions---the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus
0
verbnet is a verb lexicon with syntactic and semantic information for english verbs , referring to levin verb classes to construct the lexical entries---verbnet is a very large lexicon of verbs in english that extends levin with explicitly stated syntactic and semantic information
1
we use the moses toolkit to train our phrase-based smt models---our baseline system is an standard phrase-based smt system built with moses
1
part-of-speech ( pos ) tagging is a well studied problem in these fields---part-of-speech ( pos ) tagging is a job to assign a proper pos tag to each linguistic unit such as word for a given sentence
1
to reduce this effect , attempts have been made to adapt nlp tools to microblog data---previous work has shown that off-the-shelf nlp tools can perform poorly on microblogs
1
textual entailment is a similar phenomenon , in which the presence of one expression licenses the validity of another---textual entailment is the task of automatically determining whether a natural language hypothesis can be inferred from a given piece of natural language text
1
in this paper , we improve this model by explicitly incorporating source-side syntactic trees---in this paper , we propose a novel encoder-decoder model that makes use of a precomputed source-side syntactic tree
1
chiang and cherry used a soft constraint to award or penalize hypotheses which respect or violate syntactic boundaries---marton and resnik and cherry use syntactic cohesion as a soft constraint by penalizing hypotheses which violate constituent boundaries
1
first , we examine three subproblems that play a role in coreference resolution : named entity recognition , anaphoricity determination , and coreference element detection---to evaluate our method we use the cross-domain sentiment classification dataset prepared by blitzer et al
0
as shown in ( cite-p-15-3-3 ) , this is the main reason that models with embedding features made more errors than those with brown cluster features---as shown in cite-p-23-13-10 this is a well-motivated convention since it avoids splitting up lexical rules to transfer the specifications that must be preserved for different lexical entries
1
the linguistic and computational attractiveness of lexicalized grammars for modeling idiosyncratic constructions in french was identified by abeillé and abeillé and schabes---abeillé and abeillé and schabes identified the linguistic and computational attractiveness of lexicalized grammars for modeling non-compositional constructions in french well before dop
1
( callison-burch et al , 2007 ) show that ranking sentences gives higher inter-annotator agreement than scoring adequacy and fluency---callison-burch et al show that ranking sentences gives higher inter-annotator agreement than scoring adequacy and fluency
1
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit---transliteration is the task of converting a word from one alphabetic script to another
0
however , commonsense knowledge is so clear for every person that it is often omitted in a text---however , commonsense knowledge is likely to be omitted from texts because it is assumed that every person knows such knowledge
1
this means in practice that the language model was trained using the srilm toolkit---the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm
0
we devise a framework that allows a better integration of non-bottom-up features---we propose a new framework as a generalization of the cky-like bottom-up approach
1
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts---mitchell et al studied self-identified schizophrenia patients on twitter and found that linguistic signals may aid in identifying and getting help to people suffering from it
0
here we employ the target word embedding as an attention to select the most appropriate senses to make up context word embeddings---we successfully apply the attention scheme to detect word senses and learn representations according to contexts
1
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit---semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance
0
it uses flexible semantic templates to specify semantic patterns---and the accompanying semantic templates are open source
1
our ner model is built according to conditional random fields methods , by which we convert the problem of ner into that of sequence labeling---we solve this sequence tagging problem using the mallet implementation of conditional random fields
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems---word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs
1
relation extraction is a core task in information extraction and natural language understanding---the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training
0
in this paper , we conducted an empirical study of chinese chunking---we use pre-trained glove embeddings to represent the words
0
in word embedding algorithms , syntactic and semantic information of words is encoded into low-dimensional real vectors and similar words tend to have close vectors---word embeddings represent each word as a low-dimensional vector where the similarity of vectors captures some aspect of semantic similarity of words
1
we present a method for self-training event extraction systems by bootstrapping additional training data---we present a method for self-training event extraction systems by taking advantage of parallel mentions of the same event instance
1
natural language text is the most difficult subtask in discourse parsing---natural language text usually consists of topically structured and coherent components , such as groups of sentences that form paragraphs and groups of paragraphs that form sections
1
but this model suffers from the problem that the number of transition actions is not identical for different hypotheses in decoding , leading to the failure of performing optimal search---transitions for each hypothesis path is not identical to 2 ∗ n , which leads to the failure of performing optimal search during decoding
1
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 )---relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text
1
the evaluation method is the case insensitive ibm bleu-4---the evaluation metric is case-sensitive bleu-4
1
we used svm-light-tk , which enables the use of the partial tree kernel---we use svm-light-tk to train our reranking models , which enables the use of tree kernels in svm-light
1
our word embeddings is initialized with 100-dimensional glove word embeddings---to address this problem , long short-term memory network was proposed in where the architecture of a standard rnn was modified to avoid vanishing or exploding gradients
0
in this paper , we are concerned about two generally well understood operators on feature functions : addition and conjunction---in this paper , we use the connection between tensor products and conjunctions to prove algebraic properties of feature
1
the language model was trained using kenlm---language models were trained with the kenlm toolkit
1
to test this capability , we applied the trained parser to natural language queries against freebase---we applied our algorithm to construct a semantic parser for freebase
1
moreover , since event coreference resolution is a complex task that involves exploring a rich set of linguistic features , annotating a large corpus with event coreference information for a new language or domain of interest requires a substantial amount of manual effort---event coreference resolution is the task of determining which event mentions expressed in language refer to the same real-world event instances
1
we used the logistic regression implemented in the scikit-learn library with the default settings---rahman and ng in particular propose the cluster-ranking model which we used in our baseline
0
following the previous work , our evaluation metric is f-score of rouge---to evaluate the evidence span identification , we calculate f-measure on words , and bleu and rouge
1
we used the srilm toolkit and kneser-ney discounting for estimating 5-grams lms---we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing
1
xu et al and santos et al both used convolutional architectures along with negative sampling to pursue this task---( xu et al , 2015b ) used the convolutional network and proposed a ranking loss function with data cleaning
1
text categorization is the task of classifying documents into a certain number of predefined categories---text categorization is a classical text information processing task which has been studied adequately ( cite-p-18-1-9 )
1
dependency parsing is a fundamental task for language processing which has been investigated for decades---from the above two dimensions , we show all existing systems for ecd
0
our approach achieves an f 1 score of 0.485 on the implicit relation labeling task for the penn discourse treebank---with an empirical evaluation on the penn discourse treebank ( pdtb ) ( cite-p-11-3-7 ) dataset , which yields an f 1 score of 0 . 485
1
in order to measure translation quality , we use bleu 7 and ter scores---we used the bleu score to evaluate the translation accuracy with and without the normalization
1
zhou et al explore various features in relation extraction using support vector machine---zhou et al and zhao and grishman studied various features and feature combinations for relation extraction
1
this paper describes our submission to the semeval-2015 task 7 , “ diachronic text evaluation ” ( cite-p-9-1-9 )---we present our submission to semeval-2015 task 7 : diachronic text evaluation , in which we approach the task of assigning a date to a text
1
this enables the low-resource language to utilize the lexical and sentence representations of the higher resource languages---the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )
0
our approach explicitly determines the words which are equally significant with a consistent polarity across source and target domains---in this paper , we propose that the words which are equally significant with a consistent polarity across domains
1
all the training data is available for research purposes at http : //trainomatic.org---which cover english , italian and spanish , are made available to the community at http : / / trainomatic . org
1
we use the glove pre-trained word embeddings for the vectors of the content words---for input representation , we used glove word embeddings
1
in each plot , a single arrow signifies one word , pointing from the position of the original word embedding to the updated representation---in each plot , a single arrow signifies one word , pointing from the position of the original word
1
relation extraction is a fundamental task in information extraction---neural networks have been successfully applied to nlp problems , specifically , sequence-to-sequence or models applied to machine translation and word-to-vector
0
ng et al show that it is possible to use automatically word aligned parallel corpora to train accurate supervised wsd models---for instance , ng et al showed that it is possible to use word aligned parallel corpora to train accurate supervised wsd models
1
in this paper , we also propose a framework to bootstrap with unlabeled data---in this work , we are concerned with a coarse grained semantic analysis over sparse data
1
finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long-short term memory network---in this paper , we propose the use of autoencoders based on long short term memory neural networks for capturing long distance relationships between phonemes in a word
1
in this paper , we propose a novel unified model called siamese convolutional neural network for cqa---in this paper , we propose a novel approach called “ siamese convolutional neural network for cqa ( scqa ) ”
1
cardie et al took advantage of opinion summarization to support multi-perspective question answering system which aims to extract opinion-oriented information of a question---( cardie et al , 2003 ) employed opinion summarization to support a multiperspective qa system , aiming at identifying the opinion-oriented answers for a given set of questions
1
the automatic prediction of aspectual classes is very challenging for verbs whose aspectual value varies across readings , which are the rule rather than the exception---on a type level , this method does not give satisfying results for verbs whose aspectual value varies across readings ( henceforth ‘ aspectually polysemous verbs ’ ) , which are far from exceptional ( see section 3 )
1
automatically identifying the polarity of words is a very important task in natural language processing---predicting the semantic orientation of words is a very interesting task in natural language processing
1
one very challenging problem for spoken dialog systems is the design of the utterance generation module---in our previous work we developed a word-sense induction system based on topic modelling , specifically a hierarchical dirichlet process
0
in this work , we present large scale automated analyses of movie characters using language used in dialogs to study stereotyping along factors such as gender , race and age---we perform fine grained comparisons of character portrayal using multiple language based metrics along factors such as gender , race and age
1
ppdb we use lexical features from the paraphrase database---we use phrase pairs from the paraphrase database
1
we evaluate text generated from gold mr graphs using the well-known bleu measure---we evaluate the performance of different translation models using both bleu and ter metrics
1
contradiction was rare in the rte-3 test set , occurring in only about 10 % of the cases , and systems found accurately detecting it difficult---we initialize the embedding layer weights with glove vectors
0
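A minimal sketch, in Python, for reading a dump in this layout back into structured records. It assumes the dump has been saved as a plain-text file (the name pairs.txt is hypothetical) with the two header lines removed, so that text rows and label rows strictly alternate; the label semantics (1 = same claim, 0 = unrelated) are inferred from the examples above rather than from any official schema.

```python
# Minimal parsing sketch under the assumptions stated above.
from dataclasses import dataclass
from typing import List

@dataclass
class PairRecord:
    sentence_a: str
    sentence_b: str
    label: int  # 1 = same claim, 0 = unrelated (inferred from the examples)

def parse_dump(path: str) -> List[PairRecord]:
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    records = []
    # Rows strictly alternate: a text line ("a---b"), then its label line ("0" or "1").
    for text_line, label_line in zip(lines[0::2], lines[1::2]):
        a, _, b = text_line.partition("---")  # split on the pair separator
        records.append(PairRecord(a.strip(), b.strip(), int(label_line)))
    return records

if __name__ == "__main__":
    for rec in parse_dump("pairs.txt")[:3]:  # pairs.txt is a hypothetical filename
        print(rec.label, "|", rec.sentence_a[:60], "|", rec.sentence_b[:60])
```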