text (string, lengths 82–736) | label (int64: 0 or 1) |
---|---|
deep convolutional neural networks s are recently extensively used in many computer vision and nlp tasks---deep convolutional networks have been successfully applied in image classification and understanding | 1 |
they are equivalent to the 22 gaze features used by barrett et al---our non-gaze features are almost equivalent to barrett et al | 1 |
we adopt the common problem formulation for this task described by merialdo , in which we are given a raw 24,115-word sequence and a dictionary of legal tags for each word type---we adopt the problem formulation of merialdo , in which we are given a raw word sequence and a dictionary of legal tags for each word type | 1 |
simulating test collections for evaluating retrieval quality offers a viable alternative and has been explored in the literature---this paper presents an approach that detects various audience attributes , including author | 0 |
recently , distributed word representations using the skip-gram model has been shown to give competitive results on analogy detection---also , the skip-gram model is extended in to learn contextual word pair similarity in an unsupervised way | 1 |
we used the moses toolkit to build mt systems using various alignments---we used the moses tree-to-string mt system for all of our mt experiments | 1 |
the scores are usually computed based on a combination of statistical and linguistic features , including term frequency , sentence position , cue words , stigma words , topic signature , etc---word embeddings are initialized with pretrained glove vectors 1 , and updated during the training | 0 |
we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training---we set all feature weights by optimizing bleu directly using minimum error rate training on the tuning part of the development set | 1 |
choi et al jointly extracted opinion expressions , holders and their is-from relations using an ilp approach---lexical analogies also have applications in word sense disambiguation , information extraction , question-answering , and semantic relation classification | 0 |
the total cost is more than an order of magnitude lower than professional translation---sentiment analysis ( cite-p-12-3-17 ) is a popular research topic which has a wide range of applications , such as summarizing customer reviews , monitoring social media , and predicting stock market trends ( cite-p-12-1-4 ) | 0 |
for all classifiers , we used the scikit-learn implementation---we used the scikit-learn library the svm model | 1 |
we demonstrate the effectiveness of multilingual learning for unsupervised part-of-speech tagging---in this paper , we explore the application of multilingual learning to part-of-speech tagging | 1 |
experiments using real-life online debate data showed the effectiveness of the model---experimental results show that the proposed model is highly effective in performing its tasks | 1 |
we chose the skip-gram model provided by word2vec tool developed by for training word embeddings---we describe a new technique for parsing free text : a transformational grammar | 0 |
our experiments use the ghkm-based string-totree pipeline implemented in moses---our implementation of the segment-based imt protocol is based on the moses toolkit | 1 |
the statistical significance test is performed using the re-sampling approach---we apply statistical significance tests using the paired bootstrapped resampling method | 1 |
the most prominent of such resources is the framenet , which provides a set of more than 1,200 generic semantic frames , as well as over 200,000 annotated sentences in english---among these , the berkeley framenet database is a semantic lexical resource consisting of frame-semantic descriptions of more than 7000 english lexical items , together with example sentences annotated with semantic roles | 1 |
ammar et al propose two algorithms , multicluster and multicca , for multilingual word embeddings using set of bilingual lexicons---multicluster and multicca are the models proposed from ammar et al trained on monolingual data using bilingual lexicons extracted from aligning europarl corpus | 1 |
the set of dm-wizard messages in this phase were constrained based on the messages from the first phase---we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm | 0 |
this validates our attempt of employing the centering theory in pronoun resolution from the semantic perspective instead of from the grammatical perspective---in pronoun resolution is guided by extending the centering theory from the grammatical level to the semantic level | 1 |
the graph formulation subsumes linear-chain and tree lstms and makes it easy to incorporate rich linguistic analysis---by adopting the graph formulation , our framework subsumes prior approaches based on chain or tree lstms , and can incorporate a rich set of linguistic analyses | 1 |
we used a generative language modeling for ir as the context less ranking algorithm ,---the feature representation significantly improves the accuracy of our transition-based dependency parser | 0 |
we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data---we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit | 1 |
we show that , surprisingly , dynamic programming is in fact possible for many shift-reduce parsers , by merging “equivalent” stacks based on feature values---for a large class of modern shift-reduce parsers , dynamic programming is in fact possible and runs in polynomial time | 1 |
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing | 0 |
our model is thus a form of quasi-synchronous grammar---coreference resolution is the process of linking together multiple expressions of a given entity | 0 |
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---with equal corpus sizes , we found that there is a clear effect of text type on text prediction quality | 0 |
nakagawa , 2004 ) used hybrid hmm models to integrate word level and character level information seamlessly---nakagawa , 2004 ) proposed integration of word and oov word position tag in a trellis | 1 |
user affect parameters can increase the usefulness of these models---parameters do produce useful models of student learning | 1 |
in this paper , we extent pv by introducing concept information---in order to alleviate the data sparseness in chunk-based translation , we applied the back-off translation method | 0 |
alikaniotis et al and taghipour and ng both present neural systems trained and evaluated on the asap kaggle dataset of student essays---analysis is based on the analysis of the pronunciation of the vowels found in the data set | 0 |
furthermore , we train a 5-gram language model using the sri language toolkit---we use 5-grams for all language models implemented using the srilm toolkit | 1 |
kalchbrenner et al proposed to extend cnns max-over-time pooling to k-max pooling for sentence modeling---kalchbrenner et al introduced a convolutional neural network for sentence modeling that uses dynamic k-max pooling to better model inputs of varying sizes | 1 |
due to the name variation problem and the name ambiguity problem , the entity linking decisions are critically depending on the heterogenous knowledge of entities---here , we focus on fully unsupervised relation extraction | 0 |
in particular , we define the task of classifying the purchase stage of each tweet in a user ’ s tweet sequence---given a user ’ s tweet sequence , we define the purchase stage identification task as automatically determining for each tweet | 1 |
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus---as word vectors the authors use word2vec embeddings trained with the skip-gram model | 1 |
for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences---on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneyser-ney smoothing | 1 |
wsd assigns to each induced cluster a score equal to the sum of weights of its hyperedges found in the local context of the target word---wsd assigns to each cluster a score equal to the sum of weights of its hyperedges found in the local context of a target word | 1 |
feature weights are tuned using minimum error rate training on the 455 provided references---the log-linear parameter weights are tuned with mert on the development set | 1 |
a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data---an 5-gram target language model was estimated using the sri lm toolkit the development and test datasets were randomly chosen from the corpus and consisted of 500 and 1,000 sentences , respectively | 1 |
all language models were trained using the srilm toolkit---this means in practice that the language model was trained using the srilm toolkit | 1 |
we use europarl as third-party corpus , because it is large and contains most languages addressed in this shared task---our main corpus is europarl , which is available for all 4 language pairs of the evaluation | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we used data from the conll-x shared task on multilingual dependency parsing | 0 |
the detection model is implemented as a conditional random field , with features over the morphology and context---the tagger is based on the implementation of conditional random fields in the mallet toolkit | 1 |
sennrich et al introduced an effective approach based on encoding rare and out-of-vocabulary words as sequences of subword units---sennrich et al introduced a simpler and more effective approach to encode rare and unknown words as sequences of subword units by byte pair encoding | 1 |
coreference resolution is a field in which major progress has been made in the last decade---in this paper , we discuss methods for automatically creating models of dialog structure | 0 |
in the penn treebank , null elements , or empty categories , are used to indicate non-local dependencies , discontinuous constituents , and certain missing elements---in practical treebanking , empty categories have been used to indicate long-distance dependencies , discontinuous constituents , and certain dropped elements | 1 |
when evaluated on a large set of manually annotated sentences , we find that our method significantly improves over state-of-the-art baseline models---in this paper , we present an experimental study on solving the answer selection problem | 0 |
for preprocessing the corpus , we use the stanford pos-tagger and parser included in the dkpro framework---for the evaluation of translation quality , we used the bleu metric , which measures the n-gram overlap between the translated output and one or more reference translations | 0 |
twitter is a microblogging service that has 313 million monthly active users 1---twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research | 1 |
we use the moses toolkit to train our phrase-based smt models---we use a pbsmt model built with the moses smt toolkit | 1 |
we used the stanford parser to extract dependency features for each quote and response---we parsed all source side sentences using the stanford dependency parser and trained the preordering system on the entire bitext | 1 |
for this purpose , we use an open-source suite of multilingual syntactic analysis , deppattern---to parse text , we use an open-source suite of multilingual syntactic analysis , deppattern | 1 |
descriptions are transformed into a vector by adding the corresponding word2vec embeddings---we use mateplus for srl which produces predicate-argument structures as per propbank | 0 |
nevertheless , we can apply long short-term memory structure for source and target words embedding---we use long shortterm memory networks to build another semanticsbased sentence representation | 1 |
kennedy and inkpen did sentiment analysis of movie and product reviews by utilizing the contextual shifter information---kennedy and inkpen performs sentiment analysis of movie and product reviews by utilizing the contextual shifter information | 1 |
throughout this work , we use the datasets from the conll 2011 shared task 2 , which is derived from the ontonotes corpus---second , we evaluate on the ontonotes 5 corpus as used in the conll 2012 coreference shared task | 1 |
the semantic content of the elicited speech can then be scored by counting the hsicus present in the description---speech can then be scored by counting the hsicus present in the description | 1 |
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit | 1 |
we calculated the language model probabilities using kenlm , and built a 5-gram language model from the english gigaword fifth edition---as a case study , we applied our method to evaluate algorithms for learning inference rules | 0 |
our smt system is a phrase-based system based on the moses smt toolkit---our baseline system is an standard phrase-based smt system built with moses | 1 |
we used the 200-dimensional word vectors for twitter produced by glove---we used the phrasebased smt system moses to calculate the smt score and to produce hfe sentences | 0 |
coreference resolution is the task of determining when two textual mentions name the same individual---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text | 1 |
results show approximately 6-10 % cer reduction of the acms in comparison with the word trigram models , even when the acms are slightly smaller---in this study , we focus on investigating the feasibility of using automatically inferred personal traits in large-scale brand preference | 0 |
for training our system classifier , we have used scikit-learn---we used the scikit-learn library the svm model | 1 |
zeng et al developed a deep convolutional neural network to extract lexical and sentence level features , which are concatenated and fed into the softmax classifier---we formalize the problem as submodular function maximization under the budget constraint | 0 |
text classification is the assignment of predefined categories to text documents---the weights for these features are optimized using mert | 0 |
word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context---we used moses , a phrase-based smt toolkit , for training the translation model | 0 |
this paper explains the problem of word segmentation in urdu---work presents a preliminary effort on word segmentation problem in urdu | 1 |
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text---relation extraction is the task of finding semantic relations between entities from text | 0 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing | 1 |
in this paper , we presented a first system for arabic srl system---in this paper , we present a system for arabic | 1 |
we use the moses smt framework and the standard phrase-based mt feature set , including phrase and lexical translation probabilities and a lexicalized reordering model---we use the moses package for this purpose , which uses a phrase-based approach by combining a translation model and a language model to generate paraphrases | 1 |
experiments on english–chinese and english– french show that compared with previous combination methods , our approach produces significantly better translation results---we use the glove word vector representations of dimension 300 | 0 |
in this paper we address paraphrase in twitter task by building a supervised classification model---in this work , we built a supervised binary classifier for paraphrase judgment | 1 |
efforts to detect offensive text in online textual content have been undertaken previously for other languages as well like german and arabic---offensive text classification in other online textual content have been tried previously for other languages as well like german and arabic | 1 |
we use the stanford pos tagger to obtain the lemmatized corpora for the sre task---on five nlp tasks , our single model achieves the state-of-the-art or competitive results on chunking , dependency parsing , semantic relatedness , and textual entailment | 0 |
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context---word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs | 1 |
automatic text summarization is a rapidly developing field in computational linguistics---automatic text summarization is the task of generating/extracting short text snippet that embodies the content of a larger document or a collection of documents in a concise fashion | 1 |
we present a novel learning method for word embeddings designed for relation classification---natural language generation is the process of automatically converting non-linguistic data into a linguistic output format | 0 |
we used a phrase-based smt model as implemented in the moses toolkit---we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit | 1 |
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit---trigram language models are implemented using the srilm toolkit | 0 |
t , w ranges over all words in the training data , and math-w-7-7-0-13 ranges over all chunk tags supplied in the training data---in the training data , and math-w-7-7-0-13 ranges over all chunk tags supplied in the training data | 1 |
we apply s truct vae to semantic parsing and code generation tasks , and show it outperforms a strong supervised parser using extra unlabeled data---code generation show that with extra unlabeled data , s truct vae outperforms strong supervised models | 1 |
we used the srilm software 4 to build langauge models as well as to calculate cross-entropy based features---we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting | 1 |
schwenk proposed a feedforward network that predicts phrases of a fixed maximum length , such that all phrase words are predicted at once---schwenk proposed a feed-forward network that computes phrase scores offline , and the scores were added to the phrase table of a phrasebased system | 1 |
in this study , we propose a co-training approach to improving the classification accuracy of polarity identification of chinese product reviews---in this study , we focus on improving the corpus-based method for cross-lingual sentiment classification of chinese product reviews | 1 |
recently , galley and manning introduced a hierarchical model capable of analyzing alignments beyond adjacent phrases---for standard phrase-based translation , galley and manning introduced a hierarchical phrase orientation model | 1 |
our experiments show that performance improves steadily as the number of languages increases---we ’ ve demonstrated that the benefits of unsupervised multilingual learning increase steadily with the number of available languages | 1 |
in other words , simply increasing the number of parameters in the model does not necessarily increase predictive power of the model---that increases the accuracy of the model ' s predictions while reducing the number of free parameters in the model | 1 |
we measured performance using the bleu score , which estimates the accuracy of translation output with respect to a reference translation---we have used penn tree bank parsing data with the standard split for training , development , and test | 0 |
the existing methods use only the information in either language side---methods make use of the information from only one language side | 1 |
smyth et al , rogers et al , and raykar et al all discuss the advantages of learning and evaluation with probabilistically annotated corpora---coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity | 0 |
we used the svd implementation provided in the scikit-learn toolkit---although wordnet is a fine resources , we believe that ignoring other thesauri is a serious oversight | 0 |
we used yamcha , a multi-purpose chunking tool , to train our word segmentation models---we extend the rapp model of context vector projection using a seed lexicon | 0 |
in section 3 , we discuss our method to integrating the speech and search components---in this paper , we discuss the benefits of tightly coupling speech recognition and search components | 1 |
an effective solution for these problems is the long short-term memory architecture---we train trigram language models on the training set using the sri language modeling tookit | 0 |
in this paper , we evaluated five models for the acquisition of selectional preferences---in this paper , we focus on class-based models of selectional preferences | 1 |
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit---the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique | 1 |
in this method , the nonterminals are split to different degrees , as appropriate to the actual complexity in the data---in this method , the nonterminals are split to different degrees , as appropriate to the actual complexity | 1 |
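
Each row above packs two text segments into the single `text` field, joined by `---`, with a binary `label` in the second column; from the examples, the label appears to mark whether the two segments describe the same method or resource (1) or are unrelated (0). Below is a minimal parsing sketch assuming exactly that row layout; the function name `parse_row` is illustrative, and the inline example row is taken verbatim from the table.

```python
# Minimal sketch for parsing one raw row of this dataset. Assumes the row has
# the form "<segment A>---<segment B> | <label> |" and that the text segments
# themselves contain no "|" characters, as in all rows shown above.

def parse_row(row: str) -> tuple[str, str, int]:
    """Split a raw row into its two text segments and its integer label."""
    text, label_field, _ = row.rsplit("|", 2)    # "... | 1 |" -> text, " 1 ", ""
    seg_a, seg_b = text.strip().split("---", 1)  # segments are joined by "---"
    return seg_a.strip(), seg_b.strip(), int(label_field.strip())

if __name__ == "__main__":
    example = ("we use the moses toolkit to train our phrase-based smt models---"
               "we use a pbsmt model built with the moses smt toolkit | 1 |")
    print(parse_row(example))
    # -> ('we use the moses toolkit to train our phrase-based smt models',
    #     'we use a pbsmt model built with the moses smt toolkit', 1)
```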