| text (string, length 82–736) | label (int64, 0 or 1) |
|---|---|
| the model was built using the srilm toolkit with backoff and good-turing smoothing---the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit | 1 |
| relation extraction is a fundamental task in information extraction---relation extraction is the task of tagging semantic relations between pairs of entities from free text | 1 |
| in this vein , durrett and klein augment a crf parser to score constituents with a feedforward neural network---durrett and klein use a neural crf based on cky decoding algorithm , with word embeddings pretrained on unannotated data | 1 |
| information extraction is a crucial step toward understanding a text , as it identifies the important conceptual objects and relations between them in a discourse---for evaluation , caseinsensitive nist bleu is used to measure translation performance | 0 |
| in this study , we extend an unsupervised method for word segmentation to include information about prosodic boundaries---in this study , the authors used the role of word stress in constraining word segmentation | 1 |
| to improve chinese srl , we propose a set of additional features , some of which are designed to better capture structural information---for classification , we used the logistic model trees decision tree classifier in the weka implementation in a 10-fold cross-validation setting | 0 |
| in this paper , we present an unsupervised dynamic bayesian model that allows us to model speech style accommodation in a way that does not require us to specify which linguistic features we are targeting---in this paper , we present an unsupervised dynamic bayesian model that allows us to model stylistic style accommodation in a way that is agnostic to which specific speech style features will shift | 1 |
| semantic parsing is the task of mapping natural language to machine interpretable meaning representations---semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot | 1 |
| we evaluated each sentence compression method using word f -measures , bigram f -measures , and bleu scores---the development set is used to optimize feature weights using the minimum-error-rate algorithm | 0 |
| franco et al present a system for automatic evaluation of the pronunciation quality of both native and non-native speakers of english on a phone level and a sentence level---franco et al , 2000 ) present a system for automatic evaluation of pronunciation performance on a phone level and a sentence level of native and nonnative speakers of english and other languages | 1 |
| for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot | 0 |
| we used the bleu score to evaluate the translation accuracy with and without the normalization---we use case-sensitive bleu-4 to measure the quality of translation result | 1 |
| current mt systems are based on the use of phrase-based models as translation models---bleu is a common metric to automatically measure the quality of smt output by comparing n-gram matches of the smt output with a human reference translation | 0 |
| in this work , we address the technical difficulty of leveraging implicit supervision in learning an algebra word problem solver---in this paper , we demonstrate that it is possible to efficiently mine algebra problems | 1 |
| abstract meaning representation is a compact , readable , whole-sentence semantic annotation---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing | 0 |
| in this work , we focus on understanding humorous language through two subtasks : humor recognition and humor anchor extraction---in this work , we uncover several latent semantic structures behind humor , in terms of meaning | 1 |
| lui et al proposed a system for language identification in multilingual documents using a generative mixture model that is based on supervised topic modeling algorithms---our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing | 0 |
| we use three machine-learning methods to assign cast3lb function tags to sentences parsed with bikel's parser trained on the cast3lb treebank---we use a machine-learning approach in order to add cast3lb function tags to nodes of basic constituent trees | 1 |
| our approach is similar to conneau et al where authors investigate transfer learning to find universal sentence representation---for recognizing textual entailment we used the model introduced by conneau et al in their work on supervised learning of universal sentence representations | 1 |
| we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors---also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove | 1 |
| on the other hand , as stated by , most nlg systems generate text for readers with good reading ability---as stated by , most nlg systems available generate text for high-skilled users | 1 |
| the evaluation metric for the overall translation quality is caseinsensitive bleu4---we used the disambig tool provided by the srilm toolkit | 0 |
| they use topics to interpret the latent structure of users and items---users play important roles in forming topics and events | 1 |
| we tackle these challenges by proposing bisparse-dep - a family of robust , unsupervised approaches for identifying cross-lingual hypernymy---in this paper , we introduce a new model for detecting restart and repair disfluencies in spontaneous speech transcripts | 0 |
| barzilay and lapata propose an entity-based coherence model which operationalizes some of the intuitions behind the centering model---barzilay and lapata recently proposed an entity-based coherence model that aims to learn abstract coherence properties , similar to those stipulated by centering theory | 1 |
| the grammar-based system gets an accuracy of 86.1 % on the evaluation data---that has already proven successful in solving a number of relational tasks in natural language processing | 0 |
| word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 )---word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined | 1 |
| for the mix one , we also train word embeddings of dimension 50 using glove---meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens | 1 |
| we use nltk to get sentiment scores using the sentiwordnet corpus---we use the nltk library to compute the pathlen similarity and lin similarity measures | 1 |
| the hybrid approach integrates the rule-based approach with the ml-based approach in order to optimize the overall performance---the hybrid approach integrates the rule-based approach with the ml-based approach in order to optimize overall performance | 1 |
| language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---morphologically , arabic is a non-concatenative language | 0 |
| the decoder and encoder word embeddings are of size 620 , the encoder uses a bidirectional layer with 1024 grus to encode the source side---the word embedding dimension is 620 , each direction of the encoder and the decoder has a layer of 1000 gated recurrent units | 1 |
| we use opennmt , which is an implementation of the popular nmt approach that uses an attentional encoder-decoder network---the second space is derived by applying a skip-gram model with the word2vec tool 5 | 0 |
| when training , we apply dropout to the embeddings , input vectors of each lstm in bidirectional lstms , and the hidden layer of the mlp---to prevent overfitting , we apply dropout operators to non-recurrent connections between lstm layers | 1 |
| social media is a rich source of rumours and corresponding community reactions---social media is a popular public platform for communicating , sharing information and expressing opinions | 1 |
| an experiment by using the kyoto text corpus ( cite-p-24-3-5 ) showed an f-measure of 75.90 , and we confirmed the effectiveness of our method---using the kyoto text corpus ( cite-p-24-3-5 ) , and obtained higher recall and precision than those of the baseline , leading us to confirm the effectiveness of our method | 1 |
| pinter et al also utilize bilstm to construct word embeddings---pinter et al approximate pre-trained word embeddings with a character-level model | 1 |
| bengio et al have proposed a neural network based model for vector representation of words---hammarström and borin give an extensive overview of stateof-the-art unsupervised learning of morphology | 0 |
| traditionally , keyphrases are defined as a short list of terms to summarize the topics of a document ( cite-p-26-3-3 )---keyphrases are defined as a set of terms in a document that give a brief summary of its content for readers | 1 |
| in contrast , lexicalized reordering models are extensively used for phrase-based translation---among them , lexicalized reordering models have been widely used in practical phrase-based systems | 1 |
| the phoneme connectivity table supports grammaticality checking of the adjacent two phonetic morphemes---phoneme connectivity table supports the grammaticality of the adjacency of two phonetic morphemes | 1 |
| sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 )---the parse trees for sentences in the test set were obtained using the stanford parser | 0 |
| 7 for the “ predicted ” setting , first , we predicted the subject labels in a similar manner to five-fold cross validation , and we used the predicted labels as features for the episode classifier---for the “ predicted ” setting , first , we predicted the subject labels in a similar manner to five-fold cross validation , and we used the predicted labels as features | 1 |
| for syntax-based approaches , riloff and wiebe performed syntactic pattern learning while extracting subjective expressions---riloff and wiebe extracted subjective expressions from sentences using a bootstrapping pattern learning process | 1 |
| we learn our word embeddings by using word2vec 3 on unlabeled review data---for a fair comparison to our model , we used word2vec , that pretrain word embeddings at a token level | 1 |
| the word embeddings are identified using the standard glove representations---we have crowdsourced a dataset of more than 14k comparison paragraphs comparing entities from a variety of categories | 0 |
| we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit---we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data | 1 |
| to obtain this resource , we apply a computational method based on bootstrapping and corpus statistics---data-to-text generation refers to the task of automatically generating text from non-linguistic data | 0 |
| recent years have witnessed burgeoning development of statistical machine translation research , notably phrase-based and syntax-based approaches---recent efforts in statistical machine translation have seen promising improvements in output quality , especially the phrase-based models and syntax-based models | 1 |
| 1 bunsetsu is a linguistic unit in japanese that roughly corresponds to a basic phrase in english---a bunsetsu consists of one independent word and more than zero ancillary words | 1 |
| as a final result , we improved the precision by 4.4 % against all the questions in our test set over the current state-of-the-art system of japanese why-qa ( cite-p-19-1-19 )---by applying these ideas to japanese why-qa , we improved precision by 4 . 4 % against all the questions in our test set over the current state-of-the-art system for japanese | 1 |
| we implemented linear models with the scikit learn package---in all cases , we used the implementations from the scikitlearn machine learning library | 1 |
| we used the moses decoder , with default settings , to obtain the translations---we translated each german sentence using the moses statistical machine translation toolkit | 1 |
| metaphor is a frequently used figure of speech , reflecting common cognitive processes---we implement the weight tuning component according to the minimum error rate training method | 0 |
| we use srilm for training a trigram language model on the english side of the training data---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing | 1 |
| figure 5 : examples of asia's input and output---figure 5 shows some real examples of asia ' s input and output | 1 |
| our scorer can be used as a rich feature function for story generation or a reward function for systems that use reinforcement learning to learn to generate stories---druck et al described generalized expectation criteria in which a discriminative model can employ the labeled features and unlabeled instances | 0 |
| the model was built using the srilm toolkit with backoff and good-turing smoothing---in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit | 1 |
| however , these methods often suffer from exponential increase in dimensions and in computational complexity introduced by transformation of input into tensor---however , such methods suffer from exponentially increasing computational complexity , as the outer product over multiple modalities results in extremely high dimensional tensor | 1 |
| granroth-wilding and clark utilized skip-gram and an event compositional neural network to adjust event representations---granroth-wilding and clark used a siamese network instead of pmi to calculate the coherence between two events | 1 |
| ngram features have been generated with the srilm toolkit---experiments on three diverse languages show that this straightforward semi-supervised extension greatly improves the segmentation accuracy of the purely supervised crfs | 0 |
| the unsupervised component gathers lexical statistics from an unannotated corpus of newswire text---moro et al propose a graphbased approach which uses wikipedia and wordnet as lexical resources | 0 |
| the decoder finds the best derivation that have the source yield of one source tree in the forest---and then finds the best derivation that has the source yield of one source tree in the forest | 1 |
| evaluation sets are translated using the cdec decoder and evaluated with the bleu metric---translation results are given in terms of the automatic bleu evaluation metric as well as the ter metric | 1 |
| we observe that the propbank roles are more robust in all tested experimental conditions , i.e. , the performance decrease is more severe for verbnet---with the two alternative role annotations , we show that the propbank role set is more robust to the lack of verb – specific semantic information | 1 |
| sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ( cite-p-15-3-1 )---more useable , we built an authoring tool so that teachers could prepare games that meet specific teaching goals | 0 |
| we trained a 4-gram language model on this data with kneser-ney discounting using srilm---in this paper , we present an experimental study on solving the answer selection problem | 0 |
| semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) | 1 |
| recent years have witnessed burgeoning development of statistical machine translation research , notably phrase-based and syntax-based approaches---our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing | 0 |
| hatzivassiloglou and mckeown proposed a method for identifying the word polarity of adjectives---to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit | 0 |
| the bleu , rouge and ter scores by comparing the abstracts before and after human editing are presented in table 5---case-sensitive bleu scores 4 for the europarl devtest set are shown in table 1 | 1 |
| we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training---phrase translation strategy was statistically significantly better than that of the sentence translation strategy | 0 |
| shift-reduce parsing of context-free grammars and e of tree-adjoining grammars---shift-reduce non-deterministic pushdown machine corresponding to an arbitrary unrestricted context-free grammar | 1 |
| abstract meaning representation is a semantic formalism in which the meaning of a sentence is encoded as a rooted , directed , acyclic graph---abstract meaning representation is a semantic representation that expresses the logical meaning of english sentences with rooted , directed , acylic graphs | 1 |
| the pipeline is based on the uima framework and contains many text analysis components---our pipeline is built on top of the uima framework and contains many text analysis components | 1 |
| our word embeddings is initialized with 100-dimensional glove word embeddings---to keep consistent , we initialize the embedding weight with pre-trained word embeddings | 1 |
| culotta and sorensen described a slightly generalized version of this kernel based on dependency trees---a first version of dependency tree kernels was proposed by culotta and sorensen | 1 |
| all language models were trained using the srilm toolkit---language models were built using the srilm toolkit 16 | 1 |
| we used an average multi-class perceptron adapted to multi-label learning---to get the the sub-fields of the community , we use latent dirichlet allocation to find topics and label them by hand | 0 |
| mbr decoding aims to find the candidate hypothesis that has the least expected loss under a probability model---mbr decision aims to find the candidate hypothesis that has the least expected loss under a probability model when the true reference is not known | 1 |
| we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence---we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset | 1 |
| we train the cbow model with default hyperparameters in word2vec---we pre-train the word embedding via word2vec on the whole dataset | 1 |
| the generation of referring expressions is a core ingredient of most natural language generation systems---aggregation is an essential component of many natural language generation systems | 1 |
| mikolov et al found that the learned word representations capture meaningful syntactic and semantic regularities referred to as linguistic regularities---mikolov et al showed that constant vector offsets of word pairs can represent linguistic regularities | 1 |
| the graph-based parsing model aims to search for the maximum spanning tree in a graph---the graph-based approach views dependency parsing as finding a highest scoring tree in a directed graph | 1 |
| we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---in this work , we focus on coreference on mentions that arise in our end task of entity linking | 0 |
| word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 )---word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) | 1 |
| we evaluate our method on the nist mt-2003 chinese-english translation tasks---on the nist mt-2003 chinese-english translation task show that our method | 1 |
| we focus more on improving the robustness of nmt models---in this paper , we propose to improve the robustness of nmt models | 1 |
| all the weights of those features are tuned by using minimal error rate training---sentence compression is the task of producing a summary at the sentence level | 0 |
| minimalist grammars , are a mildly context-sensitive formalism inspired by minimalist syntax , the dominant theory in generative syntax---minimalist grammars are a mildly context-sensitive grammar formalism , which provide a rigorous foundation for some of the main ideas of the minimalist program | 1 |
| the itg constraint is also compatible with word alignments that are not covered by itg parse trees---itg constraint works also on word alignments that are not covered by itg parse trees | 1 |
| our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing---for the fst representation , we used the the opengrm-ngram language modeling toolkit and used an n-gram order of 4 , with kneser-ney smoothing | 1 |
| their weights are optimized using minimum error-rate training on a held-out development set for each of the experiments---conditional random fields are undirected graphical models that are conditionally trained | 0 |
| we used the svm implementation provided within scikit-learn---we implemented linear models with the scikit learn package | 1 |
| our framework is based on the observation that 'from ... to'-like patterns can encode connectedness in very precise manner---we start from a different pattern , ' from . . . to ' , which helps in discovering transport or connectedness | 1 |
| as an evaluation metric , we used bleu-4 calculated between our model predictions and rpe---for the automatic evaluation , we used the bleu metric from ibm | 1 |
| we evaluated the translation quality using the bleu-4 metric---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke | 0 |
| we used the chunker yamcha , which is based on support vector machines---we use a support vector machine -based chunker yamcha for the chunking process | 1 |
| our single endto-end model obtains state-of-the-art or competitive results on five different tasks from tagging , parsing , relatedness , and entailment tasks---on five nlp tasks , our single model achieves the state-of-the-art or competitive results on chunking , dependency parsing , semantic relatedness , and textual entailment | 1 |
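
Each `text` cell holds two sentences joined by a `---` delimiter, and `label` appears to mark whether the two sentences describe the same method or finding (1) or not (0). Below is a minimal sketch of how a split could be parsed into labeled sentence pairs, assuming the data is exported as JSON Lines with `text` and `label` fields; the file name `train.jsonl` is hypothetical.

```python
# Minimal sketch: parse rows like the ones above into labeled sentence pairs.
# Assumes a hypothetical JSON Lines export with "text" and "label" fields.
import json
from itertools import islice


def load_pairs(path):
    """Yield ((sentence_a, sentence_b), label) tuples from one JSONL split.

    Each record's "text" field holds two sentences joined by "---";
    "label" is 1 when the pair expresses the same claim, else 0.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Split on the first "---" only, in case a sentence contains dashes.
            sent_a, _, sent_b = record["text"].partition("---")
            yield (sent_a.strip(), sent_b.strip()), int(record["label"])


if __name__ == "__main__":
    # Preview the first three pairs (path is hypothetical).
    for (a, b), label in islice(load_pairs("train.jsonl"), 3):
        print(f"{label}\t{a[:60]} ... {b[:60]}")
```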