repository_name (string, 5-67 chars) | func_path_in_repository (string, 4-234 chars) | func_name (string, 0-314 chars) | whole_func_string (string, 52-3.87M chars) | language (6 classes) | func_code_string (string, 52-3.87M chars) | func_documentation_string (string, 1-47.2k chars) | func_code_url (string, 85-339 chars)
---|---|---|---|---|---|---|---
estnltk/estnltk | estnltk/textcleaner.py | TextCleaner.report | def report(self, texts, n_examples=10, context_size=10, f=sys.stdout):
"""Compute statistics of invalid characters and print them.
Parameters
----------
texts: list of str
The texts to search for invalid characters.
n_examples: int
How many examples to display per invalid character.
context_size: int
How many characters to return as the context.
f: file
The file to print the report (default is sys.stdout)
"""
result = list(self.compute_report(texts, context_size).items())
result.sort(key=lambda x: (len(x[1]), x[0]), reverse=True)
s = 'Analyzed {0} texts.\n'.format(len(texts))
if len(texts) == 0:
f.write(s)
return
if len(result) > 0:
s += 'Invalid characters and their counts:\n'
for c, examples in result:
s += '"{0}"\t{1}\n'.format(c, len(examples))
s += '\n'
for c, examples in result:
s += 'For character "{0}", found {1} occurrences.\nExamples:\n'.format(c, len(examples))
examples = sample(examples, min(len(examples), n_examples))
for idx, example in enumerate(examples):
s += 'example {0}: {1}\n'.format(idx+1, example)
s += '\n'
f.write(s)
else:
f.write('All OK\n') | python | def report(self, texts, n_examples=10, context_size=10, f=sys.stdout):
"""Compute statistics of invalid characters and print them.
Parameters
----------
texts: list of str
The texts to search for invalid characters.
n_examples: int
How many examples to display per invalid character.
context_size: int
How many characters to return as the context.
f: file
The file to print the report (default is sys.stdout)
"""
result = list(self.compute_report(texts, context_size).items())
result.sort(key=lambda x: (len(x[1]), x[0]), reverse=True)
s = 'Analyzed {0} texts.\n'.format(len(texts))
if len(texts) == 0:
f.write(s)
return
if len(result) > 0:
s += 'Invalid characters and their counts:\n'
for c, examples in result:
s += '"{0}"\t{1}\n'.format(c, len(examples))
s += '\n'
for c, examples in result:
s += 'For character "{0}", found {1} occurrences.\nExamples:\n'.format(c, len(examples))
examples = sample(examples, min(len(examples), n_examples))
for idx, example in enumerate(examples):
s += 'example {0}: {1}\n'.format(idx+1, example)
s += '\n'
f.write(s)
else:
f.write('All OK\n') | Compute statistics of invalid characters and print them.
Parameters
----------
texts: list of str
The texts to search for invalid characters.
n_examples: int
How many examples to display per invalid character.
context_size: int
How many characters to return as the context.
f: file
The file to print the report (default is sys.stdout) | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/textcleaner.py#L94-L128 |
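The row above documents TextCleaner.report; the minimal sketch below shows one way to call it. Only the report() signature comes from the source — the parameterless TextCleaner() construction and the sample strings are assumptions.

```python
import sys
from estnltk.textcleaner import TextCleaner

cleaner = TextCleaner()                        # assumption: the default alphabet is acceptable
texts = ['Tere, maailm!', 'Halb sümbol \x00 siin']
# Print up to 5 context snippets per invalid character to stdout.
cleaner.report(texts, n_examples=5, context_size=10, f=sys.stdout)
```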
estnltk/estnltk | estnltk/syntax/parsers.py | VISLCG3Parser.parse_text | def parse_text(self, text, **kwargs):
""" Parses given text with VISLCG3 based syntactic analyzer.
As a result of parsing, the input Text object will obtain a new
layer named LAYER_VISLCG3, which contains a list of dicts.
Each dict corresponds to the analysis of a single word token, and
has the following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the
syntactic parser;
In the list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label:
surface syntactic label of the word, e.g. '@SUBJ',
'@OBJ', '@ADVL';
*) index_of_the_head:
index of the head (in the sentence);
-1 if the current token is root;
Parameters
-----------
text : estnltk.text.Text
The input text that should be analysed for dependency relations;
apply_tag_analysis : bool
Specifies whether, in case of a missing morphological ANALYSIS
layer, the text is morphologically analysed and disambiguated
via the method *text.tag_analysis* before proceeding with
the syntactic analysis.
Note that the syntactic analyser does its own morphological
disambiguation, but results of that disambiguation do not reach
back to the Text object, so the Text object will contain a layer
of ambiguous morphological analyses at the end of the parsing
step;
You can use *apply_tag_analysis=True* to ensure that at the
end of the parsing step, the input Text is both morphologically
analysed and disambiguated;
Default: False
return_type : string
If return_type=="text" (Default),
returns the input Text object;
If return_type=="vislcg3",
returns VISLCG3's output: a list of strings, each element in
the list corresponding to a line from VISLCG3's output;
If return_type=="trees",
returns all syntactic trees of the text as a list of
EstNLTK's Tree objects (estnltk.syntax.utils.Tree);
If return_type=="dep_graphs",
returns all syntactic trees of the text as a list of NLTK's
DependencyGraph objects
(nltk.parse.dependencygraph.DependencyGraph);
Regardless of the return type, the layer containing dependency syntactic
information ( LAYER_VISLCG3 ) will be attached to the text object;
augment_words : bool
Specifies whether words in the input Text are to be augmented with
the syntactic information (SYNTAX_LABEL, SYNTAX_HEAD and DEPREL);
(!) This functionality is added to achieve compatibility with the
old way of syntactic processing, but it will likely be deprecated in
the future.
Default: False
Other arguments are the arguments that can be passed to methods:
vislcg3_syntax.process_lines(),
vislcg3_syntax.align_cg3_with_Text(),
normalise_alignments()
keep_old : bool
Optional argument specifying whether the old analysis lines
should be preserved after overwriting 'parser_out' with new analysis
lines;
If True, each dict will be augmented with key 'init_parser_out'
which contains the initial/old analysis lines;
Default:False
"""
# a) get the configuration:
apply_tag_analysis = False
augment_words = False
all_return_types = ["text","vislcg3","trees","dep_graphs"]
return_type = all_return_types[0]
for argName, argVal in kwargs.items():
if argName.lower() == 'return_type':
if argVal.lower() in all_return_types:
return_type = argVal.lower()
else:
raise Exception(' Unexpected return type: ', argVal)
elif argName.lower() == 'augment_words':
augment_words = bool(argVal)
elif argName.lower() == 'apply_tag_analysis':
apply_tag_analysis = bool(argVal)
kwargs['split_result'] = True
kwargs['clean_up'] = True
kwargs['remove_clo'] = kwargs.get('remove_clo', True)
kwargs['remove_cap'] = kwargs.get('remove_cap', True)
kwargs['keep_old'] = kwargs.get('keep_old', False)
kwargs['double_quotes'] = 'unesc'
# b) process:
if apply_tag_analysis:
text = text.tag_analysis()
result_lines1 = \
self.preprocessor.process_Text(text, **kwargs)
result_lines2 = \
self.vislcg3_processor.process_lines(result_lines1, **kwargs)
alignments = \
align_cg3_with_Text(result_lines2, text, **kwargs)
alignments = \
normalise_alignments( alignments, data_type=VISLCG3_DATA, **kwargs )
# c) attach & return results
text[LAYER_VISLCG3] = alignments
if augment_words:
self._augment_text_w_syntactic_info( text, text[LAYER_VISLCG3] )
if return_type == "vislcg3":
return result_lines2
elif return_type == "trees":
return build_trees_from_text( text, layer=LAYER_VISLCG3, **kwargs )
elif return_type == "dep_graphs":
trees = build_trees_from_text( text, layer=LAYER_VISLCG3, **kwargs )
graphs = [tree.as_dependencygraph() for tree in trees]
return graphs
else:
return text | python | def parse_text(self, text, **kwargs):
""" Parses given text with VISLCG3 based syntactic analyzer.
As a result of parsing, the input Text object will obtain a new
layer named LAYER_VISLCG3, which contains a list of dicts.
Each dict corresponds to the analysis of a single word token, and
has the following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the
syntactic parser;
In the list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label:
surface syntactic label of the word, e.g. '@SUBJ',
'@OBJ', '@ADVL';
*) index_of_the_head:
index of the head (in the sentence);
-1 if the current token is root;
Parameters
-----------
text : estnltk.text.Text
The input text that should be analysed for dependency relations;
apply_tag_analysis : bool
Specifies whether, in case of a missing morphological ANALYSIS
layer, the text is morphologically analysed and disambiguated
via the method *text.tag_analysis* before proceeding with
the syntactic analysis.
Note that the syntactic analyser does its own morphological
disambiguation, but results of that disambiguation do not reach
back to the Text object, so the Text object will contain a layer
of ambiguous morphological analyses at the end of the parsing
step;
You can use *apply_tag_analysis=True* to ensure that at the
end of the parsing step, the input Text is both morphologically
analysed and disambiguated;
Default: False
return_type : string
If return_type=="text" (Default),
returns the input Text object;
If return_type=="vislcg3",
returns VISLCG3's output: a list of strings, each element in
the list corresponding to a line from VISLCG3's output;
If return_type=="trees",
returns all syntactic trees of the text as a list of
EstNLTK's Tree objects (estnltk.syntax.utils.Tree);
If return_type=="dep_graphs",
returns all syntactic trees of the text as a list of NLTK's
DependencyGraph objects
(nltk.parse.dependencygraph.DependencyGraph);
Regardless of the return type, the layer containing dependency syntactic
information ( LAYER_VISLCG3 ) will be attached to the text object;
augment_words : bool
Specifies whether words in the input Text are to be augmented with
the syntactic information (SYNTAX_LABEL, SYNTAX_HEAD and DEPREL);
(!) This functionality is added to achieve compatibility with the
old way of syntactic processing, but it will likely be deprecated in
the future.
Default: False
Other arguments are the arguments that can be passed to methods:
vislcg3_syntax.process_lines(),
vislcg3_syntax.align_cg3_with_Text(),
normalise_alignments()
keep_old : bool
Optional argument specifying whether the old analysis lines
should be preserved after overwriting 'parser_out' with new analysis
lines;
If True, each dict will be augmented with key 'init_parser_out'
which contains the initial/old analysis lines;
Default:False
"""
# a) get the configuration:
apply_tag_analysis = False
augment_words = False
all_return_types = ["text","vislcg3","trees","dep_graphs"]
return_type = all_return_types[0]
for argName, argVal in kwargs.items():
if argName.lower() == 'return_type':
if argVal.lower() in all_return_types:
return_type = argVal.lower()
else:
raise Exception(' Unexpected return type: ', argVal)
elif argName.lower() == 'augment_words':
augment_words = bool(argVal)
elif argName.lower() == 'apply_tag_analysis':
apply_tag_analysis = bool(argVal)
kwargs['split_result'] = True
kwargs['clean_up'] = True
kwargs['remove_clo'] = kwargs.get('remove_clo', True)
kwargs['remove_cap'] = kwargs.get('remove_cap', True)
kwargs['keep_old'] = kwargs.get('keep_old', False)
kwargs['double_quotes'] = 'unesc'
# b) process:
if apply_tag_analysis:
text = text.tag_analysis()
result_lines1 = \
self.preprocessor.process_Text(text, **kwargs)
result_lines2 = \
self.vislcg3_processor.process_lines(result_lines1, **kwargs)
alignments = \
align_cg3_with_Text(result_lines2, text, **kwargs)
alignments = \
normalise_alignments( alignments, data_type=VISLCG3_DATA, **kwargs )
# c) attach & return results
text[LAYER_VISLCG3] = alignments
if augment_words:
self._augment_text_w_syntactic_info( text, text[LAYER_VISLCG3] )
if return_type == "vislcg3":
return result_lines2
elif return_type == "trees":
return build_trees_from_text( text, layer=LAYER_VISLCG3, **kwargs )
elif return_type == "dep_graphs":
trees = build_trees_from_text( text, layer=LAYER_VISLCG3, **kwargs )
graphs = [tree.as_dependencygraph() for tree in trees]
return graphs
else:
return text | Parses given text with VISLCG3 based syntactic analyzer.
As a result of parsing, the input Text object will obtain a new
layer named LAYER_VISLCG3, which contains a list of dicts.
Each dict corresponds to the analysis of a single word token, and
has the following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the
syntactic parser;
In the list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label:
surface syntactic label of the word, e.g. '@SUBJ',
'@OBJ', '@ADVL';
*) index_of_the_head:
index of the head (in the sentence);
-1 if the current token is root;
Parameters
-----------
text : estnltk.text.Text
The input text that should be analysed for dependency relations;
apply_tag_analysis : bool
Specifies whether, in case of a missing morphological ANALYSIS
layer, the text is morphologically analysed and disambiguated
via the method *text.tag_analysis* before proceeding with
the syntactic analysis.
Note that the syntactic analyser does its own morphological
disambiguation, but results of that disambiguation do not reach
back to the Text object, so the Text object will contain a layer
of ambiguous morphological analyses at the end of the parsing
step;
You can use *apply_tag_analysis=True* to ensure that at the
end of the parsing step, the input Text is both morphologically
analysed and disambiguated;
Default: False
return_type : string
If return_type=="text" (Default),
returns the input Text object;
If return_type=="vislcg3",
returns VISLCG3's output: a list of strings, each element in
the list corresponding to a line from VISLCG3's output;
If return_type=="trees",
returns all syntactic trees of the text as a list of
EstNLTK's Tree objects (estnltk.syntax.utils.Tree);
If return_type=="dep_graphs",
returns all syntactic trees of the text as a list of NLTK's
DependencyGraph objects
(nltk.parse.dependencygraph.DependencyGraph);
Regardless of the return type, the layer containing dependency syntactic
information ( LAYER_VISLCG3 ) will be attached to the text object;
augment_words : bool
Specifies whether words in the input Text are to be augmented with
the syntactic information (SYNTAX_LABEL, SYNTAX_HEAD and DEPREL);
(!) This functionality is added to achieve compatibility with the
old way of syntactic processing, but it will likely be deprecated in
the future.
Default: False
Other arguments are the arguments that can be passed to methods:
vislcg3_syntax.process_lines(),
vislcg3_syntax.align_cg3_with_Text(),
normalise_alignments()
keep_old : bool
Optional argument specifying whether the old analysis lines
should be preserved after overwriting 'parser_out' with new analysis
lines;
If True, each dict will be augmented with key 'init_parser_out'
which contains the initial/old analysis lines;
Default:False | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/parsers.py#L143-L269 |
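A hedged usage sketch for VISLCG3Parser.parse_text as documented above. The parameterless VISLCG3Parser() call assumes a vislcg3 installation that the defaults can locate; the keyword arguments themselves come from the docstring.

```python
from estnltk import Text
from estnltk.syntax.parsers import VISLCG3Parser

parser = VISLCG3Parser()          # assumption: vislcg3 binary and rule files found via defaults
text = Text('Ilus suur karvane kass nurrus punasel diivanil.')

# Attach the syntax layer and keep the Text morphologically disambiguated.
parser.parse_text(text, apply_tag_analysis=True, augment_words=True)

# Alternatively, request tree objects and convert them to NLTK dependency graphs.
trees = parser.parse_text(text, return_type='trees')
graphs = [tree.as_dependencygraph() for tree in trees]
```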
estnltk/estnltk | estnltk/syntax/parsers.py | VISLCG3Parser._filter_kwargs | def _filter_kwargs(self, keep_list, **kwargs):
''' Filters the dict of *kwargs*, keeping only arguments
whose keys are in *keep_list* and discarding all other
arguments.
Based on the filtering, constructs and returns a new
dict.
'''
new_kwargs = {}
for argName, argVal in kwargs.items():
if argName.lower() in keep_list:
new_kwargs[argName.lower()] = argVal
return new_kwargs | python | def _filter_kwargs(self, keep_list, **kwargs):
''' Filters the dict of *kwargs*, keeping only arguments
whose keys are in *keep_list* and discarding all other
arguments.
Based on the filtering, constructs and returns a new
dict.
'''
new_kwargs = {}
for argName, argVal in kwargs.items():
if argName.lower() in keep_list:
new_kwargs[argName.lower()] = argVal
return new_kwargs | Filters the dict of *kwargs*, keeping only arguments
whose keys are in *keep_list* and discarding all other
arguments.
Based on the filtering, constructs and returns a new
dict. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/parsers.py#L272-L284 |
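The helper above is small enough to restate as a standalone sketch, which makes its lower-casing behaviour explicit:

```python
def filter_kwargs(keep_list, **kwargs):
    # Keep only arguments whose lower-cased name is in keep_list;
    # note that the surviving keys are lower-cased in the returned dict.
    return {name.lower(): value for name, value in kwargs.items()
            if name.lower() in keep_list}

print(filter_kwargs(['keep_old', 'split_result'], Keep_Old=True, verbose=2))
# -> {'keep_old': True}
```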
estnltk/estnltk | estnltk/syntax/parsers.py | VISLCG3Parser._augment_text_w_syntactic_info | def _augment_text_w_syntactic_info( self, text, text_layer ):
''' Augments given Text object with the syntactic information
from the *text_layer*. More specifically, adds information
about SYNTAX_LABEL, SYNTAX_HEAD and DEPREL to each token
in the Text object;
(!) Note: this method is added to provide some initial
consistency with MaltParser based syntactic parsing;
If a better syntactic parsing interface is achieved in
the future, this method will be deprecated ...
'''
j = 0
for sentence in text.divide( layer=WORDS, by=SENTENCES ):
for i in range(len(sentence)):
estnltkToken = sentence[i]
vislcg3Token = text_layer[j]
parse_found = False
if PARSER_OUT in vislcg3Token:
if len( vislcg3Token[PARSER_OUT] ) > 0:
firstParse = vislcg3Token[PARSER_OUT][0]
# Fetch information about the syntactic relation:
estnltkToken['s_label'] = str(i)
estnltkToken['s_head'] = str(firstParse[1])
# Fetch the name of the surface syntactic relation
deprels = '|'.join( [p[0] for p in vislcg3Token[PARSER_OUT]] )
estnltkToken['s_rel'] = deprels
parse_found = True
if not parse_found:
raise Exception("(!) Unable to retrieve syntactic analysis for the ",\
estnltkToken, ' from ', vislcg3Token )
j += 1 | python | def _augment_text_w_syntactic_info( self, text, text_layer ):
''' Augments given Text object with the syntactic information
from the *text_layer*. More specifically, adds information
about SYNTAX_LABEL, SYNTAX_HEAD and DEPREL to each token
in the Text object;
(!) Note: this method is added to provide some initial
consistency with MaltParser based syntactic parsing;
If a better syntactic parsing interface is achieved in
the future, this method will be deprecated ...
'''
j = 0
for sentence in text.divide( layer=WORDS, by=SENTENCES ):
for i in range(len(sentence)):
estnltkToken = sentence[i]
vislcg3Token = text_layer[j]
parse_found = False
if PARSER_OUT in vislcg3Token:
if len( vislcg3Token[PARSER_OUT] ) > 0:
firstParse = vislcg3Token[PARSER_OUT][0]
# Fetch information about the syntactic relation:
estnltkToken['s_label'] = str(i)
estnltkToken['s_head'] = str(firstParse[1])
# Fetch the name of the surface syntactic relation
deprels = '|'.join( [p[0] for p in vislcg3Token[PARSER_OUT]] )
estnltkToken['s_rel'] = deprels
parse_found = True
if not parse_found:
raise Exception("(!) Unable to retrieve syntactic analysis for the ",\
estnltkToken, ' from ', vislcg3Token )
j += 1 | Augments given Text object with the syntactic information
from the *text_layer*. More specifically, adds information
about SYNTAX_LABEL, SYNTAX_HEAD and DEPREL to each token
in the Text object;
(!) Note: this method is added to provide some initial
consistency with MaltParser based syntactic parsing;
If a better syntactic parsing interface is achieved in
the future, this method will be deprecated ... | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/parsers.py#L287-L317 |
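To make the augmentation above concrete, here is a self-contained toy illustration of how a single token's PARSER_OUT entry is flattened into the s_label / s_head / s_rel fields (the values are invented):

```python
# [label, head_index] pairs for one token, as produced by the syntactic parser.
parser_out = [['@SUBJ', 3], ['@OBJ', 3]]
word_index_in_sentence = 0

token = {}
token['s_label'] = str(word_index_in_sentence)          # the word's own index in the sentence
token['s_head']  = str(parser_out[0][1])                # head index of the first analysis
token['s_rel']   = '|'.join(p[0] for p in parser_out)   # all labels joined: '@SUBJ|@OBJ'
print(token)    # {'s_label': '0', 's_head': '3', 's_rel': '@SUBJ|@OBJ'}
```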
estnltk/estnltk | estnltk/syntax/parsers.py | MaltParser.parse_text | def parse_text( self, text, **kwargs ):
''' Parses given text with Maltparser.
As a result of parsing, the input Text object will obtain a new
layer named LAYER_CONLL, which contains a list of dicts.
Each dict corresponds to the analysis of a single word token, and
has the following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the
syntactic parser;
In the list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label:
surface syntactic label of the word, e.g. '@SUBJ',
'@OBJ', '@ADVL';
*) index_of_the_head:
index of the head (in the sentence);
-1 if the current token is root;
Parameters
-----------
text : estnltk.text.Text
The input text that should be analysed for dependency relations;
return_type : string
If return_type=="text" (Default),
returns the input Text object;
If return_type=="conll",
returns Maltparser's results as list of CONLL format strings,
each element in the list corresponding to one line in
MaltParser's output;
If return_type=="trees",
returns all syntactic trees of the text as a list of
EstNLTK's Tree objects (estnltk.syntax.utils.Tree);
If return_type=="dep_graphs",
returns all syntactic trees of the text as a list of NLTK's
DependencyGraph objects
(nltk.parse.dependencygraph.DependencyGraph);
Regardless of the return type, the layer containing dependency syntactic
information ( LAYER_CONLL ) will be attached to the text object;
augment_words : bool
Specifies whether words in the input Text are to be augmented with
the syntactic information (SYNTAX_LABEL, SYNTAX_HEAD and DEPREL);
(!) This functionality is added to achieve compatibility with the
old way of syntactic processing, but it will likely be deprecated in
the future.
Default: False
Other arguments are the arguments that can be passed to methods:
maltparser_support.align_CONLL_with_Text(),
normalise_alignments()
keep_old : bool
Optional argument specifying whether the old analysis lines
should be preserved after overwriting 'parser_out' with new analysis
lines;
If True, each dict will be augmented with key 'init_parser_out'
which contains the initial/old analysis lines;
Default:False
'''
# a) get the configuration:
augment_words = False
all_return_types = ["text", "conll", "trees", "dep_graphs"]
return_type = all_return_types[0]
for argName, argVal in kwargs.items():
if argName == 'return_type':
if argVal.lower() in all_return_types:
return_type = argVal.lower()
else:
raise Exception(' Unexpected return type: ', argVal)
elif argName.lower() == 'augment_words':
augment_words = bool(argVal)
# b) process:
# If text has not been morphologically analysed yet, add the
# morphological analysis
if not text.is_tagged(ANALYSIS):
text.tag_analysis()
# Obtain CONLL formatted version of the text
textConllStr = convert_text_to_CONLL( text, self.feature_generator )
# Execute MaltParser and get results as CONLL formatted string
resultsConllStr = \
_executeMaltparser( textConllStr, self.maltparser_dir, \
self.maltparser_jar, \
self.model_name )
# Align the results with the initial text
alignments = \
align_CONLL_with_Text( resultsConllStr, text, self.feature_generator, **kwargs )
alignments = \
normalise_alignments( alignments, data_type=CONLL_DATA, **kwargs )
# c) attach & return results
text[LAYER_CONLL] = alignments
if augment_words:
# Augment the input text with the dependency relation information
# obtained from MaltParser
# (!) Note: this will be deprecated in the future
augmentTextWithCONLLstr( resultsConllStr, text )
if return_type == "conll":
return resultsConllStr
elif return_type == "trees":
return build_trees_from_text( text, layer=LAYER_CONLL, **kwargs )
elif return_type == "dep_graphs":
trees = build_trees_from_text( text, layer=LAYER_CONLL, **kwargs )
graphs = [tree.as_dependencygraph() for tree in trees]
return graphs
# An alternative:
# Return DependencyGraphs
#from nltk.parse.dependencygraph import DependencyGraph
#all_trees = []
#for tree_str in ("\n".join(resultsConllStr)).split('\n\n'):
# t = DependencyGraph(tree_str)
# all_trees.append(t)
#return all_trees
else:
return text | python | def parse_text( self, text, **kwargs ):
''' Parses given text with Maltparser.
As a result of parsing, the input Text object will obtain a new
layer named LAYER_CONLL, which contains a list of dicts.
Each dict corresponds to the analysis of a single word token, and
has the following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the
syntactic parser;
In the list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label:
surface syntactic label of the word, e.g. '@SUBJ',
'@OBJ', '@ADVL';
*) index_of_the_head:
index of the head (in the sentence);
-1 if the current token is root;
Parameters
-----------
text : estnltk.text.Text
The input text that should be analysed for dependency relations;
return_type : string
If return_type=="text" (Default),
returns the input Text object;
If return_type=="conll",
returns Maltparser's results as list of CONLL format strings,
each element in the list corresponding to one line in
MaltParser's output;
If return_type=="trees",
returns all syntactic trees of the text as a list of
EstNLTK's Tree objects (estnltk.syntax.utils.Tree);
If return_type=="dep_graphs",
returns all syntactic trees of the text as a list of NLTK's
DependencyGraph objects
(nltk.parse.dependencygraph.DependencyGraph);
Regardless of the return type, the layer containing dependency syntactic
information ( LAYER_CONLL ) will be attached to the text object;
augment_words : bool
Specifies whether words in the input Text are to be augmented with
the syntactic information (SYNTAX_LABEL, SYNTAX_HEAD and DEPREL);
(!) This functionality is added to achieve compatibility with the
old way of syntactic processing, but it will likely be deprecated in
the future.
Default: False
Other arguments are the arguments that can be passed to methods:
maltparser_support.align_CONLL_with_Text(),
normalise_alignments()
keep_old : bool
Optional argument specifying whether the old analysis lines
should be preserved after overwriting 'parser_out' with new analysis
lines;
If True, each dict will be augmented with key 'init_parser_out'
which contains the initial/old analysis lines;
Default:False
'''
# a) get the configuration:
augment_words = False
all_return_types = ["text", "conll", "trees", "dep_graphs"]
return_type = all_return_types[0]
for argName, argVal in kwargs.items():
if argName == 'return_type':
if argVal.lower() in all_return_types:
return_type = argVal.lower()
else:
raise Exception(' Unexpected return type: ', argVal)
elif argName.lower() == 'augment_words':
augment_words = bool(argVal)
# b) process:
# If text has not been morphologically analysed yet, add the
# morphological analysis
if not text.is_tagged(ANALYSIS):
text.tag_analysis()
# Obtain CONLL formatted version of the text
textConllStr = convert_text_to_CONLL( text, self.feature_generator )
# Execute MaltParser and get results as CONLL formatted string
resultsConllStr = \
_executeMaltparser( textConllStr, self.maltparser_dir, \
self.maltparser_jar, \
self.model_name )
# Align the results with the initial text
alignments = \
align_CONLL_with_Text( resultsConllStr, text, self.feature_generator, **kwargs )
alignments = \
normalise_alignments( alignments, data_type=CONLL_DATA, **kwargs )
# c) attach & return results
text[LAYER_CONLL] = alignments
if augment_words:
# Augment the input text with the dependency relation information
# obtained from MaltParser
# (!) Note: this will be deprecated in the future
augmentTextWithCONLLstr( resultsConllStr, text )
if return_type == "conll":
return resultsConllStr
elif return_type == "trees":
return build_trees_from_text( text, layer=LAYER_CONLL, **kwargs )
elif return_type == "dep_graphs":
trees = build_trees_from_text( text, layer=LAYER_CONLL, **kwargs )
graphs = [tree.as_dependencygraph() for tree in trees]
return graphs
# An alternative:
# Return DependencyGraphs
#from nltk.parse.dependencygraph import DependencyGraph
#all_trees = []
#for tree_str in ("\n".join(resultsConllStr)).split('\n\n'):
# t = DependencyGraph(tree_str)
# all_trees.append(t)
#return all_trees
else:
return text | Parses given text with Maltparser.
As a result of parsing, the input Text object will obtain a new
layer named LAYER_CONLL, which contains a list of dicts.
Each dict corresponds to the analysis of a single word token, and
has the following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the
syntactic parser;
In the list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label:
surface syntactic label of the word, e.g. '@SUBJ',
'@OBJ', '@ADVL';
*) index_of_the_head:
index of the head (in the sentence);
-1 if the current token is root;
Parameters
-----------
text : estnltk.text.Text
The input text that should be analysed for dependency relations;
return_type : string
If return_type=="text" (Default),
returns the input Text object;
If return_type=="conll",
returns Maltparser's results as list of CONLL format strings,
each element in the list corresponding to one line in
MaltParser's output;
If return_type=="trees",
returns all syntactic trees of the text as a list of
EstNLTK's Tree objects (estnltk.syntax.utils.Tree);
If return_type=="dep_graphs",
returns all syntactic trees of the text as a list of NLTK's
DependencyGraph objects
(nltk.parse.dependencygraph.DependencyGraph);
Regardless of the return type, the layer containing dependency syntactic
information ( LAYER_CONLL ) will be attached to the text object;
augment_words : bool
Specifies whether words in the input Text are to be augmented with
the syntactic information (SYNTAX_LABEL, SYNTAX_HEAD and DEPREL);
(!) This functionality is added to achieve compatibility with the
old way of syntactic processing, but it will likely be deprecated in
the future.
Default: False
Other arguments are the arguments that can be passed to methods:
maltparser_support.align_CONLL_with_Text(),
normalise_alignments()
keep_old : bool
Optional argument specifying whether the old analysis lines
should be preserved after overwriting 'parser_out' with new analysis
lines;
If True, each dict will be augmented with key 'init_parser_out'
which contains the initial/old analysis lines;
Default:False | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/parsers.py#L400-L520 |
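A hedged usage sketch mirroring the VISLCG3 example above; MaltParser() without arguments assumes that the bundled jar and the Estonian model can be found, which is an assumption rather than documented behaviour.

```python
from estnltk import Text
from estnltk.syntax.parsers import MaltParser

parser = MaltParser()             # assumption: default maltparser_dir, jar and model are usable
text = Text('Öö oli täiesti tuuletu.')

# Attach the LAYER_CONLL syntax layer to the Text object.
parser.parse_text(text)

# Or obtain MaltParser's raw output as CONLL-formatted lines.
conll_lines = parser.parse_text(text, return_type='conll')
```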
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _create_clause_based_dep_links | def _create_clause_based_dep_links( orig_text, layer=LAYER_CONLL ):
''' Rewrites dependency links in the text from sentence-based linking to clause-
based linking:
*) words which have their parent outside-the-clause will become root
nodes (will obtain link value -1), and
*) words which have their parent inside-the-clause will have parent index
according to word indices inside the clause;
'''
sent_start_index = 0
for sent_text in orig_text.split_by( SENTENCES ):
# 1) Create a mapping: from sentence-based dependency links to clause-based dependency links
mapping = dict()
cl_ind = sent_text.clause_indices
for wid, word in enumerate(sent_text[WORDS]):
firstSyntaxRel = sent_text[layer][wid][PARSER_OUT][0]
parentIndex = firstSyntaxRel[1]
if parentIndex != -1:
if cl_ind[parentIndex] != cl_ind[wid]:
# Parent of the word is outside the current clause: make root
# node from the current node
mapping[wid] = -1
else:
# Find the beginning of the clause
clause_start = cl_ind.index( cl_ind[wid] )
# Find the index of parent label in the clause
j = 0
k = 0
while clause_start + j < len(cl_ind):
if clause_start + j == parentIndex:
break
if cl_ind[clause_start + j] == cl_ind[wid]:
k += 1
j += 1
assert clause_start + j < len(cl_ind), '(!) Parent index not found for: '+str(parentIndex)
mapping[wid] = k
else:
mapping[wid] = -1
# 2) Overwrite old links with new ones
for local_wid in mapping.keys():
global_wid = sent_start_index + local_wid
for syntax_rel in orig_text[layer][global_wid][PARSER_OUT]:
syntax_rel[1] = mapping[local_wid]
# 3) Advance the index for processing the next sentence
sent_start_index += len(cl_ind)
return orig_text | python | def _create_clause_based_dep_links( orig_text, layer=LAYER_CONLL ):
''' Rewrites dependency links in the text from sentence-based linking to clause-
based linking:
*) words which have their parent outside-the-clause will become root
nodes (will obtain link value -1), and
*) words which have their parent inside-the-clause will have parent index
according to word indices inside the clause;
'''
sent_start_index = 0
for sent_text in orig_text.split_by( SENTENCES ):
# 1) Create a mapping: from sentence-based dependency links to clause-based dependency links
mapping = dict()
cl_ind = sent_text.clause_indices
for wid, word in enumerate(sent_text[WORDS]):
firstSyntaxRel = sent_text[layer][wid][PARSER_OUT][0]
parentIndex = firstSyntaxRel[1]
if parentIndex != -1:
if cl_ind[parentIndex] != cl_ind[wid]:
# Parent of the word is outside the current clause: make root
# node from the current node
mapping[wid] = -1
else:
# Find the beginning of the clause
clause_start = cl_ind.index( cl_ind[wid] )
# Find the index of parent label in the clause
j = 0
k = 0
while clause_start + j < len(cl_ind):
if clause_start + j == parentIndex:
break
if cl_ind[clause_start + j] == cl_ind[wid]:
k += 1
j += 1
assert clause_start + j < len(cl_ind), '(!) Parent index not found for: '+str(parentIndex)
mapping[wid] = k
else:
mapping[wid] = -1
# 2) Overwrite old links with new ones
for local_wid in mapping.keys():
global_wid = sent_start_index + local_wid
for syntax_rel in orig_text[layer][global_wid][PARSER_OUT]:
syntax_rel[1] = mapping[local_wid]
# 3) Advance the index for processing the next sentence
sent_start_index += len(cl_ind)
return orig_text | Rewrites dependency links in the text from sentence-based linking to clause-
based linking:
*) words which have their parent outside-the-clause will become root
nodes (will obtain link value -1), and
*) words which have their parent inside-the-clause will have parent index
according to word indices inside the clause; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L267-L312 |
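A worked toy example of the remapping described above (indices refer to positions inside one sentence; the values are invented for illustration):

```python
clause_indices = [0, 0, 1, 1, 0]     # word index -> clause id within the sentence
sentence_heads = [1, -1, 3, 1, 1]    # word index -> head index within the sentence (-1 = root)

# Word 3 has its head (word 1) in another clause, so it becomes a clause-level root (-1).
# Word 2 keeps its head, renumbered to the head's position inside clause 1 (-> 1).
# Words 0 and 4 point at word 1, which sits at position 1 inside clause 0 (-> 1).
expected_clause_heads = [1, -1, 1, -1, 1]
```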
estnltk/estnltk | estnltk/syntax/maltparser_support.py | __sort_analyses | def __sort_analyses(sentence):
''' Sorts analysis of all the words in the sentence.
This is required for consistency, because by default, analyses are
listed in arbitrary order; '''
for word in sentence:
if ANALYSIS not in word:
raise Exception( '(!) Error: no analysis found from word: '+str(word) )
else:
word[ANALYSIS] = sorted(word[ANALYSIS], \
key=lambda x : "_".join( [x[ROOT],x[POSTAG],x[FORM],x[CLITIC]] ))
return sentence | python | def __sort_analyses(sentence):
''' Sorts analysis of all the words in the sentence.
This is required for consistency, because by default, analyses are
listed in arbitrary order; '''
for word in sentence:
if ANALYSIS not in word:
raise Exception( '(!) Error: no analysis found from word: '+str(word) )
else:
word[ANALYSIS] = sorted(word[ANALYSIS], \
key=lambda x : "_".join( [x[ROOT],x[POSTAG],x[FORM],x[CLITIC]] ))
return sentence | Sorts analysis of all the words in the sentence.
This is required for consistency, because by default, analyses are
listed in arbitrary order; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L315-L325 |
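The sort key above concatenates root, part of speech, form and clitic. A standalone sketch with plain dict keys follows; the actual estnltk constant values are an assumption here.

```python
analyses = [
    {'root': 'olema', 'partofspeech': 'V', 'form': 'b', 'clitic': ''},
    {'root': 'ole',   'partofspeech': 'V', 'form': 'b', 'clitic': ''},
]
analyses.sort(key=lambda a: '_'.join([a['root'], a['partofspeech'], a['form'], a['clitic']]))
print([a['root'] for a in analyses])   # ['ole', 'olema'] -- a stable, reproducible order
```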
estnltk/estnltk | estnltk/syntax/maltparser_support.py | convert_text_to_CONLL | def convert_text_to_CONLL( text, feature_generator ):
''' Converts given estnltk Text object into CONLL format and returns as a
string.
Uses given *feature_generator* to produce fields ID, FORM, LEMMA, CPOSTAG,
POSTAG, FEATS for each token.
Fields to predict (HEAD, DEPREL) will be left empty.
This method is used in preparing parsing & testing data for MaltParser.
Parameters
-----------
text : estnltk.text.Text
Morphologically analysed text from which the CONLL file is generated;
feature_generator : CONLLFeatGenerator
An instance of CONLLFeatGenerator, which has method *generate_features()*
for generating morphological features for a single token;
The aimed format looks something like this:
1 Öö öö S S sg|nom _ xxx _ _
2 oli ole V V indic|impf|ps3|sg _ xxx _ _
3 täiesti täiesti D D _ _ xxx _ _
4 tuuletu tuuletu A A sg|nom _ xxx _ _
5 . . Z Z Fst _ xxx _ _
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
try:
granularity = feature_generator.parseScope
except AttributeError:
granularity = SENTENCES
assert granularity in [SENTENCES, CLAUSES], '(!) Unsupported granularity: "'+str(granularity)+'"!'
sentenceStrs = []
for sentence_text in text.split_by( granularity ):
sentence_text[WORDS] = __sort_analyses( sentence_text[WORDS] )
for i in range(len( sentence_text[WORDS] )):
# Generate features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS
strForm = feature_generator.generate_features( sentence_text, i )
# *** HEAD (syntactic parent)
strForm.append( '_' )
strForm.append( '\t' )
# *** DEPREL (label of the syntactic relation)
strForm.append( 'xxx' )
strForm.append( '\t' )
# *** PHEAD
strForm.append( '_' )
strForm.append( '\t' )
# *** PDEPREL
strForm.append( '_' )
sentenceStrs.append( ''.join( strForm ) )
sentenceStrs.append( '' )
return '\n'.join( sentenceStrs ) | python | def convert_text_to_CONLL( text, feature_generator ):
''' Converts given estnltk Text object into CONLL format and returns as a
string.
Uses given *feature_generator* to produce fields ID, FORM, LEMMA, CPOSTAG,
POSTAG, FEATS for each token.
Fields to predict (HEAD, DEPREL) will be left empty.
This method is used in preparing parsing & testing data for MaltParser.
Parameters
-----------
text : estnltk.text.Text
Morphologically analysed text from which the CONLL file is generated;
feature_generator : CONLLFeatGenerator
An instance of CONLLFeatGenerator, which has method *generate_features()*
for generating morphological features for a single token;
The aimed format looks something like this:
1 Öö öö S S sg|nom _ xxx _ _
2 oli ole V V indic|impf|ps3|sg _ xxx _ _
3 täiesti täiesti D D _ _ xxx _ _
4 tuuletu tuuletu A A sg|nom _ xxx _ _
5 . . Z Z Fst _ xxx _ _
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
try:
granularity = feature_generator.parseScope
except AttributeError:
granularity = SENTENCES
assert granularity in [SENTENCES, CLAUSES], '(!) Unsupported granularity: "'+str(granularity)+'"!'
sentenceStrs = []
for sentence_text in text.split_by( granularity ):
sentence_text[WORDS] = __sort_analyses( sentence_text[WORDS] )
for i in range(len( sentence_text[WORDS] )):
# Generate features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS
strForm = feature_generator.generate_features( sentence_text, i )
# *** HEAD (syntactic parent)
strForm.append( '_' )
strForm.append( '\t' )
# *** DEPREL (label of the syntactic relation)
strForm.append( 'xxx' )
strForm.append( '\t' )
# *** PHEAD
strForm.append( '_' )
strForm.append( '\t' )
# *** PDEPREL
strForm.append( '_' )
sentenceStrs.append( ''.join( strForm ) )
sentenceStrs.append( '' )
return '\n'.join( sentenceStrs ) | Converts given estnltk Text object into CONLL format and returns as a
string.
Uses given *feature_generator* to produce fields ID, FORM, LEMMA, CPOSTAG,
POSTAG, FEATS for each token.
Fields to predict (HEAD, DEPREL) will be left empty.
This method is used in preparing parsing & testing data for MaltParser.
Parameters
-----------
text : estnltk.text.Text
Morphologically analysed text from which the CONLL file is generated;
feature_generator : CONLLFeatGenerator
An instance of CONLLFeatGenerator, which has method *generate_features()*
for generating morphological features for a single token;
The aimed format looks something like this:
1 Öö öö S S sg|nom _ xxx _ _
2 oli ole V V indic|impf|ps3|sg _ xxx _ _
3 täiesti täiesti D D _ _ xxx _ _
4 tuuletu tuuletu A A sg|nom _ xxx _ _
5 . . Z Z Fst _ xxx _ _ | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L328-L379 |
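A hedged usage sketch; the CONLLFeatGenerator import location and its parameterless construction are assumptions based on the docstring above.

```python
from estnltk import Text
from estnltk.syntax.maltparser_support import CONLLFeatGenerator, convert_text_to_CONLL

text = Text('Öö oli täiesti tuuletu.').tag_analysis()   # morphological analysis is required
conll_str = convert_text_to_CONLL(text, CONLLFeatGenerator())
print(conll_str)   # one tab-separated line per token; HEAD stays '_' and DEPREL stays 'xxx'
```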
estnltk/estnltk | estnltk/syntax/maltparser_support.py | convert_text_w_syntax_to_CONLL | def convert_text_w_syntax_to_CONLL( text, feature_generator, layer=LAYER_CONLL ):
''' Converts given estnltk Text object into CONLL format and returns as a
string.
Uses given *feature_generator* to produce fields ID, FORM, LEMMA, CPOSTAG,
POSTAG, FEATS for each token.
Fills fields to predict (HEAD, DEPREL) with the syntactic information from
given *layer* (default: LAYER_CONLL).
This method is used in preparing training data for MaltParser.
Parameters
-----------
text : estnltk.text.Text
Morphologically analysed text from which the CONLL file is generated;
feature_generator : CONLLFeatGenerator
An instance of CONLLFeatGenerator, which has method *generate_features()*
for generating morphological features for a single token;
layer : str
Name of the *text* layer from which syntactic information is to be taken.
Defaults to LAYER_CONLL.
The aimed format looks something like this:
1 Öö öö S S sg|n 2 @SUBJ _ _
2 oli ole V V s 0 ROOT _ _
3 täiesti täiesti D D _ 4 @ADVL _ _
4 tuuletu tuuletu A A sg|n 2 @PRD _ _
5 . . Z Z _ 4 xxx _ _
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
assert layer in text, ' (!) The layer "'+layer+'" is missing from the Text object.'
try:
granularity = feature_generator.parseScope
except AttributeError:
granularity = SENTENCES
assert granularity in [SENTENCES, CLAUSES], '(!) Unsupported granularity: "'+str(granularity)+'"!'
sentenceStrs = []
if granularity == CLAUSES:
_create_clause_based_dep_links( text, layer )
for sentence_text in text.split_by( granularity ):
sentence_text[WORDS] = __sort_analyses( sentence_text[WORDS] )
for i in range(len( sentence_text[WORDS] )):
# Generate features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS
strForm = feature_generator.generate_features( sentence_text, i )
# Get syntactic analysis of the token
syntaxToken = sentence_text[layer][i]
firstSyntaxRel = syntaxToken[PARSER_OUT][0]
# *** HEAD (syntactic parent)
parentLabel = str( firstSyntaxRel[1] + 1 )
strForm.append( parentLabel )
strForm.append( '\t' )
# *** DEPREL (label of the syntactic relation)
if parentLabel == '0':
strForm.append( 'ROOT' )
strForm.append( '\t' )
else:
strForm.append( firstSyntaxRel[0] )
strForm.append( '\t' )
# *** PHEAD
strForm.append( '_' )
strForm.append( '\t' )
# *** PDEPREL
strForm.append( '_' )
sentenceStrs.append( ''.join( strForm ) )
sentenceStrs.append( '' )
return '\n'.join( sentenceStrs ) | python | def convert_text_w_syntax_to_CONLL( text, feature_generator, layer=LAYER_CONLL ):
''' Converts given estnltk Text object into CONLL format and returns as a
string.
Uses given *feature_generator* to produce fields ID, FORM, LEMMA, CPOSTAG,
POSTAG, FEATS for each token.
Fills fields to predict (HEAD, DEPREL) with the syntactic information from
given *layer* (default: LAYER_CONLL).
This method is used in preparing training data for MaltParser.
Parameters
-----------
text : estnltk.text.Text
Morphologically analysed text from which the CONLL file is generated;
feature_generator : CONLLFeatGenerator
An instance of CONLLFeatGenerator, which has method *generate_features()*
for generating morphological features for a single token;
layer : str
Name of the *text* layer from which syntactic information is to be taken.
Defaults to LAYER_CONLL.
The aimed format looks something like this:
1 Öö öö S S sg|n 2 @SUBJ _ _
2 oli ole V V s 0 ROOT _ _
3 täiesti täiesti D D _ 4 @ADVL _ _
4 tuuletu tuuletu A A sg|n 2 @PRD _ _
5 . . Z Z _ 4 xxx _ _
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
assert layer in text, ' (!) The layer "'+layer+'" is missing from the Text object.'
try:
granularity = feature_generator.parseScope
except AttributeError:
granularity = SENTENCES
assert granularity in [SENTENCES, CLAUSES], '(!) Unsupported granularity: "'+str(granularity)+'"!'
sentenceStrs = []
if granularity == CLAUSES:
_create_clause_based_dep_links( text, layer )
for sentence_text in text.split_by( granularity ):
sentence_text[WORDS] = __sort_analyses( sentence_text[WORDS] )
for i in range(len( sentence_text[WORDS] )):
# Generate features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS
strForm = feature_generator.generate_features( sentence_text, i )
# Get syntactic analysis of the token
syntaxToken = sentence_text[layer][i]
firstSyntaxRel = syntaxToken[PARSER_OUT][0]
# *** HEAD (syntactic parent)
parentLabel = str( firstSyntaxRel[1] + 1 )
strForm.append( parentLabel )
strForm.append( '\t' )
# *** DEPREL (label of the syntactic relation)
if parentLabel == '0':
strForm.append( 'ROOT' )
strForm.append( '\t' )
else:
strForm.append( firstSyntaxRel[0] )
strForm.append( '\t' )
# *** PHEAD
strForm.append( '_' )
strForm.append( '\t' )
# *** PDEPREL
strForm.append( '_' )
sentenceStrs.append( ''.join( strForm ) )
sentenceStrs.append( '' )
return '\n'.join( sentenceStrs ) | Converts given estnltk Text object into CONLL format and returns as a
string.
Uses given *feature_generator* to produce fields ID, FORM, LEMMA, CPOSTAG,
POSTAG, FEATS for each token.
Fills fields to predict (HEAD, DEPREL) with the syntactic information from
given *layer* (default: LAYER_CONLL).
This method is used in preparing training data for MaltParser.
Parameters
-----------
text : estnltk.text.Text
Morphologically analysed text from which the CONLL file is generated;
feature_generator : CONLLFeatGenerator
An instance of CONLLFeatGenerator, which has method *generate_features()*
for generating morphological features for a single token;
layer : str
Name of the *text* layer from which syntactic information is to be taken.
Defaults to LAYER_CONLL.
The aimed format looks something like this:
1 Öö öö S S sg|n 2 @SUBJ _ _
2 oli ole V V s 0 ROOT _ _
3 täiesti täiesti D D _ 4 @ADVL _ _
4 tuuletu tuuletu A A sg|n 2 @PRD _ _
5 . . Z Z _ 4 xxx _ _ | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L382-L449 |
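Continuing the previous sketch, the training-data variant fills HEAD and DEPREL from an existing syntax layer. The MaltParser() construction and the CONLLFeatGenerator import are again assumptions.

```python
from estnltk import Text
from estnltk.syntax.parsers import MaltParser
from estnltk.syntax.maltparser_support import CONLLFeatGenerator, convert_text_w_syntax_to_CONLL

text = Text('Öö oli täiesti tuuletu.')
MaltParser().parse_text(text)                       # attaches the LAYER_CONLL layer
conll_str = convert_text_w_syntax_to_CONLL(text, CONLLFeatGenerator())
# HEAD now holds 1-based parent indices (0 for ROOT); DEPREL holds labels such as '@SUBJ'.
```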
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _executeMaltparser | def _executeMaltparser( input_string, maltparser_dir, maltparser_jar, model_name ):
''' Executes Maltparser on given (CONLL-style) input string, and
returns the result. The result is an array of lines from Maltparser's
output.
Parameters
----------
input_string: string
input text in CONLL format;
maltparser_jar: string
name of the Maltparser's jar file that should be executed;
model_name: string
name of the model that should be used;
maltparser_dir: string
the directory containing Maltparser's jar and the model file;
Few of the ideas were also borrowed from NLTK's MaltParser class,
see http://www.nltk.org/_modules/nltk/parse/malt.html for the reference;
'''
temp_input_file = \
tempfile.NamedTemporaryFile(prefix='malt_in.', mode='w', delete=False)
temp_input_file.close()
# We have to open separately here for writing, because Py 2.7 does not support
# passing parameter encoding='utf-8' to the NamedTemporaryFile;
out_f = codecs.open(temp_input_file.name, mode='w', encoding='utf-8')
out_f.write( input_string )
out_f.close()
temp_output_file = tempfile.NamedTemporaryFile(prefix='malt_out.', mode='w', delete=False)
temp_output_file.close()
current_dir = os.getcwd()
os.chdir(maltparser_dir)
cmd = ['java', '-jar', os.path.join(maltparser_dir, maltparser_jar), \
'-c', model_name, \
'-i', temp_input_file.name, \
'-o', temp_output_file.name, \
'-m', 'parse' ]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if p.wait() != 0:
raise Exception(' Error on running Maltparser: ', p.stderr.read() )
os.chdir(current_dir)
results = []
in_f = codecs.open(temp_output_file.name, mode='r', encoding='utf-8')
for line in in_f:
results.append( line.rstrip() )
in_f.close()
if not temp_input_file.closed:
raise Exception('Temp input file unclosed!')
if not temp_output_file.closed:
raise Exception('Temp output file unclosed!')
if not out_f.closed:
raise Exception('Output file unclosed!')
if not in_f.closed:
raise Exception('Input file unclosed!')
# TODO: For some reason, the method gives "ResourceWarning: unclosed file"
# in Python 3.4, although, apparently, all file handles seem to be closed;
# Nothing seems to be wrong in Python 2.7;
os.remove(temp_input_file.name)
os.remove(temp_output_file.name)
return results | python | def _executeMaltparser( input_string, maltparser_dir, maltparser_jar, model_name ):
''' Executes Maltparser on given (CONLL-style) input string, and
returns the result. The result is an array of lines from Maltparser's
output.
Parameters
----------
input_string: string
input text in CONLL format;
maltparser_jar: string
name of the Maltparser's jar file that should be executed;
model_name: string
name of the model that should be used;
maltparser_dir: string
the directory containing Maltparser's jar and the model file;
Few of the ideas were also borrowed from NLTK's MaltParser class,
see http://www.nltk.org/_modules/nltk/parse/malt.html for the reference;
'''
temp_input_file = \
tempfile.NamedTemporaryFile(prefix='malt_in.', mode='w', delete=False)
temp_input_file.close()
# We have to open separately here for writing, because Py 2.7 does not support
# passing parameter encoding='utf-8' to the NamedTemporaryFile;
out_f = codecs.open(temp_input_file.name, mode='w', encoding='utf-8')
out_f.write( input_string )
out_f.close()
temp_output_file = tempfile.NamedTemporaryFile(prefix='malt_out.', mode='w', delete=False)
temp_output_file.close()
current_dir = os.getcwd()
os.chdir(maltparser_dir)
cmd = ['java', '-jar', os.path.join(maltparser_dir, maltparser_jar), \
'-c', model_name, \
'-i', temp_input_file.name, \
'-o', temp_output_file.name, \
'-m', 'parse' ]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if p.wait() != 0:
raise Exception(' Error on running Maltparser: ', p.stderr.read() )
os.chdir(current_dir)
results = []
in_f = codecs.open(temp_output_file.name, mode='r', encoding='utf-8')
for line in in_f:
results.append( line.rstrip() )
in_f.close()
if not temp_input_file.closed:
raise Exception('Temp input file unclosed!')
if not temp_output_file.closed:
raise Exception('Temp output file unclosed!')
if not out_f.closed:
raise Exception('Output file unclosed!')
if not in_f.closed:
raise Exception('Input file unclosed!')
# TODO: For some reason, the method gives "ResourceWarning: unclosed file"
# in Python 3.4, although, apparently, all file handles seem to be closed;
# Nothing seems to be wrong in Python 2.7;
os.remove(temp_input_file.name)
os.remove(temp_output_file.name)
return results | Executes Maltparser on given (CONLL-style) input string, and
returns the result. The result is an array of lines from Maltparser's
output.
Parameters
----------
input_string: string
input text in CONLL format;
maltparser_jar: string
name of the Maltparser's jar file that should be executed;
model_name: string
name of the model that should be used;
maltparser_dir: string
the directory containing Maltparser's jar and the model file;
Few of the ideas were also borrowed from NLTK's MaltParser class,
see http://www.nltk.org/_modules/nltk/parse/malt.html for the reference; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L458-L523 |
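For reference, a hedged sketch of calling the helper directly, continuing the earlier CONLL-conversion sketch; the directory, jar and model names below are placeholders rather than values from the source.

```python
conll_input = convert_text_to_CONLL(text, CONLLFeatGenerator())     # as in the earlier sketch
output_lines = _executeMaltparser(conll_input,
                                  '/opt/maltparser',        # placeholder maltparser_dir
                                  'maltparser.jar',         # placeholder jar name
                                  'estonian_model')         # placeholder model name
# Roughly equivalent CLI: java -jar maltparser.jar -c estonian_model -i <in> -o <out> -m parse
```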
estnltk/estnltk | estnltk/syntax/maltparser_support.py | loadCONLLannotations | def loadCONLLannotations( in_file, addDepRels = False, splitIntoSentences = True ):
''' Loads syntactically annotated text from CONLL format input file and
returns as an array of tokens, where each token is represented as
an array in the format:
[sentenceID, wordID, tokenString, morphInfo, selfID, parentID]
If addDepRels == True, then the dependency relation label is also extracted
and added to the end of the array:
[sentenceID, wordID, tokenString, morphInfo, selfID, parentID, depRel]
If splitIntoSentences == True, the array of tokens is further divided
into subarrays representing sentences.
Example input:
2 Monstrumteleskoobid Monstrum_tele_skoop S S prop|pl|nom 0 ROOT _ _
3 ( ( Z Z Opr 4 xxx _ _
4 mosaiik- mosaiik A A pos|sg|nom 2 @<AN _ _
5 ja ja J J crd 6 @J _ _
6 mitmepeeglilised mitme_peegli=line A A pos|pl|nom 4 @<NN _ _
7 ) ) Z Z Cpr 6 xxx _ _
8 . . Z Z Fst 7 xxx _ _
'''
sentenceCount = 0
wordCountInSent = 0
tokens = []
in_f = codecs.open(in_file, mode='r', encoding='utf-8')
for line in in_f:
line = line.rstrip()
if len(line) == 0 or re.match(r'^\s+$', line):
sentenceCount += 1
wordCountInSent = 0
continue
features = line.split('\t')
if len(features) != 10:
raise Exception(' In file '+in_file+', line with unexpected format: "'+line+'" ')
selfLabel = features[0]
token = features[1]
lemma = features[2]
cpos = features[3]
pos = features[4]
form = features[5]
parentLabel = features[6]
tokens.append( [ str(sentenceCount), str(wordCountInSent), \
token, lemma+" "+pos+" "+form, selfLabel, parentLabel ] )
if addDepRels:
tokens[-1].append( features[7] )
wordCountInSent += 1
in_f.close()
if not splitIntoSentences:
return tokens
else:
sentences = []
lastSentID = ''
for tok in tokens:
if tok[0] != lastSentID:
sentences.append([])
sentences[-1].append(tok)
lastSentID = tok[0]
return sentences | python | def loadCONLLannotations( in_file, addDepRels = False, splitIntoSentences = True ):
''' Loads syntactically annotated text from CONLL format input file and
returns as an array of tokens, where each token is represented as
an array in the format:
[sentenceID, wordID, tokenString, morphInfo, selfID, parentID]
If addDepRels == True, then the dependency relation label is also extracted
and added to the end of the array:
[sentenceID, wordID, tokenString, morphInfo, selfID, parentID, depRel]
If splitIntoSentences == True, the array of tokens is further divided
into subarrays representing sentences.
Example input:
2 Monstrumteleskoobid Monstrum_tele_skoop S S prop|pl|nom 0 ROOT _ _
3 ( ( Z Z Opr 4 xxx _ _
4 mosaiik- mosaiik A A pos|sg|nom 2 @<AN _ _
5 ja ja J J crd 6 @J _ _
6 mitmepeeglilised mitme_peegli=line A A pos|pl|nom 4 @<NN _ _
7 ) ) Z Z Cpr 6 xxx _ _
8 . . Z Z Fst 7 xxx _ _
'''
sentenceCount = 0
wordCountInSent = 0
tokens = []
in_f = codecs.open(in_file, mode='r', encoding='utf-8')
for line in in_f:
line = line.rstrip()
if len(line) == 0 or re.match(r'^\s+$', line):
sentenceCount += 1
wordCountInSent = 0
continue
features = line.split('\t')
if len(features) != 10:
raise Exception(' In file '+in_file+', line with unexpected format: "'+line+'" ')
selfLabel = features[0]
token = features[1]
lemma = features[2]
cpos = features[3]
pos = features[4]
form = features[5]
parentLabel = features[6]
tokens.append( [ str(sentenceCount), str(wordCountInSent), \
token, lemma+" "+pos+" "+form, selfLabel, parentLabel ] )
if addDepRels:
tokens[-1].append( features[7] )
wordCountInSent += 1
in_f.close()
if not splitIntoSentences:
return tokens
else:
sentences = []
lastSentID = ''
for tok in tokens:
if tok[0] != lastSentID:
sentences.append([])
sentences[-1].append(tok)
lastSentID = tok[0]
return sentences | Loads syntactically annotated text from CONLL format input file and
returns as an array of tokens, where each token is represented as
an array in the format:
[sentenceID, wordID, tokenString, morphInfo, selfID, parentID]
If addDepRels == True, then the dependency relation label is also extracted
and added to the end of the array:
[sentenceID, wordID, tokenString, morphInfo, selfID, parentID, depRel]
If splitIntoSentences == True, the array of tokens is further divided
into subarrays representing sentences.
Example input:
2 Monstrumteleskoobid Monstrum_tele_skoop S S prop|pl|nom 0 ROOT _ _
3 ( ( Z Z Opr 4 xxx _ _
4 mosaiik- mosaiik A A pos|sg|nom 2 @<AN _ _
5 ja ja J J crd 6 @J _ _
6 mitmepeeglilised mitme_peegli=line A A pos|pl|nom 4 @<NN _ _
7 ) ) Z Z Cpr 6 xxx _ _
8 . . Z Z Fst 7 xxx _ _ | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L531-L588 |
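A hedged usage sketch; 'treebank.conll' is a placeholder file in the ten-column format shown in the docstring.

```python
from estnltk.syntax.maltparser_support import loadCONLLannotations

sentences = loadCONLLannotations('treebank.conll', addDepRels=True, splitIntoSentences=True)
for tok in sentences[0]:
    # [sentenceID, wordID, tokenString, 'lemma pos form', selfID, parentID, depRel]
    sent_id, word_id, token, morph, self_id, parent_id, deprel = tok
    print(token, self_id, '->', parent_id, deprel)
```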
estnltk/estnltk | estnltk/syntax/maltparser_support.py | convertCONLLtoText | def convertCONLLtoText( in_file, addDepRels = False, verbose = False, **kwargs ):
''' Loads CONLL format data from given input file, and creates
estnltk Text objects from the data, one Text per each
sentence. Returns a list of Text objects.
By default, applies estnltk's morphological analysis, clause
detection, and verb chain detection to each input sentence.
If addDepRels == True, in addition to SYNTAX_LABEL and SYNTAX_HEAD,
surface syntactic function (DEPREL) is also attributed to each
token;
'''
from estnltk.text import Text
sentences = loadCONLLannotations( in_file, addDepRels = addDepRels, \
splitIntoSentences = True )
if verbose:
print( str(len(sentences))+' sentences loaded. ')
estnltkSentTexts = []
for i in range(len(sentences)):
s = sentences[i]
sentenceString = " ".join( [ t[2] for t in s ] )
sentText = Text(sentenceString, **kwargs)
sentText.tag_analysis()
sentText.tag_clauses()
sentText.tag_verb_chains()
sentText = dict(sentText)
if len(sentText[WORDS]) == len(s):
# Add the dependency syntactic information
for j in range(len(sentText[WORDS])):
estnltkWord = sentText[WORDS][j]
depSyntaxWord = s[j]
estnltkWord[SYNTAX_LABEL] = depSyntaxWord[4]
estnltkWord[SYNTAX_HEAD] = depSyntaxWord[5]
if addDepRels:
estnltkWord[DEPREL] = depSyntaxWord[6]
estnltkSentTexts.append( sentText )
if verbose:
print ('*', end = '')
else:
if verbose:
print("The sentence segmentation of dependency syntax differs from the estnltk's sentence segmentation:", len(sentText[WORDS]), ' vs ',len(s))
return estnltkSentTexts | python | def convertCONLLtoText( in_file, addDepRels = False, verbose = False, **kwargs ):
''' Loads CONLL format data from given input file, and creates
estnltk Text objects from the data, one Text per each
sentence. Returns a list of Text objects.
By default, applies estnltk's morphological analysis, clause
detection, and verb chain detection to each input sentence.
If addDepRels == True, in addition to SYNTAX_LABEL and SYNTAX_HEAD,
surface syntactic function (DEPREL) is also attributed to each
token;
'''
from estnltk.text import Text
sentences = loadCONLLannotations( in_file, addDepRels = addDepRels, \
splitIntoSentences = True )
if verbose:
print( str(len(sentences))+' sentences loaded. ')
estnltkSentTexts = []
for i in range(len(sentences)):
s = sentences[i]
sentenceString = " ".join( [ t[2] for t in s ] )
sentText = Text(sentenceString, **kwargs)
sentText.tag_analysis()
sentText.tag_clauses()
sentText.tag_verb_chains()
sentText = dict(sentText)
if len(sentText[WORDS]) == len(s):
# Add the dependency syntactic information
for j in range(len(sentText[WORDS])):
estnltkWord = sentText[WORDS][j]
depSyntaxWord = s[j]
estnltkWord[SYNTAX_LABEL] = depSyntaxWord[4]
estnltkWord[SYNTAX_HEAD] = depSyntaxWord[5]
if addDepRels:
estnltkWord[DEPREL] = depSyntaxWord[6]
estnltkSentTexts.append( sentText )
if verbose:
print ('*', end = '')
else:
if verbose:
print("The sentence segmentation of dependency syntax differs from the estnltk's sentence segmentation:", len(sentText[WORDS]), ' vs ',len(s))
return estnltkSentTexts | Loads CONLL format data from given input file, and creates
estnltk Text objects from the data, one Text per each
sentence. Returns a list of Text objects.
By default, applies estnltk's morphological analysis, clause
detection, and verb chain detection to each input sentence.
If addDepRels == True, in addition to SYNTAX_LABEL and SYNTAX_HEAD,
surface syntactic function (DEPREL) is also attributed to each
token; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L591-L632 |
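A hedged usage sketch for the converter above: 'parsed.conll' is a placeholder file, and importing the layer/attribute constants through the same module is an assumption based on the function body referencing them.

from estnltk.syntax.maltparser_support import (convertCONLLtoText, TEXT, WORDS,
                                               SYNTAX_LABEL, SYNTAX_HEAD, DEPREL)

# Convert gold CONLL annotations into estnltk sentence dicts and inspect
# the syntactic attributes attached to each word of the first sentence.
sent_dicts = convertCONLLtoText('parsed.conll', addDepRels=True, verbose=False)
for word in sent_dicts[0][WORDS]:
    print(word[TEXT], word[SYNTAX_LABEL], word[SYNTAX_HEAD], word[DEPREL])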
estnltk/estnltk | estnltk/syntax/maltparser_support.py | augmentTextWithCONLLstr | def augmentTextWithCONLLstr( conll_str_array, text ):
''' Augments given Text object with the information from Maltparser's output.
More specifically, adds information about SYNTAX_LABEL, SYNTAX_HEAD and
DEPREL to each token in the Text object;
'''
j = 0
for sentence in text.divide( layer=WORDS, by=SENTENCES ):
sentence = __sort_analyses(sentence)
for i in range(len(sentence)):
estnltkToken = sentence[i]
maltparserToken = conll_str_array[j]
if len( maltparserToken ) > 1:
maltParserAnalysis = maltparserToken.split('\t')
if estnltkToken[TEXT] == maltParserAnalysis[1]:
# Fetch information about the syntactic relation:
estnltkToken[SYNTAX_LABEL] = maltParserAnalysis[0]
estnltkToken[SYNTAX_HEAD] = maltParserAnalysis[6]
# Fetch the name of the surface syntactic relation
estnltkToken[DEPREL] = maltParserAnalysis[7]
else:
raise Exception("A misalignment between Text and Maltparser's output: ",\
estnltkToken, maltparserToken )
j += 1
j += 1 | python | def augmentTextWithCONLLstr( conll_str_array, text ):
''' Augments given Text object with the information from Maltparser's output.
More specifically, adds information about SYNTAX_LABEL, SYNTAX_HEAD and
DEPREL to each token in the Text object;
'''
j = 0
for sentence in text.divide( layer=WORDS, by=SENTENCES ):
sentence = __sort_analyses(sentence)
for i in range(len(sentence)):
estnltkToken = sentence[i]
maltparserToken = conll_str_array[j]
if len( maltparserToken ) > 1:
maltParserAnalysis = maltparserToken.split('\t')
if estnltkToken[TEXT] == maltParserAnalysis[1]:
# Fetch information about the syntactic relation:
estnltkToken[SYNTAX_LABEL] = maltParserAnalysis[0]
estnltkToken[SYNTAX_HEAD] = maltParserAnalysis[6]
# Fetch the name of the surface syntactic relation
estnltkToken[DEPREL] = maltParserAnalysis[7]
else:
raise Exception("A misalignment between Text and Maltparser's output: ",\
estnltkToken, maltparserToken )
j += 1
j += 1 | Augments given Text object with the information from Maltparser's output.
More specifically, adds information about SYNTAX_LABEL, SYNTAX_HEAD and
DEPREL to each token in the Text object; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L635-L658 |
estnltk/estnltk | estnltk/syntax/maltparser_support.py | align_CONLL_with_Text | def align_CONLL_with_Text( lines, text, feature_generator, **kwargs ):
''' Aligns CONLL format syntactic analysis (a list of strings) with given EstNLTK's Text
object.
Basically, for each word position in the Text object, finds corresponding line(s) in
the CONLL format output;
Returns a list of dicts, where each dict has following attributes:
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be the CONLL format syntactic analysis;
text : Text
EstNLTK Text object containing the original text that was analysed with
MaltParser;
feature_generator : CONLLFeatGenerator
The instance of CONLLFeatGenerator, which was used for generating the input of
the MaltParser; If None, assumes a default feature-generator with the scope set
to 'sentences';
check_tokens : bool
Optional argument specifying whether tokens should be checked for match
during the alignment. In case of a mismatch, an exception is raised.
Default:False
add_word_ids : bool
Optional argument specifying whether each alignment should include attributes:
* 'text_word_id' - current word index in the whole Text, starting from 0;
* 'sent_word_id' - index of the current word in the sentence, starting from 0;
Default:False
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
try:
granularity = feature_generator.parseScope
except (AttributeError, NameError):
granularity = SENTENCES
assert granularity in [SENTENCES, CLAUSES], '(!) Unsupported granularity: "'+str(granularity)+'"!'
check_tokens = False
add_word_ids = False
for argName, argVal in kwargs.items() :
if argName in ['check_tokens', 'check'] and argVal in [True, False]:
check_tokens = argVal
if argName in ['add_word_ids', 'word_ids'] and argVal in [True, False]:
add_word_ids = argVal
generalWID = 0
sentenceID = 0
# Collect clause indices for each sentence (if required)
clause_indices = None
if granularity == CLAUSES:
c = 0
all_clause_indices = text.clause_indices
clause_indices = []
for sentence_words in text.divide( layer=WORDS, by=SENTENCES ):
clause_indices.append([])
for wid, estnltkToken in enumerate( sentence_words ):
clause_indices[-1].append( all_clause_indices[c] )
c += 1
# Iterate over the sentences and perform the alignment
results = []
j = 0
for sentence_words in text.divide( layer=WORDS, by=SENTENCES ):
tokens_to_collect = len( sentence_words )
tokens_collected = 0
chunks = [[]]
while j < len(lines):
maltparserToken = lines[j]
if len( maltparserToken ) > 1 and '\t' in maltparserToken:
# extend the existing clause chunk
token_dict = { 't':maltparserToken, \
'w':(maltparserToken.split('\t'))[1] }
chunks[-1].append( token_dict )
tokens_collected += 1
else:
# create a new clause chunk
if len(chunks[-1]) != 0:
chunks.append( [] )
j += 1
if tokens_to_collect == tokens_collected:
break
if tokens_to_collect != tokens_collected: # a sanity check
raise Exception('(!) Unable to collect the following sentence from the output of MaltParser: "'+\
str(sentence_words)+'"')
# 2) Put the sentence back together
if granularity == SENTENCES:
# A. The easy case: sentence-wise splitting was used
for wid, estnltkToken in enumerate( sentence_words ):
maltparserToken = chunks[0][wid]['t']
if check_tokens and estnltkToken[TEXT] != chunks[0][wid]['w']:
raise Exception("(!) A misalignment between Text and CONLL: ",\
estnltkToken, maltparserToken )
# Populate the alignment
result_dict = { START:estnltkToken[START], END:estnltkToken[END], \
SENT_ID:sentenceID, PARSER_OUT: [maltparserToken] }
if add_word_ids:
result_dict['text_word_id'] = generalWID # word id in the text
result_dict['sent_word_id'] = wid # word id in the sentence
results.append( result_dict )
generalWID += 1
elif granularity == CLAUSES:
# B. The tricky case: clause-wise splitting was used
results_by_wid = {}
# B.1 Try to find the location of each chunk in the original text
cl_ind = clause_indices[sentenceID]
for chunk_id, chunk in enumerate(chunks):
firstWord = chunk[0]['w']
chunkLen = len(chunk)
estnltk_token_ids = []
seen_clause_ids = {}
for wid, estnltkToken in enumerate( sentence_words ):
# Try to recollect tokens of the clause starting from location wid
if estnltkToken[TEXT] == firstWord and \
wid+chunkLen <= len(sentence_words) and cl_ind[wid] not in seen_clause_ids:
clause_index = cl_ind[wid]
i = wid
while i < len(sentence_words):
if cl_ind[i] == clause_index:
estnltk_token_ids.append( i )
i += 1
# Remember that we have already seen this clause
# (in order to avoid start collecting from the middle of the clause)
seen_clause_ids[cl_ind[wid]] = 1
if len(estnltk_token_ids) == chunkLen:
break
else:
estnltk_token_ids = []
if len(estnltk_token_ids) == chunkLen:
# Align the CONLL clause with the clause from the original estnltk Text
for wid, estnltk_wid in enumerate(estnltk_token_ids):
estnltkToken = sentence_words[estnltk_wid]
maltparserToken = chunk[wid]['t']
if check_tokens and estnltkToken[TEXT] != chunk[wid]['w']:
raise Exception("(!) A misalignment between Text and CONLL: ",\
estnltkToken, maltparserToken )
# Convert indices: from clause indices to sentence indices
tokenFields = maltparserToken.split('\t')
if tokenFields[6] != '0':
in_clause_index = int(tokenFields[6])-1
assert in_clause_index in range(0, len(estnltk_token_ids)), \
'(!) Unexpected clause index from CONLL: '+str(in_clause_index)+\
' \ '+str(len(estnltk_token_ids))
in_sent_index = estnltk_token_ids[in_clause_index]+1
tokenFields[6] = str(in_sent_index)
tokenFields[0] = str(estnltk_wid+1)
maltparserToken = '\t'.join(tokenFields)
# Populate the alignment
result_dict = { START:estnltkToken[START], END:estnltkToken[END], \
SENT_ID:sentenceID, PARSER_OUT: [maltparserToken] }
results_by_wid[estnltk_wid] = result_dict
else:
raise Exception('(!) Unable to locate the clause in the original input: '+str(chunk))
if len(results_by_wid.keys()) != len(sentence_words):
raise Exception('(!) Error in aligning Text and CONLL - token counts not matching:'+\
str(len(results_by_wid.keys()))+ ' vs '+str(len(sentence_words)) )
# B.2 Put the sentence back together
for wid in sorted(results_by_wid.keys()):
if add_word_ids:
results_by_wid[wid]['text_word_id'] = generalWID # word id in the text
results_by_wid[wid]['sent_word_id'] = wid # word id in the sentence
results.append( results_by_wid[wid] )
generalWID += 1
sentenceID += 1
return results | python | def align_CONLL_with_Text( lines, text, feature_generator, **kwargs ):
''' Aligns CONLL format syntactic analysis (a list of strings) with given EstNLTK's Text
object.
Basically, for each word position in the Text object, finds corresponding line(s) in
the CONLL format output;
Returns a list of dicts, where each dict has following attributes:
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be the CONLL format syntactic analysis;
text : Text
EstNLTK Text object containing the original text that was analysed with
MaltParser;
feature_generator : CONLLFeatGenerator
The instance of CONLLFeatGenerator, which was used for generating the input of
the MaltParser; If None, assumes a default feature-generator with the scope set
to 'sentences';
check_tokens : bool
Optional argument specifying whether tokens should be checked for match
during the alignment. In case of a mismatch, an exception is raised.
Default:False
add_word_ids : bool
Optional argument specifying whether each alignment should include attributes:
* 'text_word_id' - current word index in the whole Text, starting from 0;
* 'sent_word_id' - index of the current word in the sentence, starting from 0;
Default:False
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
try:
granularity = feature_generator.parseScope
except (AttributeError, NameError):
granularity = SENTENCES
assert granularity in [SENTENCES, CLAUSES], '(!) Unsupported granularity: "'+str(granularity)+'"!'
check_tokens = False
add_word_ids = False
for argName, argVal in kwargs.items() :
if argName in ['check_tokens', 'check'] and argVal in [True, False]:
check_tokens = argVal
if argName in ['add_word_ids', 'word_ids'] and argVal in [True, False]:
add_word_ids = argVal
generalWID = 0
sentenceID = 0
# Collect clause indices for each sentence (if required)
clause_indices = None
if granularity == CLAUSES:
c = 0
all_clause_indices = text.clause_indices
clause_indices = []
for sentence_words in text.divide( layer=WORDS, by=SENTENCES ):
clause_indices.append([])
for wid, estnltkToken in enumerate( sentence_words ):
clause_indices[-1].append( all_clause_indices[c] )
c += 1
# Iterate over the sentences and perform the alignment
results = []
j = 0
for sentence_words in text.divide( layer=WORDS, by=SENTENCES ):
tokens_to_collect = len( sentence_words )
tokens_collected = 0
chunks = [[]]
while j < len(lines):
maltparserToken = lines[j]
if len( maltparserToken ) > 1 and '\t' in maltparserToken:
# extend the existing clause chunk
token_dict = { 't':maltparserToken, \
'w':(maltparserToken.split('\t'))[1] }
chunks[-1].append( token_dict )
tokens_collected += 1
else:
# create a new clause chunk
if len(chunks[-1]) != 0:
chunks.append( [] )
j += 1
if tokens_to_collect == tokens_collected:
break
if tokens_to_collect != tokens_collected: # a sanity check
raise Exception('(!) Unable to collect the following sentence from the output of MaltParser: "'+\
str(sentence_words)+'"')
# 2) Put the sentence back together
if granularity == SENTENCES:
# A. The easy case: sentence-wise splitting was used
for wid, estnltkToken in enumerate( sentence_words ):
maltparserToken = chunks[0][wid]['t']
if check_tokens and estnltkToken[TEXT] != chunks[0][wid]['w']:
raise Exception("(!) A misalignment between Text and CONLL: ",\
estnltkToken, maltparserToken )
# Populate the alignment
result_dict = { START:estnltkToken[START], END:estnltkToken[END], \
SENT_ID:sentenceID, PARSER_OUT: [maltparserToken] }
if add_word_ids:
result_dict['text_word_id'] = generalWID # word id in the text
result_dict['sent_word_id'] = wid # word id in the sentence
results.append( result_dict )
generalWID += 1
elif granularity == CLAUSES:
# B. The tricky case: clause-wise splitting was used
results_by_wid = {}
# B.1 Try to find the location of each chunk in the original text
cl_ind = clause_indices[sentenceID]
for chunk_id, chunk in enumerate(chunks):
firstWord = chunk[0]['w']
chunkLen = len(chunk)
estnltk_token_ids = []
seen_clause_ids = {}
for wid, estnltkToken in enumerate( sentence_words ):
# Try to recollect tokens of the clause starting from location wid
if estnltkToken[TEXT] == firstWord and \
wid+chunkLen <= len(sentence_words) and cl_ind[wid] not in seen_clause_ids:
clause_index = cl_ind[wid]
i = wid
while i < len(sentence_words):
if cl_ind[i] == clause_index:
estnltk_token_ids.append( i )
i += 1
# Remember that we have already seen this clause
# (in order to avoid start collecting from the middle of the clause)
seen_clause_ids[cl_ind[wid]] = 1
if len(estnltk_token_ids) == chunkLen:
break
else:
estnltk_token_ids = []
if len(estnltk_token_ids) == chunkLen:
# Align the CONLL clause with the clause from the original estnltk Text
for wid, estnltk_wid in enumerate(estnltk_token_ids):
estnltkToken = sentence_words[estnltk_wid]
maltparserToken = chunk[wid]['t']
if check_tokens and estnltkToken[TEXT] != chunk[wid]['w']:
raise Exception("(!) A misalignment between Text and CONLL: ",\
estnltkToken, maltparserToken )
# Convert indices: from clause indices to sentence indices
tokenFields = maltparserToken.split('\t')
if tokenFields[6] != '0':
in_clause_index = int(tokenFields[6])-1
assert in_clause_index in range(0, len(estnltk_token_ids)), \
'(!) Unexpected clause index from CONLL: '+str(in_clause_index)+\
' \ '+str(len(estnltk_token_ids))
in_sent_index = estnltk_token_ids[in_clause_index]+1
tokenFields[6] = str(in_sent_index)
tokenFields[0] = str(estnltk_wid+1)
maltparserToken = '\t'.join(tokenFields)
# Populate the alignment
result_dict = { START:estnltkToken[START], END:estnltkToken[END], \
SENT_ID:sentenceID, PARSER_OUT: [maltparserToken] }
results_by_wid[estnltk_wid] = result_dict
else:
raise Exception('(!) Unable to locate the clause in the original input: '+str(chunk))
if len(results_by_wid.keys()) != len(sentence_words):
raise Exception('(!) Error in aligning Text and CONLL - token counts not matching:'+\
str(len(results_by_wid.keys()))+ ' vs '+str(len(sentence_words)) )
# B.2 Put the sentence back together
for wid in sorted(results_by_wid.keys()):
if add_word_ids:
results_by_wid[wid]['text_word_id'] = generalWID # word id in the text
results_by_wid[wid]['sent_word_id'] = wid # word id in the sentence
results.append( results_by_wid[wid] )
generalWID += 1
sentenceID += 1
return results | Aligns CONLL format syntactic analysis (a list of strings) with given EstNLTK's Text
object.
Basically, for each word position in the Text object, finds corresponding line(s) in
the CONLL format output;
Returns a list of dicts, where each dict has following attributes:
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be the CONLL format syntactic analysis;
text : Text
EstNLTK Text object containing the original text that was analysed with
MaltParser;
feature_generator : CONLLFeatGenerator
The instance of CONLLFeatGenerator, which was used for generating the input of
the MaltParser; If None, assumes a default feature-generator with the scope set
to 'sentences';
check_tokens : bool
Optional argument specifying whether tokens should be checked for match
during the alignment. In case of a mismatch, an exception is raised.
Default:False
add_word_ids : bool
Optional argument specifying whether each alignment should include attributes:
* 'text_word_id' - current word index in the whole Text, starting from 0;
* 'sent_word_id' - index of the current word in the sentence, starting from 0;
Default:False | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L661-L830 |
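A sketch of the alignment step under stated assumptions: the CONLL lines are expected to come from an earlier MaltParser run (run_maltparser_on_text below is a hypothetical placeholder for that step), and passing feature_generator=None falls back to sentence-level granularity, as the code above does.

from estnltk.text import Text
from estnltk.syntax.maltparser_support import align_CONLL_with_Text

text = Text('Ilus ilm. Päike paistab.')
text.tag_analysis()
conll_lines = run_maltparser_on_text(text)   # hypothetical helper returning CONLL lines
alignments = align_CONLL_with_Text(conll_lines, text, None,
                                   check_tokens=True, add_word_ids=True)
for a in alignments:
    # each dict carries 'start', 'end', 'sent_id' and the raw parser line(s)
    print(a['start'], a['end'], a['sent_id'], a['parser_out'][0])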
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _get_clause_words | def _get_clause_words( sentence_text, clause_id ):
''' Collects clause with index *clause_id* from given *sentence_text*.
Returns a pair (clause, isEmbedded), where:
*clause* is a list of word tokens in the clause;
*isEmbedded* is a bool indicating whether the clause is embedded;
'''
clause = []
isEmbedded = False
indices = sentence_text.clause_indices
clause_anno = sentence_text.clause_annotations
for wid, token in enumerate(sentence_text[WORDS]):
if indices[wid] == clause_id:
if not clause and clause_anno[wid] == EMBEDDED_CLAUSE_START:
isEmbedded = True
clause.append((wid, token))
return clause, isEmbedded | python | def _get_clause_words( sentence_text, clause_id ):
''' Collects clause with index *clause_id* from given *sentence_text*.
Returns a pair (clause, isEmbedded), where:
*clause* is a list of word tokens in the clause;
*isEmbedded* is a bool indicating whether the clause is embedded;
'''
clause = []
isEmbedded = False
indices = sentence_text.clause_indices
clause_anno = sentence_text.clause_annotations
for wid, token in enumerate(sentence_text[WORDS]):
if indices[wid] == clause_id:
if not clause and clause_anno[wid] == EMBEDDED_CLAUSE_START:
isEmbedded = True
clause.append((wid, token))
return clause, isEmbedded | Collects clause with index *clause_id* from given *sentence_text*.
Returns a pair (clause, isEmbedded), where:
*clause* is a list of word tokens in the clause;
*isEmbedded* is a bool indicating whether the clause is embedded; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L893-L908 |
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _detect_quotes | def _detect_quotes( sentence_text, wid, fromRight = True ):
''' Searches for quotation marks (both opening and closing) closest to
given location in sentence (given as word index *wid*);
If *fromRight == True* (default), searches from the right (all the
words having index greater than *wid*), otherwise, searches from the
left (all the words having index smaller than *wid*);
Returns index of the closest quotation mark found, or -1, if none was
found;
'''
i = wid
while (i > -1 and i < len(sentence_text[WORDS])):
token = sentence_text[WORDS][i]
if _pat_starting_quote.match(token[TEXT]) or \
_pat_ending_quote.match(token[TEXT]):
return i
i += 1 if fromRight else -1
return -1 | python | def _detect_quotes( sentence_text, wid, fromRight = True ):
''' Searches for quotation marks (both opening and closing) closest to
given location in sentence (given as word index *wid*);
If *fromRight == True* (default), searches from the right (all the
words having index greater than *wid*), otherwise, searches from the
left (all the words having index smaller than *wid*);
Returns index of the closest quotation mark found, or -1, if none was
found;
'''
i = wid
while (i > -1 and i < len(sentence_text[WORDS])):
token = sentence_text[WORDS][i]
if _pat_starting_quote.match(token[TEXT]) or \
_pat_ending_quote.match(token[TEXT]):
return i
i += 1 if fromRight else -1
return -1 | Searches for quotation marks (both opening and closing) closest to
given location in sentence (given as word index *wid*);
If *fromRight == True* (default), searches from the right (all the
words having index greater than *wid*), otherwise, searches from the
left (all the words having index smaller than *wid*);
Returns index of the closest quotation mark found, or -1, if none was
found; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L915-L933 |
estnltk/estnltk | estnltk/syntax/maltparser_support.py | detect_sentence_ending_saying_verbs | def detect_sentence_ending_saying_verbs( edt_sent_text ):
''' Detects cases where a saying verb (potential root of the sentence) ends the sentence.
We use a simple heuristic: if the given sentence has multiple clauses, and the last main
verb in the sentence is preceded by ", but is not followed by ", then the main verb is
most likely a saying verb.
Examples:
" See oli ainult unes , " [vaidles] Jan .
" Ma ei maga enam Joogaga ! " [protesteerisin] .
" Mis mõttega te jama suust välja ajate ? " [läks] Janil nüüd juba hari punaseks .
Note that the class of saying verbs is open, so we try not to rely on a listing of verbs,
but rather on the conventional usage patterns of reported speech, indicated by quotation
marks.
Returns a dict containing word indexes of saying verbs;
'''
from estnltk.mw_verbs.utils import WordTemplate
if not edt_sent_text.is_tagged( VERB_CHAINS ):
edt_sent_text.tag_verb_chains()
saying_verbs = {}
if len(edt_sent_text[VERB_CHAINS]) < 2:
# Skip sentences that do not have any chains, or
# have only a single verb chain
return saying_verbs
patColon = WordTemplate({'partofspeech':'^[Z]$', 'text': '^:$'})
for vid, vc in enumerate( edt_sent_text[VERB_CHAINS] ):
#
# Look only multi-clause sentences, where the last verb chain has length 1
#
if len(vc['phrase']) == 1 and vid == len(edt_sent_text[VERB_CHAINS])-1:
wid = vc['phrase'][0]
token = edt_sent_text[WORDS][wid]
clause_id = vc[CLAUSE_IDX]
# Find corresponding clause and locations of quotation marks
clause, insideEmbeddedCl = _get_clause_words( edt_sent_text, clause_id )
quoteLeft = _detect_quotes( edt_sent_text, wid, fromRight = False )
quoteRight = _detect_quotes( edt_sent_text, wid, fromRight = True )
#
# Exclude cases, where there are double quotes within the same clause:
# ... ootab igaüks ,] [kuidas aga kähku tagasi " varrastusse " <saaks> .]
# ... miljonäre on ka nende seas ,] [kes oma “ papi ” mustas äris <teenivad> .]
#
quotes_in_clause = []
for (wid2, token2) in clause:
if _pat_starting_quote.match(token2[TEXT]) or \
_pat_ending_quote.match(token2[TEXT]):
quotes_in_clause.append(wid2)
multipleQuotes = len(quotes_in_clause) > 1 and quotes_in_clause[-1]==quoteLeft
#
# If the preceding double quotes are not within the same clause, and
# the verb is not within an embedded clause, and a quotation mark strictly
# precedes, but none follows, then we have most likely a saying verb:
# " Ma ei tea , " [kehitan] õlga .
# " Miks jumal meid karistab ? " [mõtles] sir Galahad .
# " Kaarsild pole teatavastki elusolend , " [lõpetasin] arutelu .
#
if not multipleQuotes and \
not insideEmbeddedCl and \
(quoteLeft != -1 and quoteLeft+1 == wid and quoteRight == -1):
saying_verbs[wid] = 'se_saying_verb'
return saying_verbs | python | def detect_sentence_ending_saying_verbs( edt_sent_text ):
''' Detects cases where a saying verb (potential root of the sentence) ends the sentence.
We use a simple heuristic: if the given sentence has multiple clauses, and the last main
verb in the sentence is preceded by ", but is not followed by ", then the main verb is
most likely a saying verb.
Examples:
" See oli ainult unes , " [vaidles] Jan .
" Ma ei maga enam Joogaga ! " [protesteerisin] .
" Mis mõttega te jama suust välja ajate ? " [läks] Janil nüüd juba hari punaseks .
Note that the class of saying verbs is open, so we try not to rely on a listing of verbs,
but rather on the conventional usage patterns of reported speech, indicated by quotation
marks.
Returns a dict containing word indexes of saying verbs;
'''
from estnltk.mw_verbs.utils import WordTemplate
if not edt_sent_text.is_tagged( VERB_CHAINS ):
edt_sent_text.tag_verb_chains()
saying_verbs = {}
if len(edt_sent_text[VERB_CHAINS]) < 2:
# Skip sentences that do not have any chains, or
# have only a single verb chain
return saying_verbs
patColon = WordTemplate({'partofspeech':'^[Z]$', 'text': '^:$'})
for vid, vc in enumerate( edt_sent_text[VERB_CHAINS] ):
#
# Look only multi-clause sentences, where the last verb chain has length 1
#
if len(vc['phrase']) == 1 and vid == len(edt_sent_text[VERB_CHAINS])-1:
wid = vc['phrase'][0]
token = edt_sent_text[WORDS][wid]
clause_id = vc[CLAUSE_IDX]
# Find corresponding clause and locations of quotation marks
clause, insideEmbeddedCl = _get_clause_words( edt_sent_text, clause_id )
quoteLeft = _detect_quotes( edt_sent_text, wid, fromRight = False )
quoteRight = _detect_quotes( edt_sent_text, wid, fromRight = True )
#
# Exclude cases, where there are double quotes within the same clause:
# ... ootab igaüks ,] [kuidas aga kähku tagasi " varrastusse " <saaks> .]
# ... miljonäre on ka nende seas ,] [kes oma “ papi ” mustas äris <teenivad> .]
#
quotes_in_clause = []
for (wid2, token2) in clause:
if _pat_starting_quote.match(token2[TEXT]) or \
_pat_ending_quote.match(token2[TEXT]):
quotes_in_clause.append(wid2)
multipleQuotes = len(quotes_in_clause) > 1 and quotes_in_clause[-1]==quoteLeft
#
# If the preceding double quotes are not within the same clause, and
# the verb is not within an embedded clause, and a quotation mark strictly
# precedes, but none follows, then we have most likely a saying verb:
# " Ma ei tea , " [kehitan] õlga .
# " Miks jumal meid karistab ? " [mõtles] sir Galahad .
# " Kaarsild pole teatavastki elusolend , " [lõpetasin] arutelu .
#
if not multipleQuotes and \
not insideEmbeddedCl and \
(quoteLeft != -1 and quoteLeft+1 == wid and quoteRight == -1):
saying_verbs[wid] = 'se_saying_verb'
return saying_verbs | Detects cases where a saying verb (potential root of the sentence) ends the sentence.
We use a simple heuristic: if the given sentence has multiple clauses, and the last main
verb in the sentence is preceded by ", but is not followed by ", then the main verb is
most likely a saying verb.
Examples:
" See oli ainult unes , " [vaidles] Jan .
" Ma ei maga enam Joogaga ! " [protesteerisin] .
" Mis mõttega te jama suust välja ajate ? " [läks] Janil nüüd juba hari punaseks .
Note that the class of saying verbs is open, so we try not to rely on a listing of verbs,
but rather on the conventional usage patterns of reported speech, indicated by quotation
marks.
Returns a dict containing word indexes of saying verbs; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L936-L1002 |
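A small sketch reusing the first example sentence from the docstring; it assumes estnltk's analyzers are available, since the function triggers verb-chain (and, indirectly, clause) tagging itself.

from estnltk.text import Text
from estnltk.syntax.maltparser_support import detect_sentence_ending_saying_verbs

sent = Text('" See oli ainult unes , " vaidles Jan .')
saying = detect_sentence_ending_saying_verbs(sent)
for wid, label in saying.items():
    # expected to flag the sentence-final verb 'vaidles' as 'se_saying_verb'
    print(wid, sent.word_texts[wid], label)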
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _loadKSubcatRelations | def _loadKSubcatRelations( inputFile ):
''' Loads adposition government (subcategorization) relation patterns from the input file (inputFile).
Each pattern must be on a separate line of the file, in the form:
(word_lemma);(part_of_speech);(post|pre);(required_case_regexp)
e.g.
ees;_K_;post;g
eest;_K_;post;g
enne;_K_;pre;p
Returns the loaded data as a dictionary;
'''
kSubCatRelations = dict()
in_f = codecs.open(inputFile, mode='r', encoding='utf-8')
for line in in_f:
line = line.rstrip()
if len(line) > 0 and not re.match("^#.+$", line):
items = line.split(';')
if len(items) == 4:
root = items[0]
partofspeech = items[1]
postPre = items[2]
morphPattern = items[3]
fpattern = '(sg|pl)\s'+morphPattern
if root not in kSubCatRelations:
kSubCatRelations[root] = []
kSubCatRelations[root].append( [postPre, fpattern] )
root_clean = root.replace('_', '')
if root != root_clean:
if root_clean not in kSubCatRelations:
kSubCatRelations[root_clean] = []
kSubCatRelations[root_clean].append( [postPre, fpattern] )
else:
raise Exception(' Unexpected number of items in the input lexicon line: '+line)
in_f.close()
return kSubCatRelations | python | def _loadKSubcatRelations( inputFile ):
''' Loads adposition government (subcategorization) relation patterns from the input file (inputFile).
Each pattern must be on a separate line of the file, in the form:
(word_lemma);(part_of_speech);(post|pre);(required_case_regexp)
e.g.
ees;_K_;post;g
eest;_K_;post;g
enne;_K_;pre;p
Returns the loaded data as a dictionary;
'''
kSubCatRelations = dict()
in_f = codecs.open(inputFile, mode='r', encoding='utf-8')
for line in in_f:
line = line.rstrip()
if len(line) > 0 and not re.match("^#.+$", line):
items = line.split(';')
if len(items) == 4:
root = items[0]
partofspeech = items[1]
postPre = items[2]
morphPattern = items[3]
fpattern = '(sg|pl)\s'+morphPattern
if root not in kSubCatRelations:
kSubCatRelations[root] = []
kSubCatRelations[root].append( [postPre, fpattern] )
root_clean = root.replace('_', '')
if root != root_clean:
if root_clean not in kSubCatRelations:
kSubCatRelations[root_clean] = []
kSubCatRelations[root_clean].append( [postPre, fpattern] )
else:
raise Exception(' Unexpected number of items in the input lexicon line: '+line)
in_f.close()
return kSubCatRelations | Loads adposition government (subcategorization) relation patterns from the input file (inputFile).
Each pattern must be on a separate line of the file, in the form:
(word_lemma);(part_of_speech);(post|pre);(required_case_regexp)
e.g.
ees;_K_;post;g
eest;_K_;post;g
enne;_K_;pre;p
Returns the loaded data as a dictionary; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L1008-L1041 |
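A sketch of the lexicon format in use; the temporary file name is arbitrary, and the helper is private, so this is for illustration only.

import codecs
from estnltk.syntax.maltparser_support import _loadKSubcatRelations

# Write a two-line adposition lexicon in the semicolon-separated format
# described above, then load it back.
with codecs.open('k_subcat_sample.txt', 'w', encoding='utf-8') as f:
    f.write('ees;_K_;post;g\n')
    f.write('enne;_K_;pre;p\n')

lex = _loadKSubcatRelations('k_subcat_sample.txt')
print(lex['ees'])    # [['post', '(sg|pl)\\sg']]  -- postposition requiring genitive
print(lex['enne'])   # [['pre', '(sg|pl)\\sp']]   -- preposition requiring partitive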
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _detectKsubcatRelType | def _detectKsubcatRelType( sentence, i, kSubCatRelsLexicon ):
''' Given the adposition appearing in the sentence at the location i,
checks whether the adposition appears in the kSubCatRelsLexicon,
and if so, attempts to further detect whether the adposition is a
preposition or a postposition;
Returns a tuple (string, int), where the first item indicates the
type of adposition ('pre', 'post', '_'), and the second item points
to its possible child (index of the word in sentence, or -1, if
possible child was not detected from close range);
'''
curToken = sentence[i]
root = curToken[ANALYSIS][0][ROOT]
if root in kSubCatRelsLexicon:
for [postPre, fpattern] in kSubCatRelsLexicon[root]:
if postPre == 'post' and i-1 > -1:
lastTokenAnalysis = sentence[i-1][ANALYSIS][0]
if re.match(fpattern, lastTokenAnalysis[FORM]):
return ('post', i-1)
elif postPre == 'pre' and i+1 < len(sentence):
nextTokenAnalysis = sentence[i+1][ANALYSIS][0]
if re.match(fpattern, nextTokenAnalysis[FORM]):
return ('pre', i+1)
# If the word is not ambiguous between pre and post, but
# the possible child was not detected, return only the
# post/pre label:
if len(kSubCatRelsLexicon[root]) == 1:
return (kSubCatRelsLexicon[root][0][0], -1)
return ('_', -1) | python | def _detectKsubcatRelType( sentence, i, kSubCatRelsLexicon ):
''' Given the adposition appearing in the sentence at the location i,
checks whether the adposition appears in the kSubCatRelsLexicon,
and if so, attempts to further detect whether the adposition is a
preposition or a postposition;
Returns a tuple (string, int), where the first item indicates the
type of adposition ('pre', 'post', '_'), and the second item points
to its possible child (index of the word in sentence, or -1, if
possible child was not detected from close range);
'''
curToken = sentence[i]
root = curToken[ANALYSIS][0][ROOT]
if root in kSubCatRelsLexicon:
for [postPre, fpattern] in kSubCatRelsLexicon[root]:
if postPre == 'post' and i-1 > -1:
lastTokenAnalysis = sentence[i-1][ANALYSIS][0]
if re.match(fpattern, lastTokenAnalysis[FORM]):
return ('post', i-1)
elif postPre == 'pre' and i+1 < len(sentence):
nextTokenAnalysis = sentence[i+1][ANALYSIS][0]
if re.match(fpattern, nextTokenAnalysis[FORM]):
return ('pre', i+1)
# If the word is not ambiguous between pre and post, but
# the possible child was not detected, return only the
# post/pre label:
if len(kSubCatRelsLexicon[root]) == 1:
return (kSubCatRelsLexicon[root][0][0], -1)
return ('_', -1) | Given the adposition appearing in the sentence at the location i,
checks whether the adposition appears in the kSubCatRelsLexicon,
and if so, attempts to further detect whether the adposition is a
preposition or a postposition;
Returns a tuple (string, int), where the first item indicates the
type of adposition ('pre', 'post', '_'), and the second item points
to its possible child (index of the word in sentence, or -1, if
possible child was not detected from close range); | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L1044-L1071 |
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _detectPossibleKsubcatRelsFromSent | def _detectPossibleKsubcatRelsFromSent( sentence, kSubCatRelsLexicon, reverseMapping = False ):
''' Attempts to detect all possible K subcategorization relations from
given sentence, using the heuristic method _detectKsubcatRelType();
Returns a dictionary of relations where the key corresponds to the
index of its parent node (the K node) and the value corresponds to
index of its child.
If reverseMapping = True, the mapping is reversed: keys correspond
to children and values correspond to parent nodes (K-s);
'''
relationIndex = dict()
relationType = dict()
for i in range(len(sentence)):
estnltkWord = sentence[i]
# Pick the first analysis
firstAnalysis = estnltkWord[ANALYSIS][0]
if firstAnalysis[POSTAG] == 'K':
(grammCats, kChild) = _detectKsubcatRelType( sentence, i, kSubCatRelsLexicon )
if kChild != -1:
if reverseMapping:
relationIndex[ kChild ] = i
relationType[ kChild ] = grammCats
else:
relationIndex[ i ] = kChild
relationType[ i ] = grammCats
return relationIndex, relationType | python | def _detectPossibleKsubcatRelsFromSent( sentence, kSubCatRelsLexicon, reverseMapping = False ):
''' Attempts to detect all possible K subcategorization relations from
given sentence, using the heuristic method _detectKsubcatRelType();
Returns a dictionary of relations where the key corresponds to the
index of its parent node (the K node) and the value corresponds to
index of its child.
If reverseMapping = True, the mapping is reversed: keys correspond
to children and values correspond to parent nodes (K-s);
'''
relationIndex = dict()
relationType = dict()
for i in range(len(sentence)):
estnltkWord = sentence[i]
# Pick the first analysis
firstAnalysis = estnltkWord[ANALYSIS][0]
if firstAnalysis[POSTAG] == 'K':
(grammCats, kChild) = _detectKsubcatRelType( sentence, i, kSubCatRelsLexicon )
if kChild != -1:
if reverseMapping:
relationIndex[ kChild ] = i
relationType[ kChild ] = grammCats
else:
relationIndex[ i ] = kChild
relationType[ i ] = grammCats
return relationIndex, relationType | Attempts to detect all possible K subcategorization relations from
given sentence, using the heuristic method _detectKsubcatRelType();
Returns a dictionary of relations where the key corresponds to the
index of its parent node (the K node) and the value corresponds to
index of its child.
If reverseMapping = True, the mapping is reversed: keys correspond
to children and values correspond to parent nodes (K-s); | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L1074-L1100 |
estnltk/estnltk | estnltk/syntax/maltparser_support.py | _findKsubcatFeatures | def _findKsubcatFeatures( sentence, kSubCatRelsLexicon, addFeaturesToK = True ):
''' Attempts to detect all possible K subcategorization relations from
given sentence, using the heuristic methods _detectKsubcatRelType()
and _detectPossibleKsubcatRelsFromSent();
Returns a dictionary where the keys correspond to token indices,
and values are grammatical features related to K subcat relations.
Not all tokens in the sentence are indexed, but only tokens relevant
to K subcat relations;
If addFeaturesToK == True, grammatical features are added to K-s,
otherwise, grammatical features are added to K's child tokens.
'''
features = dict()
# Add features to the K (adposition)
if addFeaturesToK:
for i in range(len(sentence)):
estnltkWord = sentence[i]
# Pick the first analysis
firstAnalysis = estnltkWord[ANALYSIS][0]
if firstAnalysis[POSTAG] == 'K':
(grammCats, kChild) = _detectKsubcatRelType( sentence, i, kSubCatRelsLexicon )
features[i] = grammCats
# Add features to the noun governed by K
else:
relationIndex, relationType = \
_detectPossibleKsubcatRelsFromSent( sentence, kSubCatRelsLexicon, reverseMapping = True )
for i in relationIndex:
features[i] = relationType[i]
return features | python | def _findKsubcatFeatures( sentence, kSubCatRelsLexicon, addFeaturesToK = True ):
''' Attempts to detect all possible K subcategorization relations from
given sentence, using the heuristic methods _detectKsubcatRelType()
and _detectPossibleKsubcatRelsFromSent();
Returns a dictionary where the keys correspond to token indices,
and values are grammatical features related to K subcat relations.
Not all tokens in the sentence are indexed, but only tokens relevant
to K subcat relations;
If addFeaturesToK == True, grammatical features are added to K-s,
otherwise, grammatical features are added to K's child tokens.
'''
features = dict()
# Add features to the K (adposition)
if addFeaturesToK:
for i in range(len(sentence)):
estnltkWord = sentence[i]
# Pick the first analysis
firstAnalysis = estnltkWord[ANALYSIS][0]
if firstAnalysis[POSTAG] == 'K':
(grammCats, kChild) = _detectKsubcatRelType( sentence, i, kSubCatRelsLexicon )
features[i] = grammCats
# Add features to the noun governed by K
else:
relationIndex, relationType = \
_detectPossibleKsubcatRelsFromSent( sentence, kSubCatRelsLexicon, reverseMapping = True )
for i in relationIndex:
features[i] = relationType[i]
return features | Attempts to detect all possible K subcategorization relations from
given sentence, using the heuristic methods _detectKsubcatRelType()
and _detectPossibleKsubcatRelsFromSent();
Returns a dictionary where the keys correspond to token indices,
and values are grammatical features related to K subcat relations.
Not all tokens in the sentence are indexed, but only tokens relevant
to K subcat relations;
If addFeaturesToK == True, grammatical features are added to K-s,
otherwise, grammatical features are added to K's child tokens. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L1103-L1132 |
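A hedged sketch of the feature helper on an analysed sentence: the tiny inline lexicon mirrors the structure produced by _loadKSubcatRelations, and the exact output depends on how the morphological disambiguator analyses the adposition.

from estnltk.text import Text
from estnltk.syntax.maltparser_support import _findKsubcatFeatures, WORDS

# Lexicon structure mirrors _loadKSubcatRelations output: root -> [[post|pre, form regexp]]
lex = {'ees': [['post', '(sg|pl)\\sg']]}

t = Text('Ta seisis maja ees .')
t.tag_analysis()
features = _findKsubcatFeatures(t[WORDS], lex, addFeaturesToK=True)
print(features)   # e.g. {3: 'post'} -- 'ees' marked as a postposition governing 'maja'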
estnltk/estnltk | estnltk/syntax/maltparser_support.py | CONLLFeatGenerator.generate_features | def generate_features( self, sentence_text, wid ):
''' Generates and returns a list of strings, containing tab-separated
features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS of the word
(the word with index *wid* from the given *sentence_text*).
Parameters
-----------
sentence_text : estnltk.text.Text
Text object corresponding to a single sentence.
Words of the sentence, along with their morphological analyses,
should be accessible via the layer WORDS.
And each word should be a dict, containing morphological features
in ANALYSIS part;
wid : int
Index of the word/token, whose features need to be generated;
'''
assert WORDS in sentence_text and len(sentence_text[WORDS])>0, \
" (!) 'words' layer missing or empty in given Text!"
sentence = sentence_text[WORDS]
assert -1 < wid and wid < len(sentence), ' (!) Invalid word id: '+str(wid)
# 1) Pre-process (if required)
if wid == 0:
# *** Add adposition (_K_) type
if self.kSubCatRelsLex:
self.kFeatures = \
_findKsubcatFeatures( sentence, self.kSubCatRelsLex, addFeaturesToK = True )
# *** Add verb chain info
if self.addVerbcGramm or self.addNomAdvVinf:
self.vcFeatures = generate_verb_chain_features( sentence_text, \
addGrammPred=self.addVerbcGramm, \
addNomAdvVinf=self.addNomAdvVinf )
# *** Add sentence ending saying verbs
if self.addSeSayingVerbs:
self.sayingverbs = detect_sentence_ending_saying_verbs( sentence_text )
# *** Add clause boundary info
if self.addClauseBound:
self.clbFeatures = []
for tag in sentence_text.clause_annotations:
if not tag:
self.clbFeatures.append( [] )
elif tag == EMBEDDED_CLAUSE_START:
self.clbFeatures.append( ['emb_cl_start'] )
elif tag == EMBEDDED_CLAUSE_END:
self.clbFeatures.append( ['emb_cl_end'] )
elif tag == CLAUSE_BOUNDARY:
self.clbFeatures.append (['clb'] )
# 2) Generate the features
estnltkWord = sentence[wid]
# Pick the first analysis
firstAnalysis = estnltkWord[ANALYSIS][0]
strForm = []
# *** ID
strForm.append( str(wid+1) )
strForm.append( '\t' )
# *** FORM
word_text = estnltkWord[TEXT]
word_text = word_text.replace(' ', '_')
strForm.append( word_text )
strForm.append( '\t' )
# *** LEMMA
word_root = firstAnalysis[ROOT]
word_root = word_root.replace(' ', '_')
if len(word_root) == 0:
word_root = "??"
strForm.append( word_root )
strForm.append( '\t' )
# *** CPOSTAG
strForm.append( firstAnalysis[POSTAG] )
strForm.append( '\t' )
# *** POSTAG
finePos = firstAnalysis[POSTAG]
if self.addAmbiguousPos and len(estnltkWord[ANALYSIS]) > 1:
pos_tags = sorted(list(set([ a[POSTAG] for a in estnltkWord[ANALYSIS] ])))
finePos = '_'.join(pos_tags)
#if self.kFeatures and wid in self.kFeatures:
# finePos += '_'+self.kFeatures[wid]
strForm.append( finePos )
strForm.append( '\t' )
# *** FEATS (grammatical categories)
grammCats = []
if len(firstAnalysis[FORM]) != 0:
forms = firstAnalysis[FORM].split()
grammCats.extend( forms )
# add features from verb chains:
if self.vcFeatures and self.vcFeatures[wid]:
grammCats.extend( self.vcFeatures[wid] )
# add features from clause boundaries:
if self.addClauseBound and self.clbFeatures[wid]:
grammCats.extend( self.clbFeatures[wid] )
# add adposition type ("post" or "pre")
if self.kFeatures and wid in self.kFeatures:
grammCats.extend( [self.kFeatures[wid]] )
# add saying verb features
if self.sayingverbs and wid in self.sayingverbs:
grammCats.extend( [self.sayingverbs[wid]] )
# wrap up
if not grammCats:
grammCats = '_'
else:
grammCats = '|'.join( grammCats )
strForm.append( grammCats )
strForm.append( '\t' )
return strForm | python | def generate_features( self, sentence_text, wid ):
''' Generates and returns a list of strings, containing tab-separated
features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS of the word
(the word with index *wid* from the given *sentence_text*).
Parameters
-----------
sentence_text : estnltk.text.Text
Text object corresponding to a single sentence.
Words of the sentence, along with their morphological analyses,
should be accessible via the layer WORDS.
And each word should be a dict, containing morphological features
in ANALYSIS part;
wid : int
Index of the word/token, whose features need to be generated;
'''
assert WORDS in sentence_text and len(sentence_text[WORDS])>0, \
" (!) 'words' layer missing or empty in given Text!"
sentence = sentence_text[WORDS]
assert -1 < wid and wid < len(sentence), ' (!) Invalid word id: '+str(wid)
# 1) Pre-process (if required)
if wid == 0:
# *** Add adposition (_K_) type
if self.kSubCatRelsLex:
self.kFeatures = \
_findKsubcatFeatures( sentence, self.kSubCatRelsLex, addFeaturesToK = True )
# *** Add verb chain info
if self.addVerbcGramm or self.addNomAdvVinf:
self.vcFeatures = generate_verb_chain_features( sentence_text, \
addGrammPred=self.addVerbcGramm, \
addNomAdvVinf=self.addNomAdvVinf )
# *** Add sentence ending saying verbs
if self.addSeSayingVerbs:
self.sayingverbs = detect_sentence_ending_saying_verbs( sentence_text )
# *** Add clause boundary info
if self.addClauseBound:
self.clbFeatures = []
for tag in sentence_text.clause_annotations:
if not tag:
self.clbFeatures.append( [] )
elif tag == EMBEDDED_CLAUSE_START:
self.clbFeatures.append( ['emb_cl_start'] )
elif tag == EMBEDDED_CLAUSE_END:
self.clbFeatures.append( ['emb_cl_end'] )
elif tag == CLAUSE_BOUNDARY:
self.clbFeatures.append (['clb'] )
# 2) Generate the features
estnltkWord = sentence[wid]
# Pick the first analysis
firstAnalysis = estnltkWord[ANALYSIS][0]
strForm = []
# *** ID
strForm.append( str(wid+1) )
strForm.append( '\t' )
# *** FORM
word_text = estnltkWord[TEXT]
word_text = word_text.replace(' ', '_')
strForm.append( word_text )
strForm.append( '\t' )
# *** LEMMA
word_root = firstAnalysis[ROOT]
word_root = word_root.replace(' ', '_')
if len(word_root) == 0:
word_root = "??"
strForm.append( word_root )
strForm.append( '\t' )
# *** CPOSTAG
strForm.append( firstAnalysis[POSTAG] )
strForm.append( '\t' )
# *** POSTAG
finePos = firstAnalysis[POSTAG]
if self.addAmbiguousPos and len(estnltkWord[ANALYSIS]) > 1:
pos_tags = sorted(list(set([ a[POSTAG] for a in estnltkWord[ANALYSIS] ])))
finePos = '_'.join(pos_tags)
#if self.kFeatures and wid in self.kFeatures:
# finePos += '_'+self.kFeatures[wid]
strForm.append( finePos )
strForm.append( '\t' )
# *** FEATS (grammatical categories)
grammCats = []
if len(firstAnalysis[FORM]) != 0:
forms = firstAnalysis[FORM].split()
grammCats.extend( forms )
# add features from verb chains:
if self.vcFeatures and self.vcFeatures[wid]:
grammCats.extend( self.vcFeatures[wid] )
# add features from clause boundaries:
if self.addClauseBound and self.clbFeatures[wid]:
grammCats.extend( self.clbFeatures[wid] )
# add adposition type ("post" or "pre")
if self.kFeatures and wid in self.kFeatures:
grammCats.extend( [self.kFeatures[wid]] )
# add saying verb features
if self.sayingverbs and wid in self.sayingverbs:
grammCats.extend( [self.sayingverbs[wid]] )
# wrap up
if not grammCats:
grammCats = '_'
else:
grammCats = '|'.join( grammCats )
strForm.append( grammCats )
strForm.append( '\t' )
return strForm | Generates and returns a list of strings, containing tab-separated
features ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS of the word
(the word with index *wid* from the given *sentence_text*).
Parameters
-----------
sentence_text : estnltk.text.Text
Text object corresponding to a single sentence.
Words of the sentence, along with their morphological analyses,
should be accessible via the layer WORDS.
And each word should be a dict, containing morphological features
in ANALYSIS part;
wid : int
Index of the word/token, whose features need to be generated; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/maltparser_support.py#L152-L258 |
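A sketch of generating the CONLL input columns for a short sentence; constructing CONLLFeatGenerator without arguments is an assumption about its defaults.

from estnltk.text import Text
from estnltk.syntax.maltparser_support import CONLLFeatGenerator, WORDS

gen = CONLLFeatGenerator()            # default settings assumed
sent = Text('Päike paistab .')
sent.tag_analysis()
for wid in range(len(sent[WORDS])):
    # each call yields tab-separated ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS pieces
    print(''.join(gen.generate_features(sent, wid)).rstrip())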
estnltk/estnltk | estnltk/prettyprinter/rules.py | create_rules | def create_rules(aes, value):
"""Create a Rules instance for a single aesthetic value.
Parameters
----------
aes: str
The name of the aesthetic
value: str or list
The value associated with any aesthetic
"""
if isinstance(value, six.string_types):
return Rules(aes)
else:
rules = Rules()
for idx, (pattern, css_value) in enumerate(value):
rules.add_rule(pattern, '{0}_{1}'.format(aes, idx))
return rules | python | def create_rules(aes, value):
"""Create a Rules instance for a single aesthetic value.
Parameters
----------
aes: str
The name of the aesthetic
value: str or list
The value associated with any aesthetic
"""
if isinstance(value, six.string_types):
return Rules(aes)
else:
rules = Rules()
for idx, (pattern, css_value) in enumerate(value):
rules.add_rule(pattern, '{0}_{1}'.format(aes, idx))
return rules | Create a Rules instance for a single aesthetic value.
Parameters
----------
aes: str
The name of the aesthetic
value: str or list
The value associated with any aesthetic | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/prettyprinter/rules.py#L51-L67 |
estnltk/estnltk | estnltk/prettyprinter/rules.py | Rules.add_rule | def add_rule(self, pattern, css_class):
"""Add a new rule.
Parameters
----------
pattern: str
Pattern that is compiled to a regular expression.
css_class: str
The class that will correspond to the given pattern.
"""
#print('adding rule <{0}> <{1}>'.format(pattern, css_class))
self.__patterns.append(re.compile(pattern, flags=re.U | re.M))
self.__css_classes.append(css_class) | python | def add_rule(self, pattern, css_class):
"""Add a new rule.
Parameters
----------
pattern: str
Pattern that is compiled to a regular expression.
css_class: str
The class that will correspond to the given pattern.
"""
#print('adding rule <{0}> <{1}>'.format(pattern, css_class))
self.__patterns.append(re.compile(pattern, flags=re.U | re.M))
self.__css_classes.append(css_class) | Add a new rule.
Parameters
----------
pattern: str
Pattern that is compiled to a regular expression.
css_class: str
The class that will correspond to the given pattern. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/prettyprinter/rules.py#L25-L37 |
estnltk/estnltk | estnltk/prettyprinter/rules.py | Rules.get_css_class | def get_css_class(self, value):
"""Return the css class of first pattern that matches given value.
If no rules match, the default css class will be returned (see the constructor)
"""
#print ('get_css_class for {0}'.format(value))
for idx, pattern in enumerate(self.__patterns):
if pattern.match(value) is not None:
#print ('matched rule {0} and returning {1}'.format(idx, self.__css_classes[idx]))
return self.__css_classes[idx]
return self.__default | python | def get_css_class(self, value):
"""Return the css class of first pattern that matches given value.
If no rules match, the default css class will be returned (see the constructor)
"""
#print ('get_css_class for {0}'.format(value))
for idx, pattern in enumerate(self.__patterns):
if pattern.match(value) is not None:
#print ('matched rule {0} and returning {1}'.format(idx, self.__css_classes[idx]))
return self.__css_classes[idx]
return self.__default | Return the css class of the first pattern that matches the given value.
If no rules match, the default css class will be returned (see the constructor) | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/prettyprinter/rules.py#L39-L48 |
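A small sketch of the Rules helper from the rows above; as the docstring notes, the constructor's single argument is the default css class returned when no pattern matches.

from estnltk.prettyprinter.rules import Rules

rules = Rules('plain')                     # 'plain' is returned when nothing matches
rules.add_rule('^suur', 'background_0')    # values starting with 'suur'
rules.add_rule('^väike', 'background_1')   # values starting with 'väike'
print(rules.get_css_class('suurepärane'))  # -> background_0
print(rules.get_css_class('keskmine'))     # -> plain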
estnltk/estnltk | estnltk/database/elastic/__init__.py | create_index | def create_index(index_name, **kwargs):
"""
Parameters
----------
index_name : str
Name of the index to be created
**kwargs
Arguments to pass to the Elasticsearch instance.
Returns
-------
Index
"""
es = elasticsearch.Elasticsearch(**kwargs)
es.indices.create(index=index_name, body=mapping)
return connect(index_name, **kwargs) | python | def create_index(index_name, **kwargs):
"""
Parameters
----------
index_name : str
Name of the index to be created
**kwargs
Arguments to pass to the Elasticsearch instance.
Returns
-------
Index
"""
es = elasticsearch.Elasticsearch(**kwargs)
es.indices.create(index=index_name, body=mapping)
return connect(index_name, **kwargs) | Parameters
----------
index_name : str
Name of the index to be created
**kwargs
Arguments to pass to the Elasticsearch instance.
Returns
-------
Index | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/database/elastic/__init__.py#L16-L32 |
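A hedged sketch of creating and re-attaching to an index; the host value is illustrative, and any extra keyword arguments are passed straight to elasticsearch.Elasticsearch, as the code above shows.

from estnltk.database.elastic import create_index, connect

# Create the index on a local cluster (host value is illustrative).
index = create_index('estnltk_articles', hosts=['localhost:9200'])

# Later sessions can re-attach to the same index with the same kwargs.
index = connect('estnltk_articles', hosts=['localhost:9200'])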
estnltk/estnltk | estnltk/database/elastic/__init__.py | Index._get_indexable_sentences | def _get_indexable_sentences(document):
"""
Parameters
----------
document : Text
Article, book, paragraph, chapter, etc. Anything that is considered a document on its own.
Yields
------
str
json representation of elasticsearch type sentence
"""
def unroll_lists(list_of_lists):
for i in itertools.product(*[set(j) for j in list_of_lists]):
yield ' '.join(i)
sents = document.split_by_sentences()
for order, sent in enumerate(sents):
postags = list(unroll_lists(sent.postag_lists))
lemmas = list(unroll_lists(sent.lemma_lists))
text = sent.text
words = copy.deepcopy(sent.words)
for i in words:
del i['start']
del i['end']
sentence = {
'estnltk_text_object': json.dumps(sent),
'meta': {
'order_in_parent': order
},
'text': text,
'words': words,
'postags': postags,
'lemmas': lemmas
}
yield json.dumps(sentence) | python | def _get_indexable_sentences(document):
"""
Parameters
----------
document : Text
Article, book, paragraph, chapter, etc. Anything that is considered a document on its own.
Yields
------
str
json representation of elasticsearch type sentence
"""
def unroll_lists(list_of_lists):
for i in itertools.product(*[set(j) for j in list_of_lists]):
yield ' '.join(i)
sents = document.split_by_sentences()
for order, sent in enumerate(sents):
postags = list(unroll_lists(sent.postag_lists))
lemmas = list(unroll_lists(sent.lemma_lists))
text = sent.text
words = copy.deepcopy(sent.words)
for i in words:
del i['start']
del i['end']
sentence = {
'estnltk_text_object': json.dumps(sent),
'meta': {
'order_in_parent': order
},
'text': text,
'words': words,
'postags': postags,
'lemmas': lemmas
}
yield json.dumps(sentence) | Parameters
----------
document : Text
Article, book, paragraph, chapter, etc. Anything that is considered a document on its own.
Yields
------
str
json representation of elasticsearch type sentence | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/database/elastic/__init__.py#L91-L129 |
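A sketch of inspecting the per-sentence JSON that would be indexed; calling the helper through the class assumes it behaves as a static method, which its single-parameter signature suggests.

import json
from estnltk.text import Text
from estnltk.database.elastic import Index

doc = Text('Esimene lause. Teine lause.')
for sent_json in Index._get_indexable_sentences(doc):
    sent = json.loads(sent_json)
    print(sent['meta']['order_in_parent'], sent['text'], sent['lemmas'])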
estnltk/estnltk | estnltk/estner/featureextraction.py | apply_templates | def apply_templates(toks, templates):
"""
Generate features for an item sequence by applying feature templates.
A feature template consists of a tuple of (name, offset) pairs,
where name and offset specify a field name and offset from which
the template extracts a feature value. Generated features are stored
in the 'F' field of each item in the sequence.
Parameters
----------
toks: list of tokens
A list of processed tokens.
templates: list of template tuples (str, int)
A feature template consists of a tuple of (name, offset) pairs,
where name and offset specify a field name and offset from which
the template extracts a feature value.
"""
for template in templates:
name = '|'.join(['%s[%d]' % (f, o) for f, o in template])
for t in range(len(toks)):
values_list = []
for field, offset in template:
p = t + offset
if p < 0 or p >= len(toks):
values_list = []
break
if field in toks[p]:
value = toks[p][field]
values_list.append(value if isinstance(value, (set, list)) else [value])
if len(template) == len(values_list):
for values in product(*values_list):
toks[t]['F'].append('%s=%s' % (name, '|'.join(values))) | python | def apply_templates(toks, templates):
"""
Generate features for an item sequence by applying feature templates.
A feature template consists of a tuple of (name, offset) pairs,
where name and offset specify a field name and offset from which
the template extracts a feature value. Generated features are stored
in the 'F' field of each item in the sequence.
Parameters
----------
toks: list of tokens
A list of processed tokens.
templates: list of template tuples (str, int)
A feature template consists of a tuple of (name, offset) pairs,
where name and offset specify a field name and offset from which
the template extracts a feature value.
"""
for template in templates:
name = '|'.join(['%s[%d]' % (f, o) for f, o in template])
for t in range(len(toks)):
values_list = []
for field, offset in template:
p = t + offset
if p < 0 or p >= len(toks):
values_list = []
break
if field in toks[p]:
value = toks[p][field]
values_list.append(value if isinstance(value, (set, list)) else [value])
if len(template) == len(values_list):
for values in product(*values_list):
toks[t]['F'].append('%s=%s' % (name, '|'.join(values))) | Generate features for an item sequence by applying feature templates.
A feature template consists of a tuple of (name, offset) pairs,
where name and offset specify a field name and offset from which
the template extracts a feature value. Generated features are stored
in the 'F' field of each item in the sequence.
Parameters
----------
toks: list of tokens
A list of processed tokens.
templates: list of template tuples (str, int)
A feature template consists of a tuple of (name, offset) pairs,
where name and offset specify a field name and offset from which
the template extracts a feature value. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/estner/featureextraction.py#L505-L537 |
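A toy run of apply_templates() showing how a template expands into feature strings in the 'F' field; it assumes estnltk and its dependencies are installed so the import resolves, and the field names 'lem'/'pos' are made up for the example.

from estnltk.estner.featureextraction import apply_templates

toks = [{'lem': 'öö', 'pos': 'S', 'F': []},
        {'lem': 'olema', 'pos': 'V', 'F': []}]
templates = [(('lem', 0),), (('lem', -1), ('pos', 0))]
apply_templates(toks, templates)
print(toks[1]['F'])
# -> ['lem[0]=olema', 'lem[-1]|pos[0]=öö|V']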
estnltk/estnltk | estnltk/ner.py | json_document_to_estner_document | def json_document_to_estner_document(jsondoc):
"""Convert an estnltk document to an estner document.
Parameters
----------
jsondoc: dict
Estnltk JSON-style document.
Returns
-------
estnltk.estner.ner.Document
A ner document.
"""
sentences = []
for json_sent in jsondoc.split_by_sentences():
snt = Sentence()
zipped = list(zip(
json_sent.word_texts,
json_sent.lemmas,
json_sent.root_tokens,
json_sent.forms,
json_sent.endings,
json_sent.postags))
json_toks = [{TEXT: text, LEMMA: lemma, ROOT_TOKENS: root_tokens, FORM: form, ENDING: ending, POSTAG: postag}
for text, lemma, root_tokens, form, ending, postag in zipped]
# add labels, if they are present
for tok, word in zip(json_toks, json_sent.words):
if LABEL in word:
tok[LABEL] = word[LABEL]
for json_tok in json_toks:
token = json_token_to_estner_token(json_tok)
snt.append(token)
if snt:
for i in range(1, len(snt)):
snt[i - 1].next = snt[i]
snt[i].prew = snt[i - 1]
sentences.append(snt)
return Document(sentences=sentences) | python | def json_document_to_estner_document(jsondoc):
"""Convert an estnltk document to an estner document.
Parameters
----------
jsondoc: dict
Estnltk JSON-style document.
Returns
-------
estnltk.estner.ner.Document
A ner document.
"""
sentences = []
for json_sent in jsondoc.split_by_sentences():
snt = Sentence()
zipped = list(zip(
json_sent.word_texts,
json_sent.lemmas,
json_sent.root_tokens,
json_sent.forms,
json_sent.endings,
json_sent.postags))
json_toks = [{TEXT: text, LEMMA: lemma, ROOT_TOKENS: root_tokens, FORM: form, ENDING: ending, POSTAG: postag}
for text, lemma, root_tokens, form, ending, postag in zipped]
# add labels, if they are present
for tok, word in zip(json_toks, json_sent.words):
if LABEL in word:
tok[LABEL] = word[LABEL]
for json_tok in json_toks:
token = json_token_to_estner_token(json_tok)
snt.append(token)
if snt:
for i in range(1, len(snt)):
snt[i - 1].next = snt[i]
snt[i].prew = snt[i - 1]
sentences.append(snt)
return Document(sentences=sentences) | Convert an estnltk document to an estner document.
Parameters
----------
jsondoc: dict
Estnltk JSON-style document.
Returns
-------
estnltk.estner.ner.Document
A ner document. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/ner.py#L64-L101 |
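The loop at the end links neighbouring tokens into a doubly-linked chain (note the attribute is spelled 'prew' in estnltk's Token). A plain-object sketch of the same linking:

class Tok:
    def __init__(self, word):
        self.word, self.prew, self.next = word, None, None   # 'prew' mirrors the attribute name above

snt = [Tok(w) for w in ['Öö', 'oli', 'tuuletu']]
for i in range(1, len(snt)):
    snt[i - 1].next = snt[i]
    snt[i].prew = snt[i - 1]
print(snt[1].prew.word, snt[1].next.word)   # -> Öö tuuletu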
estnltk/estnltk | estnltk/ner.py | json_token_to_estner_token | def json_token_to_estner_token(json_token):
"""Convert a JSON-style word token to an estner token.
Parameters
----------
json_token: dict
Vabamorf token representing a single word.
label: str
The label string.
Returns
-------
estnltk.estner.ner.Token
"""
token = Token()
word = json_token[TEXT]
lemma = word
morph = ''
label = 'O'
ending = json_token[ENDING]
root_toks = json_token[ROOT_TOKENS]
if isinstance(root_toks[0], list):
root_toks = root_toks[0]
lemma = '_'.join(root_toks) + ('+' + ending if ending else '')
if not lemma:
lemma = word
morph = '_%s_' % json_token[POSTAG]
morph += ' ' + json_token[FORM]
if LABEL in json_token:
label = json_token[LABEL]
return Token(word, lemma, morph, label) | python | def json_token_to_estner_token(json_token):
"""Convert a JSON-style word token to an estner token.
Parameters
----------
json_token: dict
Vabamorf token representing a single word.
label: str
The label string.
Returns
-------
estnltk.estner.ner.Token
"""
token = Token()
word = json_token[TEXT]
lemma = word
morph = ''
label = 'O'
ending = json_token[ENDING]
root_toks = json_token[ROOT_TOKENS]
if isinstance(root_toks[0], list):
root_toks = root_toks[0]
lemma = '_'.join(root_toks) + ('+' + ending if ending else '')
if not lemma:
lemma = word
morph = '_%s_' % json_token[POSTAG]
morph += ' ' + json_token[FORM]
if LABEL in json_token:
label = json_token[LABEL]
return Token(word, lemma, morph, label) | Convert a JSON-style word token to an estner token.
Parameters
----------
json_token: dict
Vabamorf token representing a single word.
label: str
The label string.
Returns
-------
estnltk.estner.ner.Token | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/ner.py#L104-L134 |
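A standalone sketch of the lemma/morph construction above: root tokens are joined with '_', a non-empty ending is appended with '+', and the morph string is the POS tag wrapped in underscores followed by the form. The example word and analysis values are made up.

def build_lemma_and_morph(word, root_toks, ending, postag, form):
    if root_toks and isinstance(root_toks[0], list):   # nested variant list -> take the first one
        root_toks = root_toks[0]
    lemma = '_'.join(root_toks) + ('+' + ending if ending else '')
    if not lemma:
        lemma = word
    return lemma, '_%s_ %s' % (postag, form)

print(build_lemma_and_morph('öödesse', ['öö'], 'desse', 'S', 'pl ill'))
# -> ('öö+desse', '_S_ pl ill')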
estnltk/estnltk | estnltk/ner.py | ModelStorageUtil.makedir | def makedir(self):
""" Create model_dir directory """
try:
os.makedirs(self.model_dir)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise | python | def makedir(self):
""" Create model_dir directory """
try:
os.makedirs(self.model_dir)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise | Create model_dir directory | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/ner.py#L38-L44 |
estnltk/estnltk | estnltk/ner.py | ModelStorageUtil.copy_settings | def copy_settings(self, settings_module):
""" Copy settings module to the model_dir directory """
source = inspect.getsourcefile(settings_module)
dest = os.path.join(self.model_dir, 'settings.py')
shutil.copyfile(source, dest) | python | def copy_settings(self, settings_module):
""" Copy settings module to the model_dir directory """
source = inspect.getsourcefile(settings_module)
dest = os.path.join(self.model_dir, 'settings.py')
shutil.copyfile(source, dest) | Copy settings module to the model_dir directory | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/ner.py#L46-L50 |
estnltk/estnltk | estnltk/ner.py | ModelStorageUtil.load_settings | def load_settings(self):
"""Load settings module from the model_dir directory."""
mname = 'loaded_module'
if six.PY2:
import imp
return imp.load_source(mname, self.settings_filename)
else:
import importlib.machinery
loader = importlib.machinery.SourceFileLoader(mname, self.settings_filename)
return loader.load_module(mname) | python | def load_settings(self):
"""Load settings module from the model_dir directory."""
mname = 'loaded_module'
if six.PY2:
import imp
return imp.load_source(mname, self.settings_filename)
else:
import importlib.machinery
loader = importlib.machinery.SourceFileLoader(mname, self.settings_filename)
return loader.load_module(mname) | Load settings module from the model_dir directory. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/ner.py#L52-L61 |
estnltk/estnltk | estnltk/ner.py | NerTrainer.train | def train(self, jsondocs, model_dir):
""" Train a NER model using given documents.
Each word in the documents must have a "label" attribute, which
denotes the named entities in the documents.
Parameters
----------
jsondocs: list of JSON-style documents.
The documents used for training the CRF model.
model_dir: str
A directory where the model will be saved.
"""
modelUtil = ModelStorageUtil(model_dir)
modelUtil.makedir()
modelUtil.copy_settings(self.settings)
# Convert json documents to ner documents
nerdocs = [json_document_to_estner_document(jsondoc)
for jsondoc in jsondocs]
self.fex.prepare(nerdocs)
self.fex.process(nerdocs)
self.trainer.train(nerdocs, modelUtil.model_filename) | python | def train(self, jsondocs, model_dir):
""" Train a NER model using given documents.
Each word in the documents must have a "label" attribute, which
denotes the named entities in the documents.
Parameters
----------
jsondocs: list of JSON-style documents.
The documents used for training the CRF model.
model_dir: str
A directory where the model will be saved.
"""
modelUtil = ModelStorageUtil(model_dir)
modelUtil.makedir()
modelUtil.copy_settings(self.settings)
# Convert json documents to ner documents
nerdocs = [json_document_to_estner_document(jsondoc)
for jsondoc in jsondocs]
self.fex.prepare(nerdocs)
self.fex.process(nerdocs)
self.trainer.train(nerdocs, modelUtil.model_filename) | Train a NER model using given documents.
Each word in the documents must have a "label" attribute, which
denotes the named entities in the documents.
Parameters
----------
jsondocs: list of JSON-style documents.
The documents used for training the CRF model.
model_dir: str
A directory where the model will be saved. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/ner.py#L153-L177 |
estnltk/estnltk | estnltk/syntax/vislcg3_syntax.py | cleanup_lines | def cleanup_lines( lines, **kwargs ):
''' Cleans up annotation after syntactic pre-processing and processing:
-- Removes embedded clause boundaries "<{>" and "<}>";
-- Removes CLBC markings from analysis;
-- Removes additional information between < and > from analysis;
-- Removes additional information between " and " from analysis;
-- If remove_caps==True , removes 'cap' annotations from analysis;
-- If remove_clo==True , removes CLO CLC CLB markings from analysis;
-- If double_quotes=='esc' then " will be overwritten with \\";
and
if double_quotes=='unesc' then \\" will be overwritten with ";
-- If fix_sent_tags=True, then sentence tags (<s> and </s>) will be
checked for mistakenly added analysis, and found analysis will be
removed;
Returns the input list, which has been cleaned from additional information;
'''
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
remove_caps = False
remove_clo = False
double_quotes = None
fix_sent_tags = False
for argName, argVal in kwargs.items() :
if argName in ['remove_caps', 'remove_cap']:
remove_caps = bool(argVal)
if argName == 'remove_clo':
remove_clo = bool(argVal)
if argName == 'fix_sent_tags':
fix_sent_tags = bool(argVal)
if argName in ['double_quotes', 'quotes'] and argVal and \
argVal.lower() in ['esc', 'escape', 'unesc', 'unescape']:
double_quotes = argVal.lower()
pat_token_line = re.compile('^"<(.+)>"\s*$')
pat_analysis_start = re.compile('^(\s+)"(.+)"(\s[LZT].*)$')
i = 0
to_delete = []
while ( i < len(lines) ):
line = lines[i]
isAnalysisLine = line.startswith(' ') or line.startswith('\t')
if not isAnalysisLine:
removeCurrentTokenAndAnalysis = False
# 1) Remove embedded clause boundaries "<{>" and "<}>"
if line.startswith('"<{>"'):
if i+1 == len(lines) or (i+1 < len(lines) and not '"{"' in lines[i+1]):
removeCurrentTokenAndAnalysis = True
if line.startswith('"<}>"'):
if i+1 == len(lines) or (i+1 < len(lines) and not '"}"' in lines[i+1]):
removeCurrentTokenAndAnalysis = True
if removeCurrentTokenAndAnalysis:
# Remove the current token and all the subsequent analyses
del lines[i]
j=i
while ( j < len(lines) ):
line2 = lines[j]
if line2.startswith(' ') or line2.startswith('\t'):
del lines[j]
else:
break
continue
# 2) Convert double quotes (if required)
if double_quotes:
# '^"<(.+)>"\s*$'
if pat_token_line.match( lines[i] ):
token_cleaned = (pat_token_line.match(lines[i])).group(1)
# Escape or unescape double quotes
if double_quotes in ['esc', 'escape']:
token_cleaned = token_cleaned.replace('"', '\\"')
lines[i] = '"<'+token_cleaned+'>"'
elif double_quotes in ['unesc', 'unescape']:
token_cleaned = token_cleaned.replace('\\"', '"')
lines[i] = '"<'+token_cleaned+'>"'
else:
# Normalize analysis line
lines[i] = re.sub('^\s{4,}', '\t', lines[i])
# Remove clause boundary markings
lines[i] = re.sub('(.*)" ([LZT].*) CLBC (.*)', '\\1" \\2 \\3', lines[i])
# Remove additional information that was added during the analysis
lines[i] = re.sub('(.*)" L([^"<]*) ["<]([^@]*) (@.*)', '\\1" L\\2 \\4', lines[i])
# Remove 'cap' tags
if remove_caps:
lines[i] = lines[i].replace(' cap ', ' ')
# Convert double quotes (if required)
if double_quotes and double_quotes in ['unesc', 'unescape']:
lines[i] = lines[i].replace('\\"', '"')
elif double_quotes and double_quotes in ['esc', 'escape']:
m = pat_analysis_start.match( lines[i] )
if m:
# '^(\s+)"(.+)"(\s[LZT].*)$'
start = m.group(1)
content = m.group(2)
end = m.group(3)
content = content.replace('"', '\\"')
lines[i] = ''.join([start, '"', content, '"', end])
# Remove CLO CLC CLB markings
if remove_clo and 'CL' in lines[i]:
lines[i] = re.sub('\sCL[OCB]', ' ', lines[i])
lines[i] = re.sub('\s{2,}', ' ', lines[i])
# Fix sentence tags that mistakenly could have analysis (in EDT corpus)
if fix_sent_tags:
if i-1 > -1 and ('"</s>"' in lines[i-1] or '"<s>"' in lines[i-1]):
lines[i] = ''
i += 1
return lines | python | def cleanup_lines( lines, **kwargs ):
''' Cleans up annotation after syntactic pre-processing and processing:
-- Removes embedded clause boundaries "<{>" and "<}>";
-- Removes CLBC markings from analysis;
-- Removes additional information between < and > from analysis;
-- Removes additional information between " and " from analysis;
-- If remove_caps==True , removes 'cap' annotations from analysis;
-- If remove_clo==True , removes CLO CLC CLB markings from analysis;
-- If double_quotes=='esc' then " will be overwritten with \\";
and
if double_quotes=='unesc' then \\" will be overwritten with ";
-- If fix_sent_tags=True, then sentence tags (<s> and </s>) will be
checked for mistakenly added analysis, and found analysis will be
removed;
Returns the input list, which has been cleaned from additional information;
'''
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
remove_caps = False
remove_clo = False
double_quotes = None
fix_sent_tags = False
for argName, argVal in kwargs.items() :
if argName in ['remove_caps', 'remove_cap']:
remove_caps = bool(argVal)
if argName == 'remove_clo':
remove_clo = bool(argVal)
if argName == 'fix_sent_tags':
fix_sent_tags = bool(argVal)
if argName in ['double_quotes', 'quotes'] and argVal and \
argVal.lower() in ['esc', 'escape', 'unesc', 'unescape']:
double_quotes = argVal.lower()
pat_token_line = re.compile('^"<(.+)>"\s*$')
pat_analysis_start = re.compile('^(\s+)"(.+)"(\s[LZT].*)$')
i = 0
to_delete = []
while ( i < len(lines) ):
line = lines[i]
isAnalysisLine = line.startswith(' ') or line.startswith('\t')
if not isAnalysisLine:
removeCurrentTokenAndAnalysis = False
# 1) Remove embedded clause boundaries "<{>" and "<}>"
if line.startswith('"<{>"'):
if i+1 == len(lines) or (i+1 < len(lines) and not '"{"' in lines[i+1]):
removeCurrentTokenAndAnalysis = True
if line.startswith('"<}>"'):
if i+1 == len(lines) or (i+1 < len(lines) and not '"}"' in lines[i+1]):
removeCurrentTokenAndAnalysis = True
if removeCurrentTokenAndAnalysis:
# Remove the current token and all the subsequent analyses
del lines[i]
j=i
while ( j < len(lines) ):
line2 = lines[j]
if line2.startswith(' ') or line2.startswith('\t'):
del lines[j]
else:
break
continue
# 2) Convert double quotes (if required)
if double_quotes:
# '^"<(.+)>"\s*$'
if pat_token_line.match( lines[i] ):
token_cleaned = (pat_token_line.match(lines[i])).group(1)
# Escape or unescape double quotes
if double_quotes in ['esc', 'escape']:
token_cleaned = token_cleaned.replace('"', '\\"')
lines[i] = '"<'+token_cleaned+'>"'
elif double_quotes in ['unesc', 'unescape']:
token_cleaned = token_cleaned.replace('\\"', '"')
lines[i] = '"<'+token_cleaned+'>"'
else:
# Normalize analysis line
lines[i] = re.sub('^\s{4,}', '\t', lines[i])
# Remove clause boundary markings
lines[i] = re.sub('(.*)" ([LZT].*) CLBC (.*)', '\\1" \\2 \\3', lines[i])
# Remove additional information that was added during the analysis
lines[i] = re.sub('(.*)" L([^"<]*) ["<]([^@]*) (@.*)', '\\1" L\\2 \\4', lines[i])
# Remove 'cap' tags
if remove_caps:
lines[i] = lines[i].replace(' cap ', ' ')
# Convert double quotes (if required)
if double_quotes and double_quotes in ['unesc', 'unescape']:
lines[i] = lines[i].replace('\\"', '"')
elif double_quotes and double_quotes in ['esc', 'escape']:
m = pat_analysis_start.match( lines[i] )
if m:
# '^(\s+)"(.+)"(\s[LZT].*)$'
start = m.group(1)
content = m.group(2)
end = m.group(3)
content = content.replace('"', '\\"')
lines[i] = ''.join([start, '"', content, '"', end])
# Remove CLO CLC CLB markings
if remove_clo and 'CL' in lines[i]:
lines[i] = re.sub('\sCL[OCB]', ' ', lines[i])
lines[i] = re.sub('\s{2,}', ' ', lines[i])
# Fix sentence tags that mistakenly could have analysis (in EDT corpus)
if fix_sent_tags:
if i-1 > -1 and ('"</s>"' in lines[i-1] or '"<s>"' in lines[i-1]):
lines[i] = ''
i += 1
return lines | Cleans up annotation after syntactic pre-processing and processing:
-- Removes embedded clause boundaries "<{>" and "<}>";
-- Removes CLBC markings from analysis;
-- Removes additional information between < and > from analysis;
-- Removes additional information between " and " from analysis;
-- If remove_caps==True , removes 'cap' annotations from analysis;
-- If remove_clo==True , removes CLO CLC CLB markings from analysis;
-- If double_quotes=='esc' then " will be overwritten with \\";
and
if double_quotes=='unesc' then \\" will be overwritten with ";
-- If fix_sent_tags=True, then sentence tags (<s> and </s>) will be
checked for mistakenly added analysis, and found analysis will be
removed;
Returns the input list, which has been cleaned from additional information; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/vislcg3_syntax.py#L312-L415 |
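A toy call of cleanup_lines(), assuming estnltk is installed; the two input lines use the same '"<token>"' / analysis format as the parser output, and remove_caps=True drops the ' cap ' marker while the leading spaces of the analysis line are normalised to a tab.

from estnltk.syntax.vislcg3_syntax import cleanup_lines

lines = ['"<Öö>"',
         '    "öö" L0 S com sg nom cap @SUBJ #1->2']
print(cleanup_lines(lines, remove_caps=True))
# -> ['"<Öö>"', '\t"öö" L0 S com sg nom @SUBJ #1->2']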
estnltk/estnltk | estnltk/syntax/vislcg3_syntax.py | align_cg3_with_Text | def align_cg3_with_Text( lines, text, **kwargs ):
''' Aligns VISLCG3's output (a list of strings) with given EstNLTK\'s Text object.
Basically, for each word position in the Text object, finds corresponding VISLCG3's
analyses;
Returns a list of dicts, where each dict has following attributes:
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be in same format as the output
of VISLCG3Pipeline;
text : Text
EstNLTK Text object containing the original text that was analysed via
VISLCG3Pipeline;
check_tokens : bool
Optional argument specifying whether tokens should be checked for match
during the alignment. In case of a mismatch, an exception is raised.
Default:False
add_word_ids : bool
Optional argument specifying whether each alignment should include attributes:
* 'text_word_id' - current word index in the whole Text, starting from 0;
* 'sent_word_id' - index of the current word in the sentence, starting from 0;
Default:False
Example output (for text 'Jah . Öö oli täiesti tuuletu .'):
-----------------------------------------------------------
{'sent_id': 0, 'start': 0, 'end': 3, 'parser_out': ['\t"jah" L0 D @ADVL #1->0\r']}
{'sent_id': 0, 'start': 4, 'end': 5, 'parser_out': ['\t"." Z Fst CLB #2->2\r']}
{'sent_id': 1, 'start': 6, 'end': 8, 'parser_out': ['\t"öö" L0 S com sg nom @SUBJ #1->2\r']}
{'sent_id': 1, 'start': 9, 'end': 12, 'parser_out': ['\t"ole" Li V main indic impf ps3 sg ps af @FMV #2->0\r']}
{'sent_id': 1, 'start': 13, 'end': 20, 'parser_out': ['\t"täiesti" L0 D @ADVL #3->4\r']}
{'sent_id': 1, 'start': 21, 'end': 28, 'parser_out': ['\t"tuuletu" L0 A pos sg nom @PRD #4->2\r']}
{'sent_id': 1, 'start': 29, 'end': 30, 'parser_out': ['\t"." Z Fst CLB #5->5\r']}
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
check_tokens = False
add_word_ids = False
for argName, argVal in kwargs.items() :
if argName in ['check_tokens', 'check'] and argVal in [True, False]:
check_tokens = argVal
if argName in ['add_word_ids', 'word_ids'] and argVal in [True, False]:
add_word_ids = argVal
pat_empty_line = re.compile('^\s+$')
pat_token_line = re.compile('^"<(.+)>"$')
pat_analysis_start = re.compile('^(\s+)"(.+)"(\s[LZTS].*)$')
pat_sent_bound = re.compile('^("<s>"|"</s>"|<s>|</s>)\s*$')
generalWID = 0
sentWID = 0
sentenceID = 0
j = 0
# Iterate over the sentences and perform the alignment
results = []
for sentence in text.divide( layer=WORDS, by=SENTENCES ):
sentWID = 0
for i in range(len(sentence)):
# 1) take the next word in Text
wordJson = sentence[i]
wordStr = wordJson[TEXT]
cg3word = None
cg3analyses = []
# 2) find next word in the VISLCG3's output
while (j < len(lines)):
# a) a sentence boundary: skip it entirely
if pat_sent_bound.match( lines[j] ) and j+1 < len(lines) and \
(len(lines[j+1])==0 or pat_empty_line.match(lines[j+1])):
j += 2
continue
# b) a word token: collect the analyses
token_match = pat_token_line.match( lines[j].rstrip() )
if token_match:
cg3word = token_match.group(1)
j += 1
while (j < len(lines)):
if pat_analysis_start.match(lines[j]):
cg3analyses.append(lines[j])
else:
break
j += 1
break
j += 1
# 3) Check whether two tokens match (if requested)
if cg3word:
if check_tokens and wordStr != cg3word:
raise Exception('(!) Unable to align EstNLTK\'s token nr ',generalWID,\
':',wordStr,' vs ',cg3word)
# Populate the alignment
result_dict = { START:wordJson[START], END:wordJson[END], \
SENT_ID:sentenceID, PARSER_OUT: cg3analyses }
if add_word_ids:
result_dict['text_word_id'] = generalWID # word id in the text
result_dict['sent_word_id'] = sentWID # word id in the sentence
results.append( result_dict )
else:
if j >= len(lines):
print('(!) End of VISLCG3 analysis reached: '+str(j)+' '+str(len(lines)),\
file = sys.stderr)
raise Exception ('(!) Unable to find matching syntactic analysis ',\
'for EstNLTK\'s token nr ', generalWID, ':', wordStr)
sentWID += 1
generalWID += 1
sentenceID += 1
return results | python | def align_cg3_with_Text( lines, text, **kwargs ):
''' Aligns VISLCG3's output (a list of strings) with given EstNLTK\'s Text object.
Basically, for each word position in the Text object, finds corresponding VISLCG3's
analyses;
Returns a list of dicts, where each dict has following attributes:
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be in same format as the output
of VISLCG3Pipeline;
text : Text
EstNLTK Text object containing the original text that was analysed via
VISLCG3Pipeline;
check_tokens : bool
Optional argument specifying whether tokens should be checked for match
during the alignment. In case of a mismatch, an exception is raised.
Default:False
add_word_ids : bool
Optional argument specifying whether each alignment should include attributes:
* 'text_word_id' - current word index in the whole Text, starting from 0;
* 'sent_word_id' - index of the current word in the sentence, starting from 0;
Default:False
Example output (for text 'Jah . Öö oli täiesti tuuletu .'):
-----------------------------------------------------------
{'sent_id': 0, 'start': 0, 'end': 3, 'parser_out': ['\t"jah" L0 D @ADVL #1->0\r']}
{'sent_id': 0, 'start': 4, 'end': 5, 'parser_out': ['\t"." Z Fst CLB #2->2\r']}
{'sent_id': 1, 'start': 6, 'end': 8, 'parser_out': ['\t"öö" L0 S com sg nom @SUBJ #1->2\r']}
{'sent_id': 1, 'start': 9, 'end': 12, 'parser_out': ['\t"ole" Li V main indic impf ps3 sg ps af @FMV #2->0\r']}
{'sent_id': 1, 'start': 13, 'end': 20, 'parser_out': ['\t"täiesti" L0 D @ADVL #3->4\r']}
{'sent_id': 1, 'start': 21, 'end': 28, 'parser_out': ['\t"tuuletu" L0 A pos sg nom @PRD #4->2\r']}
{'sent_id': 1, 'start': 29, 'end': 30, 'parser_out': ['\t"." Z Fst CLB #5->5\r']}
'''
from estnltk.text import Text
if not isinstance( text, Text ):
raise Exception('(!) Unexpected type of input argument! Expected EstNLTK\'s Text. ')
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
check_tokens = False
add_word_ids = False
for argName, argVal in kwargs.items() :
if argName in ['check_tokens', 'check'] and argVal in [True, False]:
check_tokens = argVal
if argName in ['add_word_ids', 'word_ids'] and argVal in [True, False]:
add_word_ids = argVal
pat_empty_line = re.compile('^\s+$')
pat_token_line = re.compile('^"<(.+)>"$')
pat_analysis_start = re.compile('^(\s+)"(.+)"(\s[LZTS].*)$')
pat_sent_bound = re.compile('^("<s>"|"</s>"|<s>|</s>)\s*$')
generalWID = 0
sentWID = 0
sentenceID = 0
j = 0
# Iterate over the sentences and perform the alignment
results = []
for sentence in text.divide( layer=WORDS, by=SENTENCES ):
sentWID = 0
for i in range(len(sentence)):
# 1) take the next word in Text
wordJson = sentence[i]
wordStr = wordJson[TEXT]
cg3word = None
cg3analyses = []
# 2) find next word in the VISLCG3's output
while (j < len(lines)):
# a) a sentence boundary: skip it entirely
if pat_sent_bound.match( lines[j] ) and j+1 < len(lines) and \
(len(lines[j+1])==0 or pat_empty_line.match(lines[j+1])):
j += 2
continue
# b) a word token: collect the analyses
token_match = pat_token_line.match( lines[j].rstrip() )
if token_match:
cg3word = token_match.group(1)
j += 1
while (j < len(lines)):
if pat_analysis_start.match(lines[j]):
cg3analyses.append(lines[j])
else:
break
j += 1
break
j += 1
# 3) Check whether two tokens match (if requested)
if cg3word:
if check_tokens and wordStr != cg3word:
raise Exception('(!) Unable to align EstNLTK\'s token nr ',generalWID,\
':',wordStr,' vs ',cg3word)
# Populate the alignment
result_dict = { START:wordJson[START], END:wordJson[END], \
SENT_ID:sentenceID, PARSER_OUT: cg3analyses }
if add_word_ids:
result_dict['text_word_id'] = generalWID # word id in the text
result_dict['sent_word_id'] = sentWID # word id in the sentence
results.append( result_dict )
else:
if j >= len(lines):
print('(!) End of VISLCG3 analysis reached: '+str(j)+' '+str(len(lines)),\
file = sys.stderr)
raise Exception ('(!) Unable to find matching syntactic analysis ',\
'for EstNLTK\'s token nr ', generalWID, ':', wordStr)
sentWID += 1
generalWID += 1
sentenceID += 1
return results | Aligns VISLCG3's output (a list of strings) with given EstNLTK\'s Text object.
Basically, for each word position in the Text object, finds corresponding VISLCG3's
analyses;
Returns a list of dicts, where each dict has following attributes:
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be in same format as the output
of VISLCG3Pipeline;
text : Text
EstNLTK Text object containing the original text that was analysed via
VISLCG3Pipeline;
check_tokens : bool
Optional argument specifying whether tokens should be checked for match
during the alignment. In case of a mismatch, an exception is raised.
Default:False
add_word_ids : bool
Optional argument specifying whether each alignment should include attributes:
* 'text_word_id' - current word index in the whole Text, starting from 0;
* 'sent_word_id' - index of the current word in the sentence, starting from 0;
Default:False
Example output (for text 'Jah . Öö oli täiesti tuuletu .'):
-----------------------------------------------------------
{'sent_id': 0, 'start': 0, 'end': 3, 'parser_out': ['\t"jah" L0 D @ADVL #1->0\r']}
{'sent_id': 0, 'start': 4, 'end': 5, 'parser_out': ['\t"." Z Fst CLB #2->2\r']}
{'sent_id': 1, 'start': 6, 'end': 8, 'parser_out': ['\t"öö" L0 S com sg nom @SUBJ #1->2\r']}
{'sent_id': 1, 'start': 9, 'end': 12, 'parser_out': ['\t"ole" Li V main indic impf ps3 sg ps af @FMV #2->0\r']}
{'sent_id': 1, 'start': 13, 'end': 20, 'parser_out': ['\t"täiesti" L0 D @ADVL #3->4\r']}
{'sent_id': 1, 'start': 21, 'end': 28, 'parser_out': ['\t"tuuletu" L0 A pos sg nom @PRD #4->2\r']}
{'sent_id': 1, 'start': 29, 'end': 30, 'parser_out': ['\t"." Z Fst CLB #5->5\r']} | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/vislcg3_syntax.py#L422-L537 |
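A small sketch of consuming the alignment records documented above; the keys follow the example output ('start', 'end', 'sent_id', 'parser_out'), and only the first analysis line of each word is kept.

def first_analysis_per_word(alignments):
    for rec in alignments:
        line = rec['parser_out'][0] if rec['parser_out'] else None
        yield rec['sent_id'], rec['start'], rec['end'], line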
estnltk/estnltk | estnltk/syntax/vislcg3_syntax.py | convert_cg3_to_conll | def convert_cg3_to_conll( lines, **kwargs ):
''' Converts the output of VISL_CG3 based syntactic parsing into CONLL format.
Expects that the output has been cleaned ( via method cleanup_lines() ).
Returns a list of CONLL format lines;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be in same format as the output
of VISLCG3Pipeline;
fix_selfrefs : bool
Optional argument specifying whether self-references in syntactic
dependencies should be fixed;
Default:True
fix_open_punct : bool
Optional argument specifying whether opening punctuation marks should
be made dependents of the following token;
Default:True
unesc_quotes : bool
Optional argument specifying whether double quotes should be unescaped
in the output, i.e. converted from '\"' to '"';
Default:True
rep_spaces : bool
Optional argument specifying whether spaces in a multiword token (e.g.
'Rio de Janeiro') should be replaced with underscores ('Rio_de_Janeiro');
Default:False
error_on_unexp : bool
Optional argument specifying whether an exception should be raised in
case of missing or unexpected analysis line; if not, only prints warnings
in case of such lines;
Default:False
Example input
--------------
"<s>"
"<Öö>"
"öö" L0 S com sg nom @SUBJ #1->2
"<oli>"
"ole" Li V main indic impf ps3 sg ps af @FMV #2->0
"<täiesti>"
"täiesti" L0 D @ADVL #3->4
"<tuuletu>"
"tuuletu" L0 A pos sg nom @PRD #4->2
"<.>"
"." Z Fst CLB #5->5
"</s>"
Example output
---------------
1 Öö öö S S com|sg|nom 2 @SUBJ _ _
2 oli ole V V main|indic|impf|ps3|sg|ps|af 0 @FMV _ _
3 täiesti täiesti D D _ 4 @ADVL _ _
4 tuuletu tuuletu A A pos|sg|nom 2 @PRD _ _
5 . . Z Z Fst|CLB 4 xxx _ _
'''
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
fix_selfrefs = True
fix_open_punct = True
unesc_quotes = True
rep_spaces = False
error_on_unexp = False
for argName, argVal in kwargs.items() :
if argName in ['selfrefs', 'fix_selfrefs'] and argVal in [True, False]:
fix_selfrefs = argVal
if argName in ['fix_open_punct'] and argVal in [True, False]:
fix_open_punct = argVal
if argName in ['error_on_unexp'] and argVal in [True, False]:
error_on_unexp = argVal
if argName in ['unesc_quotes'] and argVal in [True, False]:
unesc_quotes = argVal
if argName in ['rep_spaces'] and argVal in [True, False]:
rep_spaces = argVal
pat_empty_line = re.compile('^\s+$')
pat_token_line = re.compile('^"<(.+)>"$')
pat_analysis_line = re.compile('^\s+"(.+)"\s([^"]+)$')
# 3 types of analyses:
pat_ending_pos_form = re.compile('^L\S+\s+\S\s+([^#@]+).+$')
pat_pos_form = re.compile('^\S\s+([^#@]+).+$')
pat_ending_pos = re.compile('^(L\S+\s+)?\S\s+[#@].+$')
pat_opening_punct = re.compile('.+\s(Opr|Oqu|Quo)\s')
analyses_added = 0
conll_lines = []
word_id = 1
i = 0
while ( i < len(lines) ):
line = lines[i]
# Check, whether it is an analysis line or not
if not (line.startswith(' ') or line.startswith('\t')):
# ****** TOKEN
if len(line)>0 and not (line.startswith('"<s>"') or \
line.startswith('"</s>"')) and not pat_empty_line.match(line):
# Convert double quotes back to normal form (if requested)
if unesc_quotes:
line = line.replace( '\\"', '"' )
# Broken stuff: if previous word was without analysis
if analyses_added == 0 and word_id > 1:
# Missing analysis line
if error_on_unexp:
raise Exception('(!) Analysis missing at line '+str(i)+': '+\
'\n'+lines[i-1])
else:
print('(!) Analysis missing at line '+str(i)+': '+\
'\n'+lines[i-1], file=sys.stderr)
# Add an empty analysis
conll_lines[-1] += '\t_'
conll_lines[-1] += '\tX'
conll_lines[-1] += '\tX'
conll_lines[-1] += '\t_'
conll_lines[-1] += '\t'+str(word_id-2)
conll_lines[-1] += '\txxx'
conll_lines[-1] += '\t_'
conll_lines[-1] += '\t_'
# Start of a new token/word
token_match = pat_token_line.match( line.rstrip() )
if token_match:
word = token_match.group(1)
else:
raise Exception('(!) Unexpected token format: ', line)
if rep_spaces and re.search('\s', word):
# Replace spaces in the token with '_' symbols
word = re.sub('\s+', '_', word)
conll_lines.append( str(word_id) + '\t' + word )
analyses_added = 0
word_id += 1
# End of a sentence
if line.startswith('"</s>"'):
conll_lines.append('')
word_id = 1
else:
# ****** ANALYSIS
# If there is more than one pair of "", we have some kind of
# inconsistency: try to remove extra quotation marks from the
# end of the analysis line ...
if line.count('"') > 2:
new_line = []
q_count = 0
for j in range( len(line) ):
if line[j]=='"' and (j==0 or line[j-1]!='\\'):
q_count += 1
if q_count < 3:
new_line.append(line[j])
else:
new_line.append(line[j])
line = ''.join( new_line )
# Convert double quotes back to normal form (if requested)
if unesc_quotes:
line = line.replace( '\\"', '"' )
analysis_match = pat_analysis_line.match( line )
# Analysis line; in case of multiple analyses, pick the first one;
if analysis_match and analyses_added==0:
lemma = analysis_match.group(1)
cats = analysis_match.group(2)
if cats.startswith('Z '):
postag = 'Z'
else:
postag = (cats.split())[1] if len(cats.split())>1 else 'X'
deprels = re.findall( '(@\S+)', cats )
deprel = deprels[0] if deprels else 'xxx'
heads = re.findall( '#\d+\s*->\s*(\d+)', cats )
head = heads[0] if heads else str(word_id-2)
m1 = pat_ending_pos_form.match(cats)
m2 = pat_pos_form.match(cats)
m3 = pat_ending_pos.match(cats)
if m1:
forms = (m1.group(1)).split()
elif m2:
forms = (m2.group(1)).split()
elif m3:
forms = ['_'] # no form (in case of adpositions and adverbs)
else:
# Unexpected format of analysis line
if error_on_unexp:
raise Exception('(!) Unexpected format of analysis line: '+line)
else:
postag = 'X'
forms = ['_']
print('(!) Unexpected format of analysis line: '+line, file=sys.stderr)
# If required, fix self-references (in punctuation):
if fix_selfrefs and int(head) == word_id-1 and word_id-2>0:
head = str(word_id-2) # add link to the previous word
# Fix opening punctuation
if fix_open_punct and pat_opening_punct.match(line):
head = str(word_id) # add link to the following word
conll_lines[-1] += '\t'+lemma
conll_lines[-1] += '\t'+postag
conll_lines[-1] += '\t'+postag
conll_lines[-1] += '\t'+('|'.join(forms))
conll_lines[-1] += '\t'+head
conll_lines[-1] += '\t'+deprel
conll_lines[-1] += '\t_'
conll_lines[-1] += '\t_'
analyses_added += 1
i += 1
return conll_lines | python | def convert_cg3_to_conll( lines, **kwargs ):
''' Converts the output of VISL_CG3 based syntactic parsing into CONLL format.
Expects that the output has been cleaned ( via method cleanup_lines() ).
Returns a list of CONLL format lines;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be in same format as the output
of VISLCG3Pipeline;
fix_selfrefs : bool
Optional argument specifying whether self-references in syntactic
dependencies should be fixed;
Default:True
fix_open_punct : bool
Optional argument specifying whether opening punctuation marks should
be made dependents of the following token;
Default:True
unesc_quotes : bool
Optional argument specifying whether double quotes should be unescaped
in the output, i.e. converted from '\"' to '"';
Default:True
rep_spaces : bool
Optional argument specifying whether spaces in a multiword token (e.g.
'Rio de Janeiro') should be replaced with underscores ('Rio_de_Janeiro');
Default:False
error_on_unexp : bool
Optional argument specifying whether an exception should be raised in
case of missing or unexpected analysis line; if not, only prints warnings
in case of such lines;
Default:False
Example input
--------------
"<s>"
"<Öö>"
"öö" L0 S com sg nom @SUBJ #1->2
"<oli>"
"ole" Li V main indic impf ps3 sg ps af @FMV #2->0
"<täiesti>"
"täiesti" L0 D @ADVL #3->4
"<tuuletu>"
"tuuletu" L0 A pos sg nom @PRD #4->2
"<.>"
"." Z Fst CLB #5->5
"</s>"
Example output
---------------
1 Öö öö S S com|sg|nom 2 @SUBJ _ _
2 oli ole V V main|indic|impf|ps3|sg|ps|af 0 @FMV _ _
3 täiesti täiesti D D _ 4 @ADVL _ _
4 tuuletu tuuletu A A pos|sg|nom 2 @PRD _ _
5 . . Z Z Fst|CLB 4 xxx _ _
'''
if not isinstance( lines, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
fix_selfrefs = True
fix_open_punct = True
unesc_quotes = True
rep_spaces = False
error_on_unexp = False
for argName, argVal in kwargs.items() :
if argName in ['selfrefs', 'fix_selfrefs'] and argVal in [True, False]:
fix_selfrefs = argVal
if argName in ['fix_open_punct'] and argVal in [True, False]:
fix_open_punct = argVal
if argName in ['error_on_unexp'] and argVal in [True, False]:
error_on_unexp = argVal
if argName in ['unesc_quotes'] and argVal in [True, False]:
unesc_quotes = argVal
if argName in ['rep_spaces'] and argVal in [True, False]:
rep_spaces = argVal
pat_empty_line = re.compile('^\s+$')
pat_token_line = re.compile('^"<(.+)>"$')
pat_analysis_line = re.compile('^\s+"(.+)"\s([^"]+)$')
# 3 types of analyses:
pat_ending_pos_form = re.compile('^L\S+\s+\S\s+([^#@]+).+$')
pat_pos_form = re.compile('^\S\s+([^#@]+).+$')
pat_ending_pos = re.compile('^(L\S+\s+)?\S\s+[#@].+$')
pat_opening_punct = re.compile('.+\s(Opr|Oqu|Quo)\s')
analyses_added = 0
conll_lines = []
word_id = 1
i = 0
while ( i < len(lines) ):
line = lines[i]
# Check, whether it is an analysis line or not
if not (line.startswith(' ') or line.startswith('\t')):
# ****** TOKEN
if len(line)>0 and not (line.startswith('"<s>"') or \
line.startswith('"</s>"')) and not pat_empty_line.match(line):
# Convert double quotes back to normal form (if requested)
if unesc_quotes:
line = line.replace( '\\"', '"' )
# Broken stuff: if previous word was without analysis
if analyses_added == 0 and word_id > 1:
# Missing analysis line
if error_on_unexp:
raise Exception('(!) Analysis missing at line '+str(i)+': '+\
'\n'+lines[i-1])
else:
print('(!) Analysis missing at line '+str(i)+': '+\
'\n'+lines[i-1], file=sys.stderr)
# Add an empty analysis
conll_lines[-1] += '\t_'
conll_lines[-1] += '\tX'
conll_lines[-1] += '\tX'
conll_lines[-1] += '\t_'
conll_lines[-1] += '\t'+str(word_id-2)
conll_lines[-1] += '\txxx'
conll_lines[-1] += '\t_'
conll_lines[-1] += '\t_'
# Start of a new token/word
token_match = pat_token_line.match( line.rstrip() )
if token_match:
word = token_match.group(1)
else:
raise Exception('(!) Unexpected token format: ', line)
if rep_spaces and re.search('\s', word):
# Replace spaces in the token with '_' symbols
word = re.sub('\s+', '_', word)
conll_lines.append( str(word_id) + '\t' + word )
analyses_added = 0
word_id += 1
# End of a sentence
if line.startswith('"</s>"'):
conll_lines.append('')
word_id = 1
else:
# ****** ANALYSIS
# If there is more than one pair of "", we have some kind of
# inconsistency: try to remove extra quotation marks from the
# end of the analysis line ...
if line.count('"') > 2:
new_line = []
q_count = 0
for j in range( len(line) ):
if line[j]=='"' and (j==0 or line[j-1]!='\\'):
q_count += 1
if q_count < 3:
new_line.append(line[j])
else:
new_line.append(line[j])
line = ''.join( new_line )
# Convert double quotes back to normal form (if requested)
if unesc_quotes:
line = line.replace( '\\"', '"' )
analysis_match = pat_analysis_line.match( line )
# Analysis line; in case of multiple analyses, pick the first one;
if analysis_match and analyses_added==0:
lemma = analysis_match.group(1)
cats = analysis_match.group(2)
if cats.startswith('Z '):
postag = 'Z'
else:
postag = (cats.split())[1] if len(cats.split())>1 else 'X'
deprels = re.findall( '(@\S+)', cats )
deprel = deprels[0] if deprels else 'xxx'
heads = re.findall( '#\d+\s*->\s*(\d+)', cats )
head = heads[0] if heads else str(word_id-2)
m1 = pat_ending_pos_form.match(cats)
m2 = pat_pos_form.match(cats)
m3 = pat_ending_pos.match(cats)
if m1:
forms = (m1.group(1)).split()
elif m2:
forms = (m2.group(1)).split()
elif m3:
forms = ['_'] # no form (in case of adpositions and adverbs)
else:
# Unexpected format of analysis line
if error_on_unexp:
raise Exception('(!) Unexpected format of analysis line: '+line)
else:
postag = 'X'
forms = ['_']
print('(!) Unexpected format of analysis line: '+line, file=sys.stderr)
# If required, fix self-references (in punctuation):
if fix_selfrefs and int(head) == word_id-1 and word_id-2>0:
head = str(word_id-2) # add link to the previous word
# Fix opening punctuation
if fix_open_punct and pat_opening_punct.match(line):
head = str(word_id) # add link to the following word
conll_lines[-1] += '\t'+lemma
conll_lines[-1] += '\t'+postag
conll_lines[-1] += '\t'+postag
conll_lines[-1] += '\t'+('|'.join(forms))
conll_lines[-1] += '\t'+head
conll_lines[-1] += '\t'+deprel
conll_lines[-1] += '\t_'
conll_lines[-1] += '\t_'
analyses_added += 1
i += 1
return conll_lines | Converts the output of VISL_CG3 based syntactic parsing into CONLL format.
Expects that the output has been cleaned ( via method cleanup_lines() ).
Returns a list of CONLL format lines;
Parameters
-----------
lines : list of str
The input text for the pipeline; Should be in same format as the output
of VISLCG3Pipeline;
fix_selfrefs : bool
Optional argument specifying whether self-references in syntactic
dependencies should be fixed;
Default:True
fix_open_punct : bool
Optional argument specifying whether opening punctuation marks should
be made dependents of the following token;
Default:True
unesc_quotes : bool
Optional argument specifying whether double quotes should be unescaped
in the output, i.e. converted from '\"' to '"';
Default:True
rep_spaces : bool
Optional argument specifying whether spaces in a multiword token (e.g.
'Rio de Janeiro') should be replaced with underscores ('Rio_de_Janeiro');
Default:False
error_on_unexp : bool
Optional argument specifying whether an exception should be raised in
case of missing or unexpected analysis line; if not, only prints warnings
in case of such lines;
Default:False
Example input
--------------
"<s>"
"<Öö>"
"öö" L0 S com sg nom @SUBJ #1->2
"<oli>"
"ole" Li V main indic impf ps3 sg ps af @FMV #2->0
"<täiesti>"
"täiesti" L0 D @ADVL #3->4
"<tuuletu>"
"tuuletu" L0 A pos sg nom @PRD #4->2
"<.>"
"." Z Fst CLB #5->5
"</s>"
Example output
---------------
1 Öö öö S S com|sg|nom 2 @SUBJ _ _
2 oli ole V V main|indic|impf|ps3|sg|ps|af 0 @FMV _ _
3 täiesti täiesti D D _ 4 @ADVL _ _
4 tuuletu tuuletu A A pos|sg|nom 2 @PRD _ _
5 . . Z Z Fst|CLB 4 xxx _ _ | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/vislcg3_syntax.py#L544-L747 |
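A runnable toy call of convert_cg3_to_conll(), assuming estnltk is installed; the input is a shortened three-token sentence in the same format as the docstring example, and the output is one tab-separated CONLL line per token followed by an empty line (the final '.' gets head 2 because of fix_selfrefs).

from estnltk.syntax.vislcg3_syntax import convert_cg3_to_conll

cg3_output = ['"<s>"',
              '"<Öö>"',
              '\t"öö" L0 S com sg nom @SUBJ #1->2',
              '"<oli>"',
              '\t"ole" Li V main indic impf ps3 sg ps af @FMV #2->0',
              '"<.>"',
              '\t"." Z Fst CLB #3->3',
              '"</s>"']
print('\n'.join(convert_cg3_to_conll(cg3_output)))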
estnltk/estnltk | estnltk/syntax/vislcg3_syntax.py | VISLCG3Pipeline.check_if_vislcg_is_in_path | def check_if_vislcg_is_in_path( self, vislcg_cmd1 ):
''' Checks whether given vislcg_cmd1 is in system's PATH. Returns True if there is
a file named vislcg_cmd1 in the path, otherwise returns False;
The idea borrows from: http://stackoverflow.com/a/377028
'''
for path in os.environ["PATH"].split( os.pathsep ):
path1 = path.strip('"')
file1 = os.path.join(path1, vislcg_cmd1)
if os.path.isfile(file1) or os.path.isfile(file1+'.exe'):
return True
return False | python | def check_if_vislcg_is_in_path( self, vislcg_cmd1 ):
''' Checks whether given vislcg_cmd1 is in system's PATH. Returns True if there is
a file named vislcg_cmd1 in the path, otherwise returns False;
The idea borrows from: http://stackoverflow.com/a/377028
'''
for path in os.environ["PATH"].split( os.pathsep ):
path1 = path.strip('"')
file1 = os.path.join(path1, vislcg_cmd1)
if os.path.isfile(file1) or os.path.isfile(file1+'.exe'):
return True
return False | Checks whether given vislcg_cmd1 is in system's PATH. Returns True, there is
a file named vislcg_cmd1 in the path, otherwise returns False;
The idea borrows from: http://stackoverflow.com/a/377028 | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/vislcg3_syntax.py#L199-L210 |
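The loop above hand-rolls an executable lookup over PATH; on Python 3 the standard library offers an equivalent check (a sketch; shutil.which additionally honours PATHEXT on Windows):

import shutil
print(shutil.which('vislcg3') is not None)   # True if a vislcg3 executable is on the PATH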
estnltk/estnltk | estnltk/syntax/vislcg3_syntax.py | VISLCG3Pipeline.process_lines | def process_lines( self, input_lines, **kwargs ):
''' Executes the pipeline of subsequent VISL_CG3 commands. The first process
in pipeline gets input_lines as an input, and each subsequent process gets
the output of the previous process as an input.
The idea of how to construct the pipeline borrows from:
https://github.com/estnltk/estnltk/blob/1.4.0/estnltk/syntax/tagger.py
Returns the result of the last process in the pipeline, either as a string
or, alternatively, as a list of strings (if split_result == True);
Parameters
-----------
input_lines : list of str
The input text for the pipeline; Should be in same format as the output
of SyntaxPreprocessing;
split_result : bool
Optional argument specifying whether the result should be split by
newlines, and returned as a list of strings/lines instead;
Default:False
remove_info : bool
Optional argument specifying whether the additional information added
during the preprocessing and syntactic processing should be removed
from the results;
Default:True;
The method cleanup_lines() will be used for removing additional info,
and all the parameters passed to this method will be also forwarded to
the cleanup method;
'''
split_result_lines = False
remove_info = True
for argName, argVal in kwargs.items() :
if argName in ['split_result_lines', 'split_result'] and argVal in [True, False]:
split_result_lines = argVal
if argName in ['remove_info', 'info_remover', 'clean_up'] and argVal in [True, False]:
remove_info = argVal
# 1) Construct the input file for the first process in the pipeline
temp_input_file = \
tempfile.NamedTemporaryFile(prefix='vislcg3_in.', mode='w', delete=False)
temp_input_file.close()
# We have to open separately here for writing, because Py 2.7 does not support
# passing parameter encoding='utf-8' to the NamedTemporaryFile;
out_f = codecs.open(temp_input_file.name, mode='w', encoding='utf-8')
for line in input_lines:
out_f.write( line.rstrip() )
out_f.write( '\n' )
out_f.close()
# TODO: tempfile is currently used to ensure that the input is in 'utf-8',
# but perhaps we can somehow ensure it without using tempfile ??
# 2) Dynamically construct the pipeline and open processes
pipeline = []
for i in range( len(self.rules_pipeline) ):
rule_file = self.rules_pipeline[i]
process_cmd = [self.vislcg_cmd, '-o', '-g', os.path.join(self.rules_dir, rule_file)]
process = None
if i == 0:
# The first process takes input from the file
process_cmd.extend( ['-I', temp_input_file.name] )
process = Popen(process_cmd, stdin=PIPE, stdout=PIPE)
else:
# A subsequent process takes output of the last process as an input
process = Popen(process_cmd, stdin=pipeline[-1]['process'].stdout, stdout=PIPE)
# Record the process
process_dict = {'process':process, 'cmd':process_cmd}
pipeline.append( process_dict )
# 3) Close all stdout streams, except the last one
for i in range( len(pipeline) ):
if i != len(pipeline) - 1:
pipeline[i]['process'].stdout.close()
# 4) Communicate results from the last item in the pipeline
result = as_unicode( pipeline[-1]['process'].communicate()[0] )
pipeline[-1]['process'].stdout.close() # Close the last process
# Clean-up
# 1) remove temp file
os.remove(temp_input_file.name)
# 2) remove additional info, if required
if remove_info:
result = '\n'.join( cleanup_lines( result.split('\n'), **kwargs ))
return result if not split_result_lines else result.split('\n') | python | def process_lines( self, input_lines, **kwargs ):
''' Executes the pipeline of subsequent VISL_CG3 commands. The first process
in pipeline gets input_lines as an input, and each subsequent process gets
the output of the previous process as an input.
The idea of how to construct the pipeline borrows from:
https://github.com/estnltk/estnltk/blob/1.4.0/estnltk/syntax/tagger.py
Returns the result of the last process in the pipeline, either as a string
or, alternatively, as a list of strings (if split_result == True);
Parameters
-----------
input_lines : list of str
The input text for the pipeline; Should be in same format as the output
of SyntaxPreprocessing;
split_result : bool
Optional argument specifying whether the result should be split by
newlines, and returned as a list of strings/lines instead;
Default:False
remove_info : bool
Optional argument specifying whether the additional information added
during the preprocessing and syntactic processing should be removed
from the results;
Default:True;
The method cleanup_lines() will be used for removing additional info,
and all the parameters passed to this method will be also forwarded to
the cleanup method;
'''
split_result_lines = False
remove_info = True
for argName, argVal in kwargs.items() :
if argName in ['split_result_lines', 'split_result'] and argVal in [True, False]:
split_result_lines = argVal
if argName in ['remove_info', 'info_remover', 'clean_up'] and argVal in [True, False]:
remove_info = argVal
# 1) Construct the input file for the first process in the pipeline
temp_input_file = \
tempfile.NamedTemporaryFile(prefix='vislcg3_in.', mode='w', delete=False)
temp_input_file.close()
# We have to open separately here for writing, because Py 2.7 does not support
# passing parameter encoding='utf-8' to the NamedTemporaryFile;
out_f = codecs.open(temp_input_file.name, mode='w', encoding='utf-8')
for line in input_lines:
out_f.write( line.rstrip() )
out_f.write( '\n' )
out_f.close()
# TODO: tempfile is currently used to ensure that the input is in 'utf-8',
# but perhaps we can somehow ensure it without using tempfile ??
# 2) Dynamically construct the pipeline and open processes
pipeline = []
for i in range( len(self.rules_pipeline) ):
rule_file = self.rules_pipeline[i]
process_cmd = [self.vislcg_cmd, '-o', '-g', os.path.join(self.rules_dir, rule_file)]
process = None
if i == 0:
# The first process takes input from the file
process_cmd.extend( ['-I', temp_input_file.name] )
process = Popen(process_cmd, stdin=PIPE, stdout=PIPE)
else:
# A subsequent process takes output of the last process as an input
process = Popen(process_cmd, stdin=pipeline[-1]['process'].stdout, stdout=PIPE)
# Record the process
process_dict = {'process':process, 'cmd':process_cmd}
pipeline.append( process_dict )
# 3) Close all stdout streams, except the last one
for i in range( len(pipeline) ):
if i != len(pipeline) - 1:
pipeline[i]['process'].stdout.close()
# 4) Communicate results form the last item in the pipeline
result = as_unicode( pipeline[-1]['process'].communicate()[0] )
pipeline[-1]['process'].stdout.close() # Close the last process
# Clean-up
# 1) remove temp file
os.remove(temp_input_file.name)
# 2) remove additional info, if required
if remove_info:
result = '\n'.join( cleanup_lines( result.split('\n'), **kwargs ))
return result if not split_result_lines else result.split('\n') | Executes the pipeline of subsequent VISL_CG3 commands. The first process
in pipeline gets input_lines as an input, and each subsequent process gets
the output of the previous process as an input.
The idea of how to construct the pipeline borrows from:
https://github.com/estnltk/estnltk/blob/1.4.0/estnltk/syntax/tagger.py
Returns the result of the last process in the pipeline, either as a string
or, alternatively, as a list of strings (if split_result == True);
Parameters
-----------
input_lines : list of str
The input text for the pipeline; Should be in same format as the output
of SyntaxPreprocessing;
split_result : bool
Optional argument specifying whether the result should be split by
newlines, and returned as a list of strings/lines instead;
Default:False
remove_info : bool
Optional argument specifying whether the additional information added
during the preprocessing and syntactic processing should be removed
from the results;
Default:True;
The method cleanup_lines() will be used for removing additional info,
and all the parameters passed to this method will be also forwarded to
the cleanup method; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/vislcg3_syntax.py#L213-L302 |
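A stripped-down, standard-library-only sketch of the pipeline construction used above: each stage's stdout is fed into the next stage's stdin, the parent's copies of intermediate stdout handles are closed, and only the last stage is read. The command lists and the '-I' input flag mirror the code above but are otherwise illustrative.

from subprocess import Popen, PIPE

def run_pipeline(commands, input_path):
    procs = []
    for i, cmd in enumerate(commands):
        if i == 0:
            procs.append(Popen(cmd + ['-I', input_path], stdout=PIPE))
        else:
            procs.append(Popen(cmd, stdin=procs[-1].stdout, stdout=PIPE))
    for p in procs[:-1]:
        p.stdout.close()              # close parent copies so earlier stages can finish
    return procs[-1].communicate()[0]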
estnltk/estnltk | estnltk/converters/gt_conversion.py | copy_analysis_dict | def copy_analysis_dict( analysis ):
''' Creates a copy from given analysis dict. '''
assert isinstance(analysis, dict), "(!) Input 'analysis' should be a dict!"
new_dict = { POSTAG: analysis[POSTAG],\
ROOT: analysis[ROOT],\
FORM: analysis[FORM],\
CLITIC: analysis[CLITIC],\
ENDING: analysis[ENDING] }
if LEMMA in analysis:
new_dict[LEMMA] = analysis[LEMMA]
if ROOT_TOKENS in analysis:
new_dict[ROOT_TOKENS] = analysis[ROOT_TOKENS]
return new_dict | python | def copy_analysis_dict( analysis ):
''' Creates a copy from given analysis dict. '''
assert isinstance(analysis, dict), "(!) Input 'analysis' should be a dict!"
new_dict = { POSTAG: analysis[POSTAG],\
ROOT: analysis[ROOT],\
FORM: analysis[FORM],\
CLITIC: analysis[CLITIC],\
ENDING: analysis[ENDING] }
if LEMMA in analysis:
new_dict[LEMMA] = analysis[LEMMA]
if ROOT_TOKENS in analysis:
new_dict[ROOT_TOKENS] = analysis[ROOT_TOKENS]
return new_dict | Creates a copy from given analysis dict. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L27-L39 |
estnltk/estnltk | estnltk/converters/gt_conversion.py | get_unique_clause_indices | def get_unique_clause_indices( text ):
''' Returns a list of clause indices for the whole text. For each token in text,
the list contains index of the clause the word belongs to, and the indices
are unique over the whole text. '''
# Add clause boundary annotation (if missing)
if not text.is_tagged( CLAUSES ):
text.tag_clauses()
# Collect (unique) clause indices over the whole text
clause_indices = []
sent_id = 0
for sub_text in text.split_by( SENTENCES ):
for word, cl_index in zip( sub_text.words, sub_text.clause_indices ):
clause_indices.append( sent_id+cl_index )
nr_of_clauses = len(set(sub_text.clause_indices))
sent_id += nr_of_clauses
assert len(clause_indices) == len(text.words), '(!) Number of clause indices should match nr of words!'
return clause_indices | python | def get_unique_clause_indices( text ):
''' Returns a list of clause indices for the whole text. For each token in text,
the list contains index of the clause the word belongs to, and the indices
are unique over the whole text. '''
# Add clause boundary annotation (if missing)
if not text.is_tagged( CLAUSES ):
text.tag_clauses()
# Collect (unique) clause indices over the whole text
clause_indices = []
sent_id = 0
for sub_text in text.split_by( SENTENCES ):
for word, cl_index in zip( sub_text.words, sub_text.clause_indices ):
clause_indices.append( sent_id+cl_index )
nr_of_clauses = len(set(sub_text.clause_indices))
sent_id += nr_of_clauses
assert len(clause_indices) == len(text.words), '(!) Number of clause indices should match nr of words!'
return clause_indices | Returns a list of clause indices for the whole text. For each token in text,
the list contains index of the clause the word belongs to, and the indices
are unique over the whole text. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L42-L58 |
estnltk/estnltk | estnltk/converters/gt_conversion.py | get_unique_sentence_indices | def get_unique_sentence_indices( text ):
''' Returns a list of sentence indices for the whole text. For each token in text,
the list contains index of the sentence the word belongs to, and the indices
are unique over the whole text. '''
# Add sentence annotation (if missing)
if not text.is_tagged( SENTENCES ):
text.tokenize_sentences()
# Collect (unique) sent indices over the whole text
sent_indices = []
sent_id = 0
for sub_text in text.split_by( SENTENCES ):
for word in sub_text.words:
sent_indices.append( sent_id )
sent_id += 1
assert len(sent_indices) == len(text.words), '(!) Number of sent indices should match nr of words!'
return sent_indices | python | def get_unique_sentence_indices( text ):
''' Returns a list of sentence indices for the whole text. For each token in text,
the list contains index of the sentence the word belongs to, and the indices
are unique over the whole text. '''
# Add sentence annotation (if missing)
if not text.is_tagged( SENTENCES ):
text.tokenize_sentences()
# Collect (unique) sent indices over the whole text
sent_indices = []
sent_id = 0
for sub_text in text.split_by( SENTENCES ):
for word in sub_text.words:
sent_indices.append( sent_id )
sent_id += 1
assert len(sent_indices) == len(text.words), '(!) Number of sent indices should match nr of words!'
return sent_indices | Returns a list of sentence indices for the whole text. For each token in text,
the list contains index of the sentence the word belongs to, and the indices
are unique over the whole text. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L61-L76 |
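The two index helpers above are easiest to read side by side: both return one integer per word token, and the integers never restart between sentences. Below is a hedged usage sketch; it assumes EstNLTK 1.4 is installed, that both functions are importable from estnltk.converters.gt_conversion, and that Text exposes the word_texts property.

# Hedged usage sketch (assumes the EstNLTK 1.4 API).
from estnltk import Text
from estnltk.converters.gt_conversion import get_unique_clause_indices, \
                                             get_unique_sentence_indices

text = Text('Mees, keda me nägime, naeris. Ta lahkus.')
clause_ids = get_unique_clause_indices( text )      # one int per word, unique per clause
sentence_ids = get_unique_sentence_indices( text )  # one int per word, unique per sentence
for word, cid, sid in zip(text.word_texts, clause_ids, sentence_ids):
    print(word, cid, sid)
# Because the indices are global over the whole text, they can be used directly
# as grouping keys in _disambiguate_sid_ksid() further below.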
estnltk/estnltk | estnltk/converters/gt_conversion.py | _convert_nominal_form | def _convert_nominal_form( analysis ):
''' Converts nominal categories of the input analysis.
Performs one-to-one conversions only. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
for idx, pattern_items in enumerate(_noun_conversion_rules):
pattern_str, replacement = pattern_items
if pattern_str in analysis[FORM]:
analysis[FORM] = analysis[FORM].replace( pattern_str, replacement )
return analysis | python | def _convert_nominal_form( analysis ):
''' Converts nominal categories of the input analysis.
Performs one-to-one conversions only. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
for idx, pattern_items in enumerate(_noun_conversion_rules):
pattern_str, replacement = pattern_items
if pattern_str in analysis[FORM]:
analysis[FORM] = analysis[FORM].replace( pattern_str, replacement )
return analysis | Converts nominal categories of the input analysis.
Performs one-to-one conversions only. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L115-L123 |
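The rule table _noun_conversion_rules is defined elsewhere in this module and is not shown here, so the sketch below reproduces only the mechanism: each (FS pattern, GT replacement) pair is applied as a plain substring replacement on the form string. The two rules used are illustrative assumptions, not the module's full table.

# Mechanism-only sketch of _convert_nominal_form().
noun_rules = [('sg n', 'Sg Nom'), ('pl p', 'Pl Par')]   # illustrative assumptions

def convert_nominal_form(form):
    for fs_pattern, gt_replacement in noun_rules:
        if fs_pattern in form:
            form = form.replace(fs_pattern, gt_replacement)
    return form

print(convert_nominal_form('sg n'))   # 'Sg Nom'
print(convert_nominal_form('pl p'))   # 'Pl Par'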
estnltk/estnltk | estnltk/converters/gt_conversion.py | _convert_amb_verbal_form | def _convert_amb_verbal_form( analysis ):
''' Converts ambiguous verbal categories of the input analysis.
Performs one-to-many conversions. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
results = []
for root_pat, pos, form_pat, replacements in _amb_verb_conversion_rules:
if analysis[POSTAG] == pos and re.match(root_pat, analysis[ROOT]) and \
re.match(form_pat, analysis[FORM]):
for replacement in replacements:
new_analysis = copy_analysis_dict( analysis )
new_form = re.sub(form_pat, replacement, analysis[FORM])
new_analysis[FORM] = new_form
results.append( new_analysis )
# break after the replacement has been made
# ( to avoid over-generation )
break
if not results:
results.append( analysis )
return results | python | def _convert_amb_verbal_form( analysis ):
''' Converts ambiguous verbal categories of the input analysis.
Performs one-to-many conversions. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
results = []
for root_pat, pos, form_pat, replacements in _amb_verb_conversion_rules:
if analysis[POSTAG] == pos and re.match(root_pat, analysis[ROOT]) and \
re.match(form_pat, analysis[FORM]):
for replacement in replacements:
new_analysis = copy_analysis_dict( analysis )
new_form = re.sub(form_pat, replacement, analysis[FORM])
new_analysis[FORM] = new_form
results.append( new_analysis )
# break after the replacement has been made
# ( to avoid over-generation )
break
if not results:
results.append( analysis )
return results | Converts ambiguous verbal categories of the input analysis.
Performs one-to-many conversions. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L152-L170 |
estnltk/estnltk | estnltk/converters/gt_conversion.py | _convert_verbal_form | def _convert_verbal_form( analysis ):
''' Converts ordinary verbal categories of the input analysis.
Performs one-to-one conversions. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
for form, replacement in _verb_conversion_rules:
# Exact match
if analysis[FORM] == form:
assert analysis[POSTAG] == 'V', \
'(!) Expected analysis of verb, but got analysis of "'+str(analysis[POSTAG])+'" instead.'
analysis[FORM] = replacement
# Inclusion : the case of some_prefix+' '+form ;
elif analysis[FORM].endswith(' '+form):
parts = analysis[FORM].split()
prefix = ' '.join( parts[:len(parts)-1] )
analysis[FORM] = prefix+' '+replacement
return analysis | python | def _convert_verbal_form( analysis ):
''' Converts ordinary verbal categories of the input analysis.
Performs one-to-one conversions. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
for form, replacement in _verb_conversion_rules:
# Exact match
if analysis[FORM] == form:
assert analysis[POSTAG] == 'V', \
'(!) Expected analysis of verb, but got analysis of "'+str(analysis[POSTAG])+'" instead.'
analysis[FORM] = replacement
# Inclusion : the case of some_prefix+' '+form ;
elif analysis[FORM].endswith(' '+form):
parts = analysis[FORM].split()
prefix = ' '.join( parts[:len(parts)-1] )
analysis[FORM] = prefix+' '+replacement
return analysis | Converts ordinary verbal categories of the input analysis.
Performs one-to-one conversions. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L220-L235 |
estnltk/estnltk | estnltk/converters/gt_conversion.py | _make_postfixes_1 | def _make_postfixes_1( analysis ):
''' Provides some post-fixes. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
if 'neg' in analysis[FORM]:
analysis[FORM] = re.sub( '^\s*neg ([^,]*)$', '\\1 Neg', analysis[FORM] )
analysis[FORM] = re.sub( ' Neg Neg$', ' Neg', analysis[FORM] )
analysis[FORM] = re.sub( ' Aff Neg$', ' Neg', analysis[FORM] )
analysis[FORM] = re.sub( 'neg', 'Neg', analysis[FORM] )
analysis[FORM] = analysis[FORM].rstrip().lstrip()
assert 'neg' not in analysis[FORM], \
'(!) The label "neg" should be removed by now.'
assert 'Neg' not in analysis[FORM] or ('Neg' in analysis[FORM] and analysis[FORM].endswith('Neg')), \
'(!) The label "Neg" should end the analysis line: '+str(analysis[FORM])
return analysis | python | def _make_postfixes_1( analysis ):
''' Provides some post-fixes. '''
assert FORM in analysis, '(!) The input analysis does not contain "'+FORM+'" key.'
if 'neg' in analysis[FORM]:
analysis[FORM] = re.sub( '^\s*neg ([^,]*)$', '\\1 Neg', analysis[FORM] )
analysis[FORM] = re.sub( ' Neg Neg$', ' Neg', analysis[FORM] )
analysis[FORM] = re.sub( ' Aff Neg$', ' Neg', analysis[FORM] )
analysis[FORM] = re.sub( 'neg', 'Neg', analysis[FORM] )
analysis[FORM] = analysis[FORM].rstrip().lstrip()
assert 'neg' not in analysis[FORM], \
'(!) The label "neg" should be removed by now.'
assert 'Neg' not in analysis[FORM] or ('Neg' in analysis[FORM] and analysis[FORM].endswith('Neg')), \
'(!) The label "Neg" should end the analysis line: '+str(analysis[FORM])
return analysis | Provides some post-fixes. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L241-L254 |
estnltk/estnltk | estnltk/converters/gt_conversion.py | _keep_analyses | def _keep_analyses( analyses, keep_forms, target_forms ):
''' Filters the given list of *analyses* by morphological forms:
deletes analyses that are listed in *target_forms*, but not in
*keep_forms*. '''
to_delete = []
for aid, analysis in enumerate(analyses):
delete = False
for target in target_forms:
if (target == analysis[FORM] and not analysis[FORM] in keep_forms):
delete = True
if delete:
to_delete.append( aid )
if to_delete:
to_delete.reverse()
for aid in to_delete:
del analyses[aid] | python | def _keep_analyses( analyses, keep_forms, target_forms ):
''' Filters the given list of *analyses* by morphological forms:
deletes analyses that are listed in *target_forms*, but not in
*keep_forms*. '''
to_delete = []
for aid, analysis in enumerate(analyses):
delete = False
for target in target_forms:
if (target == analysis[FORM] and not analysis[FORM] in keep_forms):
delete = True
if delete:
to_delete.append( aid )
if to_delete:
to_delete.reverse()
for aid in to_delete:
del analyses[aid] | Filters the given list of *analyses* by morphological forms:
deletes analyses that are listed in *target_forms*, but not in
*keep_forms*. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L260-L275 |
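Because _keep_analyses() mutates the analysis list in place, its effect is easiest to see on plain dicts. The sketch below reproduces the same filtering rule; the literal 'form' key stands in for the FORM constant, which is an assumption.

# Self-contained sketch of the in-place filtering done by _keep_analyses().
def keep_analyses(analyses, keep_forms, target_forms):
    to_delete = [i for i, a in enumerate(analyses)
                 if a['form'] in target_forms and a['form'] not in keep_forms]
    for i in reversed(to_delete):    # delete from the end to keep indices valid
        del analyses[i]

analyses = [{'form': 'Pers Prs Imprt Sg2'}, {'form': 'Pers Prs Ind Neg'}]
keep_analyses(analyses, ['Pers Prs Ind Neg'],
              ['Pers Prs Imprt Sg2', 'Pers Prs Ind Neg'])
print(analyses)   # only the 'Pers Prs Ind Neg' reading is kept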
estnltk/estnltk | estnltk/converters/gt_conversion.py | _disambiguate_neg | def _disambiguate_neg( words_layer ):
''' Disambiguates forms ambiguous between multiword negation and some
other form;
'''
prev_word_lemma = ''
for word_dict in words_layer:
forms = [ a[FORM] for a in word_dict[ANALYSIS] ]
if ('Pers Prs Imprt Sg2' in forms and 'Pers Prs Ind Neg' in forms):
if (prev_word_lemma == "ei" or prev_word_lemma == "ega"):
# ei saa, ei tee
_keep_analyses( word_dict[ANALYSIS], ['Pers Prs Ind Neg'], ['Pers Prs Imprt Sg2', 'Pers Prs Ind Neg'] )
else:
# saa! tee!
_keep_analyses( word_dict[ANALYSIS], ['Pers Prs Imprt Sg2'], ['Pers Prs Imprt Sg2', 'Pers Prs Ind Neg'] )
if ('Pers Prt Imprt' in forms and 'Pers Prt Ind Neg' in forms and 'Pers Prt Prc' in forms):
if (prev_word_lemma == "ei" or prev_word_lemma == "ega"):
# ei saanud, ei teinud
_keep_analyses( word_dict[ANALYSIS], ['Pers Prt Ind Neg'], ['Pers Prt Imprt','Pers Prt Ind Neg','Pers Prt Prc'] )
else:
# on, oli saanud teinud; kukkunud õun; ...
_keep_analyses( word_dict[ANALYSIS], ['Pers Prt Prc'], ['Pers Prt Imprt','Pers Prt Ind Neg','Pers Prt Prc'] )
if ('Impers Prt Ind Neg' in forms and 'Impers Prt Prc' in forms):
if (prev_word_lemma == "ei" or prev_word_lemma == "ega"):
# ei saadud, ei tehtud
_keep_analyses( word_dict[ANALYSIS], ['Impers Prt Ind Neg'], ['Impers Prt Ind Neg','Impers Prt Prc'] )
else:
# on, oli saadud tehtud; saadud õun; ...
_keep_analyses( word_dict[ANALYSIS], ['Impers Prt Prc'], ['Impers Prt Ind Neg','Impers Prt Prc'] )
prev_word_lemma = word_dict[ANALYSIS][0][ROOT] | python | def _disambiguate_neg( words_layer ):
''' Disambiguates forms ambiguous between multiword negation and some
other form;
'''
prev_word_lemma = ''
for word_dict in words_layer:
forms = [ a[FORM] for a in word_dict[ANALYSIS] ]
if ('Pers Prs Imprt Sg2' in forms and 'Pers Prs Ind Neg' in forms):
if (prev_word_lemma == "ei" or prev_word_lemma == "ega"):
# ei saa, ei tee
_keep_analyses( word_dict[ANALYSIS], ['Pers Prs Ind Neg'], ['Pers Prs Imprt Sg2', 'Pers Prs Ind Neg'] )
else:
# saa! tee!
_keep_analyses( word_dict[ANALYSIS], ['Pers Prs Imprt Sg2'], ['Pers Prs Imprt Sg2', 'Pers Prs Ind Neg'] )
if ('Pers Prt Imprt' in forms and 'Pers Prt Ind Neg' in forms and 'Pers Prt Prc' in forms):
if (prev_word_lemma == "ei" or prev_word_lemma == "ega"):
# ei saanud, ei teinud
_keep_analyses( word_dict[ANALYSIS], ['Pers Prt Ind Neg'], ['Pers Prt Imprt','Pers Prt Ind Neg','Pers Prt Prc'] )
else:
# on, oli saanud teinud; kukkunud õun; ...
_keep_analyses( word_dict[ANALYSIS], ['Pers Prt Prc'], ['Pers Prt Imprt','Pers Prt Ind Neg','Pers Prt Prc'] )
if ('Impers Prt Ind Neg' in forms and 'Impers Prt Prc' in forms):
if (prev_word_lemma == "ei" or prev_word_lemma == "ega"):
# ei saadud, ei tehtud
_keep_analyses( word_dict[ANALYSIS], ['Impers Prt Ind Neg'], ['Impers Prt Ind Neg','Impers Prt Prc'] )
else:
# on, oli saadud tehtud; saadud õun; ...
_keep_analyses( word_dict[ANALYSIS], ['Impers Prt Prc'], ['Impers Prt Ind Neg','Impers Prt Prc'] )
prev_word_lemma = word_dict[ANALYSIS][0][ROOT] | Disambiguates forms ambiguous between multiword negation and some
other form; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L277-L305 |
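The core of the rule above is lexical context: if the previous word is 'ei' or 'ega', the negated indicative reading wins; otherwise the imperative (or participle) reading does. Below is a standalone sketch of that decision for the Sg2 case; the literal key strings are assumed stand-ins for the ANALYSIS/FORM/ROOT constants.

# Standalone sketch of the 'ei'/'ega' context rule for Imprt Sg2 vs Ind Neg readings.
word = {'analysis': [{'form': 'Pers Prs Imprt Sg2', 'root': 'saa'},
                     {'form': 'Pers Prs Ind Neg',   'root': 'saa'}]}
prev_word_lemma = 'ei'                      # e.g. in "ei saa"
wanted = ('Pers Prs Ind Neg' if prev_word_lemma in ('ei', 'ega')
          else 'Pers Prs Imprt Sg2')
word['analysis'] = [a for a in word['analysis'] if a['form'] == wanted]
print(word['analysis'])   # the negation reading survives; with a different
                          # previous word the imperative reading would survive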
estnltk/estnltk | estnltk/converters/gt_conversion.py | _disambiguate_sid_ksid | def _disambiguate_sid_ksid( words_layer, text, scope=CLAUSES ):
''' Disambiguates verb forms based on existence of 2nd person pronoun ('sina') in given scope.
The scope could be either CLAUSES or SENTENCES.
'''
assert scope in [CLAUSES, SENTENCES], '(!) The scope should be either "clauses" or "sentences".'
group_indices = get_unique_clause_indices( text ) if scope==CLAUSES else get_unique_sentence_indices( text )
i = 0
gr_2nd_person_pron = {}
while i < len( words_layer ):
gr_index = group_indices[i]
if gr_index not in gr_2nd_person_pron:
# 1) Find out whether the current group (clause or sentence) contains "sina"
j = i
gr_2nd_person_pron_found = False
while j < len( words_layer ):
if group_indices[j] == gr_index:
forms = [ a[FORM] for a in words_layer[j][ANALYSIS] ]
lemmas = [ a[ROOT] for a in words_layer[j][ANALYSIS] ]
if 'sina' in lemmas and 'Sg Nom' in forms:
gr_2nd_person_pron_found = True
break
if group_indices[j] >= gr_index+10: # do not venture too far ...
break
j += 1
gr_2nd_person_pron[gr_index] = gr_2nd_person_pron_found
forms = [ a[FORM] for a in words_layer[i][ANALYSIS] ]
# 2) Disambiguate verb forms based on existence of 'sina' in the clause
if ('Pers Prt Ind Pl3 Aff' in forms and 'Pers Prt Ind Sg2 Aff' in forms): # -sid
if not gr_2nd_person_pron[ gr_index ]:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Ind Pl3 Aff'], ['Pers Prt Ind Pl3 Aff', 'Pers Prt Ind Sg2 Aff'] )
else:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Ind Sg2 Aff'], ['Pers Prt Ind Pl3 Aff', 'Pers Prt Ind Sg2 Aff'] )
if ('Pers Prs Cond Pl3 Aff' in forms and 'Pers Prs Cond Sg2 Aff' in forms): # -ksid
if not gr_2nd_person_pron[ gr_index ]:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prs Cond Pl3 Aff'], ['Pers Prs Cond Pl3 Aff', 'Pers Prs Cond Sg2 Aff'] )
else:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prs Cond Sg2 Aff'], ['Pers Prs Cond Pl3 Aff', 'Pers Prs Cond Sg2 Aff'] )
if ('Pers Prt Cond Pl3 Aff' in forms and 'Pers Prt Cond Sg2 Aff' in forms): # -nuksid
if not gr_2nd_person_pron[ gr_index ]:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Cond Pl3 Aff'], ['Pers Prt Cond Pl3 Aff', 'Pers Prt Cond Sg2 Aff'] )
else:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Cond Sg2 Aff'], ['Pers Prt Cond Pl3 Aff', 'Pers Prt Cond Sg2 Aff'] )
i += 1 | python | def _disambiguate_sid_ksid( words_layer, text, scope=CLAUSES ):
''' Disambiguates verb forms based on existence of 2nd person pronoun ('sina') in given scope.
The scope could be either CLAUSES or SENTENCES.
'''
assert scope in [CLAUSES, SENTENCES], '(!) The scope should be either "clauses" or "sentences".'
group_indices = get_unique_clause_indices( text ) if scope==CLAUSES else get_unique_sentence_indices( text )
i = 0
gr_2nd_person_pron = {}
while i < len( words_layer ):
gr_index = group_indices[i]
if gr_index not in gr_2nd_person_pron:
# 1) Find out whether the current group (clause or sentence) contains "sina"
j = i
gr_2nd_person_pron_found = False
while j < len( words_layer ):
if group_indices[j] == gr_index:
forms = [ a[FORM] for a in words_layer[j][ANALYSIS] ]
lemmas = [ a[ROOT] for a in words_layer[j][ANALYSIS] ]
if 'sina' in lemmas and 'Sg Nom' in forms:
gr_2nd_person_pron_found = True
break
if group_indices[j] >= gr_index+10: # do not venture too far ...
break
j += 1
gr_2nd_person_pron[gr_index] = gr_2nd_person_pron_found
forms = [ a[FORM] for a in words_layer[i][ANALYSIS] ]
# 2) Disambiguate verb forms based on existence of 'sina' in the clause
if ('Pers Prt Ind Pl3 Aff' in forms and 'Pers Prt Ind Sg2 Aff' in forms): # -sid
if not gr_2nd_person_pron[ gr_index ]:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Ind Pl3 Aff'], ['Pers Prt Ind Pl3 Aff', 'Pers Prt Ind Sg2 Aff'] )
else:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Ind Sg2 Aff'], ['Pers Prt Ind Pl3 Aff', 'Pers Prt Ind Sg2 Aff'] )
if ('Pers Prs Cond Pl3 Aff' in forms and 'Pers Prs Cond Sg2 Aff' in forms): # -ksid
if not gr_2nd_person_pron[ gr_index ]:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prs Cond Pl3 Aff'], ['Pers Prs Cond Pl3 Aff', 'Pers Prs Cond Sg2 Aff'] )
else:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prs Cond Sg2 Aff'], ['Pers Prs Cond Pl3 Aff', 'Pers Prs Cond Sg2 Aff'] )
if ('Pers Prt Cond Pl3 Aff' in forms and 'Pers Prt Cond Sg2 Aff' in forms): # -nuksid
if not gr_2nd_person_pron[ gr_index ]:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Cond Pl3 Aff'], ['Pers Prt Cond Pl3 Aff', 'Pers Prt Cond Sg2 Aff'] )
else:
_keep_analyses( words_layer[i][ANALYSIS], ['Pers Prt Cond Sg2 Aff'], ['Pers Prt Cond Pl3 Aff', 'Pers Prt Cond Sg2 Aff'] )
i += 1 | Disambiguates verb forms based on existence of 2nd person pronoun ('sina') in given scope.
The scope could be either CLAUSES or SENTENCES. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L308-L350 |
estnltk/estnltk | estnltk/converters/gt_conversion.py | _make_postfixes_2 | def _make_postfixes_2( words_layer ):
''' Provides some post-fixes after the disambiguation. '''
for word_dict in words_layer:
for analysis in word_dict[ANALYSIS]:
analysis[FORM] = re.sub( '(Sg|Pl)([123])', '\\1 \\2', analysis[FORM] )
return words_layer | python | def _make_postfixes_2( words_layer ):
''' Provides some post-fixes after the disambiguation. '''
for word_dict in words_layer:
for analysis in word_dict[ANALYSIS]:
analysis[FORM] = re.sub( '(Sg|Pl)([123])', '\\1 \\2', analysis[FORM] )
return words_layer | Provides some post-fixes after the disambiguation. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L356-L361 |
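The post-fix above is a single regular expression that inserts a space between number and person after disambiguation; one line is enough to illustrate it.

# Illustration of the 'Sg2'/'Pl3' -> 'Sg 2'/'Pl 3' post-fix applied above.
import re
print(re.sub('(Sg|Pl)([123])', '\\1 \\2', 'Pers Prs Ind Sg2 Aff'))
# -> 'Pers Prs Ind Sg 2 Aff'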
estnltk/estnltk | estnltk/converters/gt_conversion.py | convert_analysis | def convert_analysis( analyses ):
''' Converts a list of analyses (list of dict objects) from FS's vabamorf format to
giellatekno (GT) format.
Due to one-to-many conversion rules, the number of analyses returned by this method
can be greater than the number of analyses in the input list.
'''
resulting_analyses = []
for analysis in analyses:
# Make a copy of the analysis
new_analyses = [ copy_analysis_dict( analysis ) ]
# Convert noun categories
new_analyses[0] = _convert_nominal_form( new_analyses[0] )
# Convert ambiguous verb categories
new_analyses = _convert_amb_verbal_form( new_analyses[0] )
# Convert remaining verbal categories
new_analyses = [_convert_verbal_form( a ) for a in new_analyses]
# Make postfixes
new_analyses = [_make_postfixes_1( a ) for a in new_analyses]
resulting_analyses.extend( new_analyses )
return resulting_analyses | python | def convert_analysis( analyses ):
''' Converts a list of analyses (list of dict objects) from FS's vabamorf format to
giellatekno (GT) format.
Due to one-to-many conversion rules, the number of analyses returned by this method
can be greater than the number of analyses in the input list.
'''
resulting_analyses = []
for analysis in analyses:
# Make a copy of the analysis
new_analyses = [ copy_analysis_dict( analysis ) ]
# Convert noun categories
new_analyses[0] = _convert_nominal_form( new_analyses[0] )
# Convert ambiguous verb categories
new_analyses = _convert_amb_verbal_form( new_analyses[0] )
# Convert remaining verbal categories
new_analyses = [_convert_verbal_form( a ) for a in new_analyses]
# Make postfixes
new_analyses = [_make_postfixes_1( a ) for a in new_analyses]
resulting_analyses.extend( new_analyses )
return resulting_analyses | Converts a list of analyses (list of dict objects) from FS's vabamorf format to
giellatekno (GT) format.
Due to one-to-many conversion rules, the number of analyses returned by this method
can be greater than the number of analyses in the input list. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L368-L387 |
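A hedged sketch of converting one word's analyses with convert_analysis(); it assumes EstNLTK 1.4 is installed and that the literal key strings below match the module's constants (they follow the vabamorf-style keys documented later for get_analysis_dict). The example word and the expected GT output are illustrative only.

# Hedged sketch: FS-style analyses in, GT-style analyses out.
from estnltk.converters.gt_conversion import convert_analysis

fs_analyses = [{'partofspeech': 'V', 'root': 'kodeeri', 'form': 'da',
                'clitic': '', 'ending': 'da', 'lemma': 'kodeerima'}]
gt_analyses = convert_analysis( fs_analyses )
for a in gt_analyses:
    print(a['partofspeech'], a['form'])
# Because some rules are one-to-many, len(gt_analyses) can exceed len(fs_analyses).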
estnltk/estnltk | estnltk/converters/gt_conversion.py | convert_to_gt | def convert_to_gt( text, layer_name=GT_WORDS ):
''' Converts all words in a morphologically analysed Text from FS format to
giellatekno (GT) format, and stores in a new layer named GT_WORDS.
If the keyword argument *layer_name=='words'* , overwrites the old 'words'
layer with the new layer containing GT format annotations.
Parameters
-----------
text : estnltk.text.Text
Morphologically annotated text that needs to be converted from FS format
to GT format;
layer_name : str
Name of the Text's layer in which GT format morphological annotations
are stored;
Defaults to GT_WORDS;
'''
assert WORDS in text, \
'(!) The input text should contain "'+str(WORDS)+'" layer.'
assert len(text[WORDS])==0 or (len(text[WORDS])>0 and ANALYSIS in text[WORDS][0]), \
'(!) Words in the input text should contain "'+str(ANALYSIS)+'" layer.'
new_words_layer = []
# 1) Perform the conversion
for word in text[WORDS]:
new_analysis = []
new_analysis.extend( convert_analysis( word[ANALYSIS] ) )
new_words_layer.append( {TEXT:word[TEXT], ANALYSIS:new_analysis, START:word[START], END:word[END]} )
# 2) Perform some context-specific disambiguation
_disambiguate_neg( new_words_layer )
_disambiguate_sid_ksid( new_words_layer, text, scope=CLAUSES )
_disambiguate_sid_ksid( new_words_layer, text, scope=SENTENCES )
_make_postfixes_2( new_words_layer )
# 3) Attach the layer
if layer_name != WORDS:
# Simply attach the new layer
text[layer_name] = new_words_layer
else:
# Perform word-by-word replacements
# (because simple attaching won't work here)
for wid, new_word in enumerate( new_words_layer ):
text[WORDS][wid] = new_word
return text | python | def convert_to_gt( text, layer_name=GT_WORDS ):
''' Converts all words in a morphologically analysed Text from FS format to
giellatekno (GT) format, and stores in a new layer named GT_WORDS.
If the keyword argument *layer_name=='words'* , overwrites the old 'words'
layer with the new layer containing GT format annotations.
Parameters
-----------
text : estnltk.text.Text
Morphologically annotated text that needs to be converted from FS format
to GT format;
layer_name : str
Name of the Text's layer in which GT format morphological annotations
are stored;
Defaults to GT_WORDS;
'''
assert WORDS in text, \
'(!) The input text should contain "'+str(WORDS)+'" layer.'
assert len(text[WORDS])==0 or (len(text[WORDS])>0 and ANALYSIS in text[WORDS][0]), \
'(!) Words in the input text should contain "'+str(ANALYSIS)+'" layer.'
new_words_layer = []
# 1) Perform the conversion
for word in text[WORDS]:
new_analysis = []
new_analysis.extend( convert_analysis( word[ANALYSIS] ) )
new_words_layer.append( {TEXT:word[TEXT], ANALYSIS:new_analysis, START:word[START], END:word[END]} )
# 2) Perform some context-specific disambiguation
_disambiguate_neg( new_words_layer )
_disambiguate_sid_ksid( new_words_layer, text, scope=CLAUSES )
_disambiguate_sid_ksid( new_words_layer, text, scope=SENTENCES )
_make_postfixes_2( new_words_layer )
# 3) Attach the layer
if layer_name != WORDS:
# Simply attach the new layer
text[layer_name] = new_words_layer
else:
# Perform word-by-word replacements
# (because simple attaching won't work here)
for wid, new_word in enumerate( new_words_layer ):
text[WORDS][wid] = new_word
return text | Converts all words in a morphologically analysed Text from FS format to
giellatekno (GT) format, and stores in a new layer named GT_WORDS.
If the keyword argument *layer_name=='words'* , overwrites the old 'words'
layer with the new layer containing GT format annotations.
Parameters
-----------
text : estnltk.text.Text
Morphologically annotated text that needs to be converted from FS format
to GT format;
layer_name : str
Name of the Text's layer in which GT format morphological annotations
are stored;
Defaults to GT_WORDS; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L390-L431 |
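Typical end-to-end use of convert_to_gt(), hedged on the EstNLTK 1.4 API: tag_analysis() is assumed to produce the FS-format 'words' annotations, and the default GT layer name is assumed to be 'gt_words' (the GT_WORDS constant).

# Hedged end-to-end sketch for convert_to_gt() (assumes EstNLTK 1.4).
from estnltk import Text
from estnltk.converters.gt_conversion import convert_to_gt

text = Text('Mul on palju igasugust informatsiooni.')
text.tag_analysis()            # FS/vabamorf morphology on the 'words' layer
convert_to_gt( text )          # adds the GT layer (assumed name: 'gt_words')
for word in text['gt_words']:
    print(word['text'], [a['form'] for a in word['analysis']])
# convert_to_gt(text, layer_name='words') would instead overwrite the FS
# annotations in place, word by word, as described above.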
estnltk/estnltk | estnltk/converters/gt_conversion.py | get_analysis_dict | def get_analysis_dict( root, pos, form ):
''' Takes *root*, *pos* and *form* from Filosoft's mrf input and reformats as
EstNLTK's analysis dict:
{
"clitic": string,
"ending": string,
"form": string,
"partofspeech": string,
"root": string
},
Returns the dict;
'''
import sys
result = { CLITIC:"", ENDING:"", FORM:form, POSTAG:pos, ROOT:"" }
breakpoint = -1
for i in range(len(root)-1, -1, -1):
if root[i] == '+':
breakpoint = i
break
if breakpoint == -1:
result[ROOT] = root
result[ENDING] = "0"
if not re.match("^\W+$", root):
try:
print( " No breakpoint found from: ", root, pos, form, file=sys.stderr )
except UnicodeEncodeError:
print( " No breakpoint found from input *root*!", file=sys.stderr )
else:
result[ROOT] = root[0:breakpoint]
result[ENDING] = root[breakpoint+1:]
if result[ENDING].endswith('ki') and len(result[ENDING]) > 2:
result[CLITIC] = 'ki'
result[ENDING] = re.sub('ki$', '', result[ENDING])
if result[ENDING].endswith('gi') and len(result[ENDING]) > 2:
result[CLITIC] = 'gi'
result[ENDING] = re.sub('gi$', '', result[ENDING])
return result | python | def get_analysis_dict( root, pos, form ):
''' Takes *root*, *pos* and *form* from Filosoft's mrf input and reformats as
EstNLTK's analysis dict:
{
"clitic": string,
"ending": string,
"form": string,
"partofspeech": string,
"root": string
},
Returns the dict;
'''
import sys
result = { CLITIC:"", ENDING:"", FORM:form, POSTAG:pos, ROOT:"" }
breakpoint = -1
for i in range(len(root)-1, -1, -1):
if root[i] == '+':
breakpoint = i
break
if breakpoint == -1:
result[ROOT] = root
result[ENDING] = "0"
if not re.match("^\W+$", root):
try:
print( " No breakpoint found from: ", root, pos, form, file=sys.stderr )
except UnicodeEncodeError:
print( " No breakpoint found from input *root*!", file=sys.stderr )
else:
result[ROOT] = root[0:breakpoint]
result[ENDING] = root[breakpoint+1:]
if result[ENDING].endswith('ki') and len(result[ENDING]) > 2:
result[CLITIC] = 'ki'
result[ENDING] = re.sub('ki$', '', result[ENDING])
if result[ENDING].endswith('gi') and len(result[ENDING]) > 2:
result[CLITIC] = 'gi'
result[ENDING] = re.sub('gi$', '', result[ENDING])
return result | Takes *root*, *pos* and *form* from Filosoft's mrf input and reformats as
EstNLTK's analysis dict:
{
"clitic": string,
"ending": string,
"form": string,
"partofspeech": string,
"root": string
},
Returns the dict; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L441-L477 |
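get_analysis_dict() splits the Filosoft root string at the last '+' and then peels a trailing 'gi'/'ki' off the ending into the clitic slot. A short usage sketch follows; the key names come from the docstring above, while the second input word is an illustrative assumption rather than real corpus output.

# Usage sketch for get_analysis_dict() (assumes EstNLTK is importable).
from estnltk.converters.gt_conversion import get_analysis_dict

a = get_analysis_dict( 'iga_sugune+t', 'P', 'sg p' )
print(a['root'], a['ending'], a['clitic'])      # 'iga_sugune' 't' ''

b = get_analysis_dict( 'raamatu+legi', 'S', 'sg all' )
print(b['root'], b['ending'], b['clitic'])      # 'raamatu' 'le' 'gi'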
estnltk/estnltk | estnltk/converters/gt_conversion.py | read_text_from_idx_file | def read_text_from_idx_file( file_name, layer_name=WORDS, keep_init_lines=False ):
''' Reads IDX format morphological annotations from given file, and returns as a Text
object.
The Text object will be tokenized for paragraphs, sentences, words, and it will
contain morphological annotations in the layer *layer_name* (by default: WORDS);
Parameters
-----------
file_name : str
Name of the input file; Should contain IDX format text segmentation and
morphological annotation;
keep_init_lines : bool
Optional argument specifying whether the lines from the file should also be
preserved on a special layer named 'init_lines';
Default: False
layer_name : str
Name of the Text's layer in which morphological annotations from text are
stored;
Defaults to WORDS;
Example: expected format of the input:
129 1 1 " " " Z
129 2 1 Mul mina mina+l P sg ad
129 3 1 on olema ole+0 V b
129 3 1 on olema ole+0 V vad
129 4 1 palju palju palju+0 D
129 5 1 igasugust igasugune iga_sugune+t P sg p
129 6 1 informatsiooni informatsioon informatsioon+0 S sg p
129 7 1 . . . Z
'''
from nltk.tokenize.simple import LineTokenizer
from nltk.tokenize.regexp import RegexpTokenizer
from estnltk import Text
# 1) Collect the text along with morphological analyses from the input IDX file
init_lines = []
words = []
sentence = []
sentences = []
prev_sent_id = -1
prev_word_id = -1
in_f = codecs.open(file_name, mode='r', encoding='utf-8')
for line in in_f:
fields = line.split('\t')
assert len(fields) == 8, '(!) Unexpected number of fields in the line: '+str(len(fields))
sent_id = fields[0]
word_id = fields[1]
clause_id = fields[2]
token = fields[3]
if prev_sent_id != sent_id:
# Record the old sentence, start a new
if sentence:
sentences.append( ' '.join(sentence) )
sentence = []
if prev_word_id != word_id:
# Record a new token
sentence.append( token )
word = { TEXT:token, ANALYSIS:[] }
words.append(word)
# Augment the last word in the list with new analysis
lemma = fields[4]
root = fields[5]
pos = fields[6]
form = fields[7].rstrip()
ending = ''
clitic = ''
analysis = get_analysis_dict( root, pos, form )
analysis[LEMMA] = lemma
words[-1][ANALYSIS].append( analysis )
prev_sent_id = sent_id
prev_word_id = word_id
if keep_init_lines:
init_lines.append( [sent_id+' '+word_id, line] )
in_f.close()
if sentence:
# Record the last sentence
sentences.append( ' '.join(sentence) )
# 2) Construct the estnltk's Text
kwargs4text = {
# Use custom tokenization utils in order to preserve exactly the same
# tokenization as was in the input;
"word_tokenizer": RegexpTokenizer(" ", gaps=True),
"sentence_tokenizer": LineTokenizer()
}
from estnltk.text import Text
text = Text( '\n'.join(sentences), **kwargs4text )
# Tokenize up to the words layer
text.tokenize_words()
# 3) Create a new layer with morphological analyses, or
# populate the old layer with morphological analyses;
assert len(text[WORDS]) == len(words), \
'(!) Number of words from input does not match with the number of words in EstNLTK Text: '+\
str(len(text[WORDS]) )+' != '+str(len(words))
if layer_name != WORDS:
# If necessary, create a new layer duplicating the WORDS layer
text[layer_name] = []
for word in text[WORDS]:
text[layer_name].append({START:word[START], END:word[END], TEXT:word[TEXT]})
# Copy morphological analyses to the new layer / populate the old layer
for wid, word in enumerate( text[WORDS] ):
text[layer_name][wid][ANALYSIS] = words[wid][ANALYSIS]
if layer_name == WORDS:
assert text.is_tagged(ANALYSIS), '(!) The layer of analysis should exist by now!'
if keep_init_lines:
# Preserve the initial lines from file in a separate layer
text['init_lines'] = []
i = 0
for wid, word in enumerate( text[layer_name] ):
words_lines = []
# collect lines associated with the word
while i < len(init_lines):
[lid, line] = init_lines[i]
if not words_lines or words_lines[-1][0]==lid:
words_lines.append([lid, line])
else:
break
i += 1
# record lines
text['init_lines'].append( \
{START:word[START], END:word[END], 'lines':[l[1] for l in words_lines]} )
assert len(text['init_lines']) == len(text[layer_name]), \
'(!) The number of initial lines should match the number of words in text!'
return text | python | def read_text_from_idx_file( file_name, layer_name=WORDS, keep_init_lines=False ):
''' Reads IDX format morphological annotations from given file, and returns as a Text
object.
The Text object will be tokenized for paragraphs, sentences, words, and it will
contain morphological annotations in the layer *layer_name* (by default: WORDS);
Parameters
-----------
file_name : str
Name of the input file; Should contain IDX format text segmentation and
morphological annotation;
keep_init_lines : bool
Optional argument specifying whether the lines from the file should also be
preserved on a special layer named 'init_lines';
Default: False
layer_name : str
Name of the Text's layer in which morphological annotations from text are
stored;
Defaults to WORDS;
Example: expected format of the input:
129 1 1 " " " Z
129 2 1 Mul mina mina+l P sg ad
129 3 1 on olema ole+0 V b
129 3 1 on olema ole+0 V vad
129 4 1 palju palju palju+0 D
129 5 1 igasugust igasugune iga_sugune+t P sg p
129 6 1 informatsiooni informatsioon informatsioon+0 S sg p
129 7 1 . . . Z
'''
from nltk.tokenize.simple import LineTokenizer
from nltk.tokenize.regexp import RegexpTokenizer
from estnltk import Text
# 1) Collect the text along with morphological analyses from the input IDX file
init_lines = []
words = []
sentence = []
sentences = []
prev_sent_id = -1
prev_word_id = -1
in_f = codecs.open(file_name, mode='r', encoding='utf-8')
for line in in_f:
fields = line.split('\t')
assert len(fields) == 8, '(!) Unexpected number of fields in the line: '+str(len(fields))
sent_id = fields[0]
word_id = fields[1]
clause_id = fields[2]
token = fields[3]
if prev_sent_id != sent_id:
# Record the old sentence, start a new
if sentence:
sentences.append( ' '.join(sentence) )
sentence = []
if prev_word_id != word_id:
# Record a new token
sentence.append( token )
word = { TEXT:token, ANALYSIS:[] }
words.append(word)
# Augment the last word in the list with new analysis
lemma = fields[4]
root = fields[5]
pos = fields[6]
form = fields[7].rstrip()
ending = ''
clitic = ''
analysis = get_analysis_dict( root, pos, form )
analysis[LEMMA] = lemma
words[-1][ANALYSIS].append( analysis )
prev_sent_id = sent_id
prev_word_id = word_id
if keep_init_lines:
init_lines.append( [sent_id+' '+word_id, line] )
in_f.close()
if sentence:
# Record the last sentence
sentences.append( ' '.join(sentence) )
# 2) Construct the estnltk's Text
kwargs4text = {
# Use custom tokenization utils in order to preserve exactly the same
# tokenization as was in the input;
"word_tokenizer": RegexpTokenizer(" ", gaps=True),
"sentence_tokenizer": LineTokenizer()
}
from estnltk.text import Text
text = Text( '\n'.join(sentences), **kwargs4text )
# Tokenize up to the words layer
text.tokenize_words()
# 3) Create a new layer with morphological analyses, or
# populate the old layer with morphological analyses;
assert len(text[WORDS]) == len(words), \
'(!) Number of words from input does not match with the number of words in EstNLTK Text: '+\
str(len(text[WORDS]) )+' != '+str(len(words))
if layer_name != WORDS:
# If necessary, create a new layer duplicating the WORDS layer
text[layer_name] = []
for word in text[WORDS]:
text[layer_name].append({START:word[START], END:word[END], TEXT:word[TEXT]})
# Copy morphological analyses to the new layer / populate the old layer
for wid, word in enumerate( text[WORDS] ):
text[layer_name][wid][ANALYSIS] = words[wid][ANALYSIS]
if layer_name == WORDS:
assert text.is_tagged(ANALYSIS), '(!) The layer of analysis should exist by now!'
if keep_init_lines:
# Preserve the initial lines from file in a separate layer
text['init_lines'] = []
i = 0
for wid, word in enumerate( text[layer_name] ):
words_lines = []
# collect lines associated with the word
while i < len(init_lines):
[lid, line] = init_lines[i]
if not words_lines or words_lines[-1][0]==lid:
words_lines.append([lid, line])
else:
break
i += 1
# record lines
text['init_lines'].append( \
{START:word[START], END:word[END], 'lines':[l[1] for l in words_lines]} )
assert len(text['init_lines']) == len(text[layer_name]), \
'(!) The number of initial lines should match the number of words in text!'
return text | Reads IDX format morphological annotations from given file, and returns as a Text
object.
The Text object will be tokenized for paragraphs, sentences, words, and it will
contain morphological annotations in the layer *layer_name* (by default: WORDS);
Parameters
-----------
file_name : str
Name of the input file; Should contain IDX format text segmentation and
morphological annotation;
keep_init_lines : bool
Optional argument specifying whether the lines from the file should also be
preserved on a special layer named 'init_lines';
Default: False
layer_name : str
Name of the Text's layer in which morphological annotations from text are
stored;
Defaults to WORDS;
Example: expected format of the input:
129 1 1 " " " Z
129 2 1 Mul mina mina+l P sg ad
129 3 1 on olema ole+0 V b
129 3 1 on olema ole+0 V vad
129 4 1 palju palju palju+0 D
129 5 1 igasugust igasugune iga_sugune+t P sg p
129 6 1 informatsiooni informatsioon informatsioon+0 S sg p
129 7 1 . . . Z | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L481-L609 |
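Note that although the docstring example above renders the columns with spaces, the implementation splits each line on tabs and asserts exactly 8 fields. The hedged sketch below writes two tab-separated IDX lines to a file and reads them back; the file name is arbitrary and the EstNLTK 1.4 API (including Text.word_texts) is assumed.

# Hedged sketch: build a tiny tab-separated IDX file and read it back.
import codecs
from estnltk.converters.gt_conversion import read_text_from_idx_file

idx_lines = [
    '129\t2\t1\tMul\tmina\tmina+l\tP\tsg ad\n',
    '129\t4\t1\tpalju\tpalju\tpalju+0\tD\t\n',
]
with codecs.open('sample.idx', 'w', encoding='utf-8') as f:
    f.writelines(idx_lines)

text = read_text_from_idx_file( 'sample.idx', keep_init_lines=True )
print(text.word_texts)           # ['Mul', 'palju']
print(len(text['init_lines']))   # 2, one entry per word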
estnltk/estnltk | estnltk/converters/gt_conversion.py | get_original_vs_converted_diff | def get_original_vs_converted_diff( original ,converted ):
''' Compares the *original* text to *converted* text, and detects changes/differences in
morphological annotations.
The method constructs line-by-line comparison string, where lines are separated by
newline, and '***' at the beginning of the line indicates the difference.
Returns a pair: results of the line-by-line comparison as a string, and boolean value
indicating whether there were any differences.
'''
from estnltk.syntax.syntax_preprocessing import convert_Text_to_mrf
old_layer_mrf = convert_Text_to_mrf( original )
new_layer_mrf = convert_Text_to_mrf( converted )
max_len_1 = max([len(l) for l in old_layer_mrf ])
max_len_2 = max([len(l) for l in new_layer_mrf ])
max_len = max( max_len_1, max_len_2 )
format_str = '{:<'+str(max_len+1)+'}'
i = 0
j = 0
comp_lines = []
diff_found = False
while(i < len(old_layer_mrf) or j < len(new_layer_mrf)):
l1 = old_layer_mrf[i]
l2 = new_layer_mrf[j]
# 1) Output line containing tokens
if not l1.startswith(' ') and not l2.startswith(' '):
diff = '*** ' if format_str.format(l1) != format_str.format(l2) else ' '
comp_lines.append( diff+format_str.format(l1)+format_str.format(l2) )
if diff == '*** ':
diff_found = True
i += 1
j += 1
else:
# 2) Output analysis line(s)
while(i < len(old_layer_mrf) or j < len(new_layer_mrf)):
l1 = old_layer_mrf[i]
l2 = new_layer_mrf[j]
if l1.startswith(' ') and l2.startswith(' '):
diff = '*** ' if format_str.format(l1) != format_str.format(l2) else ' '
comp_lines.append( diff+format_str.format(l1)+format_str.format(l2) )
if diff == '*** ':
diff_found = True
i += 1
j += 1
elif l1.startswith(' ') and not l2.startswith(' '):
diff = '*** '
comp_lines.append( diff+format_str.format(l1)+format_str.format(' ') )
diff_found = True
i += 1
elif not l1.startswith(' ') and l2.startswith(' '):
diff = '*** '
comp_lines.append( diff+format_str.format(' ')+format_str.format(l2) )
diff_found = True
j += 1
else:
break
return '\n'.join( comp_lines ), diff_found | python | def get_original_vs_converted_diff( original ,converted ):
''' Compares the *original* text to *converted* text, and detects changes/differences in
morphological annotations.
The method constructs line-by-line comparison string, where lines are separated by
newline, and '***' at the beginning of the line indicates the difference.
Returns a pair: results of the line-by-line comparison as a string, and boolean value
indicating whether there were any differences.
'''
from estnltk.syntax.syntax_preprocessing import convert_Text_to_mrf
old_layer_mrf = convert_Text_to_mrf( original )
new_layer_mrf = convert_Text_to_mrf( converted )
max_len_1 = max([len(l) for l in old_layer_mrf ])
max_len_2 = max([len(l) for l in new_layer_mrf ])
max_len = max( max_len_1, max_len_2 )
format_str = '{:<'+str(max_len+1)+'}'
i = 0
j = 0
comp_lines = []
diff_found = False
while(i < len(old_layer_mrf) or j < len(new_layer_mrf)):
l1 = old_layer_mrf[i]
l2 = new_layer_mrf[j]
# 1) Output line containing tokens
if not l1.startswith(' ') and not l2.startswith(' '):
diff = '*** ' if format_str.format(l1) != format_str.format(l2) else ' '
comp_lines.append( diff+format_str.format(l1)+format_str.format(l2) )
if diff == '*** ':
diff_found = True
i += 1
j += 1
else:
# 2) Output analysis line(s)
while(i < len(old_layer_mrf) or j < len(new_layer_mrf)):
l1 = old_layer_mrf[i]
l2 = new_layer_mrf[j]
if l1.startswith(' ') and l2.startswith(' '):
diff = '*** ' if format_str.format(l1) != format_str.format(l2) else ' '
comp_lines.append( diff+format_str.format(l1)+format_str.format(l2) )
if diff == '*** ':
diff_found = True
i += 1
j += 1
elif l1.startswith(' ') and not l2.startswith(' '):
diff = '*** '
comp_lines.append( diff+format_str.format(l1)+format_str.format(' ') )
diff_found = True
i += 1
elif not l1.startswith(' ') and l2.startswith(' '):
diff = '*** '
comp_lines.append( diff+format_str.format(' ')+format_str.format(l2) )
diff_found = True
j += 1
else:
break
return '\n'.join( comp_lines ), diff_found | Compares the *original* text to *converted* text, and detects changes/differences in
morphological annotations.
The method constructs line-by-line comparison string, where lines are separated by
newline, and '***' at the beginning of the line indicates the difference.
Returns a pair: results of the line-by-line comparison as a string, and boolean value
indicating whether there were any differences. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/converters/gt_conversion.py#L613-L669 |
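A hedged sketch of comparing a text before and after GT conversion; it assumes EstNLTK 1.4, that a Text object can be deep-copied, and that convert_Text_to_mrf (used internally) accepts both annotation variants. The returned string marks differing lines with '***'.

# Hedged sketch; the deep-copy step and the sentence are illustrative only.
from copy import deepcopy
from estnltk import Text
from estnltk.converters.gt_conversion import convert_to_gt, \
                                             get_original_vs_converted_diff

original = Text('Ta on tulnud.')
original.tag_analysis()
converted = convert_to_gt( deepcopy(original), layer_name='words' )
diff_str, diff_found = get_original_vs_converted_diff( original, converted )
print(diff_found)   # expected True: e.g. the 'nud' reading is relabelled in GT
print(diff_str)     # side-by-side mrf lines, '***' marking the changed ones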
estnltk/estnltk | estnltk/mw_verbs/verbchain_detector.py | removeRedundantVerbChains | def removeRedundantVerbChains( foundChains, removeOverlapping = True, removeSingleAraAndEi = False ):
''' Eemaldab yleliigsed verbiahelad: ahelad, mis katavad osaliselt v6i t2ielikult
teisi ahelaid (removeOverlapping == True), yhes6nalised 'ei' ja 'ära' ahelad (kui
removeSingleAraAndEi == True);
Yldiselt on nii, et ylekattuvaid ei tohiks palju olla, kuna fraaside laiendamisel
pyytakse alati kontrollida, et laiendus ei kattuks m6ne olemasoleva fraasiga;
Peamiselt tekivad ylekattuvused siis, kui morf analyysi on sattunud valed
finiitverbi analyysid (v6i analyysid on j22nud mitmesteks) ja seega tuvastatakse
osalausest rohkem finiitverbe, kui oleks vaja.
Heuristik: kahe ylekattuva puhul j2tame alles fraasi, mis algab eespool ning
m2rgime sellel OTHER_VERBS v22rtuseks True, mis m2rgib, et kontekstis on mingi
segadus teiste verbidega.
'''
toDelete = []
for i in range(len(foundChains)):
matchObj1 = foundChains[i]
if removeOverlapping:
for j in range(i+1, len(foundChains)):
matchObj2 = foundChains[j]
if matchObj1 != matchObj2 and matchObj1[CLAUSE_IDX] == matchObj2[CLAUSE_IDX]:
phrase1 = set(matchObj1[PHRASE])
phrase2 = set(matchObj2[PHRASE])
intersect = phrase1.intersection(phrase2)
if len(intersect) > 0:
# Yldiselt on nii, et ylekattuvaid ei tohiks olla, kuna fraaside laiendamisel
# pyytakse alati kontrollida, et laiendus ei kattuks m6ne olemasoleva fraasiga;
# Peamiselt tekivad ylekattuvused siis, kui morf analyysil on finiitverbi
# analyysidesse j22nud sisse mitmesused (v6i on sattunud valed analyysid) ja
# seega tuvastatakse osalausest rohkem finiitverbe, kui oleks vaja.
# Heuristik: j2tame alles fraasi, mis algab eespool ning lisame selle otsa
# kysim2rgi (kuna pole kindel, et asjad on korras)
minWid1 = min(matchObj1[PHRASE])
minWid2 = min(matchObj2[PHRASE])
if minWid1 < minWid2:
matchObj1[OTHER_VERBS] = True
toDelete.append(j)
else:
matchObj2[OTHER_VERBS] = True
toDelete.append(i)
if removeSingleAraAndEi:
if ( len(matchObj1[PATTERN])==1 and re.match('^(ei|ära)$', matchObj1[PATTERN][0]) ):
toDelete.append(i)
if toDelete:
if len(set(toDelete)) != len(toDelete):
toDelete = list(set(toDelete)) # Eemaldame duplikaadid
toDelete = [ foundChains[i] for i in toDelete ]
for verbObj in toDelete:
foundChains.remove(verbObj) | python | def removeRedundantVerbChains( foundChains, removeOverlapping = True, removeSingleAraAndEi = False ):
''' Eemaldab yleliigsed verbiahelad: ahelad, mis katavad osaliselt v6i t2ielikult
teisi ahelaid (removeOverlapping == True), yhes6nalised 'ei' ja 'ära' ahelad (kui
removeSingleAraAndEi == True);
Yldiselt on nii, et ylekattuvaid ei tohiks palju olla, kuna fraaside laiendamisel
pyytakse alati kontrollida, et laiendus ei kattuks m6ne olemasoleva fraasiga;
Peamiselt tekivad ylekattuvused siis, kui morf analyysi on sattunud valed
finiitverbi analyysid (v6i analyysid on j22nud mitmesteks) ja seega tuvastatakse
osalausest rohkem finiitverbe, kui oleks vaja.
Heuristik: kahe ylekattuva puhul j2tame alles fraasi, mis algab eespool ning
m2rgime sellel OTHER_VERBS v22rtuseks True, mis m2rgib, et kontekstis on mingi
segadus teiste verbidega.
'''
toDelete = []
for i in range(len(foundChains)):
matchObj1 = foundChains[i]
if removeOverlapping:
for j in range(i+1, len(foundChains)):
matchObj2 = foundChains[j]
if matchObj1 != matchObj2 and matchObj1[CLAUSE_IDX] == matchObj2[CLAUSE_IDX]:
phrase1 = set(matchObj1[PHRASE])
phrase2 = set(matchObj2[PHRASE])
intersect = phrase1.intersection(phrase2)
if len(intersect) > 0:
# Yldiselt on nii, et ylekattuvaid ei tohiks olla, kuna fraaside laiendamisel
# pyytakse alati kontrollida, et laiendus ei kattuks m6ne olemasoleva fraasiga;
# Peamiselt tekivad ylekattuvused siis, kui morf analyysil on finiitverbi
# analyysidesse j22nud sisse mitmesused (v6i on sattunud valed analyysid) ja
# seega tuvastatakse osalausest rohkem finiitverbe, kui oleks vaja.
# Heuristik: j2tame alles fraasi, mis algab eespool ning lisame selle otsa
# kysim2rgi (kuna pole kindel, et asjad on korras)
minWid1 = min(matchObj1[PHRASE])
minWid2 = min(matchObj2[PHRASE])
if minWid1 < minWid2:
matchObj1[OTHER_VERBS] = True
toDelete.append(j)
else:
matchObj2[OTHER_VERBS] = True
toDelete.append(i)
if removeSingleAraAndEi:
if ( len(matchObj1[PATTERN])==1 and re.match('^(ei|ära)$', matchObj1[PATTERN][0]) ):
toDelete.append(i)
if toDelete:
if len(set(toDelete)) != len(toDelete):
toDelete = list(set(toDelete)) # Eemaldame duplikaadid
toDelete = [ foundChains[i] for i in toDelete ]
for verbObj in toDelete:
foundChains.remove(verbObj) | Eemaldab yleliigsed verbiahelad: ahelad, mis katavad osaliselt v6i t2ielikult
teisi ahelaid (removeOverlapping == True), yhes6nalised 'ei' ja 'ära' ahelad (kui
removeSingleAraAndEi == True);
Yldiselt on nii, et ylekattuvaid ei tohiks palju olla, kuna fraaside laiendamisel
pyytakse alati kontrollida, et laiendus ei kattuks m6ne olemasoleva fraasiga;
Peamiselt tekivad ylekattuvused siis, kui morf analyysi on sattunud valed
finiitverbi analyysid (v6i analyysid on j22nud mitmesteks) ja seega tuvastatakse
osalausest rohkem finiitverbe, kui oleks vaja.
Heuristik: kahe ylekattuva puhul j2tame alles fraasi, mis algab eespool ning
m2rgime sellel OTHER_VERBS v22rtuseks True, mis m2rgib, et kontekstis on mingi
segadus teiste verbidega. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/mw_verbs/verbchain_detector.py#L37-L85 |
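The Estonian docstring above describes an overlap heuristic: when two chains of the same clause share a word (usually due to leftover ambiguity in finite-verb analyses), the chain that starts earlier is kept and flagged via OTHER_VERBS, and the other is dropped. The standalone sketch below reproduces that heuristic with plain dicts; the key names 'phrase', 'clause_index' and 'other_verbs' are assumed stand-ins for the PHRASE, CLAUSE_IDX and OTHER_VERBS constants.

# Standalone sketch of the overlap-removal heuristic (assumed key names).
def remove_overlapping(chains):
    to_delete = set()
    for i, c1 in enumerate(chains):
        for j in range(i + 1, len(chains)):
            c2 = chains[j]
            if c1['clause_index'] == c2['clause_index'] and \
               set(c1['phrase']) & set(c2['phrase']):
                # keep the chain that starts earlier and flag it as uncertain
                if min(c1['phrase']) < min(c2['phrase']):
                    c1['other_verbs'] = True
                    to_delete.add(j)
                else:
                    c2['other_verbs'] = True
                    to_delete.add(i)
    return [c for k, c in enumerate(chains) if k not in to_delete]

chains = [{'clause_index': 3, 'phrase': [5, 6], 'other_verbs': False},
          {'clause_index': 3, 'phrase': [6],    'other_verbs': False}]
print(remove_overlapping(chains))
# one chain remains: phrase [5, 6] with other_verbs set to True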
estnltk/estnltk | estnltk/mw_verbs/verbchain_detector.py | addGrammaticalFeatsAndRoots | def addGrammaticalFeatsAndRoots( sentence, foundChains ):
''' Täiendab leitud verbiahelaid, lisades iga ahela kylge selle s6nade lemmad (ROOT
v2ljad morf analyysist) ning morfoloogilised tunnused (POSTAG+FORM: eraldajaks
'_' ning kui on mitu varianti, siis tuuakse k6ik variandid, eraldajaks '/');
Atribuudid ROOTS ja MORPH sisaldavad tunnuste loetelusid (iga ahela liikme jaoks
yks tunnus:
Nt.
** 'püüab kodeerida' puhul tuleb MORPH väärtuseks ['V_b', 'V_da'] ning
ROOTS väärtuseks ['püüd', 'kodeeri'];
** 'on tulnud' puhul tuleb MORPH väärtuseks ['V_vad/b', 'V_nud'] ning
ROOTS väärtuseks ['ole', 'tule'];
Lisaks leiatakse ahela p6hiverbi (esimese verbi) grammatilised tunnused:
** aeg (TENSE): present, imperfect, perfect, pluperfect, past, ??
** k6neviis (MOOD): indic, imper, condit, quotat, ??
** tegumood (VOICE): personal, impersonal, ??
'''
_indicPresent = ['n','d','b','me','te','vad']
_indicImperfect = ['sin', 'sid', 's', 'sime', 'site', 'sid']
_imperatPlural = ['gem', 'ge', 'gu']
_conditPreesens = ['ksin', 'ksid', 'ks', 'ksime', 'ksite', 'ksid']
_conditPreteerium = ['nuksin', 'nuksid', 'nuks', 'nuksime', 'nuksite', 'nuksid']
for i in range(len(foundChains)):
matchObj1 = foundChains[i]
roots = []
grammFeats = []
grammPosAndForms = []
#
# 1) Leiame kogu ahela morfoloogilised tunnused ja lemmad
#
for j in range(len( matchObj1[PHRASE] )):
wid = matchObj1[PHRASE][j]
token = [token for token in sentence if token[WORD_ID]==wid][0]
analysisIDs = matchObj1[ANALYSIS_IDS][j]
analyses = [ token[ANALYSIS][k] for k in range(len( token[ANALYSIS] )) if k in analysisIDs ]
pos = set( [a[POSTAG] for a in analyses] )
form = set( [a[FORM] for a in analyses] )
root = [a[ROOT] for a in analyses][0]
grammatical = ("/".join(list(pos))) + '_' + ("/".join(list(form)))
grammPosAndForms.append( (pos, form) )
# Yhtlustame m6ningaid mustreid (st kohendame nende morf analyysi selliseks, nagu
# mustri poolt on eeldatud) ...
if root == 'ei' and len(matchObj1[PHRASE])>1:
grammatical = 'V_neg'
if matchObj1[PATTERN][j] == '&':
grammatical = 'J_'
roots.append( root )
grammFeats.append( grammatical )
matchObj1[ROOTS] = roots
matchObj1[MORPH] = grammFeats
#
# 2) Leiame eelneva põhjal ahela põhiverbi tunnused: grammatilise aja (tense),
# kõneviisi (mood), tegumoe (voice)
#
tense = "??"
mood = "??"
voice = "??"
if matchObj1[POLARITY] == 'POS':
#
# Jaatuse tunnused
#
(pos, form) = grammPosAndForms[0]
if 'V' in pos:
#
# Indikatiiv e kindel kõneviis
#
if len(form.intersection( _indicPresent )) > 0:
tense = "present"
mood = "indic"
voice = "personal"
elif len(form.intersection( _indicImperfect )) > 0:
tense = "imperfect"
mood = "indic"
voice = "personal"
elif 'takse' in form:
tense = "present"
mood = "indic"
voice = "impersonal"
elif 'ti' in form:
tense = "imperfect"
mood = "indic"
voice = "impersonal"
#
# Imperatiiv e käskiv kõneviis
#
elif 'o' in form or 'gu' in form:
tense = "present"
mood = "imper"
voice = "personal"
elif len(form.intersection( _imperatPlural )) > 0:
tense = "present"
mood = "imper"
voice = "personal"
elif 'tagu' in form:
tense = "present"
mood = "imper"
voice = "impersonal"
#
# Konditsionaal e tingiv kõneviis
#
elif len(form.intersection( _conditPreesens )) > 0:
tense = "present"
mood = "condit"
voice = "personal"
elif 'taks' in form:
tense = "present"
mood = "condit"
voice = "impersonal"
elif len(form.intersection( _conditPreteerium )) > 0:
tense = "past"
mood = "condit"
voice = "personal"
elif 'tuks' in form:
tense = "past"
mood = "condit"
voice = "impersonal"
#
# Kvotatiiv e kaudne kõneviis
#
elif 'vat' in form:
tense = "present"
mood = "quotat"
voice = "personal"
elif 'tavat' in form:
tense = "present"
mood = "quotat"
voice = "impersonal"
elif 'nuvat' in form:
tense = "past"
mood = "quotat"
voice = "personal"
elif 'tuvat' in form:
tense = "past"
mood = "quotat"
voice = "impersonal"
#
# Liitaeg: olema + nud (personaal), olema + tud (impersonaal)
#
if len(matchObj1[PATTERN]) > 1 and matchObj1[PATTERN][0] == 'ole':
# Kindla kõneviisi liitaeg
if mood == "indic" and (grammFeats[1] == "V_nud" or grammFeats[1] == "V_tud"):
if tense == "present":
# Täisminevik
tense = "perfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
elif tense == "imperfect":
# Enneminevik
tense = "pluperfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
# Tingiva ja kaudse kõneviisi liitaeg (nn üldminevik)
elif mood in ["quotat", "condit"] and tense == "present" and \
voice == "personal":
if grammFeats[1] == "V_nud":
tense = "past"
elif grammFeats[1] == "V_tud":
if mood == "quotat":
tense = "past"
voice = "impersonal"
else:
# tingiv + tud jääb esialgu lahtiseks
tense = "??"
voice = "??"
elif matchObj1[POLARITY] == 'NEG':
#
# Eituse tunnused
#
if len(matchObj1[PATTERN]) > 1 and \
(matchObj1[PATTERN][0] == 'ei' or matchObj1[PATTERN][0] == 'ega'):
(pos, form) = grammPosAndForms[1]
# Indikatiiv
if 'o' in form or 'neg o' in form:
tense = "present"
mood = "indic"
voice = "personal"
elif 'ta' in form:
tense = "present"
mood = "indic"
voice = "impersonal"
elif 'nud' in form:
tense = "imperfect"
mood = "indic"
voice = "personal"
elif 'tud' in form:
tense = "imperfect"
mood = "indic"
voice = "impersonal"
# Konditsionaal
elif 'ks' in form:
tense = "present"
mood = "condit"
voice = "personal"
elif 'taks' in form:
tense = "present"
mood = "condit"
voice = "impersonal"
elif 'nuks' in form:
tense = "past"
mood = "condit"
voice = "personal"
elif 'tuks' in form:
tense = "past"
mood = "condit"
voice = "impersonal"
# Kvotatiiv
elif 'vat' in form:
tense = "present"
mood = "quotat"
voice = "personal"
elif 'tavat' in form:
tense = "present"
mood = "quotat"
voice = "impersonal"
elif 'nuvat' in form:
tense = "past"
mood = "quotat"
voice = "personal"
elif 'tuvat' in form:
tense = "past"
mood = "quotat"
voice = "impersonal"
#
# Liitaeg: ei + olema + nud (personaal), ei + olema + tud (impersonaal)
#
if len(matchObj1[PATTERN]) > 2 and matchObj1[PATTERN][1] == 'ole':
# Kindla kõneviisi liitaeg
if mood == "indic" and (grammFeats[2] == "V_nud" or grammFeats[2] == "V_tud"):
if tense == "present":
# Täisminevik
tense = "perfect"
if grammFeats[2] == "V_tud":
voice = "impersonal"
elif tense == "imperfect":
# Enneminevik
tense = "pluperfect"
if grammFeats[2] == "V_tud":
voice = "impersonal"
# Tingiva ja kaudse kõneviisi liitaeg (nn üldminevik)
elif mood in ["quotat", "condit"] and tense == "present" and \
voice == "personal":
if grammFeats[2] == "V_nud":
tense = "past"
elif grammFeats[2] == "V_tud":
if mood == "quotat":
tense = "past"
voice = "impersonal"
else:
# tingiv + tud jääb esialgu lahtiseks
tense = "??"
voice = "??"
elif len(matchObj1[PATTERN]) > 1 and matchObj1[PATTERN][0] == 'ära':
(pos, form) = grammPosAndForms[1]
# Imperatiiv
if 'tagu' in form:
tense = "present"
mood = "imper"
voice = "impersonal"
else:
tense = "present"
mood = "imper"
voice = "personal"
elif matchObj1[PATTERN][0] == 'pole':
(pos, form) = grammPosAndForms[0]
# Indikatiiv
if 'neg o' in form:
tense = "present"
mood = "indic"
voice = "personal"
elif 'neg nud' in form:
tense = "imperfect"
mood = "indic"
voice = "personal"
elif 'neg tud' in form:
tense = "imperfect"
mood = "indic"
voice = "impersonal"
# Konditsionaal
elif 'neg ks' in form:
tense = "present"
mood = "condit"
voice = "personal"
elif 'neg nuks' in form:
tense = "past"
mood = "condit"
voice = "personal"
# Kvotatiiv
elif 'neg vat' in form:
tense = "present"
mood = "quotat"
voice = "personal"
#
# Liitaeg: pole + nud (personaal), pole + tud (impersonaal)
#
if len(matchObj1[PATTERN]) > 1:
# Kindla kõneviisi liitaeg
if mood == "indic" and (grammFeats[1] == "V_nud" or grammFeats[1] == "V_tud"):
if tense == "present":
# Täisminevik
tense = "perfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
elif tense == "imperfect":
# Enneminevik
tense = "pluperfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
# Tingiva ja kaudse kõneviisi liitaeg (nn üldminevik)
elif mood in ["quotat", "condit"] and tense == "present" and \
voice == "personal":
if grammFeats[1] == "V_nud":
tense = "past"
elif grammFeats[1] == "V_tud":
if mood == "quotat":
tense = "past"
voice = "impersonal"
else:
# tingiv + tud jääb esialgu lahtiseks
tense = "??"
voice = "??"
matchObj1[MOOD] = mood
matchObj1[TENSE] = tense
matchObj1[VOICE] = voice | python | def addGrammaticalFeatsAndRoots( sentence, foundChains ):
''' Täiendab leitud verbiahelaid, lisades iga ahela kylge selle s6nade lemmad (ROOT
v2ljad morf analyysist) ning morfoloogilised tunnused (POSTAG+FORM: eraldajaks
'_' ning kui on mitu varianti, siis tuuakse k6ik variandid, eraldajaks '/');
Atribuudid ROOTS ja MORPH sisaldavad tunnuste loetelusid (iga ahela liikme jaoks
yks tunnus:
Nt.
** 'püüab kodeerida' puhul tuleb MORPH väärtuseks ['V_b', 'V_da'] ning
ROOTS väärtuseks ['püüd', 'kodeeri'];
** 'on tulnud' puhul tuleb MORPH väärtuseks ['V_vad/b', 'V_nud'] ning
ROOTS väärtuseks ['ole', 'tule'];
Lisaks leiatakse ahela p6hiverbi (esimese verbi) grammatilised tunnused:
** aeg (TENSE): present, imperfect, perfect, pluperfect, past, ??
** k6neviis (MOOD): indic, imper, condit, quotat, ??
** tegumood (VOICE): personal, impersonal, ??
'''
_indicPresent = ['n','d','b','me','te','vad']
_indicImperfect = ['sin', 'sid', 's', 'sime', 'site', 'sid']
_imperatPlural = ['gem', 'ge', 'gu']
_conditPreesens = ['ksin', 'ksid', 'ks', 'ksime', 'ksite', 'ksid']
_conditPreteerium = ['nuksin', 'nuksid', 'nuks', 'nuksime', 'nuksite', 'nuksid']
for i in range(len(foundChains)):
matchObj1 = foundChains[i]
roots = []
grammFeats = []
grammPosAndForms = []
#
# 1) Leiame kogu ahela morfoloogilised tunnused ja lemmad
#
for j in range(len( matchObj1[PHRASE] )):
wid = matchObj1[PHRASE][j]
token = [token for token in sentence if token[WORD_ID]==wid][0]
analysisIDs = matchObj1[ANALYSIS_IDS][j]
analyses = [ token[ANALYSIS][k] for k in range(len( token[ANALYSIS] )) if k in analysisIDs ]
pos = set( [a[POSTAG] for a in analyses] )
form = set( [a[FORM] for a in analyses] )
root = [a[ROOT] for a in analyses][0]
grammatical = ("/".join(list(pos))) + '_' + ("/".join(list(form)))
grammPosAndForms.append( (pos, form) )
# Yhtlustame m6ningaid mustreid (st kohendame nende morf analyysi selliseks, nagu
# mustri poolt on eeldatud) ...
if root == 'ei' and len(matchObj1[PHRASE])>1:
grammatical = 'V_neg'
if matchObj1[PATTERN][j] == '&':
grammatical = 'J_'
roots.append( root )
grammFeats.append( grammatical )
matchObj1[ROOTS] = roots
matchObj1[MORPH] = grammFeats
#
# 2) Leiame eelneva põhjal ahela põhiverbi tunnused: grammatilise aja (tense),
# kõneviisi (mood), tegumoe (voice)
#
tense = "??"
mood = "??"
voice = "??"
if matchObj1[POLARITY] == 'POS':
#
# Jaatuse tunnused
#
(pos, form) = grammPosAndForms[0]
if 'V' in pos:
#
# Indikatiiv e kindel kõneviis
#
if len(form.intersection( _indicPresent )) > 0:
tense = "present"
mood = "indic"
voice = "personal"
elif len(form.intersection( _indicImperfect )) > 0:
tense = "imperfect"
mood = "indic"
voice = "personal"
elif 'takse' in form:
tense = "present"
mood = "indic"
voice = "impersonal"
elif 'ti' in form:
tense = "imperfect"
mood = "indic"
voice = "impersonal"
#
# Imperatiiv e käskiv kõneviis
#
elif 'o' in form or 'gu' in form:
tense = "present"
mood = "imper"
voice = "personal"
elif len(form.intersection( _imperatPlural )) > 0:
tense = "present"
mood = "imper"
voice = "personal"
elif 'tagu' in form:
tense = "present"
mood = "imper"
voice = "impersonal"
#
# Konditsionaal e tingiv kõneviis
#
elif len(form.intersection( _conditPreesens )) > 0:
tense = "present"
mood = "condit"
voice = "personal"
elif 'taks' in form:
tense = "present"
mood = "condit"
voice = "impersonal"
elif len(form.intersection( _conditPreteerium )) > 0:
tense = "past"
mood = "condit"
voice = "personal"
elif 'tuks' in form:
tense = "past"
mood = "condit"
voice = "impersonal"
#
# Kvotatiiv e kaudne kõneviis
#
elif 'vat' in form:
tense = "present"
mood = "quotat"
voice = "personal"
elif 'tavat' in form:
tense = "present"
mood = "quotat"
voice = "impersonal"
elif 'nuvat' in form:
tense = "past"
mood = "quotat"
voice = "personal"
elif 'tuvat' in form:
tense = "past"
mood = "quotat"
voice = "impersonal"
#
# Liitaeg: olema + nud (personaal), olema + tud (impersonaal)
#
if len(matchObj1[PATTERN]) > 1 and matchObj1[PATTERN][0] == 'ole':
# Kindla kõneviisi liitaeg
if mood == "indic" and (grammFeats[1] == "V_nud" or grammFeats[1] == "V_tud"):
if tense == "present":
# Täisminevik
tense = "perfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
elif tense == "imperfect":
# Enneminevik
tense = "pluperfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
# Tingiva ja kaudse kõneviisi liitaeg (nn üldminevik)
elif mood in ["quotat", "condit"] and tense == "present" and \
voice == "personal":
if grammFeats[1] == "V_nud":
tense = "past"
elif grammFeats[1] == "V_tud":
if mood == "quotat":
tense = "past"
voice = "impersonal"
else:
# tingiv + tud jääb esialgu lahtiseks
tense = "??"
voice = "??"
elif matchObj1[POLARITY] == 'NEG':
#
# Eituse tunnused
#
if len(matchObj1[PATTERN]) > 1 and \
(matchObj1[PATTERN][0] == 'ei' or matchObj1[PATTERN][0] == 'ega'):
(pos, form) = grammPosAndForms[1]
# Indikatiiv
if 'o' in form or 'neg o' in form:
tense = "present"
mood = "indic"
voice = "personal"
elif 'ta' in form:
tense = "present"
mood = "indic"
voice = "impersonal"
elif 'nud' in form:
tense = "imperfect"
mood = "indic"
voice = "personal"
elif 'tud' in form:
tense = "imperfect"
mood = "indic"
voice = "impersonal"
# Konditsionaal
elif 'ks' in form:
tense = "present"
mood = "condit"
voice = "personal"
elif 'taks' in form:
tense = "present"
mood = "condit"
voice = "impersonal"
elif 'nuks' in form:
tense = "past"
mood = "condit"
voice = "personal"
elif 'tuks' in form:
tense = "past"
mood = "condit"
voice = "impersonal"
# Kvotatiiv
elif 'vat' in form:
tense = "present"
mood = "quotat"
voice = "personal"
elif 'tavat' in form:
tense = "present"
mood = "quotat"
voice = "impersonal"
elif 'nuvat' in form:
tense = "past"
mood = "quotat"
voice = "personal"
elif 'tuvat' in form:
tense = "past"
mood = "quotat"
voice = "impersonal"
#
# Liitaeg: ei + olema + nud (personaal), ei + olema + tud (impersonaal)
#
if len(matchObj1[PATTERN]) > 2 and matchObj1[PATTERN][1] == 'ole':
# Kindla kõneviisi liitaeg
if mood == "indic" and (grammFeats[2] == "V_nud" or grammFeats[2] == "V_tud"):
if tense == "present":
# Täisminevik
tense = "perfect"
if grammFeats[2] == "V_tud":
voice = "impersonal"
elif tense == "imperfect":
# Enneminevik
tense = "pluperfect"
if grammFeats[2] == "V_tud":
voice = "impersonal"
# Tingiva ja kaudse kõneviisi liitaeg (nn üldminevik)
elif mood in ["quotat", "condit"] and tense == "present" and \
voice == "personal":
if grammFeats[2] == "V_nud":
tense = "past"
elif grammFeats[2] == "V_tud":
if mood == "quotat":
tense = "past"
voice = "impersonal"
else:
# tingiv + tud jääb esialgu lahtiseks
tense = "??"
voice = "??"
elif len(matchObj1[PATTERN]) > 1 and matchObj1[PATTERN][0] == 'ära':
(pos, form) = grammPosAndForms[1]
# Imperatiiv
if 'tagu' in form:
tense = "present"
mood = "imper"
voice = "impersonal"
else:
tense = "present"
mood = "imper"
voice = "personal"
elif matchObj1[PATTERN][0] == 'pole':
(pos, form) = grammPosAndForms[0]
# Indikatiiv
if 'neg o' in form:
tense = "present"
mood = "indic"
voice = "personal"
elif 'neg nud' in form:
tense = "imperfect"
mood = "indic"
voice = "personal"
elif 'neg tud' in form:
tense = "imperfect"
mood = "indic"
voice = "impersonal"
# Konditsionaal
elif 'neg ks' in form:
tense = "present"
mood = "condit"
voice = "personal"
elif 'neg nuks' in form:
tense = "past"
mood = "condit"
voice = "personal"
# Kvotatiiv
elif 'neg vat' in form:
tense = "present"
mood = "quotat"
voice = "personal"
#
# Liitaeg: pole + nud (personaal), pole + tud (impersonaal)
#
if len(matchObj1[PATTERN]) > 1:
# Kindla kõneviisi liitaeg
if mood == "indic" and (grammFeats[1] == "V_nud" or grammFeats[1] == "V_tud"):
if tense == "present":
# Täisminevik
tense = "perfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
elif tense == "imperfect":
# Enneminevik
tense = "pluperfect"
if grammFeats[1] == "V_tud":
voice = "impersonal"
# Tingiva ja kaudse kõneviisi liitaeg (nn üldminevik)
elif mood in ["quotat", "condit"] and tense == "present" and \
voice == "personal":
if grammFeats[1] == "V_nud":
tense = "past"
elif grammFeats[1] == "V_tud":
if mood == "quotat":
tense = "past"
voice = "impersonal"
else:
# tingiv + tud jääb esialgu lahtiseks
tense = "??"
voice = "??"
matchObj1[MOOD] = mood
matchObj1[TENSE] = tense
            matchObj1[VOICE] = voice | Augments the detected verb chains by attaching the lemmas of their words (the ROOT
        fields from the morphological analysis) and the morphological features (POSTAG+FORM,
        joined with '_'; if there are several variants, all variants are listed, separated by '/');
        The attributes ROOTS and MORPH contain lists of features (one value for each member
        of the chain):
        E.g.
        ** for 'püüab kodeerida' the MORPH value will be ['V_b', 'V_da'] and
            the ROOTS value ['püüd', 'kodeeri'];
        ** for 'on tulnud' the MORPH value will be ['V_vad/b', 'V_nud'] and
            the ROOTS value ['ole', 'tule'];
        In addition, the grammatical features of the chain's main verb (the first verb) are determined:
        ** tense (TENSE): present, imperfect, perfect, pluperfect, past, ??
        ** mood (MOOD): indic, imper, condit, quotat, ??
        ** voice (VOICE): personal, impersonal, ?? | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/mw_verbs/verbchain_detector.py#L88-L410
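To make the tense/mood/voice rules above easier to follow, here is a small self-contained sketch (a hypothetical helper, not part of estnltk) that mirrors only the affirmative indicative branch of the logic:

# Illustrative sketch only: mirrors the affirmative indicative branch above.
_INDIC_PRESENT = {'n', 'd', 'b', 'me', 'te', 'vad'}
_INDIC_IMPERFECT = {'sin', 'sid', 's', 'sime', 'site'}

def basic_tense_mood_voice(forms):
    # forms: the set of morphological FORM strings of the chain's first verb
    forms = set(forms)
    if forms & _INDIC_PRESENT:
        return 'present', 'indic', 'personal'
    if forms & _INDIC_IMPERFECT:
        return 'imperfect', 'indic', 'personal'
    if 'takse' in forms:
        return 'present', 'indic', 'impersonal'
    if 'ti' in forms:
        return 'imperfect', 'indic', 'impersonal'
    return '??', '??', '??'

# basic_tense_mood_voice({'b'})  -> ('present', 'indic', 'personal')
# basic_tense_mood_voice({'ti'}) -> ('imperfect', 'indic', 'impersonal')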
estnltk/estnltk | estnltk/mw_verbs/verbchain_detector.py | VerbChainDetector.detectVerbChainsFromSent | def detectVerbChainsFromSent( self, sentence, **kwargs):
''' Detect verb chains from given sentence.
Parameters
----------
sentence: list of dict
A list of sentence words, each word in form of a dictionary containing
morphological analysis and clause boundary annotations (must have CLAUSE_IDX);
Keyword parameters
------------------
expand2ndTime: boolean
If True, regular verb chains (chains not ending with 'olema') are expanded twice.
(default: False)
breakOnPunctuation: boolean
If True, expansion of regular verb chains will be broken in case of intervening punctuation.
(default: False)
removeSingleAraEi: boolean
if True, verb chains consisting of a single word, 'ära' or 'ei', will be removed.
(default: True)
removeOverlapping: boolean
If True, overlapping verb chains will be removed.
(default: True)
Returns
-------
list of dict
List of detected verb chains, each verb chain has following attributes (keys):
PHRASE -- list of int : indexes pointing to elements in sentence that belong
to the chain;
PATTERN -- list of str : for each word in phrase, marks whether it is 'ega', 'ei',
'ära', 'pole', 'ole', '&' (conjunction: ja/ning/ega/või)
'verb' (verb different than 'ole') or 'nom/adv';
ANALYSIS_IDS -- list of (list of int) : for each word in phrase, points to index(es) of
morphological analyses that correspond to words
in the verb chains;
ROOTS -- list of str : for each word in phrase, lists its corresponding ROOT
value from the morphological analysis; e.g. for the verb
chain 'püüab kodeerida', the ROOT will be ['püüd',
'kodeeri'];
MORPH -- list of str : for each word in phrase, lists its part-of-speech value
and morphological form (in one string, separated by '_',
and multiple variants of the pos/form separated by '/');
e.g. for the verb chain 'on tulnud', the MORPH value
will be ['V_vad/b', 'V_nud'];
OTHER_VERBS -- bool : whether there are other verbs in the context, potentially being
part of the verb chain; if this is True, it is uncertain whether
the chain is complete or not;
POLARITY -- 'POS', 'NEG' or '??' : grammatical polarity of the verb chain; Negative
polarity indicates that the verb phrase begins
with 'ei', 'ega', 'ära' or 'pole';
TENSE -- tense of the main verb: 'present', 'imperfect', 'perfect',
'pluperfect', 'past', '??';
MOOD -- mood of the main verb: 'indic', 'imper', 'condit', 'quotat', '??';
VOICE -- voice of the main verb: 'personal', 'impersonal', '??';
'''
# 0) Parse given arguments
expand2ndTime = False
removeOverlapping = True
removeSingleAraEi = True
breakOnPunctuation = False
for argName, argVal in kwargs.items():
if argName == 'expand2ndTime':
expand2ndTime = bool(argVal)
elif argName == 'removeOverlapping':
removeOverlapping = bool(argVal)
elif argName == 'removeSingleAraEi':
removeSingleAraEi = bool(argVal)
elif argName == 'breakOnPunctuation':
breakOnPunctuation = bool(argVal)
else:
raise Exception(' Unsupported argument given: '+argName)
# 1) Preprocessing
sentence = addWordIDs( sentence )
clauses = getClausesByClauseIDs( sentence )
# 2) Extract predicate-centric verb chains within each clause
allDetectedVerbChains = []
for clauseID in clauses:
clause = clauses[clauseID]
# 2.1) Extract predicate-centric verb chains within each clause
detectedBasicChains = _extractBasicPredicateFromClause(clause, clauseID)
allDetectedVerbChains.extend( detectedBasicChains )
# 2.2) Extract 'saama' + 'tud' verb phrases (typically rare)
_expandSaamaWithTud( clause, clauseID, allDetectedVerbChains )
# 2.3) Extend 'olema' chains with 'nud/tud/mas/mata' verbs (if possible)
_expandOlemaVerbChains( clause, clauseID, allDetectedVerbChains )
# 2.4) Expand non-olema verb chains inside the clause where possible (verb+verb chains)
_expandVerbChainsBySubcat( clause, clauseID, allDetectedVerbChains, self.verbInfSubcatLexicon, False, breakOnPunctuation)
# 2.5) Determine for which verb chains the context should be clear
# (no additional verbs can be added to the phrase)
_determineVerbChainContextualAmbiguity( clause, clauseID, allDetectedVerbChains)
# 2.6) Expand non-olema verb chains inside the clause 2nd time (verb+verb+verb chains)
# (Note that while verb+verb+verb+verb+... chains are also possible, three verbs
# seems to be a critical length: longer chains are rare and thus making longer
# chains will likely lead to errors);
if expand2ndTime:
_expandVerbChainsBySubcat( clause, clauseID, allDetectedVerbChains, self.verbInfSubcatLexicon, False, breakOnPunctuation)
# 3) Extract 'ega' negations (considering the whole sentence context)
expandableEgaFound = _extractEgaNegFromSent( sentence, clauses, allDetectedVerbChains )
if expandableEgaFound:
for clauseID in clauses:
clause = clauses[clauseID]
# 3.1) Expand non-olema 'ega' verb chains inside the clause, if possible;
_expandVerbChainsBySubcat( clause, clauseID, allDetectedVerbChains, self.verbInfSubcatLexicon, False, breakOnPunctuation)
#_debugPrint(' | '+getJsonAsTextString(sentence, markTokens = [ verbObj[PHRASE] for verbObj in allDetectedVerbChains ]))
# 4) Extend chains with nom/adv + Vinf relations
if self.verbNomAdvVinfExtender:
addGrammaticalFeatsAndRoots( sentence, allDetectedVerbChains )
for clauseID in clauses:
clause = clauses[clauseID]
expansionPerformed = \
self.verbNomAdvVinfExtender.extendChainsInClause( clause, clauseID, allDetectedVerbChains )
if expansionPerformed:
_determineVerbChainContextualAmbiguity( clause, clauseID, allDetectedVerbChains)
# ) Remove redundant and overlapping verb phrases
removeRedundantVerbChains( allDetectedVerbChains, removeOverlapping = removeOverlapping, removeSingleAraAndEi = removeSingleAraEi )
# ) Add grammatical features (in the end)
addGrammaticalFeatsAndRoots( sentence, allDetectedVerbChains )
return allDetectedVerbChains | python | def detectVerbChainsFromSent( self, sentence, **kwargs):
''' Detect verb chains from given sentence.
Parameters
----------
sentence: list of dict
A list of sentence words, each word in form of a dictionary containing
morphological analysis and clause boundary annotations (must have CLAUSE_IDX);
Keyword parameters
------------------
expand2ndTime: boolean
If True, regular verb chains (chains not ending with 'olema') are expanded twice.
(default: False)
breakOnPunctuation: boolean
If True, expansion of regular verb chains will be broken in case of intervening punctuation.
(default: False)
removeSingleAraEi: boolean
if True, verb chains consisting of a single word, 'ära' or 'ei', will be removed.
(default: True)
removeOverlapping: boolean
If True, overlapping verb chains will be removed.
(default: True)
Returns
-------
list of dict
List of detected verb chains, each verb chain has following attributes (keys):
PHRASE -- list of int : indexes pointing to elements in sentence that belong
to the chain;
PATTERN -- list of str : for each word in phrase, marks whether it is 'ega', 'ei',
'ära', 'pole', 'ole', '&' (conjunction: ja/ning/ega/või)
'verb' (verb different than 'ole') or 'nom/adv';
ANALYSIS_IDS -- list of (list of int) : for each word in phrase, points to index(es) of
morphological analyses that correspond to words
in the verb chains;
ROOTS -- list of str : for each word in phrase, lists its corresponding ROOT
value from the morphological analysis; e.g. for the verb
chain 'püüab kodeerida', the ROOT will be ['püüd',
'kodeeri'];
MORPH -- list of str : for each word in phrase, lists its part-of-speech value
and morphological form (in one string, separated by '_',
and multiple variants of the pos/form separated by '/');
e.g. for the verb chain 'on tulnud', the MORPH value
will be ['V_vad/b', 'V_nud'];
OTHER_VERBS -- bool : whether there are other verbs in the context, potentially being
part of the verb chain; if this is True, it is uncertain whether
the chain is complete or not;
POLARITY -- 'POS', 'NEG' or '??' : grammatical polarity of the verb chain; Negative
polarity indicates that the verb phrase begins
with 'ei', 'ega', 'ära' or 'pole';
TENSE -- tense of the main verb: 'present', 'imperfect', 'perfect',
'pluperfect', 'past', '??';
MOOD -- mood of the main verb: 'indic', 'imper', 'condit', 'quotat', '??';
VOICE -- voice of the main verb: 'personal', 'impersonal', '??';
'''
# 0) Parse given arguments
expand2ndTime = False
removeOverlapping = True
removeSingleAraEi = True
breakOnPunctuation = False
for argName, argVal in kwargs.items():
if argName == 'expand2ndTime':
expand2ndTime = bool(argVal)
elif argName == 'removeOverlapping':
removeOverlapping = bool(argVal)
elif argName == 'removeSingleAraEi':
removeSingleAraEi = bool(argVal)
elif argName == 'breakOnPunctuation':
breakOnPunctuation = bool(argVal)
else:
raise Exception(' Unsupported argument given: '+argName)
# 1) Preprocessing
sentence = addWordIDs( sentence )
clauses = getClausesByClauseIDs( sentence )
# 2) Extract predicate-centric verb chains within each clause
allDetectedVerbChains = []
for clauseID in clauses:
clause = clauses[clauseID]
# 2.1) Extract predicate-centric verb chains within each clause
detectedBasicChains = _extractBasicPredicateFromClause(clause, clauseID)
allDetectedVerbChains.extend( detectedBasicChains )
# 2.2) Extract 'saama' + 'tud' verb phrases (typically rare)
_expandSaamaWithTud( clause, clauseID, allDetectedVerbChains )
# 2.3) Extend 'olema' chains with 'nud/tud/mas/mata' verbs (if possible)
_expandOlemaVerbChains( clause, clauseID, allDetectedVerbChains )
# 2.4) Expand non-olema verb chains inside the clause where possible (verb+verb chains)
_expandVerbChainsBySubcat( clause, clauseID, allDetectedVerbChains, self.verbInfSubcatLexicon, False, breakOnPunctuation)
# 2.5) Determine for which verb chains the context should be clear
# (no additional verbs can be added to the phrase)
_determineVerbChainContextualAmbiguity( clause, clauseID, allDetectedVerbChains)
# 2.6) Expand non-olema verb chains inside the clause 2nd time (verb+verb+verb chains)
# (Note that while verb+verb+verb+verb+... chains are also possible, three verbs
# seems to be a critical length: longer chains are rare and thus making longer
# chains will likely lead to errors);
if expand2ndTime:
_expandVerbChainsBySubcat( clause, clauseID, allDetectedVerbChains, self.verbInfSubcatLexicon, False, breakOnPunctuation)
# 3) Extract 'ega' negations (considering the whole sentence context)
expandableEgaFound = _extractEgaNegFromSent( sentence, clauses, allDetectedVerbChains )
if expandableEgaFound:
for clauseID in clauses:
clause = clauses[clauseID]
# 3.1) Expand non-olema 'ega' verb chains inside the clause, if possible;
_expandVerbChainsBySubcat( clause, clauseID, allDetectedVerbChains, self.verbInfSubcatLexicon, False, breakOnPunctuation)
#_debugPrint(' | '+getJsonAsTextString(sentence, markTokens = [ verbObj[PHRASE] for verbObj in allDetectedVerbChains ]))
# 4) Extend chains with nom/adv + Vinf relations
if self.verbNomAdvVinfExtender:
addGrammaticalFeatsAndRoots( sentence, allDetectedVerbChains )
for clauseID in clauses:
clause = clauses[clauseID]
expansionPerformed = \
self.verbNomAdvVinfExtender.extendChainsInClause( clause, clauseID, allDetectedVerbChains )
if expansionPerformed:
_determineVerbChainContextualAmbiguity( clause, clauseID, allDetectedVerbChains)
# ) Remove redundant and overlapping verb phrases
removeRedundantVerbChains( allDetectedVerbChains, removeOverlapping = removeOverlapping, removeSingleAraAndEi = removeSingleAraEi )
# ) Add grammatical features (in the end)
addGrammaticalFeatsAndRoots( sentence, allDetectedVerbChains )
return allDetectedVerbChains | Detect verb chains from given sentence.
Parameters
----------
sentence: list of dict
A list of sentence words, each word in form of a dictionary containing
morphological analysis and clause boundary annotations (must have CLAUSE_IDX);
Keyword parameters
------------------
expand2ndTime: boolean
If True, regular verb chains (chains not ending with 'olema') are expanded twice.
(default: False)
breakOnPunctuation: boolean
If True, expansion of regular verb chains will be broken in case of intervening punctuation.
(default: False)
removeSingleAraEi: boolean
if True, verb chains consisting of a single word, 'ära' or 'ei', will be removed.
(default: True)
removeOverlapping: boolean
If True, overlapping verb chains will be removed.
(default: True)
Returns
-------
list of dict
List of detected verb chains, each verb chain has following attributes (keys):
PHRASE -- list of int : indexes pointing to elements in sentence that belong
to the chain;
PATTERN -- list of str : for each word in phrase, marks whether it is 'ega', 'ei',
'ära', 'pole', 'ole', '&' (conjunction: ja/ning/ega/või)
'verb' (verb different than 'ole') or 'nom/adv';
ANALYSIS_IDS -- list of (list of int) : for each word in phrase, points to index(es) of
morphological analyses that correspond to words
in the verb chains;
ROOTS -- list of str : for each word in phrase, lists its corresponding ROOT
value from the morphological analysis; e.g. for the verb
chain 'püüab kodeerida', the ROOT will be ['püüd',
'kodeeri'];
MORPH -- list of str : for each word in phrase, lists its part-of-speech value
and morphological form (in one string, separated by '_',
and multiple variants of the pos/form separated by '/');
e.g. for the verb chain 'on tulnud', the MORPH value
will be ['V_vad/b', 'V_nud'];
OTHER_VERBS -- bool : whether there are other verbs in the context, potentially being
part of the verb chain; if this is True, it is uncertain whether
the chain is complete or not;
POLARITY -- 'POS', 'NEG' or '??' : grammatical polarity of the verb chain; Negative
polarity indicates that the verb phrase begins
with 'ei', 'ega', 'ära' or 'pole';
TENSE -- tense of the main verb: 'present', 'imperfect', 'perfect',
'pluperfect', 'past', '??';
MOOD -- mood of the main verb: 'indic', 'imper', 'condit', 'quotat', '??';
VOICE -- voice of the main verb: 'personal', 'impersonal', '??'; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/mw_verbs/verbchain_detector.py#L460-L595 |
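A minimal usage sketch for the detector; the Text preprocessing calls below (tag_clauses, divide) are assumptions about the estnltk 1.x pipeline, while the detector itself only requires a list of word dictionaries with morphological analyses and clause indices:

# Usage sketch (the preprocessing calls are assumed estnltk 1.x API and may differ):
from estnltk import Text
from estnltk.mw_verbs.verbchain_detector import VerbChainDetector

text = Text('Ta ei olnud seda raamatut veel lugenud.')
text.tag_clauses()                      # the detector needs clause boundary annotations
detector = VerbChainDetector()
for sentence_words in text.divide():    # words grouped by sentence (assumed helper)
    for chain in detector.detectVerbChainsFromSent(sentence_words):
        print(chain)                    # dict with PHRASE, ROOTS, MORPH, TENSE, MOOD, VOICE, ...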
estnltk/estnltk | estnltk/wiki/internalLink.py | findBalanced | def findBalanced(text, openDelim, closeDelim):
"""
    Assuming that text contains a properly balanced expression, with
    :param openDelim: the opening delimiters and
    :param closeDelim: the matching closing delimiters.
:return: an iterator producing pairs (start, end) of start and end
positions in text containing a balanced expression.
"""
openPat = '|'.join([re.escape(x) for x in openDelim])
# pattern for delimiters expected after each opening delimiter
afterPat = {o: re.compile(openPat+'|'+c, re.DOTALL) for o,c in zip(openDelim, closeDelim)}
stack = []
start = 0
cur = 0
end = len(text)
startSet = False
startPat = re.compile(openPat)
nextPat = startPat
while True:
next = nextPat.search(text, cur)
if not next:
return
if not startSet:
start = next.start()
startSet = True
delim = next.group(0)
if delim in openDelim:
stack.append(delim)
nextPat = afterPat[delim]
else:
opening = stack.pop()
# assert opening == openDelim[closeDelim.index(next.group(0))]
if stack:
nextPat = afterPat[stack[-1]]
else:
yield start, next.end()
nextPat = startPat
start = next.end()
startSet = False
cur = next.end() | python | def findBalanced(text, openDelim, closeDelim):
"""
    Assuming that text contains a properly balanced expression, with
    :param openDelim: the opening delimiters and
    :param closeDelim: the matching closing delimiters.
:return: an iterator producing pairs (start, end) of start and end
positions in text containing a balanced expression.
"""
openPat = '|'.join([re.escape(x) for x in openDelim])
# pattern for delimiters expected after each opening delimiter
afterPat = {o: re.compile(openPat+'|'+c, re.DOTALL) for o,c in zip(openDelim, closeDelim)}
stack = []
start = 0
cur = 0
end = len(text)
startSet = False
startPat = re.compile(openPat)
nextPat = startPat
while True:
next = nextPat.search(text, cur)
if not next:
return
if not startSet:
start = next.start()
startSet = True
delim = next.group(0)
if delim in openDelim:
stack.append(delim)
nextPat = afterPat[delim]
else:
opening = stack.pop()
# assert opening == openDelim[closeDelim.index(next.group(0))]
if stack:
nextPat = afterPat[stack[-1]]
else:
yield start, next.end()
nextPat = startPat
start = next.end()
startSet = False
            cur = next.end() | Assuming that text contains a properly balanced expression, with
    :param openDelim: the opening delimiters and
    :param closeDelim: the matching closing delimiters.
:return: an iterator producing pairs (start, end) of start and end
positions in text containing a balanced expression. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wiki/internalLink.py#L31-L70 |
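For instance, with wiki-style double brackets as the delimiter pair, the iterator yields the span of each balanced link (a usage sketch, not taken from the source):

# Usage sketch for findBalanced:
text = 'Vaata [[Tartu|linna]] artiklit ja [[Eesti]] lehte.'
for start, end in findBalanced(text, ['[['], [']]']):
    print(text[start:end])
# prints: [[Tartu|linna]]
#         [[Eesti]]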
estnltk/estnltk | estnltk/core.py | as_unicode | def as_unicode(s, encoding='utf-8'):
"""Force conversion of given string to unicode type.
Unicode is ``str`` type for Python 3.x and ``unicode`` for Python 2.x .
If the string is already in unicode, then no conversion is done and the same string is returned.
Parameters
----------
s: str or bytes (Python3), str or unicode (Python2)
The string to convert to unicode.
encoding: str
The encoding of the input string (default: utf-8)
Raises
------
ValueError
In case an input of invalid type was passed to the function.
Returns
-------
``str`` for Python3 or ``unicode`` for Python 2.
"""
if isinstance(s, six.text_type):
return s
elif isinstance(s, six.binary_type):
return s.decode(encoding)
else:
raise ValueError('Can only convert types {0} and {1}'.format(six.text_type, six.binary_type)) | python | def as_unicode(s, encoding='utf-8'):
"""Force conversion of given string to unicode type.
Unicode is ``str`` type for Python 3.x and ``unicode`` for Python 2.x .
If the string is already in unicode, then no conversion is done and the same string is returned.
Parameters
----------
s: str or bytes (Python3), str or unicode (Python2)
The string to convert to unicode.
encoding: str
The encoding of the input string (default: utf-8)
Raises
------
ValueError
In case an input of invalid type was passed to the function.
Returns
-------
``str`` for Python3 or ``unicode`` for Python 2.
"""
if isinstance(s, six.text_type):
return s
elif isinstance(s, six.binary_type):
return s.decode(encoding)
else:
raise ValueError('Can only convert types {0} and {1}'.format(six.text_type, six.binary_type)) | Force conversion of given string to unicode type.
Unicode is ``str`` type for Python 3.x and ``unicode`` for Python 2.x .
If the string is already in unicode, then no conversion is done and the same string is returned.
Parameters
----------
s: str or bytes (Python3), str or unicode (Python2)
The string to convert to unicode.
encoding: str
The encoding of the input string (default: utf-8)
Raises
------
ValueError
In case an input of invalid type was passed to the function.
Returns
-------
``str`` for Python3 or ``unicode`` for Python 2. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/core.py#L141-L168 |
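A short usage sketch of the conversion:

# Usage sketch:
as_unicode(b'p\xc3\xb5der')          # UTF-8 bytes decoded      -> 'põder'
as_unicode('põder')                  # already unicode, returned unchanged
as_unicode(b'p\xf5der', 'latin-1')   # explicit source encoding -> 'põder'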
estnltk/estnltk | estnltk/core.py | as_binary | def as_binary(s, encoding='utf-8'):
"""Force conversion of given string to binary type.
Binary is ``bytes`` type for Python 3.x and ``str`` for Python 2.x .
If the string is already in binary, then no conversion is done and the same string is returned
and ``encoding`` argument is ignored.
Parameters
----------
s: str or bytes (Python3), str or unicode (Python2)
The string to convert to binary.
encoding: str
The encoding of the resulting binary string (default: utf-8)
Raises
------
ValueError
In case an input of invalid type was passed to the function.
Returns
-------
``bytes`` for Python3 or ``str`` for Python 2.
"""
if isinstance(s, six.text_type):
return s.encode(encoding)
elif isinstance(s, six.binary_type):
# make sure the binary is in required encoding
return s.decode(encoding).encode(encoding)
else:
raise ValueError('Can only convert types {0} and {1}'.format(six.text_type, six.binary_type)) | python | def as_binary(s, encoding='utf-8'):
"""Force conversion of given string to binary type.
Binary is ``bytes`` type for Python 3.x and ``str`` for Python 2.x .
If the string is already in binary, then no conversion is done and the same string is returned
and ``encoding`` argument is ignored.
Parameters
----------
s: str or bytes (Python3), str or unicode (Python2)
The string to convert to binary.
encoding: str
The encoding of the resulting binary string (default: utf-8)
Raises
------
ValueError
In case an input of invalid type was passed to the function.
Returns
-------
``bytes`` for Python3 or ``str`` for Python 2.
"""
if isinstance(s, six.text_type):
return s.encode(encoding)
elif isinstance(s, six.binary_type):
# make sure the binary is in required encoding
return s.decode(encoding).encode(encoding)
else:
raise ValueError('Can only convert types {0} and {1}'.format(six.text_type, six.binary_type)) | Force conversion of given string to binary type.
Binary is ``bytes`` type for Python 3.x and ``str`` for Python 2.x .
If the string is already in binary, then no conversion is done and the same string is returned
and ``encoding`` argument is ignored.
Parameters
----------
s: str or bytes (Python3), str or unicode (Python2)
The string to convert to binary.
encoding: str
The encoding of the resulting binary string (default: utf-8)
Raises
------
ValueError
In case an input of invalid type was passed to the function.
Returns
-------
``bytes`` for Python3 or ``str`` for Python 2. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/core.py#L171-L200 |
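And the mirror-image sketch for the binary direction:

# Usage sketch:
as_binary('põder')             # encoded with UTF-8 by default -> b'p\xc3\xb5der'
as_binary(b'p\xc3\xb5der')     # already UTF-8 binary, returned unchanged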
estnltk/estnltk | estnltk/core.py | get_filenames | def get_filenames(root, prefix=u'', suffix=u''):
"""Function for listing filenames with given prefix and suffix in the root directory.
Parameters
----------
prefix: str
The prefix of the required files.
suffix: str
The suffix of the required files
Returns
-------
list of str
List of filenames matching the prefix and suffix criteria.
"""
return [fnm for fnm in os.listdir(root) if fnm.startswith(prefix) and fnm.endswith(suffix)] | python | def get_filenames(root, prefix=u'', suffix=u''):
"""Function for listing filenames with given prefix and suffix in the root directory.
Parameters
----------
prefix: str
The prefix of the required files.
suffix: str
The suffix of the required files
Returns
-------
list of str
List of filenames matching the prefix and suffix criteria.
"""
return [fnm for fnm in os.listdir(root) if fnm.startswith(prefix) and fnm.endswith(suffix)] | Function for listing filenames with given prefix and suffix in the root directory.
Parameters
----------
prefix: str
The prefix of the required files.
suffix: str
The suffix of the required files
Returns
-------
list of str
List of filenames matching the prefix and suffix criteria. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/core.py#L203-L219 |
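For example (the directory path is hypothetical):

# Usage sketch: list all JSON files of a corpus directory.
json_files = get_filenames('/data/koondkorpus', suffix='.json')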
estnltk/estnltk | estnltk/taggers/event_tagger.py | KeywordTagger.tag | def tag(self, text):
"""Retrieves list of keywords in text.
Parameters
----------
text: Text
The text to search for events.
Returns
-------
        list of events sorted by start, end
"""
if self.search_method == 'ahocorasick':
events = self._find_keywords_ahocorasick(text.text)
elif self.search_method == 'naive':
events = self._find_keywords_naive(text.text)
events = self._resolve_conflicts(events)
if self.mapping:
for item in events:
item['type'] = self.map[
text.text[item['start']:item['end']]
]
if self.return_layer:
return events
else:
text[self.layer_name] = events | python | def tag(self, text):
"""Retrieves list of keywords in text.
Parameters
----------
text: Text
The text to search for events.
Returns
-------
        list of events sorted by start, end
"""
if self.search_method == 'ahocorasick':
events = self._find_keywords_ahocorasick(text.text)
elif self.search_method == 'naive':
events = self._find_keywords_naive(text.text)
events = self._resolve_conflicts(events)
if self.mapping:
for item in events:
item['type'] = self.map[
text.text[item['start']:item['end']]
]
if self.return_layer:
return events
else:
text[self.layer_name] = events | Retrieves list of keywords in text.
Parameters
----------
text: Text
The text to search for events.
Returns
-------
        list of events sorted by start, end | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/taggers/event_tagger.py#L129-L156
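A hypothetical usage sketch; the import path and constructor arguments are inferred from the attributes this method reads (self.search_method, self.map, self.return_layer, self.layer_name) and are not guaranteed to match the real signature:

# Hypothetical sketch -- import path and constructor arguments are assumptions:
from estnltk import Text
from estnltk.taggers import KeywordTagger

tagger = KeywordTagger(keywords=['põder', 'karu'], return_layer=True)
matches = tagger.tag(Text('Metsas elavad põder ja karu.'))
# each match is a dict with at least 'start' and 'end' offsets into the raw text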
estnltk/estnltk | estnltk/taggers/event_tagger.py | RegexTagger.tag | def tag(self, text):
"""Retrieves list of regex_matches in text.
Parameters
----------
text: Text
The estnltk text object to search for events.
Returns
-------
list of matches
"""
matches = self._match(text.text)
matches = self._resolve_conflicts(matches)
if self.return_layer:
return matches
else:
text[self.layer_name] = matches | python | def tag(self, text):
"""Retrieves list of regex_matches in text.
Parameters
----------
text: Text
The estnltk text object to search for events.
Returns
-------
list of matches
"""
matches = self._match(text.text)
matches = self._resolve_conflicts(matches)
if self.return_layer:
return matches
else:
text[self.layer_name] = matches | Retrieves list of regex_matches in text.
Parameters
----------
text: Text
The estnltk text object to search for events.
Returns
-------
list of matches | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/taggers/event_tagger.py#L201-L219 |
estnltk/estnltk | estnltk/taggers/event_tagger.py | EventTagger.tag | def tag(self, text):
"""Retrieves list of events in the text.
Parameters
----------
text: Text
The text to search for events.
Returns
-------
list of events sorted by start, end
"""
if self.search_method == 'ahocorasick':
events = self._find_events_ahocorasick(text.text)
elif self.search_method == 'naive':
events = self._find_events_naive(text.text)
events = self._resolve_conflicts(events)
self._event_intervals(events, text)
if self.return_layer:
return events
else:
text[self.layer_name] = events | python | def tag(self, text):
"""Retrieves list of events in the text.
Parameters
----------
text: Text
The text to search for events.
Returns
-------
list of events sorted by start, end
"""
if self.search_method == 'ahocorasick':
events = self._find_events_ahocorasick(text.text)
elif self.search_method == 'naive':
events = self._find_events_naive(text.text)
events = self._resolve_conflicts(events)
self._event_intervals(events, text)
if self.return_layer:
return events
else:
text[self.layer_name] = events | Retrieves list of events in the text.
Parameters
----------
text: Text
The text to search for events.
Returns
-------
list of events sorted by start, end | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/taggers/event_tagger.py#L433-L457 |
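A hedged usage sketch for the event tagger; the vocabulary format (a 'term' plus an optional 'type' per entry) and the constructor signature are assumptions based on the attributes used above:

# Hypothetical sketch -- vocabulary format and constructor signature are assumptions:
from estnltk import Text
from estnltk.taggers import EventTagger

vocabulary = [{'term': 'kõrge palavik', 'type': 'sümptom'},
              {'term': 'peavalu', 'type': 'sümptom'}]
tagger = EventTagger(vocabulary, search_method='naive', return_layer=True)
events = tagger.tag(Text('Patsiendil on kõrge palavik ja peavalu.'))
# events come back sorted by (start, end), with interval info added by _event_intervals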
estnltk/estnltk | estnltk/taggers/adjective_phrase_tagger/adj_phrase_tagger.py | AdjectivePhraseTagger.__extract_lemmas | def __extract_lemmas(self, doc, m, phrase):
"""
    :param doc: the document from which the match was found
    :param m: the found match
    :param phrase: name of the phrase
    :return: list of the lemmas in the match
"""
ph_start = m['start']
ph_end = m['end']
start_index = None
for ind, word in enumerate(doc['words']):
if word['start'] == ph_start:
start_index = ind
break
end_index = None
for ind, word in enumerate(doc['words']):
if word['end'] == ph_end:
end_index = ind
break
if start_index is not None and end_index is not None:
lem = []
for i in doc['words'][start_index:end_index + 1]:
word_lem = []
for idx, j in enumerate(i['analysis']):
if i['analysis'][idx]['partofspeech'] in ['A', 'D', 'C', 'J']:
if i['analysis'][idx]['lemma'] not in word_lem:
word_lem.append(i['analysis'][idx]['lemma'])
word_lem_str = '|'.join(word_lem)
lem.append(word_lem_str)
else:
raise Exception('Something went really wrong')
return lem | python | def __extract_lemmas(self, doc, m, phrase):
"""
    :param doc: the document from which the match was found
    :param m: the found match
    :param phrase: name of the phrase
    :return: list of the lemmas in the match
"""
ph_start = m['start']
ph_end = m['end']
start_index = None
for ind, word in enumerate(doc['words']):
if word['start'] == ph_start:
start_index = ind
break
end_index = None
for ind, word in enumerate(doc['words']):
if word['end'] == ph_end:
end_index = ind
break
if start_index is not None and end_index is not None:
lem = []
for i in doc['words'][start_index:end_index + 1]:
word_lem = []
for idx, j in enumerate(i['analysis']):
if i['analysis'][idx]['partofspeech'] in ['A', 'D', 'C', 'J']:
if i['analysis'][idx]['lemma'] not in word_lem:
word_lem.append(i['analysis'][idx]['lemma'])
word_lem_str = '|'.join(word_lem)
lem.append(word_lem_str)
else:
raise Exception('Something went really wrong')
        return lem | :param doc: the document from which the match was found
    :param m: the found match
    :param phrase: name of the phrase
    :return: list of the lemmas in the match | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/taggers/adjective_phrase_tagger/adj_phrase_tagger.py#L23-L58
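__extract_lemmas is an internal helper; at the level of the public tagger a usage sketch looks roughly like this (the import path, the return_layer flag and the exact output keys are assumptions):

# Hypothetical sketch of the surrounding tagger's public interface:
from estnltk import Text
from estnltk.taggers import AdjectivePhraseTagger

tagger = AdjectivePhraseTagger(return_layer=True)
phrases = tagger.tag(Text('Ta oli väga tubli ja erakordselt sõbralik.'))
# each detected phrase carries its lemmas, extracted as in __extract_lemmas above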
estnltk/estnltk | estnltk/corpus.py | yield_json_corpus | def yield_json_corpus(fnm):
"""Function to read a JSON corpus from a file.
A JSON corpus contains one document per line, encoded in JSON.
Each line is yielded after it is read.
Parameters
----------
fnm: str
The filename of the corpus.
Returns
-------
generator of Text
"""
with codecs.open(fnm, 'rb', 'ascii') as f:
line = f.readline()
while line != '':
yield Text(json.loads(line))
line = f.readline() | python | def yield_json_corpus(fnm):
"""Function to read a JSON corpus from a file.
A JSON corpus contains one document per line, encoded in JSON.
Each line is yielded after it is read.
Parameters
----------
fnm: str
The filename of the corpus.
Returns
-------
generator of Text
"""
with codecs.open(fnm, 'rb', 'ascii') as f:
line = f.readline()
while line != '':
yield Text(json.loads(line))
line = f.readline() | Function to read a JSON corpus from a file.
A JSON corpus contains one document per line, encoded in JSON.
Each line is yielded after it is read.
Parameters
----------
fnm: str
The filename of the corpus.
Returns
-------
generator of Text | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/corpus.py#L10-L28 |
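A usage sketch (the corpus path is hypothetical):

# Usage sketch: stream documents from a one-JSON-per-line corpus file.
for doc in yield_json_corpus('/data/corpus.jsonl'):
    print(doc.text[:40])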
estnltk/estnltk | estnltk/corpus.py | write_json_corpus | def write_json_corpus(documents, fnm):
"""Write a lisst of Text instances as JSON corpus on disk.
A JSON corpus contains one document per line, encoded in JSON.
Parameters
----------
documents: iterable of estnltk.text.Text
The documents of the corpus
fnm: str
The path to save the corpus.
"""
with codecs.open(fnm, 'wb', 'ascii') as f:
for document in documents:
f.write(json.dumps(document) + '\n')
return documents | python | def write_json_corpus(documents, fnm):
"""Write a lisst of Text instances as JSON corpus on disk.
A JSON corpus contains one document per line, encoded in JSON.
Parameters
----------
documents: iterable of estnltk.text.Text
The documents of the corpus
fnm: str
The path to save the corpus.
"""
with codecs.open(fnm, 'wb', 'ascii') as f:
for document in documents:
f.write(json.dumps(document) + '\n')
        return documents | Write a list of Text instances as JSON corpus on disk.
A JSON corpus contains one document per line, encoded in JSON.
Parameters
----------
documents: iterable of estnltk.text.Text
The documents of the corpus
fnm: str
The path to save the corpus. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/corpus.py#L47-L61 |
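A round-trip sketch combining this writer with yield_json_corpus above (the paths are hypothetical):

# Usage sketch:
from estnltk import Text
docs = [Text('Esimene dokument.'), Text('Teine dokument.')]
write_json_corpus(docs, '/tmp/corpus.jsonl')
docs_again = list(yield_json_corpus('/tmp/corpus.jsonl'))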
estnltk/estnltk | estnltk/corpus.py | read_document | def read_document(fnm):
"""Read a document that is stored in a text file as JSON.
Parameters
----------
fnm: str
The path of the document.
Returns
-------
Text
"""
with codecs.open(fnm, 'rb', 'ascii') as f:
return Text(json.loads(f.read())) | python | def read_document(fnm):
"""Read a document that is stored in a text file as JSON.
Parameters
----------
fnm: str
The path of the document.
Returns
-------
Text
"""
with codecs.open(fnm, 'rb', 'ascii') as f:
return Text(json.loads(f.read())) | Read a document that is stored in a text file as JSON.
Parameters
----------
fnm: str
The path of the document.
Returns
-------
Text | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/corpus.py#L64-L77 |
estnltk/estnltk | estnltk/corpus.py | write_document | def write_document(doc, fnm):
"""Write a Text document to file.
Parameters
----------
doc: Text
The document to save.
fnm: str
The filename to save the document
"""
with codecs.open(fnm, 'wb', 'ascii') as f:
f.write(json.dumps(doc, indent=2)) | python | def write_document(doc, fnm):
"""Write a Text document to file.
Parameters
----------
doc: Text
The document to save.
fnm: str
The filename to save the document
"""
with codecs.open(fnm, 'wb', 'ascii') as f:
f.write(json.dumps(doc, indent=2)) | Write a Text document to file.
Parameters
----------
doc: Text
The document to save.
fnm: str
The filename to save the document | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/corpus.py#L80-L91 |
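A round-trip sketch for a single document, pairing write_document with read_document above (the path is hypothetical):

# Usage sketch:
from estnltk import Text
doc = Text('Üks dokument.')
write_document(doc, '/tmp/dokument.json')
same_doc = read_document('/tmp/dokument.json')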
estnltk/estnltk | estnltk/wordnet/eurown.py | addRelation | def addRelation(sourceSynset,relationName,targetSynset):
"""
Adds relation with name <relationName> to
<targetSynset>.
"""
if not isinstance(sourceSynset, Synset):
raise TypeError("sourceSynset not Synset instance")
elif not isinstance(targetSynset, Synset):
raise TypeError("targetSynset not Synset instance")
elif relationName not in RELATION_NAMES:
raise TypeError("relationName not in RELATION_NAMES")
else:
sourceSynset.addRelation(
Relation(relationName,targetSynset)
)
return sourceSynset | python | def addRelation(sourceSynset,relationName,targetSynset):
"""
Adds relation with name <relationName> to
<targetSynset>.
"""
if not isinstance(sourceSynset, Synset):
raise TypeError("sourceSynset not Synset instance")
elif not isinstance(targetSynset, Synset):
raise TypeError("targetSynset not Synset instance")
elif relationName not in RELATION_NAMES:
raise TypeError("relationName not in RELATION_NAMES")
else:
sourceSynset.addRelation(
Relation(relationName,targetSynset)
)
return sourceSynset | Adds relation with name <relationName> to
<targetSynset>. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2399-L2415 |
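A usage sketch; the synsets are built with the attribute-assignment pattern the parser itself uses, and 'has_hyponym' is assumed to be one of the names in RELATION_NAMES:

# Usage sketch (the relation name is an assumption about RELATION_NAMES):
animal = Synset()
animal.number = 1
dog = Synset()
dog.number = 2
addRelation(animal, 'has_hyponym', dog)   # appends Relation('has_hyponym', dog) to animal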
estnltk/estnltk | estnltk/wordnet/eurown.py | _TypedList.polarisText | def polarisText():
"""polarisText part of _TypedList objects
"""
def fget(self):
_out = ''
_n = '\n'
if len(self):
if self.parent:
_out = '%s%s%s' % (_out, PolarisText(
*self.parent).out,_n)
_out = _out + _n.join(
map(lambda x: x.polarisText,
self)
)
else:
_out = ''
return _out
return locals() | python | def polarisText():
"""polarisText part of _TypedList objects
"""
def fget(self):
_out = ''
_n = '\n'
if len(self):
if self.parent:
_out = '%s%s%s' % (_out, PolarisText(
*self.parent).out,_n)
_out = _out + _n.join(
map(lambda x: x.polarisText,
self)
)
else:
_out = ''
return _out
return locals() | polarisText part of _TypedList objects | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L97-L115 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Relation.addFeature | def addFeature(self, feature):
'''Appends Feature'''
if isinstance(feature, Feature):
self.features.append(feature)
else:
raise TypeError(
'feature Type should be Feature, not %s' % type(feature)) | python | def addFeature(self, feature):
'''Appends Feature'''
if isinstance(feature, Feature):
self.features.append(feature)
else:
raise TypeError(
'feature Type should be Feature, not %s' % type(feature)) | Appends Feature | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L231-L238 |
estnltk/estnltk | estnltk/wordnet/eurown.py | External_Info.addSourceId | def addSourceId(self, value):
'''Adds SourceId to External_Info
'''
if isinstance(value, Source_Id):
self.source_ids.append(value)
else:
        raise TypeError(
            'source_id Type should be Source_Id, not %s' % type(value)) | python | def addSourceId(self, value):
'''Adds SourceId to External_Info
'''
if isinstance(value, Source_Id):
self.source_ids.append(value)
else:
        raise TypeError(
            'source_id Type should be Source_Id, not %s' % type(value)) | Adds SourceId to External_Info | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L815-L822
estnltk/estnltk | estnltk/wordnet/eurown.py | External_Info.addCorpusId | def addCorpusId(self, value):
    '''Adds CorpusId to External_Info
'''
if isinstance(value, Corpus_Id):
self.corpus_ids.append(value)
else:
        raise TypeError(
            'corpus_id Type should be Corpus_Id, not %s' % type(value)) | python | def addCorpusId(self, value):
    '''Adds CorpusId to External_Info
'''
if isinstance(value, Corpus_Id):
self.corpus_ids.append(value)
else:
        raise TypeError(
            'corpus_id Type should be Corpus_Id, not %s' % type(value)) | Adds CorpusId to External_Info | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L823-L830
estnltk/estnltk | estnltk/wordnet/eurown.py | Parser.parse_line | def parse_line(self,iStr):
"""Parses ewn file line
"""
self.levelNumber = None
self.DRN = None
self.fieldTag = None
self.fieldValue = None
self.noQuotes = None
if iStr and not(iStr.strip().startswith('#')):
iList = iStr.strip().split(' ')
self.levelNumber = int(iList.pop(0))
if iList[0].startswith('@') and self.levelNumber != 3:
self.DRN = int(iList.pop(0).strip('@'))
else:
self.DRN = None
self.fieldTag = iList.pop(0)
if iList and (
iList[0].startswith('"') or
iList[0].startswith('@')
):
fv = ' '.join(iList)
self.fieldValue = fv[1:-1]
elif iList:
if len(iList) == 1:
self.fieldValue = iList.pop(0)
else:
self.fieldValue = ' '.join(iList)
try:
self.fieldValue = int(self.fieldValue)
except ValueError:
self.noQuotes = True | python | def parse_line(self,iStr):
"""Parses ewn file line
"""
self.levelNumber = None
self.DRN = None
self.fieldTag = None
self.fieldValue = None
self.noQuotes = None
if iStr and not(iStr.strip().startswith('#')):
iList = iStr.strip().split(' ')
self.levelNumber = int(iList.pop(0))
if iList[0].startswith('@') and self.levelNumber != 3:
self.DRN = int(iList.pop(0).strip('@'))
else:
self.DRN = None
self.fieldTag = iList.pop(0)
if iList and (
iList[0].startswith('"') or
iList[0].startswith('@')
):
fv = ' '.join(iList)
self.fieldValue = fv[1:-1]
elif iList:
if len(iList) == 1:
self.fieldValue = iList.pop(0)
else:
self.fieldValue = ' '.join(iList)
try:
self.fieldValue = int(self.fieldValue)
except ValueError:
self.noQuotes = True | Parses ewn file line | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1126-L1156 |
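A sketch of what parse_line extracts from one line; the example lines are made up but follow the level-number / @DRN@ / FIELD_TAG / value layout the method expects, and it is assumed that Parser can be constructed without an input file:

# Sketch (example lines are hypothetical; no-argument Parser() construction is assumed):
p = Parser()
p.parse_line('0 @1234@ WORD_MEANING')
print(p.levelNumber, p.DRN, p.fieldTag)          # -> 0 1234 WORD_MEANING
p.parse_line('2 LITERAL "koer"')
print(p.levelNumber, p.fieldTag, p.fieldValue)   # -> 2 LITERAL koer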
estnltk/estnltk | estnltk/wordnet/eurown.py | Parser.parse_synset | def parse_synset(self, offset=None, debug=False):
"""Parses Synset from file
"""
if False:
pass
else:
# WORD_INSTANCE
def _word_instance():
_synset(True)
# WORD_MEANING
def _synset(pn=False):
if not pn:
self.synset = Synset()
self.pn = False
else:
self.synset = WordInstance()
self.pn = True
if self.DRN:
self.synset.number = self.DRN
self.targetType = None
def _variants():
self.synset.variants = Variants()
def _literal():
a = Variant()
self.synset.variants.append(a)
self.synset.variants[-1].literal = self.fieldValue
def _target_literal():
self.target_synset.variants.append(Variant())
self.target_synset.variants[-1].literal = self.fieldValue
def _sense():
self.synset.variants[-1].sense = self.fieldValue
def _status():
self.noQuotes = True
try:
self.synset.variants[-1].status = as_unicode(self.fieldValue)
except:
self.synset.variants[-1].status = as_unicode(str(self.fieldValue))
self.noQuotes = False
def _target_sense():
self.target_synset.variants[-1].sense = self.fieldValue
if self.targetType == 'internal':
self.synset.internalLinks[
-1].target_concept = self.target_synset
elif self.targetType == 'ili':
self.synset.eqLinks[-1].target_concept = self.target_synset
elif self.targetType == 'pv':
self.synset.propertyValues[-1].value = self.target_synset
else:
print ('BOOOOOOOOO!!') # Error TODO
def _gloss():
self.synset.variants[-1].gloss = self.fieldValue
self.synset.definition = self.fieldValue # ADDED BY KOM
def _translations():
self.synset.variants[-1].translations = Translations()
def _translation():
self.synset.variants[-1].translations.append(
Translation(
language=self.fieldValue.split(':')[0],
translation_value = self.fieldValue.split(':')[1])
)
def _examples():
self.synset.variants[-1].examples = Examples()
def _usage_labels():
self.synset.variants[-1].usage_labels = Usage_Labels()
def _external_info():
self.synset.variants[-1].externalInfo = External_Info()
def _example():
self.synset.variants[-1].examples.append(
Example(self.fieldValue)
)
def _usage_label():
self.synset.variants[
-1].usage_labels.append(
Usage_Label(name=self.fieldValue)
)
def _usage_label_value():
self.synset.variants[
-1].usage_labels[-1].usage_label_value = self.fieldValue
def _source_id():
if self.targetType == 'internal':
self.synset.internalLinks[-1].source_id = self.fieldValue
# self.synset.internalLinks[-1].source_ids.append(
# Relation_Source_Id(number=self.fieldValue))
elif self.targetType == 'ili':
self.synset.eqLinks[-1].source_id = self.fieldValue
# self.synset.eqLinks[-1].source_ids.append(
# Relation_Source_Id(number=self.fieldValue))
else:
if self.synset.variants[-1].external_info:
self.synset.variants[
-1].external_info.source_ids.append(
Source_Id(number=self.fieldValue)
)
else:
self.synset.variants[-1].external_info = External_Info()
self.synset.variants[
-1].external_info.source_ids.append(
Source_Id(number=self.fieldValue)
)
def _corpus_id():
if self.targetType == 'internal': # not needed
self.synset.internalLinks[-1].corpus_ids.append(
Relation_Corpus_Id(number=self.fieldValue))
else:
if self.synset.variants[-1].external_info:
self.synset.variants[
-1].external_info.corpus_ids.append(
Corpus_Id(number=self.fieldValue)
)
else:
self.synset.variants[-1].external_info = External_Info()
self.synset.variants[
-1].external_info.corpus_ids.append(
Corpus_Id(number=self.fieldValue)
)
def _frequency():
self.synset.variants[
-1].external_info.corpus_ids[-1].frequency = self.fieldValue
def _text_key():
self.synset.variants[
-1].external_info.source_ids[-1].text_key = self.fieldValue
def _number_key():
self.synset.variants[
-1].external_info.source_ids[
-1].number_key = self.fieldValue
def _pos():
self.synset.pos = self.fieldValue
# INTERNAL_LINKS
def _target_concept():
self.target_synset = Synset()
self.target_synset.variants = Variants()
if self.levelNumber == 3: # and self.fieldValue:
self.target_synset.number = int(self.fieldValue or 0)
def _target_pos():
self.target_synset.pos = self.fieldValue
def _internal_links():
self.synset.internalLinks = InternalLinks()
self.targetType = 'internal'
def _relation():
if self.targetType == 'internal':
self.synset.internalLinks.append(Relation())
self.synset.internalLinks[-1].name = self.fieldValue
elif self.targetType == 'ili':
self.synset.eqLinks.append(EqLink())
self.synset.eqLinks[-1].name = self.fieldValue
else:
print ('BOOOOOOOOO!!') # Error TODO
def _features():
if self.targetType == 'internal':
self.synset.internalLinks[-1].features = Features()
else:
self.synset.variants[-1].features = Features()
self.synset.variants[-1].features.append(Feature())
def _feature():
self.synset.variants[-1].features[-1].name = self.fieldValue
def _feature_value():
self.synset.variants[
-1].features[-1].featureValue = self.fieldValue
def _reversed():
self.synset.internalLinks[-1].features.append(Feature())
self.synset.internalLinks[-1].features[-1].name = self.fieldTag
self.synset.internalLinks[-1].features[-1].featureValue = True
def _variant_to_variant():
self.synset.internalLinks[-1].features.append(Feature())
self.synset.internalLinks[-1].features[-1].name = self.fieldTag
def _source_variant():
self.variant_to_variant_source = self.fieldValue
def _target_variant():
self.variant_to_variant_target = self.fieldValue
self.synset.internalLinks[
-1].features[-1].featureValue = (
self.variant_to_variant_source,
self.variant_to_variant_target)
# EQ_LINKS
def _eq_links():
self.synset.eqLinks = EqLinks()
self.targetType = 'ili'
def _wn_offset():
self.target_synset.wordnet_offset = self.fieldValue
self.synset.eqLinks[-1].target_concept = self.target_synset
def _add_on_id():
self.target_synset.add_on_id = self.fieldValue
self.synset.eqLinks[-1].target_concept = self.target_synset
# PROPERTIES
def _properties():
self.synset.properties = Properties()
def _name():
if self.pn:
self.synset.propertyValues.append(
PropertyValue(name=self.fieldValue))
else:
self.synset.properties.append(Property(self.fieldValue))
# PROPERTY_VALUES
def _property_values():
self.synset.propertyValues = PropertyValues()
def _property_value():
self.synset.propertyValues[-1].value = self.fieldValue
self.targetType = 'pv'
def _property_wm():
pass
rulez = {
(0,'WORD_MEANING'): _synset,
(0,'WORD_INSTANCE'): _word_instance,
(1,'PART_OF_SPEECH'): _pos,
(1,'VARIANTS'): _variants,
(2,'LITERAL'): _literal,
(3,'SENSE'): _sense,
(3,'STATUS'): _status,
(3,'DEFINITION'): _gloss,
(3,'EXAMPLES'): _examples,
(3,'USAGE_LABELS'): _usage_labels,
(4,'USAGE_LABEL'): _usage_label,
(5,'USAGE_LABEL_VALUE'): _usage_label_value,
(4,'EXAMPLE'): _example,
(3,'TRANSLATIONS'): _translations,
(4,'TRANSLATION'): _translation,
(3,'EXTERNAL_INFO'): _external_info,
(4,'SOURCE_ID'): _source_id,
(4,'CORPUS_ID'): _corpus_id,
(5,'FREQUENCY'): _frequency,
(5,'TEXT_KEY'): _text_key,
(5,'NUMBER_KEY'): _number_key,
(1,'INTERNAL_LINKS'): _internal_links,
(2,'RELATION'): _relation,
(3,'TARGET_CONCEPT'): _target_concept,
(4,'PART_OF_SPEECH'): _target_pos,
(4,'LITERAL'): _target_literal,
(5,'SENSE'): _target_sense,
(3,'FEATURES'): _features,
(4,'FEATURE'): _feature,
(5,'FEATURE_VALUE'): _feature_value,
(4,'REVERSED'): _reversed,
(4,'VARIANT_TO_VARIANT'): _variant_to_variant,
(5,'SOURCE_VARIANT'): _source_variant,
(5,'TARGET_VARIANT'): _target_variant,
(3,'SOURCE_ID'): _source_id,
(1,'EQ_LINKS'): _eq_links,
(2,'EQ_RELATION'): _relation,
(3,'TARGET_ILI'): _target_concept,
(4,'WORDNET_OFFSET'): _wn_offset,
(4,'ADD_ON_ID'): _add_on_id,
(1,'PROPERTIES'): _properties,
(1,'PROPERTY_VALUES'): _property_values,
(2,'NAME'): _name,
(3,'VALUE'): _property_value,
(3,'VALUE_AS_TEXT'): _property_value,
(3,'VALUE_AS_WORD_MEANING'): _target_concept,
}
if not offset:
offset = self.milestone
else:
self.milestone=offset
if self.file:
self.file.seek(offset,0)
line = 'X'
ili = False
var = False
while line.strip():
offset = self.file.tell()
self.file.seek(offset,0)
line = as_unicode(self.file.readline(), self.encoding).strip()
if debug:
print (line.encode('utf-8'))
self.parse_line(line)
self.noQuotes = None
select = (self.levelNumber,self.fieldTag)
if select in rulez.keys():
rulez[select]()
else:
if line:
print (self.synset.polarisText)
raise ParseError("No parsing rule for '%s'" % line)
return self.synset | python | def parse_synset(self, offset=None, debug=False):
"""Parses Synset from file
"""
if False:
pass
else:
# WORD_INSTANCE
def _word_instance():
_synset(True)
# WORD_MEANING
def _synset(pn=False):
if not pn:
self.synset = Synset()
self.pn = False
else:
self.synset = WordInstance()
self.pn = True
if self.DRN:
self.synset.number = self.DRN
self.targetType = None
def _variants():
self.synset.variants = Variants()
def _literal():
a = Variant()
self.synset.variants.append(a)
self.synset.variants[-1].literal = self.fieldValue
def _target_literal():
self.target_synset.variants.append(Variant())
self.target_synset.variants[-1].literal = self.fieldValue
def _sense():
self.synset.variants[-1].sense = self.fieldValue
def _status():
self.noQuotes = True
try:
self.synset.variants[-1].status = as_unicode(self.fieldValue)
except:
self.synset.variants[-1].status = as_unicode(str(self.fieldValue))
self.noQuotes = False
def _target_sense():
self.target_synset.variants[-1].sense = self.fieldValue
if self.targetType == 'internal':
self.synset.internalLinks[
-1].target_concept = self.target_synset
elif self.targetType == 'ili':
self.synset.eqLinks[-1].target_concept = self.target_synset
elif self.targetType == 'pv':
self.synset.propertyValues[-1].value = self.target_synset
else:
print ('BOOOOOOOOO!!') # Error TODO
def _gloss():
self.synset.variants[-1].gloss = self.fieldValue
self.synset.definition = self.fieldValue # ADDED BY KOM
def _translations():
self.synset.variants[-1].translations = Translations()
def _translation():
self.synset.variants[-1].translations.append(
Translation(
language=self.fieldValue.split(':')[0],
translation_value = self.fieldValue.split(':')[1])
)
def _examples():
self.synset.variants[-1].examples = Examples()
def _usage_labels():
self.synset.variants[-1].usage_labels = Usage_Labels()
def _external_info():
self.synset.variants[-1].externalInfo = External_Info()
def _example():
self.synset.variants[-1].examples.append(
Example(self.fieldValue)
)
def _usage_label():
self.synset.variants[
-1].usage_labels.append(
Usage_Label(name=self.fieldValue)
)
def _usage_label_value():
self.synset.variants[
-1].usage_labels[-1].usage_label_value = self.fieldValue
def _source_id():
if self.targetType == 'internal':
self.synset.internalLinks[-1].source_id = self.fieldValue
# self.synset.internalLinks[-1].source_ids.append(
# Relation_Source_Id(number=self.fieldValue))
elif self.targetType == 'ili':
self.synset.eqLinks[-1].source_id = self.fieldValue
# self.synset.eqLinks[-1].source_ids.append(
# Relation_Source_Id(number=self.fieldValue))
else:
if self.synset.variants[-1].external_info:
self.synset.variants[
-1].external_info.source_ids.append(
Source_Id(number=self.fieldValue)
)
else:
self.synset.variants[-1].external_info = External_Info()
self.synset.variants[
-1].external_info.source_ids.append(
Source_Id(number=self.fieldValue)
)
def _corpus_id():
if self.targetType == 'internal': # not needed
self.synset.internalLinks[-1].corpus_ids.append(
Relation_Corpus_Id(number=self.fieldValue))
else:
if self.synset.variants[-1].external_info:
self.synset.variants[
-1].external_info.corpus_ids.append(
Corpus_Id(number=self.fieldValue)
)
else:
self.synset.variants[-1].external_info = External_Info()
self.synset.variants[
-1].external_info.corpus_ids.append(
Corpus_Id(number=self.fieldValue)
)
def _frequency():
self.synset.variants[
-1].external_info.corpus_ids[-1].frequency = self.fieldValue
def _text_key():
self.synset.variants[
-1].external_info.source_ids[-1].text_key = self.fieldValue
def _number_key():
self.synset.variants[
-1].external_info.source_ids[
-1].number_key = self.fieldValue
def _pos():
self.synset.pos = self.fieldValue
# INTERNAL_LINKS
def _target_concept():
self.target_synset = Synset()
self.target_synset.variants = Variants()
if self.levelNumber == 3: # and self.fieldValue:
self.target_synset.number = int(self.fieldValue or 0)
def _target_pos():
self.target_synset.pos = self.fieldValue
def _internal_links():
self.synset.internalLinks = InternalLinks()
self.targetType = 'internal'
def _relation():
if self.targetType == 'internal':
self.synset.internalLinks.append(Relation())
self.synset.internalLinks[-1].name = self.fieldValue
elif self.targetType == 'ili':
self.synset.eqLinks.append(EqLink())
self.synset.eqLinks[-1].name = self.fieldValue
else:
print ('BOOOOOOOOO!!') # Error TODO
def _features():
if self.targetType == 'internal':
self.synset.internalLinks[-1].features = Features()
else:
self.synset.variants[-1].features = Features()
self.synset.variants[-1].features.append(Feature())
def _feature():
self.synset.variants[-1].features[-1].name = self.fieldValue
def _feature_value():
self.synset.variants[
-1].features[-1].featureValue = self.fieldValue
def _reversed():
self.synset.internalLinks[-1].features.append(Feature())
self.synset.internalLinks[-1].features[-1].name = self.fieldTag
self.synset.internalLinks[-1].features[-1].featureValue = True
def _variant_to_variant():
self.synset.internalLinks[-1].features.append(Feature())
self.synset.internalLinks[-1].features[-1].name = self.fieldTag
def _source_variant():
self.variant_to_variant_source = self.fieldValue
def _target_variant():
self.variant_to_variant_target = self.fieldValue
self.synset.internalLinks[
-1].features[-1].featureValue = (
self.variant_to_variant_source,
self.variant_to_variant_target)
# EQ_LINKS
def _eq_links():
self.synset.eqLinks = EqLinks()
self.targetType = 'ili'
def _wn_offset():
self.target_synset.wordnet_offset = self.fieldValue
self.synset.eqLinks[-1].target_concept = self.target_synset
def _add_on_id():
self.target_synset.add_on_id = self.fieldValue
self.synset.eqLinks[-1].target_concept = self.target_synset
# PROPERTIES
def _properties():
self.synset.properties = Properties()
def _name():
if self.pn:
self.synset.propertyValues.append(
PropertyValue(name=self.fieldValue))
else:
self.synset.properties.append(Property(self.fieldValue))
# PROPERTY_VALUES
def _property_values():
self.synset.propertyValues = PropertyValues()
def _property_value():
self.synset.propertyValues[-1].value = self.fieldValue
self.targetType = 'pv'
def _property_wm():
pass
rulez = {
(0,'WORD_MEANING'): _synset,
(0,'WORD_INSTANCE'): _word_instance,
(1,'PART_OF_SPEECH'): _pos,
(1,'VARIANTS'): _variants,
(2,'LITERAL'): _literal,
(3,'SENSE'): _sense,
(3,'STATUS'): _status,
(3,'DEFINITION'): _gloss,
(3,'EXAMPLES'): _examples,
(3,'USAGE_LABELS'): _usage_labels,
(4,'USAGE_LABEL'): _usage_label,
(5,'USAGE_LABEL_VALUE'): _usage_label_value,
(4,'EXAMPLE'): _example,
(3,'TRANSLATIONS'): _translations,
(4,'TRANSLATION'): _translation,
(3,'EXTERNAL_INFO'): _external_info,
(4,'SOURCE_ID'): _source_id,
(4,'CORPUS_ID'): _corpus_id,
(5,'FREQUENCY'): _frequency,
(5,'TEXT_KEY'): _text_key,
(5,'NUMBER_KEY'): _number_key,
(1,'INTERNAL_LINKS'): _internal_links,
(2,'RELATION'): _relation,
(3,'TARGET_CONCEPT'): _target_concept,
(4,'PART_OF_SPEECH'): _target_pos,
(4,'LITERAL'): _target_literal,
(5,'SENSE'): _target_sense,
(3,'FEATURES'): _features,
(4,'FEATURE'): _feature,
(5,'FEATURE_VALUE'): _feature_value,
(4,'REVERSED'): _reversed,
(4,'VARIANT_TO_VARIANT'): _variant_to_variant,
(5,'SOURCE_VARIANT'): _source_variant,
(5,'TARGET_VARIANT'): _target_variant,
(3,'SOURCE_ID'): _source_id,
(1,'EQ_LINKS'): _eq_links,
(2,'EQ_RELATION'): _relation,
(3,'TARGET_ILI'): _target_concept,
(4,'WORDNET_OFFSET'): _wn_offset,
(4,'ADD_ON_ID'): _add_on_id,
(1,'PROPERTIES'): _properties,
(1,'PROPERTY_VALUES'): _property_values,
(2,'NAME'): _name,
(3,'VALUE'): _property_value,
(3,'VALUE_AS_TEXT'): _property_value,
(3,'VALUE_AS_WORD_MEANING'): _target_concept,
}
if not offset:
offset = self.milestone
else:
self.milestone=offset
if self.file:
self.file.seek(offset,0)
line = 'X'
ili = False
var = False
while line.strip():
offset = self.file.tell()
self.file.seek(offset,0)
line = as_unicode(self.file.readline(), self.encoding).strip()
if debug:
print (line.encode('utf-8'))
self.parse_line(line)
self.noQuotes = None
select = (self.levelNumber,self.fieldTag)
if select in rulez.keys():
rulez[select]()
else:
if line:
print (self.synset.polarisText)
raise ParseError("No parsing rule for '%s'" % line)
return self.synset | Parses Synset from file | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1158-L1479 |
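A minimal usage sketch for Parser.parse_synset, following the same pattern as Synset.parse further below: create a Parser, attach an open binary file handle, and pass a byte offset. The file path here is a placeholder, not something taken from the source.

# Hedged sketch -- the Polaris dump path is hypothetical.
from estnltk.wordnet.eurown import Parser

p = Parser()
p.file = open('wordnet_polaris.txt', 'rb')   # parse_synset reads via self.file
synset = p.parse_synset(offset=0)            # offset 0 falls back to the parser's stored milestone
print(synset.polarisText)                    # Polaris text form, as printed in the error branch above
p.file.close()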
estnltk/estnltk | estnltk/wordnet/eurown.py | Parser.parse_wordnet | def parse_wordnet(self,debug=False):
'''Parses wordnet from
<self.file>
'''
synList = []
self.milestone = 0 # to start from beginning of file
while self.milestone < os.path.getsize(self.fileName) - 5:
if debug:
print ('self.milestone', self.milestone)
a = self.parse_synset(offset=self.milestone)
synList.append(a)
self.milestone = self.file.tell()
return synList | python | def parse_wordnet(self,debug=False):
'''Parses wordnet from
<self.file>
'''
synList = []
self.milestone = 0 # to start from beginning of file
while self.milestone < os.path.getsize(self.fileName) - 5:
if debug:
print ('self.milestone', self.milestone)
a = self.parse_synset(offset=self.milestone)
synList.append(a)
self.milestone = self.file.tell()
return synList | Parses wordnet from
<self.file> | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1481-L1493 |
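A companion sketch for Parser.parse_wordnet: the loop above measures the file with os.path.getsize(self.fileName), so fileName has to be set in addition to the open handle. The path is again a placeholder.

from estnltk.wordnet.eurown import Parser

p = Parser()
p.fileName = 'wordnet_polaris.txt'   # placeholder; needed for os.path.getsize() in the loop
p.file = open(p.fileName, 'rb')
synsets = p.parse_wordnet()          # list of Synset objects, one per entry in the file
print(len(synsets))
p.file.close()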
estnltk/estnltk | estnltk/wordnet/eurown.py | Variant.addTranslation | def addTranslation(self,translation):
'''Appends one Translation to translations
'''
if isinstance(translation, Translation):
self.translations.append(translation)
else:
            raise TranslationError(
'translation Type should be Translation, not %s' % type(
translation)
) | python | def addTranslation(self,translation):
'''Appends one Translation to translations
'''
if isinstance(translation, Translation):
self.translations.append(translation)
else:
            raise TranslationError(
'translation Type should be Translation, not %s' % type(
translation)
) | Appends one Translation to translations | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1717-L1726 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Variant.addVariantFeature | def addVariantFeature(self,variantFeature):
'''Appends one VariantFeature to variantFeatures
'''
if isinstance(variantFeature, Feature):
self.features.append(variantFeature)
else:
            raise TypeError(
'variantFeature Type should be Feature, not %s' % type(
variantFeature)
) | python | def addVariantFeature(self,variantFeature):
'''Appends one VariantFeature to variantFeatures
'''
if isinstance(variantFeature, Feature):
self.features.append(variantFeature)
else:
            raise TypeError(
'variantFeature Type should be Feature, not %s' % type(
variantFeature)
) | Appends one VariantFeature to variantFeatures | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1728-L1737 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Variant.addUsage_Label | def addUsage_Label(self,usage_label):
'''Appends one Usage_Label to usage_labels
'''
if isinstance(usage_label, Usage_Label):
self.usage_labels.append(usage_label)
else:
            raise Usage_LabelError(
'usage_label Type should be Usage_Label, not %s' % type(
usage_label)
) | python | def addUsage_Label(self,usage_label):
'''Appends one Usage_Label to usage_labels
'''
if isinstance(usage_label, Usage_Label):
self.usage_labels.append(usage_label)
else:
            raise Usage_LabelError(
'usage_label Type should be Usage_Label, not %s' % type(
usage_label)
) | Appends one Usage_Label to usage_labels | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1739-L1748 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Variant.addExample | def addExample(self,example):
'''Appends one Example to examples
'''
if isinstance(example, Example):
self.examples.append(example)
else:
            raise ExampleError(
'example Type should be Example, not %s' % type(example)
) | python | def addExample(self,example):
'''Appends one Example to examples
'''
if isinstance(example, Example):
self.examples.append(example)
else:
            raise ExampleError(
'example Type should be Example, not %s' % type(example)
) | Appends one Example to examples | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1750-L1758 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.firstVariant | def firstVariant():
"""first variant of Variants
Read-only
"""
def fget(self):
if self.variants:
return self.variants[0]
else:
variant = Variant()
return variant
return locals() | python | def firstVariant():
"""first variant of Variants
Read-only
"""
def fget(self):
if self.variants:
return self.variants[0]
else:
variant = Variant()
return variant
return locals() | first variant of Variants
Read-only | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1978-L1990 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.literals | def literals():
'''Returns a list of literals
in the Synset
read-only
'''
def fget(self):
if self.variants:
return map(lambda x: x.literal,
self.variants)
else:
return None
return locals() | python | def literals():
'''Returns a list of literals
in the Synset
read-only
'''
def fget(self):
if self.variants:
return map(lambda x: x.literal,
self.variants)
else:
return None
return locals() | Returns a list of literals
in the Synset
read-only | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L1994-L2006 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.addVariantOld | def addVariantOld(self,
literal='',
sense=0,
gloss='',
examples=[]):
        '''Appends a variant.
        TODO: make it possible to add a Variant object directly.
'''
var = Variant(literal=literal,
sense=sense,
gloss=gloss,
examples=examples)
self.variants.append(var) | python | def addVariantOld(self,
literal='',
sense=0,
gloss='',
examples=[]):
        '''Appends a variant.
        TODO: make it possible to add a Variant object directly.
'''
var = Variant(literal=literal,
sense=sense,
gloss=gloss,
examples=examples)
        self.variants.append(var) | Appends a variant.
        TODO: make it possible to add a Variant object directly. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2010-L2024
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.addVariant | def addVariant(self,variant):
'''Appends one Variant to variants
'''
if isinstance(variant, Variant):
self.variants.append(variant)
else:
            raise VariantError(
'variant Type should be Variant, not %s' % type(variant)) | python | def addVariant(self,variant):
'''Appends one Variant to variants
'''
if isinstance(variant, Variant):
self.variants.append(variant)
else:
            raise VariantError(
'variant Type should be Variant, not %s' % type(variant)) | Appends one Variant to variants | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2026-L2034 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.addInternalLink | def addInternalLink(self, link):
'''Appends InternalLink
'''
if isinstance(link, InternalLink):
self.internalLinks.append(link)
else:
raise InternalLinkError(
'link Type should be InternalLink, not %s' % type(link)) | python | def addInternalLink(self, link):
'''Appends InternalLink
'''
if isinstance(link, InternalLink):
self.internalLinks.append(link)
else:
raise InternalLinkError(
'link Type should be InternalLink, not %s' % type(link)) | Appends InternalLink | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2037-L2045 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.addRelation | def addRelation(self, link):
'''Appends Relation
'''
if isinstance(link, Relation):
self.internalLinks.append(link)
else:
raise TypeError(
'link Type should be InternalLink, not %s' % type(link)) | python | def addRelation(self, link):
'''Appends Relation
'''
if isinstance(link, Relation):
self.internalLinks.append(link)
else:
raise TypeError(
'link Type should be InternalLink, not %s' % type(link)) | Appends Relation | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2047-L2055 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.addEqLink | def addEqLink(self, link):
'''Appends EqLink
'''
if isinstance(link, EqLink):
self.eqLinks.append(link)
else:
raise TypeError(
'link Type should be InternalLink, not %s' % type(link)) | python | def addEqLink(self, link):
'''Appends EqLink
'''
if isinstance(link, EqLink):
self.eqLinks.append(link)
else:
raise TypeError(
'link Type should be InternalLink, not %s' % type(link)) | Appends EqLink | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2058-L2066 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.named_relations | def named_relations(self, name, neg=False):
'''Returns list of named Relations.
<name> may be string or list.
'''
if self.internalLinks and not neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.name == name,
self.internalLinks)
elif isinstance(name, list):
return filter(lambda x: x.name in name,
self.internalLinks)
else:
                return None  # should raise an error
elif self.internalLinks and neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.name != name,
self.internalLinks)
elif isinstance(name, list):
return filter(lambda x: x.name not in name,
self.internalLinks)
else:
                return None  # should raise an error
else:
return [] | python | def named_relations(self, name, neg=False):
'''Returns list of named Relations.
<name> may be string or list.
'''
if self.internalLinks and not neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.name == name,
self.internalLinks)
elif isinstance(name, list):
return filter(lambda x: x.name in name,
self.internalLinks)
else:
                return None  # should raise an error
elif self.internalLinks and neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.name != name,
self.internalLinks)
elif isinstance(name, list):
return filter(lambda x: x.name not in name,
self.internalLinks)
else:
                return None  # should raise an error
else:
return [] | Returns list of named Relations.
<name> may be string or list. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2069-L2099 |
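A short usage sketch for named_relations. On Python 3 filter() returns an iterator, so results are wrapped in list(); the relation names are assumed EuroWordNet labels, and synset is assumed to be a Synset whose internalLinks have been populated by the parser.

# Assumes `synset` is a parsed Synset with internalLinks populated.
hyper = list(synset.named_relations('has_hyperonym'))                  # one relation name
taxo = list(synset.named_relations(['has_hyperonym', 'has_hyponym']))  # several names
other = list(synset.named_relations('has_hyperonym', neg=True))        # everything but hyperonymy
for rel in hyper:
    print(rel.name, rel.target_concept)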
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.named_eq_relations | def named_eq_relations(self, name, neg=False):
'''Returns list of named eqLinks.
<name> may be string or list.
'''
if self.eqLinks and not neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.relation.name == name,
self.eqLinks)
elif isinstance(name, list):
return filter(lambda x: x.relation.name in name,
self.eqLinks)
else:
                return None  # should raise an error
elif self.eqLinks and neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.relation.name != name,
self.eqLinks)
elif isinstance(name, list):
return filter(lambda x: x.relation.name not in name,
self.eqLinks)
else:
                return None  # should raise an error
else:
return None | python | def named_eq_relations(self, name, neg=False):
'''Returns list of named eqLinks.
<name> may be string or list.
'''
if self.eqLinks and not neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.relation.name == name,
self.eqLinks)
elif isinstance(name, list):
return filter(lambda x: x.relation.name in name,
self.eqLinks)
else:
                return None  # should raise an error
elif self.eqLinks and neg:
if isinstance(name, six.string_types):
return filter(lambda x: x.relation.name != name,
self.eqLinks)
elif isinstance(name, list):
return filter(lambda x: x.relation.name not in name,
self.eqLinks)
else:
                return None  # should raise an error
else:
return None | Returns list of named eqLinks.
<name> may be string or list. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2102-L2131 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.parse | def parse(self,fileName,offset):
'''Parses synset from file <fileName>
from offset <offset>
'''
p = Parser()
p.file = open(fileName, 'rb')
a = p.parse_synset(offset=offset)
p.file.close()
self.__dict__.update(a.__dict__) | python | def parse(self,fileName,offset):
'''Parses synset from file <fileName>
from offset <offset>
'''
p = Parser()
p.file = open(fileName, 'rb')
a = p.parse_synset(offset=offset)
p.file.close()
self.__dict__.update(a.__dict__) | Parses synset from file <fileName>
from offset <offset> | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2177-L2185 |
estnltk/estnltk | estnltk/wordnet/eurown.py | Synset.write | def write(self,fileName):
'''Appends synset to Polaris IO file <fileName>
'''
f = open(fileName, 'ab')
f.write('%s%s' % (self.polarisText,
Synset.linebreak)
)
f.close() | python | def write(self,fileName):
'''Appends synset to Polaris IO file <fileName>
'''
f = open(fileName, 'ab')
f.write('%s%s' % (self.polarisText,
Synset.linebreak)
)
f.close() | Appends synset to Polaris IO file <fileName> | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/wordnet/eurown.py#L2188-L2195 |
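A round-trip sketch combining Synset.parse and Synset.write; both file names and the byte offset are placeholders, and it assumes literals is exposed as a property (the locals()-returning definition above suggests the property(**locals()) idiom). Note that write() opens the output in 'ab' mode while formatting a str, so under Python 3 the text may need encoding first; the sketch only shows the intended call pattern.

from estnltk.wordnet.eurown import Synset

s = Synset()
s.parse('wordnet_polaris.txt', 0)   # placeholder file name and byte offset
lits = s.literals                   # derived from the variants; may be None
if lits:
    print(list(lits))
s.write('wordnet_out.txt')          # appends the synset back in Polaris format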
estnltk/estnltk | estnltk/syntax/utils.py | _fix_out_of_sentence_links | def _fix_out_of_sentence_links( alignments, sent_start, sent_end ):
''' Fixes out-of-the-sentence links in the given sentence.
The sentence is a sublist of *alignments*, starting from
*sent_start* and ending one token before *sent_end*;
'''
sent_len = sent_end - sent_start
j = sent_start
while j < sent_start + sent_len:
for rel_id, rel in enumerate( alignments[j][PARSER_OUT] ):
if int( rel[1] ) >= sent_len:
# If the link points out-of-the-sentence, fix
# the link so that it points inside the sentence
# boundaries:
wid = j - sent_start
if sent_len == 1:
# a single word becomes a root
rel[1] = -1
elif wid-1 > -1:
# word at the middle/end is linked to the previous
rel[1] = wid - 1
elif wid-1 == -1:
# word at the beginning is linked to the next
rel[1] = wid + 1
alignments[j][PARSER_OUT][rel_id] = rel
j += 1 | python | def _fix_out_of_sentence_links( alignments, sent_start, sent_end ):
''' Fixes out-of-the-sentence links in the given sentence.
The sentence is a sublist of *alignments*, starting from
*sent_start* and ending one token before *sent_end*;
'''
sent_len = sent_end - sent_start
j = sent_start
while j < sent_start + sent_len:
for rel_id, rel in enumerate( alignments[j][PARSER_OUT] ):
if int( rel[1] ) >= sent_len:
# If the link points out-of-the-sentence, fix
# the link so that it points inside the sentence
# boundaries:
wid = j - sent_start
if sent_len == 1:
# a single word becomes a root
rel[1] = -1
elif wid-1 > -1:
# word at the middle/end is linked to the previous
rel[1] = wid - 1
elif wid-1 == -1:
# word at the beginning is linked to the next
rel[1] = wid + 1
alignments[j][PARSER_OUT][rel_id] = rel
j += 1 | Fixes out-of-the-sentence links in the given sentence.
The sentence is a sublist of *alignments*, starting from
*sent_start* and ending one token before *sent_end*; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L47-L71 |
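A tiny worked illustration of the re-linking logic above, on alignments shaped the way the module uses them (dicts whose 'parser_out' value is a list of [label, head] pairs); the labels themselves are made up.

# Illustration only: a 3-word sentence whose middle word points outside the sentence.
from estnltk.syntax.utils import _fix_out_of_sentence_links   # module-private helper

PARSER_OUT = 'parser_out'                        # same key as the module constant
alignments = [
    {PARSER_OUT: [['@SUBJ', 2]]},                # head inside the sentence: left untouched
    {PARSER_OUT: [['@ADVL', 5]]},                # head index 5 >= sentence length 3
    {PARSER_OUT: [['ROOT', -1]]},
]
_fix_out_of_sentence_links(alignments, 0, 3)
print(alignments[1][PARSER_OUT])                 # -> [['@ADVL', 0]], re-linked to the previous word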
estnltk/estnltk | estnltk/syntax/utils.py | normalise_alignments | def normalise_alignments( alignments, data_type=VISLCG3_DATA, **kwargs ):
''' Normalises dependency syntactic information in the given list of alignments.
*) Translates tree node indices from the syntax format (indices starting
from 1), to EstNLTK format (indices starting from 0);
*) Removes redundant information (morphological analyses) and keeps only
syntactic information, in the most compact format;
*) Brings MaltParser and VISLCG3 info into common format;
Expects that the list of alignments contains dicts, where each dict has
following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Assumes that dicts are listed in the order of words appearance in the text;
( basically, assumes the output of methods align_CONLL_with_Text() and
align_cg3_with_Text() )
Returns the input list (alignments), where old analysis lines ('parser_out') have
been replaced with the new compact form of analyses (if keep_old == False), or where
    old analysis lines ('parser_out') have been replaced with the new compact form of analyses,
and the old analysis lines are preserved under a separate key: 'init_parser_out' (if
keep_old == True);
In the compact list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label
surface syntactic label of the word, e.g. '@SUBJ', '@OBJ', '@ADVL'
*) index_of_the_head
index of the head; -1 if the current token is root;
Parameters
-----------
alignments : list of items
A list of dicts, where each item/dict has following attributes:
'start', 'end', 'sent_id', 'parser_out'
data_type : str
Type of data in list_of_analysis_lines; Possible types: 'vislcg3'
(default), and 'conll';
rep_miss_w_dummy : bool
Optional argument specifying whether missing analyses should be replaced
with dummy analyses ( in the form ['xxx', link_to_self] ); If False,
an Exception is raised in case of a missing analysis;
Default:True
fix_selfrefs : bool
Optional argument specifying whether self-references in syntactic
dependencies should be fixed;
A self-reference link is firstly re-oriented as a link to the previous word
in the sentence, and if the previous word does not exist, the link is
re-oriented to the next word in the sentence; If the self-linked word is
the only word in the sentence, it is made the root of the sentence;
Default:True
fix_out_of_sent : bool
Optional argument specifying whether references pointing out of the sentence
(the parent index exceeds the sentence boundaries) should be fixed;
The logic used in fixing out-of-sentence links is the same as the logic for
fix_selfrefs;
Default:False
keep_old : bool
Optional argument specifying whether the old analysis lines should be
preserved after overwriting 'parser_out' with new analysis lines;
If True, each dict will be augmented with key 'init_parser_out' which
contains the initial/old analysis lines;
Default:False
mark_root : bool
Optional argument specifying whether the root node in the dependency tree
(the node pointing to -1) should be assigned the label 'ROOT' (regardless
its current label).
This might be required, if one wants to make MaltParser's and VISLCG3 out-
puts more similar, as MaltParser currently uses 'ROOT' labels, while VISLCG3
does not;
Default:False
(Example text: 'Millega pitsat tellida ? Hea küsimus .')
    Example input (VISLCG3):
-----------------------
{'end': 7, 'sent_id': 0, 'start': 0, 'parser_out': ['\t"mis" Lga P inter rel sg kom @NN> @ADVL #1->3\r']}
{'end': 14, 'sent_id': 0, 'start': 8, 'parser_out': ['\t"pitsa" Lt S com sg part @OBJ #2->3\r']}
{'end': 22, 'sent_id': 0, 'start': 15, 'parser_out': ['\t"telli" Lda V main inf @IMV #3->0\r']}
{'end': 23, 'sent_id': 0, 'start': 22, 'parser_out': ['\t"?" Z Int CLB #4->4\r']}
{'end': 27, 'sent_id': 1, 'start': 24, 'parser_out': ['\t"hea" L0 A pos sg nom @AN> #1->2\r']}
{'end': 35, 'sent_id': 1, 'start': 28, 'parser_out': ['\t"küsimus" L0 S com sg nom @SUBJ #2->0\r']}
{'end': 36, 'sent_id': 1, 'start': 35, 'parser_out': ['\t"." Z Fst CLB #3->3\r']}
Example output:
---------------
{'sent_id': 0, 'start': 0, 'end': 7, 'parser_out': [['@NN>', 2], ['@ADVL', 2]]}
{'sent_id': 0, 'start': 8, 'end': 14, 'parser_out': [['@OBJ', 2]]}
{'sent_id': 0, 'start': 15, 'end': 22, 'parser_out': [['@IMV', -1]]}
{'sent_id': 0, 'start': 22, 'end': 23, 'parser_out': [['xxx', 2]]}
{'sent_id': 1, 'start': 24, 'end': 27, 'parser_out': [['@AN>', 1]]}
{'sent_id': 1, 'start': 28, 'end': 35, 'parser_out': [['@SUBJ', -1]]}
{'sent_id': 1, 'start': 35, 'end': 36, 'parser_out': [['xxx', 1]]}
'''
if not isinstance( alignments, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
if data_type.lower() == VISLCG3_DATA:
data_type = VISLCG3_DATA
elif data_type.lower() == CONLL_DATA:
data_type = CONLL_DATA
else:
raise Exception('(!) Unexpected type of data: ', data_type)
keep_old = False
rep_miss_w_dummy = True
mark_root = False
fix_selfrefs = True
fix_out_of_sent = False
for argName, argVal in kwargs.items():
if argName in ['selfrefs', 'fix_selfrefs'] and argVal in [True, False]:
# Fix self-references
fix_selfrefs = argVal
if argName in ['keep_old'] and argVal in [True, False]:
# After the normalisation, keep also the original analyses;
keep_old = argVal
if argName in ['rep_miss_w_dummy', 'rep_miss'] and argVal in [True, False]:
# Replace missing analyses with dummy analyses;
rep_miss_w_dummy = argVal
if argName in ['mark_root', 'root'] and argVal in [True, False]:
# Mark the root node in the syntactic tree with the label ROOT;
mark_root = argVal
if argName in ['fix_out_of_sent']:
# Fix links pointing out of the sentence;
fix_out_of_sent = bool(argVal)
# Iterate over the alignments and normalise information
prev_sent_id = -1
wordID = 0
sentStart = -1
for i in range(len(alignments)):
alignment = alignments[i]
if prev_sent_id != alignment[SENT_ID]:
# Detect and fix out-of-the-sentence links in the previous sentence (if required)
if fix_out_of_sent and sentStart > -1:
_fix_out_of_sentence_links( alignments, sentStart, i )
# Start of a new sentence: reset word id
wordID = 0
sentStart = i
# 1) Extract syntactic information
foundRelations = []
if data_type == VISLCG3_DATA:
# ***************** VISLCG3 format
for line in alignment[PARSER_OUT]:
# Extract info from VISLCG3 format analysis:
sfuncs = pat_cg3_surface_rel.findall( line )
deprels = pat_cg3_dep_rel.findall( line )
# If sfuncs is empty, generate an empty syntactic function (e.g. for
# punctuation)
sfuncs = ['xxx'] if not sfuncs else sfuncs
# Generate all pairs of labels vs dependency
for func in sfuncs:
for (relS,relT) in deprels:
relS = int(relS)-1
relT = int(relT)-1
foundRelations.append( [func, relT] )
elif data_type == CONLL_DATA:
# ***************** CONLL format
for line in alignment[PARSER_OUT]:
parts = line.split('\t')
if len(parts) != 10:
raise Exception('(!) Unexpected line format for CONLL data:', line)
relT = int( parts[6] ) - 1
func = parts[7]
foundRelations.append( [func, relT] )
# Handle missing relations (VISLCG3 specific problem)
if not foundRelations:
# If no alignments were found (probably due to an error in analysis)
if rep_miss_w_dummy:
# Replace missing analysis with a dummy analysis, with dep link
# pointing to self;
foundRelations.append( ['xxx', wordID] )
else:
raise Exception('(!) Analysis missing for the word nr.', alignment[0])
# Fix self references ( if requested )
if fix_selfrefs:
for r in range(len(foundRelations)):
if foundRelations[r][1] == wordID:
# Make it to point to the previous word in the sentence,
# and if the previous one does not exist, make it to point
# to the next word;
foundRelations[r][1] = \
wordID-1 if wordID-1 > -1 else wordID+1
# If the self-linked token is the only token in the sentence,
# mark it as the root of the sentence:
if wordID-1 == -1 and (i+1 == len(alignments) or \
alignments[i][SENT_ID] != alignments[i+1][SENT_ID]):
foundRelations[r][1] = -1
# Mark the root node in the syntactic tree with the label ROOT ( if requested )
if mark_root:
for r in range(len(foundRelations)):
if foundRelations[r][1] == -1:
foundRelations[r][0] = 'ROOT'
# 2) Replace existing syntactic info with more compact info
if not keep_old:
# Overwrite old info
alignment[PARSER_OUT] = foundRelations
else:
# or preserve the initial information, and add new compact information
alignment[INIT_PARSER_OUT] = alignment[PARSER_OUT]
alignment[PARSER_OUT] = foundRelations
alignments[i] = alignment
prev_sent_id = alignment[SENT_ID]
# Increase word id
wordID += 1
# Detect and fix out-of-the-sentence links in the last sentence (if required)
if fix_out_of_sent and sentStart > -1:
_fix_out_of_sentence_links( alignments, sentStart, len(alignments) )
return alignments | python | def normalise_alignments( alignments, data_type=VISLCG3_DATA, **kwargs ):
''' Normalises dependency syntactic information in the given list of alignments.
*) Translates tree node indices from the syntax format (indices starting
from 1), to EstNLTK format (indices starting from 0);
*) Removes redundant information (morphological analyses) and keeps only
syntactic information, in the most compact format;
*) Brings MaltParser and VISLCG3 info into common format;
Expects that the list of alignments contains dicts, where each dict has
following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Assumes that dicts are listed in the order of words appearance in the text;
( basically, assumes the output of methods align_CONLL_with_Text() and
align_cg3_with_Text() )
Returns the input list (alignments), where old analysis lines ('parser_out') have
been replaced with the new compact form of analyses (if keep_old == False), or where
    old analysis lines ('parser_out') have been replaced with the new compact form of analyses,
and the old analysis lines are preserved under a separate key: 'init_parser_out' (if
keep_old == True);
In the compact list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label
surface syntactic label of the word, e.g. '@SUBJ', '@OBJ', '@ADVL'
*) index_of_the_head
index of the head; -1 if the current token is root;
Parameters
-----------
alignments : list of items
A list of dicts, where each item/dict has following attributes:
'start', 'end', 'sent_id', 'parser_out'
data_type : str
Type of data in list_of_analysis_lines; Possible types: 'vislcg3'
(default), and 'conll';
rep_miss_w_dummy : bool
Optional argument specifying whether missing analyses should be replaced
with dummy analyses ( in the form ['xxx', link_to_self] ); If False,
an Exception is raised in case of a missing analysis;
Default:True
fix_selfrefs : bool
Optional argument specifying whether self-references in syntactic
dependencies should be fixed;
A self-reference link is firstly re-oriented as a link to the previous word
in the sentence, and if the previous word does not exist, the link is
re-oriented to the next word in the sentence; If the self-linked word is
the only word in the sentence, it is made the root of the sentence;
Default:True
fix_out_of_sent : bool
Optional argument specifying whether references pointing out of the sentence
(the parent index exceeds the sentence boundaries) should be fixed;
The logic used in fixing out-of-sentence links is the same as the logic for
fix_selfrefs;
Default:False
keep_old : bool
Optional argument specifying whether the old analysis lines should be
preserved after overwriting 'parser_out' with new analysis lines;
If True, each dict will be augmented with key 'init_parser_out' which
contains the initial/old analysis lines;
Default:False
mark_root : bool
Optional argument specifying whether the root node in the dependency tree
(the node pointing to -1) should be assigned the label 'ROOT' (regardless
its current label).
This might be required, if one wants to make MaltParser's and VISLCG3 out-
puts more similar, as MaltParser currently uses 'ROOT' labels, while VISLCG3
does not;
Default:False
(Example text: 'Millega pitsat tellida ? Hea küsimus .')
    Example input (VISLCG3):
-----------------------
{'end': 7, 'sent_id': 0, 'start': 0, 'parser_out': ['\t"mis" Lga P inter rel sg kom @NN> @ADVL #1->3\r']}
{'end': 14, 'sent_id': 0, 'start': 8, 'parser_out': ['\t"pitsa" Lt S com sg part @OBJ #2->3\r']}
{'end': 22, 'sent_id': 0, 'start': 15, 'parser_out': ['\t"telli" Lda V main inf @IMV #3->0\r']}
{'end': 23, 'sent_id': 0, 'start': 22, 'parser_out': ['\t"?" Z Int CLB #4->4\r']}
{'end': 27, 'sent_id': 1, 'start': 24, 'parser_out': ['\t"hea" L0 A pos sg nom @AN> #1->2\r']}
{'end': 35, 'sent_id': 1, 'start': 28, 'parser_out': ['\t"küsimus" L0 S com sg nom @SUBJ #2->0\r']}
{'end': 36, 'sent_id': 1, 'start': 35, 'parser_out': ['\t"." Z Fst CLB #3->3\r']}
Example output:
---------------
{'sent_id': 0, 'start': 0, 'end': 7, 'parser_out': [['@NN>', 2], ['@ADVL', 2]]}
{'sent_id': 0, 'start': 8, 'end': 14, 'parser_out': [['@OBJ', 2]]}
{'sent_id': 0, 'start': 15, 'end': 22, 'parser_out': [['@IMV', -1]]}
{'sent_id': 0, 'start': 22, 'end': 23, 'parser_out': [['xxx', 2]]}
{'sent_id': 1, 'start': 24, 'end': 27, 'parser_out': [['@AN>', 1]]}
{'sent_id': 1, 'start': 28, 'end': 35, 'parser_out': [['@SUBJ', -1]]}
{'sent_id': 1, 'start': 35, 'end': 36, 'parser_out': [['xxx', 1]]}
'''
if not isinstance( alignments, list ):
raise Exception('(!) Unexpected type of input argument! Expected a list of strings.')
if data_type.lower() == VISLCG3_DATA:
data_type = VISLCG3_DATA
elif data_type.lower() == CONLL_DATA:
data_type = CONLL_DATA
else:
raise Exception('(!) Unexpected type of data: ', data_type)
keep_old = False
rep_miss_w_dummy = True
mark_root = False
fix_selfrefs = True
fix_out_of_sent = False
for argName, argVal in kwargs.items():
if argName in ['selfrefs', 'fix_selfrefs'] and argVal in [True, False]:
# Fix self-references
fix_selfrefs = argVal
if argName in ['keep_old'] and argVal in [True, False]:
# After the normalisation, keep also the original analyses;
keep_old = argVal
if argName in ['rep_miss_w_dummy', 'rep_miss'] and argVal in [True, False]:
# Replace missing analyses with dummy analyses;
rep_miss_w_dummy = argVal
if argName in ['mark_root', 'root'] and argVal in [True, False]:
# Mark the root node in the syntactic tree with the label ROOT;
mark_root = argVal
if argName in ['fix_out_of_sent']:
# Fix links pointing out of the sentence;
fix_out_of_sent = bool(argVal)
# Iterate over the alignments and normalise information
prev_sent_id = -1
wordID = 0
sentStart = -1
for i in range(len(alignments)):
alignment = alignments[i]
if prev_sent_id != alignment[SENT_ID]:
# Detect and fix out-of-the-sentence links in the previous sentence (if required)
if fix_out_of_sent and sentStart > -1:
_fix_out_of_sentence_links( alignments, sentStart, i )
# Start of a new sentence: reset word id
wordID = 0
sentStart = i
# 1) Extract syntactic information
foundRelations = []
if data_type == VISLCG3_DATA:
# ***************** VISLCG3 format
for line in alignment[PARSER_OUT]:
# Extract info from VISLCG3 format analysis:
sfuncs = pat_cg3_surface_rel.findall( line )
deprels = pat_cg3_dep_rel.findall( line )
# If sfuncs is empty, generate an empty syntactic function (e.g. for
# punctuation)
sfuncs = ['xxx'] if not sfuncs else sfuncs
# Generate all pairs of labels vs dependency
for func in sfuncs:
for (relS,relT) in deprels:
relS = int(relS)-1
relT = int(relT)-1
foundRelations.append( [func, relT] )
elif data_type == CONLL_DATA:
# ***************** CONLL format
for line in alignment[PARSER_OUT]:
parts = line.split('\t')
if len(parts) != 10:
raise Exception('(!) Unexpected line format for CONLL data:', line)
relT = int( parts[6] ) - 1
func = parts[7]
foundRelations.append( [func, relT] )
# Handle missing relations (VISLCG3 specific problem)
if not foundRelations:
# If no alignments were found (probably due to an error in analysis)
if rep_miss_w_dummy:
# Replace missing analysis with a dummy analysis, with dep link
# pointing to self;
foundRelations.append( ['xxx', wordID] )
else:
raise Exception('(!) Analysis missing for the word nr.', alignment[0])
# Fix self references ( if requested )
if fix_selfrefs:
for r in range(len(foundRelations)):
if foundRelations[r][1] == wordID:
# Make it to point to the previous word in the sentence,
# and if the previous one does not exist, make it to point
# to the next word;
foundRelations[r][1] = \
wordID-1 if wordID-1 > -1 else wordID+1
# If the self-linked token is the only token in the sentence,
# mark it as the root of the sentence:
if wordID-1 == -1 and (i+1 == len(alignments) or \
alignments[i][SENT_ID] != alignments[i+1][SENT_ID]):
foundRelations[r][1] = -1
# Mark the root node in the syntactic tree with the label ROOT ( if requested )
if mark_root:
for r in range(len(foundRelations)):
if foundRelations[r][1] == -1:
foundRelations[r][0] = 'ROOT'
# 2) Replace existing syntactic info with more compact info
if not keep_old:
# Overwrite old info
alignment[PARSER_OUT] = foundRelations
else:
# or preserve the initial information, and add new compact information
alignment[INIT_PARSER_OUT] = alignment[PARSER_OUT]
alignment[PARSER_OUT] = foundRelations
alignments[i] = alignment
prev_sent_id = alignment[SENT_ID]
# Increase word id
wordID += 1
# Detect and fix out-of-the-sentence links in the last sentence (if required)
if fix_out_of_sent and sentStart > -1:
_fix_out_of_sentence_links( alignments, sentStart, len(alignments) )
return alignments | Normalises dependency syntactic information in the given list of alignments.
*) Translates tree node indices from the syntax format (indices starting
from 1), to EstNLTK format (indices starting from 0);
*) Removes redundant information (morphological analyses) and keeps only
syntactic information, in the most compact format;
*) Brings MaltParser and VISLCG3 info into common format;
Expects that the list of alignments contains dicts, where each dict has
following attributes (at minimum):
'start' -- start index of the word in Text;
'end' -- end index of the word in Text;
'sent_id' -- index of the sentence in Text, starting from 0;
'parser_out' -- list of analyses from the output of the syntactic parser;
Assumes that dicts are listed in the order of words appearance in the text;
( basically, assumes the output of methods align_CONLL_with_Text() and
align_cg3_with_Text() )
Returns the input list (alignments), where old analysis lines ('parser_out') have
been replaced with the new compact form of analyses (if keep_old == False), or where
    old analysis lines ('parser_out') have been replaced with the new compact form of analyses,
and the old analysis lines are preserved under a separate key: 'init_parser_out' (if
keep_old == True);
In the compact list of analyses, each item has the following structure:
[ syntactic_label, index_of_the_head ]
*) syntactic_label
surface syntactic label of the word, e.g. '@SUBJ', '@OBJ', '@ADVL'
*) index_of_the_head
index of the head; -1 if the current token is root;
Parameters
-----------
alignments : list of items
A list of dicts, where each item/dict has following attributes:
'start', 'end', 'sent_id', 'parser_out'
data_type : str
Type of data in list_of_analysis_lines; Possible types: 'vislcg3'
(default), and 'conll';
rep_miss_w_dummy : bool
Optional argument specifying whether missing analyses should be replaced
with dummy analyses ( in the form ['xxx', link_to_self] ); If False,
an Exception is raised in case of a missing analysis;
Default:True
fix_selfrefs : bool
Optional argument specifying whether self-references in syntactic
dependencies should be fixed;
A self-reference link is firstly re-oriented as a link to the previous word
in the sentence, and if the previous word does not exist, the link is
re-oriented to the next word in the sentence; If the self-linked word is
the only word in the sentence, it is made the root of the sentence;
Default:True
fix_out_of_sent : bool
Optional argument specifying whether references pointing out of the sentence
(the parent index exceeds the sentence boundaries) should be fixed;
The logic used in fixing out-of-sentence links is the same as the logic for
fix_selfrefs;
Default:False
keep_old : bool
Optional argument specifying whether the old analysis lines should be
preserved after overwriting 'parser_out' with new analysis lines;
If True, each dict will be augmented with key 'init_parser_out' which
contains the initial/old analysis lines;
Default:False
mark_root : bool
Optional argument specifying whether the root node in the dependency tree
(the node pointing to -1) should be assigned the label 'ROOT' (regardless
its current label).
This might be required, if one wants to make MaltParser's and VISLCG3 out-
puts more similar, as MaltParser currently uses 'ROOT' labels, while VISLCG3
does not;
Default:False
(Example text: 'Millega pitsat tellida ? Hea küsimus .')
    Example input (VISLCG3):
-----------------------
{'end': 7, 'sent_id': 0, 'start': 0, 'parser_out': ['\t"mis" Lga P inter rel sg kom @NN> @ADVL #1->3\r']}
{'end': 14, 'sent_id': 0, 'start': 8, 'parser_out': ['\t"pitsa" Lt S com sg part @OBJ #2->3\r']}
{'end': 22, 'sent_id': 0, 'start': 15, 'parser_out': ['\t"telli" Lda V main inf @IMV #3->0\r']}
{'end': 23, 'sent_id': 0, 'start': 22, 'parser_out': ['\t"?" Z Int CLB #4->4\r']}
{'end': 27, 'sent_id': 1, 'start': 24, 'parser_out': ['\t"hea" L0 A pos sg nom @AN> #1->2\r']}
{'end': 35, 'sent_id': 1, 'start': 28, 'parser_out': ['\t"küsimus" L0 S com sg nom @SUBJ #2->0\r']}
{'end': 36, 'sent_id': 1, 'start': 35, 'parser_out': ['\t"." Z Fst CLB #3->3\r']}
Example output:
---------------
{'sent_id': 0, 'start': 0, 'end': 7, 'parser_out': [['@NN>', 2], ['@ADVL', 2]]}
{'sent_id': 0, 'start': 8, 'end': 14, 'parser_out': [['@OBJ', 2]]}
{'sent_id': 0, 'start': 15, 'end': 22, 'parser_out': [['@IMV', -1]]}
{'sent_id': 0, 'start': 22, 'end': 23, 'parser_out': [['xxx', 2]]}
{'sent_id': 1, 'start': 24, 'end': 27, 'parser_out': [['@AN>', 1]]}
{'sent_id': 1, 'start': 28, 'end': 35, 'parser_out': [['@SUBJ', -1]]}
{'sent_id': 1, 'start': 35, 'end': 36, 'parser_out': [['xxx', 1]]} | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L74-L288 |
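A hedged sketch of calling normalise_alignments on hand-built CONLL-style alignments (the analysis lines below are fabricated, following the 10-column format the code expects); in a real pipeline the alignments would come from align_CONLL_with_Text() or align_cg3_with_Text().

from estnltk.syntax.utils import normalise_alignments

# Fabricated one-sentence CONLL input for 'Hea küsimus'.
aligns = [
    {'start': 0, 'end': 3, 'sent_id': 0,
     'parser_out': ['1\tHea\thea\tA\tA\tpos|sg|nom\t2\t@AN>\t_\t_']},
    {'start': 4, 'end': 11, 'sent_id': 0,
     'parser_out': ['2\tküsimus\tküsimus\tS\tS\tcom|sg|nom\t0\tROOT\t_\t_']},
]
normalise_alignments(aligns, data_type='conll', mark_root=True)
print(aligns[0]['parser_out'])   # -> [['@AN>', 1]]
print(aligns[1]['parser_out'])   # -> [['ROOT', -1]]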
estnltk/estnltk | estnltk/syntax/utils.py | read_text_from_cg3_file | def read_text_from_cg3_file( file_name, layer_name=LAYER_VISLCG3, **kwargs ):
''' Reads the output of VISLCG3 syntactic analysis from given file, and
returns as a Text object.
The Text object has been tokenized for paragraphs, sentences, words, and it
contains syntactic analyses aligned with word spans, in the layer *layer_name*
(by default: LAYER_VISLCG3);
Attached syntactic analyses are in the format as is the output of
utils.normalise_alignments();
Note: when loading data from https://github.com/EstSyntax/EDT corpus,
it is advisable to add flags: clean_up=True, fix_sent_tags=True,
fix_out_of_sent=True in order to ensure that well-formed data will be
read from the corpus;
Parameters
-----------
file_name : str
Name of the input file; Should contain syntactically analysed text,
following the format of the output of VISLCG3 syntactic analyser;
clean_up : bool
Optional argument specifying whether the vislcg3_syntax.cleanup_lines()
should be applied in the lines of syntactic analyses read from the
file;
Default: False
layer_name : str
Name of the Text's layer in which syntactic analyses are stored;
Defaults to 'vislcg3_syntax';
For other parameters, see optional parameters of the methods:
utils.normalise_alignments(): "rep_miss_w_dummy", "fix_selfrefs",
"keep_old", "mark_root";
vislcg3_syntax.align_cg3_with_Text(): "check_tokens", "add_word_ids";
vislcg3_syntax.cleanup_lines(): "remove_caps", "remove_clo",
"double_quotes", "fix_sent_tags"
'''
clean_up = False
for argName, argVal in kwargs.items():
if argName in ['clean_up', 'cleanup'] and argVal in [True, False]:
# Clean up lines
clean_up = argVal
# 1) Load vislcg3 analysed text from file
cg3_lines = []
in_f = codecs.open(file_name, mode='r', encoding='utf-8')
for line in in_f:
# Skip comment lines
if line.startswith('#'):
continue
cg3_lines.append( line.rstrip() )
in_f.close()
# Clean up lines of syntactic analyses (if requested)
if clean_up:
cg3_lines = cleanup_lines( cg3_lines, **kwargs )
# 2) Extract sentences and word tokens
sentences = []
sentence = []
for i, line in enumerate( cg3_lines ):
if line == '"<s>"':
if sentence:
print('(!) Sentence begins before previous ends at line: '+str(i), \
file=sys.stderr)
sentence = []
elif pat_double_quoted.match( line ) and line != '"<s>"' and line != '"</s>"':
token_match = pat_cg3_word_token.match( line )
if token_match:
line = token_match.group(1)
else:
raise Exception('(!) Unexpected token format: ', line)
sentence.append( line )
elif line == '"</s>"':
if not sentence:
print('(!) Empty sentence at line: '+str(i), \
file=sys.stderr)
# (!) Use double space instead of single space in order to distinguish
# word-tokenizing space from the single space in the multiwords
# (e.g. 'Rio de Janeiro' as a single word);
sentences.append( ' '.join(sentence) )
sentence = []
# 3) Construct the estnltk's Text
kwargs4text = {
# Use custom tokenization utils in order to preserve exactly the same
# tokenization as was in the input;
"word_tokenizer": RegexpTokenizer(" ", gaps=True),
"sentence_tokenizer": LineTokenizer()
}
from estnltk.text import Text
text = Text( '\n'.join(sentences), **kwargs4text )
# Tokenize up to the words layer
text.tokenize_words()
# 4) Align syntactic analyses with the Text
alignments = align_cg3_with_Text( cg3_lines, text, **kwargs )
normalise_alignments( alignments, data_type=VISLCG3_DATA, **kwargs )
# Attach alignments to the text
text[ layer_name ] = alignments
return text | python | def read_text_from_cg3_file( file_name, layer_name=LAYER_VISLCG3, **kwargs ):
''' Reads the output of VISLCG3 syntactic analysis from given file, and
returns as a Text object.
The Text object has been tokenized for paragraphs, sentences, words, and it
contains syntactic analyses aligned with word spans, in the layer *layer_name*
(by default: LAYER_VISLCG3);
Attached syntactic analyses are in the format as is the output of
utils.normalise_alignments();
Note: when loading data from https://github.com/EstSyntax/EDT corpus,
it is advisable to add flags: clean_up=True, fix_sent_tags=True,
fix_out_of_sent=True in order to ensure that well-formed data will be
read from the corpus;
Parameters
-----------
file_name : str
Name of the input file; Should contain syntactically analysed text,
following the format of the output of VISLCG3 syntactic analyser;
clean_up : bool
Optional argument specifying whether the vislcg3_syntax.cleanup_lines()
should be applied in the lines of syntactic analyses read from the
file;
Default: False
layer_name : str
Name of the Text's layer in which syntactic analyses are stored;
Defaults to 'vislcg3_syntax';
For other parameters, see optional parameters of the methods:
utils.normalise_alignments(): "rep_miss_w_dummy", "fix_selfrefs",
"keep_old", "mark_root";
vislcg3_syntax.align_cg3_with_Text(): "check_tokens", "add_word_ids";
vislcg3_syntax.cleanup_lines(): "remove_caps", "remove_clo",
"double_quotes", "fix_sent_tags"
'''
clean_up = False
for argName, argVal in kwargs.items():
if argName in ['clean_up', 'cleanup'] and argVal in [True, False]:
# Clean up lines
clean_up = argVal
# 1) Load vislcg3 analysed text from file
cg3_lines = []
in_f = codecs.open(file_name, mode='r', encoding='utf-8')
for line in in_f:
# Skip comment lines
if line.startswith('#'):
continue
cg3_lines.append( line.rstrip() )
in_f.close()
# Clean up lines of syntactic analyses (if requested)
if clean_up:
cg3_lines = cleanup_lines( cg3_lines, **kwargs )
# 2) Extract sentences and word tokens
sentences = []
sentence = []
for i, line in enumerate( cg3_lines ):
if line == '"<s>"':
if sentence:
print('(!) Sentence begins before previous ends at line: '+str(i), \
file=sys.stderr)
sentence = []
elif pat_double_quoted.match( line ) and line != '"<s>"' and line != '"</s>"':
token_match = pat_cg3_word_token.match( line )
if token_match:
line = token_match.group(1)
else:
raise Exception('(!) Unexpected token format: ', line)
sentence.append( line )
elif line == '"</s>"':
if not sentence:
print('(!) Empty sentence at line: '+str(i), \
file=sys.stderr)
# (!) Use double space instead of single space in order to distinguish
# word-tokenizing space from the single space in the multiwords
# (e.g. 'Rio de Janeiro' as a single word);
sentences.append( ' '.join(sentence) )
sentence = []
# 3) Construct the estnltk's Text
kwargs4text = {
# Use custom tokenization utils in order to preserve exactly the same
# tokenization as was in the input;
"word_tokenizer": RegexpTokenizer(" ", gaps=True),
"sentence_tokenizer": LineTokenizer()
}
from estnltk.text import Text
text = Text( '\n'.join(sentences), **kwargs4text )
# Tokenize up to the words layer
text.tokenize_words()
# 4) Align syntactic analyses with the Text
alignments = align_cg3_with_Text( cg3_lines, text, **kwargs )
normalise_alignments( alignments, data_type=VISLCG3_DATA, **kwargs )
# Attach alignments to the text
text[ layer_name ] = alignments
return text | Reads the output of VISLCG3 syntactic analysis from given file, and
returns as a Text object.
The Text object has been tokenized for paragraphs, sentences, words, and it
contains syntactic analyses aligned with word spans, in the layer *layer_name*
(by default: LAYER_VISLCG3);
Attached syntactic analyses are in the format as is the output of
utils.normalise_alignments();
Note: when loading data from https://github.com/EstSyntax/EDT corpus,
it is advisable to add flags: clean_up=True, fix_sent_tags=True,
fix_out_of_sent=True in order to ensure that well-formed data will be
read from the corpus;
Parameters
-----------
file_name : str
Name of the input file; Should contain syntactically analysed text,
following the format of the output of VISLCG3 syntactic analyser;
clean_up : bool
Optional argument specifying whether the vislcg3_syntax.cleanup_lines()
should be applied in the lines of syntactic analyses read from the
file;
Default: False
layer_name : str
Name of the Text's layer in which syntactic analyses are stored;
Defaults to 'vislcg3_syntax';
For other parameters, see optional parameters of the methods:
utils.normalise_alignments(): "rep_miss_w_dummy", "fix_selfrefs",
"keep_old", "mark_root";
vislcg3_syntax.align_cg3_with_Text(): "check_tokens", "add_word_ids";
vislcg3_syntax.cleanup_lines(): "remove_caps", "remove_clo",
"double_quotes", "fix_sent_tags" | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L300-L403 |
estnltk/estnltk | estnltk/syntax/utils.py | read_text_from_conll_file | def read_text_from_conll_file( file_name, layer_name=LAYER_CONLL, **kwargs ):
''' Reads the CONLL format syntactic analysis from given file, and returns as
a Text object.
The Text object has been tokenized for paragraphs, sentences, words, and it
contains syntactic analyses aligned with word spans, in the layer *layer_name*
(by default: LAYER_CONLL);
Attached syntactic analyses are in the format as is the output of
utils.normalise_alignments();
Parameters
-----------
file_name : str
Name of the input file; Should contain syntactically analysed text,
following the CONLL format;
layer_name : str
Name of the Text's layer in which syntactic analyses are stored;
Defaults to 'conll_syntax';
For other parameters, see optional parameters of the methods:
utils.normalise_alignments(): "rep_miss_w_dummy", "fix_selfrefs",
"keep_old", "mark_root";
maltparser_support.align_CONLL_with_Text(): "check_tokens", "add_word_ids";
'''
# 1) Load conll analysed text from file
conll_lines = []
in_f = codecs.open(file_name, mode='r', encoding='utf-8')
for line in in_f:
# Skip comment lines
if line.startswith('#'):
continue
conll_lines.append( line.rstrip() )
in_f.close()
# 2) Extract sentences and word tokens
sentences = []
sentence = []
for i, line in enumerate( conll_lines ):
if len(line) > 0 and '\t' in line:
features = line.split('\t')
if len(features) != 10:
                raise Exception(' In file '+file_name+', line '+str(i)+\
' with unexpected format: "'+line+'" ')
word_id = features[0]
token = features[1]
sentence.append( token )
elif len(line)==0 or re.match('^\s+$', line):
# End of a sentence
if sentence:
# (!) Use double space instead of single space in order to distinguish
# word-tokenizing space from the single space in the multiwords
# (e.g. 'Rio de Janeiro' as a single word);
sentences.append( ' '.join(sentence) )
sentence = []
if sentence:
sentences.append( ' '.join(sentence) )
# 3) Construct the estnltk's Text
kwargs4text = {
# Use custom tokenization utils in order to preserve exactly the same
# tokenization as was in the input;
"word_tokenizer": RegexpTokenizer(" ", gaps=True),
"sentence_tokenizer": LineTokenizer()
}
from estnltk.text import Text
text = Text( '\n'.join(sentences), **kwargs4text )
# Tokenize up to the words layer
text.tokenize_words()
# 4) Align syntactic analyses with the Text
alignments = align_CONLL_with_Text( conll_lines, text, None, **kwargs )
normalise_alignments( alignments, data_type=CONLL_DATA, **kwargs )
# Attach alignments to the text
text[ layer_name ] = alignments
return text | python | def read_text_from_conll_file( file_name, layer_name=LAYER_CONLL, **kwargs ):
''' Reads the CONLL format syntactic analysis from given file, and returns as
a Text object.
The Text object has been tokenized for paragraphs, sentences, words, and it
contains syntactic analyses aligned with word spans, in the layer *layer_name*
(by default: LAYER_CONLL);
Attached syntactic analyses are in the format as is the output of
utils.normalise_alignments();
Parameters
-----------
file_name : str
Name of the input file; Should contain syntactically analysed text,
following the CONLL format;
layer_name : str
Name of the Text's layer in which syntactic analyses are stored;
Defaults to 'conll_syntax';
For other parameters, see optional parameters of the methods:
utils.normalise_alignments(): "rep_miss_w_dummy", "fix_selfrefs",
"keep_old", "mark_root";
maltparser_support.align_CONLL_with_Text(): "check_tokens", "add_word_ids";
'''
# 1) Load conll analysed text from file
conll_lines = []
in_f = codecs.open(file_name, mode='r', encoding='utf-8')
for line in in_f:
# Skip comment lines
if line.startswith('#'):
continue
conll_lines.append( line.rstrip() )
in_f.close()
# 2) Extract sentences and word tokens
sentences = []
sentence = []
for i, line in enumerate( conll_lines ):
if len(line) > 0 and '\t' in line:
features = line.split('\t')
if len(features) != 10:
                raise Exception(' In file '+file_name+', line '+str(i)+\
' with unexpected format: "'+line+'" ')
word_id = features[0]
token = features[1]
sentence.append( token )
elif len(line)==0 or re.match('^\s+$', line):
# End of a sentence
if sentence:
# (!) Use double space instead of single space in order to distinguish
# word-tokenizing space from the single space in the multiwords
# (e.g. 'Rio de Janeiro' as a single word);
sentences.append( ' '.join(sentence) )
sentence = []
if sentence:
sentences.append( ' '.join(sentence) )
# 3) Construct the estnltk's Text
kwargs4text = {
# Use custom tokenization utils in order to preserve exactly the same
# tokenization as was in the input;
"word_tokenizer": RegexpTokenizer(" ", gaps=True),
"sentence_tokenizer": LineTokenizer()
}
from estnltk.text import Text
text = Text( '\n'.join(sentences), **kwargs4text )
# Tokenize up to the words layer
text.tokenize_words()
# 4) Align syntactic analyses with the Text
alignments = align_CONLL_with_Text( conll_lines, text, None, **kwargs )
normalise_alignments( alignments, data_type=CONLL_DATA, **kwargs )
# Attach alignments to the text
text[ layer_name ] = alignments
    return text | Reads the CONLL format syntactic analysis from the given file, and returns it as
a Text object.
The Text object has been tokenized for paragraphs, sentences, words, and it
contains syntactic analyses aligned with word spans, in the layer *layer_name*
(by default: LAYER_CONLL);
        Attached syntactic analyses are in the same format as the output of
utils.normalise_alignments();
Parameters
-----------
file_name : str
Name of the input file; Should contain syntactically analysed text,
following the CONLL format;
layer_name : str
Name of the Text's layer in which syntactic analyses are stored;
Defaults to 'conll_syntax';
For other parameters, see optional parameters of the methods:
utils.normalise_alignments(): "rep_miss_w_dummy", "fix_selfrefs",
"keep_old", "mark_root";
maltparser_support.align_CONLL_with_Text(): "check_tokens", "add_word_ids"; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L406-L484 |
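A minimal usage sketch for read_text_from_conll_file(). The file name 'example.conll' is an assumption for illustration; the layer name 'conll_syntax' is the documented default, and each entry of that layer is aligned with one word of the resulting Text:

from estnltk.syntax.utils import read_text_from_conll_file

# Hypothetical input file containing CONLL-format syntactic analyses.
text = read_text_from_conll_file('example.conll')
# Walk over words and their aligned syntactic analyses.
for word, analysis in zip(text.word_texts, text['conll_syntax']):
    print(word, analysis)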
estnltk/estnltk | estnltk/syntax/utils.py | build_trees_from_sentence | def build_trees_from_sentence( sentence, syntactic_relations, layer=LAYER_VISLCG3, \
sentence_id=0, **kwargs ):
''' Given a sentence ( a list of EstNLTK's word tokens ), and a list of
dependency syntactic relations ( output of normalise_alignments() ),
builds trees ( estnltk.syntax.utils.Tree objects ) from the sentence,
and returns as a list of Trees (roots of trees).
Note that there is one-to-many correspondence between EstNLTK's
sentences and dependency syntactic trees, so the resulting list can
contain more than one tree (root);
'''
trees_of_sentence = []
nodes = [ -1 ]
while( len(nodes) > 0 ):
node = nodes.pop(0)
# Find tokens in the sentence that take this node as their parent
for i, syntax_token in enumerate( syntactic_relations ):
parents = [ o[1] for o in syntax_token[PARSER_OUT] ]
# There should be only one parent node; If there is more than one, take the
# first node;
parent = parents[0]
if parent == node:
labels = [ o[0] for o in syntax_token[PARSER_OUT] ]
estnltk_token = sentence[i]
tree1 = Tree( estnltk_token, i, sentence_id, labels, parser=layer )
if INIT_PARSER_OUT in syntax_token:
tree1.parser_output = syntax_token[INIT_PARSER_OUT]
tree1.syntax_token = syntax_token
if parent == -1:
# Add the root node
trees_of_sentence.append( tree1 )
elif parent == i:
# If, for some strange reason, the node is unnormalised and is still
# linked to itself, add it as a singleton tree
trees_of_sentence.append( tree1 )
else:
# For each root node, attempt to add the child
for root_node in trees_of_sentence:
root_node.add_child_to_subtree( parent, tree1 )
if parent != i:
# Add the current node as a future parent to be examined
nodes.append( i )
return trees_of_sentence | python | def build_trees_from_sentence( sentence, syntactic_relations, layer=LAYER_VISLCG3, \
sentence_id=0, **kwargs ):
''' Given a sentence ( a list of EstNLTK's word tokens ), and a list of
dependency syntactic relations ( output of normalise_alignments() ),
builds trees ( estnltk.syntax.utils.Tree objects ) from the sentence,
        and returns them as a list of Trees (roots of trees).
        Note that there is a one-to-many correspondence between EstNLTK's
sentences and dependency syntactic trees, so the resulting list can
contain more than one tree (root);
'''
trees_of_sentence = []
nodes = [ -1 ]
while( len(nodes) > 0 ):
node = nodes.pop(0)
# Find tokens in the sentence that take this node as their parent
for i, syntax_token in enumerate( syntactic_relations ):
parents = [ o[1] for o in syntax_token[PARSER_OUT] ]
# There should be only one parent node; If there is more than one, take the
# first node;
parent = parents[0]
if parent == node:
labels = [ o[0] for o in syntax_token[PARSER_OUT] ]
estnltk_token = sentence[i]
tree1 = Tree( estnltk_token, i, sentence_id, labels, parser=layer )
if INIT_PARSER_OUT in syntax_token:
tree1.parser_output = syntax_token[INIT_PARSER_OUT]
tree1.syntax_token = syntax_token
if parent == -1:
# Add the root node
trees_of_sentence.append( tree1 )
elif parent == i:
# If, for some strange reason, the node is unnormalised and is still
# linked to itself, add it as a singleton tree
trees_of_sentence.append( tree1 )
else:
# For each root node, attempt to add the child
for root_node in trees_of_sentence:
root_node.add_child_to_subtree( parent, tree1 )
if parent != i:
# Add the current node as a future parent to be examined
nodes.append( i )
return trees_of_sentence | Given a sentence ( a list of EstNLTK's word tokens ), and a list of
dependency syntactic relations ( output of normalise_alignments() ),
builds trees ( estnltk.syntax.utils.Tree objects ) from the sentence,
        and returns them as a list of Trees (roots of trees).
        Note that there is a one-to-many correspondence between EstNLTK's
sentences and dependency syntactic trees, so the resulting list can
contain more than one tree (root); | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L874-L916 |
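A sketch of calling build_trees_from_sentence() directly; in practice build_trees_from_text() (next row) does this per-sentence grouping for you. Here `text` is assumed to already carry a 'conll_syntax' layer (e.g. the Text from the previous example), and the WORDS / SENTENCES constants are assumed to come from estnltk.names, as in the code above:

from estnltk.names import WORDS, SENTENCES
from estnltk.syntax.utils import build_trees_from_sentence

# Words of the first sentence, and the analyses aligned with them
# (the syntactic layer holds one entry per word, in textual order).
first_sent_words = list(text.divide(layer=WORDS, by=SENTENCES))[0]
first_sent_relations = text['conll_syntax'][:len(first_sent_words)]

roots = build_trees_from_sentence(first_sent_words, first_sent_relations,
                                  layer='conll_syntax', sentence_id=0)
# One sentence may yield more than one root Tree.
print(len(roots))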
estnltk/estnltk | estnltk/syntax/utils.py | build_trees_from_text | def build_trees_from_text( text, layer, **kwargs ):
''' Given a text object and the name of the layer where dependency syntactic
relations are stored, builds trees ( estnltk.syntax.utils.Tree objects )
        from all the sentences of the text and returns them as a list of Trees.
Uses the method build_trees_from_sentence() for acquiring trees of each
sentence;
        Note that there is a one-to-many correspondence between EstNLTK's sentences
and dependency syntactic trees: one sentence can evoke multiple trees;
'''
from estnltk.text import Text
assert isinstance(text, Text), \
'(!) Unexpected text argument! Should be Estnltk\'s Text object.'
assert layer in text, \
'(!) The layer '+str(layer)+' is missing from the input text.'
text_sentences = list( text.divide( layer=WORDS, by=SENTENCES ) )
all_sentence_trees = [] # Collected sentence trees
prev_sent_id = -1
# (!) Note: if the Text object has been split into smaller Texts with split_by(),
# SENT_ID-s still refer to old text, and thus are not useful as indices
# anymore;
# Therefore, we also use another variable -- norm_prev_sent_id -- that always
# counts sentences starting from 0, and use SENT_ID / prev_sent_id only for
# deciding whether one sentence ends and another begins;
norm_prev_sent_id = -1
current_sentence = []
k = 0
while k < len( text[layer] ):
node_desc = text[layer][k]
if prev_sent_id != node_desc[SENT_ID] and current_sentence:
norm_prev_sent_id += 1
# If the index of the sentence has changed, and we have collected a sentence,
# then build tree(s) from this sentence
assert norm_prev_sent_id<len(text_sentences), '(!) Sentence with the index '+str(norm_prev_sent_id)+\
' not found from the input text.'
sentence = text_sentences[norm_prev_sent_id]
trees_of_sentence = \
build_trees_from_sentence( sentence, current_sentence, layer, sentence_id=norm_prev_sent_id, \
**kwargs )
# Record trees constructed from this sentence
all_sentence_trees.extend( trees_of_sentence )
# Reset the sentence collector
current_sentence = []
# Collect sentence
current_sentence.append( node_desc )
prev_sent_id = node_desc[SENT_ID]
k += 1
if current_sentence:
norm_prev_sent_id += 1
assert norm_prev_sent_id<len(text_sentences), '(!) Sentence with the index '+str(norm_prev_sent_id)+\
' not found from the input text.'
sentence = text_sentences[norm_prev_sent_id]
# If we have collected a sentence, then build tree(s) from this sentence
trees_of_sentence = \
build_trees_from_sentence( sentence, current_sentence, layer, sentence_id=norm_prev_sent_id, \
**kwargs )
# Record trees constructed from this sentence
all_sentence_trees.extend( trees_of_sentence )
return all_sentence_trees | python | def build_trees_from_text( text, layer, **kwargs ):
''' Given a text object and the name of the layer where dependency syntactic
relations are stored, builds trees ( estnltk.syntax.utils.Tree objects )
        from all the sentences of the text and returns them as a list of Trees.
Uses the method build_trees_from_sentence() for acquiring trees of each
sentence;
        Note that there is a one-to-many correspondence between EstNLTK's sentences
and dependency syntactic trees: one sentence can evoke multiple trees;
'''
from estnltk.text import Text
assert isinstance(text, Text), \
'(!) Unexpected text argument! Should be Estnltk\'s Text object.'
assert layer in text, \
'(!) The layer '+str(layer)+' is missing from the input text.'
text_sentences = list( text.divide( layer=WORDS, by=SENTENCES ) )
all_sentence_trees = [] # Collected sentence trees
prev_sent_id = -1
# (!) Note: if the Text object has been split into smaller Texts with split_by(),
# SENT_ID-s still refer to old text, and thus are not useful as indices
# anymore;
# Therefore, we also use another variable -- norm_prev_sent_id -- that always
# counts sentences starting from 0, and use SENT_ID / prev_sent_id only for
# deciding whether one sentence ends and another begins;
norm_prev_sent_id = -1
current_sentence = []
k = 0
while k < len( text[layer] ):
node_desc = text[layer][k]
if prev_sent_id != node_desc[SENT_ID] and current_sentence:
norm_prev_sent_id += 1
# If the index of the sentence has changed, and we have collected a sentence,
# then build tree(s) from this sentence
assert norm_prev_sent_id<len(text_sentences), '(!) Sentence with the index '+str(norm_prev_sent_id)+\
' not found from the input text.'
sentence = text_sentences[norm_prev_sent_id]
trees_of_sentence = \
build_trees_from_sentence( sentence, current_sentence, layer, sentence_id=norm_prev_sent_id, \
**kwargs )
# Record trees constructed from this sentence
all_sentence_trees.extend( trees_of_sentence )
# Reset the sentence collector
current_sentence = []
# Collect sentence
current_sentence.append( node_desc )
prev_sent_id = node_desc[SENT_ID]
k += 1
if current_sentence:
norm_prev_sent_id += 1
assert norm_prev_sent_id<len(text_sentences), '(!) Sentence with the index '+str(norm_prev_sent_id)+\
' not found from the input text.'
sentence = text_sentences[norm_prev_sent_id]
# If we have collected a sentence, then build tree(s) from this sentence
trees_of_sentence = \
build_trees_from_sentence( sentence, current_sentence, layer, sentence_id=norm_prev_sent_id, \
**kwargs )
# Record trees constructed from this sentence
all_sentence_trees.extend( trees_of_sentence )
return all_sentence_trees | Given a text object and the name of the layer where dependency syntactic
relations are stored, builds trees ( estnltk.syntax.utils.Tree objects )
        from all the sentences of the text and returns them as a list of Trees.
Uses the method build_trees_from_sentence() for acquiring trees of each
sentence;
        Note that there is a one-to-many correspondence between EstNLTK's sentences
and dependency syntactic trees: one sentence can evoke multiple trees; | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L920-L979 |
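A short sketch of the usual entry point: build the trees of a whole parsed Text and walk over them. As before, `text` is assumed to carry a 'conll_syntax' layer; the attribute names `token`, `labels` and `children` are assumptions inferred from the Tree constructor calls and attributes used in the code above:

from estnltk.syntax.utils import build_trees_from_text

trees = build_trees_from_text(text, layer='conll_syntax')
for root in trees:
    # Each root is an estnltk.syntax.utils.Tree built from one sentence.
    print(root.labels, root.token)
    # `children` is None until children are attached, hence the fallback.
    for child in (root.children or []):
        print('    child:', child.labels, child.token)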
estnltk/estnltk | estnltk/syntax/utils.py | Tree.add_child_to_self | def add_child_to_self( self, tree ):
        ''' Adds the given *tree* as a child of the current tree. '''
assert isinstance(tree, Tree), \
            '(!) Unexpected type of argument! Should be Tree.'
if (not self.children):
self.children = []
tree.parent = self
self.children.append(tree) | python | def add_child_to_self( self, tree ):
        ''' Adds the given *tree* as a child of the current tree. '''
assert isinstance(tree, Tree), \
            '(!) Unexpected type of argument! Should be Tree.'
if (not self.children):
self.children = []
tree.parent = self
        self.children.append(tree) | Adds the given *tree* as a child of the current tree. | https://github.com/estnltk/estnltk/blob/28ae334a68a0673072febc318635f04da0dcc54a/estnltk/syntax/utils.py#L573-L580 |
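An illustration of Tree.add_child_to_self(): manually linking two Tree objects. Here `trees` is assumed to be the list returned by build_trees_from_text() in the previous example; mutating real parse trees this way is for illustration only:

# Assumption: `trees` comes from the build_trees_from_text() sketch above.
if len(trees) >= 2:
    main_root, orphan = trees[0], trees[1]
    main_root.add_child_to_self(orphan)
    # The child's parent pointer is set and it is appended to children.
    assert orphan.parent is main_root
    assert orphan in main_root.children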