code (string, lengths 75 - 104k) | docstring (string, lengths 1 - 46.9k) | text (string, lengths 164 - 112k) |
---|---|---|
def rm_first_of_dup_args(self) -> None:
"""Eliminate duplicate arguments by removing the first occurrences.
Remove the first occurrences of duplicate arguments, regardless of
their value. Result of the rendered wikitext should remain the same.
Warning: Some meaningful data may be removed from wikitext.
Also see `rm_dup_args_safe` function.
"""
names = set() # type: set
for a in reversed(self.arguments):
name = a.name.strip(WS)
if name in names:
del a[:len(a.string)]
else:
names.add(name) | Eliminate duplicate arguments by removing the first occurrences.
Remove the first occurrences of duplicate arguments, regardless of
their value. Result of the rendered wikitext should remain the same.
Warning: Some meaningful data may be removed from wikitext.
Also see `rm_dup_args_safe` function. | Below is the instruction that describes the task:
### Input:
Eliminate duplicate arguments by removing the first occurrences.
Remove the first occurrences of duplicate arguments, regardless of
their value. Result of the rendered wikitext should remain the same.
Warning: Some meaningful data may be removed from wikitext.
Also see `rm_dup_args_safe` function.
### Response:
def rm_first_of_dup_args(self) -> None:
"""Eliminate duplicate arguments by removing the first occurrences.
Remove the first occurrences of duplicate arguments, regardless of
their value. Result of the rendered wikitext should remain the same.
Warning: Some meaningful data may be removed from wikitext.
Also see `rm_dup_args_safe` function.
"""
names = set() # type: set
for a in reversed(self.arguments):
name = a.name.strip(WS)
if name in names:
del a[:len(a.string)]
else:
names.add(name) |
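As a quick illustration of the pattern used above (iterate in reverse, keep the last occurrence of each argument name, drop the earlier ones), here is a minimal standalone sketch on a plain list of (name, value) pairs; the template/`arguments` API from the surrounding code is assumed and not reproduced.

```python
def keep_last_of_dup_args(args):
    """Drop earlier duplicates, keeping only the last occurrence of each name."""
    seen = set()
    kept = []
    for name, value in reversed(args):
        if name.strip() in seen:
            continue  # an earlier (first) occurrence of a duplicate: drop it
        seen.add(name.strip())
        kept.append((name, value))
    kept.reverse()  # restore the original order of the survivors
    return kept


print(keep_last_of_dup_args([("a", "1"), ("b", "2"), ("a", "3")]))
# [('b', '2'), ('a', '3')]
```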
def bool(cls, must=None, should=None, must_not=None, minimum_number_should_match=None, boost=None):
'''
http://www.elasticsearch.org/guide/reference/query-dsl/bool-query.html
A query that matches documents matching boolean combinations of other queries. The bool query maps to Lucene BooleanQuery. It is built using one or more boolean clauses, each clause with a typed occurrence. The occurrence types are:
'must' - The clause(query) must appear in matching documents.
'should' - The clause(query) should appear in the matching document. In a boolean query with no 'must' clauses, one or more 'should' clauses must match a document. The minimum number of 'should' clauses to match can be set using the 'minimum_number_should_match' parameter.
'must_not' - The clause(query) must not appear in the matching documents. Note that it is not possible to search on documents that consist only of 'must_not' clauses.
'minimum_number_should_match' - Minimum number of 'should' clauses that must match a document
'boost' - boost value
> term = ElasticQuery()
> term.term(user='kimchy')
> query = ElasticQuery()
> query.bool(should=term)
> query.query()
{ 'bool' : { 'should' : { 'term' : {'user':'kimchy'}}}}
'''
instance = cls(bool={})
if must is not None:
instance['bool']['must'] = must
if should is not None:
instance['bool']['should'] = should
if must_not is not None:
instance['bool']['must_not'] = must_not
if minimum_number_should_match is not None:
instance['bool']['minimum_number_should_match'] = minimum_number_should_match
if boost is not None:
instance['bool']['boost'] = boost
return instance | http://www.elasticsearch.org/guide/reference/query-dsl/bool-query.html
A query that matches documents matching boolean combinations of other queries. The bool query maps to Lucene BooleanQuery. It is built using one or more boolean clauses, each clause with a typed occurrence. The occurrence types are:
'must' - The clause(query) must appear in matching documents.
'should' - The clause(query) should appear in the matching document. In a boolean query with no 'must' clauses, one or more 'should' clauses must match a document. The minimum number of 'should' clauses to match can be set using the 'minimum_number_should_match' parameter.
'must_not' - The clause(query) must not appear in the matching documents. Note that it is not possible to search on documents that consist only of 'must_not' clauses.
'minimum_number_should_match' - Minimum number of 'should' clauses that must match a document
'boost' - boost value
> term = ElasticQuery()
> term.term(user='kimchy')
> query = ElasticQuery()
> query.bool(should=term)
> query.query()
{ 'bool' : { 'should' : { 'term' : {'user':'kimchy'}}}} | Below is the instruction that describes the task:
### Input:
http://www.elasticsearch.org/guide/reference/query-dsl/bool-query.html
A query that matches documents matching boolean combinations of other queries. The bool query maps to Lucene BooleanQuery. It is built using one or more boolean clauses, each clause with a typed occurrence. The occurrence types are:
'must' - The clause(query) must appear in matching documents.
'should' - The clause(query) should appear in the matching document. In a boolean query with no 'must' clauses, one or more 'should' clauses must match a document. The minimum number of 'should' clauses to match can be set using the 'minimum_number_should_match' parameter.
'must_not' - The clause(query) must not appear in the matching documents. Note that it is not possible to search on documents that consist only of 'must_not' clauses.
'minimum_number_should_match' - Minimum number of 'should' clauses that must match a document
'boost' - boost value
> term = ElasticQuery()
> term.term(user='kimchy')
> query = ElasticQuery()
> query.bool(should=term)
> query.query()
{ 'bool' : { 'should' : { 'term' : {'user':'kimchy'}}}}
### Response:
def bool(cls, must=None, should=None, must_not=None, minimum_number_should_match=None, boost=None):
'''
http://www.elasticsearch.org/guide/reference/query-dsl/bool-query.html
A query that matches documents matching boolean combinations of other queries. The bool query maps to Lucene BooleanQuery. It is built using one or more boolean clauses, each clause with a typed occurrence. The occurrence types are:
'must' - The clause(query) must appear in matching documents.
'should' - The clause(query) should appear in the matching document. In a boolean query with no 'must' clauses, one or more 'should' clauses must match a document. The minimum number of 'should' clauses to match can be set using the 'minimum_number_should_match' parameter.
'must_not' - The clause(query) must not appear in the matching documents. Note that it is not possible to search on documents that consist only of 'must_not' clauses.
'minimum_number_should_match' - Minimum number of 'should' clauses that must match a document
'boost' - boost value
> term = ElasticQuery()
> term.term(user='kimchy')
> query = ElasticQuery()
> query.bool(should=term)
> query.query()
{ 'bool' : { 'should' : { 'term' : {'user':'kimchy'}}}}
'''
instance = cls(bool={})
if must is not None:
instance['bool']['must'] = must
if should is not None:
instance['bool']['should'] = should
if must_not is not None:
instance['bool']['must_not'] = must_not
if minimum_number_should_match is not None:
instance['bool']['minimum_number_should_match'] = minimum_number_should_match
if boost is not None:
instance['bool']['boost'] = boost
return instance |
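The classmethod above only sets the keys that were actually passed. The same construction on a plain dict, as a minimal sketch (the `ElasticQuery` base class is assumed and not reproduced here):

```python
def bool_query(must=None, should=None, must_not=None,
               minimum_number_should_match=None, boost=None):
    """Build the same {'bool': {...}} structure as the classmethod above."""
    body = {}
    for key, value in (("must", must),
                       ("should", should),
                       ("must_not", must_not),
                       ("minimum_number_should_match", minimum_number_should_match),
                       ("boost", boost)):
        if value is not None:
            body[key] = value
    return {"bool": body}


print(bool_query(should={"term": {"user": "kimchy"}}))
# {'bool': {'should': {'term': {'user': 'kimchy'}}}}
```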
def copy(self, klass=_x):
"""A new chain beginning with the current chain tokens and argument.
"""
chain = super().copy()
new_chain = klass(chain._args[0])
new_chain._tokens = [[
chain.compose, [], {},
]]
return new_chain | A new chain beginning with the current chain tokens and argument. | Below is the instruction that describes the task:
### Input:
A new chain beginning with the current chain tokens and argument.
### Response:
def copy(self, klass=_x):
"""A new chain beginning with the current chain tokens and argument.
"""
chain = super().copy()
new_chain = klass(chain._args[0])
new_chain._tokens = [[
chain.compose, [], {},
]]
return new_chain |
def cql_query_with_prepare(query, statement_name, statement_arguments, callback_errors=None, contact_points=None,
port=None, cql_user=None, cql_pass=None, **kwargs):
'''
Run a query on a Cassandra cluster and return the results as a list of dictionaries.
This function should not be used asynchronously for SELECTs -- it will not
return anything and we don't currently have a mechanism for handling a future
that will return results.
:param query: The query to execute.
:type query: str
:param statement_name: Name to assign the prepared statement in the __context__ dictionary
:type statement_name: str
:param statement_arguments: Bind parameters for the SQL statement
:type statement_arguments: list[str]
:param async: Run this query in asynchronous mode
:type async: bool
:param callback_errors: Function to call after query runs if there is an error
:type callback_errors: Function callable
:param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs.
:type contact_points: str | list[str]
:param cql_user: The Cassandra user if authentication is turned on.
:type cql_user: str
:param cql_pass: The Cassandra user password if authentication is turned on.
:type cql_pass: str
:param port: The Cassandra cluster port, defaults to None.
:type port: int
:param params: The parameters for the query, optional.
:type params: str
:return: A list of dictionaries built from the return values of the query
:rtype: list[dict]
CLI Example:
.. code-block:: bash
# Insert data asynchronously
salt this-node cassandra_cql.cql_query_with_prepare "name_insert" "INSERT INTO USERS (first_name, last_name) VALUES (?, ?)" \
statement_arguments=['John','Doe'], asynchronous=True
# Select data, should not be asynchronous because there is not currently a facility to return data from a future
salt this-node cassandra_cql.cql_query_with_prepare "name_select" "SELECT * FROM USERS WHERE first_name=?" \
statement_arguments=['John']
'''
# Backward-compatibility with Python 3.7: "async" is a reserved word
asynchronous = kwargs.get('async', False)
try:
cluster, session = _connect(contact_points=contact_points, port=port,
cql_user=cql_user, cql_pass=cql_pass)
except CommandExecutionError:
log.critical('Could not get Cassandra cluster session.')
raise
except BaseException as e:
log.critical('Unexpected error while getting Cassandra cluster session: %s', e)
raise
if statement_name not in __context__['cassandra_cql_prepared']:
try:
bound_statement = session.prepare(query)
__context__['cassandra_cql_prepared'][statement_name] = bound_statement
except BaseException as e:
log.critical('Unexpected error while preparing SQL statement: %s', e)
raise
else:
bound_statement = __context__['cassandra_cql_prepared'][statement_name]
session.row_factory = dict_factory
ret = []
try:
if asynchronous:
future_results = session.execute_async(bound_statement.bind(statement_arguments))
# future_results.add_callbacks(_async_log_errors)
else:
results = session.execute(bound_statement.bind(statement_arguments))
except BaseException as e:
log.error('Failed to execute query: %s\n reason: %s', query, e)
msg = "ERROR: Cassandra query failed: {0} reason: {1}".format(query, e)
raise CommandExecutionError(msg)
if not asynchronous and results:
for result in results:
values = {}
for key, value in six.iteritems(result):
# Salt won't return dictionaries with odd types like uuid.UUID
if not isinstance(value, six.text_type):
# Must support Cassandra collection types.
# Namely, Cassandra's set, list, and map collections.
if not isinstance(value, (set, list, dict)):
value = six.text_type(value)
values[key] = value
ret.append(values)
# If this was a synchronous call, then we either have an empty list
# because there was no return, or we have a return
# If this was an asynchronous call we only return the empty list
return ret | Run a query on a Cassandra cluster and return the results as a list of dictionaries.
This function should not be used asynchronously for SELECTs -- it will not
return anything and we don't currently have a mechanism for handling a future
that will return results.
:param query: The query to execute.
:type query: str
:param statement_name: Name to assign the prepared statement in the __context__ dictionary
:type statement_name: str
:param statement_arguments: Bind parameters for the SQL statement
:type statement_arguments: list[str]
:param async: Run this query in asynchronous mode
:type async: bool
:param callback_errors: Function to call after query runs if there is an error
:type callback_errors: Function callable
:param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs.
:type contact_points: str | list[str]
:param cql_user: The Cassandra user if authentication is turned on.
:type cql_user: str
:param cql_pass: The Cassandra user password if authentication is turned on.
:type cql_pass: str
:param port: The Cassandra cluster port, defaults to None.
:type port: int
:param params: The parameters for the query, optional.
:type params: str
:return: A list of dictionaries built from the return values of the query
:rtype: list[dict]
CLI Example:
.. code-block:: bash
# Insert data asynchronously
salt this-node cassandra_cql.cql_query_with_prepare "name_insert" "INSERT INTO USERS (first_name, last_name) VALUES (?, ?)" \
statement_arguments=['John','Doe'], asynchronous=True
# Select data, should not be asynchronous because there is not currently a facility to return data from a future
salt this-node cassandra_cql.cql_query_with_prepare "name_select" "SELECT * FROM USERS WHERE first_name=?" \
statement_arguments=['John'] | Below is the instruction that describes the task:
### Input:
Run a query on a Cassandra cluster and return the results as a list of dictionaries.
This function should not be used asynchronously for SELECTs -- it will not
return anything and we don't currently have a mechanism for handling a future
that will return results.
:param query: The query to execute.
:type query: str
:param statement_name: Name to assign the prepared statement in the __context__ dictionary
:type statement_name: str
:param statement_arguments: Bind parameters for the SQL statement
:type statement_arguments: list[str]
:param async: Run this query in asynchronous mode
:type async: bool
:param callback_errors: Function to call after query runs if there is an error
:type callback_errors: Function callable
:param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs.
:type contact_points: str | list[str]
:param cql_user: The Cassandra user if authentication is turned on.
:type cql_user: str
:param cql_pass: The Cassandra user password if authentication is turned on.
:type cql_pass: str
:param port: The Cassandra cluster port, defaults to None.
:type port: int
:param params: The parameters for the query, optional.
:type params: str
:return: A list of dictionaries built from the return values of the query
:rtype: list[dict]
CLI Example:
.. code-block:: bash
# Insert data asynchronously
salt this-node cassandra_cql.cql_query_with_prepare "name_insert" "INSERT INTO USERS (first_name, last_name) VALUES (?, ?)" \
statement_arguments=['John','Doe'], asynchronous=True
# Select data, should not be asynchronous because there is not currently a facility to return data from a future
salt this-node cassandra_cql.cql_query_with_prepare "name_select" "SELECT * FROM USERS WHERE first_name=?" \
statement_arguments=['John']
### Response:
def cql_query_with_prepare(query, statement_name, statement_arguments, callback_errors=None, contact_points=None,
port=None, cql_user=None, cql_pass=None, **kwargs):
'''
Run a query on a Cassandra cluster and return the results as a list of dictionaries.
This function should not be used asynchronously for SELECTs -- it will not
return anything and we don't currently have a mechanism for handling a future
that will return results.
:param query: The query to execute.
:type query: str
:param statement_name: Name to assign the prepared statement in the __context__ dictionary
:type statement_name: str
:param statement_arguments: Bind parameters for the SQL statement
:type statement_arguments: list[str]
:param async: Run this query in asynchronous mode
:type async: bool
:param callback_errors: Function to call after query runs if there is an error
:type callback_errors: Function callable
:param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs.
:type contact_points: str | list[str]
:param cql_user: The Cassandra user if authentication is turned on.
:type cql_user: str
:param cql_pass: The Cassandra user password if authentication is turned on.
:type cql_pass: str
:param port: The Cassandra cluster port, defaults to None.
:type port: int
:param params: The parameters for the query, optional.
:type params: str
:return: A list of dictionaries built from the return values of the query
:rtype: list[dict]
CLI Example:
.. code-block:: bash
# Insert data asynchronously
salt this-node cassandra_cql.cql_query_with_prepare "name_insert" "INSERT INTO USERS (first_name, last_name) VALUES (?, ?)" \
statement_arguments=['John','Doe'], asynchronous=True
# Select data, should not be asynchronous because there is not currently a facility to return data from a future
salt this-node cassandra_cql.cql_query_with_prepare "name_select" "SELECT * FROM USERS WHERE first_name=?" \
statement_arguments=['John']
'''
# Backward-compatibility with Python 3.7: "async" is a reserved word
asynchronous = kwargs.get('async', False)
try:
cluster, session = _connect(contact_points=contact_points, port=port,
cql_user=cql_user, cql_pass=cql_pass)
except CommandExecutionError:
log.critical('Could not get Cassandra cluster session.')
raise
except BaseException as e:
log.critical('Unexpected error while getting Cassandra cluster session: %s', e)
raise
if statement_name not in __context__['cassandra_cql_prepared']:
try:
bound_statement = session.prepare(query)
__context__['cassandra_cql_prepared'][statement_name] = bound_statement
except BaseException as e:
log.critical('Unexpected error while preparing SQL statement: %s', e)
raise
else:
bound_statement = __context__['cassandra_cql_prepared'][statement_name]
session.row_factory = dict_factory
ret = []
try:
if asynchronous:
future_results = session.execute_async(bound_statement.bind(statement_arguments))
# future_results.add_callbacks(_async_log_errors)
else:
results = session.execute(bound_statement.bind(statement_arguments))
except BaseException as e:
log.error('Failed to execute query: %s\n reason: %s', query, e)
msg = "ERROR: Cassandra query failed: {0} reason: {1}".format(query, e)
raise CommandExecutionError(msg)
if not asynchronous and results:
for result in results:
values = {}
for key, value in six.iteritems(result):
# Salt won't return dictionaries with odd types like uuid.UUID
if not isinstance(value, six.text_type):
# Must support Cassandra collection types.
# Namely, Cassandra's set, list, and map collections.
if not isinstance(value, (set, list, dict)):
value = six.text_type(value)
values[key] = value
ret.append(values)
# If this was a synchronous call, then we either have an empty list
# because there was no return, or we have a return
# If this was an asynchronous call we only return the empty list
return ret |
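The heart of the function above is the prepared-statement cache: the query is prepared once per `statement_name` and only the bound arguments change on later calls. A library-agnostic sketch of that caching pattern, with a stub standing in for the Cassandra session:

```python
_prepared_cache = {}  # plays the role of __context__['cassandra_cql_prepared']


def execute_prepared(session, name, query, args):
    """Prepare `query` once under `name`, then bind and execute with `args`."""
    stmt = _prepared_cache.get(name)
    if stmt is None:
        stmt = session.prepare(query)  # expensive; done only on the first call
        _prepared_cache[name] = stmt
    return session.execute(stmt.bind(args))


class _StubStatement:
    def __init__(self, query):
        self.query = query

    def bind(self, args):
        return (self.query, tuple(args))


class _StubSession:
    def prepare(self, query):
        print("preparing:", query)
        return _StubStatement(query)

    def execute(self, bound):
        return bound


session = _StubSession()
execute_prepared(session, "name_select", "SELECT * FROM users WHERE first_name=?", ["John"])
execute_prepared(session, "name_select", "SELECT * FROM users WHERE first_name=?", ["Jane"])
# "preparing:" is printed only once; the second call reuses the cached statement.
```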
def read_xref_from(self, parser, start, xrefs):
"""Reads XRefs from the given location."""
parser.seek(start)
parser.reset()
try:
(pos, token) = parser.nexttoken()
except PSEOF:
raise PDFNoValidXRef('Unexpected EOF')
if self.debug:
logging.info('read_xref_from: start=%d, token=%r' % (start, token))
if isinstance(token, int):
# XRefStream: PDF-1.5
parser.seek(pos)
parser.reset()
xref = PDFXRefStream()
xref.load(parser)
else:
if token is parser.KEYWORD_XREF:
parser.nextline()
xref = PDFXRef()
xref.load(parser)
xrefs.append(xref)
trailer = xref.get_trailer()
if self.debug:
logging.info('trailer: %r' % trailer)
if 'XRefStm' in trailer:
pos = int_value(trailer['XRefStm'])
self.read_xref_from(parser, pos, xrefs)
if 'Prev' in trailer:
# find previous xref
pos = int_value(trailer['Prev'])
self.read_xref_from(parser, pos, xrefs)
return | Reads XRefs from the given location. | Below is the instruction that describes the task:
### Input:
Reads XRefs from the given location.
### Response:
def read_xref_from(self, parser, start, xrefs):
"""Reads XRefs from the given location."""
parser.seek(start)
parser.reset()
try:
(pos, token) = parser.nexttoken()
except PSEOF:
raise PDFNoValidXRef('Unexpected EOF')
if self.debug:
logging.info('read_xref_from: start=%d, token=%r' % (start, token))
if isinstance(token, int):
# XRefStream: PDF-1.5
parser.seek(pos)
parser.reset()
xref = PDFXRefStream()
xref.load(parser)
else:
if token is parser.KEYWORD_XREF:
parser.nextline()
xref = PDFXRef()
xref.load(parser)
xrefs.append(xref)
trailer = xref.get_trailer()
if self.debug:
logging.info('trailer: %r' % trailer)
if 'XRefStm' in trailer:
pos = int_value(trailer['XRefStm'])
self.read_xref_from(parser, pos, xrefs)
if 'Prev' in trailer:
# find previous xref
pos = int_value(trailer['Prev'])
self.read_xref_from(parser, pos, xrefs)
return |
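The method above walks the cross-reference chain recursively: it loads one xref section, then follows the trailer's `XRefStm` and `Prev` offsets. A minimal sketch of that traversal over a toy in-memory structure (plain dicts stand in for the PDF parser and xref classes):

```python
def collect_xrefs(sections, start, xrefs):
    """Follow 'XRefStm'/'Prev' links in the trailers, starting at offset `start`."""
    section = sections[start]
    xrefs.append(section["entries"])
    trailer = section["trailer"]
    if "XRefStm" in trailer:
        collect_xrefs(sections, trailer["XRefStm"], xrefs)
    if "Prev" in trailer:
        collect_xrefs(sections, trailer["Prev"], xrefs)


sections = {
    300: {"entries": ["obj 3"], "trailer": {"Prev": 100}},
    100: {"entries": ["obj 1", "obj 2"], "trailer": {}},
}
xrefs = []
collect_xrefs(sections, 300, xrefs)
print(xrefs)  # [['obj 3'], ['obj 1', 'obj 2']]
```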
def _add_id_column(layer):
"""Add an ID column if it's not present in the attribute table.
:param layer: The vector layer.
:type layer: QgsVectorLayer
"""
layer_purpose = layer.keywords['layer_purpose']
mapping = {
layer_purpose_exposure['key']: exposure_id_field,
layer_purpose_hazard['key']: hazard_id_field,
layer_purpose_aggregation['key']: aggregation_id_field
}
has_id_column = False
for layer_type, field in list(mapping.items()):
if layer_purpose == layer_type:
safe_id = field
if layer.keywords['inasafe_fields'].get(field['key']):
has_id_column = True
break
if not has_id_column:
LOGGER.info(
'We add an ID column in {purpose}'.format(purpose=layer_purpose))
layer.startEditing()
id_field = QgsField()
id_field.setName(safe_id['field_name'])
if isinstance(safe_id['type'], list):
# Use the first element in the list of type
id_field.setType(safe_id['type'][0])
else:
id_field.setType(safe_id['type'])
id_field.setPrecision(safe_id['precision'])
id_field.setLength(safe_id['length'])
layer.addAttribute(id_field)
new_index = layer.fields().lookupField(id_field.name())
for feature in layer.getFeatures():
layer.changeAttributeValue(
feature.id(), new_index, feature.id())
layer.commitChanges()
layer.keywords['inasafe_fields'][safe_id['key']] = (
safe_id['field_name']) | Add an ID column if it's not present in the attribute table.
:param layer: The vector layer.
:type layer: QgsVectorLayer | Below is the instruction that describes the task:
### Input:
Add an ID column if it's not present in the attribute table.
:param layer: The vector layer.
:type layer: QgsVectorLayer
### Response:
def _add_id_column(layer):
"""Add an ID column if it's not present in the attribute table.
:param layer: The vector layer.
:type layer: QgsVectorLayer
"""
layer_purpose = layer.keywords['layer_purpose']
mapping = {
layer_purpose_exposure['key']: exposure_id_field,
layer_purpose_hazard['key']: hazard_id_field,
layer_purpose_aggregation['key']: aggregation_id_field
}
has_id_column = False
for layer_type, field in list(mapping.items()):
if layer_purpose == layer_type:
safe_id = field
if layer.keywords['inasafe_fields'].get(field['key']):
has_id_column = True
break
if not has_id_column:
LOGGER.info(
'We add an ID column in {purpose}'.format(purpose=layer_purpose))
layer.startEditing()
id_field = QgsField()
id_field.setName(safe_id['field_name'])
if isinstance(safe_id['type'], list):
# Use the first element in the list of type
id_field.setType(safe_id['type'][0])
else:
id_field.setType(safe_id['type'])
id_field.setPrecision(safe_id['precision'])
id_field.setLength(safe_id['length'])
layer.addAttribute(id_field)
new_index = layer.fields().lookupField(id_field.name())
for feature in layer.getFeatures():
layer.changeAttributeValue(
feature.id(), new_index, feature.id())
layer.commitChanges()
layer.keywords['inasafe_fields'][safe_id['key']] = (
safe_id['field_name']) |
def simulate_system(self, parameters, initial_conditions, timepoints):
"""
Simulates the system for each of the timepoints, starting from the given initial constants and initial values.
:param parameters: list of the initial values for the constants in the model.
Must be in the same order as in the model
:param initial_conditions: List of the initial values for the equations in the problem. Must be in the same order as
these equations occur.
If not all values are specified, the remaining ones will be assumed to be 0.
:param timepoints: A list of time points to simulate the system for
:return: a list of :class:`~means.simulation.Trajectory` objects,
one for each of the equations in the problem
:rtype: list[:class:`~means.simulation.Trajectory`]
"""
initial_conditions = self._append_zeros(initial_conditions, self.problem.number_of_equations)
solver = self._initialise_solver(initial_conditions, parameters, timepoints)
trajectories = solver.simulate(timepoints)
return TrajectoryCollection(trajectories) | Simulates the system for each of the timepoints, starting from the given initial constants and initial values.
:param parameters: list of the initial values for the constants in the model.
Must be in the same order as in the model
:param initial_conditions: List of the initial values for the equations in the problem. Must be in the same order as
these equations occur.
If not all values are specified, the remaining ones will be assumed to be 0.
:param timepoints: A list of time points to simulate the system for
:return: a list of :class:`~means.simulation.Trajectory` objects,
one for each of the equations in the problem
:rtype: list[:class:`~means.simulation.Trajectory`] | Below is the instruction that describes the task:
### Input:
Simulates the system for each of the timepoints, starting from the given initial constants and initial values.
:param parameters: list of the initial values for the constants in the model.
Must be in the same order as in the model
:param initial_conditions: List of the initial values for the equations in the problem. Must be in the same order as
these equations occur.
If not all values are specified, the remaining ones will be assumed to be 0.
:param timepoints: A list of time points to simulate the system for
:return: a list of :class:`~means.simulation.Trajectory` objects,
one for each of the equations in the problem
:rtype: list[:class:`~means.simulation.Trajectory`]
### Response:
def simulate_system(self, parameters, initial_conditions, timepoints):
"""
Simulates the system for each of the timepoints, starting from the given initial constants and initial values.
:param parameters: list of the initial values for the constants in the model.
Must be in the same order as in the model
:param initial_conditions: List of the initial values for the equations in the problem. Must be in the same order as
these equations occur.
If not all values are specified, the remaining ones will be assumed to be 0.
:param timepoints: A list of time points to simulate the system for
:return: a list of :class:`~means.simulation.Trajectory` objects,
one for each of the equations in the problem
:rtype: list[:class:`~means.simulation.Trajectory`]
"""
initial_conditions = self._append_zeros(initial_conditions, self.problem.number_of_equations)
solver = self._initialise_solver(initial_conditions, parameters, timepoints)
trajectories = solver.simulate(timepoints)
return TrajectoryCollection(trajectories) |
def list_books(self):
''' Return the list of book names '''
names = []
try:
for n in self.cur.execute("SELECT name FROM book;").fetchall():
names.extend(n)
except Exception:
self.error("ERROR: cannot find database table 'book'")
return(names) | Return the list of book names | Below is the instruction that describes the task:
### Input:
Return the list of book names
### Response:
def list_books(self):
''' Return the list of book names '''
names = []
try:
for n in self.cur.execute("SELECT name FROM book;").fetchall():
names.extend(n)
except Exception:
self.error("ERROR: cannot find database table 'book'")
return(names) |
def forward(
self,
chat_id: Union[int, str],
disable_notification: bool = None,
as_copy: bool = False,
remove_caption: bool = False
):
"""Bound method *forward* of :obj:`Message <pyrogram.Messages>`.
Args:
chat_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target chat.
For your personal cloud (Saved Messages) you can simply use "me" or "self".
For a contact that exists in your Telegram address book you can use his phone number (str).
disable_notification (``bool``, *optional*):
Sends messages silently.
Users will receive a notification with no sound.
as_copy (``bool``, *optional*):
Pass True to forward messages without the forward header (i.e.: send a copy of the message content).
Defaults to False.
remove_caption (``bool``, *optional*):
If set to True and *as_copy* is enabled as well, media captions are not preserved when copying the
message. Has no effect if *as_copy* is not enabled.
Defaults to False.
Returns:
On success, a :class:`Messages <pyrogram.Messages>` containing forwarded messages is returned.
Raises:
:class:`RPCError <pyrogram.RPCError>`
"""
forwarded_messages = []
for message in self.messages:
forwarded_messages.append(
message.forward(
chat_id=chat_id,
as_copy=as_copy,
disable_notification=disable_notification,
remove_caption=remove_caption
)
)
return Messages(
total_count=len(forwarded_messages),
messages=forwarded_messages,
client=self._client
) | Bound method *forward* of :obj:`Messages <pyrogram.Messages>`.
Args:
chat_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target chat.
For your personal cloud (Saved Messages) you can simply use "me" or "self".
For a contact that exists in your Telegram address book you can use his phone number (str).
disable_notification (``bool``, *optional*):
Sends messages silently.
Users will receive a notification with no sound.
as_copy (``bool``, *optional*):
Pass True to forward messages without the forward header (i.e.: send a copy of the message content).
Defaults to False.
remove_caption (``bool``, *optional*):
If set to True and *as_copy* is enabled as well, media captions are not preserved when copying the
message. Has no effect if *as_copy* is not enabled.
Defaults to False.
Returns:
On success, a :class:`Messages <pyrogram.Messages>` containing forwarded messages is returned.
Raises:
:class:`RPCError <pyrogram.RPCError>` | Below is the instruction that describes the task:
### Input:
Bound method *forward* of :obj:`Messages <pyrogram.Messages>`.
Args:
chat_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target chat.
For your personal cloud (Saved Messages) you can simply use "me" or "self".
For a contact that exists in your Telegram address book you can use his phone number (str).
disable_notification (``bool``, *optional*):
Sends messages silently.
Users will receive a notification with no sound.
as_copy (``bool``, *optional*):
Pass True to forward messages without the forward header (i.e.: send a copy of the message content).
Defaults to False.
remove_caption (``bool``, *optional*):
If set to True and *as_copy* is enabled as well, media captions are not preserved when copying the
message. Has no effect if *as_copy* is not enabled.
Defaults to False.
Returns:
On success, a :class:`Messages <pyrogram.Messages>` containing forwarded messages is returned.
Raises:
:class:`RPCError <pyrogram.RPCError>`
### Response:
def forward(
self,
chat_id: Union[int, str],
disable_notification: bool = None,
as_copy: bool = False,
remove_caption: bool = False
):
"""Bound method *forward* of :obj:`Message <pyrogram.Messages>`.
Args:
chat_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target chat.
For your personal cloud (Saved Messages) you can simply use "me" or "self".
For a contact that exists in your Telegram address book you can use his phone number (str).
disable_notification (``bool``, *optional*):
Sends messages silently.
Users will receive a notification with no sound.
as_copy (``bool``, *optional*):
Pass True to forward messages without the forward header (i.e.: send a copy of the message content).
Defaults to False.
remove_caption (``bool``, *optional*):
If set to True and *as_copy* is enabled as well, media captions are not preserved when copying the
message. Has no effect if *as_copy* is not enabled.
Defaults to False.
Returns:
On success, a :class:`Messages <pyrogram.Messages>` containing forwarded messages is returned.
Raises:
:class:`RPCError <pyrogram.RPCError>`
"""
forwarded_messages = []
for message in self.messages:
forwarded_messages.append(
message.forward(
chat_id=chat_id,
as_copy=as_copy,
disable_notification=disable_notification,
remove_caption=remove_caption
)
)
return Messages(
total_count=len(forwarded_messages),
messages=forwarded_messages,
client=self._client
) |
def get_fd_from_final_mass_spin(template=None, distance=None, **kwargs):
"""Return frequency domain ringdown with all the modes specified.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
distance : {None, float}, optional
Luminosity distance of the system. If specified, the returned ringdown
will contain the factor (final_mass/distance).
final_mass : float
Mass of the final black hole.
final_spin : float
Spin of the final black hole.
lmns : list
Desired lmn modes as strings (lm modes available: 22, 21, 33, 44, 55).
The n specifies the number of overtones desired for the corresponding
lm pair (maximum n=8).
Example: lmns = ['223','331'] are the modes 220, 221, 222, and 330
amp220 : float
Amplitude of the fundamental 220 mode. Note that if distance is given,
this parameter will have a completely different order of magnitude.
See table II in https://arxiv.org/abs/1107.0854 for an estimate.
amplmn : float
Fraction of the amplitude of the lmn overtone relative to the
fundamental mode, as many as the number of subdominant modes.
philmn : float
Phase of the lmn overtone, as many as the number of modes. Should also
include the information from the azimuthal angle (phi + m*Phi).
inclination : float
Inclination of the system in radians.
delta_f : {None, float}, optional
The frequency step used to generate the ringdown.
If None, it will be set to the inverse of the time at which the
amplitude is 1/1000 of the peak amplitude (the minimum of all modes).
f_lower: {None, float}, optional
The starting frequency of the output frequency series.
If None, it will be set to delta_f.
f_final : {None, float}, optional
The ending frequency of the output frequency series.
If None, it will be set to the frequency at which the amplitude
is 1/1000 of the peak amplitude (the maximum of all modes).
Returns
-------
hplustilde: FrequencySeries
The plus phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
hcrosstilde: FrequencySeries
The cross phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
"""
input_params = props(template, mass_spin_required_args, **kwargs)
# Get required args
final_mass = input_params['final_mass']
final_spin = input_params['final_spin']
lmns = input_params['lmns']
for lmn in lmns:
if int(lmn[2]) == 0:
raise ValueError('Number of overtones (nmodes) must be greater '
'than zero.')
# The following may not be in input_params
delta_f = input_params.pop('delta_f', None)
f_lower = input_params.pop('f_lower', None)
f_final = input_params.pop('f_final', None)
f_0, tau = get_lm_f0tau_allmodes(final_mass, final_spin, lmns)
if not delta_f:
delta_f = lm_deltaf(tau, lmns)
if not f_final:
f_final = lm_ffinal(f_0, tau, lmns)
if not f_lower:
f_lower = delta_f
kmax = int(f_final / delta_f) + 1
outplustilde = FrequencySeries(zeros(kmax, dtype=complex128), delta_f=delta_f)
outcrosstilde = FrequencySeries(zeros(kmax, dtype=complex128), delta_f=delta_f)
for lmn in lmns:
l, m, nmodes = int(lmn[0]), int(lmn[1]), int(lmn[2])
hplustilde, hcrosstilde = get_fd_lm(freqs=f_0, taus=tau,
l=l, m=m, nmodes=nmodes,
delta_f=delta_f, f_lower=f_lower,
f_final=f_final, **input_params)
outplustilde.data += hplustilde.data
outcrosstilde.data += hcrosstilde.data
norm = Kerr_factor(final_mass, distance) if distance else 1.
return norm*outplustilde, norm*outcrosstilde | Return frequency domain ringdown with all the modes specified.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
distance : {None, float}, optional
Luminosity distance of the system. If specified, the returned ringdown
will contain the factor (final_mass/distance).
final_mass : float
Mass of the final black hole.
final_spin : float
Spin of the final black hole.
lmns : list
Desired lmn modes as strings (lm modes available: 22, 21, 33, 44, 55).
The n specifies the number of overtones desired for the corresponding
lm pair (maximum n=8).
Example: lmns = ['223','331'] are the modes 220, 221, 222, and 330
amp220 : float
Amplitude of the fundamental 220 mode. Note that if distance is given,
this parameter will have a completely different order of magnitude.
See table II in https://arxiv.org/abs/1107.0854 for an estimate.
amplmn : float
Fraction of the amplitude of the lmn overtone relative to the
fundamental mode, as many as the number of subdominant modes.
philmn : float
Phase of the lmn overtone, as many as the number of modes. Should also
include the information from the azimuthal angle (phi + m*Phi).
inclination : float
Inclination of the system in radians.
delta_f : {None, float}, optional
The frequency step used to generate the ringdown.
If None, it will be set to the inverse of the time at which the
amplitude is 1/1000 of the peak amplitude (the minimum of all modes).
f_lower: {None, float}, optional
The starting frequency of the output frequency series.
If None, it will be set to delta_f.
f_final : {None, float}, optional
The ending frequency of the output frequency series.
If None, it will be set to the frequency at which the amplitude
is 1/1000 of the peak amplitude (the maximum of all modes).
Returns
-------
hplustilde: FrequencySeries
The plus phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
hcrosstilde: FrequencySeries
The cross phase of a ringdown with the lm modes specified and
n overtones in frequency domain. | Below is the instruction that describes the task:
### Input:
Return frequency domain ringdown with all the modes specified.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
distance : {None, float}, optional
Luminosity distance of the system. If specified, the returned ringdown
will contain the factor (final_mass/distance).
final_mass : float
Mass of the final black hole.
final_spin : float
Spin of the final black hole.
lmns : list
Desired lmn modes as strings (lm modes available: 22, 21, 33, 44, 55).
The n specifies the number of overtones desired for the corresponding
lm pair (maximum n=8).
Example: lmns = ['223','331'] are the modes 220, 221, 222, and 330
amp220 : float
Amplitude of the fundamental 220 mode. Note that if distance is given,
this parameter will have a completely different order of magnitude.
See table II in https://arxiv.org/abs/1107.0854 for an estimate.
amplmn : float
Fraction of the amplitude of the lmn overtone relative to the
fundamental mode, as many as the number of subdominant modes.
philmn : float
Phase of the lmn overtone, as many as the number of modes. Should also
include the information from the azimuthal angle (phi + m*Phi).
inclination : float
Inclination of the system in radians.
delta_f : {None, float}, optional
The frequency step used to generate the ringdown.
If None, it will be set to the inverse of the time at which the
amplitude is 1/1000 of the peak amplitude (the minimum of all modes).
f_lower: {None, float}, optional
The starting frequency of the output frequency series.
If None, it will be set to delta_f.
f_final : {None, float}, optional
The ending frequency of the output frequency series.
If None, it will be set to the frequency at which the amplitude
is 1/1000 of the peak amplitude (the maximum of all modes).
Returns
-------
hplustilde: FrequencySeries
The plus phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
hcrosstilde: FrequencySeries
The cross phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
### Response:
def get_fd_from_final_mass_spin(template=None, distance=None, **kwargs):
"""Return frequency domain ringdown with all the modes specified.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
distance : {None, float}, optional
Luminosity distance of the system. If specified, the returned ringdown
will contain the factor (final_mass/distance).
final_mass : float
Mass of the final black hole.
final_spin : float
Spin of the final black hole.
lmns : list
Desired lmn modes as strings (lm modes available: 22, 21, 33, 44, 55).
The n specifies the number of overtones desired for the corresponding
lm pair (maximum n=8).
Example: lmns = ['223','331'] are the modes 220, 221, 222, and 330
amp220 : float
Amplitude of the fundamental 220 mode. Note that if distance is given,
this parameter will have a completely different order of magnitude.
See table II in https://arxiv.org/abs/1107.0854 for an estimate.
amplmn : float
Fraction of the amplitude of the lmn overtone relative to the
fundamental mode, as many as the number of subdominant modes.
philmn : float
Phase of the lmn overtone, as many as the number of modes. Should also
include the information from the azimuthal angle (phi + m*Phi).
inclination : float
Inclination of the system in radians.
delta_f : {None, float}, optional
The frequency step used to generate the ringdown.
If None, it will be set to the inverse of the time at which the
amplitude is 1/1000 of the peak amplitude (the minimum of all modes).
f_lower: {None, float}, optional
The starting frequency of the output frequency series.
If None, it will be set to delta_f.
f_final : {None, float}, optional
The ending frequency of the output frequency series.
If None, it will be set to the frequency at which the amplitude
is 1/1000 of the peak amplitude (the maximum of all modes).
Returns
-------
hplustilde: FrequencySeries
The plus phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
hcrosstilde: FrequencySeries
The cross phase of a ringdown with the lm modes specified and
n overtones in frequency domain.
"""
input_params = props(template, mass_spin_required_args, **kwargs)
# Get required args
final_mass = input_params['final_mass']
final_spin = input_params['final_spin']
lmns = input_params['lmns']
for lmn in lmns:
if int(lmn[2]) == 0:
raise ValueError('Number of overtones (nmodes) must be greater '
'than zero.')
# The following may not be in input_params
delta_f = input_params.pop('delta_f', None)
f_lower = input_params.pop('f_lower', None)
f_final = input_params.pop('f_final', None)
f_0, tau = get_lm_f0tau_allmodes(final_mass, final_spin, lmns)
if not delta_f:
delta_f = lm_deltaf(tau, lmns)
if not f_final:
f_final = lm_ffinal(f_0, tau, lmns)
if not f_lower:
f_lower = delta_f
kmax = int(f_final / delta_f) + 1
outplustilde = FrequencySeries(zeros(kmax, dtype=complex128), delta_f=delta_f)
outcrosstilde = FrequencySeries(zeros(kmax, dtype=complex128), delta_f=delta_f)
for lmn in lmns:
l, m, nmodes = int(lmn[0]), int(lmn[1]), int(lmn[2])
hplustilde, hcrosstilde = get_fd_lm(freqs=f_0, taus=tau,
l=l, m=m, nmodes=nmodes,
delta_f=delta_f, f_lower=f_lower,
f_final=f_final, **input_params)
outplustilde.data += hplustilde.data
outcrosstilde.data += hcrosstilde.data
norm = Kerr_factor(final_mass, distance) if distance else 1.
return norm*outplustilde, norm*outcrosstilde |
def declare_alias(self, name):
"""Insert a Python function into this Namespace with an
explicitly-given name, but detect its argument count automatically.
"""
def decorator(f):
self._auto_register_function(f, name)
return f
return decorator | Insert a Python function into this Namespace with an
explicitly-given name, but detect its argument count automatically. | Below is the instruction that describes the task:
### Input:
Insert a Python function into this Namespace with an
explicitly-given name, but detect its argument count automatically.
### Response:
def declare_alias(self, name):
"""Insert a Python function into this Namespace with an
explicitly-given name, but detect its argument count automatically.
"""
def decorator(f):
self._auto_register_function(f, name)
return f
return decorator |
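Because `declare_alias` returns a decorator, it is applied as `@namespace.declare_alias("NAME")`. A minimal stand-in `Namespace` (the real `_auto_register_function`, which also inspects the argument count, is assumed; here it simply records the function) shows the flow:

```python
class Namespace:
    def __init__(self):
        self.functions = {}

    def _auto_register_function(self, f, name):
        # Stand-in for the real registration logic.
        self.functions[name] = f

    def declare_alias(self, name):
        def decorator(f):
            self._auto_register_function(f, name)
            return f
        return decorator


ns = Namespace()


@ns.declare_alias("CONCAT")
def concat(a, b):
    return str(a) + str(b)


print(ns.functions["CONCAT"]("foo", "bar"))  # foobar
```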
def set_hooks(cls, hooks: dict) -> bool:
"""
Merge internal hooks set with the given hooks
"""
cls._hooks = cls._hooks.new_child()
for hook_name, hook_pt in hooks.items():
if '.' not in hook_name:
hook_name = cls.__module__ \
+ '.' + cls.__name__ \
+ '.' + hook_name
meta.set_one(cls._hooks, hook_name, hook_pt)
return True | Merge internal hooks set with the given hooks | Below is the instruction that describes the task:
### Input:
Merge internal hooks set with the given hooks
### Response:
def set_hooks(cls, hooks: dict) -> bool:
"""
Merge internal hooks set with the given hooks
"""
cls._hooks = cls._hooks.new_child()
for hook_name, hook_pt in hooks.items():
if '.' not in hook_name:
hook_name = cls.__module__ \
+ '.' + cls.__name__ \
+ '.' + hook_name
meta.set_one(cls._hooks, hook_name, hook_pt)
return True |
def namedtuple_with_defaults(typename: str, field_names: Union[str, List[str]],
default_values: collections.Iterable = ()):
"""
Convenience function for defining a namedtuple with default values
From: https://stackoverflow.com/questions/11351032/namedtuple-and-default-values-for-optional-keyword-arguments
Examples:
>>> Node = namedtuple_with_defaults('Node', 'val left right')
>>> Node()
Node(val=None, left=None, right=None)
>>> Node = namedtuple_with_defaults('Node', 'val left right', [1, 2, 3])
>>> Node()
Node(val=1, left=2, right=3)
>>> Node = namedtuple_with_defaults('Node', 'val left right', {'right':7})
>>> Node()
Node(val=None, left=None, right=7)
>>> Node(4)
Node(val=4, left=None, right=7)
"""
T = collections.namedtuple(typename, field_names)
# noinspection PyProtectedMember,PyUnresolvedReferences
T.__new__.__defaults__ = (None,) * len(T._fields)
if isinstance(default_values, collections.Mapping):
prototype = T(**default_values)
else:
prototype = T(*default_values)
T.__new__.__defaults__ = tuple(prototype)
return T | Convenience function for defining a namedtuple with default values
From: https://stackoverflow.com/questions/11351032/namedtuple-and-default-values-for-optional-keyword-arguments
Examples:
>>> Node = namedtuple_with_defaults('Node', 'val left right')
>>> Node()
Node(val=None, left=None, right=None)
>>> Node = namedtuple_with_defaults('Node', 'val left right', [1, 2, 3])
>>> Node()
Node(val=1, left=2, right=3)
>>> Node = namedtuple_with_defaults('Node', 'val left right', {'right':7})
>>> Node()
Node(val=None, left=None, right=7)
>>> Node(4)
Node(val=4, left=None, right=7) | Below is the instruction that describes the task:
### Input:
Convenience function for defining a namedtuple with default values
From: https://stackoverflow.com/questions/11351032/namedtuple-and-default-values-for-optional-keyword-arguments
Examples:
>>> Node = namedtuple_with_defaults('Node', 'val left right')
>>> Node()
Node(val=None, left=None, right=None)
>>> Node = namedtuple_with_defaults('Node', 'val left right', [1, 2, 3])
>>> Node()
Node(val=1, left=2, right=3)
>>> Node = namedtuple_with_defaults('Node', 'val left right', {'right':7})
>>> Node()
Node(val=None, left=None, right=7)
>>> Node(4)
Node(val=4, left=None, right=7)
### Response:
def namedtuple_with_defaults(typename: str, field_names: Union[str, List[str]],
default_values: collections.Iterable = ()):
"""
Convenience function for defining a namedtuple with default values
From: https://stackoverflow.com/questions/11351032/namedtuple-and-default-values-for-optional-keyword-arguments
Examples:
>>> Node = namedtuple_with_defaults('Node', 'val left right')
>>> Node()
Node(val=None, left=None, right=None)
>>> Node = namedtuple_with_defaults('Node', 'val left right', [1, 2, 3])
>>> Node()
Node(val=1, left=2, right=3)
>>> Node = namedtuple_with_defaults('Node', 'val left right', {'right':7})
>>> Node()
Node(val=None, left=None, right=7)
>>> Node(4)
Node(val=4, left=None, right=7)
"""
T = collections.namedtuple(typename, field_names)
# noinspection PyProtectedMember,PyUnresolvedReferences
T.__new__.__defaults__ = (None,) * len(T._fields)
if isinstance(default_values, collections.Mapping):
prototype = T(**default_values)
else:
prototype = T(*default_values)
T.__new__.__defaults__ = tuple(prototype)
return T |
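Note that `collections.Mapping` and `collections.Iterable` were removed from the top-level `collections` module in Python 3.10, so the function as written only runs on older interpreters. A sketch of the same helper using `collections.abc` (otherwise identical to the version above):

```python
import collections
from collections.abc import Iterable, Mapping
from typing import List, Union


def namedtuple_with_defaults(typename: str,
                             field_names: Union[str, List[str]],
                             default_values: Iterable = ()):
    """Same helper as above, but compatible with Python 3.10+."""
    T = collections.namedtuple(typename, field_names)
    T.__new__.__defaults__ = (None,) * len(T._fields)
    if isinstance(default_values, Mapping):
        prototype = T(**default_values)
    else:
        prototype = T(*default_values)
    T.__new__.__defaults__ = tuple(prototype)
    return T


Node = namedtuple_with_defaults("Node", "val left right", {"right": 7})
print(Node(4))  # Node(val=4, left=None, right=7)
```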
def download(self, filename, format='sdf', overwrite=False, resolvers=None, **kwargs):
""" Download the resolved structure as a file """
download(self.input, filename, format, overwrite, resolvers, **kwargs) | Download the resolved structure as a file | Below is the instruction that describes the task:
### Input:
Download the resolved structure as a file
### Response:
def download(self, filename, format='sdf', overwrite=False, resolvers=None, **kwargs):
""" Download the resolved structure as a file """
download(self.input, filename, format, overwrite, resolvers, **kwargs) |
def do_save(self, line):
"""save [config_file] Save session variables to file save (without parameters):
Save session to default file ~/.dataone_cli.conf save.
<file>: Save session to specified file.
"""
config_file = self._split_args(line, 0, 1)[0]
self._command_processor.get_session().save(config_file)
if config_file is None:
config_file = (
self._command_processor.get_session().get_default_pickle_file_path()
)
self._print_info_if_verbose("Saved session to file: {}".format(config_file)) | save [config_file] Save session variables to file
save (without parameters): Save session to default file ~/.dataone_cli.conf
save <file>: Save session to specified file. | Below is the instruction that describes the task:
### Input:
save [config_file] Save session variables to file
save (without parameters): Save session to default file ~/.dataone_cli.conf
save <file>: Save session to specified file.
### Response:
def do_save(self, line):
"""save [config_file] Save session variables to file save (without parameters):
Save session to default file ~/.dataone_cli.conf save.
<file>: Save session to specified file.
"""
config_file = self._split_args(line, 0, 1)[0]
self._command_processor.get_session().save(config_file)
if config_file is None:
config_file = (
self._command_processor.get_session().get_default_pickle_file_path()
)
self._print_info_if_verbose("Saved session to file: {}".format(config_file)) |
def read(self, data):
"""Handles incoming raw sensor data and broadcasts it to specified
udp servers and connected tcp clients
:param data: NMEA raw sentences incoming data
"""
self.log('Received NMEA data:', data, lvl=debug)
# self.log(data, pretty=True)
if self._tcp_socket is not None and \
len(self._connected_tcp_endpoints) > 0:
self.log('Publishing data on tcp server', lvl=debug)
for endpoint in self._connected_tcp_endpoints:
self.fireEvent(
write(
endpoint,
bytes(data, 'ascii')),
self.channel + '_tcp'
)
if self._udp_socket is not None and \
len(self.config.udp_endpoints) > 0:
self.log('Publishing data to udp endpoints', lvl=debug)
for endpoint in self.config.udp_endpoints:
host, port = endpoint.split(":")
self.log('Transmitting to', endpoint, lvl=verbose)
self.fireEvent(
write(
(host, int(port)),
bytes(data, 'ascii')
),
self.channel +
'_udp'
) | Handles incoming raw sensor data and broadcasts it to specified
udp servers and connected tcp clients
:param data: NMEA raw sentences incoming data | Below is the instruction that describes the task:
### Input:
Handles incoming raw sensor data and broadcasts it to specified
udp servers and connected tcp clients
:param data: NMEA raw sentences incoming data
### Response:
def read(self, data):
"""Handles incoming raw sensor data and broadcasts it to specified
udp servers and connected tcp clients
:param data: NMEA raw sentences incoming data
"""
self.log('Received NMEA data:', data, lvl=debug)
# self.log(data, pretty=True)
if self._tcp_socket is not None and \
len(self._connected_tcp_endpoints) > 0:
self.log('Publishing data on tcp server', lvl=debug)
for endpoint in self._connected_tcp_endpoints:
self.fireEvent(
write(
endpoint,
bytes(data, 'ascii')),
self.channel + '_tcp'
)
if self._udp_socket is not None and \
len(self.config.udp_endpoints) > 0:
self.log('Publishing data to udp endpoints', lvl=debug)
for endpoint in self.config.udp_endpoints:
host, port = endpoint.split(":")
self.log('Transmitting to', endpoint, lvl=verbose)
self.fireEvent(
write(
(host, int(port)),
bytes(data, 'ascii')
),
self.channel +
'_udp'
) |
def _get_choices(self):
"""
Returns menus specified in ``PAGE_MENU_TEMPLATES`` unless you provide
some custom choices in the field definition.
"""
if self._overridden_choices:
# Note: choices is a property on Field bound to _get_choices().
return self._choices
else:
menus = getattr(settings, "PAGE_MENU_TEMPLATES", [])
return (m[:2] for m in menus) | Returns menus specified in ``PAGE_MENU_TEMPLATES`` unless you provide
some custom choices in the field definition. | Below is the instruction that describes the task:
### Input:
Returns menus specified in ``PAGE_MENU_TEMPLATES`` unless you provide
some custom choices in the field definition.
### Response:
def _get_choices(self):
"""
Returns menus specified in ``PAGE_MENU_TEMPLATES`` unless you provide
some custom choices in the field definition.
"""
if self._overridden_choices:
# Note: choices is a property on Field bound to _get_choices().
return self._choices
else:
menus = getattr(settings, "PAGE_MENU_TEMPLATES", [])
return (m[:2] for m in menus) |
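For context, the setting read here is a sequence of tuples whose first two items become the form choices (hence `m[:2]`). A hypothetical example of such a setting (the values below are illustrative, not taken from any real project):

```python
# settings.py (hypothetical values)
PAGE_MENU_TEMPLATES = (
    (1, "Top navigation bar", "pages/menus/dropdown.html"),
    (2, "Footer", "pages/menus/footer.html"),
)
# _get_choices() would then yield (1, "Top navigation bar") and (2, "Footer").
```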
def custom(self, ref, context=None):
"""
Get whether the specified reference is B{not} an (xs) builtin.
@param ref: A str or qref.
@type ref: (str|qref)
@return: True if B{not} a builtin, else False.
@rtype: bool
"""
if ref is None:
return True
else:
return not self.builtin(ref, context) | Get whether the specified reference is B{not} an (xs) builtin.
@param ref: A str or qref.
@type ref: (str|qref)
@return: True if B{not} a builtin, else False.
@rtype: bool | Below is the instruction that describes the task:
### Input:
Get whether the specified reference is B{not} an (xs) builtin.
@param ref: A str or qref.
@type ref: (str|qref)
@return: True if B{not} a builtin, else False.
@rtype: bool
### Response:
def custom(self, ref, context=None):
"""
Get whether the specified reference is B{not} an (xs) builtin.
@param ref: A str or qref.
@type ref: (str|qref)
@return: True if B{not} a builtin, else False.
@rtype: bool
"""
if ref is None:
return True
else:
return not self.builtin(ref, context) |
def zap(input_url, archive, domain, host, internal, robots, proxies):
"""Extract links from robots.txt and sitemap.xml."""
if archive:
print('%s Fetching URLs from archive.org' % run)
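# The domain-wide lookup below is disabled ('if False'), so archive.org is always queried by host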
if False:
archived_urls = time_machine(domain, 'domain')
else:
archived_urls = time_machine(host, 'host')
print('%s Retrieved %i URLs from archive.org' % (
good, len(archived_urls) - 1))
for url in archived_urls:
verb('Internal page', url)
internal.add(url)
# Makes request to robots.txt
response = requests.get(input_url + '/robots.txt',
proxies=random.choice(proxies)).text
# Making sure robots.txt isn't some fancy 404 page
if '<body' not in response:
# If you know it, you know it
matches = re.findall(r'Allow: (.*)|Disallow: (.*)', response)
if matches:
# Iterating over the matches, match is a tuple here
for match in matches:
# One item in match will always be empty so will combine both
# items
match = ''.join(match)
# If the URL doesn't use a wildcard
if '*' not in match:
url = input_url + match
# Add the URL to internal list for crawling
internal.add(url)
# Add the URL to robots list
robots.add(url)
print('%s URLs retrieved from robots.txt: %s' % (good, len(robots)))
# Makes request to sitemap.xml
response = requests.get(input_url + '/sitemap.xml',
proxies=random.choice(proxies)).text
# Making sure sitemap.xml isn't some fancy 404 page
if '<body' not in response:
matches = xml_parser(response)
if matches: # if there are any matches
print('%s URLs retrieved from sitemap.xml: %s' % (
good, len(matches)))
for match in matches:
verb('Internal page', match)
# Cleaning up the URL and adding it to the internal list for
# crawling
internal.add(match) | Extract links from robots.txt and sitemap.xml. | Below is the instruction that describes the task:
### Input:
Extract links from robots.txt and sitemap.xml.
### Response:
def zap(input_url, archive, domain, host, internal, robots, proxies):
"""Extract links from robots.txt and sitemap.xml."""
if archive:
print('%s Fetching URLs from archive.org' % run)
if False:
archived_urls = time_machine(domain, 'domain')
else:
archived_urls = time_machine(host, 'host')
print('%s Retrieved %i URLs from archive.org' % (
good, len(archived_urls) - 1))
for url in archived_urls:
verb('Internal page', url)
internal.add(url)
# Makes request to robots.txt
response = requests.get(input_url + '/robots.txt',
proxies=random.choice(proxies)).text
# Making sure robots.txt isn't some fancy 404 page
if '<body' not in response:
# If you know it, you know it
matches = re.findall(r'Allow: (.*)|Disallow: (.*)', response)
if matches:
# Iterating over the matches, match is a tuple here
for match in matches:
# One item in match will always be empty so will combine both
# items
match = ''.join(match)
# If the URL doesn't use a wildcard
if '*' not in match:
url = input_url + match
# Add the URL to internal list for crawling
internal.add(url)
# Add the URL to robots list
robots.add(url)
print('%s URLs retrieved from robots.txt: %s' % (good, len(robots)))
# Makes request to sitemap.xml
response = requests.get(input_url + '/sitemap.xml',
proxies=random.choice(proxies)).text
# Making sure sitemap.xml isn't some fancy 404 page
if '<body' not in response:
matches = xml_parser(response)
if matches: # if there are any matches
print('%s URLs retrieved from sitemap.xml: %s' % (
good, len(matches)))
for match in matches:
verb('Internal page', match)
# Cleaning up the URL and adding it to the internal list for
# crawling
internal.add(match) |
def tell(self):
"""Return the file's current position.
Returns:
int, file's current position in bytes.
"""
self._check_open_file()
if self._flushes_after_tell():
self.flush()
if not self._append:
return self._io.tell()
if self._read_whence:
write_seek = self._io.tell()
self._io.seek(self._read_seek, self._read_whence)
self._read_seek = self._io.tell()
self._read_whence = 0
self._io.seek(write_seek)
return self._read_seek | Return the file's current position.
Returns:
int, file's current position in bytes. | Below is the the instruction that describes the task:
### Input:
Return the file's current position.
Returns:
int, file's current position in bytes.
### Response:
def tell(self):
"""Return the file's current position.
Returns:
int, file's current position in bytes.
"""
self._check_open_file()
if self._flushes_after_tell():
self.flush()
if not self._append:
return self._io.tell()
if self._read_whence:
write_seek = self._io.tell()
self._io.seek(self._read_seek, self._read_whence)
self._read_seek = self._io.tell()
self._read_whence = 0
self._io.seek(write_seek)
return self._read_seek |
def get_queues(*queue_names, **kwargs):
"""
Return queue instances from specified queue names.
All instances must use the same Redis connection.
"""
from .settings import QUEUES
if len(queue_names) <= 1:
# Return "default" queue if no queue name is specified
# or one queue with specified name
return [get_queue(*queue_names, **kwargs)]
# will return more than one queue
# import job class only once for all queues
kwargs['job_class'] = get_job_class(kwargs.pop('job_class', None))
queue_params = QUEUES[queue_names[0]]
connection_params = filter_connection_params(queue_params)
queues = [get_queue(queue_names[0], **kwargs)]
# do consistency checks while building return list
for name in queue_names[1:]:
queue = get_queue(name, **kwargs)
if type(queue) is not type(queues[0]):
raise ValueError(
'Queues must have the same class.'
'"{0}" and "{1}" have '
'different classes'.format(name, queue_names[0]))
if connection_params != filter_connection_params(QUEUES[name]):
raise ValueError(
'Queues must have the same redis connection.'
'"{0}" and "{1}" have '
'different connections'.format(name, queue_names[0]))
queues.append(queue)
return queues | Return queue instances from specified queue names.
All instances must use the same Redis connection. | Below is the the instruction that describes the task:
### Input:
Return queue instances from specified queue names.
All instances must use the same Redis connection.
### Response:
def get_queues(*queue_names, **kwargs):
"""
Return queue instances from specified queue names.
All instances must use the same Redis connection.
"""
from .settings import QUEUES
if len(queue_names) <= 1:
# Return "default" queue if no queue name is specified
# or one queue with specified name
return [get_queue(*queue_names, **kwargs)]
# will return more than one queue
# import job class only once for all queues
kwargs['job_class'] = get_job_class(kwargs.pop('job_class', None))
queue_params = QUEUES[queue_names[0]]
connection_params = filter_connection_params(queue_params)
queues = [get_queue(queue_names[0], **kwargs)]
# do consistency checks while building return list
for name in queue_names[1:]:
queue = get_queue(name, **kwargs)
if type(queue) is not type(queues[0]):
raise ValueError(
'Queues must have the same class.'
'"{0}" and "{1}" have '
'different classes'.format(name, queue_names[0]))
if connection_params != filter_connection_params(QUEUES[name]):
raise ValueError(
'Queues must have the same redis connection.'
'"{0}" and "{1}" have '
'different connections'.format(name, queue_names[0]))
queues.append(queue)
return queues |
def zoom_out(self):
"""Scale the image down by one scale step."""
if self._scalefactor >= self._sfmin:
self._scalefactor -= 1
self.scale_image()
self._adjust_scrollbar(1/self._scalestep)
self.sig_zoom_changed.emit(self.get_scaling()) | Scale the image down by one scale step. | Below is the the instruction that describes the task:
### Input:
Scale the image down by one scale step.
### Response:
def zoom_out(self):
"""Scale the image down by one scale step."""
if self._scalefactor >= self._sfmin:
self._scalefactor -= 1
self.scale_image()
self._adjust_scrollbar(1/self._scalestep)
self.sig_zoom_changed.emit(self.get_scaling()) |
def handle_command(self, master, mpstate, args):
'''handle parameter commands'''
param_wildcard = "*"
usage="Usage: param <fetch|save|set|show|load|preload|forceload|diff|download|help>"
if len(args) < 1:
print(usage)
return
if args[0] == "fetch":
if len(args) == 1:
master.param_fetch_all()
self.mav_param_set = set()
print("Requested parameter list")
else:
found = False
pname = args[1].upper()
for p in self.mav_param.keys():
if fnmatch.fnmatch(p, pname):
master.param_fetch_one(p)
if p not in self.fetch_one:
self.fetch_one[p] = 0
self.fetch_one[p] += 1
found = True
print("Requested parameter %s" % p)
if not found and args[1].find('*') == -1:
master.param_fetch_one(pname)
if pname not in self.fetch_one:
self.fetch_one[pname] = 0
self.fetch_one[pname] += 1
print("Requested parameter %s" % pname)
elif args[0] == "save":
if len(args) < 2:
print("usage: param save <filename> [wildcard]")
return
if len(args) > 2:
param_wildcard = args[2]
else:
param_wildcard = "*"
self.mav_param.save(args[1], param_wildcard, verbose=True)
elif args[0] == "diff":
wildcard = '*'
if len(args) < 2 or args[1].find('*') != -1:
if self.vehicle_name is None:
print("Unknown vehicle type")
return
filename = mp_util.dot_mavproxy("%s-defaults.parm" % self.vehicle_name)
if not os.path.exists(filename):
print("Please run 'param download' first (vehicle_name=%s)" % self.vehicle_name)
return
if len(args) >= 2:
wildcard = args[1]
else:
filename = args[1]
if len(args) == 3:
wildcard = args[2]
print("%-16.16s %12.12s %12.12s" % ('Parameter', 'Defaults', 'Current'))
self.mav_param.diff(filename, wildcard=wildcard)
elif args[0] == "set":
if len(args) < 2:
print("Usage: param set PARMNAME VALUE")
return
if len(args) == 2:
self.mav_param.show(args[1])
return
param = args[1]
value = args[2]
if value.startswith('0x'):
value = int(value, base=16)
if not param.upper() in self.mav_param:
print("Unable to find parameter '%s'" % param)
return
uname = param.upper()
ptype = None
if uname in self.param_types:
ptype = self.param_types[uname]
self.mav_param.mavset(master, uname, value, retries=3, parm_type=ptype)
if (param.upper() == "WP_LOITER_RAD" or param.upper() == "LAND_BREAK_PATH"):
#need to redraw rally points
mpstate.module('rally').rallyloader.last_change = time.time()
#need to redraw loiter points
mpstate.module('wp').wploader.last_change = time.time()
elif args[0] == "load":
if len(args) < 2:
print("Usage: param load <filename> [wildcard]")
return
if len(args) > 2:
param_wildcard = args[2]
else:
param_wildcard = "*"
self.mav_param.load(args[1], param_wildcard, master)
elif args[0] == "preload":
if len(args) < 2:
print("Usage: param preload <filename>")
return
self.mav_param.load(args[1])
elif args[0] == "forceload":
if len(args) < 2:
print("Usage: param forceload <filename> [wildcard]")
return
if len(args) > 2:
param_wildcard = args[2]
else:
param_wildcard = "*"
self.mav_param.load(args[1], param_wildcard, master, check=False)
elif args[0] == "download":
self.param_help_download()
elif args[0] == "apropos":
self.param_apropos(args[1:])
elif args[0] == "help":
self.param_help(args[1:])
elif args[0] == "set_xml_filepath":
self.param_set_xml_filepath(args[1:])
elif args[0] == "show":
if len(args) > 1:
pattern = args[1]
else:
pattern = "*"
self.mav_param.show(pattern)
elif args[0] == "status":
print("Have %u/%u params" % (len(self.mav_param_set), self.mav_param_count))
else:
print(usage) | handle parameter commands | Below is the the instruction that describes the task:
### Input:
handle parameter commands
### Response:
def handle_command(self, master, mpstate, args):
'''handle parameter commands'''
param_wildcard = "*"
usage="Usage: param <fetch|save|set|show|load|preload|forceload|diff|download|help>"
if len(args) < 1:
print(usage)
return
if args[0] == "fetch":
if len(args) == 1:
master.param_fetch_all()
self.mav_param_set = set()
print("Requested parameter list")
else:
found = False
pname = args[1].upper()
for p in self.mav_param.keys():
if fnmatch.fnmatch(p, pname):
master.param_fetch_one(p)
if p not in self.fetch_one:
self.fetch_one[p] = 0
self.fetch_one[p] += 1
found = True
print("Requested parameter %s" % p)
if not found and args[1].find('*') == -1:
master.param_fetch_one(pname)
if pname not in self.fetch_one:
self.fetch_one[pname] = 0
self.fetch_one[pname] += 1
print("Requested parameter %s" % pname)
elif args[0] == "save":
if len(args) < 2:
print("usage: param save <filename> [wildcard]")
return
if len(args) > 2:
param_wildcard = args[2]
else:
param_wildcard = "*"
self.mav_param.save(args[1], param_wildcard, verbose=True)
elif args[0] == "diff":
wildcard = '*'
if len(args) < 2 or args[1].find('*') != -1:
if self.vehicle_name is None:
print("Unknown vehicle type")
return
filename = mp_util.dot_mavproxy("%s-defaults.parm" % self.vehicle_name)
if not os.path.exists(filename):
print("Please run 'param download' first (vehicle_name=%s)" % self.vehicle_name)
return
if len(args) >= 2:
wildcard = args[1]
else:
filename = args[1]
if len(args) == 3:
wildcard = args[2]
print("%-16.16s %12.12s %12.12s" % ('Parameter', 'Defaults', 'Current'))
self.mav_param.diff(filename, wildcard=wildcard)
elif args[0] == "set":
if len(args) < 2:
print("Usage: param set PARMNAME VALUE")
return
if len(args) == 2:
self.mav_param.show(args[1])
return
param = args[1]
value = args[2]
if value.startswith('0x'):
value = int(value, base=16)
if not param.upper() in self.mav_param:
print("Unable to find parameter '%s'" % param)
return
uname = param.upper()
ptype = None
if uname in self.param_types:
ptype = self.param_types[uname]
self.mav_param.mavset(master, uname, value, retries=3, parm_type=ptype)
if (param.upper() == "WP_LOITER_RAD" or param.upper() == "LAND_BREAK_PATH"):
#need to redraw rally points
mpstate.module('rally').rallyloader.last_change = time.time()
#need to redraw loiter points
mpstate.module('wp').wploader.last_change = time.time()
elif args[0] == "load":
if len(args) < 2:
print("Usage: param load <filename> [wildcard]")
return
if len(args) > 2:
param_wildcard = args[2]
else:
param_wildcard = "*"
self.mav_param.load(args[1], param_wildcard, master)
elif args[0] == "preload":
if len(args) < 2:
print("Usage: param preload <filename>")
return
self.mav_param.load(args[1])
elif args[0] == "forceload":
if len(args) < 2:
print("Usage: param forceload <filename> [wildcard]")
return
if len(args) > 2:
param_wildcard = args[2]
else:
param_wildcard = "*"
self.mav_param.load(args[1], param_wildcard, master, check=False)
elif args[0] == "download":
self.param_help_download()
elif args[0] == "apropos":
self.param_apropos(args[1:])
elif args[0] == "help":
self.param_help(args[1:])
elif args[0] == "set_xml_filepath":
self.param_set_xml_filepath(args[1:])
elif args[0] == "show":
if len(args) > 1:
pattern = args[1]
else:
pattern = "*"
self.mav_param.show(pattern)
elif args[0] == "status":
print("Have %u/%u params" % (len(self.mav_param_set), self.mav_param_count))
else:
print(usage) |
def rename(self, new_dirname=None, new_basename=None):
"""Rename the dirname, basename or their combinations.
**中文文档**
对文件的目录名, 文件夹名, 或它们的组合进行修改。
"""
if not new_basename:
new_basename = self.new_basename
if not new_dirname:
new_dirname = self.dirname
else:
new_dirname = os.path.abspath(new_dirname)
new_abspath = os.path.join(new_dirname, new_basename)
os.rename(self.abspath, new_abspath)
# 如果成功重命名, 则更新文件信息
self.abspath = new_abspath
self.dirname = new_dirname
self.basename = new_basename | Rename the dirname, basename or their combinations.
**中文文档**
对文件的目录名, 文件夹名, 或它们的组合进行修改。 | Below is the the instruction that describes the task:
### Input:
Rename the dirname, basename or their combinations.
**中文文档**
对文件的目录名, 文件夹名, 或它们的组合进行修改。
### Response:
def rename(self, new_dirname=None, new_basename=None):
"""Rename the dirname, basename or their combinations.
**中文文档**
对文件的目录名, 文件夹名, 或它们的组合进行修改。
"""
if not new_basename:
new_basename = self.new_basename
if not new_dirname:
new_dirname = self.dirname
else:
new_dirname = os.path.abspath(new_dirname)
new_abspath = os.path.join(new_dirname, new_basename)
os.rename(self.abspath, new_abspath)
# 如果成功重命名, 则更新文件信息
self.abspath = new_abspath
self.dirname = new_dirname
self.basename = new_basename |
def to_identifier(string):
"""Makes a python identifier (perhaps an ugly one) out of any string.
This isn't an isomorphic change, the original name can't be recovered
from the change in all cases, so it must be stored separately.
Examples:
>>> to_identifier('Alice\'s Restaurant') -> 'Alice_s_Restaurant'
>>> to_identifier('#if') -> 'if' -> QuiltException
>>> to_identifier('9foo') -> 'n9foo'
:param string: string to convert
:returns: `string`, converted to python identifier if needed
:rtype: string
"""
# Not really useful to expose as a CONSTANT, and python will compile and cache
result = re.sub(r'[^0-9a-zA-Z_]', '_', string)
# compatibility with older behavior and tests, doesn't hurt anyways -- "_" is a
# pretty useless name to translate to. With this, it'll raise an exception.
result = result.strip('_')
if result and result[0].isdigit():
result = "n" + result
if not is_identifier(result):
raise QuiltException("Unable to generate Python identifier from name: {!r}".format(string))
return result | Makes a python identifier (perhaps an ugly one) out of any string.
This isn't an isomorphic change, the original name can't be recovered
from the change in all cases, so it must be stored separately.
Examples:
>>> to_identifier('Alice\'s Restaurant') -> 'Alice_s_Restaurant'
>>> to_identifier('#if') -> 'if' -> QuiltException
>>> to_identifier('9foo') -> 'n9foo'
:param string: string to convert
:returns: `string`, converted to python identifier if needed
:rtype: string | Below is the the instruction that describes the task:
### Input:
Makes a python identifier (perhaps an ugly one) out of any string.
This isn't an isomorphic change, the original name can't be recovered
from the change in all cases, so it must be stored separately.
Examples:
>>> to_identifier('Alice\'s Restaurant') -> 'Alice_s_Restaurant'
>>> to_identifier('#if') -> 'if' -> QuiltException
>>> to_identifier('9foo') -> 'n9foo'
:param string: string to convert
:returns: `string`, converted to python identifier if needed
:rtype: string
### Response:
def to_identifier(string):
"""Makes a python identifier (perhaps an ugly one) out of any string.
This isn't an isomorphic change, the original name can't be recovered
from the change in all cases, so it must be stored separately.
Examples:
>>> to_identifier('Alice\'s Restaurant') -> 'Alice_s_Restaurant'
>>> to_identifier('#if') -> 'if' -> QuiltException
>>> to_identifier('9foo') -> 'n9foo'
:param string: string to convert
:returns: `string`, converted to python identifier if needed
:rtype: string
"""
# Not really useful to expose as a CONSTANT, and python will compile and cache
result = re.sub(r'[^0-9a-zA-Z_]', '_', string)
# compatibility with older behavior and tests, doesn't hurt anyways -- "_" is a
# pretty useless name to translate to. With this, it'll raise an exception.
result = result.strip('_')
if result and result[0].isdigit():
result = "n" + result
if not is_identifier(result):
raise QuiltException("Unable to generate Python identifier from name: {!r}".format(string))
return result |
def get_opt_add_remove_edges_greedy(instance):
'''
only apply with elementary path consistency notion
'''
sem = [sign_cons_prg, elem_path_prg, fwd_prop_prg, bwd_prop_prg]
inst = instance.to_file()
prg = [ inst, remove_edges_prg,
min_repairs_prg, show_rep_prg
] + sem + scenfit
coptions = '1 --project --opt-strategy=5 --opt-mode=optN --quiet=1'
solver = GringoClasp(clasp_options=coptions)
models = solver.run(prg, collapseTerms=True, collapseAtoms=False)
bscenfit = models[0].score[0]
brepscore = models[0].score[1]
#print('model: ',models[0])
#print('bscenfit: ',bscenfit)
#print('brepscore: ',brepscore)
strt_edges = TermSet()
fedges = [(strt_edges, bscenfit, brepscore)]
tedges = []
dedges = []
coptions = '0 --project --opt-strategy=5 --opt-mode=optN --quiet=1'
solver = GringoClasp(clasp_options=coptions)
while fedges:
# sys.stdout.flush()
# print ("TODO: ",len(fedges))
(oedges, oscenfit, orepscore) = fedges.pop()
# print('(oedges,oscenfit, orepscore):',(oedges,oscenfit, orepscore))
# print('len(oedges):',len(oedges))
# extend till no better solution can be found
end = True # assume this time its the end
f_oedges = TermSet(oedges).to_file()
prg = [ inst, f_oedges, remove_edges_prg, best_one_edge_prg,
min_repairs_prg, show_rep_prg
] + sem + scenfit
models = solver.run(prg, collapseTerms=True, collapseAtoms=False)
nscenfit = models[0].score[0]
nrepscore = models[0].score[1]+2*(len(oedges))
# print('nscenfit: ',nscenfit)
# print('nrepscore: ',nrepscore)
if (nscenfit < oscenfit) or nrepscore < orepscore: # better score or more that 1 scenfit
# print('maybe better solution:')
# print('#models: ',len(models))
for m in models:
#print('MMM ',models)
nend = TermSet()
for a in m :
if a.pred() == 'rep' :
if a.arg(0)[0:7]=='addeddy' :
# print('new addeddy to',a.arg(0)[8:-1])
nend = String2TermSet('edge_end('+(a.arg(0)[8:-1])+')')
# search starts of the addeddy
# print('search best edge starts')
f_end = TermSet(nend).to_file()
prg = [ inst, f_oedges, remove_edges_prg, f_end, best_edge_start_prg,
min_repairs_prg, show_rep_prg
] + sem + scenfit
starts = solver.run(prg, collapseTerms=True, collapseAtoms=False)
os.unlink(f_end)
# print(starts)
for s in starts:
n2scenfit = s.score[0]
n2repscore = s.score[1]+2*(len(oedges))
# print('n2scenfit: ', n2scenfit)
# print('n2repscore: ', n2repscore)
if (n2scenfit < oscenfit) or n2repscore < orepscore: # better score or more that 1 scenfit
# print('better solution:')
if (n2scenfit<bscenfit):
bscenfit = n2scenfit # update bscenfit
brepscore = n2repscore
if (n2scenfit == bscenfit) :
if (n2repscore<brepscore) : brepscore = n2repscore
nedge = TermSet()
for a in s :
if a.pred() == 'rep' :
if a.arg(0)[0:7]=='addedge' :
# print('new edge ',a.arg(0)[8:-1])
nedge = String2TermSet('obs_elabel('+(a.arg(0)[8:-1])+')')
end = False
nedges = oedges.union(nedge)
if (nedges,n2scenfit,n2repscore) not in fedges and nedges not in dedges:
fedges.append((nedges,n2scenfit,n2repscore))
dedges.append(nedges)
if end :
if (oedges,oscenfit,orepscore) not in tedges and oscenfit == bscenfit and orepscore == brepscore:
# print('LAST tedges append',oedges)
tedges.append((oedges,oscenfit,orepscore))
# end while
os.unlink(f_oedges)
# take only the results with the best scenfit
redges=[]
for (tedges,tscenfit,trepairs) in tedges:
if tscenfit == bscenfit: redges.append((tedges,trepairs))
os.unlink(inst)
return (bscenfit,redges) | only apply with elementary path consistency notion | Below is the the instruction that describes the task:
### Input:
only apply with elementary path consistency notion
### Response:
def get_opt_add_remove_edges_greedy(instance):
'''
only apply with elementary path consistency notion
'''
sem = [sign_cons_prg, elem_path_prg, fwd_prop_prg, bwd_prop_prg]
inst = instance.to_file()
prg = [ inst, remove_edges_prg,
min_repairs_prg, show_rep_prg
] + sem + scenfit
coptions = '1 --project --opt-strategy=5 --opt-mode=optN --quiet=1'
solver = GringoClasp(clasp_options=coptions)
models = solver.run(prg, collapseTerms=True, collapseAtoms=False)
bscenfit = models[0].score[0]
brepscore = models[0].score[1]
#print('model: ',models[0])
#print('bscenfit: ',bscenfit)
#print('brepscore: ',brepscore)
strt_edges = TermSet()
fedges = [(strt_edges, bscenfit, brepscore)]
tedges = []
dedges = []
coptions = '0 --project --opt-strategy=5 --opt-mode=optN --quiet=1'
solver = GringoClasp(clasp_options=coptions)
while fedges:
# sys.stdout.flush()
# print ("TODO: ",len(fedges))
(oedges, oscenfit, orepscore) = fedges.pop()
# print('(oedges,oscenfit, orepscore):',(oedges,oscenfit, orepscore))
# print('len(oedges):',len(oedges))
# extend till no better solution can be found
end = True # assume this time its the end
f_oedges = TermSet(oedges).to_file()
prg = [ inst, f_oedges, remove_edges_prg, best_one_edge_prg,
min_repairs_prg, show_rep_prg
] + sem + scenfit
models = solver.run(prg, collapseTerms=True, collapseAtoms=False)
nscenfit = models[0].score[0]
nrepscore = models[0].score[1]+2*(len(oedges))
# print('nscenfit: ',nscenfit)
# print('nrepscore: ',nrepscore)
if (nscenfit < oscenfit) or nrepscore < orepscore: # better score or more that 1 scenfit
# print('maybe better solution:')
# print('#models: ',len(models))
for m in models:
#print('MMM ',models)
nend = TermSet()
for a in m :
if a.pred() == 'rep' :
if a.arg(0)[0:7]=='addeddy' :
# print('new addeddy to',a.arg(0)[8:-1])
nend = String2TermSet('edge_end('+(a.arg(0)[8:-1])+')')
# search starts of the addeddy
# print('search best edge starts')
f_end = TermSet(nend).to_file()
prg = [ inst, f_oedges, remove_edges_prg, f_end, best_edge_start_prg,
min_repairs_prg, show_rep_prg
] + sem + scenfit
starts = solver.run(prg, collapseTerms=True, collapseAtoms=False)
os.unlink(f_end)
# print(starts)
for s in starts:
n2scenfit = s.score[0]
n2repscore = s.score[1]+2*(len(oedges))
# print('n2scenfit: ', n2scenfit)
# print('n2repscore: ', n2repscore)
if (n2scenfit < oscenfit) or n2repscore < orepscore: # better score or more that 1 scenfit
# print('better solution:')
if (n2scenfit<bscenfit):
bscenfit = n2scenfit # update bscenfit
brepscore = n2repscore
if (n2scenfit == bscenfit) :
if (n2repscore<brepscore) : brepscore = n2repscore
nedge = TermSet()
for a in s :
if a.pred() == 'rep' :
if a.arg(0)[0:7]=='addedge' :
# print('new edge ',a.arg(0)[8:-1])
nedge = String2TermSet('obs_elabel('+(a.arg(0)[8:-1])+')')
end = False
nedges = oedges.union(nedge)
if (nedges,n2scenfit,n2repscore) not in fedges and nedges not in dedges:
fedges.append((nedges,n2scenfit,n2repscore))
dedges.append(nedges)
if end :
if (oedges,oscenfit,orepscore) not in tedges and oscenfit == bscenfit and orepscore == brepscore:
# print('LAST tedges append',oedges)
tedges.append((oedges,oscenfit,orepscore))
# end while
os.unlink(f_oedges)
# take only the results with the best scenfit
redges=[]
for (tedges,tscenfit,trepairs) in tedges:
if tscenfit == bscenfit: redges.append((tedges,trepairs))
os.unlink(inst)
return (bscenfit,redges) |
def _read_blob_in_tree(tree, components):
"""Recursively open trees to ultimately read a blob"""
if len(components) == 1:
# Tree is direct parent of blob
return _read_blob(tree, components[0])
else:
# Still trees to open
dirname = components.pop(0)
for t in tree.traverse():
if t.name == dirname:
return _read_blob_in_tree(t, components) | Recursively open trees to ultimately read a blob | Below is the the instruction that describes the task:
### Input:
Recursively open trees to ultimately read a blob
### Response:
def _read_blob_in_tree(tree, components):
"""Recursively open trees to ultimately read a blob"""
if len(components) == 1:
# Tree is direct parent of blob
return _read_blob(tree, components[0])
else:
# Still trees to open
dirname = components.pop(0)
for t in tree.traverse():
if t.name == dirname:
return _read_blob_in_tree(t, components) |
def download_member_shared(cls, member_data, target_member_dir, source=None,
max_size=MAX_SIZE_DEFAULT, id_filename=False):
"""
Download files to sync a local dir to match OH member shared data.
Files are downloaded to match their "basename" on Open Humans.
If there are multiple files with the same name, the most recent is
downloaded.
:param member_data: This field is data related to member in a project.
:param target_member_dir: This field is the target directory where data
will be downloaded.
:param source: This field is the source from which to download data.
:param max_size: This field is the maximum file size. Its default
value is 128m.
"""
logging.debug('Download member shared data...')
sources_shared = member_data['sources_shared']
file_data = cls._get_member_file_data(member_data,
id_filename=id_filename)
logging.info('Downloading member data to {}'.format(target_member_dir))
for basename in file_data:
# If not in sources shared, it's the project's own data. Skip.
if file_data[basename]['source'] not in sources_shared:
continue
# Filter source if specified. Determine target directory for file.
if source:
if source == file_data[basename]['source']:
target_filepath = os.path.join(target_member_dir, basename)
else:
continue
else:
source_data_dir = os.path.join(target_member_dir,
file_data[basename]['source'])
if not os.path.exists(source_data_dir):
os.mkdir(source_data_dir)
target_filepath = os.path.join(source_data_dir, basename)
download_file(download_url=file_data[basename]['download_url'],
target_filepath=target_filepath,
max_bytes=parse_size(max_size)) | Download files to sync a local dir to match OH member shared data.
Files are downloaded to match their "basename" on Open Humans.
If there are multiple files with the same name, the most recent is
downloaded.
:param member_data: This field is data related to member in a project.
:param target_member_dir: This field is the target directory where data
will be downloaded.
:param source: This field is the source from which to download data.
:param max_size: This field is the maximum file size. Its default
value is 128m. | Below is the the instruction that describes the task:
### Input:
Download files to sync a local dir to match OH member shared data.
Files are downloaded to match their "basename" on Open Humans.
If there are multiple files with the same name, the most recent is
downloaded.
:param member_data: This field is data related to member in a project.
:param target_member_dir: This field is the target directory where data
will be downloaded.
:param source: This field is the source from which to download data.
:param max_size: This field is the maximum file size. Its default
value is 128m.
### Response:
def download_member_shared(cls, member_data, target_member_dir, source=None,
max_size=MAX_SIZE_DEFAULT, id_filename=False):
"""
Download files to sync a local dir to match OH member shared data.
Files are downloaded to match their "basename" on Open Humans.
If there are multiple files with the same name, the most recent is
downloaded.
:param member_data: This field is data related to member in a project.
:param target_member_dir: This field is the target directory where data
will be downloaded.
:param source: This field is the source from which to download data.
:param max_size: This field is the maximum file size. Its default
value is 128m.
"""
logging.debug('Download member shared data...')
sources_shared = member_data['sources_shared']
file_data = cls._get_member_file_data(member_data,
id_filename=id_filename)
logging.info('Downloading member data to {}'.format(target_member_dir))
for basename in file_data:
# If not in sources shared, it's the project's own data. Skip.
if file_data[basename]['source'] not in sources_shared:
continue
# Filter source if specified. Determine target directory for file.
if source:
if source == file_data[basename]['source']:
target_filepath = os.path.join(target_member_dir, basename)
else:
continue
else:
source_data_dir = os.path.join(target_member_dir,
file_data[basename]['source'])
if not os.path.exists(source_data_dir):
os.mkdir(source_data_dir)
target_filepath = os.path.join(source_data_dir, basename)
download_file(download_url=file_data[basename]['download_url'],
target_filepath=target_filepath,
max_bytes=parse_size(max_size)) |
def _calc_resp(password_hash, server_challenge):
"""
Generate the LM response given a 16-byte password hash and the
challenge from the CHALLENGE_MESSAGE
:param password_hash: A 16-byte password hash
:param server_challenge: A random 8-byte response generated by the
server in the CHALLENGE_MESSAGE
:return res: A 24-byte buffer to contain the LM response upon return
"""
# padding with zeros to make the hash 21 bytes long
password_hash += b'\x00' * (21 - len(password_hash))
res = b''
dobj = DES(DES.key56_to_key64(password_hash[0:7]))
res = res + dobj.encrypt(server_challenge[0:8])
dobj = DES(DES.key56_to_key64(password_hash[7:14]))
res = res + dobj.encrypt(server_challenge[0:8])
dobj = DES(DES.key56_to_key64(password_hash[14:21]))
res = res + dobj.encrypt(server_challenge[0:8])
return res | Generate the LM response given a 16-byte password hash and the
challenge from the CHALLENGE_MESSAGE
:param password_hash: A 16-byte password hash
:param server_challenge: A random 8-byte response generated by the
server in the CHALLENGE_MESSAGE
:return res: A 24-byte buffer to contain the LM response upon return | Below is the the instruction that describes the task:
### Input:
Generate the LM response given a 16-byte password hash and the
challenge from the CHALLENGE_MESSAGE
:param password_hash: A 16-byte password hash
:param server_challenge: A random 8-byte response generated by the
server in the CHALLENGE_MESSAGE
:return res: A 24-byte buffer to contain the LM response upon return
### Response:
def _calc_resp(password_hash, server_challenge):
"""
Generate the LM response given a 16-byte password hash and the
challenge from the CHALLENGE_MESSAGE
:param password_hash: A 16-byte password hash
:param server_challenge: A random 8-byte response generated by the
server in the CHALLENGE_MESSAGE
:return res: A 24-byte buffer to contain the LM response upon return
"""
# padding with zeros to make the hash 21 bytes long
password_hash += b'\x00' * (21 - len(password_hash))
res = b''
dobj = DES(DES.key56_to_key64(password_hash[0:7]))
res = res + dobj.encrypt(server_challenge[0:8])
dobj = DES(DES.key56_to_key64(password_hash[7:14]))
res = res + dobj.encrypt(server_challenge[0:8])
dobj = DES(DES.key56_to_key64(password_hash[14:21]))
res = res + dobj.encrypt(server_challenge[0:8])
return res |
def _re_pattern_pprint(obj, p, cycle):
"""The pprint function for regular expression patterns."""
p.text('re.compile(')
pattern = repr(obj.pattern)
if pattern[:1] in 'uU':
pattern = pattern[1:]
prefix = 'ur'
else:
prefix = 'r'
pattern = prefix + pattern.replace('\\\\', '\\')
p.text(pattern)
if obj.flags:
p.text(',')
p.breakable()
done_one = False
for flag in ('TEMPLATE', 'IGNORECASE', 'LOCALE', 'MULTILINE', 'DOTALL',
'UNICODE', 'VERBOSE', 'DEBUG'):
if obj.flags & getattr(re, flag):
if done_one:
p.text('|')
p.text('re.' + flag)
done_one = True
p.text(')') | The pprint function for regular expression patterns. | Below is the the instruction that describes the task:
### Input:
The pprint function for regular expression patterns.
### Response:
def _re_pattern_pprint(obj, p, cycle):
"""The pprint function for regular expression patterns."""
p.text('re.compile(')
pattern = repr(obj.pattern)
if pattern[:1] in 'uU':
pattern = pattern[1:]
prefix = 'ur'
else:
prefix = 'r'
pattern = prefix + pattern.replace('\\\\', '\\')
p.text(pattern)
if obj.flags:
p.text(',')
p.breakable()
done_one = False
for flag in ('TEMPLATE', 'IGNORECASE', 'LOCALE', 'MULTILINE', 'DOTALL',
'UNICODE', 'VERBOSE', 'DEBUG'):
if obj.flags & getattr(re, flag):
if done_one:
p.text('|')
p.text('re.' + flag)
done_one = True
p.text(')') |
def to_index(self):
"""Convert this variable to a pandas.Index"""
# n.b. creating a new pandas.Index from an old pandas.Index is
# basically free as pandas.Index objects are immutable
assert self.ndim == 1
index = self._data.array
if isinstance(index, pd.MultiIndex):
# set default names for multi-index unnamed levels so that
# we can safely rename dimension / coordinate later
valid_level_names = [name or '{}_level_{}'.format(self.dims[0], i)
for i, name in enumerate(index.names)]
index = index.set_names(valid_level_names)
else:
index = index.set_names(self.name)
return index | Convert this variable to a pandas.Index | Below is the the instruction that describes the task:
### Input:
Convert this variable to a pandas.Index
### Response:
def to_index(self):
"""Convert this variable to a pandas.Index"""
# n.b. creating a new pandas.Index from an old pandas.Index is
# basically free as pandas.Index objects are immutable
assert self.ndim == 1
index = self._data.array
if isinstance(index, pd.MultiIndex):
# set default names for multi-index unnamed levels so that
# we can safely rename dimension / coordinate later
valid_level_names = [name or '{}_level_{}'.format(self.dims[0], i)
for i, name in enumerate(index.names)]
index = index.set_names(valid_level_names)
else:
index = index.set_names(self.name)
return index |
def draw_img_button(width=200, height=50, text='This is a button', color=rgb(200,100,50)):
""" Draws a simple image button. """
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
ctx = cairo.Context(surface)
ctx.rectangle(0, 0, width - 1, height - 1)
ctx.set_source_rgb(color.red/255.0, color.green/255.0, color.blue/255.0)
ctx.fill()
# Draw text
ctx.set_source_rgb(1.0, 1.0, 1.0)
ctx.select_font_face(
"Helvetica",
cairo.FONT_SLANT_NORMAL,
cairo.FONT_WEIGHT_BOLD
)
ctx.set_font_size(15.0)
ctx.move_to(15, 2 * height / 3)
ctx.show_text(text)
surface.write_to_png('button.png') | Draws a simple image button. | Below is the the instruction that describes the task:
### Input:
Draws a simple image button.
### Response:
def draw_img_button(width=200, height=50, text='This is a button', color=rgb(200,100,50)):
""" Draws a simple image button. """
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
ctx = cairo.Context(surface)
ctx.rectangle(0, 0, width - 1, height - 1)
ctx.set_source_rgb(color.red/255.0, color.green/255.0, color.blue/255.0)
ctx.fill()
# Draw text
ctx.set_source_rgb(1.0, 1.0, 1.0)
ctx.select_font_face(
"Helvetica",
cairo.FONT_SLANT_NORMAL,
cairo.FONT_WEIGHT_BOLD
)
ctx.set_font_size(15.0)
ctx.move_to(15, 2 * height / 3)
ctx.show_text(text)
surface.write_to_png('button.png') |
def _find_field_generator_templates(self):
"""
Return a dictionary of the form {name: field_generator} containing
all tohu generators defined in the class and instance namespace
of this custom generator.
"""
field_gen_templates = {}
# Extract field generators from class dict
for name, g in self.__class__.__dict__.items():
if isinstance(g, TohuBaseGenerator):
field_gen_templates[name] = g.set_tohu_name(f'{name} (TPL)')
# Extract field generators from instance dict
for name, g in self.__dict__.items():
if isinstance(g, TohuBaseGenerator):
field_gen_templates[name] = g.set_tohu_name(f'{name} (TPL)')
return field_gen_templates | Return a dictionary of the form {name: field_generator} containing
all tohu generators defined in the class and instance namespace
of this custom generator. | Below is the the instruction that describes the task:
### Input:
Return a dictionary of the form {name: field_generator} containing
all tohu generators defined in the class and instance namespace
of this custom generator.
### Response:
def _find_field_generator_templates(self):
"""
Return a dictionary of the form {name: field_generator} containing
all tohu generators defined in the class and instance namespace
of this custom generator.
"""
field_gen_templates = {}
# Extract field generators from class dict
for name, g in self.__class__.__dict__.items():
if isinstance(g, TohuBaseGenerator):
field_gen_templates[name] = g.set_tohu_name(f'{name} (TPL)')
# Extract field generators from instance dict
for name, g in self.__dict__.items():
if isinstance(g, TohuBaseGenerator):
field_gen_templates[name] = g.set_tohu_name(f'{name} (TPL)')
return field_gen_templates |
def _connect_lxd(spec):
"""
Return ContextService arguments for an LXD container connection.
"""
return {
'method': 'lxd',
'kwargs': {
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'lxc_path': spec.mitogen_lxc_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
} | Return ContextService arguments for an LXD container connection. | Below is the the instruction that describes the task:
### Input:
Return ContextService arguments for an LXD container connection.
### Response:
def _connect_lxd(spec):
"""
Return ContextService arguments for an LXD container connection.
"""
return {
'method': 'lxd',
'kwargs': {
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'lxc_path': spec.mitogen_lxc_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
} |
def pg_isready(self):
"""Runs pg_isready to see if PostgreSQL is accepting connections.
:returns: 'ok' if PostgreSQL is up, 'reject' if starting up, 'no_response' if not up."""
cmd = [self._pgcommand('pg_isready'), '-p', self._local_address['port'], '-d', self._database]
# Host is not set if we are connecting via default unix socket
if 'host' in self._local_address:
cmd.extend(['-h', self._local_address['host']])
# We only need the username because pg_isready does not try to authenticate
if 'username' in self._superuser:
cmd.extend(['-U', self._superuser['username']])
ret = subprocess.call(cmd)
return_codes = {0: STATE_RUNNING,
1: STATE_REJECT,
2: STATE_NO_RESPONSE,
3: STATE_UNKNOWN}
return return_codes.get(ret, STATE_UNKNOWN) | Runs pg_isready to see if PostgreSQL is accepting connections.
:returns: 'ok' if PostgreSQL is up, 'reject' if starting up, 'no_response' if not up. | Below is the the instruction that describes the task:
### Input:
Runs pg_isready to see if PostgreSQL is accepting connections.
:returns: 'ok' if PostgreSQL is up, 'reject' if starting up, 'no_response' if not up.
### Response:
def pg_isready(self):
"""Runs pg_isready to see if PostgreSQL is accepting connections.
:returns: 'ok' if PostgreSQL is up, 'reject' if starting up, 'no_response' if not up."""
cmd = [self._pgcommand('pg_isready'), '-p', self._local_address['port'], '-d', self._database]
# Host is not set if we are connecting via default unix socket
if 'host' in self._local_address:
cmd.extend(['-h', self._local_address['host']])
# We only need the username because pg_isready does not try to authenticate
if 'username' in self._superuser:
cmd.extend(['-U', self._superuser['username']])
ret = subprocess.call(cmd)
return_codes = {0: STATE_RUNNING,
1: STATE_REJECT,
2: STATE_NO_RESPONSE,
3: STATE_UNKNOWN}
return return_codes.get(ret, STATE_UNKNOWN) |
def set_scope(self, http_method, scope):
"""Set a scope condition for the resource for a http_method
Parameters:
* **http_method (str):** HTTP method like GET, POST, PUT, DELETE
* **scope (str, list):** the scope of access control as str if single, or as a list of strings if multiple scopes are to be set
"""
for con in self.conditions:
if http_method in con['httpMethods']:
if isinstance(scope, list):
con['scopes'] = scope
elif isinstance(scope, str) or isinstance(scope, unicode):
con['scopes'].append(scope)
return
# If not present, then create a new condition
if isinstance(scope, list):
self.conditions.append({'httpMethods': [http_method],
'scopes': scope})
elif isinstance(scope, str) or isinstance(scope, unicode):
self.conditions.append({'httpMethods': [http_method],
'scopes': [scope]}) | Set a scope condition for the resource for a http_method
Parameters:
* **http_method (str):** HTTP method like GET, POST, PUT, DELETE
* **scope (str, list):** the scope of access control as str if single, or as a list of strings if multiple scopes are to be set | Below is the the instruction that describes the task:
### Input:
Set a scope condition for the resource for a http_method
Parameters:
* **http_method (str):** HTTP method like GET, POST, PUT, DELETE
* **scope (str, list):** the scope of access control as str if single, or as a list of strings if multiple scopes are to be set
### Response:
def set_scope(self, http_method, scope):
"""Set a scope condition for the resource for a http_method
Parameters:
* **http_method (str):** HTTP method like GET, POST, PUT, DELETE
* **scope (str, list):** the scope of access control as str if single, or as a list of strings if multiple scopes are to be set
"""
for con in self.conditions:
if http_method in con['httpMethods']:
if isinstance(scope, list):
con['scopes'] = scope
elif isinstance(scope, str) or isinstance(scope, unicode):
con['scopes'].append(scope)
return
# If not present, then create a new condition
if isinstance(scope, list):
self.conditions.append({'httpMethods': [http_method],
'scopes': scope})
elif isinstance(scope, str) or isinstance(scope, unicode):
self.conditions.append({'httpMethods': [http_method],
'scopes': [scope]}) |
def on_menu_make_MagIC_results_tables(self, event):
"""
Creates or Updates Specimens or Pmag Specimens MagIC table,
overwrites .redo file for safety, and starts User dialog to
generate other MagIC tables for later contribution to the MagIC
database. The following describes the steps used in the 2.5 data
format to do this:
1. read pmag_specimens.txt, pmag_samples.txt, pmag_sites.txt, and
sort out lines with LP-DIR in magic_codes
2. saves a clean pmag_*.txt files without LP-DIR stuff as
pmag_*.txt.tmp
3. write a new file pmag_specimens.txt
4. merge pmag_specimens.txt and pmag_specimens.txt.tmp using
combine_magic.py
5. delete pmag_specimens.txt.tmp
6 (optional) extracting new pmag_*.txt files (except
pmag_specimens.txt) using specimens_results_magic.py
7: if #6: merge pmag_*.txt and pmag_*.txt.tmp using combine_magic.py
if not #6: save pmag_*.txt.tmp as pmag_*.txt
"""
# ---------------------------------------
# save pmag_*.txt.tmp without directional data
# ---------------------------------------
self.on_menu_save_interpretation(None)
# ---------------------------------------
# dialog box to choose coordinate systems for pmag_specimens.txt
# ---------------------------------------
dia = demag_dialogs.magic_pmag_specimens_table_dialog(None)
CoorTypes = []
if self.test_mode:
CoorTypes = ['DA-DIR']
elif dia.ShowModal() == wx.ID_OK: # Until the user clicks OK, show the message
if dia.cb_spec_coor.GetValue() == True:
CoorTypes.append('DA-DIR')
if dia.cb_geo_coor.GetValue() == True:
CoorTypes.append('DA-DIR-GEO')
if dia.cb_tilt_coor.GetValue() == True:
CoorTypes.append('DA-DIR-TILT')
else:
self.user_warning("MagIC tables not saved")
print("MagIC tables not saved")
return
# ------------------------------
self.PmagRecsOld = {}
if self.data_model == 3.0:
FILES = []
else:
FILES = ['pmag_specimens.txt']
for FILE in FILES:
self.PmagRecsOld[FILE] = []
meas_data = []
try:
meas_data, file_type = pmag.magic_read(
os.path.join(self.WD, FILE))
print(("-I- Read old magic file %s\n" %
os.path.join(self.WD, FILE)))
# if FILE !='pmag_specimens.txt':
os.remove(os.path.join(self.WD, FILE))
print(("-I- Delete old magic file %s\n" %
os.path.join(self.WD, FILE)))
except (OSError, IOError) as e:
continue
for rec in meas_data:
if "magic_method_codes" in list(rec.keys()):
if "LP-DIR" not in rec['magic_method_codes'] and "DE-" not in rec['magic_method_codes']:
self.PmagRecsOld[FILE].append(rec)
# ---------------------------------------
# write a new pmag_specimens.txt
# ---------------------------------------
specimens_list = list(self.pmag_results_data['specimens'].keys())
specimens_list.sort()
PmagSpecs = []
for specimen in specimens_list:
for dirtype in CoorTypes:
i = 0
for fit in self.pmag_results_data['specimens'][specimen]:
mpars = fit.get(dirtype)
if not mpars:
mpars = self.get_PCA_parameters(
specimen, fit, fit.tmin, fit.tmax, dirtype, fit.PCA_type)
if not mpars or 'specimen_dec' not in list(mpars.keys()):
self.user_warning("Could not calculate interpretation for specimen %s and fit %s in coordinate system %s while exporting pmag tables, skipping" % (
specimen, fit.name, dirtype))
continue
PmagSpecRec = {}
PmagSpecRec["magic_software_packages"] = pmag.get_version(
) + ': demag_gui'
PmagSpecRec["er_specimen_name"] = specimen
PmagSpecRec["er_sample_name"] = self.Data_hierarchy['sample_of_specimen'][specimen]
PmagSpecRec["er_site_name"] = self.Data_hierarchy['site_of_specimen'][specimen]
PmagSpecRec["er_location_name"] = self.Data_hierarchy['location_of_specimen'][specimen]
if specimen in list(self.Data_hierarchy['expedition_name_of_specimen'].keys()):
PmagSpecRec["er_expedition_name"] = self.Data_hierarchy['expedition_name_of_specimen'][specimen]
PmagSpecRec["er_citation_names"] = "This study"
if "magic_experiment_name" in self.Data[specimen]:
PmagSpecRec["magic_experiment_names"] = self.Data[specimen]["magic_experiment_name"]
if 'magic_instrument_codes' in list(self.Data[specimen].keys()):
PmagSpecRec["magic_instrument_codes"] = self.Data[specimen]['magic_instrument_codes']
PmagSpecRec['specimen_correction'] = 'u'
PmagSpecRec['specimen_direction_type'] = mpars["specimen_direction_type"]
PmagSpecRec['specimen_dec'] = "%.1f" % mpars["specimen_dec"]
PmagSpecRec['specimen_inc'] = "%.1f" % mpars["specimen_inc"]
PmagSpecRec['specimen_flag'] = "g"
if fit in self.bad_fits:
PmagSpecRec['specimen_flag'] = "b"
if "C" in fit.tmin or "C" in fit.tmax:
PmagSpecRec['measurement_step_unit'] = "K"
else:
PmagSpecRec['measurement_step_unit'] = "T"
if "C" in fit.tmin:
PmagSpecRec['measurement_step_min'] = "%.0f" % (
mpars["measurement_step_min"]+273.)
elif "mT" in fit.tmin:
PmagSpecRec['measurement_step_min'] = "%8.3e" % (
mpars["measurement_step_min"]*1e-3)
else:
if PmagSpecRec['measurement_step_unit'] == "K":
PmagSpecRec['measurement_step_min'] = "%.0f" % (
mpars["measurement_step_min"]+273.)
else:
PmagSpecRec['measurement_step_min'] = "%8.3e" % (
mpars["measurement_step_min"]*1e-3)
if "C" in fit.tmax:
PmagSpecRec['measurement_step_max'] = "%.0f" % (
mpars["measurement_step_max"]+273.)
elif "mT" in fit.tmax:
PmagSpecRec['measurement_step_max'] = "%8.3e" % (
mpars["measurement_step_max"]*1e-3)
else:
if PmagSpecRec['measurement_step_unit'] == "K":
PmagSpecRec['measurement_step_min'] = "%.0f" % (
mpars["measurement_step_min"]+273.)
else:
PmagSpecRec['measurement_step_min'] = "%8.3e" % (
mpars["measurement_step_min"]*1e-3)
PmagSpecRec['specimen_n'] = "%.0f" % mpars["specimen_n"]
calculation_type = mpars['calculation_type']
PmagSpecRec["magic_method_codes"] = self.Data[specimen]['magic_method_codes'] + \
":"+calculation_type+":"+dirtype
PmagSpecRec["specimen_comp_n"] = str(
len(self.pmag_results_data["specimens"][specimen]))
PmagSpecRec["specimen_comp_name"] = fit.name
if fit in self.bad_fits:
PmagSpecRec["specimen_flag"] = "b"
else:
PmagSpecRec["specimen_flag"] = "g"
if calculation_type in ["DE-BFL", "DE-BFL-A", "DE-BFL-O"]:
PmagSpecRec['specimen_direction_type'] = 'l'
PmagSpecRec['specimen_mad'] = "%.1f" % float(
mpars["specimen_mad"])
PmagSpecRec['specimen_dang'] = "%.1f" % float(
mpars['specimen_dang'])
PmagSpecRec['specimen_alpha95'] = ""
elif calculation_type in ["DE-BFP"]:
PmagSpecRec['specimen_direction_type'] = 'p'
PmagSpecRec['specimen_mad'] = "%.1f" % float(
mpars['specimen_mad'])
PmagSpecRec['specimen_dang'] = ""
PmagSpecRec['specimen_alpha95'] = ""
if self.data_model == 3.0:
if 'bfv_dec' not in list(mpars.keys()) or \
'bfv_inc' not in list(mpars.keys()):
self.calculate_best_fit_vectors(
high_level_type="sites", high_level_name=PmagSpecRec["er_site_name"], dirtype=dirtype)
mpars = fit.get(dirtype)
try:
PmagSpecRec['dir_bfv_dec'] = "%.1f" % mpars['bfv_dec']
PmagSpecRec['dir_bfv_inc'] = "%.1f" % mpars['bfv_inc']
except KeyError:
print("Error calculating BFV during export of interpretations for %s, %s, %s" % (
fit.name, specimen, dirtype))
elif calculation_type in ["DE-FM"]:
PmagSpecRec['specimen_direction_type'] = 'l'
PmagSpecRec['specimen_mad'] = ""
PmagSpecRec['specimen_dang'] = ""
PmagSpecRec['specimen_alpha95'] = "%.1f" % float(
mpars['specimen_alpha95'])
if dirtype == 'DA-DIR-TILT':
PmagSpecRec['specimen_tilt_correction'] = "100"
elif dirtype == 'DA-DIR-GEO':
PmagSpecRec['specimen_tilt_correction'] = "0"
else:
PmagSpecRec['specimen_tilt_correction'] = "-1"
PmagSpecs.append(PmagSpecRec)
i += 1
# add the 'old' lines with no "LP-DIR" in
if 'pmag_specimens.txt' in list(self.PmagRecsOld.keys()):
for rec in self.PmagRecsOld['pmag_specimens.txt']:
PmagSpecs.append(rec)
PmagSpecs_fixed = self.merge_pmag_recs(PmagSpecs)
if len(PmagSpecs_fixed) == 0:
self.user_warning(
"No data to save to MagIC tables please create some interpretations before saving")
print("No data to save, MagIC tables not written")
return
if self.data_model == 3.0:
# translate demag_gui output to 3.0 DataFrame
ndf2_5 = DataFrame(PmagSpecs_fixed)
if 'specimen_direction_type' in ndf2_5.columns:
# doesn't exist in new model
del ndf2_5['specimen_direction_type']
ndf3_0 = ndf2_5.rename(columns=map_magic.spec_magic2_2_magic3_map)
if 'specimen' in ndf3_0.columns:
ndf3_0 = ndf3_0.set_index("specimen")
# replace the removed specimen column
ndf3_0['specimen'] = ndf3_0.index
# prefer keeping analyst_names in txt
if 'analyst_names' in ndf3_0:
del ndf3_0['analyst_names']
# get current 3.0 DataFrame from contribution object
if 'specimens' not in self.con.tables:
cols = ndf3_0.columns
self.con.add_empty_magic_table('specimens', col_names=cols)
spmdf = self.con.tables['specimens']
# remove translation collisions or deprecated terms
for dc in ["dir_comp_name", "magic_method_codes"]:
if dc in spmdf.df.columns:
del spmdf.df[dc]
# merge previous df with new interpretations DataFrame
# (do not include non-directional data in the merge or else it
# will be overwritten)
# fix index names
spmdf.df.index.name = "specimen_name"
ndf3_0.index.name = "specimen_name"
# pull out directional/non-directional data
if 'method_codes' not in spmdf.df:
spmdf.df['method_codes'] = ''
directional = spmdf.df['method_codes'].str.contains('LP-DIR').astype(bool)
non_directional_df = spmdf.df[~directional]
spmdf.df = spmdf.df[directional]
# merge new interpretations with old specimen information
directional_df = spmdf.merge_dfs(ndf3_0)
# add any missing columns to non_directional_df
for col in directional_df.columns:
if col not in non_directional_df.columns:
non_directional_df[col] = ""
# make sure columns are ordered the same so that we can concatenate
non_directional_df.sort_index(axis='columns', inplace=True)
directional_df.sort_index(axis='columns', inplace=True)
# put directional/non-directional data back together
merged = pd.concat([non_directional_df, directional_df])
merged.sort_index(inplace=True)
spmdf.df = merged
# write to disk
spmdf.write_magic_file(dir_path=self.WD)
TEXT = "specimens interpretations are saved in specimens.txt.\nPress OK to save to samples/sites/locations/ages tables."
self.dlg = wx.MessageDialog(
self, caption="Other Tables", message=TEXT, style=wx.OK | wx.CANCEL)
result = self.show_dlg(self.dlg)
if result == wx.ID_OK:
self.dlg.Destroy()
else:
self.dlg.Destroy()
return
else:
pmag.magic_write(os.path.join(
self.WD, "pmag_specimens.txt"), PmagSpecs_fixed, 'pmag_specimens')
print(("specimen data stored in %s\n" %
os.path.join(self.WD, "pmag_specimens.txt")))
TEXT = "specimens interpretations are saved in pmag_specimens.txt.\nPress OK for pmag_samples/pmag_sites/pmag_results tables."
dlg = wx.MessageDialog(
self, caption="Other Pmag Tables", message=TEXT, style=wx.OK | wx.CANCEL)
result = self.show_dlg(dlg)
if result == wx.ID_OK:
dlg.Destroy()
else:
dlg.Destroy()
return
# --------------------------------
dia = demag_dialogs.magic_pmag_tables_dialog(
None, self.WD, self.Data, self.Data_info)
if self.show_dlg(dia) == wx.ID_OK: # Until the user clicks OK, show the message
self.On_close_MagIC_dialog(dia) | Creates or Updates Specimens or Pmag Specimens MagIC table,
overwrites .redo file for safety, and starts User dialog to
generate other MagIC tables for later contribution to the MagIC
database. The following describes the steps used in the 2.5 data
format to do this:
1. read pmag_specimens.txt, pmag_samples.txt, pmag_sites.txt, and
sort out lines with LP-DIR in magic_codes
2. saves a clean pmag_*.txt files without LP-DIR stuff as
pmag_*.txt.tmp
3. write a new file pmag_specimens.txt
4. merge pmag_specimens.txt and pmag_specimens.txt.tmp using
combine_magic.py
5. delete pmag_specimens.txt.tmp
6 (optional) extracting new pmag_*.txt files (except
pmag_specimens.txt) using specimens_results_magic.py
7: if #6: merge pmag_*.txt and pmag_*.txt.tmp using combine_magic.py
if not #6: save pmag_*.txt.tmp as pmag_*.txt | Below is the the instruction that describes the task:
### Input:
Creates or Updates Specimens or Pmag Specimens MagIC table,
overwrites .redo file for safety, and starts User dialog to
generate other MagIC tables for later contribution to the MagIC
database. The following describes the steps used in the 2.5 data
format to do this:
1. read pmag_specimens.txt, pmag_samples.txt, pmag_sites.txt, and
sort out lines with LP-DIR in magic_codes
2. saves a clean pmag_*.txt files without LP-DIR stuff as
pmag_*.txt.tmp
3. write a new file pmag_specimens.txt
4. merge pmag_specimens.txt and pmag_specimens.txt.tmp using
combine_magic.py
5. delete pmag_specimens.txt.tmp
6 (optional) extracting new pmag_*.txt files (except
pmag_specimens.txt) using specimens_results_magic.py
7: if #6: merge pmag_*.txt and pmag_*.txt.tmp using combine_magic.py
if not #6: save pmag_*.txt.tmp as pmag_*.txt
### Response:
def on_menu_make_MagIC_results_tables(self, event):
"""
Creates or Updates Specimens or Pmag Specimens MagIC table,
overwrites .redo file for safety, and starts User dialog to
generate other MagIC tables for later contribution to the MagIC
database. The following describes the steps used in the 2.5 data
format to do this:
1. read pmag_specimens.txt, pmag_samples.txt, pmag_sites.txt, and
sort out lines with LP-DIR in magic_codes
2. saves a clean pmag_*.txt files without LP-DIR stuff as
pmag_*.txt.tmp
3. write a new file pmag_specimens.txt
4. merge pmag_specimens.txt and pmag_specimens.txt.tmp using
combine_magic.py
5. delete pmag_specimens.txt.tmp
6 (optional) extracting new pmag_*.txt files (except
pmag_specimens.txt) using specimens_results_magic.py
7: if #6: merge pmag_*.txt and pmag_*.txt.tmp using combine_magic.py
if not #6: save pmag_*.txt.tmp as pmag_*.txt
"""
# ---------------------------------------
# save pmag_*.txt.tmp without directional data
# ---------------------------------------
self.on_menu_save_interpretation(None)
# ---------------------------------------
# dialog box to choose coordinate systems for pmag_specimens.txt
# ---------------------------------------
dia = demag_dialogs.magic_pmag_specimens_table_dialog(None)
CoorTypes = []
if self.test_mode:
CoorTypes = ['DA-DIR']
elif dia.ShowModal() == wx.ID_OK: # Until the user clicks OK, show the message
if dia.cb_spec_coor.GetValue() == True:
CoorTypes.append('DA-DIR')
if dia.cb_geo_coor.GetValue() == True:
CoorTypes.append('DA-DIR-GEO')
if dia.cb_tilt_coor.GetValue() == True:
CoorTypes.append('DA-DIR-TILT')
else:
self.user_warning("MagIC tables not saved")
print("MagIC tables not saved")
return
# ------------------------------
self.PmagRecsOld = {}
if self.data_model == 3.0:
FILES = []
else:
FILES = ['pmag_specimens.txt']
for FILE in FILES:
self.PmagRecsOld[FILE] = []
meas_data = []
try:
meas_data, file_type = pmag.magic_read(
os.path.join(self.WD, FILE))
print(("-I- Read old magic file %s\n" %
os.path.join(self.WD, FILE)))
# if FILE !='pmag_specimens.txt':
os.remove(os.path.join(self.WD, FILE))
print(("-I- Delete old magic file %s\n" %
os.path.join(self.WD, FILE)))
except (OSError, IOError) as e:
continue
for rec in meas_data:
if "magic_method_codes" in list(rec.keys()):
if "LP-DIR" not in rec['magic_method_codes'] and "DE-" not in rec['magic_method_codes']:
self.PmagRecsOld[FILE].append(rec)
# ---------------------------------------
# write a new pmag_specimens.txt
# ---------------------------------------
specimens_list = list(self.pmag_results_data['specimens'].keys())
specimens_list.sort()
PmagSpecs = []
for specimen in specimens_list:
for dirtype in CoorTypes:
i = 0
for fit in self.pmag_results_data['specimens'][specimen]:
mpars = fit.get(dirtype)
if not mpars:
mpars = self.get_PCA_parameters(
specimen, fit, fit.tmin, fit.tmax, dirtype, fit.PCA_type)
if not mpars or 'specimen_dec' not in list(mpars.keys()):
self.user_warning("Could not calculate interpretation for specimen %s and fit %s in coordinate system %s while exporting pmag tables, skipping" % (
specimen, fit.name, dirtype))
continue
PmagSpecRec = {}
PmagSpecRec["magic_software_packages"] = pmag.get_version(
) + ': demag_gui'
PmagSpecRec["er_specimen_name"] = specimen
PmagSpecRec["er_sample_name"] = self.Data_hierarchy['sample_of_specimen'][specimen]
PmagSpecRec["er_site_name"] = self.Data_hierarchy['site_of_specimen'][specimen]
PmagSpecRec["er_location_name"] = self.Data_hierarchy['location_of_specimen'][specimen]
if specimen in list(self.Data_hierarchy['expedition_name_of_specimen'].keys()):
PmagSpecRec["er_expedition_name"] = self.Data_hierarchy['expedition_name_of_specimen'][specimen]
PmagSpecRec["er_citation_names"] = "This study"
if "magic_experiment_name" in self.Data[specimen]:
PmagSpecRec["magic_experiment_names"] = self.Data[specimen]["magic_experiment_name"]
if 'magic_instrument_codes' in list(self.Data[specimen].keys()):
PmagSpecRec["magic_instrument_codes"] = self.Data[specimen]['magic_instrument_codes']
PmagSpecRec['specimen_correction'] = 'u'
PmagSpecRec['specimen_direction_type'] = mpars["specimen_direction_type"]
PmagSpecRec['specimen_dec'] = "%.1f" % mpars["specimen_dec"]
PmagSpecRec['specimen_inc'] = "%.1f" % mpars["specimen_inc"]
PmagSpecRec['specimen_flag'] = "g"
if fit in self.bad_fits:
PmagSpecRec['specimen_flag'] = "b"
if "C" in fit.tmin or "C" in fit.tmax:
PmagSpecRec['measurement_step_unit'] = "K"
else:
PmagSpecRec['measurement_step_unit'] = "T"
if "C" in fit.tmin:
PmagSpecRec['measurement_step_min'] = "%.0f" % (
mpars["measurement_step_min"]+273.)
elif "mT" in fit.tmin:
PmagSpecRec['measurement_step_min'] = "%8.3e" % (
mpars["measurement_step_min"]*1e-3)
else:
if PmagSpecRec['measurement_step_unit'] == "K":
PmagSpecRec['measurement_step_min'] = "%.0f" % (
mpars["measurement_step_min"]+273.)
else:
PmagSpecRec['measurement_step_min'] = "%8.3e" % (
mpars["measurement_step_min"]*1e-3)
if "C" in fit.tmax:
PmagSpecRec['measurement_step_max'] = "%.0f" % (
mpars["measurement_step_max"]+273.)
elif "mT" in fit.tmax:
PmagSpecRec['measurement_step_max'] = "%8.3e" % (
mpars["measurement_step_max"]*1e-3)
else:
if PmagSpecRec['measurement_step_unit'] == "K":
PmagSpecRec['measurement_step_min'] = "%.0f" % (
mpars["measurement_step_min"]+273.)
else:
PmagSpecRec['measurement_step_min'] = "%8.3e" % (
mpars["measurement_step_min"]*1e-3)
PmagSpecRec['specimen_n'] = "%.0f" % mpars["specimen_n"]
calculation_type = mpars['calculation_type']
PmagSpecRec["magic_method_codes"] = self.Data[specimen]['magic_method_codes'] + \
":"+calculation_type+":"+dirtype
PmagSpecRec["specimen_comp_n"] = str(
len(self.pmag_results_data["specimens"][specimen]))
PmagSpecRec["specimen_comp_name"] = fit.name
if fit in self.bad_fits:
PmagSpecRec["specimen_flag"] = "b"
else:
PmagSpecRec["specimen_flag"] = "g"
if calculation_type in ["DE-BFL", "DE-BFL-A", "DE-BFL-O"]:
PmagSpecRec['specimen_direction_type'] = 'l'
PmagSpecRec['specimen_mad'] = "%.1f" % float(
mpars["specimen_mad"])
PmagSpecRec['specimen_dang'] = "%.1f" % float(
mpars['specimen_dang'])
PmagSpecRec['specimen_alpha95'] = ""
elif calculation_type in ["DE-BFP"]:
PmagSpecRec['specimen_direction_type'] = 'p'
PmagSpecRec['specimen_mad'] = "%.1f" % float(
mpars['specimen_mad'])
PmagSpecRec['specimen_dang'] = ""
PmagSpecRec['specimen_alpha95'] = ""
if self.data_model == 3.0:
if 'bfv_dec' not in list(mpars.keys()) or \
'bfv_inc' not in list(mpars.keys()):
self.calculate_best_fit_vectors(
high_level_type="sites", high_level_name=PmagSpecRec["er_site_name"], dirtype=dirtype)
mpars = fit.get(dirtype)
try:
PmagSpecRec['dir_bfv_dec'] = "%.1f" % mpars['bfv_dec']
PmagSpecRec['dir_bfv_inc'] = "%.1f" % mpars['bfv_inc']
except KeyError:
print("Error calculating BFV during export of interpretations for %s, %s, %s" % (
fit.name, specimen, dirtype))
elif calculation_type in ["DE-FM"]:
PmagSpecRec['specimen_direction_type'] = 'l'
PmagSpecRec['specimen_mad'] = ""
PmagSpecRec['specimen_dang'] = ""
PmagSpecRec['specimen_alpha95'] = "%.1f" % float(
mpars['specimen_alpha95'])
if dirtype == 'DA-DIR-TILT':
PmagSpecRec['specimen_tilt_correction'] = "100"
elif dirtype == 'DA-DIR-GEO':
PmagSpecRec['specimen_tilt_correction'] = "0"
else:
PmagSpecRec['specimen_tilt_correction'] = "-1"
PmagSpecs.append(PmagSpecRec)
i += 1
# add the 'old' lines with no "LP-DIR" in
if 'pmag_specimens.txt' in list(self.PmagRecsOld.keys()):
for rec in self.PmagRecsOld['pmag_specimens.txt']:
PmagSpecs.append(rec)
PmagSpecs_fixed = self.merge_pmag_recs(PmagSpecs)
if len(PmagSpecs_fixed) == 0:
self.user_warning(
"No data to save to MagIC tables please create some interpretations before saving")
print("No data to save, MagIC tables not written")
return
if self.data_model == 3.0:
# translate demag_gui output to 3.0 DataFrame
ndf2_5 = DataFrame(PmagSpecs_fixed)
if 'specimen_direction_type' in ndf2_5.columns:
# doesn't exist in new model
del ndf2_5['specimen_direction_type']
ndf3_0 = ndf2_5.rename(columns=map_magic.spec_magic2_2_magic3_map)
if 'specimen' in ndf3_0.columns:
ndf3_0 = ndf3_0.set_index("specimen")
# replace the removed specimen column
ndf3_0['specimen'] = ndf3_0.index
# prefer keeping analyst_names in txt
if 'analyst_names' in ndf3_0:
del ndf3_0['analyst_names']
# get current 3.0 DataFrame from contribution object
if 'specimens' not in self.con.tables:
cols = ndf3_0.columns
self.con.add_empty_magic_table('specimens', col_names=cols)
spmdf = self.con.tables['specimens']
# remove translation collisions or deprecated terms
for dc in ["dir_comp_name", "magic_method_codes"]:
if dc in spmdf.df.columns:
del spmdf.df[dc]
# merge previous df with new interpretations DataFrame
# (do not include non-directional data in the merge or else it
# will be overwritten)
# fix index names
spmdf.df.index.name = "specimen_name"
ndf3_0.index.name = "specimen_name"
# pull out directional/non-directional data
if 'method_codes' not in spmdf.df:
spmdf.df['method_codes'] = ''
directional = spmdf.df['method_codes'].str.contains('LP-DIR').astype(bool)
non_directional_df = spmdf.df[~directional]
spmdf.df = spmdf.df[directional]
# merge new interpretations with old specimen information
directional_df = spmdf.merge_dfs(ndf3_0)
# add any missing columns to non_directional_df
for col in directional_df.columns:
if col not in non_directional_df.columns:
non_directional_df[col] = ""
# make sure columns are ordered the same so that we can concatenate
non_directional_df.sort_index(axis='columns', inplace=True)
directional_df.sort_index(axis='columns', inplace=True)
# put directional/non-directional data back together
merged = pd.concat([non_directional_df, directional_df])
merged.sort_index(inplace=True)
spmdf.df = merged
# write to disk
spmdf.write_magic_file(dir_path=self.WD)
TEXT = "specimens interpretations are saved in specimens.txt.\nPress OK to save to samples/sites/locations/ages tables."
self.dlg = wx.MessageDialog(
self, caption="Other Tables", message=TEXT, style=wx.OK | wx.CANCEL)
result = self.show_dlg(self.dlg)
if result == wx.ID_OK:
self.dlg.Destroy()
else:
self.dlg.Destroy()
return
else:
pmag.magic_write(os.path.join(
self.WD, "pmag_specimens.txt"), PmagSpecs_fixed, 'pmag_specimens')
print(("specimen data stored in %s\n" %
os.path.join(self.WD, "pmag_specimens.txt")))
TEXT = "specimens interpretations are saved in pmag_specimens.txt.\nPress OK for pmag_samples/pmag_sites/pmag_results tables."
dlg = wx.MessageDialog(
self, caption="Other Pmag Tables", message=TEXT, style=wx.OK | wx.CANCEL)
result = self.show_dlg(dlg)
if result == wx.ID_OK:
dlg.Destroy()
else:
dlg.Destroy()
return
# --------------------------------
dia = demag_dialogs.magic_pmag_tables_dialog(
None, self.WD, self.Data, self.Data_info)
if self.show_dlg(dia) == wx.ID_OK: # Until the user clicks OK, show the message
self.On_close_MagIC_dialog(dia) |
def get_sla_template_path(service_type=ServiceTypes.ASSET_ACCESS):
"""
Get the template for a ServiceType.
:param service_type: ServiceTypes
:return: Path of the template, str
"""
if service_type == ServiceTypes.ASSET_ACCESS:
name = 'access_sla_template.json'
elif service_type == ServiceTypes.CLOUD_COMPUTE:
name = 'compute_sla_template.json'
elif service_type == ServiceTypes.FITCHAIN_COMPUTE:
name = 'fitchain_sla_template.json'
else:
raise ValueError(f'Invalid/unsupported service agreement type {service_type}')
return os.path.join(os.path.sep, *os.path.realpath(__file__).split(os.path.sep)[1:-1], name) | Get the template for a ServiceType.
:param service_type: ServiceTypes
:return: Path of the template, str | Below is the instruction that describes the task:
### Input:
Get the template for a ServiceType.
:param service_type: ServiceTypes
:return: Path of the template, str
### Response:
def get_sla_template_path(service_type=ServiceTypes.ASSET_ACCESS):
"""
Get the template for a ServiceType.
:param service_type: ServiceTypes
:return: Path of the template, str
"""
if service_type == ServiceTypes.ASSET_ACCESS:
name = 'access_sla_template.json'
elif service_type == ServiceTypes.CLOUD_COMPUTE:
name = 'compute_sla_template.json'
elif service_type == ServiceTypes.FITCHAIN_COMPUTE:
name = 'fitchain_sla_template.json'
else:
raise ValueError(f'Invalid/unsupported service agreement type {service_type}')
return os.path.join(os.path.sep, *os.path.realpath(__file__).split(os.path.sep)[1:-1], name) |
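A minimal usage sketch of the function above (hedged: it assumes `ServiceTypes` is importable from the surrounding package and that the resolved path points at a readable JSON file; the variable names are illustrative only):
import json
# Resolve the bundled SLA template for a compute service and load it.
template_path = get_sla_template_path(ServiceTypes.CLOUD_COMPUTE)
with open(template_path) as f:
    sla_template = json.load(f)
print(sorted(sla_template.keys()))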
def delete_route_table(route_table_id=None, route_table_name=None,
region=None, key=None, keyid=None, profile=None):
'''
Deletes a route table.
CLI Examples:
.. code-block:: bash
salt myminion boto_vpc.delete_route_table route_table_id='rtb-1f382e7d'
salt myminion boto_vpc.delete_route_table route_table_name='myroutetable'
'''
return _delete_resource(resource='route_table', name=route_table_name,
resource_id=route_table_id, region=region, key=key,
keyid=keyid, profile=profile) | Deletes a route table.
CLI Examples:
.. code-block:: bash
salt myminion boto_vpc.delete_route_table route_table_id='rtb-1f382e7d'
salt myminion boto_vpc.delete_route_table route_table_name='myroutetable' | Below is the instruction that describes the task:
### Input:
Deletes a route table.
CLI Examples:
.. code-block:: bash
salt myminion boto_vpc.delete_route_table route_table_id='rtb-1f382e7d'
salt myminion boto_vpc.delete_route_table route_table_name='myroutetable'
### Response:
def delete_route_table(route_table_id=None, route_table_name=None,
region=None, key=None, keyid=None, profile=None):
'''
Deletes a route table.
CLI Examples:
.. code-block:: bash
salt myminion boto_vpc.delete_route_table route_table_id='rtb-1f382e7d'
salt myminion boto_vpc.delete_route_table route_table_name='myroutetable'
'''
return _delete_resource(resource='route_table', name=route_table_name,
resource_id=route_table_id, region=region, key=key,
keyid=keyid, profile=profile) |
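The docstring above gives the salt CLI form; the sketch below shows the equivalent direct Python call (hedged: the `region` and `profile` values are placeholders, and the exact return payload comes from the `_delete_resource` helper referenced above):
# Delete a route table by name; keyword arguments mirror the CLI examples.
result = delete_route_table(route_table_name='myroutetable',
                            region='us-east-1',
                            profile='my-aws-profile')
print(result)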
def get_sentry(self, username):
"""
Returns contents of sentry file for the given username
.. note::
returns ``None`` if :attr:`credential_location` is not set, or file is not found/inaccessible
:param username: username
:type username: :class:`str`
:return: sentry file contents, or ``None``
:rtype: :class:`bytes`, :class:`None`
"""
filepath = self._get_sentry_path(username)
if filepath and os.path.isfile(filepath):
try:
with open(filepath, 'rb') as f:
return f.read()
except IOError as e:
self._LOG.error("get_sentry: %s" % str(e))
return None | Returns contents of sentry file for the given username
.. note::
returns ``None`` if :attr:`credential_location` is not set, or file is not found/inaccessible
:param username: username
:type username: :class:`str`
:return: sentry file contents, or ``None``
:rtype: :class:`bytes`, :class:`None` | Below is the instruction that describes the task:
### Input:
Returns contents of sentry file for the given username
.. note::
returns ``None`` if :attr:`credential_location` is not set, or file is not found/inaccessible
:param username: username
:type username: :class:`str`
:return: sentry file contents, or ``None``
:rtype: :class:`bytes`, :class:`None`
### Response:
def get_sentry(self, username):
"""
Returns contents of sentry file for the given username
.. note::
returns ``None`` if :attr:`credential_location` is not set, or file is not found/inaccessible
:param username: username
:type username: :class:`str`
:return: sentry file contents, or ``None``
:rtype: :class:`bytes`, :class:`None`
"""
filepath = self._get_sentry_path(username)
if filepath and os.path.isfile(filepath):
try:
with open(filepath, 'rb') as f:
return f.read()
except IOError as e:
self._LOG.error("get_sentry: %s" % str(e))
return None |
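A short, hypothetical usage sketch (assumes `client` is an instance of the class defining `get_sentry` and that `credential_location` has been configured; the username is illustrative):
# Returns the raw sentry bytes, or None when no file is cached.
sentry = client.get_sentry("some_username")
if sentry is None:
    print("no cached sentry file for this user")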
def monte_carlo_csiszar_f_divergence(
f,
p_log_prob,
q,
num_draws,
use_reparametrization=None,
seed=None,
name=None):
"""Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergence for Csiszar-function f is given by:
```none
D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]
~= m**-1 sum_j^m f( p(x_j) / q(x_j) ),
where x_j ~iid q(X)
```
Tricks: Reparameterization and Score-Gradient
When q is "reparameterized", i.e., a diffeomorphic transformation of a
parameterless distribution (e.g.,
`Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and
expectation, i.e.,
`grad[Avg{ s_i : i=1...n }] = Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}`
and `s_i = f(x_i), x_i ~iid q(X)`.
However, if q is not reparameterized, TensorFlow's gradient will be incorrect
since the chain-rule stops at samples of unreparameterized distributions. In
this circumstance using the Score-Gradient trick results in an unbiased
gradient, i.e.,
```none
grad[ E_q[f(X)] ]
= grad[ int dx q(x) f(x) ]
= int dx grad[ q(x) f(x) ]
= int dx [ q'(x) f(x) + q(x) f'(x) ]
= int dx q(x) [q'(x) / q(x) f(x) + f'(x) ]
= int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ]
= E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ]
```
Unless `q.reparameterization_type != tfd.FULLY_REPARAMETERIZED` it is
usually preferable to set `use_reparametrization = True`.
Example Application:
The Csiszar f-Divergence is a useful framework for variational inference.
I.e., observe that,
```none
f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] )
<= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ]
:= D_f[p(x, Z), q(Z | x)]
```
The inequality follows from the fact that the "perspective" of `f`, i.e.,
  `(s, t) |-> t f(s / t)`, is convex in `(s, t)` when `s/t in domain(f)` and
`t` is a real. Since the above framework includes the popular Evidence Lower
BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework
"Evidence Divergence Bound Optimization" (EDBO).
Args:
f: Python `callable` representing a Csiszar-function in log-space, i.e.,
takes `p_log_prob(q_samples) - q.log_prob(q_samples)`.
p_log_prob: Python `callable` taking (a batch of) samples from `q` and
returning the natural-log of the probability under distribution `p`.
(In variational inference `p` is the joint distribution.)
q: `tf.Distribution`-like instance; must implement:
`reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`.
(In variational inference `q` is the approximate posterior distribution.)
num_draws: Integer scalar number of draws used to approximate the
f-Divergence expectation.
use_reparametrization: Python `bool`. When `None` (the default),
automatically set to:
`q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`.
When `True` uses the standard Monte-Carlo average. When `False` uses the
score-gradient trick. (See above for details.) When `False`, consider
using `csiszar_vimco`.
seed: Python `int` seed for `q.sample`.
name: Python `str` name prefixed to Ops created by this function.
Returns:
monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo
approximation of the Csiszar f-Divergence.
Raises:
ValueError: if `q` is not a reparameterized distribution and
`use_reparametrization = True`. A distribution `q` is said to be
"reparameterized" when its samples are generated by transforming the
samples of another distribution which does not depend on the
parameterization of `q`. This property ensures the gradient (with respect
to parameters) is valid.
TypeError: if `p_log_prob` is not a Python `callable`.
"""
reparameterization_types = tf.nest.flatten(q.reparameterization_type)
with tf.compat.v1.name_scope(name, "monte_carlo_csiszar_f_divergence",
[num_draws]):
if use_reparametrization is None:
use_reparametrization = all(
reparameterization_type == tfd.FULLY_REPARAMETERIZED
for reparameterization_type in reparameterization_types)
elif (use_reparametrization and
any(reparameterization_type != tfd.FULLY_REPARAMETERIZED
for reparameterization_type in reparameterization_types)):
# TODO(jvdillon): Consider only raising an exception if the gradient is
# requested.
raise ValueError(
"Distribution `q` must be reparameterized, i.e., a diffeomorphic "
"transformation of a parameterless distribution. (Otherwise this "
"function has a biased gradient.)")
if not callable(p_log_prob):
raise TypeError("`p_log_prob` must be a Python `callable` function.")
return monte_carlo.expectation(
f=lambda q_samples: f(p_log_prob(q_samples) - q.log_prob(q_samples)),
samples=q.sample(num_draws, seed=seed),
log_prob=q.log_prob, # Only used if use_reparametrization=False.
use_reparametrization=use_reparametrization) | Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergence for Csiszar-function f is given by:
```none
D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]
~= m**-1 sum_j^m f( p(x_j) / q(x_j) ),
where x_j ~iid q(X)
```
Tricks: Reparameterization and Score-Gradient
When q is "reparameterized", i.e., a diffeomorphic transformation of a
parameterless distribution (e.g.,
`Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and
expectation, i.e.,
`grad[Avg{ s_i : i=1...n }] = Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}`
and `s_i = f(x_i), x_i ~iid q(X)`.
However, if q is not reparameterized, TensorFlow's gradient will be incorrect
since the chain-rule stops at samples of unreparameterized distributions. In
this circumstance using the Score-Gradient trick results in an unbiased
gradient, i.e.,
```none
grad[ E_q[f(X)] ]
= grad[ int dx q(x) f(x) ]
= int dx grad[ q(x) f(x) ]
= int dx [ q'(x) f(x) + q(x) f'(x) ]
= int dx q(x) [q'(x) / q(x) f(x) + f'(x) ]
= int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ]
= E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ]
```
Unless `q.reparameterization_type != tfd.FULLY_REPARAMETERIZED` it is
usually preferable to set `use_reparametrization = True`.
Example Application:
The Csiszar f-Divergence is a useful framework for variational inference.
I.e., observe that,
```none
f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] )
<= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ]
:= D_f[p(x, Z), q(Z | x)]
```
The inequality follows from the fact that the "perspective" of `f`, i.e.,
`(s, t) |-> t f(s / t)`, is convex in `(s, t)` when `s/t in domain(f)` and
`t` is a real. Since the above framework includes the popular Evidence Lower
BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework
"Evidence Divergence Bound Optimization" (EDBO).
Args:
f: Python `callable` representing a Csiszar-function in log-space, i.e.,
takes `p_log_prob(q_samples) - q.log_prob(q_samples)`.
p_log_prob: Python `callable` taking (a batch of) samples from `q` and
returning the natural-log of the probability under distribution `p`.
(In variational inference `p` is the joint distribution.)
q: `tf.Distribution`-like instance; must implement:
`reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`.
(In variational inference `q` is the approximate posterior distribution.)
num_draws: Integer scalar number of draws used to approximate the
f-Divergence expectation.
use_reparametrization: Python `bool`. When `None` (the default),
automatically set to:
`q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`.
When `True` uses the standard Monte-Carlo average. When `False` uses the
score-gradient trick. (See above for details.) When `False`, consider
using `csiszar_vimco`.
seed: Python `int` seed for `q.sample`.
name: Python `str` name prefixed to Ops created by this function.
Returns:
monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo
approximation of the Csiszar f-Divergence.
Raises:
ValueError: if `q` is not a reparameterized distribution and
`use_reparametrization = True`. A distribution `q` is said to be
"reparameterized" when its samples are generated by transforming the
samples of another distribution which does not depend on the
parameterization of `q`. This property ensures the gradient (with respect
to parameters) is valid.
TypeError: if `p_log_prob` is not a Python `callable`. | Below is the instruction that describes the task:
### Input:
Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergence for Csiszar-function f is given by:
```none
D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]
~= m**-1 sum_j^m f( p(x_j) / q(x_j) ),
where x_j ~iid q(X)
```
Tricks: Reparameterization and Score-Gradient
When q is "reparameterized", i.e., a diffeomorphic transformation of a
parameterless distribution (e.g.,
`Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and
expectation, i.e.,
`grad[Avg{ s_i : i=1...n }] = Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}`
and `s_i = f(x_i), x_i ~iid q(X)`.
However, if q is not reparameterized, TensorFlow's gradient will be incorrect
since the chain-rule stops at samples of unreparameterized distributions. In
this circumstance using the Score-Gradient trick results in an unbiased
gradient, i.e.,
```none
grad[ E_q[f(X)] ]
= grad[ int dx q(x) f(x) ]
= int dx grad[ q(x) f(x) ]
= int dx [ q'(x) f(x) + q(x) f'(x) ]
= int dx q(x) [q'(x) / q(x) f(x) + f'(x) ]
= int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ]
= E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ]
```
Unless `q.reparameterization_type != tfd.FULLY_REPARAMETERIZED` it is
usually preferable to set `use_reparametrization = True`.
Example Application:
The Csiszar f-Divergence is a useful framework for variational inference.
I.e., observe that,
```none
f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] )
<= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ]
:= D_f[p(x, Z), q(Z | x)]
```
The inequality follows from the fact that the "perspective" of `f`, i.e.,
`(s, t) |-> t f(s / t)`, is convex in `(s, t)` when `s/t in domain(f)` and
`t` is a real. Since the above framework includes the popular Evidence Lower
BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework
"Evidence Divergence Bound Optimization" (EDBO).
Args:
f: Python `callable` representing a Csiszar-function in log-space, i.e.,
takes `p_log_prob(q_samples) - q.log_prob(q_samples)`.
p_log_prob: Python `callable` taking (a batch of) samples from `q` and
returning the natural-log of the probability under distribution `p`.
(In variational inference `p` is the joint distribution.)
q: `tf.Distribution`-like instance; must implement:
`reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`.
(In variational inference `q` is the approximate posterior distribution.)
num_draws: Integer scalar number of draws used to approximate the
f-Divergence expectation.
use_reparametrization: Python `bool`. When `None` (the default),
automatically set to:
`q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`.
When `True` uses the standard Monte-Carlo average. When `False` uses the
score-gradient trick. (See above for details.) When `False`, consider
using `csiszar_vimco`.
seed: Python `int` seed for `q.sample`.
name: Python `str` name prefixed to Ops created by this function.
Returns:
monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo
approximation of the Csiszar f-Divergence.
Raises:
ValueError: if `q` is not a reparameterized distribution and
`use_reparametrization = True`. A distribution `q` is said to be
"reparameterized" when its samples are generated by transforming the
samples of another distribution which does not depend on the
parameterization of `q`. This property ensures the gradient (with respect
to parameters) is valid.
TypeError: if `p_log_prob` is not a Python `callable`.
### Response:
def monte_carlo_csiszar_f_divergence(
f,
p_log_prob,
q,
num_draws,
use_reparametrization=None,
seed=None,
name=None):
"""Monte-Carlo approximation of the Csiszar f-Divergence.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Csiszar f-Divergence for Csiszar-function f is given by:
```none
D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]
~= m**-1 sum_j^m f( p(x_j) / q(x_j) ),
where x_j ~iid q(X)
```
Tricks: Reparameterization and Score-Gradient
When q is "reparameterized", i.e., a diffeomorphic transformation of a
parameterless distribution (e.g.,
`Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and
expectation, i.e.,
`grad[Avg{ s_i : i=1...n }] = Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}`
and `s_i = f(x_i), x_i ~iid q(X)`.
However, if q is not reparameterized, TensorFlow's gradient will be incorrect
since the chain-rule stops at samples of unreparameterized distributions. In
this circumstance using the Score-Gradient trick results in an unbiased
gradient, i.e.,
```none
grad[ E_q[f(X)] ]
= grad[ int dx q(x) f(x) ]
= int dx grad[ q(x) f(x) ]
= int dx [ q'(x) f(x) + q(x) f'(x) ]
= int dx q(x) [q'(x) / q(x) f(x) + f'(x) ]
= int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ]
= E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ]
```
Unless `q.reparameterization_type != tfd.FULLY_REPARAMETERIZED` it is
usually preferable to set `use_reparametrization = True`.
Example Application:
The Csiszar f-Divergence is a useful framework for variational inference.
I.e., observe that,
```none
f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] )
<= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ]
:= D_f[p(x, Z), q(Z | x)]
```
The inequality follows from the fact that the "perspective" of `f`, i.e.,
  `(s, t) |-> t f(s / t)`, is convex in `(s, t)` when `s/t in domain(f)` and
`t` is a real. Since the above framework includes the popular Evidence Lower
BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework
"Evidence Divergence Bound Optimization" (EDBO).
Args:
f: Python `callable` representing a Csiszar-function in log-space, i.e.,
takes `p_log_prob(q_samples) - q.log_prob(q_samples)`.
p_log_prob: Python `callable` taking (a batch of) samples from `q` and
returning the natural-log of the probability under distribution `p`.
(In variational inference `p` is the joint distribution.)
q: `tf.Distribution`-like instance; must implement:
`reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`.
(In variational inference `q` is the approximate posterior distribution.)
num_draws: Integer scalar number of draws used to approximate the
f-Divergence expectation.
use_reparametrization: Python `bool`. When `None` (the default),
automatically set to:
`q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`.
When `True` uses the standard Monte-Carlo average. When `False` uses the
score-gradient trick. (See above for details.) When `False`, consider
using `csiszar_vimco`.
seed: Python `int` seed for `q.sample`.
name: Python `str` name prefixed to Ops created by this function.
Returns:
monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo
approximation of the Csiszar f-Divergence.
Raises:
ValueError: if `q` is not a reparameterized distribution and
`use_reparametrization = True`. A distribution `q` is said to be
"reparameterized" when its samples are generated by transforming the
samples of another distribution which does not depend on the
parameterization of `q`. This property ensures the gradient (with respect
to parameters) is valid.
TypeError: if `p_log_prob` is not a Python `callable`.
"""
reparameterization_types = tf.nest.flatten(q.reparameterization_type)
with tf.compat.v1.name_scope(name, "monte_carlo_csiszar_f_divergence",
[num_draws]):
if use_reparametrization is None:
use_reparametrization = all(
reparameterization_type == tfd.FULLY_REPARAMETERIZED
for reparameterization_type in reparameterization_types)
elif (use_reparametrization and
any(reparameterization_type != tfd.FULLY_REPARAMETERIZED
for reparameterization_type in reparameterization_types)):
# TODO(jvdillon): Consider only raising an exception if the gradient is
# requested.
raise ValueError(
"Distribution `q` must be reparameterized, i.e., a diffeomorphic "
"transformation of a parameterless distribution. (Otherwise this "
"function has a biased gradient.)")
if not callable(p_log_prob):
raise TypeError("`p_log_prob` must be a Python `callable` function.")
return monte_carlo.expectation(
f=lambda q_samples: f(p_log_prob(q_samples) - q.log_prob(q_samples)),
samples=q.sample(num_draws, seed=seed),
log_prob=q.log_prob, # Only used if use_reparametrization=False.
use_reparametrization=use_reparametrization) |
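To make the ELBO special case described in the docstring concrete, here is a hedged call sketch; `target_log_prob` and `surrogate_posterior` are placeholders for a model's joint log-density function and a fully reparameterized `tfd` distribution, not names from the library:
# With f(u) = -log(u), the log-space Csiszar function is f(logu) = -logu,
# so this Monte Carlo estimate approximates KL[q || p], i.e. the negative
# ELBO up to a constant when p_log_prob is the unnormalized joint density.
neg_elbo = monte_carlo_csiszar_f_divergence(
    f=lambda logu: -logu,
    p_log_prob=target_log_prob,
    q=surrogate_posterior,
    num_draws=16,
    seed=42)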
def process_result(transmute_func, context, result, exc, content_type):
"""
process a result:
transmute_func: the transmute_func function that returned the response.
context: the transmute_context to use.
result: the return value of the function, which will be serialized and
returned back in the API.
exc: the exception object. For Python 2, the traceback should
be attached via the __traceback__ attribute. This is done automatically
in Python 3.
content_type: the content type that request is requesting for a return type.
(e.g. application/json)
"""
if isinstance(result, Response):
response = result
else:
response = Response(
result=result, code=transmute_func.success_code, success=True
)
if exc:
if isinstance(exc, APIException):
response.result = str(exc)
response.success = False
response.code = exc.code
else:
reraise(type(exc), exc, getattr(exc, "__traceback__", None))
else:
return_type = transmute_func.get_response_by_code(response.code)
if return_type:
response.result = context.serializers.dump(return_type, response.result)
try:
content_type = str(content_type)
serializer = context.contenttype_serializers[content_type]
except NoSerializerFound:
serializer = context.contenttype_serializers.default
content_type = serializer.main_type
if response.success:
result = context.response_shape.create_body(attr.asdict(response))
response.result = result
else:
response.result = attr.asdict(response)
body = serializer.dump(response.result)
# keeping the return type a dict to
# reduce performance overhead.
return {
"body": body,
"code": response.code,
"content-type": content_type,
"headers": response.headers,
} | process a result:
transmute_func: the transmute_func function that returned the response.
context: the transmute_context to use.
result: the return value of the function, which will be serialized and
returned back in the API.
exc: the exception object. For Python 2, the traceback should
be attached via the __traceback__ attribute. This is done automatically
in Python 3.
content_type: the content type that request is requesting for a return type.
(e.g. application/json) | Below is the instruction that describes the task:
### Input:
process a result:
transmute_func: the transmute_func function that returned the response.
context: the transmute_context to use.
result: the return value of the function, which will be serialized and
returned back in the API.
exc: the exception object. For Python 2, the traceback should
be attached via the __traceback__ attribute. This is done automatically
in Python 3.
content_type: the content type that request is requesting for a return type.
(e.g. application/json)
### Response:
def process_result(transmute_func, context, result, exc, content_type):
"""
process a result:
transmute_func: the transmute_func function that returned the response.
context: the transmute_context to use.
result: the return value of the function, which will be serialized and
returned back in the API.
exc: the exception object. For Python 2, the traceback should
be attached via the __traceback__ attribute. This is done automatically
in Python 3.
content_type: the content type that request is requesting for a return type.
(e.g. application/json)
"""
if isinstance(result, Response):
response = result
else:
response = Response(
result=result, code=transmute_func.success_code, success=True
)
if exc:
if isinstance(exc, APIException):
response.result = str(exc)
response.success = False
response.code = exc.code
else:
reraise(type(exc), exc, getattr(exc, "__traceback__", None))
else:
return_type = transmute_func.get_response_by_code(response.code)
if return_type:
response.result = context.serializers.dump(return_type, response.result)
try:
content_type = str(content_type)
serializer = context.contenttype_serializers[content_type]
except NoSerializerFound:
serializer = context.contenttype_serializers.default
content_type = serializer.main_type
if response.success:
result = context.response_shape.create_body(attr.asdict(response))
response.result = result
else:
response.result = attr.asdict(response)
body = serializer.dump(response.result)
# keeping the return type a dict to
# reduce performance overhead.
return {
"body": body,
"code": response.code,
"content-type": content_type,
"headers": response.headers,
} |
def parseURIRaw(str, raw):
"""Parse an URI but allows to keep intact the original
fragments. URI-reference = URI / relative-ref """
ret = libxml2mod.xmlParseURIRaw(str, raw)
if ret is None:raise uriError('xmlParseURIRaw() failed')
return URI(_obj=ret) | Parse an URI but allows to keep intact the original
fragments. URI-reference = URI / relative-ref | Below is the instruction that describes the task:
### Input:
Parse an URI but allows to keep intact the original
fragments. URI-reference = URI / relative-ref
### Response:
def parseURIRaw(str, raw):
"""Parse an URI but allows to keep intact the original
fragments. URI-reference = URI / relative-ref """
ret = libxml2mod.xmlParseURIRaw(str, raw)
if ret is None:raise uriError('xmlParseURIRaw() failed')
return URI(_obj=ret) |
def get_pipe_series_output(commands: Sequence[str],
stdinput: BinaryIO = None) -> bytes:
"""
Get the output from a piped series of commands.
Args:
commands: sequence of command strings
stdinput: optional ``stdin`` data to feed into the start of the pipe
Returns:
``stdout`` from the end of the pipe
"""
# Python arrays indexes are zero-based, i.e. an array is indexed from
# 0 to len(array)-1.
# The range/xrange commands, by default, start at 0 and go to one less
# than the maximum specified.
# print commands
processes = [] # type: List[subprocess.Popen]
for i in range(len(commands)):
if i == 0: # first processes
processes.append(
subprocess.Popen(
shlex.split(commands[i]),
stdin=subprocess.PIPE,
stdout=subprocess.PIPE
)
)
else: # subsequent ones
processes.append(
subprocess.Popen(
shlex.split(commands[i]),
stdin=processes[i - 1].stdout,
stdout=subprocess.PIPE
)
)
return processes[len(processes) - 1].communicate(stdinput)[0] | Get the output from a piped series of commands.
Args:
commands: sequence of command strings
stdinput: optional ``stdin`` data to feed into the start of the pipe
Returns:
``stdout`` from the end of the pipe | Below is the instruction that describes the task:
### Input:
Get the output from a piped series of commands.
Args:
commands: sequence of command strings
stdinput: optional ``stdin`` data to feed into the start of the pipe
Returns:
``stdout`` from the end of the pipe
### Response:
def get_pipe_series_output(commands: Sequence[str],
stdinput: BinaryIO = None) -> bytes:
"""
Get the output from a piped series of commands.
Args:
commands: sequence of command strings
stdinput: optional ``stdin`` data to feed into the start of the pipe
Returns:
``stdout`` from the end of the pipe
"""
# Python arrays indexes are zero-based, i.e. an array is indexed from
# 0 to len(array)-1.
# The range/xrange commands, by default, start at 0 and go to one less
# than the maximum specified.
# print commands
processes = [] # type: List[subprocess.Popen]
for i in range(len(commands)):
if i == 0: # first processes
processes.append(
subprocess.Popen(
shlex.split(commands[i]),
stdin=subprocess.PIPE,
stdout=subprocess.PIPE
)
)
else: # subsequent ones
processes.append(
subprocess.Popen(
shlex.split(commands[i]),
stdin=processes[i - 1].stdout,
stdout=subprocess.PIPE
)
)
return processes[len(processes) - 1].communicate(stdinput)[0] |
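A small usage sketch (hedged: it assumes a Unix-like system where `ls` and `wc` are on the PATH; note that `stdinput` is handed to the final process's communicate() call, so this example drives the pipe from a command rather than from stdin):
# Equivalent to the shell pipeline: ls -1 | wc -l
line_count = get_pipe_series_output(["ls -1", "wc -l"])
print(int(line_count))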
def _to_dict(self):
"""Return a json dictionary representing this model."""
_dict = {}
if hasattr(self, 'id') and self.id is not None:
_dict['id'] = self.id
        return _dict | Return a json dictionary representing this model. | Below is the instruction that describes the task:
### Input:
Return a json dictionary representing this model.
### Response:
def _to_dict(self):
"""Return a json dictionary representing this model."""
_dict = {}
if hasattr(self, 'id') and self.id is not None:
_dict['id'] = self.id
return _dict |
def load_etext(etextno, refresh_cache=False, mirror=None, prefer_ascii=False):
"""Returns a unicode representation of the full body of a Project Gutenberg
text. After making an initial remote call to Project Gutenberg's servers,
the text is persisted locally.
"""
etextno = validate_etextno(etextno)
cached = os.path.join(_TEXT_CACHE, '{}.txt.gz'.format(etextno))
if refresh_cache:
remove(cached)
if not os.path.exists(cached):
makedirs(os.path.dirname(cached))
download_uri = _format_download_uri(etextno, mirror, prefer_ascii)
response = requests.get(download_uri)
# Ensure proper UTF-8 saving. There might be instances of ebooks or
# mirrors which advertise a broken encoding, and this will break
# downstream usages. For example, #55517 from aleph.gutenberg.org:
#
# from gutenberg.acquire import load_etext
# print(load_etext(55517, refresh_cache=True)[0:1000])
#
# response.encoding will be 'ISO-8859-1' while the file is UTF-8
if response.encoding != response.apparent_encoding:
response.encoding = response.apparent_encoding
text = response.text
with closing(gzip.open(cached, 'w')) as cache:
cache.write(text.encode('utf-8'))
with closing(gzip.open(cached, 'r')) as cache:
text = cache.read().decode('utf-8')
return text | Returns a unicode representation of the full body of a Project Gutenberg
text. After making an initial remote call to Project Gutenberg's servers,
the text is persisted locally. | Below is the instruction that describes the task:
### Input:
Returns a unicode representation of the full body of a Project Gutenberg
text. After making an initial remote call to Project Gutenberg's servers,
the text is persisted locally.
### Response:
def load_etext(etextno, refresh_cache=False, mirror=None, prefer_ascii=False):
"""Returns a unicode representation of the full body of a Project Gutenberg
text. After making an initial remote call to Project Gutenberg's servers,
the text is persisted locally.
"""
etextno = validate_etextno(etextno)
cached = os.path.join(_TEXT_CACHE, '{}.txt.gz'.format(etextno))
if refresh_cache:
remove(cached)
if not os.path.exists(cached):
makedirs(os.path.dirname(cached))
download_uri = _format_download_uri(etextno, mirror, prefer_ascii)
response = requests.get(download_uri)
# Ensure proper UTF-8 saving. There might be instances of ebooks or
# mirrors which advertise a broken encoding, and this will break
# downstream usages. For example, #55517 from aleph.gutenberg.org:
#
# from gutenberg.acquire import load_etext
# print(load_etext(55517, refresh_cache=True)[0:1000])
#
# response.encoding will be 'ISO-8859-1' while the file is UTF-8
if response.encoding != response.apparent_encoding:
response.encoding = response.apparent_encoding
text = response.text
with closing(gzip.open(cached, 'w')) as cache:
cache.write(text.encode('utf-8'))
with closing(gzip.open(cached, 'r')) as cache:
text = cache.read().decode('utf-8')
return text |
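A brief usage sketch mirroring the example already embedded in the comments above (network access to a Project Gutenberg mirror is assumed; 55517 is simply the etext number used in that comment):
text = load_etext(55517, refresh_cache=True)
print(text[:1000])
# Subsequent calls are served from the local gzip cache.
text_again = load_etext(55517)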
def disable(self):
"""Disables the entity at this endpoint."""
self.post("disable")
if self.service.restart_required:
self.service.restart(120)
        return self | Disables the entity at this endpoint. | Below is the instruction that describes the task:
### Input:
Disables the entity at this endpoint.
### Response:
def disable(self):
"""Disables the entity at this endpoint."""
self.post("disable")
if self.service.restart_required:
self.service.restart(120)
return self |
def _software(self, *args, **kwargs):
'''
Return installed software.
'''
data = dict()
if 'exclude' in kwargs:
excludes = kwargs['exclude'].split(",")
else:
excludes = list()
os_family = __grains__.get("os_family").lower()
# Get locks
if os_family == 'suse':
LOCKS = "pkg.list_locks"
if 'products' not in excludes:
products = __salt__['pkg.list_products']()
if products:
data['products'] = products
elif os_family == 'redhat':
LOCKS = "pkg.get_locked_packages"
else:
LOCKS = None
if LOCKS and 'locks' not in excludes:
locks = __salt__[LOCKS]()
if locks:
data['locks'] = locks
# Get patterns
if os_family == 'suse':
PATTERNS = 'pkg.list_installed_patterns'
elif os_family == 'redhat':
PATTERNS = 'pkg.group_list'
else:
PATTERNS = None
if PATTERNS and 'patterns' not in excludes:
patterns = __salt__[PATTERNS]()
if patterns:
data['patterns'] = patterns
# Get packages
if 'packages' not in excludes:
data['packages'] = __salt__['pkg.list_pkgs']()
# Get repositories
if 'repositories' not in excludes:
repos = __salt__['pkg.list_repos']()
if repos:
data['repositories'] = repos
        return data | Return installed software. | Below is the instruction that describes the task:
### Input:
Return installed software.
### Response:
def _software(self, *args, **kwargs):
'''
Return installed software.
'''
data = dict()
if 'exclude' in kwargs:
excludes = kwargs['exclude'].split(",")
else:
excludes = list()
os_family = __grains__.get("os_family").lower()
# Get locks
if os_family == 'suse':
LOCKS = "pkg.list_locks"
if 'products' not in excludes:
products = __salt__['pkg.list_products']()
if products:
data['products'] = products
elif os_family == 'redhat':
LOCKS = "pkg.get_locked_packages"
else:
LOCKS = None
if LOCKS and 'locks' not in excludes:
locks = __salt__[LOCKS]()
if locks:
data['locks'] = locks
# Get patterns
if os_family == 'suse':
PATTERNS = 'pkg.list_installed_patterns'
elif os_family == 'redhat':
PATTERNS = 'pkg.group_list'
else:
PATTERNS = None
if PATTERNS and 'patterns' not in excludes:
patterns = __salt__[PATTERNS]()
if patterns:
data['patterns'] = patterns
# Get packages
if 'packages' not in excludes:
data['packages'] = __salt__['pkg.list_pkgs']()
# Get repositories
if 'repositories' not in excludes:
repos = __salt__['pkg.list_repos']()
if repos:
data['repositories'] = repos
return data |
def add_preset(self, name=None, desc=None, note=None, opts=SoSOptions()):
"""Add a new on-disk preset and write it to the configured
presets path.
:param preset: the new PresetDefaults to add
"""
presets_path = self.presets_path
if not name:
raise ValueError("Preset name cannot be empty")
if name in self.presets.keys():
raise ValueError("A preset with name '%s' already exists" % name)
preset = PresetDefaults(name=name, desc=desc, note=note, opts=opts)
preset.builtin = False
self.presets[preset.name] = preset
preset.write(presets_path) | Add a new on-disk preset and write it to the configured
presets path.
:param preset: the new PresetDefaults to add | Below is the instruction that describes the task:
### Input:
Add a new on-disk preset and write it to the configured
presets path.
:param preset: the new PresetDefaults to add
### Response:
def add_preset(self, name=None, desc=None, note=None, opts=SoSOptions()):
"""Add a new on-disk preset and write it to the configured
presets path.
:param preset: the new PresetDefaults to add
"""
presets_path = self.presets_path
if not name:
raise ValueError("Preset name cannot be empty")
if name in self.presets.keys():
raise ValueError("A preset with name '%s' already exists" % name)
preset = PresetDefaults(name=name, desc=desc, note=note, opts=opts)
preset.builtin = False
self.presets[preset.name] = preset
preset.write(presets_path) |
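A hypothetical call sketch (assumes `policy` is an instance of the class that owns the presets registry and that a default `SoSOptions()` is acceptable; all argument values are illustrative):
# Register a new named preset and persist it to presets_path.
policy.add_preset(name="mypreset",
                  desc="site default options",
                  note="added by the local admin",
                  opts=SoSOptions())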
def transformer_librispeech_v2():
"""HParams for training ASR model on LibriSpeech V2."""
hparams = transformer_base()
hparams.max_length = 1240000
hparams.max_input_seq_length = 1550
hparams.max_target_seq_length = 350
hparams.batch_size = 16
hparams.num_decoder_layers = 4
hparams.num_encoder_layers = 6
hparams.hidden_size = 384
hparams.learning_rate = 0.15
hparams.daisy_chain_variables = False
hparams.filter_size = 1536
hparams.num_heads = 2
hparams.ffn_layer = "conv_relu_conv"
hparams.conv_first_kernel = 9
hparams.weight_decay = 0
hparams.layer_prepostprocess_dropout = 0.2
hparams.relu_dropout = 0.2
  return hparams | HParams for training ASR model on LibriSpeech V2. | Below is the instruction that describes the task:
### Input:
HParams for training ASR model on LibriSpeech V2.
### Response:
def transformer_librispeech_v2():
"""HParams for training ASR model on LibriSpeech V2."""
hparams = transformer_base()
hparams.max_length = 1240000
hparams.max_input_seq_length = 1550
hparams.max_target_seq_length = 350
hparams.batch_size = 16
hparams.num_decoder_layers = 4
hparams.num_encoder_layers = 6
hparams.hidden_size = 384
hparams.learning_rate = 0.15
hparams.daisy_chain_variables = False
hparams.filter_size = 1536
hparams.num_heads = 2
hparams.ffn_layer = "conv_relu_conv"
hparams.conv_first_kernel = 9
hparams.weight_decay = 0
hparams.layer_prepostprocess_dropout = 0.2
hparams.relu_dropout = 0.2
return hparams |
def paintMilestone( self, painter ):
"""
Paints this item as the milestone look.
:param painter | <QPainter>
"""
# generate the rect
rect = self.rect()
padding = self.padding()
gantt = self.scene().ganttWidget()
cell_w = gantt.cellWidth()
cell_h = gantt.cellHeight()
x = rect.width() - cell_w
y = self.padding()
w = cell_w
h = rect.height() - padding - 2
# grab the color options
color = self.color()
alt_color = self.alternateColor()
if ( self.isSelected() ):
color = self.highlightColor()
alt_color = self.alternateHighlightColor()
# create the background brush
gradient = QLinearGradient()
gradient.setStart(0, 0)
gradient.setFinalStop(0, h)
gradient.setColorAt(0, color)
gradient.setColorAt(0.8, alt_color)
gradient.setColorAt(1, color)
painter.setPen(self.borderColor())
painter.setBrush(QBrush(gradient))
pen = painter.pen()
pen.setWidthF(0.5)
painter.setPen(pen)
painter.setRenderHint( painter.Antialiasing )
path = QPainterPath()
path.moveTo(x - cell_w / 3.0, y + h / 2.0)
path.lineTo(x, y)
path.lineTo(x + cell_w / 3.0, y + h / 2.0)
path.lineTo(x, y + h)
path.lineTo(x - cell_w / 3.0, y + h / 2.0)
painter.drawPath(path) | Paints this item as the milestone look.
:param painter | <QPainter> | Below is the instruction that describes the task:
### Input:
Paints this item as the milestone look.
:param painter | <QPainter>
### Response:
def paintMilestone( self, painter ):
"""
Paints this item as the milestone look.
:param painter | <QPainter>
"""
# generate the rect
rect = self.rect()
padding = self.padding()
gantt = self.scene().ganttWidget()
cell_w = gantt.cellWidth()
cell_h = gantt.cellHeight()
x = rect.width() - cell_w
y = self.padding()
w = cell_w
h = rect.height() - padding - 2
# grab the color options
color = self.color()
alt_color = self.alternateColor()
if ( self.isSelected() ):
color = self.highlightColor()
alt_color = self.alternateHighlightColor()
# create the background brush
gradient = QLinearGradient()
gradient.setStart(0, 0)
gradient.setFinalStop(0, h)
gradient.setColorAt(0, color)
gradient.setColorAt(0.8, alt_color)
gradient.setColorAt(1, color)
painter.setPen(self.borderColor())
painter.setBrush(QBrush(gradient))
pen = painter.pen()
pen.setWidthF(0.5)
painter.setPen(pen)
painter.setRenderHint( painter.Antialiasing )
path = QPainterPath()
path.moveTo(x - cell_w / 3.0, y + h / 2.0)
path.lineTo(x, y)
path.lineTo(x + cell_w / 3.0, y + h / 2.0)
path.lineTo(x, y + h)
path.lineTo(x - cell_w / 3.0, y + h / 2.0)
painter.drawPath(path) |
def bbox(self):
"""(left, top, right, bottom) tuple."""
if not hasattr(self, '_bbox'):
self._bbox = extract_bbox(self)
        return self._bbox | (left, top, right, bottom) tuple. | Below is the instruction that describes the task:
### Input:
(left, top, right, bottom) tuple.
### Response:
def bbox(self):
"""(left, top, right, bottom) tuple."""
if not hasattr(self, '_bbox'):
self._bbox = extract_bbox(self)
return self._bbox |
def _postprocess_data(self, data):
"""
Applies necessary type transformation to the data before
it is set on a ColumnDataSource.
"""
new_data = {}
for k, values in data.items():
values = decode_bytes(values) # Bytes need decoding to strings
# Certain datetime types need to be converted
if len(values) and isinstance(values[0], cftime_types):
if any(v.calendar not in _STANDARD_CALENDARS for v in values):
self.param.warning(
'Converting cftime.datetime from a non-standard '
'calendar (%s) to a standard calendar for plotting. '
'This may lead to subtle errors in formatting '
'dates, for accurate tick formatting switch to '
'the matplotlib backend.' % values[0].calendar)
values = cftime_to_timestamp(values, 'ms')
new_data[k] = values
return new_data | Applies necessary type transformation to the data before
it is set on a ColumnDataSource. | Below is the instruction that describes the task:
### Input:
Applies necessary type transformation to the data before
it is set on a ColumnDataSource.
### Response:
def _postprocess_data(self, data):
"""
Applies necessary type transformation to the data before
it is set on a ColumnDataSource.
"""
new_data = {}
for k, values in data.items():
values = decode_bytes(values) # Bytes need decoding to strings
# Certain datetime types need to be converted
if len(values) and isinstance(values[0], cftime_types):
if any(v.calendar not in _STANDARD_CALENDARS for v in values):
self.param.warning(
'Converting cftime.datetime from a non-standard '
'calendar (%s) to a standard calendar for plotting. '
'This may lead to subtle errors in formatting '
'dates, for accurate tick formatting switch to '
'the matplotlib backend.' % values[0].calendar)
values = cftime_to_timestamp(values, 'ms')
new_data[k] = values
return new_data |
def addSkip(self, test, reason):
        """Register a test that was skipped.
Parameters
----------
test : unittest.TestCase
The test that has completed.
reason : str
The reason the test was skipped.
"""
result = self._handle_result(
test, TestCompletionStatus.skipped, message=reason)
        self.skipped.append(result) | Register a test that was skipped.
Parameters
----------
test : unittest.TestCase
The test that has completed.
reason : str
The reason the test was skipped. | Below is the instruction that describes the task:
### Input:
Register a test that was skipped.
Parameters
----------
test : unittest.TestCase
The test that has completed.
reason : str
The reason the test was skipped.
### Response:
def addSkip(self, test, reason):
        """Register a test that was skipped.
Parameters
----------
test : unittest.TestCase
The test that has completed.
reason : str
The reason the test was skipped.
"""
result = self._handle_result(
test, TestCompletionStatus.skipped, message=reason)
self.skipped.append(result) |
def sixteen_oscillator_two_stimulated_ensembles_grid():
    "Not accurate: false results due to observed spikes"
parameters = legion_parameters();
parameters.teta_x = -1.1;
template_dynamic_legion(16, 2000, 1500, conn_type = conn_type.GRID_FOUR, params = parameters, stimulus = [1, 1, 1, 0,
1, 1, 1, 0,
0, 0, 0, 1,
                                                               0, 0, 1, 1]); | Not accurate: false results due to observed spikes | Below is the instruction that describes the task:
### Input:
Not accurate: false results due to observed spikes
### Response:
def sixteen_oscillator_two_stimulated_ensembles_grid():
    "Not accurate: false results due to observed spikes"
parameters = legion_parameters();
parameters.teta_x = -1.1;
template_dynamic_legion(16, 2000, 1500, conn_type = conn_type.GRID_FOUR, params = parameters, stimulus = [1, 1, 1, 0,
1, 1, 1, 0,
0, 0, 0, 1,
0, 0, 1, 1]); |
def starter(cls):
"""Get bounced start URL."""
data = cls.getPage(cls.url)
url1 = cls.fetchUrl(cls.url, data, cls.prevSearch)
data = cls.getPage(url1)
url2 = cls.fetchUrl(url1, data, cls.nextSearch)
        return cls.prevUrlModifier(url2) | Get bounced start URL. | Below is the instruction that describes the task:
### Input:
Get bounced start URL.
### Response:
def starter(cls):
"""Get bounced start URL."""
data = cls.getPage(cls.url)
url1 = cls.fetchUrl(cls.url, data, cls.prevSearch)
data = cls.getPage(url1)
url2 = cls.fetchUrl(url1, data, cls.nextSearch)
return cls.prevUrlModifier(url2) |
def hl_table2canvas(self, w, res_dict):
        """Highlight marking on canvas when the user clicks on the table."""
objlist = []
width = self.markwidth + self._dwidth
# Remove existing highlight
if self.markhltag:
try:
self.canvas.delete_object_by_tag(self.markhltag, redraw=False)
except Exception:
pass
# Display highlighted entries only in second table
self.treeviewsel.set_tree(res_dict)
for kstr, sub_dict in res_dict.items():
s = kstr.split(',')
marktype = s[0]
marksize = float(s[1])
markcolor = s[2]
for bnch in sub_dict.values():
obj = self._get_markobj(bnch.X - self.pixelstart,
bnch.Y - self.pixelstart,
marktype, marksize, markcolor, width)
objlist.append(obj)
nsel = len(objlist)
self.w.nselected.set_text(str(nsel))
# Draw on canvas
if nsel > 0:
self.markhltag = self.canvas.add(self.dc.CompoundObject(*objlist))
            self.fitsimage.redraw() | Highlight marking on canvas when the user clicks on the table. | Below is the instruction that describes the task:
### Input:
Highlight marking on canvas when the user clicks on the table.
### Response:
def hl_table2canvas(self, w, res_dict):
        """Highlight marking on canvas when the user clicks on the table."""
objlist = []
width = self.markwidth + self._dwidth
# Remove existing highlight
if self.markhltag:
try:
self.canvas.delete_object_by_tag(self.markhltag, redraw=False)
except Exception:
pass
# Display highlighted entries only in second table
self.treeviewsel.set_tree(res_dict)
for kstr, sub_dict in res_dict.items():
s = kstr.split(',')
marktype = s[0]
marksize = float(s[1])
markcolor = s[2]
for bnch in sub_dict.values():
obj = self._get_markobj(bnch.X - self.pixelstart,
bnch.Y - self.pixelstart,
marktype, marksize, markcolor, width)
objlist.append(obj)
nsel = len(objlist)
self.w.nselected.set_text(str(nsel))
# Draw on canvas
if nsel > 0:
self.markhltag = self.canvas.add(self.dc.CompoundObject(*objlist))
self.fitsimage.redraw() |
def _edge_event(self, i, j):
"""
Force edge (i, j) to be present in mesh.
This works by removing intersected triangles and filling holes up to
the cutting edge.
"""
front_index = self._front.index(i)
#debug(" == edge event ==")
front = self._front
# First just see whether this edge is already present
# (this is not in the published algorithm)
if (i, j) in self._edges_lookup or (j, i) in self._edges_lookup:
#debug(" already added.")
return
#debug(" Edge (%d,%d) not added yet. Do edge event. (%s - %s)" %
# (i, j, pts[i], pts[j]))
# traverse in two different modes:
# 1. If cutting edge is below front, traverse through triangles. These
# must be removed and the resulting hole re-filled. (fig. 12)
# 2. If cutting edge is above the front, then follow the front until
# crossing under again. (fig. 13)
# We must be able to switch back and forth between these
# modes (fig. 14)
# Collect points that draw the open polygons on either side of the
# cutting edge. Note that our use of 'upper' and 'lower' is not strict;
# in some cases the two may be swapped.
upper_polygon = [i]
lower_polygon = [i]
# Keep track of which section of the front must be replaced
# and with what it should be replaced
front_holes = [] # contains indexes for sections of front to remove
next_tri = None # next triangle to cut (already set if in mode 1)
last_edge = None # or last triangle edge crossed (if in mode 1)
# Which direction to traverse front
front_dir = 1 if self.pts[j][0] > self.pts[i][0] else -1
# Initialize search state
if self._edge_below_front((i, j), front_index):
mode = 1 # follow triangles
tri = self._find_cut_triangle((i, j))
last_edge = self._edge_opposite_point(tri, i)
next_tri = self._adjacent_tri(last_edge, i)
assert next_tri is not None
self._remove_tri(*tri)
# todo: does this work? can we count on last_edge to be clockwise
# around point i?
lower_polygon.append(last_edge[1])
upper_polygon.append(last_edge[0])
else:
mode = 2 # follow front
# Loop until we reach point j
while True:
#debug(" == edge_event loop: mode %d ==" % mode)
#debug(" front_holes:", front_holes, front)
#debug(" front_index:", front_index)
#debug(" next_tri:", next_tri)
#debug(" last_edge:", last_edge)
#debug(" upper_polygon:", upper_polygon)
#debug(" lower_polygon:", lower_polygon)
#debug(" =====")
if mode == 1:
# crossing from one triangle into another
if j in next_tri:
#debug(" -> hit endpoint!")
# reached endpoint!
# update front / polygons
upper_polygon.append(j)
lower_polygon.append(j)
#debug(" Appended to upper_polygon:", upper_polygon)
#debug(" Appended to lower_polygon:", lower_polygon)
self._remove_tri(*next_tri)
break
else:
# next triangle does not contain the end point; we will
# cut one of the two far edges.
tri_edges = self._edges_in_tri_except(next_tri, last_edge)
# select the edge that is cut
last_edge = self._intersected_edge(tri_edges, (i, j))
#debug(" set last_edge to intersected edge:", last_edge)
last_tri = next_tri
next_tri = self._adjacent_tri(last_edge, last_tri)
#debug(" set next_tri:", next_tri)
self._remove_tri(*last_tri)
# Crossing an edge adds one point to one of the polygons
if lower_polygon[-1] == last_edge[0]:
upper_polygon.append(last_edge[1])
#debug(" Appended to upper_polygon:", upper_polygon)
elif lower_polygon[-1] == last_edge[1]:
upper_polygon.append(last_edge[0])
#debug(" Appended to upper_polygon:", upper_polygon)
elif upper_polygon[-1] == last_edge[0]:
lower_polygon.append(last_edge[1])
#debug(" Appended to lower_polygon:", lower_polygon)
elif upper_polygon[-1] == last_edge[1]:
lower_polygon.append(last_edge[0])
#debug(" Appended to lower_polygon:", lower_polygon)
else:
raise RuntimeError("Something went wrong..")
# If we crossed the front, go to mode 2
x = self._edge_in_front(last_edge)
if x >= 0: # crossing over front
#debug(" -> crossed over front, prepare for mode 2")
mode = 2
next_tri = None
#debug(" set next_tri: None")
# where did we cross the front?
# nearest to new point
front_index = x + (1 if front_dir == -1 else 0)
#debug(" set front_index:", front_index)
# Select the correct polygon to be lower_polygon
# (because mode 2 requires this).
# We know that last_edge is in the front, and
# front[front_index] is the point _above_ the front.
# So if this point is currently the last element in
# lower_polygon, then the polys must be swapped.
if lower_polygon[-1] == front[front_index]:
tmp = lower_polygon, upper_polygon
upper_polygon, lower_polygon = tmp
#debug(' Swap upper/lower polygons')
else:
assert upper_polygon[-1] == front[front_index]
else:
assert next_tri is not None
else: # mode == 2
# At each iteration, we require:
# * front_index is the starting index of the edge _preceding_
# the edge that will be handled in this iteration
# * lower_polygon is the polygon to which points should be
# added while traversing the front
front_index += front_dir
#debug(" Increment front_index: %d" % front_index)
next_edge = (front[front_index], front[front_index+front_dir])
#debug(" Set next_edge: %s" % repr(next_edge))
assert front_index >= 0
if front[front_index] == j:
# found endpoint!
#debug(" -> hit endpoint!")
lower_polygon.append(j)
upper_polygon.append(j)
#debug(" Appended to upper_polygon:", upper_polygon)
#debug(" Appended to lower_polygon:", lower_polygon)
break
# Add point to lower_polygon.
# The conditional is because there are cases where the
# point was already added if we just crossed from mode 1.
if lower_polygon[-1] != front[front_index]:
lower_polygon.append(front[front_index])
#debug(" Appended to lower_polygon:", lower_polygon)
front_holes.append(front_index)
#debug(" Append to front_holes:", front_holes)
if self._edges_intersect((i, j), next_edge):
# crossing over front into triangle
#debug(" -> crossed over front, prepare for mode 1")
mode = 1
last_edge = next_edge
#debug(" Set last_edge:", last_edge)
# we are crossing the front, so this edge only has one
# triangle.
next_tri = self._tri_from_edge(last_edge)
#debug(" Set next_tri:", next_tri)
upper_polygon.append(front[front_index+front_dir])
#debug(" Appended to upper_polygon:", upper_polygon)
#else:
#debug(" -> did not cross front..")
#debug("Finished edge_event:")
#debug(" front_holes:", front_holes)
#debug(" upper_polygon:", upper_polygon)
#debug(" lower_polygon:", lower_polygon)
# (iii) triangulate empty areas
#debug("Filling edge_event polygons...")
for polygon in [lower_polygon, upper_polygon]:
dist = self._distances_from_line((i, j), polygon)
#debug("Distances:", dist)
while len(polygon) > 2:
ind = np.argmax(dist)
#debug("Next index: %d" % ind)
self._add_tri(polygon[ind], polygon[ind-1],
polygon[ind+1], legal=False,
source='edge_event')
polygon.pop(ind)
dist.pop(ind)
#debug("Finished filling edge_event polygons.")
# update front by removing points in the holes (places where front
# passes below the cut edge)
front_holes.sort(reverse=True)
for i in front_holes:
front.pop(i) | Force edge (i, j) to be present in mesh.
This works by removing intersected triangles and filling holes up to
the cutting edge. | Below is the instruction that describes the task:
### Input:
Force edge (i, j) to be present in mesh.
This works by removing intersected triangles and filling holes up to
the cutting edge.
### Response:
def _edge_event(self, i, j):
"""
Force edge (i, j) to be present in mesh.
This works by removing intersected triangles and filling holes up to
the cutting edge.
"""
front_index = self._front.index(i)
#debug(" == edge event ==")
front = self._front
# First just see whether this edge is already present
# (this is not in the published algorithm)
if (i, j) in self._edges_lookup or (j, i) in self._edges_lookup:
#debug(" already added.")
return
#debug(" Edge (%d,%d) not added yet. Do edge event. (%s - %s)" %
# (i, j, pts[i], pts[j]))
# traverse in two different modes:
# 1. If cutting edge is below front, traverse through triangles. These
# must be removed and the resulting hole re-filled. (fig. 12)
# 2. If cutting edge is above the front, then follow the front until
# crossing under again. (fig. 13)
# We must be able to switch back and forth between these
# modes (fig. 14)
# Collect points that draw the open polygons on either side of the
# cutting edge. Note that our use of 'upper' and 'lower' is not strict;
# in some cases the two may be swapped.
upper_polygon = [i]
lower_polygon = [i]
# Keep track of which section of the front must be replaced
# and with what it should be replaced
front_holes = [] # contains indexes for sections of front to remove
next_tri = None # next triangle to cut (already set if in mode 1)
last_edge = None # or last triangle edge crossed (if in mode 1)
# Which direction to traverse front
front_dir = 1 if self.pts[j][0] > self.pts[i][0] else -1
# Initialize search state
if self._edge_below_front((i, j), front_index):
mode = 1 # follow triangles
tri = self._find_cut_triangle((i, j))
last_edge = self._edge_opposite_point(tri, i)
next_tri = self._adjacent_tri(last_edge, i)
assert next_tri is not None
self._remove_tri(*tri)
# todo: does this work? can we count on last_edge to be clockwise
# around point i?
lower_polygon.append(last_edge[1])
upper_polygon.append(last_edge[0])
else:
mode = 2 # follow front
# Loop until we reach point j
while True:
#debug(" == edge_event loop: mode %d ==" % mode)
#debug(" front_holes:", front_holes, front)
#debug(" front_index:", front_index)
#debug(" next_tri:", next_tri)
#debug(" last_edge:", last_edge)
#debug(" upper_polygon:", upper_polygon)
#debug(" lower_polygon:", lower_polygon)
#debug(" =====")
if mode == 1:
# crossing from one triangle into another
if j in next_tri:
#debug(" -> hit endpoint!")
# reached endpoint!
# update front / polygons
upper_polygon.append(j)
lower_polygon.append(j)
#debug(" Appended to upper_polygon:", upper_polygon)
#debug(" Appended to lower_polygon:", lower_polygon)
self._remove_tri(*next_tri)
break
else:
# next triangle does not contain the end point; we will
# cut one of the two far edges.
tri_edges = self._edges_in_tri_except(next_tri, last_edge)
# select the edge that is cut
last_edge = self._intersected_edge(tri_edges, (i, j))
#debug(" set last_edge to intersected edge:", last_edge)
last_tri = next_tri
next_tri = self._adjacent_tri(last_edge, last_tri)
#debug(" set next_tri:", next_tri)
self._remove_tri(*last_tri)
# Crossing an edge adds one point to one of the polygons
if lower_polygon[-1] == last_edge[0]:
upper_polygon.append(last_edge[1])
#debug(" Appended to upper_polygon:", upper_polygon)
elif lower_polygon[-1] == last_edge[1]:
upper_polygon.append(last_edge[0])
#debug(" Appended to upper_polygon:", upper_polygon)
elif upper_polygon[-1] == last_edge[0]:
lower_polygon.append(last_edge[1])
#debug(" Appended to lower_polygon:", lower_polygon)
elif upper_polygon[-1] == last_edge[1]:
lower_polygon.append(last_edge[0])
#debug(" Appended to lower_polygon:", lower_polygon)
else:
raise RuntimeError("Something went wrong..")
# If we crossed the front, go to mode 2
x = self._edge_in_front(last_edge)
if x >= 0: # crossing over front
#debug(" -> crossed over front, prepare for mode 2")
mode = 2
next_tri = None
#debug(" set next_tri: None")
# where did we cross the front?
# nearest to new point
front_index = x + (1 if front_dir == -1 else 0)
#debug(" set front_index:", front_index)
# Select the correct polygon to be lower_polygon
# (because mode 2 requires this).
# We know that last_edge is in the front, and
# front[front_index] is the point _above_ the front.
# So if this point is currently the last element in
# lower_polygon, then the polys must be swapped.
if lower_polygon[-1] == front[front_index]:
tmp = lower_polygon, upper_polygon
upper_polygon, lower_polygon = tmp
#debug(' Swap upper/lower polygons')
else:
assert upper_polygon[-1] == front[front_index]
else:
assert next_tri is not None
else: # mode == 2
# At each iteration, we require:
# * front_index is the starting index of the edge _preceding_
# the edge that will be handled in this iteration
# * lower_polygon is the polygon to which points should be
# added while traversing the front
front_index += front_dir
#debug(" Increment front_index: %d" % front_index)
next_edge = (front[front_index], front[front_index+front_dir])
#debug(" Set next_edge: %s" % repr(next_edge))
assert front_index >= 0
if front[front_index] == j:
# found endpoint!
#debug(" -> hit endpoint!")
lower_polygon.append(j)
upper_polygon.append(j)
#debug(" Appended to upper_polygon:", upper_polygon)
#debug(" Appended to lower_polygon:", lower_polygon)
break
# Add point to lower_polygon.
# The conditional is because there are cases where the
# point was already added if we just crossed from mode 1.
if lower_polygon[-1] != front[front_index]:
lower_polygon.append(front[front_index])
#debug(" Appended to lower_polygon:", lower_polygon)
front_holes.append(front_index)
#debug(" Append to front_holes:", front_holes)
if self._edges_intersect((i, j), next_edge):
# crossing over front into triangle
#debug(" -> crossed over front, prepare for mode 1")
mode = 1
last_edge = next_edge
#debug(" Set last_edge:", last_edge)
# we are crossing the front, so this edge only has one
# triangle.
next_tri = self._tri_from_edge(last_edge)
#debug(" Set next_tri:", next_tri)
upper_polygon.append(front[front_index+front_dir])
#debug(" Appended to upper_polygon:", upper_polygon)
#else:
#debug(" -> did not cross front..")
#debug("Finished edge_event:")
#debug(" front_holes:", front_holes)
#debug(" upper_polygon:", upper_polygon)
#debug(" lower_polygon:", lower_polygon)
# (iii) triangulate empty areas
#debug("Filling edge_event polygons...")
for polygon in [lower_polygon, upper_polygon]:
dist = self._distances_from_line((i, j), polygon)
#debug("Distances:", dist)
while len(polygon) > 2:
ind = np.argmax(dist)
#debug("Next index: %d" % ind)
self._add_tri(polygon[ind], polygon[ind-1],
polygon[ind+1], legal=False,
source='edge_event')
polygon.pop(ind)
dist.pop(ind)
#debug("Finished filling edge_event polygons.")
# update front by removing points in the holes (places where front
# passes below the cut edge)
front_holes.sort(reverse=True)
for i in front_holes:
front.pop(i) |
def _union_copy(dict1, dict2):
"""
Internal wrapper to keep one level of copying out of play, for efficiency.
Only copies data on dict2, but will alter dict1.
"""
for key, value in dict2.items():
if key in dict1 and isinstance(value, dict):
dict1[key] = _union_copy(dict1[key], value)
else:
dict1[key] = copy.deepcopy(value)
return dict1 | Internal wrapper to keep one level of copying out of play, for efficiency.
Only copies data on dict2, but will alter dict1. | Below is the instruction that describes the task:
### Input:
Internal wrapper to keep one level of copying out of play, for efficiency.
Only copies data on dict2, but will alter dict1.
### Response:
def _union_copy(dict1, dict2):
"""
Internal wrapper to keep one level of copying out of play, for efficiency.
Only copies data on dict2, but will alter dict1.
"""
for key, value in dict2.items():
if key in dict1 and isinstance(value, dict):
dict1[key] = _union_copy(dict1[key], value)
else:
dict1[key] = copy.deepcopy(value)
return dict1 |
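A quick illustration of how the recursive merge above behaves; the dictionaries are invented sample data, and the function is assumed to be defined as shown (it relies on the copy module being imported):
base = {'server': {'host': 'localhost', 'port': 8080}, 'debug': False}
override = {'server': {'port': 9090}, 'debug': True}
merged = _union_copy(base, override)
# base is mutated in place and returned:
# {'server': {'host': 'localhost', 'port': 9090}, 'debug': True}
# override is left untouched because its values were deep-copied.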
def add_metaclass(metaclass):
"""
Class decorator for creating a class with a metaclass.
Adapted from the six project:
https://pythonhosted.org/six/
"""
vars_to_skip = ('__dict__', '__weakref__')
def wrapper(cls):
copied_dict = {
key: value
for key, value in cls.__dict__.items()
if key not in vars_to_skip
}
return metaclass(cls.__name__, cls.__bases__, copied_dict)
return wrapper | Class decorator for creating a class with a metaclass.
Adapted from the six project:
https://pythonhosted.org/six/ | Below is the instruction that describes the task:
### Input:
Class decorator for creating a class with a metaclass.
Adapted from the six project:
https://pythonhosted.org/six/
### Response:
def add_metaclass(metaclass):
"""
Class decorator for creating a class with a metaclass.
Adapted from the six project:
https://pythonhosted.org/six/
"""
vars_to_skip = ('__dict__', '__weakref__')
def wrapper(cls):
copied_dict = {
key: value
for key, value in cls.__dict__.items()
if key not in vars_to_skip
}
return metaclass(cls.__name__, cls.__bases__, copied_dict)
return wrapper |
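A small usage sketch for the decorator above; the metaclass and class names here are hypothetical:
class Meta(type):
    def __new__(mcs, name, bases, namespace):
        cls = super(Meta, mcs).__new__(mcs, name, bases, namespace)
        cls.registered_name = name.lower()  # the metaclass adds an attribute
        return cls
@add_metaclass(Meta)
class Widget(object):
    """The wrapper re-creates this class through Meta, copying its dict."""
print(Widget.registered_name)   # -> 'widget'
print(isinstance(Widget, Meta))  # -> True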
def get_components(self, uri):
"""
Get components from a component definition in order
"""
try:
component_definition = self._components[uri]
except KeyError:
return False
sorted_sequences = sorted(component_definition.sequence_annotations,
key=attrgetter('first_location'))
return [c.component for c in sorted_sequences] | Get components from a component definition in order | Below is the instruction that describes the task:
### Input:
Get components from a component definition in order
### Response:
def get_components(self, uri):
"""
Get components from a component definition in order
"""
try:
component_definition = self._components[uri]
except KeyError:
return False
sorted_sequences = sorted(component_definition.sequence_annotations,
key=attrgetter('first_location'))
return [c.component for c in sorted_sequences] |
def build(self):
"""
Create the current layer
:return: string of the packet with the payload
"""
p = self.do_build()
p += self.build_padding()
p = self.build_done(p)
return p | Create the current layer
:return: string of the packet with the payload | Below is the instruction that describes the task:
### Input:
Create the current layer
:return: string of the packet with the payload
### Response:
def build(self):
"""
Create the current layer
:return: string of the packet with the payload
"""
p = self.do_build()
p += self.build_padding()
p = self.build_done(p)
return p |
def dist_is_editable(dist):
# type: (Distribution) -> bool
"""
Return True if given Distribution is an editable install.
"""
for path_item in sys.path:
egg_link = os.path.join(path_item, dist.project_name + '.egg-link')
if os.path.isfile(egg_link):
return True
return False | Return True if given Distribution is an editable install. | Below is the instruction that describes the task:
### Input:
Return True if given Distribution is an editable install.
### Response:
def dist_is_editable(dist):
# type: (Distribution) -> bool
"""
Return True if given Distribution is an editable install.
"""
for path_item in sys.path:
egg_link = os.path.join(path_item, dist.project_name + '.egg-link')
if os.path.isfile(egg_link):
return True
return False |
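A hedged usage sketch for the check above: listing editable installs in the current environment via pkg_resources (assumes the helper is importable from its module):
import pkg_resources
editable = [dist.project_name
            for dist in pkg_resources.working_set
            if dist_is_editable(dist)]
print(editable)  # e.g. ['my-local-package'] for projects installed with `pip install -e .`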
def _parseWasbUrl(cls, url):
"""
:param urlparse.ParseResult url: x
:rtype: AzureJobStore.BlobInfo
"""
assert url.scheme in ('wasb', 'wasbs')
try:
container, account = url.netloc.split('@')
except ValueError:
raise InvalidImportExportUrlException(url)
suffix = '.blob.core.windows.net'
if account.endswith(suffix):
account = account[:-len(suffix)]
else:
raise InvalidImportExportUrlException(url)
assert url.path[0] == '/'
return cls.BlobInfo(account=account, container=container, name=url.path[1:]) | :param urlparse.ParseResult url: x
:rtype: AzureJobStore.BlobInfo | Below is the instruction that describes the task:
### Input:
:param urlparse.ParseResult url: x
:rtype: AzureJobStore.BlobInfo
### Response:
def _parseWasbUrl(cls, url):
"""
:param urlparse.ParseResult url: x
:rtype: AzureJobStore.BlobInfo
"""
assert url.scheme in ('wasb', 'wasbs')
try:
container, account = url.netloc.split('@')
except ValueError:
raise InvalidImportExportUrlException(url)
suffix = '.blob.core.windows.net'
if account.endswith(suffix):
account = account[:-len(suffix)]
else:
raise InvalidImportExportUrlException(url)
assert url.path[0] == '/'
return cls.BlobInfo(account=account, container=container, name=url.path[1:]) |
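To make the expected input and output concrete, this is what the classmethod above would see for a sample wasb URL; the account, container and blob names are invented:
from urllib.parse import urlparse
url = urlparse('wasb://mycontainer@myaccount.blob.core.windows.net/data/part-0001.bin')
print(url.netloc)  # 'mycontainer@myaccount.blob.core.windows.net'
print(url.path)    # '/data/part-0001.bin'
# _parseWasbUrl(url) would then return
# BlobInfo(account='myaccount', container='mycontainer', name='data/part-0001.bin')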
def _get_time_at_progress(self, x_target):
"""
Return the projected time when progress level `x_target` will be reached.
Since the underlying progress model is nonlinear, we need to use Newton's method to find a numerical solution
to the equation x(t) = x_target.
"""
t, x, v = self._t0, self._x0, self._v0
# The convergence should be achieved in just a few iterations; however, in the unlikely situation that it doesn't
# we don't want to loop forever...
for _ in range(20):
if v == 0: return 1e20
# make time prediction assuming the progress will continue at a linear speed ``v``
t += (x_target - x) / v
# calculate the actual progress at that time
x, v = self._compute_progress_at_time(t)
# iterate until convergence
if abs(x - x_target) < 1e-3: return t
return time.time() + 100 | Return the projected time when progress level `x_target` will be reached.
Since the underlying progress model is nonlinear, we need to use Newton's method to find a numerical solution
to the equation x(t) = x_target. | Below is the instruction that describes the task:
### Input:
Return the projected time when progress level `x_target` will be reached.
Since the underlying progress model is nonlinear, we need to use Newton's method to find a numerical solution
to the equation x(t) = x_target.
### Response:
def _get_time_at_progress(self, x_target):
"""
Return the projected time when progress level `x_target` will be reached.
Since the underlying progress model is nonlinear, we need to use Newton's method to find a numerical solution
to the equation x(t) = x_target.
"""
t, x, v = self._t0, self._x0, self._v0
# The convergence should be achieved in just a few iterations; however, in the unlikely situation that it doesn't
# we don't want to loop forever...
for _ in range(20):
if v == 0: return 1e20
# make time prediction assuming the progress will continue at a linear speed ``v``
t += (x_target - x) / v
# calculate the actual progress at that time
x, v = self._compute_progress_at_time(t)
# iterate until convergence
if abs(x - x_target) < 1e-3: return t
return time.time() + 100 |
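A standalone sketch of the same Newton-style iteration for a made-up progress curve x(t) = 1 - exp(-t); the real method gets its model from self._compute_progress_at_time, so this only illustrates the update rule:
import math
def compute_progress_at_time(t):
    return 1 - math.exp(-t), math.exp(-t)  # progress and its derivative (speed)
def time_at_progress(x_target, t0=0.0):
    t = t0
    x, v = compute_progress_at_time(t)
    for _ in range(20):
        if v == 0:
            return float('inf')
        t += (x_target - x) / v  # linear prediction, as in the method above
        x, v = compute_progress_at_time(t)
        if abs(x - x_target) < 1e-3:
            return t
    return t
print(round(time_at_progress(0.5), 3))  # ~0.693, i.e. ln(2)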
def get_ssh_key(host,
username,
password,
protocol=None,
port=None,
certificate_verify=False):
'''
Retrieve the authorized_keys entry for root.
This function only works for ESXi, not vCenter.
:param host: The location of the ESXi Host
:param username: Username to connect as
:param password: Password for the ESXi web endpoint
:param protocol: defaults to https, can be http if ssl is disabled on ESXi
:param port: defaults to 443 for https
:param certificate_verify: If true require that the SSL connection present
a valid certificate
:return: True if upload is successful
CLI Example:
.. code-block:: bash
salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True
'''
if protocol is None:
protocol = 'https'
if port is None:
port = 443
url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(protocol,
host,
port)
ret = {}
try:
result = salt.utils.http.query(url,
status=True,
text=True,
method='GET',
username=username,
password=password,
verify_ssl=certificate_verify)
if result.get('status') == 200:
ret['status'] = True
ret['key'] = result['text']
else:
ret['status'] = False
ret['Error'] = result['error']
except Exception as msg:
ret['status'] = False
ret['Error'] = msg
return ret | Retrieve the authorized_keys entry for root.
This function only works for ESXi, not vCenter.
:param host: The location of the ESXi Host
:param username: Username to connect as
:param password: Password for the ESXi web endpoint
:param protocol: defaults to https, can be http if ssl is disabled on ESXi
:param port: defaults to 443 for https
:param certificate_verify: If true require that the SSL connection present
a valid certificate
:return: True if upload is successful
CLI Example:
.. code-block:: bash
salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True | Below is the instruction that describes the task:
### Input:
Retrieve the authorized_keys entry for root.
This function only works for ESXi, not vCenter.
:param host: The location of the ESXi Host
:param username: Username to connect as
:param password: Password for the ESXi web endpoint
:param protocol: defaults to https, can be http if ssl is disabled on ESXi
:param port: defaults to 443 for https
:param certificate_verify: If true require that the SSL connection present
a valid certificate
:return: True if upload is successful
CLI Example:
.. code-block:: bash
salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True
### Response:
def get_ssh_key(host,
username,
password,
protocol=None,
port=None,
certificate_verify=False):
'''
Retrieve the authorized_keys entry for root.
This function only works for ESXi, not vCenter.
:param host: The location of the ESXi Host
:param username: Username to connect as
:param password: Password for the ESXi web endpoint
:param protocol: defaults to https, can be http if ssl is disabled on ESXi
:param port: defaults to 443 for https
:param certificate_verify: If true require that the SSL connection present
a valid certificate
:return: True if upload is successful
CLI Example:
.. code-block:: bash
salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True
'''
if protocol is None:
protocol = 'https'
if port is None:
port = 443
url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(protocol,
host,
port)
ret = {}
try:
result = salt.utils.http.query(url,
status=True,
text=True,
method='GET',
username=username,
password=password,
verify_ssl=certificate_verify)
if result.get('status') == 200:
ret['status'] = True
ret['key'] = result['text']
else:
ret['status'] = False
ret['Error'] = result['error']
except Exception as msg:
ret['status'] = False
ret['Error'] = msg
return ret |
def int_to_bytes(i, minlen=1, order='big'): # pragma: no cover
"""convert integer to bytes"""
blen = max(minlen, PGPObject.int_byte_len(i), 1)
if six.PY2:
r = iter(_ * 8 for _ in (range(blen) if order == 'little' else range(blen - 1, -1, -1)))
return bytes(bytearray((i >> c) & 0xff for c in r))
return i.to_bytes(blen, order) | convert integer to bytes | Below is the instruction that describes the task:
### Input:
convert integer to bytes
### Response:
def int_to_bytes(i, minlen=1, order='big'): # pragma: no cover
"""convert integer to bytes"""
blen = max(minlen, PGPObject.int_byte_len(i), 1)
if six.PY2:
r = iter(_ * 8 for _ in (range(blen) if order == 'little' else range(blen - 1, -1, -1)))
return bytes(bytearray((i >> c) & 0xff for c in r))
return i.to_bytes(blen, order) |
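Assuming the surrounding module (PGPObject.int_byte_len and six) is importable, the helper above is expected to behave like this on Python 3:
int_to_bytes(1)                       # b'\x01'
int_to_bytes(1, minlen=4)             # b'\x00\x00\x00\x01'
int_to_bytes(0x0102)                  # b'\x01\x02'
int_to_bytes(0x0102, order='little')  # b'\x02\x01'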
def _check_for_duplicates(durations, events):
"""Checks for duplicated event times in the data set. This is narrowed to detecting duplicated event times
where the events are of different types
"""
# Setting up DataFrame to detect duplicates
df = pd.DataFrame({"t": durations, "e": events})
# Finding duplicated event times
dup_times = df.loc[df["e"] != 0, "t"].duplicated(keep=False)
# Finding duplicated events and event times
dup_events = df.loc[df["e"] != 0, ["t", "e"]].duplicated(keep=False)
# Detect duplicated times with different event types
return (dup_times & (~dup_events)).any() | Checks for duplicated event times in the data set. This is narrowed to detecting duplicated event times
where the events are of different types | Below is the instruction that describes the task:
### Input:
Checks for duplicated event times in the data set. This is narrowed to detecting duplicated event times
where the events are of different types
### Response:
def _check_for_duplicates(durations, events):
"""Checks for duplicated event times in the data set. This is narrowed to detecting duplicated event times
where the events are of different types
"""
# Setting up DataFrame to detect duplicates
df = pd.DataFrame({"t": durations, "e": events})
# Finding duplicated event times
dup_times = df.loc[df["e"] != 0, "t"].duplicated(keep=False)
# Finding duplicated events and event times
dup_events = df.loc[df["e"] != 0, ["t", "e"]].duplicated(keep=False)
# Detect duplicated times with different event types
return (dup_times & (~dup_events)).any() |
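A worked example of the check above (pandas required): duration 2 occurs with two different event types, so the first call returns True, while the second has no such clash:
durations = [1, 2, 2, 3]
events = [1, 1, 2, 0]
print(_check_for_duplicates(durations, events))      # True
print(_check_for_duplicates([1, 2, 3], [1, 1, 0]))   # False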
def main(argv): # pylint: disable=W0613
'''
Main program body
'''
thin_path = os.path.join(OPTIONS.saltdir, THIN_ARCHIVE)
if os.path.isfile(thin_path):
if OPTIONS.checksum != get_hash(thin_path, OPTIONS.hashfunc):
need_deployment()
unpack_thin(thin_path)
# Salt thin now is available to use
else:
if not sys.platform.startswith('win'):
scpstat = subprocess.Popen(['/bin/sh', '-c', 'command -v scp']).wait()
if scpstat != 0:
sys.exit(EX_SCP_NOT_FOUND)
if os.path.exists(OPTIONS.saltdir) and not os.path.isdir(OPTIONS.saltdir):
sys.stderr.write(
'ERROR: salt path "{0}" exists but is'
' not a directory\n'.format(OPTIONS.saltdir)
)
sys.exit(EX_CANTCREAT)
if not os.path.exists(OPTIONS.saltdir):
need_deployment()
code_checksum_path = os.path.normpath(os.path.join(OPTIONS.saltdir, 'code-checksum'))
if not os.path.exists(code_checksum_path) or not os.path.isfile(code_checksum_path):
sys.stderr.write('WARNING: Unable to locate current code checksum: {0}.\n'.format(code_checksum_path))
need_deployment()
with open(code_checksum_path, 'r') as vpo:
cur_code_cs = vpo.readline().strip()
if cur_code_cs != OPTIONS.code_checksum:
sys.stderr.write('WARNING: current code checksum {0} is different to {1}.\n'.format(cur_code_cs,
OPTIONS.code_checksum))
need_deployment()
# Salt thin exists and is up-to-date - fall through and use it
salt_call_path = os.path.join(OPTIONS.saltdir, 'salt-call')
if not os.path.isfile(salt_call_path):
sys.stderr.write('ERROR: thin is missing "{0}"\n'.format(salt_call_path))
need_deployment()
with open(os.path.join(OPTIONS.saltdir, 'minion'), 'w') as config:
config.write(OPTIONS.config + '\n')
if OPTIONS.ext_mods:
ext_path = os.path.join(OPTIONS.saltdir, EXT_ARCHIVE)
if os.path.exists(ext_path):
unpack_ext(ext_path)
else:
version_path = os.path.join(OPTIONS.saltdir, 'ext_version')
if not os.path.exists(version_path) or not os.path.isfile(version_path):
need_ext()
with open(version_path, 'r') as vpo:
cur_version = vpo.readline().strip()
if cur_version != OPTIONS.ext_mods:
need_ext()
# Fix parameter passing issue
if len(ARGS) == 1:
argv_prepared = ARGS[0].split()
else:
argv_prepared = ARGS
salt_argv = [
get_executable(),
salt_call_path,
'--retcode-passthrough',
'--local',
'--metadata',
'--out', 'json',
'-l', 'quiet',
'-c', OPTIONS.saltdir
]
try:
if argv_prepared[-1].startswith('--no-parse='):
salt_argv.append(argv_prepared.pop(-1))
except (IndexError, TypeError):
pass
salt_argv.append('--')
salt_argv.extend(argv_prepared)
sys.stderr.write('SALT_ARGV: {0}\n'.format(salt_argv))
# Only emit the delimiter on *both* stdout and stderr when completely successful.
# Yes, the flush() is necessary.
sys.stdout.write(OPTIONS.delimiter + '\n')
sys.stdout.flush()
if not OPTIONS.tty:
sys.stderr.write(OPTIONS.delimiter + '\n')
sys.stderr.flush()
if OPTIONS.cmd_umask is not None:
old_umask = os.umask(OPTIONS.cmd_umask) # pylint: disable=blacklisted-function
if OPTIONS.tty:
proc = subprocess.Popen(salt_argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Returns bytes instead of string on python 3
stdout, _ = proc.communicate()
sys.stdout.write(stdout.decode(encoding=get_system_encoding(), errors="replace"))
sys.stdout.flush()
retcode = proc.returncode
if OPTIONS.wipe:
shutil.rmtree(OPTIONS.saltdir)
elif OPTIONS.wipe:
retcode = subprocess.call(salt_argv)
shutil.rmtree(OPTIONS.saltdir)
else:
retcode = subprocess.call(salt_argv)
if OPTIONS.cmd_umask is not None:
os.umask(old_umask) # pylint: disable=blacklisted-function
return retcode | Main program body | Below is the instruction that describes the task:
### Input:
Main program body
### Response:
def main(argv): # pylint: disable=W0613
'''
Main program body
'''
thin_path = os.path.join(OPTIONS.saltdir, THIN_ARCHIVE)
if os.path.isfile(thin_path):
if OPTIONS.checksum != get_hash(thin_path, OPTIONS.hashfunc):
need_deployment()
unpack_thin(thin_path)
# Salt thin now is available to use
else:
if not sys.platform.startswith('win'):
scpstat = subprocess.Popen(['/bin/sh', '-c', 'command -v scp']).wait()
if scpstat != 0:
sys.exit(EX_SCP_NOT_FOUND)
if os.path.exists(OPTIONS.saltdir) and not os.path.isdir(OPTIONS.saltdir):
sys.stderr.write(
'ERROR: salt path "{0}" exists but is'
' not a directory\n'.format(OPTIONS.saltdir)
)
sys.exit(EX_CANTCREAT)
if not os.path.exists(OPTIONS.saltdir):
need_deployment()
code_checksum_path = os.path.normpath(os.path.join(OPTIONS.saltdir, 'code-checksum'))
if not os.path.exists(code_checksum_path) or not os.path.isfile(code_checksum_path):
sys.stderr.write('WARNING: Unable to locate current code checksum: {0}.\n'.format(code_checksum_path))
need_deployment()
with open(code_checksum_path, 'r') as vpo:
cur_code_cs = vpo.readline().strip()
if cur_code_cs != OPTIONS.code_checksum:
sys.stderr.write('WARNING: current code checksum {0} is different to {1}.\n'.format(cur_code_cs,
OPTIONS.code_checksum))
need_deployment()
# Salt thin exists and is up-to-date - fall through and use it
salt_call_path = os.path.join(OPTIONS.saltdir, 'salt-call')
if not os.path.isfile(salt_call_path):
sys.stderr.write('ERROR: thin is missing "{0}"\n'.format(salt_call_path))
need_deployment()
with open(os.path.join(OPTIONS.saltdir, 'minion'), 'w') as config:
config.write(OPTIONS.config + '\n')
if OPTIONS.ext_mods:
ext_path = os.path.join(OPTIONS.saltdir, EXT_ARCHIVE)
if os.path.exists(ext_path):
unpack_ext(ext_path)
else:
version_path = os.path.join(OPTIONS.saltdir, 'ext_version')
if not os.path.exists(version_path) or not os.path.isfile(version_path):
need_ext()
with open(version_path, 'r') as vpo:
cur_version = vpo.readline().strip()
if cur_version != OPTIONS.ext_mods:
need_ext()
# Fix parameter passing issue
if len(ARGS) == 1:
argv_prepared = ARGS[0].split()
else:
argv_prepared = ARGS
salt_argv = [
get_executable(),
salt_call_path,
'--retcode-passthrough',
'--local',
'--metadata',
'--out', 'json',
'-l', 'quiet',
'-c', OPTIONS.saltdir
]
try:
if argv_prepared[-1].startswith('--no-parse='):
salt_argv.append(argv_prepared.pop(-1))
except (IndexError, TypeError):
pass
salt_argv.append('--')
salt_argv.extend(argv_prepared)
sys.stderr.write('SALT_ARGV: {0}\n'.format(salt_argv))
# Only emit the delimiter on *both* stdout and stderr when completely successful.
# Yes, the flush() is necessary.
sys.stdout.write(OPTIONS.delimiter + '\n')
sys.stdout.flush()
if not OPTIONS.tty:
sys.stderr.write(OPTIONS.delimiter + '\n')
sys.stderr.flush()
if OPTIONS.cmd_umask is not None:
old_umask = os.umask(OPTIONS.cmd_umask) # pylint: disable=blacklisted-function
if OPTIONS.tty:
proc = subprocess.Popen(salt_argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Returns bytes instead of string on python 3
stdout, _ = proc.communicate()
sys.stdout.write(stdout.decode(encoding=get_system_encoding(), errors="replace"))
sys.stdout.flush()
retcode = proc.returncode
if OPTIONS.wipe:
shutil.rmtree(OPTIONS.saltdir)
elif OPTIONS.wipe:
retcode = subprocess.call(salt_argv)
shutil.rmtree(OPTIONS.saltdir)
else:
retcode = subprocess.call(salt_argv)
if OPTIONS.cmd_umask is not None:
os.umask(old_umask) # pylint: disable=blacklisted-function
return retcode |
def get_enabled_browsers():
"""
Check the ADMINFILES_BROWSER_VIEWS setting and return a list of
instantiated browser views that have the necessary
dependencies/configuration to run.
"""
global _enabled_browsers_cache
if _enabled_browsers_cache is not None:
return _enabled_browsers_cache
enabled = []
for browser_path in settings.ADMINFILES_BROWSER_VIEWS:
try:
view_class = import_browser(browser_path)
except ImportError:
continue
if not issubclass(view_class, BaseView):
continue
browser = view_class
try:
browser.check()
except DisableView:
continue
enabled.append(browser)
_enabled_browsers_cache = enabled
return enabled | Check the ADMINFILES_BROWSER_VIEWS setting and return a list of
instantiated browser views that have the necessary
dependencies/configuration to run. | Below is the instruction that describes the task:
### Input:
Check the ADMINFILES_BROWSER_VIEWS setting and return a list of
instantiated browser views that have the necessary
dependencies/configuration to run.
### Response:
def get_enabled_browsers():
"""
Check the ADMINFILES_BROWSER_VIEWS setting and return a list of
instantiated browser views that have the necessary
dependencies/configuration to run.
"""
global _enabled_browsers_cache
if _enabled_browsers_cache is not None:
return _enabled_browsers_cache
enabled = []
for browser_path in settings.ADMINFILES_BROWSER_VIEWS:
try:
view_class = import_browser(browser_path)
except ImportError:
continue
if not issubclass(view_class, BaseView):
continue
browser = view_class
try:
browser.check()
except DisableView:
continue
enabled.append(browser)
_enabled_browsers_cache = enabled
return enabled |
def explode_azure_storage_url(url):
# type: (str) -> Tuple[str, str, str, str, str]
"""Explode Azure Storage URL into parts
:param url str: storage url
:rtype: tuple
:return: (sa, mode, ep, rpath, sas)
"""
tmp = url.split('/')
host = tmp[2].split('.')
sa = host[0]
mode = host[1].lower()
ep = '.'.join(host[2:])
tmp = '/'.join(tmp[3:]).split('?')
rpath = tmp[0]
if len(tmp) > 1:
sas = tmp[1]
else:
sas = None
return sa, mode, ep, rpath, sas | Explode Azure Storage URL into parts
:param url str: storage url
:rtype: tuple
:return: (sa, mode, ep, rpath, sas) | Below is the instruction that describes the task:
### Input:
Explode Azure Storage URL into parts
:param url str: storage url
:rtype: tuple
:return: (sa, mode, ep, rpath, sas)
### Response:
def explode_azure_storage_url(url):
# type: (str) -> Tuple[str, str, str, str, str]
"""Explode Azure Storage URL into parts
:param url str: storage url
:rtype: tuple
:return: (sa, mode, ep, rpath, sas)
"""
tmp = url.split('/')
host = tmp[2].split('.')
sa = host[0]
mode = host[1].lower()
ep = '.'.join(host[2:])
tmp = '/'.join(tmp[3:]).split('?')
rpath = tmp[0]
if len(tmp) > 1:
sas = tmp[1]
else:
sas = None
return sa, mode, ep, rpath, sas |
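An example of the decomposition above, using an invented storage account and SAS token:
url = ('https://myaccount.blob.core.windows.net/'
       'mycontainer/path/to/blob.bin?sv=2020-08-04&sig=abc')
sa, mode, ep, rpath, sas = explode_azure_storage_url(url)
# sa    -> 'myaccount'
# mode  -> 'blob'
# ep    -> 'core.windows.net'
# rpath -> 'mycontainer/path/to/blob.bin'
# sas   -> 'sv=2020-08-04&sig=abc'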
def _configure_send(self, request, **kwargs):
# type: (ClientRequest, Any) -> Dict[str, str]
"""Configure the kwargs to use with requests.
See "send" for kwargs details.
:param ClientRequest request: The request object to be sent.
:returns: The requests.Session.request kwargs
:rtype: dict[str,str]
"""
requests_kwargs = {} # type: Any
session = kwargs.pop('session', self.session)
# If custom session was not created here
if session is not self.session:
self._init_session(session)
session.max_redirects = int(self.config.redirect_policy())
session.trust_env = bool(self.config.proxies.use_env_settings)
# Initialize requests_kwargs with "config" value
requests_kwargs.update(self.config.connection())
requests_kwargs['allow_redirects'] = bool(self.config.redirect_policy)
requests_kwargs['headers'] = self.config.headers.copy()
proxies = self.config.proxies()
if proxies:
requests_kwargs['proxies'] = proxies
# Replace by operation level kwargs
# We allow some of them, since some like stream or json are controlled by msrest
for key in kwargs:
if key in self._REQUESTS_KWARGS:
requests_kwargs[key] = kwargs[key]
# Hooks. Deprecated, should be a policy
def make_user_hook_cb(user_hook, session):
def user_hook_cb(r, *args, **kwargs):
kwargs.setdefault("msrest", {})['session'] = session
return user_hook(r, *args, **kwargs)
return user_hook_cb
hooks = []
for user_hook in self.config.hooks:
hooks.append(make_user_hook_cb(user_hook, self.session))
if hooks:
requests_kwargs['hooks'] = {'response': hooks}
# Configuration callback. Deprecated, should be a policy
output_kwargs = self.config.session_configuration_callback(
session,
self.config,
kwargs,
**requests_kwargs
)
if output_kwargs is not None:
requests_kwargs = output_kwargs
# If custom session was not created here
if session is not self.session:
requests_kwargs['session'] = session
### Autorest forced kwargs now ###
# If Autorest needs this response to be streamable. True for compat.
requests_kwargs['stream'] = kwargs.get('stream', True)
if request.files:
requests_kwargs['files'] = request.files
elif request.data:
requests_kwargs['data'] = request.data
requests_kwargs['headers'].update(request.headers)
return requests_kwargs | Configure the kwargs to use with requests.
See "send" for kwargs details.
:param ClientRequest request: The request object to be sent.
:returns: The requests.Session.request kwargs
:rtype: dict[str,str] | Below is the instruction that describes the task:
### Input:
Configure the kwargs to use with requests.
See "send" for kwargs details.
:param ClientRequest request: The request object to be sent.
:returns: The requests.Session.request kwargs
:rtype: dict[str,str]
### Response:
def _configure_send(self, request, **kwargs):
# type: (ClientRequest, Any) -> Dict[str, str]
"""Configure the kwargs to use with requests.
See "send" for kwargs details.
:param ClientRequest request: The request object to be sent.
:returns: The requests.Session.request kwargs
:rtype: dict[str,str]
"""
requests_kwargs = {} # type: Any
session = kwargs.pop('session', self.session)
# If custom session was not created here
if session is not self.session:
self._init_session(session)
session.max_redirects = int(self.config.redirect_policy())
session.trust_env = bool(self.config.proxies.use_env_settings)
# Initialize requests_kwargs with "config" value
requests_kwargs.update(self.config.connection())
requests_kwargs['allow_redirects'] = bool(self.config.redirect_policy)
requests_kwargs['headers'] = self.config.headers.copy()
proxies = self.config.proxies()
if proxies:
requests_kwargs['proxies'] = proxies
# Replace by operation level kwargs
# We allow some of them, since some like stream or json are controlled by msrest
for key in kwargs:
if key in self._REQUESTS_KWARGS:
requests_kwargs[key] = kwargs[key]
# Hooks. Deprecated, should be a policy
def make_user_hook_cb(user_hook, session):
def user_hook_cb(r, *args, **kwargs):
kwargs.setdefault("msrest", {})['session'] = session
return user_hook(r, *args, **kwargs)
return user_hook_cb
hooks = []
for user_hook in self.config.hooks:
hooks.append(make_user_hook_cb(user_hook, self.session))
if hooks:
requests_kwargs['hooks'] = {'response': hooks}
# Configuration callback. Deprecated, should be a policy
output_kwargs = self.config.session_configuration_callback(
session,
self.config,
kwargs,
**requests_kwargs
)
if output_kwargs is not None:
requests_kwargs = output_kwargs
# If custom session was not created here
if session is not self.session:
requests_kwargs['session'] = session
### Autorest forced kwargs now ###
# If Autorest needs this response to be streamable. True for compat.
requests_kwargs['stream'] = kwargs.get('stream', True)
if request.files:
requests_kwargs['files'] = request.files
elif request.data:
requests_kwargs['data'] = request.data
requests_kwargs['headers'].update(request.headers)
return requests_kwargs |
def aws(self):
"""
Access the aws
:returns: twilio.rest.accounts.v1.credential.aws.AwsList
:rtype: twilio.rest.accounts.v1.credential.aws.AwsList
"""
if self._aws is None:
self._aws = AwsList(self._version, )
return self._aws | Access the aws
:returns: twilio.rest.accounts.v1.credential.aws.AwsList
:rtype: twilio.rest.accounts.v1.credential.aws.AwsList | Below is the instruction that describes the task:
### Input:
Access the aws
:returns: twilio.rest.accounts.v1.credential.aws.AwsList
:rtype: twilio.rest.accounts.v1.credential.aws.AwsList
### Response:
def aws(self):
"""
Access the aws
:returns: twilio.rest.accounts.v1.credential.aws.AwsList
:rtype: twilio.rest.accounts.v1.credential.aws.AwsList
"""
if self._aws is None:
self._aws = AwsList(self._version, )
return self._aws |
def get_coding_intervals(self, build='37', genes=None):
"""Return a dictionary with chromosomes as keys and interval trees as values
Each interval represents a coding region of overlapping genes.
Args:
build(str): The genome build
genes(iterable(scout.models.HgncGene)):
Returns:
intervals(dict): A dictionary with chromosomes as keys and overlapping genomic intervals as values
"""
intervals = {}
if not genes:
genes = self.all_genes(build=build)
LOG.info("Building interval trees...")
for i,hgnc_obj in enumerate(genes):
chrom = hgnc_obj['chromosome']
start = max((hgnc_obj['start'] - 5000), 1)
end = hgnc_obj['end'] + 5000
# If this is the first time a chromosome is seen we create a new
# interval tree with current interval
if chrom not in intervals:
intervals[chrom] = intervaltree.IntervalTree()
intervals[chrom].addi(start, end, i)
continue
res = intervals[chrom].search(start, end)
# If the interval did not overlap any other intervals we insert it and continue
if not res:
intervals[chrom].addi(start, end, i)
continue
# Loop over the overlapping intervals
for interval in res:
# Update the positions to new max and mins
if interval.begin < start:
start = interval.begin
if interval.end > end:
end = interval.end
# Delete the old interval
intervals[chrom].remove(interval)
# Add the new interval consisting of the overlapping ones
intervals[chrom].addi(start, end, i)
return intervals | Return a dictionary with chromosomes as keys and interval trees as values
Each interval represents a coding region of overlapping genes.
Args:
build(str): The genome build
genes(iterable(scout.models.HgncGene)):
Returns:
intervals(dict): A dictionary with chromosomes as keys and overlapping genomic intervals as values | Below is the instruction that describes the task:
### Input:
Return a dictionary with chromosomes as keys and interval trees as values
Each interval represents a coding region of overlapping genes.
Args:
build(str): The genome build
genes(iterable(scout.models.HgncGene)):
Returns:
intervals(dict): A dictionary with chromosomes as keys and overlapping genomic intervals as values
### Response:
def get_coding_intervals(self, build='37', genes=None):
"""Return a dictionary with chromosomes as keys and interval trees as values
Each interval represents a coding region of overlapping genes.
Args:
build(str): The genome build
genes(iterable(scout.models.HgncGene)):
Returns:
intervals(dict): A dictionary with chromosomes as keys and overlapping genomic intervals as values
"""
intervals = {}
if not genes:
genes = self.all_genes(build=build)
LOG.info("Building interval trees...")
for i,hgnc_obj in enumerate(genes):
chrom = hgnc_obj['chromosome']
start = max((hgnc_obj['start'] - 5000), 1)
end = hgnc_obj['end'] + 5000
# If this is the first time a chromosome is seen we create a new
# interval tree with current interval
if chrom not in intervals:
intervals[chrom] = intervaltree.IntervalTree()
intervals[chrom].addi(start, end, i)
continue
res = intervals[chrom].search(start, end)
# If the interval did not overlap any other intervals we insert it and continue
if not res:
intervals[chrom].addi(start, end, i)
continue
# Loop over the overlapping intervals
for interval in res:
# Update the positions to new max and mins
if interval.begin < start:
start = interval.begin
if interval.end > end:
end = interval.end
# Delete the old interval
intervals[chrom].remove(interval)
# Add the new interval consisting of the overlapping ones
intervals[chrom].addi(start, end, i)
return intervals |
def load_database(adapter, variant_file=None, sv_file=None, family_file=None,
family_type='ped', skip_case_id=False, gq_treshold=None,
case_id=None, max_window = 3000, profile_file=None,
hard_threshold=0.95, soft_threshold=0.9):
"""Load the database with a case and its variants
Args:
adapter: Connection to database
variant_file(str): Path to variant file
sv_file(str): Path to sv variant file
family_file(str): Path to family file
family_type(str): Format of family file
skip_case_id(bool): If no case information should be added to variants
gq_treshold(int): If only quality variants should be considered
case_id(str): If different case id than the one in family file should be used
max_window(int): Specify the max size for sv windows
check_profile(bool): Does profile check if True
hard_threshold(float): Rejects load if hamming distance above this is found
soft_threshold(float): Stores similar samples if hamming distance above this is found
Returns:
nr_inserted(int)
"""
vcf_files = []
nr_variants = None
vcf_individuals = None
if variant_file:
vcf_info = check_vcf(variant_file)
nr_variants = vcf_info['nr_variants']
variant_type = vcf_info['variant_type']
vcf_files.append(variant_file)
# Get the individuals that are present in vcf file
vcf_individuals = vcf_info['individuals']
nr_sv_variants = None
sv_individuals = None
if sv_file:
vcf_info = check_vcf(sv_file, 'sv')
nr_sv_variants = vcf_info['nr_variants']
vcf_files.append(sv_file)
sv_individuals = vcf_info['individuals']
profiles = None
matches = None
if profile_file:
profiles = get_profiles(adapter, profile_file)
###Check if any profile already exists
matches = profile_match(adapter,
profiles,
hard_threshold=hard_threshold,
soft_threshold=soft_threshold)
# If a gq treshold is used the variants needs to have GQ
for _vcf_file in vcf_files:
# Get a cyvcf2.VCF object
vcf = get_vcf(_vcf_file)
if gq_treshold:
if not vcf.contains('GQ'):
LOG.warning('Set gq-treshold to 0 or add info to vcf {0}'.format(_vcf_file))
raise SyntaxError('GQ is not defined in vcf header')
# Get a ped_parser.Family object from family file
family = None
family_id = None
if family_file:
LOG.info("Loading family from %s", family_file)
with open(family_file, 'r') as family_lines:
family = get_case(
family_lines=family_lines,
family_type=family_type
)
family_id = family.family_id
# There has to be a case_id or a family at this stage.
case_id = case_id or family_id
# Convert information to a loqusdb Case object
case_obj = build_case(
case=family,
case_id=case_id,
vcf_path=variant_file,
vcf_individuals=vcf_individuals,
nr_variants=nr_variants,
vcf_sv_path=sv_file,
sv_individuals=sv_individuals,
nr_sv_variants=nr_sv_variants,
profiles=profiles,
matches=matches,
profile_path=profile_file
)
# Build and load a new case, or update an existing one
load_case(
adapter=adapter,
case_obj=case_obj,
)
nr_inserted = 0
# If case was successfully added we can store the variants
for file_type in ['vcf_path','vcf_sv_path']:
variant_type = 'snv'
if file_type == 'vcf_sv_path':
variant_type = 'sv'
if case_obj.get(file_type) is None:
continue
vcf_obj = get_vcf(case_obj[file_type])
try:
nr_inserted += load_variants(
adapter=adapter,
vcf_obj=vcf_obj,
case_obj=case_obj,
skip_case_id=skip_case_id,
gq_treshold=gq_treshold,
max_window=max_window,
variant_type=variant_type,
)
except Exception as err:
# If something went wrong do a rollback
LOG.warning(err)
delete(
adapter=adapter,
case_obj=case_obj,
)
raise err
return nr_inserted | Load the database with a case and its variants
Args:
adapter: Connection to database
variant_file(str): Path to variant file
sv_file(str): Path to sv variant file
family_file(str): Path to family file
family_type(str): Format of family file
skip_case_id(bool): If no case information should be added to variants
gq_treshold(int): If only quality variants should be considered
case_id(str): If different case id than the one in family file should be used
max_window(int): Specify the max size for sv windows
check_profile(bool): Does profile check if True
hard_threshold(float): Rejects load if hamming distance above this is found
soft_threshold(float): Stores similar samples if hamming distance above this is found
Returns:
nr_inserted(int) | Below is the instruction that describes the task:
### Input:
Load the database with a case and its variants
Args:
adapter: Connection to database
variant_file(str): Path to variant file
sv_file(str): Path to sv variant file
family_file(str): Path to family file
family_type(str): Format of family file
skip_case_id(bool): If no case information should be added to variants
gq_treshold(int): If only quality variants should be considered
case_id(str): If different case id than the one in family file should be used
max_window(int): Specify the max size for sv windows
check_profile(bool): Does profile check if True
hard_threshold(float): Rejects load if hamming distance above this is found
soft_threshold(float): Stores similar samples if hamming distance above this is found
Returns:
nr_inserted(int)
### Response:
def load_database(adapter, variant_file=None, sv_file=None, family_file=None,
family_type='ped', skip_case_id=False, gq_treshold=None,
case_id=None, max_window = 3000, profile_file=None,
hard_threshold=0.95, soft_threshold=0.9):
"""Load the database with a case and its variants
Args:
adapter: Connection to database
variant_file(str): Path to variant file
sv_file(str): Path to sv variant file
family_file(str): Path to family file
family_type(str): Format of family file
skip_case_id(bool): If no case information should be added to variants
gq_treshold(int): If only quality variants should be considered
case_id(str): If different case id than the one in family file should be used
max_window(int): Specify the max size for sv windows
check_profile(bool): Does profile check if True
hard_threshold(float): Rejects load if hamming distance above this is found
soft_threshold(float): Stores similar samples if hamming distance above this is found
Returns:
nr_inserted(int)
"""
vcf_files = []
nr_variants = None
vcf_individuals = None
if variant_file:
vcf_info = check_vcf(variant_file)
nr_variants = vcf_info['nr_variants']
variant_type = vcf_info['variant_type']
vcf_files.append(variant_file)
# Get the individuals that are present in vcf file
vcf_individuals = vcf_info['individuals']
nr_sv_variants = None
sv_individuals = None
if sv_file:
vcf_info = check_vcf(sv_file, 'sv')
nr_sv_variants = vcf_info['nr_variants']
vcf_files.append(sv_file)
sv_individuals = vcf_info['individuals']
profiles = None
matches = None
if profile_file:
profiles = get_profiles(adapter, profile_file)
###Check if any profile already exists
matches = profile_match(adapter,
profiles,
hard_threshold=hard_threshold,
soft_threshold=soft_threshold)
# If a gq treshold is used the variants needs to have GQ
for _vcf_file in vcf_files:
# Get a cyvcf2.VCF object
vcf = get_vcf(_vcf_file)
if gq_treshold:
if not vcf.contains('GQ'):
LOG.warning('Set gq-treshold to 0 or add info to vcf {0}'.format(_vcf_file))
raise SyntaxError('GQ is not defined in vcf header')
# Get a ped_parser.Family object from family file
family = None
family_id = None
if family_file:
LOG.info("Loading family from %s", family_file)
with open(family_file, 'r') as family_lines:
family = get_case(
family_lines=family_lines,
family_type=family_type
)
family_id = family.family_id
# There has to be a case_id or a family at this stage.
case_id = case_id or family_id
# Convert information to a loqusdb Case object
case_obj = build_case(
case=family,
case_id=case_id,
vcf_path=variant_file,
vcf_individuals=vcf_individuals,
nr_variants=nr_variants,
vcf_sv_path=sv_file,
sv_individuals=sv_individuals,
nr_sv_variants=nr_sv_variants,
profiles=profiles,
matches=matches,
profile_path=profile_file
)
# Build and load a new case, or update an existing one
load_case(
adapter=adapter,
case_obj=case_obj,
)
nr_inserted = 0
# If case was successfully added we can store the variants
for file_type in ['vcf_path','vcf_sv_path']:
variant_type = 'snv'
if file_type == 'vcf_sv_path':
variant_type = 'sv'
if case_obj.get(file_type) is None:
continue
vcf_obj = get_vcf(case_obj[file_type])
try:
nr_inserted += load_variants(
adapter=adapter,
vcf_obj=vcf_obj,
case_obj=case_obj,
skip_case_id=skip_case_id,
gq_treshold=gq_treshold,
max_window=max_window,
variant_type=variant_type,
)
except Exception as err:
# If something went wrong do a rollback
LOG.warning(err)
delete(
adapter=adapter,
case_obj=case_obj,
)
raise err
return nr_inserted |
def parse_portal_json():
""" Extract id, ip from https://www.meethue.com/api/nupnp
Note: the ip is only the base and needs xml file appended, and
the id is not exactly the same as the serial number in the xml
"""
try:
json_str = from_url('https://www.meethue.com/api/nupnp')
except urllib.request.HTTPError as error:
logger.error("Problem at portal: %s", error)
raise
except urllib.request.URLError as error:
logger.warning("Problem reaching portal: %s", error)
return []
else:
portal_list = []
json_list = json.loads(json_str)
for bridge in json_list:
serial = bridge['id']
baseip = bridge['internalipaddress']
# baseip should look like "192.168.0.1"
xmlurl = _build_from(baseip)
# xmlurl should look like "http://192.168.0.1/description.xml"
portal_list.append((serial, xmlurl))
return portal_list | Extract id, ip from https://www.meethue.com/api/nupnp
Note: the ip is only the base and needs xml file appended, and
the id is not exactly the same as the serial number in the xml | Below is the instruction that describes the task:
### Input:
Extract id, ip from https://www.meethue.com/api/nupnp
Note: the ip is only the base and needs xml file appended, and
the id is not exactly the same as the serial number in the xml
### Response:
def parse_portal_json():
""" Extract id, ip from https://www.meethue.com/api/nupnp
Note: the ip is only the base and needs xml file appended, and
the id is not exactly the same as the serial number in the xml
"""
try:
json_str = from_url('https://www.meethue.com/api/nupnp')
except urllib.request.HTTPError as error:
logger.error("Problem at portal: %s", error)
raise
except urllib.request.URLError as error:
logger.warning("Problem reaching portal: %s", error)
return []
else:
portal_list = []
json_list = json.loads(json_str)
for bridge in json_list:
serial = bridge['id']
baseip = bridge['internalipaddress']
# baseip should look like "192.168.0.1"
xmlurl = _build_from(baseip)
# xmlurl should look like "http://192.168.0.1/description.xml"
portal_list.append((serial, xmlurl))
return portal_list |
def rpc_receiver_count(self, service, routing_id):
'''Get the number of peers that would handle a particular RPC
:param service: the service name
:type service: anything hash-able
:param routing_id:
the id used for narrowing within the service handlers
:type routing_id: int
:returns:
the integer number of peers that would receive the described RPC
'''
peers = len(list(self._dispatcher.find_peer_routes(
const.MSG_TYPE_RPC_REQUEST, service, routing_id)))
if self._dispatcher.locally_handles(const.MSG_TYPE_RPC_REQUEST,
service, routing_id):
return peers + 1
return peers | Get the number of peers that would handle a particular RPC
:param service: the service name
:type service: anything hash-able
:param routing_id:
the id used for narrowing within the service handlers
:type routing_id: int
:returns:
the integer number of peers that would receive the described RPC | Below is the instruction that describes the task:
### Input:
Get the number of peers that would handle a particular RPC
:param service: the service name
:type service: anything hash-able
:param routing_id:
the id used for narrowing within the service handlers
:type routing_id: int
:returns:
the integer number of peers that would receive the described RPC
### Response:
def rpc_receiver_count(self, service, routing_id):
'''Get the number of peers that would handle a particular RPC
:param service: the service name
:type service: anything hash-able
:param routing_id:
the id used for narrowing within the service handlers
:type routing_id: int
:returns:
the integer number of peers that would receive the described RPC
'''
peers = len(list(self._dispatcher.find_peer_routes(
const.MSG_TYPE_RPC_REQUEST, service, routing_id)))
if self._dispatcher.locally_handles(const.MSG_TYPE_RPC_REQUEST,
service, routing_id):
return peers + 1
return peers |
def round_dict(dic, places):
"""
Rounds all values in a dict containing only numeric types to `places` decimal places.
If places is None, round to INT.
"""
if places is None:
for key, value in dic.items():
dic[key] = round(value)
else:
for key, value in dic.items():
dic[key] = round(value, places) | Rounds all values in a dict containing only numeric types to `places` decimal places.
If places is None, round to INT. | Below is the the instruction that describes the task:
### Input:
Rounds all values in a dict containing only numeric types to `places` decimal places.
If places is None, round to INT.
### Response:
def round_dict(dic, places):
"""
Rounds all values in a dict containing only numeric types to `places` decimal places.
If places is None, round to INT.
"""
if places is None:
for key, value in dic.items():
dic[key] = round(value)
else:
for key, value in dic.items():
dic[key] = round(value, places) |
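A short, runnable usage sketch for round_dict above (the function itself only needs builtins); the dict values are arbitrary.
    prices = {'a': 1.2345, 'b': 2.6789}
    round_dict(prices, 2)      # rounds in place to 2 decimal places
    print(prices)              # {'a': 1.23, 'b': 2.68}
    round_dict(prices, None)   # places=None rounds to int
    print(prices)              # {'a': 1, 'b': 3}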
def to_array(self, channels=2):
"""Generate the array of volume multipliers for the dynamic"""
if self.fade_type == "linear":
return np.linspace(self.in_volume, self.out_volume,
self.duration * channels)\
.reshape(self.duration, channels)
elif self.fade_type == "exponential":
if self.in_volume < self.out_volume:
return (np.logspace(8, 1, self.duration * channels,
base=.5) * (
self.out_volume - self.in_volume) / 0.5 +
self.in_volume).reshape(self.duration, channels)
else:
return (np.logspace(1, 8, self.duration * channels, base=.5
) * (self.in_volume - self.out_volume) / 0.5 +
self.out_volume).reshape(self.duration, channels)
elif self.fade_type == "cosine":
return | Generate the array of volume multipliers for the dynamic | Below is the the instruction that describes the task:
### Input:
Generate the array of volume multipliers for the dynamic
### Response:
def to_array(self, channels=2):
"""Generate the array of volume multipliers for the dynamic"""
if self.fade_type == "linear":
return np.linspace(self.in_volume, self.out_volume,
self.duration * channels)\
.reshape(self.duration, channels)
elif self.fade_type == "exponential":
if self.in_volume < self.out_volume:
return (np.logspace(8, 1, self.duration * channels,
base=.5) * (
self.out_volume - self.in_volume) / 0.5 +
self.in_volume).reshape(self.duration, channels)
else:
return (np.logspace(1, 8, self.duration * channels, base=.5
) * (self.in_volume - self.out_volume) / 0.5 +
self.out_volume).reshape(self.duration, channels)
elif self.fade_type == "cosine":
return |
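The "cosine" branch above returns nothing; the sketch below is not the library's implementation, only one plausible way such a fade could be built in the same duration * channels / reshape style used by the linear branch.
    import numpy as np

    def cosine_fade(in_volume, out_volume, duration, channels=2):
        # Half-cosine ramp from in_volume to out_volume (assumed shape convention
        # matches the linear branch: duration * channels samples, then reshape).
        t = np.linspace(0, np.pi, duration * channels)
        curve = (1 - np.cos(t)) / 2
        return (in_volume + (out_volume - in_volume) * curve).reshape(duration, channels)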
def _basis_spline_factory(coef, degree, knots, der, ext):
"""Return a B-Spline given some coefficients."""
return functools.partial(interpolate.splev, tck=(knots, coef, degree), der=der, ext=ext) | Return a B-Spline given some coefficients. | Below is the the instruction that describes the task:
### Input:
Return a B-Spline given some coefficients.
### Response:
def _basis_spline_factory(coef, degree, knots, der, ext):
"""Return a B-Spline given some coefficients."""
return functools.partial(interpolate.splev, tck=(knots, coef, degree), der=der, ext=ext) |
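A usage sketch for _basis_spline_factory above, assuming it is importable together with its module's functools and scipy imports; splrep is used here only to obtain a (knots, coefficients, degree) triple.
    import numpy as np
    from scipy import interpolate

    x = np.linspace(0, 2 * np.pi, 50)
    knots, coef, degree = interpolate.splrep(x, np.sin(x))
    spline = _basis_spline_factory(coef, degree, knots, der=0, ext=0)
    spline(np.array([0.5, 1.5]))   # approximates sin(0.5), sin(1.5)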
def last(self, rows: List[Row]) -> List[Row]:
"""
Takes an expression that evaluates to a list of rows, and returns the last one in that
list.
"""
if not rows:
logger.warning("Trying to get last row from an empty list")
return []
return [rows[-1]] | Takes an expression that evaluates to a list of rows, and returns the last one in that
list. | Below is the the instruction that describes the task:
### Input:
Takes an expression that evaluates to a list of rows, and returns the last one in that
list.
### Response:
def last(self, rows: List[Row]) -> List[Row]:
"""
Takes an expression that evaluates to a list of rows, and returns the last one in that
list.
"""
if not rows:
logger.warning("Trying to get last row from an empty list")
return []
return [rows[-1]] |
def _find_project_config_file(user_config_file):
"""Find path to project-wide config file
Search from current working directory, and traverse path up to
directory with .versionner.rc file or root directory
:param user_config_file: instance with user-wide config path
:type: pathlib.Path
:rtype: pathlib.Path
"""
proj_cfg_dir = pathlib.Path('.').absolute()
proj_cfg_file = None
root = pathlib.Path('/')
while proj_cfg_dir != root:
proj_cfg_file = proj_cfg_dir / defaults.RC_FILENAME
if proj_cfg_file.exists():
break
proj_cfg_file = None
# pylint: disable=redefined-variable-type
proj_cfg_dir = proj_cfg_dir.parent
if proj_cfg_file and proj_cfg_file != user_config_file:
return proj_cfg_file | Find path to project-wide config file
Search from current working directory, and traverse path up to
directory with .versionner.rc file or root directory
:param user_config_file: instance with user-wide config path
:type: pathlib.Path
:rtype: pathlib.Path | Below is the the instruction that describes the task:
### Input:
Find path to project-wide config file
Search from current working directory, and traverse path up to
directory with .versionner.rc file or root directory
:param user_config_file: instance with user-wide config path
:type: pathlib.Path
:rtype: pathlib.Path
### Response:
def _find_project_config_file(user_config_file):
"""Find path to project-wide config file
Search from current working directory, and traverse path up to
directory with .versionner.rc file or root directory
:param user_config_file: instance with user-wide config path
:type: pathlib.Path
:rtype: pathlib.Path
"""
proj_cfg_dir = pathlib.Path('.').absolute()
proj_cfg_file = None
root = pathlib.Path('/')
while proj_cfg_dir != root:
proj_cfg_file = proj_cfg_dir / defaults.RC_FILENAME
if proj_cfg_file.exists():
break
proj_cfg_file = None
# pylint: disable=redefined-variable-type
proj_cfg_dir = proj_cfg_dir.parent
if proj_cfg_file and proj_cfg_file != user_config_file:
return proj_cfg_file |
def register_callback_renamed(self, func, serialised=True):
"""
Register a callback for resource rename. This will be called when any resource
is renamed within your agent.
        If `serialised` is not set, the callbacks might arrive in a different order than they were requested.
The payload passed to your callback is an OrderedDict with the following keys
#!python
r : R_ENTITY, R_FEED, etc # the type of resource deleted
lid : <name> # the new local name of the resource
oldLid : <name> # the old local name of the resource
id : <GUID> # the global Id of the resource
`Note` resource types are defined [here](../Core/Const.m.html)
`Example`
#!python
def renamed_callback(args):
print(args)
...
client.register_callback_renamed(renamed_callback)
This would print out something like the following on renaming of an R_ENTITY
#!python
OrderedDict([(u'lid', u'new_name'),
(u'r', 1),
(u'oldLid', u'old_name'),
(u'id', u'4448993b44738411de5fe2a6cf32d957')])
"""
self.__client.register_callback_renamed(partial(self.__callback_payload_only, func), serialised=serialised) | Register a callback for resource rename. This will be called when any resource
is renamed within your agent.
If `serialised` is not set, the callbacks might arrive in a different order than they were requested.
The payload passed to your callback is an OrderedDict with the following keys
#!python
r : R_ENTITY, R_FEED, etc # the type of resource deleted
lid : <name> # the new local name of the resource
oldLid : <name> # the old local name of the resource
id : <GUID> # the global Id of the resource
`Note` resource types are defined [here](../Core/Const.m.html)
`Example`
#!python
def renamed_callback(args):
print(args)
...
client.register_callback_renamed(renamed_callback)
This would print out something like the following on renaming of an R_ENTITY
#!python
OrderedDict([(u'lid', u'new_name'),
(u'r', 1),
(u'oldLid', u'old_name'),
(u'id', u'4448993b44738411de5fe2a6cf32d957')]) | Below is the the instruction that describes the task:
### Input:
Register a callback for resource rename. This will be called when any resource
is renamed within your agent.
If `serialised` is not set, the callbacks might arrive in a different order than they were requested.
The payload passed to your callback is an OrderedDict with the following keys
#!python
r : R_ENTITY, R_FEED, etc # the type of resource deleted
lid : <name> # the new local name of the resource
oldLid : <name> # the old local name of the resource
id : <GUID> # the global Id of the resource
`Note` resource types are defined [here](../Core/Const.m.html)
`Example`
#!python
def renamed_callback(args):
print(args)
...
client.register_callback_renamed(renamed_callback)
This would print out something like the following on renaming of an R_ENTITY
#!python
OrderedDict([(u'lid', u'new_name'),
(u'r', 1),
(u'oldLid', u'old_name'),
(u'id', u'4448993b44738411de5fe2a6cf32d957')])
### Response:
def register_callback_renamed(self, func, serialised=True):
"""
Register a callback for resource rename. This will be called when any resource
is renamed within your agent.
        If `serialised` is not set, the callbacks might arrive in a different order than they were requested.
The payload passed to your callback is an OrderedDict with the following keys
#!python
r : R_ENTITY, R_FEED, etc # the type of resource deleted
lid : <name> # the new local name of the resource
oldLid : <name> # the old local name of the resource
id : <GUID> # the global Id of the resource
`Note` resource types are defined [here](../Core/Const.m.html)
`Example`
#!python
def renamed_callback(args):
print(args)
...
client.register_callback_renamed(renamed_callback)
This would print out something like the following on renaming of an R_ENTITY
#!python
OrderedDict([(u'lid', u'new_name'),
(u'r', 1),
(u'oldLid', u'old_name'),
(u'id', u'4448993b44738411de5fe2a6cf32d957')])
"""
self.__client.register_callback_renamed(partial(self.__callback_payload_only, func), serialised=serialised) |
def _has_bcftools_germline_stats(data):
"""Check for the presence of a germline stats file, CWL compatible.
"""
stats_file = tz.get_in(["summary", "qc"], data)
if isinstance(stats_file, dict):
stats_file = tz.get_in(["variants", "base"], stats_file)
if not stats_file:
stats_file = ""
return stats_file.find("bcftools_stats_germline") > 0 | Check for the presence of a germline stats file, CWL compatible. | Below is the the instruction that describes the task:
### Input:
Check for the presence of a germline stats file, CWL compatible.
### Response:
def _has_bcftools_germline_stats(data):
"""Check for the presence of a germline stats file, CWL compatible.
"""
stats_file = tz.get_in(["summary", "qc"], data)
if isinstance(stats_file, dict):
stats_file = tz.get_in(["variants", "base"], stats_file)
if not stats_file:
stats_file = ""
return stats_file.find("bcftools_stats_germline") > 0 |
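An illustrative call for the record above, with a pared-down stand-in for the bcbio data dict; it assumes the function and its toolz import (tz) are in scope.
    data = {"summary": {"qc": {"variants": {"base": "qc/bcftools_stats_germline/stats.txt"}}}}
    _has_bcftools_germline_stats(data)   # True: the path contains "bcftools_stats_germline"
    _has_bcftools_germline_stats({})     # False: no stats file recorded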
def _field_value_text(self, field):
"""Return the html representation of the value of the given field"""
if field in self.fields:
return unicode(self.get(field))
else:
return self.get_timemachine_instance(field)._object_name_text() | Return the html representation of the value of the given field | Below is the the instruction that describes the task:
### Input:
Return the html representation of the value of the given field
### Response:
def _field_value_text(self, field):
"""Return the html representation of the value of the given field"""
if field in self.fields:
return unicode(self.get(field))
else:
return self.get_timemachine_instance(field)._object_name_text() |
def prepread(sheet, header=True, startcell=None, stopcell=None):
"""Return four StartStop objects, defining the outer bounds of
header row and data range, respectively. If header is False, the
first two items will be None.
--> [headstart, headstop, datstart, datstop]
sheet: xlrd.sheet.Sheet instance
Ready for use.
header: bool or str
True if the defined data range includes a header with field
names. Else False - the whole range is data. If a string, it is
spread sheet style notation of the startcell for the header
("F9"). The "width" of this record is the same as for the data.
startcell: str or None
If given, a spread sheet style notation of the cell where reading
start, ("F9").
stopcell: str or None
A spread sheet style notation of the cell where data end,
("F9").
startcell and stopcell can both be None, either one specified or
both specified.
Note to self: consider making possible to specify headers in a column.
"""
datstart, datstop = _get_startstop(sheet, startcell, stopcell)
headstart, headstop = StartStop(0, 0), StartStop(0, 0) # Holders
def typicalprep():
headstart.row, headstart.col = datstart.row, datstart.col
headstop.row, headstop.col = datstart.row + 1, datstop.col
# Tick the data start row by 1:
datstart.row += 1
def offsetheaderprep():
headstart.row, headstart.col = headrow, headcol
headstop.row = headrow + 1
headstop.col = headcol + (datstop.col - datstart.col) # stop > start
if header is True: # Simply the toprow of the table.
typicalprep()
return [headstart, headstop, datstart, datstop]
elif header: # Then it is a string if not False. ("F9")
m = re.match(XLNOT_RX, header)
headrow = int(m.group(2)) - 1
headcol = letter2num(m.group(1), zbase=True)
if headrow == datstart.row and headcol == datstart.col:
typicalprep()
return [headstart, headstop, datstart, datstop]
elif headrow == datstart.row:
typicalprep()
offsetheaderprep()
return [headstart, headstop, datstart, datstop]
else:
offsetheaderprep()
return [headstart, headstop, datstart, datstop]
else: # header is False
return [None, None, datstart, datstop] | Return four StartStop objects, defining the outer bounds of
header row and data range, respectively. If header is False, the
first two items will be None.
--> [headstart, headstop, datstart, datstop]
sheet: xlrd.sheet.Sheet instance
Ready for use.
header: bool or str
True if the defined data range includes a header with field
names. Else False - the whole range is data. If a string, it is
spread sheet style notation of the startcell for the header
("F9"). The "width" of this record is the same as for the data.
startcell: str or None
If given, a spread sheet style notation of the cell where reading
start, ("F9").
stopcell: str or None
A spread sheet style notation of the cell where data end,
("F9").
startcell and stopcell can both be None, either one specified or
both specified.
Note to self: consider making possible to specify headers in a column. | Below is the the instruction that describes the task:
### Input:
Return four StartStop objects, defining the outer bounds of
header row and data range, respectively. If header is False, the
first two items will be None.
--> [headstart, headstop, datstart, datstop]
sheet: xlrd.sheet.Sheet instance
Ready for use.
header: bool or str
True if the defined data range includes a header with field
names. Else False - the whole range is data. If a string, it is
spread sheet style notation of the startcell for the header
("F9"). The "width" of this record is the same as for the data.
startcell: str or None
If given, a spread sheet style notation of the cell where reading
start, ("F9").
stopcell: str or None
A spread sheet style notation of the cell where data end,
("F9").
startcell and stopcell can both be None, either one specified or
both specified.
Note to self: consider making possible to specify headers in a column.
### Response:
def prepread(sheet, header=True, startcell=None, stopcell=None):
"""Return four StartStop objects, defining the outer bounds of
header row and data range, respectively. If header is False, the
first two items will be None.
--> [headstart, headstop, datstart, datstop]
sheet: xlrd.sheet.Sheet instance
Ready for use.
header: bool or str
True if the defined data range includes a header with field
names. Else False - the whole range is data. If a string, it is
spread sheet style notation of the startcell for the header
("F9"). The "width" of this record is the same as for the data.
startcell: str or None
If given, a spread sheet style notation of the cell where reading
start, ("F9").
stopcell: str or None
A spread sheet style notation of the cell where data end,
("F9").
startcell and stopcell can both be None, either one specified or
both specified.
Note to self: consider making possible to specify headers in a column.
"""
datstart, datstop = _get_startstop(sheet, startcell, stopcell)
headstart, headstop = StartStop(0, 0), StartStop(0, 0) # Holders
def typicalprep():
headstart.row, headstart.col = datstart.row, datstart.col
headstop.row, headstop.col = datstart.row + 1, datstop.col
# Tick the data start row by 1:
datstart.row += 1
def offsetheaderprep():
headstart.row, headstart.col = headrow, headcol
headstop.row = headrow + 1
headstop.col = headcol + (datstop.col - datstart.col) # stop > start
if header is True: # Simply the toprow of the table.
typicalprep()
return [headstart, headstop, datstart, datstop]
elif header: # Then it is a string if not False. ("F9")
m = re.match(XLNOT_RX, header)
headrow = int(m.group(2)) - 1
headcol = letter2num(m.group(1), zbase=True)
if headrow == datstart.row and headcol == datstart.col:
typicalprep()
return [headstart, headstop, datstart, datstop]
elif headrow == datstart.row:
typicalprep()
offsetheaderprep()
return [headstart, headstop, datstart, datstop]
else:
offsetheaderprep()
return [headstart, headstop, datstart, datstop]
else: # header is False
return [None, None, datstart, datstop] |
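A usage sketch for prepread above; the workbook name and cell references are placeholders, and xlrd is assumed to be a version that still reads .xls files.
    import xlrd

    sheet = xlrd.open_workbook('report.xls').sheet_by_index(0)
    headstart, headstop, datstart, datstop = prepread(sheet, header=True,
                                                      startcell='B2', stopcell='F40')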
def list_balancers(profile, **libcloud_kwargs):
'''
Return a list of load balancers.
:param profile: The profile key
:type profile: ``str``
:param libcloud_kwargs: Extra arguments for the driver's list_balancers method
:type libcloud_kwargs: ``dict``
CLI Example:
.. code-block:: bash
salt myminion libcloud_storage.list_balancers profile1
'''
conn = _get_driver(profile=profile)
libcloud_kwargs = salt.utils.args.clean_kwargs(**libcloud_kwargs)
balancers = conn.list_balancers(**libcloud_kwargs)
ret = []
for balancer in balancers:
ret.append(_simple_balancer(balancer))
return ret | Return a list of load balancers.
:param profile: The profile key
:type profile: ``str``
:param libcloud_kwargs: Extra arguments for the driver's list_balancers method
:type libcloud_kwargs: ``dict``
CLI Example:
.. code-block:: bash
salt myminion libcloud_storage.list_balancers profile1 | Below is the the instruction that describes the task:
### Input:
Return a list of load balancers.
:param profile: The profile key
:type profile: ``str``
:param libcloud_kwargs: Extra arguments for the driver's list_balancers method
:type libcloud_kwargs: ``dict``
CLI Example:
.. code-block:: bash
salt myminion libcloud_storage.list_balancers profile1
### Response:
def list_balancers(profile, **libcloud_kwargs):
'''
Return a list of load balancers.
:param profile: The profile key
:type profile: ``str``
:param libcloud_kwargs: Extra arguments for the driver's list_balancers method
:type libcloud_kwargs: ``dict``
CLI Example:
.. code-block:: bash
salt myminion libcloud_storage.list_balancers profile1
'''
conn = _get_driver(profile=profile)
libcloud_kwargs = salt.utils.args.clean_kwargs(**libcloud_kwargs)
balancers = conn.list_balancers(**libcloud_kwargs)
ret = []
for balancer in balancers:
ret.append(_simple_balancer(balancer))
return ret |
def clear(self, page_size=10, vtimeout=10):
"""Utility function to remove all messages from a queue"""
n = 0
l = self.get_messages(page_size, vtimeout)
while l:
for m in l:
self.delete_message(m)
n += 1
l = self.get_messages(page_size, vtimeout)
return n | Utility function to remove all messages from a queue | Below is the the instruction that describes the task:
### Input:
Utility function to remove all messages from a queue
### Response:
def clear(self, page_size=10, vtimeout=10):
"""Utility function to remove all messages from a queue"""
n = 0
l = self.get_messages(page_size, vtimeout)
while l:
for m in l:
self.delete_message(m)
n += 1
l = self.get_messages(page_size, vtimeout)
return n |
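A usage sketch for the queue-clearing record above, using the legacy boto 2 SQS API; the region and queue name are placeholders and real AWS credentials are required.
    import boto.sqs

    conn = boto.sqs.connect_to_region('us-east-1')
    queue = conn.get_queue('my-queue')
    removed = queue.clear()    # deletes messages in pages of 10 and returns the count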
def estimateabundance(self):
"""
Estimate the abundance of taxonomic groups
"""
logging.info('Estimating abundance of taxonomic groups')
# Create and start threads
for i in range(self.cpus):
# Send the threads to the appropriate destination function
threads = Thread(target=self.estimate, args=())
# Set the daemon to true - something to do with thread management
threads.setDaemon(True)
# Start the threading
threads.start()
with progressbar(self.runmetadata.samples) as bar:
for sample in bar:
try:
if sample.general.combined != 'NA':
# Set the name of the abundance report
sample.general.abundance = sample.general.combined.split('.')[0] + '_abundance.csv'
# if not hasattr(sample, 'commands'):
if not sample.commands.datastore:
sample.commands = GenObject()
# Define system calls
sample.commands.target = self.targetcall
sample.commands.classify = self.classifycall
sample.commands.abundancecall = \
'cd {} && ./estimate_abundance.sh -D {} -F {} > {}'.format(self.clarkpath,
self.databasepath,
sample.general.classification,
sample.general.abundance)
self.abundancequeue.put(sample)
except KeyError:
pass
self.abundancequeue.join() | Estimate the abundance of taxonomic groups | Below is the the instruction that describes the task:
### Input:
Estimate the abundance of taxonomic groups
### Response:
def estimateabundance(self):
"""
Estimate the abundance of taxonomic groups
"""
logging.info('Estimating abundance of taxonomic groups')
# Create and start threads
for i in range(self.cpus):
# Send the threads to the appropriate destination function
threads = Thread(target=self.estimate, args=())
# Set the daemon to true - something to do with thread management
threads.setDaemon(True)
# Start the threading
threads.start()
with progressbar(self.runmetadata.samples) as bar:
for sample in bar:
try:
if sample.general.combined != 'NA':
# Set the name of the abundance report
sample.general.abundance = sample.general.combined.split('.')[0] + '_abundance.csv'
# if not hasattr(sample, 'commands'):
if not sample.commands.datastore:
sample.commands = GenObject()
# Define system calls
sample.commands.target = self.targetcall
sample.commands.classify = self.classifycall
sample.commands.abundancecall = \
'cd {} && ./estimate_abundance.sh -D {} -F {} > {}'.format(self.clarkpath,
self.databasepath,
sample.general.classification,
sample.general.abundance)
self.abundancequeue.put(sample)
except KeyError:
pass
self.abundancequeue.join() |
def round(self, value_array):
"""
Rounds a categorical variable by setting to one the max of the given vector and to zero the rest of the entries.
        Assumes a 1x[number of categories] array (due to one-hot encoding) as an input
"""
rounded_values = np.zeros(value_array.shape)
rounded_values[np.argmax(value_array)] = 1
return rounded_values | Rounds a categorical variable by setting to one the max of the given vector and to zero the rest of the entries.
Assumes a 1x[number of categories] array (due to one-hot encoding) as an input
### Input:
Rounds a categorical variable by setting to one the max of the given vector and to zero the rest of the entries.
Assumes a 1x[number of categories] array (due to one-hot encoding) as an input
### Response:
def round(self, value_array):
"""
Rounds a categorical variable by setting to one the max of the given vector and to zero the rest of the entries.
        Assumes a 1x[number of categories] array (due to one-hot encoding) as an input
"""
rounded_values = np.zeros(value_array.shape)
rounded_values[np.argmax(value_array)] = 1
return rounded_values |
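A standalone, runnable illustration of the rounding step above; a flat 1-D vector is used because np.argmax is called without an axis argument.
    import numpy as np

    value_array = np.array([0.1, 0.7, 0.2])
    rounded = np.zeros(value_array.shape)
    rounded[np.argmax(value_array)] = 1
    print(rounded)   # [0. 1. 0.]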
def _header(self, pam=False):
"""Return file header as byte string."""
if pam or self.magicnum == b'P7':
header = "\n".join((
"P7",
"HEIGHT %i" % self.height,
"WIDTH %i" % self.width,
"DEPTH %i" % self.depth,
"MAXVAL %i" % self.maxval,
"\n".join("TUPLTYPE %s" % unicode(i) for i in self.tupltypes),
"ENDHDR\n"))
elif self.maxval == 1:
header = "P4 %i %i\n" % (self.width, self.height)
elif self.depth == 1:
header = "P5 %i %i %i\n" % (self.width, self.height, self.maxval)
else:
header = "P6 %i %i %i\n" % (self.width, self.height, self.maxval)
if sys.version_info[0] > 2:
header = bytes(header, 'ascii')
return header | Return file header as byte string. | Below is the the instruction that describes the task:
### Input:
Return file header as byte string.
### Response:
def _header(self, pam=False):
"""Return file header as byte string."""
if pam or self.magicnum == b'P7':
header = "\n".join((
"P7",
"HEIGHT %i" % self.height,
"WIDTH %i" % self.width,
"DEPTH %i" % self.depth,
"MAXVAL %i" % self.maxval,
"\n".join("TUPLTYPE %s" % unicode(i) for i in self.tupltypes),
"ENDHDR\n"))
elif self.maxval == 1:
header = "P4 %i %i\n" % (self.width, self.height)
elif self.depth == 1:
header = "P5 %i %i %i\n" % (self.width, self.height, self.maxval)
else:
header = "P6 %i %i %i\n" % (self.width, self.height, self.maxval)
if sys.version_info[0] > 2:
header = bytes(header, 'ascii')
return header |
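A worked example of the strings the P5/P6 branches above produce, with made-up image dimensions; only standard-library code is involved.
    width, height, maxval = 640, 480, 255
    "P5 %i %i %i\n" % (width, height, maxval)            # 'P5 640 480 255\n'
    bytes("P6 %i %i %i\n" % (width, height, maxval),
          'ascii')                                        # b'P6 640 480 255\n'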
def _generate_message_error(cls, response_code, messages, response_id):
"""
:type response_code: int
:type messages: list[str]
:type response_id: str
:rtype: str
"""
line_response_code = cls._FORMAT_RESPONSE_CODE_LINE \
.format(response_code)
line_response_id = cls._FORMAT_RESPONSE_ID_LINE.format(response_id)
line_error_message = cls._FORMAT_ERROR_MESSAGE_LINE.format(
cls._GLUE_ERROR_MESSAGE_STRING_EMPTY.join(messages)
)
return cls._glue_all_error_message(
[line_response_code, line_response_id, line_error_message]
) | :type response_code: int
:type messages: list[str]
:type response_id: str
:rtype: str | Below is the the instruction that describes the task:
### Input:
:type response_code: int
:type messages: list[str]
:type response_id: str
:rtype: str
### Response:
def _generate_message_error(cls, response_code, messages, response_id):
"""
:type response_code: int
:type messages: list[str]
:type response_id: str
:rtype: str
"""
line_response_code = cls._FORMAT_RESPONSE_CODE_LINE \
.format(response_code)
line_response_id = cls._FORMAT_RESPONSE_ID_LINE.format(response_id)
line_error_message = cls._FORMAT_ERROR_MESSAGE_LINE.format(
cls._GLUE_ERROR_MESSAGE_STRING_EMPTY.join(messages)
)
return cls._glue_all_error_message(
[line_response_code, line_response_id, line_error_message]
) |
def in_(self, qfield, *values):
''' Works the same as the query expression method ``in_``
'''
self.__query_obj.in_(qfield, *values)
return self | Works the same as the query expression method ``in_`` | Below is the the instruction that describes the task:
### Input:
Works the same as the query expression method ``in_``
### Response:
def in_(self, qfield, *values):
''' Works the same as the query expression method ``in_``
'''
self.__query_obj.in_(qfield, *values)
return self |
def add_to_batch(self, batch):
'''
Adds paths to the given batch object. They are all added as
GL_TRIANGLES, so the batch will aggregate them all into a single OpenGL
primitive.
'''
for name in self.paths:
svg_path = self.paths[name]
svg_path.add_to_batch(batch) | Adds paths to the given batch object. They are all added as
GL_TRIANGLES, so the batch will aggregate them all into a single OpenGL
primitive. | Below is the the instruction that describes the task:
### Input:
Adds paths to the given batch object. They are all added as
GL_TRIANGLES, so the batch will aggregate them all into a single OpenGL
primitive.
### Response:
def add_to_batch(self, batch):
'''
Adds paths to the given batch object. They are all added as
GL_TRIANGLES, so the batch will aggregate them all into a single OpenGL
primitive.
'''
for name in self.paths:
svg_path = self.paths[name]
svg_path.add_to_batch(batch) |
def _validate_index_level(self, level):
"""
Validate index level.
For single-level Index getting level number is a no-op, but some
verification must be done like in MultiIndex.
"""
if isinstance(level, int):
if level < 0 and level != -1:
raise IndexError("Too many levels: Index has only 1 level,"
" %d is not a valid level number" % (level, ))
elif level > 0:
raise IndexError("Too many levels:"
" Index has only 1 level, not %d" %
(level + 1))
elif level != self.name:
raise KeyError('Level %s must be same as name (%s)' %
(level, self.name)) | Validate index level.
For single-level Index getting level number is a no-op, but some
verification must be done like in MultiIndex. | Below is the the instruction that describes the task:
### Input:
Validate index level.
For single-level Index getting level number is a no-op, but some
verification must be done like in MultiIndex.
### Response:
def _validate_index_level(self, level):
"""
Validate index level.
For single-level Index getting level number is a no-op, but some
verification must be done like in MultiIndex.
"""
if isinstance(level, int):
if level < 0 and level != -1:
raise IndexError("Too many levels: Index has only 1 level,"
" %d is not a valid level number" % (level, ))
elif level > 0:
raise IndexError("Too many levels:"
" Index has only 1 level, not %d" %
(level + 1))
elif level != self.name:
raise KeyError('Level %s must be same as name (%s)' %
(level, self.name)) |
def agent_for_socks_port(reactor, torconfig, socks_config, pool=None):
"""
This returns a Deferred that fires with an object that implements
:class:`twisted.web.iweb.IAgent` and is thus suitable for passing
    to ``treq`` as the ``agent=`` kwarg. Of course it can be used
    directly; see `using Twisted web client
<http://twistedmatrix.com/documents/current/web/howto/client.html>`_. If
you have a :class:`txtorcon.Tor` instance already, the preferred
API is to call :meth:`txtorcon.Tor.web_agent` on it.
:param torconfig: a :class:`txtorcon.TorConfig` instance.
:param socks_config: anything valid for Tor's ``SocksPort``
option. This is generally just a TCP port (e.g. ``9050``), but
can also be a unix path like so ``unix:/path/to/socket`` (Tor
has restrictions on the ownership/permissions of the directory
containing ``socket``). If the given SOCKS option is not
already available in the underlying Tor instance, it is
re-configured to add the SOCKS option.
"""
# :param tls: True (the default) will use Twisted's default options
# with the hostname in the URI -- that is, TLS verification
# similar to a Browser. Otherwise, you can pass whatever Twisted
# returns for `optionsForClientTLS
# <https://twistedmatrix.com/documents/current/api/twisted.internet.ssl.optionsForClientTLS.html>`_
socks_config = str(socks_config) # sadly, all lists are lists-of-strings to Tor :/
if socks_config not in torconfig.SocksPort:
txtorlog.msg("Adding SOCKS port '{}' to Tor".format(socks_config))
torconfig.SocksPort.append(socks_config)
try:
yield torconfig.save()
except Exception as e:
raise RuntimeError(
"Failed to reconfigure Tor with SOCKS port '{}': {}".format(
socks_config, str(e)
)
)
if socks_config.startswith('unix:'):
socks_ep = UNIXClientEndpoint(reactor, socks_config[5:])
else:
if ':' in socks_config:
host, port = socks_config.split(':', 1)
else:
host = '127.0.0.1'
port = int(socks_config)
socks_ep = TCP4ClientEndpoint(reactor, host, port)
returnValue(
Agent.usingEndpointFactory(
reactor,
_AgentEndpointFactoryUsingTor(reactor, socks_ep),
pool=pool,
)
) | This returns a Deferred that fires with an object that implements
:class:`twisted.web.iweb.IAgent` and is thus suitable for passing
    to ``treq`` as the ``agent=`` kwarg. Of course it can be used
    directly; see `using Twisted web client
<http://twistedmatrix.com/documents/current/web/howto/client.html>`_. If
you have a :class:`txtorcon.Tor` instance already, the preferred
API is to call :meth:`txtorcon.Tor.web_agent` on it.
:param torconfig: a :class:`txtorcon.TorConfig` instance.
:param socks_config: anything valid for Tor's ``SocksPort``
option. This is generally just a TCP port (e.g. ``9050``), but
can also be a unix path like so ``unix:/path/to/socket`` (Tor
has restrictions on the ownership/permissions of the directory
containing ``socket``). If the given SOCKS option is not
already available in the underlying Tor instance, it is
re-configured to add the SOCKS option. | Below is the the instruction that describes the task:
### Input:
This returns a Deferred that fires with an object that implements
:class:`twisted.web.iweb.IAgent` and is thus suitable for passing
to ``treq`` as the ``agent=`` kwarg. Of course it can be used
directly; see `using Twisted web client
<http://twistedmatrix.com/documents/current/web/howto/client.html>`_. If
you have a :class:`txtorcon.Tor` instance already, the preferred
API is to call :meth:`txtorcon.Tor.web_agent` on it.
:param torconfig: a :class:`txtorcon.TorConfig` instance.
:param socks_config: anything valid for Tor's ``SocksPort``
option. This is generally just a TCP port (e.g. ``9050``), but
can also be a unix path like so ``unix:/path/to/socket`` (Tor
has restrictions on the ownership/permissions of the directory
containing ``socket``). If the given SOCKS option is not
already available in the underlying Tor instance, it is
re-configured to add the SOCKS option.
### Response:
def agent_for_socks_port(reactor, torconfig, socks_config, pool=None):
"""
This returns a Deferred that fires with an object that implements
:class:`twisted.web.iweb.IAgent` and is thus suitable for passing
    to ``treq`` as the ``agent=`` kwarg. Of course it can be used
    directly; see `using Twisted web client
<http://twistedmatrix.com/documents/current/web/howto/client.html>`_. If
you have a :class:`txtorcon.Tor` instance already, the preferred
API is to call :meth:`txtorcon.Tor.web_agent` on it.
:param torconfig: a :class:`txtorcon.TorConfig` instance.
:param socks_config: anything valid for Tor's ``SocksPort``
option. This is generally just a TCP port (e.g. ``9050``), but
can also be a unix path like so ``unix:/path/to/socket`` (Tor
has restrictions on the ownership/permissions of the directory
containing ``socket``). If the given SOCKS option is not
already available in the underlying Tor instance, it is
re-configured to add the SOCKS option.
"""
# :param tls: True (the default) will use Twisted's default options
# with the hostname in the URI -- that is, TLS verification
# similar to a Browser. Otherwise, you can pass whatever Twisted
# returns for `optionsForClientTLS
# <https://twistedmatrix.com/documents/current/api/twisted.internet.ssl.optionsForClientTLS.html>`_
socks_config = str(socks_config) # sadly, all lists are lists-of-strings to Tor :/
if socks_config not in torconfig.SocksPort:
txtorlog.msg("Adding SOCKS port '{}' to Tor".format(socks_config))
torconfig.SocksPort.append(socks_config)
try:
yield torconfig.save()
except Exception as e:
raise RuntimeError(
"Failed to reconfigure Tor with SOCKS port '{}': {}".format(
socks_config, str(e)
)
)
if socks_config.startswith('unix:'):
socks_ep = UNIXClientEndpoint(reactor, socks_config[5:])
else:
if ':' in socks_config:
host, port = socks_config.split(':', 1)
else:
host = '127.0.0.1'
port = int(socks_config)
socks_ep = TCP4ClientEndpoint(reactor, host, port)
returnValue(
Agent.usingEndpointFactory(
reactor,
_AgentEndpointFactoryUsingTor(reactor, socks_ep),
pool=pool,
)
) |
def set_input_divide_by_period(holder, period, array):
"""
This function can be declared as a ``set_input`` attribute of a variable.
    In this case, the variable will accept inputs on larger periods than its definition period, and the value for the larger period will be divided between its subperiods.
To read more about ``set_input`` attributes, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#set-input-automatically-process-variable-inputs-defined-for-periods-not-matching-the-definition-period>`_.
"""
if not isinstance(array, np.ndarray):
array = np.array(array)
period_size = period.size
period_unit = period.unit
if holder.variable.definition_period == MONTH:
cached_period_unit = periods.MONTH
elif holder.variable.definition_period == YEAR:
cached_period_unit = periods.YEAR
else:
raise ValueError('set_input_divide_by_period can be used only for yearly or monthly variables.')
after_instant = period.start.offset(period_size, period_unit)
# Count the number of elementary periods to change, and the difference with what is already known.
remaining_array = array.copy()
sub_period = period.start.period(cached_period_unit)
sub_periods_count = 0
while sub_period.start < after_instant:
existing_array = holder.get_array(sub_period)
if existing_array is not None:
remaining_array -= existing_array
else:
sub_periods_count += 1
sub_period = sub_period.offset(1)
# Cache the input data
if sub_periods_count > 0:
divided_array = remaining_array / sub_periods_count
sub_period = period.start.period(cached_period_unit)
while sub_period.start < after_instant:
if holder.get_array(sub_period) is None:
holder._set(sub_period, divided_array)
sub_period = sub_period.offset(1)
elif not (remaining_array == 0).all():
raise ValueError("Inconsistent input: variable {0} has already been set for all months contained in period {1}, and value {2} provided for {1} doesn't match the total ({3}). This error may also be thrown if you try to call set_input twice for the same variable and period.".format(holder.variable.name, period, array, array - remaining_array)) | This function can be declared as a ``set_input`` attribute of a variable.
In this case, the variable will accept inputs on larger periods than its definition period, and the value for the larger period will be divided between its subperiods.
To read more about ``set_input`` attributes, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#set-input-automatically-process-variable-inputs-defined-for-periods-not-matching-the-definition-period>`_. | Below is the the instruction that describes the task:
### Input:
This function can be declared as a ``set_input`` attribute of a variable.
In this case, the variable will accept inputs on larger periods that its definition period, and the value for the larger period will be divided between its subperiods.
To read more about ``set_input`` attributes, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#set-input-automatically-process-variable-inputs-defined-for-periods-not-matching-the-definition-period>`_.
### Response:
def set_input_divide_by_period(holder, period, array):
"""
This function can be declared as a ``set_input`` attribute of a variable.
    In this case, the variable will accept inputs on larger periods than its definition period, and the value for the larger period will be divided between its subperiods.
To read more about ``set_input`` attributes, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#set-input-automatically-process-variable-inputs-defined-for-periods-not-matching-the-definition-period>`_.
"""
if not isinstance(array, np.ndarray):
array = np.array(array)
period_size = period.size
period_unit = period.unit
if holder.variable.definition_period == MONTH:
cached_period_unit = periods.MONTH
elif holder.variable.definition_period == YEAR:
cached_period_unit = periods.YEAR
else:
raise ValueError('set_input_divide_by_period can be used only for yearly or monthly variables.')
after_instant = period.start.offset(period_size, period_unit)
# Count the number of elementary periods to change, and the difference with what is already known.
remaining_array = array.copy()
sub_period = period.start.period(cached_period_unit)
sub_periods_count = 0
while sub_period.start < after_instant:
existing_array = holder.get_array(sub_period)
if existing_array is not None:
remaining_array -= existing_array
else:
sub_periods_count += 1
sub_period = sub_period.offset(1)
# Cache the input data
if sub_periods_count > 0:
divided_array = remaining_array / sub_periods_count
sub_period = period.start.period(cached_period_unit)
while sub_period.start < after_instant:
if holder.get_array(sub_period) is None:
holder._set(sub_period, divided_array)
sub_period = sub_period.offset(1)
elif not (remaining_array == 0).all():
raise ValueError("Inconsistent input: variable {0} has already been set for all months contained in period {1}, and value {2} provided for {1} doesn't match the total ({3}). This error may also be thrown if you try to call set_input twice for the same variable and period.".format(holder.variable.name, period, array, array - remaining_array)) |
def breadcrumb(self):
""" Get the category hierarchy leading up to this category, including
root and self.
For example, path/to/long/category will return a list containing
Category('path'), Category('path/to'), and Category('path/to/long').
"""
ret = []
here = self
while here:
ret.append(here)
here = here.parent
return list(reversed(ret)) | Get the category hierarchy leading up to this category, including
root and self.
For example, path/to/long/category will return a list containing
Category('path'), Category('path/to'), and Category('path/to/long'). | Below is the the instruction that describes the task:
### Input:
Get the category hierarchy leading up to this category, including
root and self.
For example, path/to/long/category will return a list containing
Category('path'), Category('path/to'), and Category('path/to/long').
### Response:
def breadcrumb(self):
""" Get the category hierarchy leading up to this category, including
root and self.
For example, path/to/long/category will return a list containing
Category('path'), Category('path/to'), and Category('path/to/long').
"""
ret = []
here = self
while here:
ret.append(here)
here = here.parent
return list(reversed(ret)) |
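A standalone illustration of the parent walk above, using a minimal stand-in class; the real Category type (and whether breadcrumb is exposed as a property) lives elsewhere in the package.
    class Cat:
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent

    leaf = Cat('path/to/long', Cat('path/to', Cat('path')))
    here, ret = leaf, []
    while here:
        ret.append(here.name)
        here = here.parent
    list(reversed(ret))   # ['path', 'path/to', 'path/to/long']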
def qteSplitApplet(self, applet: (QtmacsApplet, str)=None,
splitHoriz: bool=True,
windowObj: QtmacsWindow=None):
"""
Reveal ``applet`` by splitting the space occupied by the
current applet.
If ``applet`` is already visible then the method does
nothing. Furthermore, this method does not change the focus,
ie. the currently active applet will remain active.
If ``applet`` is **None** then the next invisible applet
will be shown. If ``windowObj`` is **None** then the
currently active window will be used.
The ``applet`` parameter can either be an instance of
``QtmacsApplet`` or a string denoting an applet ID. In the
latter case the ``qteGetAppletHandle`` method is used to fetch
the respective applet instance.
|Args|
* ``applet`` (**QtmacsApplet**, **str**): the applet to reveal.
* ``splitHoriz`` (**bool**): whether to split horizontally
or vertically.
* ``windowObj`` (**QtmacsWindow**): the window in which to
reveal ``applet``.
|Returns|
* **bool**: if **True**, ``applet`` was revealed.
|Raises|
* **QtmacsArgumentError** if at least one argument has an invalid type.
"""
# If ``newAppObj`` was specified by its ID (ie. a string) then
# fetch the associated ``QtmacsApplet`` instance. If
# ``newAppObj`` is already an instance of ``QtmacsApplet``
# then use it directly.
if isinstance(applet, str):
newAppObj = self.qteGetAppletHandle(applet)
else:
newAppObj = applet
# Use the currently active window if none was specified.
if windowObj is None:
windowObj = self.qteActiveWindow()
if windowObj is None:
msg = 'Cannot determine the currently active window.'
self.qteLogger.error(msg, stack_info=True)
return
# Convert ``splitHoriz`` to the respective Qt constant.
if splitHoriz:
splitOrientation = QtCore.Qt.Horizontal
else:
splitOrientation = QtCore.Qt.Vertical
if newAppObj is None:
# If no new applet was specified use the next available
# invisible applet.
newAppObj = self.qteNextApplet(skipVisible=True,
skipInvisible=False)
else:
# Do nothing if the new applet is already visible.
if newAppObj.qteIsVisible():
return False
# If we still have not found an applet then there are no
# invisible applets left to show. Therefore, splitting makes
# no sense.
if newAppObj is None:
self.qteLogger.warning('All applets are already visible.')
return False
# If the root splitter is empty then add the new applet and
# return immediately.
if windowObj.qteAppletSplitter.count() == 0:
windowObj.qteAppletSplitter.qteAddWidget(newAppObj)
windowObj.qteAppletSplitter.setOrientation(splitOrientation)
return True
# ------------------------------------------------------------
# The root splitter contains at least one widget, if we got
# this far.
# ------------------------------------------------------------
# Shorthand to last active applet in the current window. Query
# this applet with qteNextApplet method because
# self._qteActiveApplet may be a mini applet, and we are only
# interested in genuine applets.
curApp = self.qteNextApplet(numSkip=0, windowObj=windowObj)
# Get a reference to the splitter in which the currently
# active applet lives. This may be the root splitter, or one
# of its child splitters.
split = self._qteFindAppletInSplitter(
curApp, windowObj.qteAppletSplitter)
if split is None:
msg = 'Active applet <b>{}</b> not in the layout.'
msg = msg.format(curApp.qteAppletID())
self.qteLogger.error(msg, stack_info=True)
return False
# If 'curApp' lives in the root splitter, and the root
# splitter contains only a single element, then simply add the
# new applet as the second element and return.
if split is windowObj.qteAppletSplitter:
if split.count() == 1:
split.qteAddWidget(newAppObj)
split.setOrientation(splitOrientation)
return True
# ------------------------------------------------------------
# The splitter (root or not) contains two widgets, if we got
# this far.
# ------------------------------------------------------------
# Determine the index of the applet inside the splitter.
curAppIdx = split.indexOf(curApp)
# Create a new splitter and populate it with 'curApp' and the
# previously invisible ``newAppObj``. Then insert this new splitter at
# the position where the old applet was taken from. Note: widgets are
# inserted with ``qteAddWidget`` (because they are ``QtmacsApplet``
# instances), whereas splitters are added with ``insertWidget``, NOT
# ``qteInsertWidget``. The reason is that splitters do not require the
# extra TLC necessary for applets in terms of how and where to show
# them.
newSplit = QtmacsSplitter(splitOrientation, windowObj)
curApp.setParent(None)
newSplit.qteAddWidget(curApp)
newSplit.qteAddWidget(newAppObj)
split.insertWidget(curAppIdx, newSplit)
# Adjust the size of two widgets in ``split`` (ie. ``newSplit`` and
        # whatever other widget) to take up equal space. The same adjustment is
# made for ``newSplit``, but there the ``qteAddWidget`` methods have
# already taken care of it.
split.qteAdjustWidgetSizes()
return True | Reveal ``applet`` by splitting the space occupied by the
current applet.
If ``applet`` is already visible then the method does
nothing. Furthermore, this method does not change the focus,
ie. the currently active applet will remain active.
If ``applet`` is **None** then the next invisible applet
will be shown. If ``windowObj`` is **None** then the
currently active window will be used.
The ``applet`` parameter can either be an instance of
``QtmacsApplet`` or a string denoting an applet ID. In the
latter case the ``qteGetAppletHandle`` method is used to fetch
the respective applet instance.
|Args|
* ``applet`` (**QtmacsApplet**, **str**): the applet to reveal.
* ``splitHoriz`` (**bool**): whether to split horizontally
or vertically.
* ``windowObj`` (**QtmacsWindow**): the window in which to
reveal ``applet``.
|Returns|
* **bool**: if **True**, ``applet`` was revealed.
|Raises|
* **QtmacsArgumentError** if at least one argument has an invalid type. | Below is the the instruction that describes the task:
### Input:
Reveal ``applet`` by splitting the space occupied by the
current applet.
If ``applet`` is already visible then the method does
nothing. Furthermore, this method does not change the focus,
ie. the currently active applet will remain active.
If ``applet`` is **None** then the next invisible applet
will be shown. If ``windowObj`` is **None** then the
currently active window will be used.
The ``applet`` parameter can either be an instance of
``QtmacsApplet`` or a string denoting an applet ID. In the
latter case the ``qteGetAppletHandle`` method is used to fetch
the respective applet instance.
|Args|
* ``applet`` (**QtmacsApplet**, **str**): the applet to reveal.
* ``splitHoriz`` (**bool**): whether to split horizontally
or vertically.
* ``windowObj`` (**QtmacsWindow**): the window in which to
reveal ``applet``.
|Returns|
* **bool**: if **True**, ``applet`` was revealed.
|Raises|
* **QtmacsArgumentError** if at least one argument has an invalid type.
### Response:
def qteSplitApplet(self, applet: (QtmacsApplet, str)=None,
splitHoriz: bool=True,
windowObj: QtmacsWindow=None):
"""
Reveal ``applet`` by splitting the space occupied by the
current applet.
If ``applet`` is already visible then the method does
nothing. Furthermore, this method does not change the focus,
ie. the currently active applet will remain active.
If ``applet`` is **None** then the next invisible applet
will be shown. If ``windowObj`` is **None** then the
currently active window will be used.
The ``applet`` parameter can either be an instance of
``QtmacsApplet`` or a string denoting an applet ID. In the
latter case the ``qteGetAppletHandle`` method is used to fetch
the respective applet instance.
|Args|
* ``applet`` (**QtmacsApplet**, **str**): the applet to reveal.
* ``splitHoriz`` (**bool**): whether to split horizontally
or vertically.
* ``windowObj`` (**QtmacsWindow**): the window in which to
reveal ``applet``.
|Returns|
* **bool**: if **True**, ``applet`` was revealed.
|Raises|
* **QtmacsArgumentError** if at least one argument has an invalid type.
"""
# If ``newAppObj`` was specified by its ID (ie. a string) then
# fetch the associated ``QtmacsApplet`` instance. If
# ``newAppObj`` is already an instance of ``QtmacsApplet``
# then use it directly.
if isinstance(applet, str):
newAppObj = self.qteGetAppletHandle(applet)
else:
newAppObj = applet
# Use the currently active window if none was specified.
if windowObj is None:
windowObj = self.qteActiveWindow()
if windowObj is None:
msg = 'Cannot determine the currently active window.'
self.qteLogger.error(msg, stack_info=True)
return
# Convert ``splitHoriz`` to the respective Qt constant.
if splitHoriz:
splitOrientation = QtCore.Qt.Horizontal
else:
splitOrientation = QtCore.Qt.Vertical
if newAppObj is None:
# If no new applet was specified use the next available
# invisible applet.
newAppObj = self.qteNextApplet(skipVisible=True,
skipInvisible=False)
else:
# Do nothing if the new applet is already visible.
if newAppObj.qteIsVisible():
return False
# If we still have not found an applet then there are no
# invisible applets left to show. Therefore, splitting makes
# no sense.
if newAppObj is None:
self.qteLogger.warning('All applets are already visible.')
return False
# If the root splitter is empty then add the new applet and
# return immediately.
if windowObj.qteAppletSplitter.count() == 0:
windowObj.qteAppletSplitter.qteAddWidget(newAppObj)
windowObj.qteAppletSplitter.setOrientation(splitOrientation)
return True
# ------------------------------------------------------------
# The root splitter contains at least one widget, if we got
# this far.
# ------------------------------------------------------------
# Shorthand to last active applet in the current window. Query
# this applet with qteNextApplet method because
# self._qteActiveApplet may be a mini applet, and we are only
# interested in genuine applets.
curApp = self.qteNextApplet(numSkip=0, windowObj=windowObj)
# Get a reference to the splitter in which the currently
# active applet lives. This may be the root splitter, or one
# of its child splitters.
split = self._qteFindAppletInSplitter(
curApp, windowObj.qteAppletSplitter)
if split is None:
msg = 'Active applet <b>{}</b> not in the layout.'
msg = msg.format(curApp.qteAppletID())
self.qteLogger.error(msg, stack_info=True)
return False
# If 'curApp' lives in the root splitter, and the root
# splitter contains only a single element, then simply add the
# new applet as the second element and return.
if split is windowObj.qteAppletSplitter:
if split.count() == 1:
split.qteAddWidget(newAppObj)
split.setOrientation(splitOrientation)
return True
# ------------------------------------------------------------
# The splitter (root or not) contains two widgets, if we got
# this far.
# ------------------------------------------------------------
# Determine the index of the applet inside the splitter.
curAppIdx = split.indexOf(curApp)
# Create a new splitter and populate it with 'curApp' and the
# previously invisible ``newAppObj``. Then insert this new splitter at
# the position where the old applet was taken from. Note: widgets are
# inserted with ``qteAddWidget`` (because they are ``QtmacsApplet``
# instances), whereas splitters are added with ``insertWidget``, NOT
# ``qteInsertWidget``. The reason is that splitters do not require the
# extra TLC necessary for applets in terms of how and where to show
# them.
newSplit = QtmacsSplitter(splitOrientation, windowObj)
curApp.setParent(None)
newSplit.qteAddWidget(curApp)
newSplit.qteAddWidget(newAppObj)
split.insertWidget(curAppIdx, newSplit)
# Adjust the size of two widgets in ``split`` (ie. ``newSplit`` and
        # whatever other widget) to take up equal space. The same adjustment is
# made for ``newSplit``, but there the ``qteAddWidget`` methods have
# already taken care of it.
split.qteAdjustWidgetSizes()
return True |
def first_derivative(f, **kwargs):
"""Calculate the first derivative of a grid of values.
Works for both regularly-spaced data and grids with varying spacing.
Either `x` or `delta` must be specified, or `f` must be given as an `xarray.DataArray` with
attached coordinate and projection information. If `f` is an `xarray.DataArray`, and `x` or
`delta` are given, `f` will be converted to a `pint.Quantity` and the derivative returned
as a `pint.Quantity`, otherwise, if neither `x` nor `delta` are given, the attached
coordinate information belonging to `axis` will be used and the derivative will be returned
as an `xarray.DataArray`.
This uses 3 points to calculate the derivative, using forward or backward at the edges of
the grid as appropriate, and centered elsewhere. The irregular spacing is handled
explicitly, using the formulation as specified by [Bowen2005]_.
Parameters
----------
f : array-like
Array of values of which to calculate the derivative
axis : int or str, optional
The array axis along which to take the derivative. If `f` is ndarray-like, must be an
integer. If `f` is a `DataArray`, can be a string (referring to either the coordinate
dimension name or the axis type) or integer (referring to axis number), unless using
implicit conversion to `pint.Quantity`, in which case it must be an integer. Defaults
to 0.
x : array-like, optional
The coordinate values corresponding to the grid points in `f`.
delta : array-like, optional
Spacing between the grid points in `f`. Should be one item less than the size
of `f` along `axis`.
Returns
-------
array-like
The first derivative calculated along the selected axis.
See Also
--------
second_derivative
"""
n, axis, delta = _process_deriv_args(f, kwargs)
# create slice objects --- initially all are [:, :, ..., :]
slice0 = [slice(None)] * n
slice1 = [slice(None)] * n
slice2 = [slice(None)] * n
delta_slice0 = [slice(None)] * n
delta_slice1 = [slice(None)] * n
# First handle centered case
slice0[axis] = slice(None, -2)
slice1[axis] = slice(1, -1)
slice2[axis] = slice(2, None)
delta_slice0[axis] = slice(None, -1)
delta_slice1[axis] = slice(1, None)
combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]
delta_diff = delta[tuple(delta_slice1)] - delta[tuple(delta_slice0)]
center = (- delta[tuple(delta_slice1)] / (combined_delta * delta[tuple(delta_slice0)])
* f[tuple(slice0)]
+ delta_diff / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])
* f[tuple(slice1)]
+ delta[tuple(delta_slice0)] / (combined_delta * delta[tuple(delta_slice1)])
* f[tuple(slice2)])
# Fill in "left" edge with forward difference
slice0[axis] = slice(None, 1)
slice1[axis] = slice(1, 2)
slice2[axis] = slice(2, 3)
delta_slice0[axis] = slice(None, 1)
delta_slice1[axis] = slice(1, 2)
combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]
big_delta = combined_delta + delta[tuple(delta_slice0)]
left = (- big_delta / (combined_delta * delta[tuple(delta_slice0)])
* f[tuple(slice0)]
+ combined_delta / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])
* f[tuple(slice1)]
- delta[tuple(delta_slice0)] / (combined_delta * delta[tuple(delta_slice1)])
* f[tuple(slice2)])
# Now the "right" edge with backward difference
slice0[axis] = slice(-3, -2)
slice1[axis] = slice(-2, -1)
slice2[axis] = slice(-1, None)
delta_slice0[axis] = slice(-2, -1)
delta_slice1[axis] = slice(-1, None)
combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]
big_delta = combined_delta + delta[tuple(delta_slice1)]
right = (delta[tuple(delta_slice1)] / (combined_delta * delta[tuple(delta_slice0)])
* f[tuple(slice0)]
- combined_delta / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])
* f[tuple(slice1)]
+ big_delta / (combined_delta * delta[tuple(delta_slice1)])
* f[tuple(slice2)])
return concatenate((left, center, right), axis=axis) | Calculate the first derivative of a grid of values.
Works for both regularly-spaced data and grids with varying spacing.
Either `x` or `delta` must be specified, or `f` must be given as an `xarray.DataArray` with
attached coordinate and projection information. If `f` is an `xarray.DataArray` and `x` or
`delta` is given, `f` will be converted to a `pint.Quantity` and the derivative returned
as a `pint.Quantity`; otherwise, if neither `x` nor `delta` is given, the attached
coordinate information belonging to `axis` will be used and the derivative will be returned
as an `xarray.DataArray`.
This uses 3 points to calculate the derivative, using forward or backward at the edges of
the grid as appropriate, and centered elsewhere. The irregular spacing is handled
explicitly, using the formulation as specified by [Bowen2005]_.
Parameters
----------
f : array-like
Array of values of which to calculate the derivative
axis : int or str, optional
The array axis along which to take the derivative. If `f` is ndarray-like, must be an
integer. If `f` is a `DataArray`, can be a string (referring to either the coordinate
dimension name or the axis type) or integer (referring to axis number), unless using
implicit conversion to `pint.Quantity`, in which case it must be an integer. Defaults
to 0.
x : array-like, optional
The coordinate values corresponding to the grid points in `f`.
delta : array-like, optional
Spacing between the grid points in `f`. Should be one item less than the size
of `f` along `axis`.
Returns
-------
array-like
The first derivative calculated along the selected axis.
See Also
--------
second_derivative | Below is the the instruction that describes the task:
### Input:
Calculate the first derivative of a grid of values.
Works for both regularly-spaced data and grids with varying spacing.
Either `x` or `delta` must be specified, or `f` must be given as an `xarray.DataArray` with
attached coordinate and projection information. If `f` is an `xarray.DataArray` and `x` or
`delta` is given, `f` will be converted to a `pint.Quantity` and the derivative returned
as a `pint.Quantity`; otherwise, if neither `x` nor `delta` is given, the attached
coordinate information belonging to `axis` will be used and the derivative will be returned
as an `xarray.DataArray`.
This uses 3 points to calculate the derivative, using forward or backward at the edges of
the grid as appropriate, and centered elsewhere. The irregular spacing is handled
explicitly, using the formulation as specified by [Bowen2005]_.
Parameters
----------
f : array-like
Array of values of which to calculate the derivative
axis : int or str, optional
The array axis along which to take the derivative. If `f` is ndarray-like, must be an
integer. If `f` is a `DataArray`, can be a string (referring to either the coordinate
dimension name or the axis type) or integer (referring to axis number), unless using
implicit conversion to `pint.Quantity`, in which case it must be an integer. Defaults
to 0.
x : array-like, optional
The coordinate values corresponding to the grid points in `f`.
delta : array-like, optional
Spacing between the grid points in `f`. Should be one item less than the size
of `f` along `axis`.
Returns
-------
array-like
The first derivative calculated along the selected axis.
See Also
--------
second_derivative
### Response:
def first_derivative(f, **kwargs):
"""Calculate the first derivative of a grid of values.
Works for both regularly-spaced data and grids with varying spacing.
Either `x` or `delta` must be specified, or `f` must be given as an `xarray.DataArray` with
    attached coordinate and projection information. If `f` is an `xarray.DataArray` and `x` or
    `delta` is given, `f` will be converted to a `pint.Quantity` and the derivative returned
    as a `pint.Quantity`; otherwise, if neither `x` nor `delta` is given, the attached
coordinate information belonging to `axis` will be used and the derivative will be returned
as an `xarray.DataArray`.
This uses 3 points to calculate the derivative, using forward or backward at the edges of
the grid as appropriate, and centered elsewhere. The irregular spacing is handled
explicitly, using the formulation as specified by [Bowen2005]_.
Parameters
----------
f : array-like
Array of values of which to calculate the derivative
axis : int or str, optional
The array axis along which to take the derivative. If `f` is ndarray-like, must be an
integer. If `f` is a `DataArray`, can be a string (referring to either the coordinate
dimension name or the axis type) or integer (referring to axis number), unless using
implicit conversion to `pint.Quantity`, in which case it must be an integer. Defaults
to 0.
x : array-like, optional
The coordinate values corresponding to the grid points in `f`.
delta : array-like, optional
Spacing between the grid points in `f`. Should be one item less than the size
of `f` along `axis`.
Returns
-------
array-like
The first derivative calculated along the selected axis.
See Also
--------
second_derivative
"""
n, axis, delta = _process_deriv_args(f, kwargs)
# create slice objects --- initially all are [:, :, ..., :]
slice0 = [slice(None)] * n
slice1 = [slice(None)] * n
slice2 = [slice(None)] * n
delta_slice0 = [slice(None)] * n
delta_slice1 = [slice(None)] * n
# First handle centered case
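    # Unevenly spaced 3-point centered stencil, with d0 = x[i] - x[i-1] and d1 = x[i+1] - x[i]:
    # f'(x_i) ≈ -d1/(d0*(d0+d1)) * f[i-1] + (d1-d0)/(d0*d1) * f[i] + d0/(d1*(d0+d1)) * f[i+1]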
slice0[axis] = slice(None, -2)
slice1[axis] = slice(1, -1)
slice2[axis] = slice(2, None)
delta_slice0[axis] = slice(None, -1)
delta_slice1[axis] = slice(1, None)
combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]
delta_diff = delta[tuple(delta_slice1)] - delta[tuple(delta_slice0)]
center = (- delta[tuple(delta_slice1)] / (combined_delta * delta[tuple(delta_slice0)])
* f[tuple(slice0)]
+ delta_diff / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])
* f[tuple(slice1)]
+ delta[tuple(delta_slice0)] / (combined_delta * delta[tuple(delta_slice1)])
* f[tuple(slice2)])
# Fill in "left" edge with forward difference
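    # One-sided forward stencil at x_0 built from the first three points:
    # f'(x_0) ≈ -(2*d0+d1)/(d0*(d0+d1)) * f[0] + (d0+d1)/(d0*d1) * f[1] - d0/(d1*(d0+d1)) * f[2]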
slice0[axis] = slice(None, 1)
slice1[axis] = slice(1, 2)
slice2[axis] = slice(2, 3)
delta_slice0[axis] = slice(None, 1)
delta_slice1[axis] = slice(1, 2)
combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]
big_delta = combined_delta + delta[tuple(delta_slice0)]
left = (- big_delta / (combined_delta * delta[tuple(delta_slice0)])
* f[tuple(slice0)]
+ combined_delta / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])
* f[tuple(slice1)]
- delta[tuple(delta_slice0)] / (combined_delta * delta[tuple(delta_slice1)])
* f[tuple(slice2)])
# Now the "right" edge with backward difference
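    # One-sided backward stencil at x_n built from the last three points:
    # f'(x_n) ≈ d1/(d0*(d0+d1)) * f[n-2] - (d0+d1)/(d0*d1) * f[n-1] + (d0+2*d1)/(d1*(d0+d1)) * f[n]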
slice0[axis] = slice(-3, -2)
slice1[axis] = slice(-2, -1)
slice2[axis] = slice(-1, None)
delta_slice0[axis] = slice(-2, -1)
delta_slice1[axis] = slice(-1, None)
combined_delta = delta[tuple(delta_slice0)] + delta[tuple(delta_slice1)]
big_delta = combined_delta + delta[tuple(delta_slice1)]
right = (delta[tuple(delta_slice1)] / (combined_delta * delta[tuple(delta_slice0)])
* f[tuple(slice0)]
- combined_delta / (delta[tuple(delta_slice0)] * delta[tuple(delta_slice1)])
* f[tuple(slice1)]
+ big_delta / (combined_delta * delta[tuple(delta_slice1)])
* f[tuple(slice2)])
return concatenate((left, center, right), axis=axis) |
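A quick sanity check of the unevenly spaced stencil above (an editorial sketch, not part of the original function): the 3-point formula is exact for quadratics, so applying the `center` coefficients by hand to f(x) = x**2 must reproduce f'(x) = 2*x at the middle point. Only NumPy is assumed; the sketch does not call `first_derivative` itself, since `_process_deriv_args` and `concatenate` are module internals not shown here.

import numpy as np

x = np.array([0.0, 0.7, 1.8])      # three unevenly spaced points
f = x ** 2                         # f(x) = x**2, so f'(x) = 2*x
d0, d1 = np.diff(x)                # spacings: d0 = x1 - x0, d1 = x2 - x1
# Same coefficients as the `center` expression in the function body
fp_center = (-d1 / (d0 * (d0 + d1)) * f[0]
             + (d1 - d0) / (d0 * d1) * f[1]
             + d0 / (d1 * (d0 + d1)) * f[2])
print(fp_center, 2 * x[1])         # both ~1.4, up to floating-point rounding

Calling the function itself would, per its docstring, look like `first_derivative(f, x=x, axis=0)`, with the one-sided stencils applied only at the two edge points of the chosen axis.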