code | docstring | text
---|---|---|
def emulate_wheel(self, data, direction, timeval):
"""Emulate rel values for the mouse wheel.
In evdev, a single click forwards of the mouse wheel is 1 and
a click back is -1. Windows uses 120 and -120. We floor divide
the Windows number by 120. This is fine for the digital scroll
wheels found on the vast majority of mice. It also works on
the analogue ball on the top of the Apple mouse.
What do the analogue scroll wheels found on 200 quid high end
gaming mice do? If the lowest unit is 120 then we are okay. If
they report changes of less than 120 units on Windows, then this
might be an unacceptable loss of precision. Needless to say, I
don't have such a mouse to test one way or the other.
"""
if direction == 'x':
code = 0x06
elif direction == 'z':
# Not entirely sure if this exists
code = 0x07
else:
code = 0x08
if WIN:
data = data // 120
return self.create_event_object(
"Relative",
code,
data,
timeval) | Emulate rel values for the mouse wheel.
In evdev, a single click forwards of the mouse wheel is 1 and
a click back is -1. Windows uses 120 and -120. We floor divide
the Windows number by 120. This is fine for the digital scroll
wheels found on the vast majority of mice. It also works on
the analogue ball on the top of the Apple mouse.
What do the analogue scroll wheels found on 200 quid high end
gaming mice do? If the lowest unit is 120 then we are okay. If
they report changes of less than 120 units on Windows, then this
might be an unacceptable loss of precision. Needless to say, I
don't have such a mouse to test one way or the other. | Below is the instruction that describes the task:
### Input:
Emulate rel values for the mouse wheel.
In evdev, a single click forwards of the mouse wheel is 1 and
a click back is -1. Windows uses 120 and -120. We floor divide
the Windows number by 120. This is fine for the digital scroll
wheels found on the vast majority of mice. It also works on
the analogue ball on the top of the Apple mouse.
What do the analogue scroll wheels found on 200 quid high end
gaming mice do? If the lowest unit is 120 then we are okay. If
they report changes of less than 120 units on Windows, then this
might be an unacceptable loss of precision. Needless to say, I
don't have such a mouse to test one way or the other.
### Response:
def emulate_wheel(self, data, direction, timeval):
"""Emulate rel values for the mouse wheel.
In evdev, a single click forwards of the mouse wheel is 1 and
a click back is -1. Windows uses 120 and -120. We floor divide
the Windows number by 120. This is fine for the digital scroll
wheels found on the vast majority of mice. It also works on
the analogue ball on the top of the Apple mouse.
What do the analogue scroll wheels found on 200 quid high end
gaming mice do? If the lowest unit is 120 then we are okay. If
they report changes of less than 120 units on Windows, then this
might be an unacceptable loss of precision. Needless to say, I
don't have such a mouse to test one way or the other.
"""
if direction == 'x':
code = 0x06
elif direction == 'z':
# Not entirely sure if this exists
code = 0x07
else:
code = 0x08
if WIN:
data = data // 120
return self.create_event_object(
"Relative",
code,
data,
timeval) |
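The 120-unit conversion described in the docstring is easy to sanity-check on its own. A minimal sketch (values made up, independent of the class above) of how floor division maps Windows wheel deltas onto evdev-style counts, including the rounding behaviour for hypothetical sub-120 deltas:

for raw in (120, -120, 240, -360, 60, -60):
    print(raw, '->', raw // 120)
# 120 -> 1, -120 -> -1, 240 -> 2, -360 -> -3, 60 -> 0, -60 -> -1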
def populateBufferContextMenu(self, parentMenu):
"""Populates the editing buffer context menu.
The buffer context menu shown for the current edited/viewed file
will have an item with a plugin name and subitems which are
populated here. If no items were populated then the plugin menu
item will not be shown.
Note: when a buffer context menu is selected by the user it always
refers to the current widget. To get access to the current
editing widget the plugin can use: self.ide.currentEditorWidget
The widget could be of different types and some circumstances
should be considered, e.g.:
- it could be a new file which has not been saved yet
- it could be modified
- it could be that the disk file has already been deleted
- etc.
Having the current widget reference, the plugin is able to
retrieve the information it needs.
"""
parentMenu.addAction("Configure", self.configure)
parentMenu.addAction("Collect garbage", self.__collectGarbage) | Populates the editing buffer context menu.
The buffer context menu shown for the current edited/viewed file
will have an item with a plugin name and subitems which are
populated here. If no items were populated then the plugin menu
item will not be shown.
Note: when a buffer context menu is selected by the user it always
refers to the current widget. To get access to the current
editing widget the plugin can use: self.ide.currentEditorWidget
The widget could be of different types and some circumstances
should be considered, e.g.:
- it could be a new file which has not been saved yet
- it could be modified
- it could be that the disk file has already been deleted
- etc.
Having the current widget reference, the plugin is able to
retrieve the information it needs. | Below is the instruction that describes the task:
### Input:
Populates the editing buffer context menu.
The buffer context menu shown for the current edited/viewed file
will have an item with a plugin name and subitems which are
populated here. If no items were populated then the plugin menu
item will not be shown.
Note: when a buffer context menu is selected by the user it always
refers to the current widget. To get access to the current
editing widget the plugin can use: self.ide.currentEditorWidget
The widget could be of different types and some circumstances
should be considered, e.g.:
- it could be a new file which has not been saved yet
- it could be modified
- it could be that the disk file has already been deleted
- etc.
Having the current widget reference, the plugin is able to
retrieve the information it needs.
### Response:
def populateBufferContextMenu(self, parentMenu):
"""Populates the editing buffer context menu.
The buffer context menu shown for the current edited/viewed file
will have an item with a plugin name and subitems which are
populated here. If no items were populated then the plugin menu
item will not be shown.
Note: when a buffer context menu is selected by the user it always
refers to the current widget. To get access to the current
editing widget the plugin can use: self.ide.currentEditorWidget
The widget could be of different types and some circumstances
should be considered, e.g.:
- it could be a new file which has not been saved yet
- it could be modified
- it could be that the disk file has already been deleted
- etc.
Having the current widget reference, the plugin is able to
retrieve the information it needs.
"""
parentMenu.addAction("Configure", self.configure)
parentMenu.addAction("Collect garbage", self.__collectGarbage) |
def mark_data_dirty(self):
""" Called from item to indicate its data or metadata has changed."""
self.__cache.set_cached_value_dirty(self.__display_item, self.__cache_property_name)
self.__initialize_cache()
self.__cached_value_dirty = True | Called from item to indicate its data or metadata has changed. | Below is the instruction that describes the task:
### Input:
Called from item to indicate its data or metadata has changed.
### Response:
def mark_data_dirty(self):
""" Called from item to indicate its data or metadata has changed."""
self.__cache.set_cached_value_dirty(self.__display_item, self.__cache_property_name)
self.__initialize_cache()
self.__cached_value_dirty = True |
def fit(self,Params0=None,grad_threshold=1e-2):
"""
fit a variance component model with the predefined design and the initialization and return all the results
"""
# GPVD initialization
lik = limix.CLikNormalNULL()
# Initial Params
if Params0 is None:
n_params = self.C1.getNumberParams()
n_params+= self.C2.getNumberParams()
Params0 = SP.rand(n_params)
# MultiGP Framework
covar = []
gp = []
mean = []
for i in range(self.N):
covar.append(limix.CLinCombCF())
covar[i].addCovariance(self.C1)
covar[i].addCovariance(self.C2)
coeff = SP.array([self.eigen[i],1])
covar[i].setCoeff(coeff)
mean.append(limix.CLinearMean(self.Yt[:,i],SP.eye(self.P)))
gpVD = limix.CGPvarDecomp(covar[0],lik,mean[0],SP.ones(self.N),self.P,self.Yt,Params0)
for i in range(self.N):
gp.append(limix.CGPbase(covar[i],lik,mean[i]))
gp[i].setY(self.Yt[:,i])
gpVD.addGP(gp[i])
# Optimization
gpVD.initGPs()
gpopt = limix.CGPopt(gpVD)
LML0=-1.0*gpVD.LML()
start_time = time.time()
conv = gpopt.opt()
time_train = time.time() - start_time
LML=-1.0*gpVD.LML()
LMLgrad = SP.linalg.norm(gpVD.LMLgrad()['covar'])
Params = gpVD.getParams()['covar']
# Check whether limix::CVarianceDecomposition.train() has converged
if conv!=True or LMLgrad>grad_threshold or Params.max()>10:
print('limix::CVarianceDecomposition::train has not converged')
res=None
else:
res = {
'Params0': Params0,
'Params': Params,
'LML': SP.array([LML]),
'LML0': SP.array([LML0]),
'LMLgrad': SP.array([LMLgrad]),
'time_train': SP.array([time_train]),
}
return res
pass | fit a variance component model with the predefined design and the initialization and return all the results | Below is the instruction that describes the task:
### Input:
fit a variance component model with the predefined design and the initialization and return all the results
### Response:
def fit(self,Params0=None,grad_threshold=1e-2):
"""
fit a variance component model with the predefined design and the initialization and return all the results
"""
# GPVD initialization
lik = limix.CLikNormalNULL()
# Initial Params
if Params0 is None:
n_params = self.C1.getNumberParams()
n_params+= self.C2.getNumberParams()
Params0 = SP.rand(n_params)
# MultiGP Framework
covar = []
gp = []
mean = []
for i in range(self.N):
covar.append(limix.CLinCombCF())
covar[i].addCovariance(self.C1)
covar[i].addCovariance(self.C2)
coeff = SP.array([self.eigen[i],1])
covar[i].setCoeff(coeff)
mean.append(limix.CLinearMean(self.Yt[:,i],SP.eye(self.P)))
gpVD = limix.CGPvarDecomp(covar[0],lik,mean[0],SP.ones(self.N),self.P,self.Yt,Params0)
for i in range(self.N):
gp.append(limix.CGPbase(covar[i],lik,mean[i]))
gp[i].setY(self.Yt[:,i])
gpVD.addGP(gp[i])
# Optimization
gpVD.initGPs()
gpopt = limix.CGPopt(gpVD)
LML0=-1.0*gpVD.LML()
start_time = time.time()
conv = gpopt.opt()
time_train = time.time() - start_time
LML=-1.0*gpVD.LML()
LMLgrad = SP.linalg.norm(gpVD.LMLgrad()['covar'])
Params = gpVD.getParams()['covar']
# Check whether limix::CVarianceDecomposition.train() has converged
if conv!=True or LMLgrad>grad_threshold or Params.max()>10:
print('limix::CVarianceDecomposition::train has not converged')
res=None
else:
res = {
'Params0': Params0,
'Params': Params,
'LML': SP.array([LML]),
'LML0': SP.array([LML0]),
'LMLgrad': SP.array([LMLgrad]),
'time_train': SP.array([time_train]),
}
return res
pass |
def get_k8s_metadata():
"""Get kubernetes container metadata, as on GCP GKE."""
k8s_metadata = {}
gcp_cluster = (gcp_metadata_config.GcpMetadataConfig
.get_attribute(gcp_metadata_config.CLUSTER_NAME_KEY))
if gcp_cluster is not None:
k8s_metadata[CLUSTER_NAME_KEY] = gcp_cluster
for attribute_key, attribute_env in _K8S_ENV_ATTRIBUTES.items():
attribute_value = os.environ.get(attribute_env)
if attribute_value is not None:
k8s_metadata[attribute_key] = attribute_value
return k8s_metadata | Get kubernetes container metadata, as on GCP GKE. | Below is the instruction that describes the task:
### Input:
Get kubernetes container metadata, as on GCP GKE.
### Response:
def get_k8s_metadata():
"""Get kubernetes container metadata, as on GCP GKE."""
k8s_metadata = {}
gcp_cluster = (gcp_metadata_config.GcpMetadataConfig
.get_attribute(gcp_metadata_config.CLUSTER_NAME_KEY))
if gcp_cluster is not None:
k8s_metadata[CLUSTER_NAME_KEY] = gcp_cluster
for attribute_key, attribute_env in _K8S_ENV_ATTRIBUTES.items():
attribute_value = os.environ.get(attribute_env)
if attribute_value is not None:
k8s_metadata[attribute_key] = attribute_value
return k8s_metadata |
def init0(self, dae):
"""Set initial p and q for power flow"""
self.p0 = matrix(self.p, (self.n, 1), 'd')
self.q0 = matrix(self.q, (self.n, 1), 'd') | Set initial p and q for power flow | Below is the instruction that describes the task:
### Input:
Set initial p and q for power flow
### Response:
def init0(self, dae):
"""Set initial p and q for power flow"""
self.p0 = matrix(self.p, (self.n, 1), 'd')
self.q0 = matrix(self.q, (self.n, 1), 'd') |
def click(self):
"""
Pretends the user clicked it, sending the signal and everything.
"""
if self.is_checkable():
if self.is_checked(): self.set_checked(False)
else: self.set_checked(True)
self.signal_clicked.emit(self.is_checked())
else:
self.signal_clicked.emit(True)
return self | Pretends the user clicked it, sending the signal and everything. | Below is the instruction that describes the task:
### Input:
Pretends the user clicked it, sending the signal and everything.
### Response:
def click(self):
"""
Pretends the user clicked it, sending the signal and everything.
"""
if self.is_checkable():
if self.is_checked(): self.set_checked(False)
else: self.set_checked(True)
self.signal_clicked.emit(self.is_checked())
else:
self.signal_clicked.emit(True)
return self |
def dump(self,indent='',depth=0):
"""Diagnostic method for listing out the contents of a C{ParseResults}.
Accepts an optional C{indent} argument so that this string can be embedded
in a nested display of other data."""
out = []
out.append( indent+_ustr(self.asList()) )
keys = self.items()
keys.sort()
for k,v in keys:
if out:
out.append('\n')
out.append( "%s%s- %s: " % (indent,(' '*depth), k) )
if isinstance(v,ParseResults):
if v.keys():
out.append( v.dump(indent,depth+1) )
else:
out.append(_ustr(v))
else:
out.append(_ustr(v))
return "".join(out) | Diagnostic method for listing out the contents of a C{ParseResults}.
Accepts an optional C{indent} argument so that this string can be embedded
in a nested display of other data. | Below is the instruction that describes the task:
### Input:
Diagnostic method for listing out the contents of a C{ParseResults}.
Accepts an optional C{indent} argument so that this string can be embedded
in a nested display of other data.
### Response:
def dump(self,indent='',depth=0):
"""Diagnostic method for listing out the contents of a C{ParseResults}.
Accepts an optional C{indent} argument so that this string can be embedded
in a nested display of other data."""
out = []
out.append( indent+_ustr(self.asList()) )
keys = self.items()
keys.sort()
for k,v in keys:
if out:
out.append('\n')
out.append( "%s%s- %s: " % (indent,(' '*depth), k) )
if isinstance(v,ParseResults):
if v.keys():
out.append( v.dump(indent,depth+1) )
else:
out.append(_ustr(v))
else:
out.append(_ustr(v))
return "".join(out) |
def get_controller_list(self):
"""
Returns an iterable of tuples containing (index, controller_name) pairs.
Controller indexes start at 0.
You may easily transform this to a {name: index} mapping by using:
>>> controllers = {name: index for index, name in raildriver.get_controller_list()}
:return enumerate
"""
ret_str = self.dll.GetControllerList().decode()
if not ret_str:
return []
return enumerate(ret_str.split('::')) | Returns an iterable of tuples containing (index, controller_name) pairs.
Controller indexes start at 0.
You may easily transform this to a {name: index} mapping by using:
>>> controllers = {name: index for index, name in raildriver.get_controller_list()}
:return enumerate | Below is the instruction that describes the task:
### Input:
Returns an iterable of tuples containing (index, controller_name) pairs.
Controller indexes start at 0.
You may easily transform this to a {name: index} mapping by using:
>>> controllers = {name: index for index, name in raildriver.get_controller_list()}
:return enumerate
### Response:
def get_controller_list(self):
"""
Returns an iterable of tuples containing (index, controller_name) pairs.
Controller indexes start at 0.
You may easily transform this to a {name: index} mapping by using:
>>> controllers = {name: index for index, name in raildriver.get_controller_list()}
:return enumerate
"""
ret_str = self.dll.GetControllerList().decode()
if not ret_str:
return []
return enumerate(ret_str.split('::')) |
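A hypothetical sketch of what the docstring's {name: index} recipe does with the '::'-separated string returned by the DLL (the controller names below are made up):

raw = 'SpeedometerMPH::Regulator::Reverser'   # stand-in for GetControllerList()
controllers = {name: index for index, name in enumerate(raw.split('::'))}
print(controllers)   # {'SpeedometerMPH': 0, 'Regulator': 1, 'Reverser': 2}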
def update_objective_bank(self, objective_bank_form):
"""Updates an existing objective bank.
arg: objective_bank_form (osid.learning.ObjectiveBankForm):
the form containing the elements to be updated
raise: IllegalState - ``objective_bank_form`` already used in
an update transaction
raise: InvalidArgument - the form contains an invalid value
raise: NullArgument - ``objective_bank_form`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
raise: Unsupported - ``objective_bank_form did not originate
from get_objective_bank_form_for_update()``
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.BinAdminSession.update_bin_template
if self._catalog_session is not None:
return self._catalog_session.update_catalog(catalog_form=objective_bank_form)
collection = JSONClientValidated('learning',
collection='ObjectiveBank',
runtime=self._runtime)
if not isinstance(objective_bank_form, ABCObjectiveBankForm):
raise errors.InvalidArgument('argument type is not an ObjectiveBankForm')
if not objective_bank_form.is_for_update():
raise errors.InvalidArgument('the ObjectiveBankForm is for update only, not create')
try:
if self._forms[objective_bank_form.get_id().get_identifier()] == UPDATED:
raise errors.IllegalState('objective_bank_form already used in an update transaction')
except KeyError:
raise errors.Unsupported('objective_bank_form did not originate from this session')
if not objective_bank_form.is_valid():
raise errors.InvalidArgument('one or more of the form elements is invalid')
collection.save(objective_bank_form._my_map) # save is deprecated - change to replace_one
self._forms[objective_bank_form.get_id().get_identifier()] = UPDATED
# Note: this is out of spec. The OSIDs don't require an object to be returned
return objects.ObjectiveBank(osid_object_map=objective_bank_form._my_map, runtime=self._runtime, proxy=self._proxy) | Updates an existing objective bank.
arg: objective_bank_form (osid.learning.ObjectiveBankForm):
the form containing the elements to be updated
raise: IllegalState - ``objective_bank_form`` already used in
an update transaction
raise: InvalidArgument - the form contains an invalid value
raise: NullArgument - ``objective_bank_form`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
raise: Unsupported - ``objective_bank_form did not originate
from get_objective_bank_form_for_update()``
*compliance: mandatory -- This method must be implemented.* | Below is the instruction that describes the task:
### Input:
Updates an existing objective bank.
arg: objective_bank_form (osid.learning.ObjectiveBankForm):
the form containing the elements to be updated
raise: IllegalState - ``objective_bank_form`` already used in
an update transaction
raise: InvalidArgument - the form contains an invalid value
raise: NullArgument - ``objective_bank_form`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
raise: Unsupported - ``objective_bank_form did not originate
from get_objective_bank_form_for_update()``
*compliance: mandatory -- This method must be implemented.*
### Response:
def update_objective_bank(self, objective_bank_form):
"""Updates an existing objective bank.
arg: objective_bank_form (osid.learning.ObjectiveBankForm):
the form containing the elements to be updated
raise: IllegalState - ``objective_bank_form`` already used in
an update transaction
raise: InvalidArgument - the form contains an invalid value
raise: NullArgument - ``objective_bank_form`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
raise: Unsupported - ``objective_bank_form did not originate
from get_objective_bank_form_for_update()``
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.BinAdminSession.update_bin_template
if self._catalog_session is not None:
return self._catalog_session.update_catalog(catalog_form=objective_bank_form)
collection = JSONClientValidated('learning',
collection='ObjectiveBank',
runtime=self._runtime)
if not isinstance(objective_bank_form, ABCObjectiveBankForm):
raise errors.InvalidArgument('argument type is not an ObjectiveBankForm')
if not objective_bank_form.is_for_update():
raise errors.InvalidArgument('the ObjectiveBankForm is for update only, not create')
try:
if self._forms[objective_bank_form.get_id().get_identifier()] == UPDATED:
raise errors.IllegalState('objective_bank_form already used in an update transaction')
except KeyError:
raise errors.Unsupported('objective_bank_form did not originate from this session')
if not objective_bank_form.is_valid():
raise errors.InvalidArgument('one or more of the form elements is invalid')
collection.save(objective_bank_form._my_map) # save is deprecated - change to replace_one
self._forms[objective_bank_form.get_id().get_identifier()] = UPDATED
# Note: this is out of spec. The OSIDs don't require an object to be returned
return objects.ObjectiveBank(osid_object_map=objective_bank_form._my_map, runtime=self._runtime, proxy=self._proxy) |
def get_scopes(self, vpnid='.*'):
"""Returns a list of all the scopes from CPNR server."""
request_url = self._build_url(['Scope'], vpn=vpnid)
return self._do_request('GET', request_url) | Returns a list of all the scopes from CPNR server. | Below is the instruction that describes the task:
### Input:
Returns a list of all the scopes from CPNR server.
### Response:
def get_scopes(self, vpnid='.*'):
"""Returns a list of all the scopes from CPNR server."""
request_url = self._build_url(['Scope'], vpn=vpnid)
return self._do_request('GET', request_url) |
def create_usuario(self):
"""Get an instance of usuario services facade."""
return Usuario(
self.networkapi_url,
self.user,
self.password,
self.user_ldap) | Get an instance of usuario services facade. | Below is the instruction that describes the task:
### Input:
Get an instance of usuario services facade.
### Response:
def create_usuario(self):
"""Get an instance of usuario services facade."""
return Usuario(
self.networkapi_url,
self.user,
self.password,
self.user_ldap) |
def try_date(obj) -> Optional[datetime.date]:
"""Attempts to convert an object into a date.
If the date format is known, it's recommended to use the corresponding function
This is meant to be used in constructors.
Parameters
----------
obj: :class:`str`, :class:`datetime.datetime`, :class:`datetime.date`
The object to convert.
Returns
-------
:class:`datetime.date`, optional
The represented date.
"""
if obj is None:
return None
if isinstance(obj, datetime.datetime):
return obj.date()
if isinstance(obj, datetime.date):
return obj
res = parse_tibia_date(obj)
if res is not None:
return res
res = parse_tibia_full_date(obj)
if res is not None:
return res
res = parse_tibiadata_date(obj)
return res | Attempts to convert an object into a date.
If the date format is known, it's recommended to use the corresponding function
This is meant to be used in constructors.
Parameters
----------
obj: :class:`str`, :class:`datetime.datetime`, :class:`datetime.date`
The object to convert.
Returns
-------
:class:`datetime.date`, optional
The represented date. | Below is the instruction that describes the task:
### Input:
Attempts to convert an object into a date.
If the date format is known, it's recommended to use the corresponding function
This is meant to be used in constructors.
Parameters
----------
obj: :class:`str`, :class:`datetime.datetime`, :class:`datetime.date`
The object to convert.
Returns
-------
:class:`datetime.date`, optional
The represented date.
### Response:
def try_date(obj) -> Optional[datetime.date]:
"""Attempts to convert an object into a date.
If the date format is known, it's recommended to use the corresponding function
This is meant to be used in constructors.
Parameters
----------
obj: :class:`str`, :class:`datetime.datetime`, :class:`datetime.date`
The object to convert.
Returns
-------
:class:`datetime.date`, optional
The represented date.
"""
if obj is None:
return None
if isinstance(obj, datetime.datetime):
return obj.date()
if isinstance(obj, datetime.date):
return obj
res = parse_tibia_date(obj)
if res is not None:
return res
res = parse_tibia_full_date(obj)
if res is not None:
return res
res = parse_tibiadata_date(obj)
return res |
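The order of the isinstance checks in try_date matters, because datetime.datetime is a subclass of datetime.date. A standalone sketch of just that dispatch (the Tibia-specific string parsers are omitted and stubbed out here):

import datetime

def try_date_sketch(obj):
    # datetime must be tested before date, since datetime is a date subclass
    if obj is None:
        return None
    if isinstance(obj, datetime.datetime):
        return obj.date()
    if isinstance(obj, datetime.date):
        return obj
    return None  # the real function falls back to the string parsers here

print(try_date_sketch(datetime.datetime(2019, 5, 1, 12, 30)))  # 2019-05-01
print(try_date_sketch(datetime.date(2019, 5, 1)))              # 2019-05-01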
def create_dialog_node(self,
workspace_id,
dialog_node,
description=None,
conditions=None,
parent=None,
previous_sibling=None,
output=None,
context=None,
metadata=None,
next_step=None,
title=None,
node_type=None,
event_name=None,
variable=None,
actions=None,
digress_in=None,
digress_out=None,
digress_out_slots=None,
user_label=None,
**kwargs):
"""
Create dialog node.
Create a new dialog node.
This operation is limited to 500 requests per 30 minutes. For more information,
see **Rate limiting**.
:param str workspace_id: Unique identifier of the workspace.
:param str dialog_node: The dialog node ID. This string must conform to the
following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 1024 characters.
:param str description: The description of the dialog node. This string cannot
contain carriage return, newline, or tab characters, and it must be no longer than
128 characters.
:param str conditions: The condition that will trigger the dialog node. This
string cannot contain carriage return, newline, or tab characters, and it must be
no longer than 2048 characters.
:param str parent: The ID of the parent dialog node. This property is omitted if
the dialog node has no parent.
:param str previous_sibling: The ID of the previous sibling dialog node. This
property is omitted if the dialog node has no previous sibling.
:param DialogNodeOutput output: The output of the dialog node. For more
information about how to specify dialog node output, see the
[documentation](https://cloud.ibm.com/docs/services/assistant/dialog-overview.html#dialog-overview-responses).
:param dict context: The context for the dialog node.
:param dict metadata: The metadata for the dialog node.
:param DialogNodeNextStep next_step: The next step to execute following this
dialog node.
:param str title: The alias used to identify the dialog node. This string must
conform to the following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 64 characters.
:param str node_type: How the dialog node is processed.
:param str event_name: How an `event_handler` node is processed.
:param str variable: The location in the dialog context where output is stored.
:param list[DialogNodeAction] actions: An array of objects describing any actions
to be invoked by the dialog node.
:param str digress_in: Whether this top-level dialog node can be digressed into.
:param str digress_out: Whether this dialog node can be returned to after a
digression.
:param str digress_out_slots: Whether the user can digress to top-level nodes
while filling out slots.
:param str user_label: A label that can be displayed externally to describe the
purpose of the node to users. This string must be no longer than 512 characters.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
"""
if workspace_id is None:
raise ValueError('workspace_id must be provided')
if dialog_node is None:
raise ValueError('dialog_node must be provided')
if output is not None:
output = self._convert_model(output, DialogNodeOutput)
if next_step is not None:
next_step = self._convert_model(next_step, DialogNodeNextStep)
if actions is not None:
actions = [
self._convert_model(x, DialogNodeAction) for x in actions
]
headers = {}
if 'headers' in kwargs:
headers.update(kwargs.get('headers'))
sdk_headers = get_sdk_headers('conversation', 'V1',
'create_dialog_node')
headers.update(sdk_headers)
params = {'version': self.version}
data = {
'dialog_node': dialog_node,
'description': description,
'conditions': conditions,
'parent': parent,
'previous_sibling': previous_sibling,
'output': output,
'context': context,
'metadata': metadata,
'next_step': next_step,
'title': title,
'type': node_type,
'event_name': event_name,
'variable': variable,
'actions': actions,
'digress_in': digress_in,
'digress_out': digress_out,
'digress_out_slots': digress_out_slots,
'user_label': user_label
}
url = '/v1/workspaces/{0}/dialog_nodes'.format(
*self._encode_path_vars(workspace_id))
response = self.request(
method='POST',
url=url,
headers=headers,
params=params,
json=data,
accept_json=True)
return response | Create dialog node.
Create a new dialog node.
This operation is limited to 500 requests per 30 minutes. For more information,
see **Rate limiting**.
:param str workspace_id: Unique identifier of the workspace.
:param str dialog_node: The dialog node ID. This string must conform to the
following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 1024 characters.
:param str description: The description of the dialog node. This string cannot
contain carriage return, newline, or tab characters, and it must be no longer than
128 characters.
:param str conditions: The condition that will trigger the dialog node. This
string cannot contain carriage return, newline, or tab characters, and it must be
no longer than 2048 characters.
:param str parent: The ID of the parent dialog node. This property is omitted if
the dialog node has no parent.
:param str previous_sibling: The ID of the previous sibling dialog node. This
property is omitted if the dialog node has no previous sibling.
:param DialogNodeOutput output: The output of the dialog node. For more
information about how to specify dialog node output, see the
[documentation](https://cloud.ibm.com/docs/services/assistant/dialog-overview.html#dialog-overview-responses).
:param dict context: The context for the dialog node.
:param dict metadata: The metadata for the dialog node.
:param DialogNodeNextStep next_step: The next step to execute following this
dialog node.
:param str title: The alias used to identify the dialog node. This string must
conform to the following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 64 characters.
:param str node_type: How the dialog node is processed.
:param str event_name: How an `event_handler` node is processed.
:param str variable: The location in the dialog context where output is stored.
:param list[DialogNodeAction] actions: An array of objects describing any actions
to be invoked by the dialog node.
:param str digress_in: Whether this top-level dialog node can be digressed into.
:param str digress_out: Whether this dialog node can be returned to after a
digression.
:param str digress_out_slots: Whether the user can digress to top-level nodes
while filling out slots.
:param str user_label: A label that can be displayed externally to describe the
purpose of the node to users. This string must be no longer than 512 characters.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse | Below is the instruction that describes the task:
### Input:
Create dialog node.
Create a new dialog node.
This operation is limited to 500 requests per 30 minutes. For more information,
see **Rate limiting**.
:param str workspace_id: Unique identifier of the workspace.
:param str dialog_node: The dialog node ID. This string must conform to the
following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 1024 characters.
:param str description: The description of the dialog node. This string cannot
contain carriage return, newline, or tab characters, and it must be no longer than
128 characters.
:param str conditions: The condition that will trigger the dialog node. This
string cannot contain carriage return, newline, or tab characters, and it must be
no longer than 2048 characters.
:param str parent: The ID of the parent dialog node. This property is omitted if
the dialog node has no parent.
:param str previous_sibling: The ID of the previous sibling dialog node. This
property is omitted if the dialog node has no previous sibling.
:param DialogNodeOutput output: The output of the dialog node. For more
information about how to specify dialog node output, see the
[documentation](https://cloud.ibm.com/docs/services/assistant/dialog-overview.html#dialog-overview-responses).
:param dict context: The context for the dialog node.
:param dict metadata: The metadata for the dialog node.
:param DialogNodeNextStep next_step: The next step to execute following this
dialog node.
:param str title: The alias used to identify the dialog node. This string must
conform to the following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 64 characters.
:param str node_type: How the dialog node is processed.
:param str event_name: How an `event_handler` node is processed.
:param str variable: The location in the dialog context where output is stored.
:param list[DialogNodeAction] actions: An array of objects describing any actions
to be invoked by the dialog node.
:param str digress_in: Whether this top-level dialog node can be digressed into.
:param str digress_out: Whether this dialog node can be returned to after a
digression.
:param str digress_out_slots: Whether the user can digress to top-level nodes
while filling out slots.
:param str user_label: A label that can be displayed externally to describe the
purpose of the node to users. This string must be no longer than 512 characters.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
### Response:
def create_dialog_node(self,
workspace_id,
dialog_node,
description=None,
conditions=None,
parent=None,
previous_sibling=None,
output=None,
context=None,
metadata=None,
next_step=None,
title=None,
node_type=None,
event_name=None,
variable=None,
actions=None,
digress_in=None,
digress_out=None,
digress_out_slots=None,
user_label=None,
**kwargs):
"""
Create dialog node.
Create a new dialog node.
This operation is limited to 500 requests per 30 minutes. For more information,
see **Rate limiting**.
:param str workspace_id: Unique identifier of the workspace.
:param str dialog_node: The dialog node ID. This string must conform to the
following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 1024 characters.
:param str description: The description of the dialog node. This string cannot
contain carriage return, newline, or tab characters, and it must be no longer than
128 characters.
:param str conditions: The condition that will trigger the dialog node. This
string cannot contain carriage return, newline, or tab characters, and it must be
no longer than 2048 characters.
:param str parent: The ID of the parent dialog node. This property is omitted if
the dialog node has no parent.
:param str previous_sibling: The ID of the previous sibling dialog node. This
property is omitted if the dialog node has no previous sibling.
:param DialogNodeOutput output: The output of the dialog node. For more
information about how to specify dialog node output, see the
[documentation](https://cloud.ibm.com/docs/services/assistant/dialog-overview.html#dialog-overview-responses).
:param dict context: The context for the dialog node.
:param dict metadata: The metadata for the dialog node.
:param DialogNodeNextStep next_step: The next step to execute following this
dialog node.
:param str title: The alias used to identify the dialog node. This string must
conform to the following restrictions:
- It can contain only Unicode alphanumeric, space, underscore, hyphen, and dot
characters.
- It must be no longer than 64 characters.
:param str node_type: How the dialog node is processed.
:param str event_name: How an `event_handler` node is processed.
:param str variable: The location in the dialog context where output is stored.
:param list[DialogNodeAction] actions: An array of objects describing any actions
to be invoked by the dialog node.
:param str digress_in: Whether this top-level dialog node can be digressed into.
:param str digress_out: Whether this dialog node can be returned to after a
digression.
:param str digress_out_slots: Whether the user can digress to top-level nodes
while filling out slots.
:param str user_label: A label that can be displayed externally to describe the
purpose of the node to users. This string must be no longer than 512 characters.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
"""
if workspace_id is None:
raise ValueError('workspace_id must be provided')
if dialog_node is None:
raise ValueError('dialog_node must be provided')
if output is not None:
output = self._convert_model(output, DialogNodeOutput)
if next_step is not None:
next_step = self._convert_model(next_step, DialogNodeNextStep)
if actions is not None:
actions = [
self._convert_model(x, DialogNodeAction) for x in actions
]
headers = {}
if 'headers' in kwargs:
headers.update(kwargs.get('headers'))
sdk_headers = get_sdk_headers('conversation', 'V1',
'create_dialog_node')
headers.update(sdk_headers)
params = {'version': self.version}
data = {
'dialog_node': dialog_node,
'description': description,
'conditions': conditions,
'parent': parent,
'previous_sibling': previous_sibling,
'output': output,
'context': context,
'metadata': metadata,
'next_step': next_step,
'title': title,
'type': node_type,
'event_name': event_name,
'variable': variable,
'actions': actions,
'digress_in': digress_in,
'digress_out': digress_out,
'digress_out_slots': digress_out_slots,
'user_label': user_label
}
url = '/v1/workspaces/{0}/dialog_nodes'.format(
*self._encode_path_vars(workspace_id))
response = self.request(
method='POST',
url=url,
headers=headers,
params=params,
json=data,
accept_json=True)
return response |
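A hedged usage sketch for the method above; `assistant` is assumed to be an already constructed and authenticated Watson Assistant V1 client exposing create_dialog_node, and the workspace ID is a placeholder, not a real value:

response = assistant.create_dialog_node(
    workspace_id='<your-workspace-id>',
    dialog_node='greeting',
    conditions='#hello',
    title='Greeting',
)
print(response)  # a DetailedResponse holding the created node, per the docstring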
def get_config(self, budget):
"""
Function to sample a new configuration
This function is called inside Hyperband to query a new configuration
Parameters:
-----------
budget: float
the budget for which this configuration is scheduled
returns: config
should return a valid configuration
"""
self.logger.debug('start sampling a new configuration.')
sample = None
info_dict = {}
# If no model is available, sample from prior
# also mix in a fraction of random configs
if len(self.kde_models.keys()) == 0 or np.random.rand() < self.random_fraction:
sample = self.configspace.sample_configuration()
info_dict['model_based_pick'] = False
best = np.inf
best_vector = None
if sample is None:
try:
#sample from largest budget
budget = max(self.kde_models.keys())
l = self.kde_models[budget]['good'].pdf
g = self.kde_models[budget]['bad' ].pdf
minimize_me = lambda x: max(1e-32, g(x))/max(l(x),1e-32)
kde_good = self.kde_models[budget]['good']
kde_bad = self.kde_models[budget]['bad']
for i in range(self.num_samples):
idx = np.random.randint(0, len(kde_good.data))
datum = kde_good.data[idx]
vector = []
for m,bw,t in zip(datum, kde_good.bw, self.vartypes):
bw = max(bw, self.min_bandwidth)
if t == 0:
bw = self.bw_factor*bw
try:
vector.append(sps.truncnorm.rvs(-m/bw,(1-m)/bw, loc=m, scale=bw))
except:
self.logger.warning("Truncated Normal failed for:\ndatum=%s\nbandwidth=%s\nfor entry with value %s"%(datum, kde_good.bw, m))
self.logger.warning("data in the KDE:\n%s"%kde_good.data)
else:
if np.random.rand() < (1-bw):
vector.append(int(m))
else:
vector.append(np.random.randint(t))
val = minimize_me(vector)
if not np.isfinite(val):
self.logger.warning('sampled vector: %s has EI value %s'%(vector, val))
self.logger.warning("data in the KDEs:\n%s\n%s"%(kde_good.data, kde_bad.data))
self.logger.warning("bandwidth of the KDEs:\n%s\n%s"%(kde_good.bw, kde_bad.bw))
self.logger.warning("l(x) = %s"%(l(vector)))
self.logger.warning("g(x) = %s"%(g(vector)))
# right now, this happens because a KDE does not contain all values for a categorical parameter
# this cannot be fixed with the statsmodels KDE, so for now, we are just going to evaluate this one
# if the good_kde has a finite value, i.e. there is no config with that value in the bad kde, so it shouldn't be terrible.
if np.isfinite(l(vector)):
best_vector = vector
break
if val < best:
best = val
best_vector = vector
if best_vector is None:
self.logger.debug("Sampling based optimization with %i samples failed -> using random configuration"%self.num_samples)
sample = self.configspace.sample_configuration().get_dictionary()
info_dict['model_based_pick'] = False
else:
self.logger.debug('best_vector: {}, {}, {}, {}'.format(best_vector, best, l(best_vector), g(best_vector)))
for i, hp_value in enumerate(best_vector):
if isinstance(
self.configspace.get_hyperparameter(
self.configspace.get_hyperparameter_by_idx(i)
),
ConfigSpace.hyperparameters.CategoricalHyperparameter
):
best_vector[i] = int(np.rint(best_vector[i]))
sample = ConfigSpace.Configuration(self.configspace, vector=best_vector).get_dictionary()
try:
sample = ConfigSpace.util.deactivate_inactive_hyperparameters(
configuration_space=self.configspace,
configuration=sample
)
info_dict['model_based_pick'] = True
except Exception as e:
self.logger.warning(("="*50 + "\n")*3 +\
"Error converting configuration:\n%s"%sample+\
"\n here is a traceback:" +\
traceback.format_exc())
raise(e)
except:
self.logger.warning("Sampling based optimization with %i samples failed\n %s \nUsing random configuration"%(self.num_samples, traceback.format_exc()))
sample = self.configspace.sample_configuration()
info_dict['model_based_pick'] = False
try:
sample = ConfigSpace.util.deactivate_inactive_hyperparameters(
configuration_space=self.configspace,
configuration=sample.get_dictionary()
).get_dictionary()
except Exception as e:
self.logger.warning("Error (%s) converting configuration: %s -> "
"using random configuration!",
e,
sample)
sample = self.configspace.sample_configuration().get_dictionary()
self.logger.debug('done sampling a new configuration.')
return sample, info_dict | Function to sample a new configuration
This function is called inside Hyperband to query a new configuration
Parameters:
-----------
budget: float
the budget for which this configuration is scheduled
returns: config
should return a valid configuration | Below is the instruction that describes the task:
### Input:
Function to sample a new configuration
This function is called inside Hyperband to query a new configuration
Parameters:
-----------
budget: float
the budget for which this configuration is scheduled
returns: config
should return a valid configuration
### Response:
def get_config(self, budget):
"""
Function to sample a new configuration
This function is called inside Hyperband to query a new configuration
Parameters:
-----------
budget: float
the budget for which this configuration is scheduled
returns: config
should return a valid configuration
"""
self.logger.debug('start sampling a new configuration.')
sample = None
info_dict = {}
# If no model is available, sample from prior
# also mix in a fraction of random configs
if len(self.kde_models.keys()) == 0 or np.random.rand() < self.random_fraction:
sample = self.configspace.sample_configuration()
info_dict['model_based_pick'] = False
best = np.inf
best_vector = None
if sample is None:
try:
#sample from largest budget
budget = max(self.kde_models.keys())
l = self.kde_models[budget]['good'].pdf
g = self.kde_models[budget]['bad' ].pdf
minimize_me = lambda x: max(1e-32, g(x))/max(l(x),1e-32)
kde_good = self.kde_models[budget]['good']
kde_bad = self.kde_models[budget]['bad']
for i in range(self.num_samples):
idx = np.random.randint(0, len(kde_good.data))
datum = kde_good.data[idx]
vector = []
for m,bw,t in zip(datum, kde_good.bw, self.vartypes):
bw = max(bw, self.min_bandwidth)
if t == 0:
bw = self.bw_factor*bw
try:
vector.append(sps.truncnorm.rvs(-m/bw,(1-m)/bw, loc=m, scale=bw))
except:
self.logger.warning("Truncated Normal failed for:\ndatum=%s\nbandwidth=%s\nfor entry with value %s"%(datum, kde_good.bw, m))
self.logger.warning("data in the KDE:\n%s"%kde_good.data)
else:
if np.random.rand() < (1-bw):
vector.append(int(m))
else:
vector.append(np.random.randint(t))
val = minimize_me(vector)
if not np.isfinite(val):
self.logger.warning('sampled vector: %s has EI value %s'%(vector, val))
self.logger.warning("data in the KDEs:\n%s\n%s"%(kde_good.data, kde_bad.data))
self.logger.warning("bandwidth of the KDEs:\n%s\n%s"%(kde_good.bw, kde_bad.bw))
self.logger.warning("l(x) = %s"%(l(vector)))
self.logger.warning("g(x) = %s"%(g(vector)))
# right now, this happens because a KDE does not contain all values for a categorical parameter
# this cannot be fixed with the statsmodels KDE, so for now, we are just going to evaluate this one
# if the good_kde has a finite value, i.e. there is no config with that value in the bad kde, so it shouldn't be terrible.
if np.isfinite(l(vector)):
best_vector = vector
break
if val < best:
best = val
best_vector = vector
if best_vector is None:
self.logger.debug("Sampling based optimization with %i samples failed -> using random configuration"%self.num_samples)
sample = self.configspace.sample_configuration().get_dictionary()
info_dict['model_based_pick'] = False
else:
self.logger.debug('best_vector: {}, {}, {}, {}'.format(best_vector, best, l(best_vector), g(best_vector)))
for i, hp_value in enumerate(best_vector):
if isinstance(
self.configspace.get_hyperparameter(
self.configspace.get_hyperparameter_by_idx(i)
),
ConfigSpace.hyperparameters.CategoricalHyperparameter
):
best_vector[i] = int(np.rint(best_vector[i]))
sample = ConfigSpace.Configuration(self.configspace, vector=best_vector).get_dictionary()
try:
sample = ConfigSpace.util.deactivate_inactive_hyperparameters(
configuration_space=self.configspace,
configuration=sample
)
info_dict['model_based_pick'] = True
except Exception as e:
self.logger.warning(("="*50 + "\n")*3 +\
"Error converting configuration:\n%s"%sample+\
"\n here is a traceback:" +\
traceback.format_exc())
raise(e)
except:
self.logger.warning("Sampling based optimization with %i samples failed\n %s \nUsing random configuration"%(self.num_samples, traceback.format_exc()))
sample = self.configspace.sample_configuration()
info_dict['model_based_pick'] = False
try:
sample = ConfigSpace.util.deactivate_inactive_hyperparameters(
configuration_space=self.configspace,
configuration=sample.get_dictionary()
).get_dictionary()
except Exception as e:
self.logger.warning("Error (%s) converting configuration: %s -> "
"using random configuration!",
e,
sample)
sample = self.configspace.sample_configuration().get_dictionary()
self.logger.debug('done sampling a new configuration.')
return sample, info_dict |
def prepare_xml(args, parser):
"""Prepares XML files for stripping.
This process creates a single, normalised TEI XML file for each
work.
"""
if args.source == constants.TEI_SOURCE_CBETA_GITHUB:
corpus_class = tacl.TEICorpusCBETAGitHub
else:
raise Exception('Unsupported TEI source option provided')
corpus = corpus_class(args.input, args.output)
corpus.tidy() | Prepares XML files for stripping.
This process creates a single, normalised TEI XML file for each
work. | Below is the instruction that describes the task:
### Input:
Prepares XML files for stripping.
This process creates a single, normalised TEI XML file for each
work.
### Response:
def prepare_xml(args, parser):
"""Prepares XML files for stripping.
This process creates a single, normalised TEI XML file for each
work.
"""
if args.source == constants.TEI_SOURCE_CBETA_GITHUB:
corpus_class = tacl.TEICorpusCBETAGitHub
else:
raise Exception('Unsupported TEI source option provided')
corpus = corpus_class(args.input, args.output)
corpus.tidy() |
def translate(self, date_string, keep_formatting=False, settings=None):
"""
Translate the date string to its English equivalent.
:param date_string:
A string representing date and/or time in a recognizably valid format.
:type date_string: str|unicode
:param keep_formatting:
If True, retain formatting of the date string after translation.
:type keep_formatting: bool
:return: translated date string.
"""
date_string = self._translate_numerals(date_string)
if settings.NORMALIZE:
date_string = normalize_unicode(date_string)
date_string = self._simplify(date_string, settings=settings)
dictionary = self._get_dictionary(settings)
date_string_tokens = dictionary.split(date_string, keep_formatting)
relative_translations = self._get_relative_translations(settings=settings)
for i, word in enumerate(date_string_tokens):
word = word.lower()
for pattern, replacement in relative_translations.items():
if pattern.match(word):
date_string_tokens[i] = pattern.sub(replacement, word)
else:
if word in dictionary:
date_string_tokens[i] = dictionary[word] or ''
if "in" in date_string_tokens:
date_string_tokens = self._clear_future_words(date_string_tokens)
return self._join(list(filter(bool, date_string_tokens)),
separator="" if keep_formatting else " ", settings=settings) | Translate the date string to its English equivalent.
:param date_string:
A string representing date and/or time in a recognizably valid format.
:type date_string: str|unicode
:param keep_formatting:
If True, retain formatting of the date string after translation.
:type keep_formatting: bool
:return: translated date string. | Below is the instruction that describes the task:
### Input:
Translate the date string to its English equivalent.
:param date_string:
A string representing date and/or time in a recognizably valid format.
:type date_string: str|unicode
:param keep_formatting:
If True, retain formatting of the date string after translation.
:type keep_formatting: bool
:return: translated date string.
### Response:
def translate(self, date_string, keep_formatting=False, settings=None):
"""
Translate the date string to its English equivalent.
:param date_string:
A string representing date and/or time in a recognizably valid format.
:type date_string: str|unicode
:param keep_formatting:
If True, retain formatting of the date string after translation.
:type keep_formatting: bool
:return: translated date string.
"""
date_string = self._translate_numerals(date_string)
if settings.NORMALIZE:
date_string = normalize_unicode(date_string)
date_string = self._simplify(date_string, settings=settings)
dictionary = self._get_dictionary(settings)
date_string_tokens = dictionary.split(date_string, keep_formatting)
relative_translations = self._get_relative_translations(settings=settings)
for i, word in enumerate(date_string_tokens):
word = word.lower()
for pattern, replacement in relative_translations.items():
if pattern.match(word):
date_string_tokens[i] = pattern.sub(replacement, word)
else:
if word in dictionary:
date_string_tokens[i] = dictionary[word] or ''
if "in" in date_string_tokens:
date_string_tokens = self._clear_future_words(date_string_tokens)
return self._join(list(filter(bool, date_string_tokens)),
separator="" if keep_formatting else " ", settings=settings) |
def _generate_response_head_bytes(status_code, headers):
"""
:type status_code: int
:type headers: dict[str, str]
:rtype: bytes
"""
head_string = str(status_code) + _DELIMITER_NEWLINE
header_tuples = sorted((k, headers[k]) for k in headers)
for name, value in header_tuples:
name = _get_header_correctly_cased(name)
if _should_sign_response_header(name):
head_string += _FORMAT_HEADER_STRING.format(name, value)
return (head_string + _DELIMITER_NEWLINE).encode() | :type status_code: int
:type headers: dict[str, str]
:rtype: bytes | Below is the instruction that describes the task:
### Input:
:type status_code: int
:type headers: dict[str, str]
:rtype: bytes
### Response:
def _generate_response_head_bytes(status_code, headers):
"""
:type status_code: int
:type headers: dict[str, str]
:rtype: bytes
"""
head_string = str(status_code) + _DELIMITER_NEWLINE
header_tuples = sorted((k, headers[k]) for k in headers)
for name, value in header_tuples:
name = _get_header_correctly_cased(name)
if _should_sign_response_header(name):
head_string += _FORMAT_HEADER_STRING.format(name, value)
return (head_string + _DELIMITER_NEWLINE).encode() |
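A self-contained sketch of the head-string layout this helper produces: the status code on its own line, the headers sorted by name, then a trailing blank line. The delimiter/format constants and the case-correcting and signing-filter helpers are private to the library, so stand-ins are assumed here and the filtering step is skipped:

def head_bytes_sketch(status_code, headers):
    newline = '\n'                                  # assumed value of _DELIMITER_NEWLINE
    head = str(status_code) + newline
    for name, value in sorted(headers.items()):     # header casing/filtering omitted
        head += '{}: {}{}'.format(name, value, newline)
    return (head + newline).encode()

print(head_bytes_sketch(200, {'Content-Type': 'application/json'}))
# b'200\nContent-Type: application/json\n\n'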
def ver_cmp(ver1, ver2):
"""
Compare lago versions
Args:
ver1(str): version string
ver2(str): version string
Returns:
Return negative if ver1<ver2, zero if ver1==ver2, positive if
ver1>ver2.
"""
return cmp(
pkg_resources.parse_version(ver1), pkg_resources.parse_version(ver2)
) | Compare lago versions
Args:
ver1(str): version string
ver2(str): version string
Returns:
Return negative if ver1<ver2, zero if ver1==ver2, positive if
ver1>ver2. | Below is the instruction that describes the task:
### Input:
Compare lago versions
Args:
ver1(str): version string
ver2(str): version string
Returns:
Return negative if ver1<ver2, zero if ver1==ver2, positive if
ver1>ver2.
### Response:
def ver_cmp(ver1, ver2):
"""
Compare lago versions
Args:
ver1(str): version string
ver2(str): version string
Returns:
Return negative if ver1<ver2, zero if ver1==ver2, positive if
ver1>ver2.
"""
return cmp(
pkg_resources.parse_version(ver1), pkg_resources.parse_version(ver2)
) |
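One thing to note: the cmp() builtin used above exists only in Python 2. A rough Python 3 equivalent of the same version comparison, still relying on pkg_resources.parse_version, could look like this sketch:

import pkg_resources

def ver_cmp_py3(ver1, ver2):
    a = pkg_resources.parse_version(ver1)
    b = pkg_resources.parse_version(ver2)
    return (a > b) - (a < b)   # -1, 0 or 1, mirroring the cmp() contract

print(ver_cmp_py3('0.44.1', '0.45'))  # -1
print(ver_cmp_py3('1.0', '1.0'))      # 0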
def UpdateResourcesFromResFile(dstpath, srcpath, types=None, names=None,
languages=None):
"""
Update or add resources from dll/exe file srcpath in dll/exe file dstpath.
types = a list of resource types to update (None = all)
names = a list of resource names to update (None = all)
languages = a list of resource languages to update (None = all)
"""
res = GetResources(srcpath, types, names, languages)
UpdateResourcesFromDict(dstpath, res) | Update or add resources from dll/exe file srcpath in dll/exe file dstpath.
types = a list of resource types to update (None = all)
names = a list of resource names to update (None = all)
languages = a list of resource languages to update (None = all) | Below is the instruction that describes the task:
### Input:
Update or add resources from dll/exe file srcpath in dll/exe file dstpath.
types = a list of resource types to update (None = all)
names = a list of resource names to update (None = all)
languages = a list of resource languages to update (None = all)
### Response:
def UpdateResourcesFromResFile(dstpath, srcpath, types=None, names=None,
languages=None):
"""
Update or add resources from dll/exe file srcpath in dll/exe file dstpath.
types = a list of resource types to update (None = all)
names = a list of resource names to update (None = all)
languages = a list of resource languages to update (None = all)
"""
res = GetResources(srcpath, types, names, languages)
UpdateResourcesFromDict(dstpath, res) |
async def disable_digital_reporting(self, pin):
"""
Disables digital reporting. By turning reporting off for this pin,
Reporting is disabled for all 8 bits in the "port"
:param pin: Pin and all pins for this port
:returns: No return value
"""
port = pin // 8
command = [PrivateConstants.REPORT_DIGITAL + port,
PrivateConstants.REPORTING_DISABLE]
await self._send_command(command) | Disables digital reporting. By turning reporting off for this pin,
Reporting is disabled for all 8 bits in the "port"
:param pin: Pin and all pins for this port
:returns: No return value | Below is the instruction that describes the task:
### Input:
Disables digital reporting. By turning reporting off for this pin,
Reporting is disabled for all 8 bits in the "port"
:param pin: Pin and all pins for this port
:returns: No return value
### Response:
async def disable_digital_reporting(self, pin):
"""
Disables digital reporting. By turning reporting off for this pin,
Reporting is disabled for all 8 bits in the "port"
:param pin: Pin and all pins for this port
:returns: No return value
"""
port = pin // 8
command = [PrivateConstants.REPORT_DIGITAL + port,
PrivateConstants.REPORTING_DISABLE]
await self._send_command(command) |
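The pin-to-port arithmetic above groups digital pins into 8-bit ports, so disabling reporting for any pin in a port disables all eight, as the docstring warns. A tiny illustration of which port a few pin numbers fall into:

for pin in (0, 7, 8, 13, 16):
    print(pin, '-> port', pin // 8)
# 0 -> port 0, 7 -> port 0, 8 -> port 1, 13 -> port 1, 16 -> port 2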
def recode_from_groupby(c, sort, ci):
"""
Reverse the codes_to_groupby to account for sort / observed.
Parameters
----------
c : Categorical
sort : boolean
The value of the sort parameter groupby was called with.
ci : CategoricalIndex
The codes / categories to recode
Returns
-------
CategoricalIndex
"""
# we re-order to the original category orderings
if sort:
return ci.set_categories(c.categories)
# we are not sorting, so add unobserved to the end
return ci.add_categories(
c.categories[~c.categories.isin(ci.categories)]) | Reverse the codes_to_groupby to account for sort / observed.
Parameters
----------
c : Categorical
sort : boolean
The value of the sort parameter groupby was called with.
ci : CategoricalIndex
The codes / categories to recode
Returns
-------
CategoricalIndex | Below is the the instruction that describes the task:
### Input:
Reverse the codes_to_groupby to account for sort / observed.
Parameters
----------
c : Categorical
sort : boolean
The value of the sort parameter groupby was called with.
ci : CategoricalIndex
The codes / categories to recode
Returns
-------
CategoricalIndex
### Response:
def recode_from_groupby(c, sort, ci):
"""
Reverse the codes_to_groupby to account for sort / observed.
Parameters
----------
c : Categorical
sort : boolean
The value of the sort parameter groupby was called with.
ci : CategoricalIndex
The codes / categories to recode
Returns
-------
CategoricalIndex
"""
# we re-order to the original category orderings
if sort:
return ci.set_categories(c.categories)
# we are not sorting, so add unobserved to the end
return ci.add_categories(
c.categories[~c.categories.isin(ci.categories)]) |
def create(fc_layers=None, dropout=None, pretrained=True):
""" Vel factory function """
def instantiate(**_):
return Resnet34(fc_layers, dropout, pretrained)
return ModelFactory.generic(instantiate) | Vel factory function | Below is the the instruction that describes the task:
### Input:
Vel factory function
### Response:
def create(fc_layers=None, dropout=None, pretrained=True):
""" Vel factory function """
def instantiate(**_):
return Resnet34(fc_layers, dropout, pretrained)
return ModelFactory.generic(instantiate) |
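The factory above only captures its arguments in a closure and defers construction until the framework calls it. A stand-alone sketch of that pattern with a placeholder class (Resnet34 and ModelFactory belong to the vel framework and are not reproduced here; DummyModel and create_dummy are hypothetical names):
class DummyModel:
    def __init__(self, fc_layers=None, dropout=None, pretrained=True):
        self.config = (fc_layers, dropout, pretrained)

def create_dummy(fc_layers=None, dropout=None, pretrained=True):
    def instantiate(**_):
        return DummyModel(fc_layers, dropout, pretrained)
    return instantiate  # construction is deferred until this closure is called

factory = create_dummy(fc_layers=[512, 10], dropout=0.5)
print(factory().config)  # ([512, 10], 0.5, True)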
def run_analyze_dimension_and_radius(data, rmin, rmax, nradii, adjacency_method='brute', adjacency_kwds = {},
fit_range=None, savefig=False, plot_name = 'dimension_plot.png'):
"""
This function is used to estimate the doubling dimension (approximately equal to the intrinsic
dimension) by computing a graph of neighborhood radius versus average number of neighbors.
The "radius" refers to the truncation constant where all distances greater than
a specified radius are taken to be infinite. This is used for example in the
truncated Gaussian kernel in estimate_radius.py
Parameters
----------
data : numpy array,
Original data set for which we are estimating the bandwidth
rmin : float,
smallest radius to consider
rmax : float,
largest radius to consider
nradii : int,
number of radii between rmax and rmin to consider
adjacency_method : string,
megaman adjacency method to use, default 'brute' see geometry.py for details
adjacency_kwds : dict,
dictionary of keywords for adjacency method
fit_range : list of ints,
range of radii to consider default is range(nradii), i.e. all of them
savefig: bool,
whether to save the radius vs. neighbors figure
plot_name: string,
filename of the figure to be saved as.
Returns
-------
results : dictionary
        contains the radii, average neighbors, min and max number of neighbors and number
of points with no neighbors.
dim : float,
estimated doubling dimension (used as an estimate of the intrinsic dimension)
"""
n, D = data.shape
radii = 10**(np.linspace(np.log10(rmin), np.log10(rmax), nradii))
dists = compute_largest_radius_distance(data, rmax, adjacency_method, adjacency_kwds)
results = neighborhood_analysis(dists, radii)
avg_neighbors = results['avg_neighbors'].flatten()
radii = results['radii'].flatten()
if fit_range is None:
fit_range = range(len(radii))
dim = find_dimension_plot(avg_neighbors, radii, fit_range, savefig, plot_name)
return(results, dim) | This function is used to estimate the doubling dimension (approximately equal to the intrinsic
dimension) by computing a graph of neighborhood radius versus average number of neighbors.
The "radius" refers to the truncation constant where all distances greater than
a specified radius are taken to be infinite. This is used for example in the
truncated Gaussian kernel in estimate_radius.py
Parameters
----------
data : numpy array,
Original data set for which we are estimating the bandwidth
rmin : float,
smallest radius to consider
rmax : float,
largest radius to consider
nradii : int,
number of radii between rmax and rmin to consider
adjacency_method : string,
megaman adjacency method to use, default 'brute' see geometry.py for details
adjacency_kwds : dict,
dictionary of keywords for adjacency method
fit_range : list of ints,
range of radii to consider default is range(nradii), i.e. all of them
savefig: bool,
whether to save the radius vs. neighbors figure
plot_name: string,
filename of the figure to be saved as.
Returns
-------
results : dictionary
        contains the radii, average neighbors, min and max number of neighbors and number
of points with no neighbors.
dim : float,
estimated doubling dimension (used as an estimate of the intrinsic dimension) | Below is the the instruction that describes the task:
### Input:
This function is used to estimate the doubling dimension (approximately equal to the intrinsic
dimension) by computing a graph of neighborhood radius versus average number of neighbors.
The "radius" refers to the truncation constant where all distances greater than
a specified radius are taken to be infinite. This is used for example in the
truncated Gaussian kernel in estimate_radius.py
Parameters
----------
data : numpy array,
Original data set for which we are estimating the bandwidth
rmin : float,
smallest radius to consider
rmax : float,
largest radius to consider
nradii : int,
number of radii between rmax and rmin to consider
adjacency_method : string,
megaman adjacency method to use, default 'brute' see geometry.py for details
adjacency_kwds : dict,
dictionary of keywords for adjacency method
fit_range : list of ints,
range of radii to consider default is range(nradii), i.e. all of them
savefig: bool,
whether to save the radius vs. neighbors figure
plot_name: string,
filename of the figure to be saved as.
Returns
-------
results : dictionary
        contains the radii, average neighbors, min and max number of neighbors and number
of points with no neighbors.
dim : float,
estimated doubling dimension (used as an estimate of the intrinsic dimension)
### Response:
def run_analyze_dimension_and_radius(data, rmin, rmax, nradii, adjacency_method='brute', adjacency_kwds = {},
fit_range=None, savefig=False, plot_name = 'dimension_plot.png'):
"""
This function is used to estimate the doubling dimension (approximately equal to the intrinsic
dimension) by computing a graph of neighborhood radius versus average number of neighbors.
The "radius" refers to the truncation constant where all distances greater than
a specified radius are taken to be infinite. This is used for example in the
truncated Gaussian kernel in estimate_radius.py
Parameters
----------
data : numpy array,
Original data set for which we are estimating the bandwidth
rmin : float,
smallest radius to consider
rmax : float,
largest radius to consider
nradii : int,
number of radii between rmax and rmin to consider
adjacency_method : string,
megaman adjacency method to use, default 'brute' see geometry.py for details
adjacency_kwds : dict,
dictionary of keywords for adjacency method
fit_range : list of ints,
range of radii to consider default is range(nradii), i.e. all of them
savefig: bool,
whether to save the radius vs. neighbors figure
plot_name: string,
filename of the figure to be saved as.
Returns
-------
results : dictionary
        contains the radii, average neighbors, min and max number of neighbors and number
of points with no neighbors.
dim : float,
estimated doubling dimension (used as an estimate of the intrinsic dimension)
"""
n, D = data.shape
radii = 10**(np.linspace(np.log10(rmin), np.log10(rmax), nradii))
dists = compute_largest_radius_distance(data, rmax, adjacency_method, adjacency_kwds)
results = neighborhood_analysis(dists, radii)
avg_neighbors = results['avg_neighbors'].flatten()
radii = results['radii'].flatten()
if fit_range is None:
fit_range = range(len(radii))
dim = find_dimension_plot(avg_neighbors, radii, fit_range, savefig, plot_name)
return(results, dim) |
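The dimension estimate rests on the observation that, for data of intrinsic dimension d, the average neighbor count grows roughly like radius**d, so d appears as the slope in log-log space. A rough numpy sketch of that fit with made-up counts (the actual find_dimension_plot implementation is not shown above, so this is only the assumed idea):
import numpy as np

radii = np.array([0.1, 0.2, 0.4, 0.8])
avg_neighbors = np.array([3.0, 11.0, 45.0, 180.0])   # made-up counts
slope, _ = np.polyfit(np.log(radii), np.log(avg_neighbors), 1)
print(slope)  # close to 2, suggesting an intrinsic dimension of about 2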
def graph_coloring_qubo(graph, k):
"""
the QUBO for k-coloring a graph A is as follows:
variables:
x_{v,c} = 1 if vertex v of A gets color c; x_{v,c} = 0 otherwise
constraints:
1) each v in A gets exactly one color.
This constraint is enforced by including the term (\sum_c x_{v,c} - 1)^2 in the QUBO,
which is minimized when \sum_c x_{v,c} = 1.
2) If u and v in A are adjacent, then they get different colors.
This constraint is enforced by including terms x_{v,c} x_{u,c} in the QUBO,
            which is minimized when at most one of u and v get color c.
Total QUBO:
Q(x) = \sum_v (\sum_c x_{v,c} - 1)^2 + \sum_{u ~ v} \sum_c x_{v,c} x_{u,c}
The graph of interactions for this QUBO consists of cliques of size k (with vertices {x_{v,c} for c = 0,...,k-1})
plus k disjoint copies of the graph A (one for each color).
"""
K = nx.complete_graph(k)
g1 = nx.cartesian_product(nx.create_empty_copy(graph), K)
g2 = nx.cartesian_product(graph, nx.create_empty_copy(K))
return nx.compose(g1, g2) | the QUBO for k-coloring a graph A is as follows:
variables:
x_{v,c} = 1 if vertex v of A gets color c; x_{v,c} = 0 otherwise
constraints:
1) each v in A gets exactly one color.
This constraint is enforced by including the term (\sum_c x_{v,c} - 1)^2 in the QUBO,
which is minimized when \sum_c x_{v,c} = 1.
2) If u and v in A are adjacent, then they get different colors.
This constraint is enforced by including terms x_{v,c} x_{u,c} in the QUBO,
            which is minimized when at most one of u and v get color c.
Total QUBO:
Q(x) = \sum_v (\sum_c x_{v,c} - 1)^2 + \sum_{u ~ v} \sum_c x_{v,c} x_{u,c}
The graph of interactions for this QUBO consists of cliques of size k (with vertices {x_{v,c} for c = 0,...,k-1})
plus k disjoint copies of the graph A (one for each color). | Below is the the instruction that describes the task:
### Input:
the QUBO for k-coloring a graph A is as follows:
variables:
x_{v,c} = 1 if vertex v of A gets color c; x_{v,c} = 0 otherwise
constraints:
1) each v in A gets exactly one color.
This constraint is enforced by including the term (\sum_c x_{v,c} - 1)^2 in the QUBO,
which is minimized when \sum_c x_{v,c} = 1.
2) If u and v in A are adjacent, then they get different colors.
This constraint is enforced by including terms x_{v,c} x_{u,c} in the QUBO,
            which is minimized when at most one of u and v get color c.
Total QUBO:
Q(x) = \sum_v (\sum_c x_{v,c} - 1)^2 + \sum_{u ~ v} \sum_c x_{v,c} x_{u,c}
The graph of interactions for this QUBO consists of cliques of size k (with vertices {x_{v,c} for c = 0,...,k-1})
plus k disjoint copies of the graph A (one for each color).
### Response:
def graph_coloring_qubo(graph, k):
"""
the QUBO for k-coloring a graph A is as follows:
variables:
x_{v,c} = 1 if vertex v of A gets color c; x_{v,c} = 0 otherwise
constraints:
1) each v in A gets exactly one color.
This constraint is enforced by including the term (\sum_c x_{v,c} - 1)^2 in the QUBO,
which is minimized when \sum_c x_{v,c} = 1.
2) If u and v in A are adjacent, then they get different colors.
This constraint is enforced by including terms x_{v,c} x_{u,c} in the QUBO,
            which is minimized when at most one of u and v get color c.
Total QUBO:
Q(x) = \sum_v (\sum_c x_{v,c} - 1)^2 + \sum_{u ~ v} \sum_c x_{v,c} x_{u,c}
The graph of interactions for this QUBO consists of cliques of size k (with vertices {x_{v,c} for c = 0,...,k-1})
plus k disjoint copies of the graph A (one for each color).
"""
K = nx.complete_graph(k)
g1 = nx.cartesian_product(nx.create_empty_copy(graph), K)
g2 = nx.cartesian_product(graph, nx.create_empty_copy(K))
return nx.compose(g1, g2) |
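A small usage sketch of the function above (assuming networkx is imported as nx, as the body requires): for a triangle and k=3 the interaction graph has 3 x 3 = 9 variables, 9 edges from the per-vertex color cliques and 9 from the three copies of the triangle.
import networkx as nx

triangle = nx.cycle_graph(3)
qubo_graph = graph_coloring_qubo(triangle, k=3)
print(qubo_graph.number_of_nodes())  # 9  -> one variable per (vertex, color) pair
print(qubo_graph.number_of_edges())  # 18 -> 9 clique edges + 9 conflict edges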
def load_engines(manager, class_name, base_module, engines, class_key='ENGINE', engine_type='engine'):
"""Load engines."""
loaded_engines = {}
for module_name_or_dict in engines:
if not isinstance(module_name_or_dict, dict):
module_name_or_dict = {
class_key: module_name_or_dict
}
try:
module_name = module_name_or_dict[class_key]
engine_settings = module_name_or_dict
except KeyError:
raise ImproperlyConfigured("If {} specification is a dictionary, it must define {}".format(
engine_type, class_key))
try:
engine_module = import_module(module_name)
try:
engine = getattr(engine_module, class_name)(manager=manager, settings=engine_settings)
if not isinstance(engine, BaseEngine):
raise ImproperlyConfigured("{} module {} class {} must extend BaseEngine".format(
engine_type.capitalize(), module_name, class_name))
except AttributeError:
raise ImproperlyConfigured("{} module {} is missing a {} class".format(
engine_type.capitalize(), module_name, class_name))
if engine.get_name() in loaded_engines:
raise ImproperlyConfigured("Duplicated {} {}".format(engine_type, engine.get_name()))
loaded_engines[engine.get_name()] = engine
except ImportError as ex:
# The engine wasn't found. Display a helpful error message listing all possible
# (built-in) engines.
engine_dir = os.path.join(os.path.dirname(upath(__file__)), base_module)
try:
builtin_engines = [name for _, name, _ in pkgutil.iter_modules([engine_dir])]
except EnvironmentError:
builtin_engines = []
if module_name not in ['resolwe.flow.{}.{}'.format(base_module, builtin_engine)
for builtin_engine in builtin_engines]:
engine_reprs = map(repr, sorted(builtin_engines))
error_msg = ("{} isn't an available dataflow {}.\n"
"Try using 'resolwe.flow.{}.XXX', where XXX is one of:\n"
" {}\n"
"Error was: {}".format(
module_name, engine_type, base_module, ", ".join(engine_reprs), ex
))
raise ImproperlyConfigured(error_msg)
else:
# If there's some other error, this must be an error in Django
raise
return loaded_engines | Load engines. | Below is the the instruction that describes the task:
### Input:
Load engines.
### Response:
def load_engines(manager, class_name, base_module, engines, class_key='ENGINE', engine_type='engine'):
"""Load engines."""
loaded_engines = {}
for module_name_or_dict in engines:
if not isinstance(module_name_or_dict, dict):
module_name_or_dict = {
class_key: module_name_or_dict
}
try:
module_name = module_name_or_dict[class_key]
engine_settings = module_name_or_dict
except KeyError:
raise ImproperlyConfigured("If {} specification is a dictionary, it must define {}".format(
engine_type, class_key))
try:
engine_module = import_module(module_name)
try:
engine = getattr(engine_module, class_name)(manager=manager, settings=engine_settings)
if not isinstance(engine, BaseEngine):
raise ImproperlyConfigured("{} module {} class {} must extend BaseEngine".format(
engine_type.capitalize(), module_name, class_name))
except AttributeError:
raise ImproperlyConfigured("{} module {} is missing a {} class".format(
engine_type.capitalize(), module_name, class_name))
if engine.get_name() in loaded_engines:
raise ImproperlyConfigured("Duplicated {} {}".format(engine_type, engine.get_name()))
loaded_engines[engine.get_name()] = engine
except ImportError as ex:
# The engine wasn't found. Display a helpful error message listing all possible
# (built-in) engines.
engine_dir = os.path.join(os.path.dirname(upath(__file__)), base_module)
try:
builtin_engines = [name for _, name, _ in pkgutil.iter_modules([engine_dir])]
except EnvironmentError:
builtin_engines = []
if module_name not in ['resolwe.flow.{}.{}'.format(base_module, builtin_engine)
for builtin_engine in builtin_engines]:
engine_reprs = map(repr, sorted(builtin_engines))
error_msg = ("{} isn't an available dataflow {}.\n"
"Try using 'resolwe.flow.{}.XXX', where XXX is one of:\n"
" {}\n"
"Error was: {}".format(
module_name, engine_type, base_module, ", ".join(engine_reprs), ex
))
raise ImproperlyConfigured(error_msg)
else:
# If there's some other error, this must be an error in Django
raise
return loaded_engines |
def createDirStruct(paths, verbose=True):
'''Loops ait.config._datapaths from AIT_CONFIG and creates a directory.
Replaces year and doy with the respective year and day-of-year.
If neither are given as arguments, current UTC day and year are used.
Args:
paths:
[optional] list of directory paths you would like to create.
doy and year will be replaced by the datetime day and year, respectively.
datetime:
UTC Datetime string in ISO 8601 Format YYYY-MM-DDTHH:mm:ssZ
'''
for k, path in paths.items():
p = None
try:
pathlist = path if type(path) is list else [ path ]
for p in pathlist:
os.makedirs(p)
if verbose:
log.info('Creating directory: ' + p)
except OSError, e:
#print path
if e.errno == errno.EEXIST and os.path.isdir(p):
pass
else:
raise
return True | Loops ait.config._datapaths from AIT_CONFIG and creates a directory.
Replaces year and doy with the respective year and day-of-year.
If neither are given as arguments, current UTC day and year are used.
Args:
paths:
[optional] list of directory paths you would like to create.
doy and year will be replaced by the datetime day and year, respectively.
datetime:
UTC Datetime string in ISO 8601 Format YYYY-MM-DDTHH:mm:ssZ | Below is the the instruction that describes the task:
### Input:
Loops ait.config._datapaths from AIT_CONFIG and creates a directory.
Replaces year and doy with the respective year and day-of-year.
If neither are given as arguments, current UTC day and year are used.
Args:
paths:
[optional] list of directory paths you would like to create.
doy and year will be replaced by the datetime day and year, respectively.
datetime:
UTC Datetime string in ISO 8601 Format YYYY-MM-DDTHH:mm:ssZ
### Response:
def createDirStruct(paths, verbose=True):
'''Loops ait.config._datapaths from AIT_CONFIG and creates a directory.
Replaces year and doy with the respective year and day-of-year.
If neither are given as arguments, current UTC day and year are used.
Args:
paths:
[optional] list of directory paths you would like to create.
doy and year will be replaced by the datetime day and year, respectively.
datetime:
UTC Datetime string in ISO 8601 Format YYYY-MM-DDTHH:mm:ssZ
'''
for k, path in paths.items():
p = None
try:
pathlist = path if type(path) is list else [ path ]
for p in pathlist:
os.makedirs(p)
if verbose:
log.info('Creating directory: ' + p)
except OSError, e:
#print path
if e.errno == errno.EEXIST and os.path.isdir(p):
pass
else:
raise
return True |
def to_web(graph: BELGraph,
host: Optional[str] = None,
user: Optional[str] = None,
password: Optional[str] = None,
public: bool = False,
) -> requests.Response:
"""Send a graph to the receiver service and returns the :mod:`requests` response object.
:param graph: A BEL graph
:param host: The location of the BEL Commons server. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_HOST`` or the environment as ``PYBEL_REMOTE_HOST`` Defaults to
:data:`pybel.constants.DEFAULT_SERVICE_URL`
:param user: Username for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_USER`` or the environment as ``PYBEL_REMOTE_USER``
:param password: Password for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_PASSWORD`` or the environment as ``PYBEL_REMOTE_PASSWORD``
:return: The response object from :mod:`requests`
"""
if host is None:
host = _get_host()
log.debug('using host: %s', host)
if user is None:
user = _get_user()
if user is None:
raise ValueError('no user found')
if password is None:
password = _get_password()
if password is None:
raise ValueError('no password found')
url = host.rstrip('/') + RECIEVE_ENDPOINT
response = requests.post(
url,
json=to_json(graph),
headers={
'content-type': 'application/json',
'User-Agent': 'PyBEL v{}'.format(get_version()),
'bel-commons-public': 'true' if public else 'false',
},
auth=(user, password),
)
log.debug('received response: %s', response)
return response | Send a graph to the receiver service and returns the :mod:`requests` response object.
:param graph: A BEL graph
:param host: The location of the BEL Commons server. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_HOST`` or the environment as ``PYBEL_REMOTE_HOST`` Defaults to
:data:`pybel.constants.DEFAULT_SERVICE_URL`
:param user: Username for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_USER`` or the environment as ``PYBEL_REMOTE_USER``
:param password: Password for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_PASSWORD`` or the environment as ``PYBEL_REMOTE_PASSWORD``
:return: The response object from :mod:`requests` | Below is the the instruction that describes the task:
### Input:
Send a graph to the receiver service and returns the :mod:`requests` response object.
:param graph: A BEL graph
:param host: The location of the BEL Commons server. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_HOST`` or the environment as ``PYBEL_REMOTE_HOST`` Defaults to
:data:`pybel.constants.DEFAULT_SERVICE_URL`
:param user: Username for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_USER`` or the environment as ``PYBEL_REMOTE_USER``
:param password: Password for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_PASSWORD`` or the environment as ``PYBEL_REMOTE_PASSWORD``
:return: The response object from :mod:`requests`
### Response:
def to_web(graph: BELGraph,
host: Optional[str] = None,
user: Optional[str] = None,
password: Optional[str] = None,
public: bool = False,
) -> requests.Response:
"""Send a graph to the receiver service and returns the :mod:`requests` response object.
:param graph: A BEL graph
:param host: The location of the BEL Commons server. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_HOST`` or the environment as ``PYBEL_REMOTE_HOST`` Defaults to
:data:`pybel.constants.DEFAULT_SERVICE_URL`
:param user: Username for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_USER`` or the environment as ``PYBEL_REMOTE_USER``
:param password: Password for BEL Commons. Alternatively, looks up in PyBEL config with
``PYBEL_REMOTE_PASSWORD`` or the environment as ``PYBEL_REMOTE_PASSWORD``
:return: The response object from :mod:`requests`
"""
if host is None:
host = _get_host()
log.debug('using host: %s', host)
if user is None:
user = _get_user()
if user is None:
raise ValueError('no user found')
if password is None:
password = _get_password()
if password is None:
raise ValueError('no password found')
url = host.rstrip('/') + RECIEVE_ENDPOINT
response = requests.post(
url,
json=to_json(graph),
headers={
'content-type': 'application/json',
'User-Agent': 'PyBEL v{}'.format(get_version()),
'bel-commons-public': 'true' if public else 'false',
},
auth=(user, password),
)
log.debug('received response: %s', response)
return response |
def update_project(project):
"""Update a project instance.
:param project: PYBOSSA project
:type project: PYBOSSA Project
:returns: True -- the response status code
"""
try:
project_id = project.id
project = _forbidden_attributes(project)
res = _pybossa_req('put', 'project', project_id, payload=project.data)
if res.get('id'):
return Project(res)
else:
return res
except: # pragma: no cover
raise | Update a project instance.
:param project: PYBOSSA project
:type project: PYBOSSA Project
:returns: True -- the response status code | Below is the the instruction that describes the task:
### Input:
Update a project instance.
:param project: PYBOSSA project
:type project: PYBOSSA Project
:returns: True -- the response status code
### Response:
def update_project(project):
"""Update a project instance.
:param project: PYBOSSA project
:type project: PYBOSSA Project
:returns: True -- the response status code
"""
try:
project_id = project.id
project = _forbidden_attributes(project)
res = _pybossa_req('put', 'project', project_id, payload=project.data)
if res.get('id'):
return Project(res)
else:
return res
except: # pragma: no cover
raise |
def _imm_dir(self):
'''
An immutable object's dir function should list not only its attributes, but also its un-cached
lazy values.
'''
dir0 = set(dir(self.__class__))
dir0.update(self.__dict__.keys())
dir0.update(six.iterkeys(_imm_value_data(self)))
return sorted(list(dir0)) | An immutable object's dir function should list not only its attributes, but also its un-cached
lazy values. | Below is the the instruction that describes the task:
### Input:
An immutable object's dir function should list not only its attributes, but also its un-cached
lazy values.
### Response:
def _imm_dir(self):
'''
An immutable object's dir function should list not only its attributes, but also its un-cached
lazy values.
'''
dir0 = set(dir(self.__class__))
dir0.update(self.__dict__.keys())
dir0.update(six.iterkeys(_imm_value_data(self)))
return sorted(list(dir0)) |
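A minimal stand-alone sketch of the same idea, with a hypothetical class whose lazy values live in a plain dict rather than in the immutable machinery used above:
class LazyThing:
    _lazy = {'area': None}          # placeholder for not-yet-computed lazy values

    def __dir__(self):
        names = set(dir(type(self)))
        names.update(self.__dict__)
        names.update(self._lazy)
        return sorted(names)

print('area' in dir(LazyThing()))   # True, even though it was never computed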
def step_through(self, msg='', shutit_pexpect_child=None, level=1, print_input=True, value=True):
"""Implements a step-through function, using pause_point.
"""
shutit_global.shutit_global_object.yield_to_draw()
shutit_pexpect_child = shutit_pexpect_child or self.get_current_shutit_pexpect_session().pexpect_child
shutit_pexpect_session = self.get_shutit_pexpect_session_from_child(shutit_pexpect_child)
if (not shutit_global.shutit_global_object.determine_interactive() or not shutit_global.shutit_global_object.interactive or
shutit_global.shutit_global_object.interactive < level):
return True
self.build['step_through'] = value
shutit_pexpect_session.pause_point(msg, print_input=print_input, level=level)
return True | Implements a step-through function, using pause_point. | Below is the the instruction that describes the task:
### Input:
Implements a step-through function, using pause_point.
### Response:
def step_through(self, msg='', shutit_pexpect_child=None, level=1, print_input=True, value=True):
"""Implements a step-through function, using pause_point.
"""
shutit_global.shutit_global_object.yield_to_draw()
shutit_pexpect_child = shutit_pexpect_child or self.get_current_shutit_pexpect_session().pexpect_child
shutit_pexpect_session = self.get_shutit_pexpect_session_from_child(shutit_pexpect_child)
if (not shutit_global.shutit_global_object.determine_interactive() or not shutit_global.shutit_global_object.interactive or
shutit_global.shutit_global_object.interactive < level):
return True
self.build['step_through'] = value
shutit_pexpect_session.pause_point(msg, print_input=print_input, level=level)
return True |
def parseSpectra(self):
""" #TODO: docstring
:returns: #TODO: docstring
"""
#Note: the spectra need to be iterated completely to save the
#metadataNode
if self._parsed:
raise TypeError('Mzml file already parsed.')
self._parsed = True
return self._parseMzml() | #TODO: docstring
:returns: #TODO: docstring | Below is the the instruction that describes the task:
### Input:
#TODO: docstring
:returns: #TODO: docstring
### Response:
def parseSpectra(self):
""" #TODO: docstring
:returns: #TODO: docstring
"""
#Note: the spectra need to be iterated completely to save the
#metadataNode
if self._parsed:
raise TypeError('Mzml file already parsed.')
self._parsed = True
return self._parseMzml() |
def names2dnsrepr(x):
"""
Take as input a list of DNS names or a single DNS name
and encode it in DNS format (with possible compression)
If a string that is already a DNS name in DNS format
is passed, it is returned unmodified. Result is a string.
!!! At the moment, compression is not implemented !!!
"""
if type(x) is str:
if x and x[-1] == '\x00': # stupid heuristic
return x.encode('ascii')
x = [x.encode('ascii')]
elif type(x) is bytes:
if x and x[-1] == 0:
return x
x = [x]
res = []
for n in x:
if type(n) is str:
n = n.encode('ascii')
termin = b"\x00"
if n.count(b'.') == 0: # single-component gets one more
termin += bytes([0])
n = b"".join(map(lambda y: chr(len(y)).encode('ascii')+y, n.split(b"."))) + termin
res.append(n)
return b"".join(res) | Take as input a list of DNS names or a single DNS name
and encode it in DNS format (with possible compression)
If a string that is already a DNS name in DNS format
is passed, it is returned unmodified. Result is a string.
!!! At the moment, compression is not implemented !!! | Below is the the instruction that describes the task:
### Input:
Take as input a list of DNS names or a single DNS name
and encode it in DNS format (with possible compression)
If a string that is already a DNS name in DNS format
is passed, it is returned unmodified. Result is a string.
!!! At the moment, compression is not implemented !!!
### Response:
def names2dnsrepr(x):
"""
Take as input a list of DNS names or a single DNS name
and encode it in DNS format (with possible compression)
If a string that is already a DNS name in DNS format
is passed, it is returned unmodified. Result is a string.
!!! At the moment, compression is not implemented !!!
"""
if type(x) is str:
if x and x[-1] == '\x00': # stupid heuristic
return x.encode('ascii')
x = [x.encode('ascii')]
elif type(x) is bytes:
if x and x[-1] == 0:
return x
x = [x]
res = []
for n in x:
if type(n) is str:
n = n.encode('ascii')
termin = b"\x00"
if n.count(b'.') == 0: # single-component gets one more
termin += bytes([0])
n = b"".join(map(lambda y: chr(len(y)).encode('ascii')+y, n.split(b"."))) + termin
res.append(n)
return b"".join(res) |
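For reference, the wire format produced above length-prefixes each label and terminates the name with a zero byte; a quick check, assuming the function above is in scope:
print(names2dnsrepr("www.example.com"))
# expected: b'\x03www\x07example\x03com\x00'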
def python_matches(self,text):
"""Match attributes or global python names"""
#io.rprint('Completer->python_matches, txt=%r' % text) # dbg
if "." in text:
try:
matches = self.attr_matches(text)
if text.endswith('.') and self.omit__names:
if self.omit__names == 1:
# true if txt is _not_ a __ name, false otherwise:
no__name = (lambda txt:
re.match(r'.*\.__.*?__',txt) is None)
else:
# true if txt is _not_ a _ name, false otherwise:
no__name = (lambda txt:
re.match(r'.*\._.*?',txt) is None)
matches = filter(no__name, matches)
except NameError:
# catches <undefined attributes>.<tab>
matches = []
else:
matches = self.global_matches(text)
return matches | Match attributes or global python names | Below is the the instruction that describes the task:
### Input:
Match attributes or global python names
### Response:
def python_matches(self,text):
"""Match attributes or global python names"""
#io.rprint('Completer->python_matches, txt=%r' % text) # dbg
if "." in text:
try:
matches = self.attr_matches(text)
if text.endswith('.') and self.omit__names:
if self.omit__names == 1:
# true if txt is _not_ a __ name, false otherwise:
no__name = (lambda txt:
re.match(r'.*\.__.*?__',txt) is None)
else:
# true if txt is _not_ a _ name, false otherwise:
no__name = (lambda txt:
re.match(r'.*\._.*?',txt) is None)
matches = filter(no__name, matches)
except NameError:
# catches <undefined attributes>.<tab>
matches = []
else:
matches = self.global_matches(text)
return matches |
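The omit__names filtering above hinges on two small regexes; a quick stand-alone sketch of how they classify candidates:
import re

print(bool(re.match(r'.*\.__.*?__', 'obj.__class__')))  # True  -> dropped when omit__names == 1
print(bool(re.match(r'.*\.__.*?__', 'obj.shape')))      # False -> kept
print(bool(re.match(r'.*\._.*?',    'obj._cache')))     # True  -> dropped when omit__names != 1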
def _flush(self):
"""Write persistent dictionary to disc."""
_logger.debug("_flush()")
self._lock.acquire_write() # TODO: read access is enough?
try:
self._dict.sync()
finally:
self._lock.release() | Write persistent dictionary to disc. | Below is the the instruction that describes the task:
### Input:
Write persistent dictionary to disc.
### Response:
def _flush(self):
"""Write persistent dictionary to disc."""
_logger.debug("_flush()")
self._lock.acquire_write() # TODO: read access is enough?
try:
self._dict.sync()
finally:
self._lock.release() |
def peng_mant(snum):
r"""
Return the mantissa of a number represented in engineering notation.
:param snum: Number
:type snum: :ref:`EngineeringNotationNumber`
:rtype: float
.. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]
.. Auto-generated exceptions documentation for
.. peng.functions.peng_mant
:raises: RuntimeError (Argument \`snum\` is not valid)
.. [[[end]]]
For example:
>>> import peng
>>> peng.peng_mant(peng.peng(1235.6789E3, 3, False))
1.236
"""
snum = snum.rstrip()
return float(snum if snum[-1].isdigit() else snum[:-1]) | r"""
Return the mantissa of a number represented in engineering notation.
:param snum: Number
:type snum: :ref:`EngineeringNotationNumber`
:rtype: float
.. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]
.. Auto-generated exceptions documentation for
.. peng.functions.peng_mant
:raises: RuntimeError (Argument \`snum\` is not valid)
.. [[[end]]]
For example:
>>> import peng
>>> peng.peng_mant(peng.peng(1235.6789E3, 3, False))
1.236 | Below is the the instruction that describes the task:
### Input:
r"""
Return the mantissa of a number represented in engineering notation.
:param snum: Number
:type snum: :ref:`EngineeringNotationNumber`
:rtype: float
.. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]
.. Auto-generated exceptions documentation for
.. peng.functions.peng_mant
:raises: RuntimeError (Argument \`snum\` is not valid)
.. [[[end]]]
For example:
>>> import peng
>>> peng.peng_mant(peng.peng(1235.6789E3, 3, False))
1.236
### Response:
def peng_mant(snum):
r"""
Return the mantissa of a number represented in engineering notation.
:param snum: Number
:type snum: :ref:`EngineeringNotationNumber`
:rtype: float
.. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]
.. Auto-generated exceptions documentation for
.. peng.functions.peng_mant
:raises: RuntimeError (Argument \`snum\` is not valid)
.. [[[end]]]
For example:
>>> import peng
>>> peng.peng_mant(peng.peng(1235.6789E3, 3, False))
1.236
"""
snum = snum.rstrip()
return float(snum if snum[-1].isdigit() else snum[:-1]) |
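Beyond the doctest above, the function is plain string handling, so it also accepts values without a suffix:
print(peng_mant(" 1.236k"))   # 1.236  (suffix stripped)
print(peng_mant("-5.3"))      # -5.3   (no suffix: the whole string is the mantissa)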
def _write_model(self, specification, specification_set):
""" Write autogenerate specification file
"""
filename = "vspk/%s%s.cs" % (self._class_prefix, specification.entity_name)
override_content = self._extract_override_content(specification.entity_name)
superclass_name = "RestObject"
defaults = {}
section = specification.entity_name
if self.attrs_defaults.has_section(section):
for attribute in self.attrs_defaults.options(section):
defaults[attribute] = self.attrs_defaults.get(section, attribute)
self.write(destination=self.output_directory,
filename=filename,
template_name="model.cs.tpl",
specification=specification,
specification_set=specification_set,
version=self.api_version,
name=self._name,
class_prefix=self._class_prefix,
product_accronym=self._product_accronym,
override_content=override_content,
superclass_name=superclass_name,
header=self.header_content,
version_string=self._api_version_string,
package_name=self._package_name,
attribute_defaults=defaults)
return (filename, specification.entity_name) | Write autogenerate specification file | Below is the the instruction that describes the task:
### Input:
Write autogenerate specification file
### Response:
def _write_model(self, specification, specification_set):
""" Write autogenerate specification file
"""
filename = "vspk/%s%s.cs" % (self._class_prefix, specification.entity_name)
override_content = self._extract_override_content(specification.entity_name)
superclass_name = "RestObject"
defaults = {}
section = specification.entity_name
if self.attrs_defaults.has_section(section):
for attribute in self.attrs_defaults.options(section):
defaults[attribute] = self.attrs_defaults.get(section, attribute)
self.write(destination=self.output_directory,
filename=filename,
template_name="model.cs.tpl",
specification=specification,
specification_set=specification_set,
version=self.api_version,
name=self._name,
class_prefix=self._class_prefix,
product_accronym=self._product_accronym,
override_content=override_content,
superclass_name=superclass_name,
header=self.header_content,
version_string=self._api_version_string,
package_name=self._package_name,
attribute_defaults=defaults)
return (filename, specification.entity_name) |
def set_dimensions(self, variables, unlimited_dims=None):
"""
This provides a centralized method to set the dimensions on the data
store.
Parameters
----------
variables : dict-like
Dictionary of key/value (variable name / xr.Variable) pairs
unlimited_dims : list-like
List of dimension names that should be treated as unlimited
dimensions.
"""
if unlimited_dims is None:
unlimited_dims = set()
existing_dims = self.get_dimensions()
dims = OrderedDict()
for v in unlimited_dims: # put unlimited_dims first
dims[v] = None
for v in variables.values():
dims.update(dict(zip(v.dims, v.shape)))
for dim, length in dims.items():
if dim in existing_dims and length != existing_dims[dim]:
raise ValueError(
"Unable to update size for existing dimension"
"%r (%d != %d)" % (dim, length, existing_dims[dim]))
elif dim not in existing_dims:
is_unlimited = dim in unlimited_dims
self.set_dimension(dim, length, is_unlimited) | This provides a centralized method to set the dimensions on the data
store.
Parameters
----------
variables : dict-like
Dictionary of key/value (variable name / xr.Variable) pairs
unlimited_dims : list-like
List of dimension names that should be treated as unlimited
dimensions. | Below is the the instruction that describes the task:
### Input:
This provides a centralized method to set the dimensions on the data
store.
Parameters
----------
variables : dict-like
Dictionary of key/value (variable name / xr.Variable) pairs
unlimited_dims : list-like
List of dimension names that should be treated as unlimited
dimensions.
### Response:
def set_dimensions(self, variables, unlimited_dims=None):
"""
This provides a centralized method to set the dimensions on the data
store.
Parameters
----------
variables : dict-like
Dictionary of key/value (variable name / xr.Variable) pairs
unlimited_dims : list-like
List of dimension names that should be treated as unlimited
dimensions.
"""
if unlimited_dims is None:
unlimited_dims = set()
existing_dims = self.get_dimensions()
dims = OrderedDict()
for v in unlimited_dims: # put unlimited_dims first
dims[v] = None
for v in variables.values():
dims.update(dict(zip(v.dims, v.shape)))
for dim, length in dims.items():
if dim in existing_dims and length != existing_dims[dim]:
raise ValueError(
"Unable to update size for existing dimension"
"%r (%d != %d)" % (dim, length, existing_dims[dim]))
elif dim not in existing_dims:
is_unlimited = dim in unlimited_dims
self.set_dimension(dim, length, is_unlimited) |
def set_value(self, key, value):
"""
Set the recent files value in QSettings.
:param key: value key
:param value: new value
"""
if value is None:
value = []
value = [os.path.normpath(pth) for pth in value]
self._settings.setValue('recent_files/%s' % key, value) | Set the recent files value in QSettings.
:param key: value key
:param value: new value | Below is the the instruction that describes the task:
### Input:
Set the recent files value in QSettings.
:param key: value key
:param value: new value
### Response:
def set_value(self, key, value):
"""
Set the recent files value in QSettings.
:param key: value key
:param value: new value
"""
if value is None:
value = []
value = [os.path.normpath(pth) for pth in value]
self._settings.setValue('recent_files/%s' % key, value) |
def build_list(self, title=None, items=None):
"""Presents the user with a vertical list of multiple items.
Allows the user to select a single item.
Selection generates a user query containing the title of the list item
*Note* Returns a completely new object,
and does not modify the existing response object
Therefore, to add items, must be assigned to new variable
or call the method directly after initializing list
example usage:
simple = ask('I speak this text')
mylist = simple.build_list('List Title')
mylist.add_item('Item1', 'key1')
mylist.add_item('Item2', 'key2')
return mylist
Arguments:
title {str} -- Title displayed at top of list card
Returns:
_ListSelector -- [_Response object exposing the add_item method]
"""
list_card = _ListSelector(
self._speech, display_text=self._display_text, title=title, items=items
)
return list_card | Presents the user with a vertical list of multiple items.
Allows the user to select a single item.
Selection generates a user query containing the title of the list item
*Note* Returns a completely new object,
and does not modify the existing response object
Therefore, to add items, must be assigned to new variable
or call the method directly after initializing list
example usage:
simple = ask('I speak this text')
mylist = simple.build_list('List Title')
mylist.add_item('Item1', 'key1')
mylist.add_item('Item2', 'key2')
return mylist
Arguments:
title {str} -- Title displayed at top of list card
Returns:
_ListSelector -- [_Response object exposing the add_item method] | Below is the the instruction that describes the task:
### Input:
Presents the user with a vertical list of multiple items.
Allows the user to select a single item.
Selection generates a user query containing the title of the list item
*Note* Returns a completely new object,
and does not modify the existing response object
Therefore, to add items, must be assigned to new variable
or call the method directly after initializing list
example usage:
simple = ask('I speak this text')
mylist = simple.build_list('List Title')
mylist.add_item('Item1', 'key1')
mylist.add_item('Item2', 'key2')
return mylist
Arguments:
title {str} -- Title displayed at top of list card
Returns:
_ListSelector -- [_Response object exposing the add_item method]
### Response:
def build_list(self, title=None, items=None):
"""Presents the user with a vertical list of multiple items.
Allows the user to select a single item.
Selection generates a user query containing the title of the list item
*Note* Returns a completely new object,
and does not modify the existing response object
Therefore, to add items, must be assigned to new variable
or call the method directly after initializing list
example usage:
simple = ask('I speak this text')
mylist = simple.build_list('List Title')
mylist.add_item('Item1', 'key1')
mylist.add_item('Item2', 'key2')
return mylist
Arguments:
title {str} -- Title displayed at top of list card
Returns:
_ListSelector -- [_Response object exposing the add_item method]
"""
list_card = _ListSelector(
self._speech, display_text=self._display_text, title=title, items=items
)
return list_card |
def set_by_dotted_path(d, path, value):
"""
Set an entry in a nested dict using a dotted path.
Will create dictionaries as needed.
Examples
--------
>>> d = {'foo': {'bar': 7}}
>>> set_by_dotted_path(d, 'foo.bar', 10)
>>> d
{'foo': {'bar': 10}}
>>> set_by_dotted_path(d, 'foo.d.baz', 3)
>>> d
{'foo': {'bar': 10, 'd': {'baz': 3}}}
"""
split_path = path.split('.')
current_option = d
for p in split_path[:-1]:
if p not in current_option:
current_option[p] = dict()
current_option = current_option[p]
current_option[split_path[-1]] = value | Set an entry in a nested dict using a dotted path.
Will create dictionaries as needed.
Examples
--------
>>> d = {'foo': {'bar': 7}}
>>> set_by_dotted_path(d, 'foo.bar', 10)
>>> d
{'foo': {'bar': 10}}
>>> set_by_dotted_path(d, 'foo.d.baz', 3)
>>> d
{'foo': {'bar': 10, 'd': {'baz': 3}}} | Below is the the instruction that describes the task:
### Input:
Set an entry in a nested dict using a dotted path.
Will create dictionaries as needed.
Examples
--------
>>> d = {'foo': {'bar': 7}}
>>> set_by_dotted_path(d, 'foo.bar', 10)
>>> d
{'foo': {'bar': 10}}
>>> set_by_dotted_path(d, 'foo.d.baz', 3)
>>> d
{'foo': {'bar': 10, 'd': {'baz': 3}}}
### Response:
def set_by_dotted_path(d, path, value):
"""
Set an entry in a nested dict using a dotted path.
Will create dictionaries as needed.
Examples
--------
>>> d = {'foo': {'bar': 7}}
>>> set_by_dotted_path(d, 'foo.bar', 10)
>>> d
{'foo': {'bar': 10}}
>>> set_by_dotted_path(d, 'foo.d.baz', 3)
>>> d
{'foo': {'bar': 10, 'd': {'baz': 3}}}
"""
split_path = path.split('.')
current_option = d
for p in split_path[:-1]:
if p not in current_option:
current_option[p] = dict()
current_option = current_option[p]
current_option[split_path[-1]] = value |
def soft_kill(jid, state_id=None):
'''
Set up a state run to die before executing the given state id,
this instructs a running state to safely exit at a given
state id. This needs to pass in the jid of the running state.
If a state_id is not passed then the jid referenced will be safely exited
at the beginning of the next state run.
'''
minion = salt.minion.MasterMinion(__opts__)
minion.functions['state.soft_kill'](jid, state_id) | Set up a state run to die before executing the given state id,
this instructs a running state to safely exit at a given
state id. This needs to pass in the jid of the running state.
If a state_id is not passed then the jid referenced will be safely exited
at the beginning of the next state run. | Below is the the instruction that describes the task:
### Input:
Set up a state run to die before executing the given state id,
this instructs a running state to safely exit at a given
state id. This needs to pass in the jid of the running state.
If a state_id is not passed then the jid referenced will be safely exited
at the beginning of the next state run.
### Response:
def soft_kill(jid, state_id=None):
'''
Set up a state run to die before executing the given state id,
this instructs a running state to safely exit at a given
state id. This needs to pass in the jid of the running state.
If a state_id is not passed then the jid referenced will be safely exited
at the beginning of the next state run.
'''
minion = salt.minion.MasterMinion(__opts__)
minion.functions['state.soft_kill'](jid, state_id) |
def _create_get_request(self, resource, billomat_id='', command=None, params=None):
"""
Creates a get request and return the response data
"""
if not params:
params = {}
if not command:
command = ''
else:
command = '/' + command
assert (isinstance(resource, str))
if billomat_id:
assert (isinstance(billomat_id, int) or isinstance(billomat_id, str))
if isinstance(billomat_id, int):
billomat_id = str(billomat_id)
response = self.session.get(
url=self.api_url + resource + ('/' + billomat_id if billomat_id else '') + command,
params=params,
)
return self._handle_response(response) | Creates a get request and return the response data | Below is the the instruction that describes the task:
### Input:
Creates a get request and return the response data
### Response:
def _create_get_request(self, resource, billomat_id='', command=None, params=None):
"""
Creates a get request and return the response data
"""
if not params:
params = {}
if not command:
command = ''
else:
command = '/' + command
assert (isinstance(resource, str))
if billomat_id:
assert (isinstance(billomat_id, int) or isinstance(billomat_id, str))
if isinstance(billomat_id, int):
billomat_id = str(billomat_id)
response = self.session.get(
url=self.api_url + resource + ('/' + billomat_id if billomat_id else '') + command,
params=params,
)
return self._handle_response(response) |
def reweight_by_distance(self, coords, metric='l2', copy=False):
'''Replaces existing edge weights by distances between connected vertices.
The new weight of edge (i,j) is given by: metric(coords[i], coords[j]).
coords : (num_vertices x d) array of coordinates, in vertex order
metric : str or callable, see sklearn.metrics.pairwise.paired_distances'''
if not self.is_weighted():
warnings.warn('Cannot supply weights for unweighted graph; '
'ignoring call to reweight_by_distance')
return self
# TODO: take advantage of symmetry of metric function
ii, jj = self.pairs().T
if metric == 'precomputed':
assert coords.ndim == 2 and coords.shape[0] == coords.shape[1]
d = coords[ii,jj]
else:
d = paired_distances(coords[ii], coords[jj], metric=metric)
return self._update_edges(d, copy=copy) | Replaces existing edge weights by distances between connected vertices.
The new weight of edge (i,j) is given by: metric(coords[i], coords[j]).
coords : (num_vertices x d) array of coordinates, in vertex order
metric : str or callable, see sklearn.metrics.pairwise.paired_distances | Below is the the instruction that describes the task:
### Input:
Replaces existing edge weights by distances between connected vertices.
The new weight of edge (i,j) is given by: metric(coords[i], coords[j]).
coords : (num_vertices x d) array of coordinates, in vertex order
metric : str or callable, see sklearn.metrics.pairwise.paired_distances
### Response:
def reweight_by_distance(self, coords, metric='l2', copy=False):
'''Replaces existing edge weights by distances between connected vertices.
The new weight of edge (i,j) is given by: metric(coords[i], coords[j]).
coords : (num_vertices x d) array of coordinates, in vertex order
metric : str or callable, see sklearn.metrics.pairwise.paired_distances'''
if not self.is_weighted():
warnings.warn('Cannot supply weights for unweighted graph; '
'ignoring call to reweight_by_distance')
return self
# TODO: take advantage of symmetry of metric function
ii, jj = self.pairs().T
if metric == 'precomputed':
assert coords.ndim == 2 and coords.shape[0] == coords.shape[1]
d = coords[ii,jj]
else:
d = paired_distances(coords[ii], coords[jj], metric=metric)
return self._update_edges(d, copy=copy) |
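The per-edge distances above come from sklearn's paired_distances, which works row-wise on the two endpoint coordinate arrays; a small sketch with a hypothetical two-edge list:
import numpy as np
from sklearn.metrics.pairwise import paired_distances

coords = np.array([[0.0, 0.0], [3.0, 4.0], [3.0, 0.0]])
ii = np.array([0, 0])   # hypothetical edges (0, 1) and (0, 2)
jj = np.array([1, 2])
print(paired_distances(coords[ii], coords[jj]))  # [5. 3.] with the default euclidean metric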
def create_workflow_template(
self,
parent,
template,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates new workflow template.
Example:
>>> from google.cloud import dataproc_v1beta2
>>>
>>> client = dataproc_v1beta2.WorkflowTemplateServiceClient()
>>>
>>> parent = client.region_path('[PROJECT]', '[REGION]')
>>>
>>> # TODO: Initialize `template`:
>>> template = {}
>>>
>>> response = client.create_workflow_template(parent, template)
Args:
parent (str): Required. The "resource name" of the region, as described in
https://cloud.google.com/apis/design/resource\_names of the form
``projects/{project_id}/regions/{region}``
template (Union[dict, ~google.cloud.dataproc_v1beta2.types.WorkflowTemplate]): Required. The Dataproc workflow template to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "create_workflow_template" not in self._inner_api_calls:
self._inner_api_calls[
"create_workflow_template"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_workflow_template,
default_retry=self._method_configs["CreateWorkflowTemplate"].retry,
default_timeout=self._method_configs["CreateWorkflowTemplate"].timeout,
client_info=self._client_info,
)
request = workflow_templates_pb2.CreateWorkflowTemplateRequest(
parent=parent, template=template
)
return self._inner_api_calls["create_workflow_template"](
request, retry=retry, timeout=timeout, metadata=metadata
) | Creates new workflow template.
Example:
>>> from google.cloud import dataproc_v1beta2
>>>
>>> client = dataproc_v1beta2.WorkflowTemplateServiceClient()
>>>
>>> parent = client.region_path('[PROJECT]', '[REGION]')
>>>
>>> # TODO: Initialize `template`:
>>> template = {}
>>>
>>> response = client.create_workflow_template(parent, template)
Args:
parent (str): Required. The "resource name" of the region, as described in
https://cloud.google.com/apis/design/resource\_names of the form
``projects/{project_id}/regions/{region}``
template (Union[dict, ~google.cloud.dataproc_v1beta2.types.WorkflowTemplate]): Required. The Dataproc workflow template to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid. | Below is the the instruction that describes the task:
### Input:
Creates new workflow template.
Example:
>>> from google.cloud import dataproc_v1beta2
>>>
>>> client = dataproc_v1beta2.WorkflowTemplateServiceClient()
>>>
>>> parent = client.region_path('[PROJECT]', '[REGION]')
>>>
>>> # TODO: Initialize `template`:
>>> template = {}
>>>
>>> response = client.create_workflow_template(parent, template)
Args:
parent (str): Required. The "resource name" of the region, as described in
https://cloud.google.com/apis/design/resource\_names of the form
``projects/{project_id}/regions/{region}``
template (Union[dict, ~google.cloud.dataproc_v1beta2.types.WorkflowTemplate]): Required. The Dataproc workflow template to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
### Response:
def create_workflow_template(
self,
parent,
template,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates new workflow template.
Example:
>>> from google.cloud import dataproc_v1beta2
>>>
>>> client = dataproc_v1beta2.WorkflowTemplateServiceClient()
>>>
>>> parent = client.region_path('[PROJECT]', '[REGION]')
>>>
>>> # TODO: Initialize `template`:
>>> template = {}
>>>
>>> response = client.create_workflow_template(parent, template)
Args:
parent (str): Required. The "resource name" of the region, as described in
https://cloud.google.com/apis/design/resource\_names of the form
``projects/{project_id}/regions/{region}``
template (Union[dict, ~google.cloud.dataproc_v1beta2.types.WorkflowTemplate]): Required. The Dataproc workflow template to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.dataproc_v1beta2.types.WorkflowTemplate` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "create_workflow_template" not in self._inner_api_calls:
self._inner_api_calls[
"create_workflow_template"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_workflow_template,
default_retry=self._method_configs["CreateWorkflowTemplate"].retry,
default_timeout=self._method_configs["CreateWorkflowTemplate"].timeout,
client_info=self._client_info,
)
request = workflow_templates_pb2.CreateWorkflowTemplateRequest(
parent=parent, template=template
)
return self._inner_api_calls["create_workflow_template"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
def _set_id_variable_by_entity_key(self) -> Dict[str, str]:
'''Identify and set the good ids for the different entities'''
if self.id_variable_by_entity_key is None:
self.id_variable_by_entity_key = dict(
(entity.key, entity.key + '_id') for entity in self.tax_benefit_system.entities)
log.debug("Use default id_variable names:\n {}".format(self.id_variable_by_entity_key))
return self.id_variable_by_entity_key | Identify and set the good ids for the different entities | Below is the the instruction that describes the task:
### Input:
Identify and set the good ids for the different entities
### Response:
def _set_id_variable_by_entity_key(self) -> Dict[str, str]:
'''Identify and set the good ids for the different entities'''
if self.id_variable_by_entity_key is None:
self.id_variable_by_entity_key = dict(
(entity.key, entity.key + '_id') for entity in self.tax_benefit_system.entities)
log.debug("Use default id_variable names:\n {}".format(self.id_variable_by_entity_key))
return self.id_variable_by_entity_key |
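As a concrete illustration of the default behaviour, for a tax-benefit system whose entities are, say, person and household, the method fills in and returns a mapping like the one below; the instance name and entity keys are assumptions.
survey_scenario.id_variable_by_entity_key = None
ids = survey_scenario._set_id_variable_by_entity_key()
print(ids)   # e.g. {'person': 'person_id', 'household': 'household_id'}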
def make_muc_userinfo(self):
"""
Create <x xmlns="...muc#user"/> element in the stanza.
:return: the element created.
:returntype: `MucUserX`
"""
self.clear_muc_child()
self.muc_child=MucUserX(parent=self.xmlnode)
return self.muc_child | Create <x xmlns="...muc#user"/> element in the stanza.
:return: the element created.
:returntype: `MucUserX` | Below is the the instruction that describes the task:
### Input:
Create <x xmlns="...muc#user"/> element in the stanza.
:return: the element created.
:returntype: `MucUserX`
### Response:
def make_muc_userinfo(self):
"""
Create <x xmlns="...muc#user"/> element in the stanza.
:return: the element created.
:returntype: `MucUserX`
"""
self.clear_muc_child()
self.muc_child=MucUserX(parent=self.xmlnode)
return self.muc_child |
def sha2_crypt(key, salt, hashfunc, rounds=_ROUNDS_DEFAULT):
"""
This algorithm is insane. History can be found at
https://en.wikipedia.org/wiki/Crypt_%28C%29
"""
key = key.encode('utf-8')
h = hashfunc()
alt_h = hashfunc()
digest_size = h.digest_size
key_len = len(key)
# First, feed key, salt and then key again to the alt hash
alt_h.update(key)
alt_h.update(salt.encode('utf-8'))
alt_h.update(key)
alt_result = alt_h.digest()
# Feed key and salt to the primary hash
h.update(key)
h.update(salt.encode('utf-8'))
    # Feed as many (looping) bytes from alt digest as the length of the key
for i in range(key_len//digest_size):
h.update(alt_result)
h.update(alt_result[:(key_len % digest_size)])
# Take the binary representation of the length of the key and for every
# 1 add the alternate digest, for every 0 the key
bits = key_len
while bits > 0:
if bits & 1 == 0:
h.update(key)
else:
h.update(alt_result)
bits >>= 1
# Store the results from the primary hash
alt_result = h.digest()
h = hashfunc()
# Add password for each character in the password
for i in range(key_len):
h.update(key)
temp_result = h.digest()
# Compute a P array of the bytes in temp repeated for the length of the key
p_bytes = temp_result * (key_len // digest_size)
p_bytes += temp_result[:(key_len % digest_size)]
alt_h = hashfunc()
# Add the salt 16 + arbitrary amount decided by first byte in alt digest
for i in range(16 + byte2int(alt_result[0])):
alt_h.update(salt.encode('utf-8'))
temp_result = alt_h.digest()
# Compute a S array of the bytes in temp_result repeated for the length
# of the salt
s_bytes = temp_result * (len(salt) // digest_size)
s_bytes += temp_result[:(len(salt) % digest_size)]
# Do the actual iterations
for i in range(rounds):
h = hashfunc()
# Alternate adding either the P array or the alt digest
if i & 1 != 0:
h.update(p_bytes)
else:
h.update(alt_result)
# If the round is divisible by 3, add the S array
if i % 3 != 0:
h.update(s_bytes)
# If the round is divisible by 7, add the P array
if i % 7 != 0:
h.update(p_bytes)
# Alternate adding either the P array or the alt digest, opposite
# of first step
if i & 1 != 0:
h.update(alt_result)
else:
h.update(p_bytes)
alt_result = h.digest()
# Compute the base64-ish representation of the hash
ret = []
if digest_size == 64:
# SHA-512
ret.append(b64_from_24bit(alt_result[0], alt_result[21], alt_result[42], 4))
ret.append(b64_from_24bit(alt_result[22], alt_result[43], alt_result[1], 4))
ret.append(b64_from_24bit(alt_result[44], alt_result[2], alt_result[23], 4))
ret.append(b64_from_24bit(alt_result[3], alt_result[24], alt_result[45], 4))
ret.append(b64_from_24bit(alt_result[25], alt_result[46], alt_result[4], 4))
ret.append(b64_from_24bit(alt_result[47], alt_result[5], alt_result[26], 4))
ret.append(b64_from_24bit(alt_result[6], alt_result[27], alt_result[48], 4))
ret.append(b64_from_24bit(alt_result[28], alt_result[49], alt_result[7], 4))
ret.append(b64_from_24bit(alt_result[50], alt_result[8], alt_result[29], 4))
ret.append(b64_from_24bit(alt_result[9], alt_result[30], alt_result[51], 4))
ret.append(b64_from_24bit(alt_result[31], alt_result[52], alt_result[10], 4))
ret.append(b64_from_24bit(alt_result[53], alt_result[11], alt_result[32], 4))
ret.append(b64_from_24bit(alt_result[12], alt_result[33], alt_result[54], 4))
ret.append(b64_from_24bit(alt_result[34], alt_result[55], alt_result[13], 4))
ret.append(b64_from_24bit(alt_result[56], alt_result[14], alt_result[35], 4))
ret.append(b64_from_24bit(alt_result[15], alt_result[36], alt_result[57], 4))
ret.append(b64_from_24bit(alt_result[37], alt_result[58], alt_result[16], 4))
ret.append(b64_from_24bit(alt_result[59], alt_result[17], alt_result[38], 4))
ret.append(b64_from_24bit(alt_result[18], alt_result[39], alt_result[60], 4))
ret.append(b64_from_24bit(alt_result[40], alt_result[61], alt_result[19], 4))
ret.append(b64_from_24bit(alt_result[62], alt_result[20], alt_result[41], 4))
ret.append(b64_from_24bit(int2byte(0), int2byte(0), alt_result[63], 2))
else:
# SHA-256
ret.append(b64_from_24bit(alt_result[0], alt_result[10], alt_result[20], 4))
ret.append(b64_from_24bit(alt_result[21], alt_result[1], alt_result[11], 4))
ret.append(b64_from_24bit(alt_result[12], alt_result[22], alt_result[2], 4))
ret.append(b64_from_24bit(alt_result[3], alt_result[13], alt_result[23], 4))
ret.append(b64_from_24bit(alt_result[24], alt_result[4], alt_result[14], 4))
ret.append(b64_from_24bit(alt_result[15], alt_result[25], alt_result[5], 4))
ret.append(b64_from_24bit(alt_result[6], alt_result[16], alt_result[26], 4))
ret.append(b64_from_24bit(alt_result[27], alt_result[7], alt_result[17], 4))
ret.append(b64_from_24bit(alt_result[18], alt_result[28], alt_result[8], 4))
ret.append(b64_from_24bit(alt_result[9], alt_result[19], alt_result[29], 4))
ret.append(b64_from_24bit(int2byte(0), alt_result[31], alt_result[30], 3))
algo = 6 if digest_size == 64 else 5
if rounds == _ROUNDS_DEFAULT:
return '${0}${1}${2}'.format(algo, salt, ''.join(ret))
else:
return '${0}$rounds={1}${2}${3}'.format(algo, rounds, salt, ''.join(ret)) | This algorithm is insane. History can be found at
https://en.wikipedia.org/wiki/Crypt_%28C%29 | Below is the the instruction that describes the task:
### Input:
This algorithm is insane. History can be found at
https://en.wikipedia.org/wiki/Crypt_%28C%29
### Response:
def sha2_crypt(key, salt, hashfunc, rounds=_ROUNDS_DEFAULT):
"""
This algorithm is insane. History can be found at
https://en.wikipedia.org/wiki/Crypt_%28C%29
"""
key = key.encode('utf-8')
h = hashfunc()
alt_h = hashfunc()
digest_size = h.digest_size
key_len = len(key)
# First, feed key, salt and then key again to the alt hash
alt_h.update(key)
alt_h.update(salt.encode('utf-8'))
alt_h.update(key)
alt_result = alt_h.digest()
# Feed key and salt to the primary hash
h.update(key)
h.update(salt.encode('utf-8'))
    # Feed as many (looping) bytes from alt digest as the length of the key
for i in range(key_len//digest_size):
h.update(alt_result)
h.update(alt_result[:(key_len % digest_size)])
# Take the binary representation of the length of the key and for every
# 1 add the alternate digest, for every 0 the key
bits = key_len
while bits > 0:
if bits & 1 == 0:
h.update(key)
else:
h.update(alt_result)
bits >>= 1
# Store the results from the primary hash
alt_result = h.digest()
h = hashfunc()
# Add password for each character in the password
for i in range(key_len):
h.update(key)
temp_result = h.digest()
# Compute a P array of the bytes in temp repeated for the length of the key
p_bytes = temp_result * (key_len // digest_size)
p_bytes += temp_result[:(key_len % digest_size)]
alt_h = hashfunc()
# Add the salt 16 + arbitrary amount decided by first byte in alt digest
for i in range(16 + byte2int(alt_result[0])):
alt_h.update(salt.encode('utf-8'))
temp_result = alt_h.digest()
# Compute a S array of the bytes in temp_result repeated for the length
# of the salt
s_bytes = temp_result * (len(salt) // digest_size)
s_bytes += temp_result[:(len(salt) % digest_size)]
# Do the actual iterations
for i in range(rounds):
h = hashfunc()
# Alternate adding either the P array or the alt digest
if i & 1 != 0:
h.update(p_bytes)
else:
h.update(alt_result)
# If the round is divisible by 3, add the S array
if i % 3 != 0:
h.update(s_bytes)
# If the round is divisible by 7, add the P array
if i % 7 != 0:
h.update(p_bytes)
# Alternate adding either the P array or the alt digest, opposite
# of first step
if i & 1 != 0:
h.update(alt_result)
else:
h.update(p_bytes)
alt_result = h.digest()
# Compute the base64-ish representation of the hash
ret = []
if digest_size == 64:
# SHA-512
ret.append(b64_from_24bit(alt_result[0], alt_result[21], alt_result[42], 4))
ret.append(b64_from_24bit(alt_result[22], alt_result[43], alt_result[1], 4))
ret.append(b64_from_24bit(alt_result[44], alt_result[2], alt_result[23], 4))
ret.append(b64_from_24bit(alt_result[3], alt_result[24], alt_result[45], 4))
ret.append(b64_from_24bit(alt_result[25], alt_result[46], alt_result[4], 4))
ret.append(b64_from_24bit(alt_result[47], alt_result[5], alt_result[26], 4))
ret.append(b64_from_24bit(alt_result[6], alt_result[27], alt_result[48], 4))
ret.append(b64_from_24bit(alt_result[28], alt_result[49], alt_result[7], 4))
ret.append(b64_from_24bit(alt_result[50], alt_result[8], alt_result[29], 4))
ret.append(b64_from_24bit(alt_result[9], alt_result[30], alt_result[51], 4))
ret.append(b64_from_24bit(alt_result[31], alt_result[52], alt_result[10], 4))
ret.append(b64_from_24bit(alt_result[53], alt_result[11], alt_result[32], 4))
ret.append(b64_from_24bit(alt_result[12], alt_result[33], alt_result[54], 4))
ret.append(b64_from_24bit(alt_result[34], alt_result[55], alt_result[13], 4))
ret.append(b64_from_24bit(alt_result[56], alt_result[14], alt_result[35], 4))
ret.append(b64_from_24bit(alt_result[15], alt_result[36], alt_result[57], 4))
ret.append(b64_from_24bit(alt_result[37], alt_result[58], alt_result[16], 4))
ret.append(b64_from_24bit(alt_result[59], alt_result[17], alt_result[38], 4))
ret.append(b64_from_24bit(alt_result[18], alt_result[39], alt_result[60], 4))
ret.append(b64_from_24bit(alt_result[40], alt_result[61], alt_result[19], 4))
ret.append(b64_from_24bit(alt_result[62], alt_result[20], alt_result[41], 4))
ret.append(b64_from_24bit(int2byte(0), int2byte(0), alt_result[63], 2))
else:
# SHA-256
ret.append(b64_from_24bit(alt_result[0], alt_result[10], alt_result[20], 4))
ret.append(b64_from_24bit(alt_result[21], alt_result[1], alt_result[11], 4))
ret.append(b64_from_24bit(alt_result[12], alt_result[22], alt_result[2], 4))
ret.append(b64_from_24bit(alt_result[3], alt_result[13], alt_result[23], 4))
ret.append(b64_from_24bit(alt_result[24], alt_result[4], alt_result[14], 4))
ret.append(b64_from_24bit(alt_result[15], alt_result[25], alt_result[5], 4))
ret.append(b64_from_24bit(alt_result[6], alt_result[16], alt_result[26], 4))
ret.append(b64_from_24bit(alt_result[27], alt_result[7], alt_result[17], 4))
ret.append(b64_from_24bit(alt_result[18], alt_result[28], alt_result[8], 4))
ret.append(b64_from_24bit(alt_result[9], alt_result[19], alt_result[29], 4))
ret.append(b64_from_24bit(int2byte(0), alt_result[31], alt_result[30], 3))
algo = 6 if digest_size == 64 else 5
if rounds == _ROUNDS_DEFAULT:
return '${0}${1}${2}'.format(algo, salt, ''.join(ret))
else:
return '${0}$rounds={1}${2}${3}'.format(algo, rounds, salt, ''.join(ret)) |
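A usage sketch, assuming the module-level helpers referenced above (b64_from_24bit, byte2int, int2byte, _ROUNDS_DEFAULT) are available alongside this function; on a glibc system the SHA-512 variant can be cross-checked against the standard library's crypt module.
import hashlib
import crypt   # POSIX-only, used here purely as a cross-check
digest = sha2_crypt('secret password', 'saltsalt', hashlib.sha512)
print(digest)                                                    # '$6$saltsalt$...'
print(digest == crypt.crypt('secret password', '$6$saltsalt'))   # True if the port is faithful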
def transfer_to(self, cloudpath, bbox, block_size=None, compress=True):
"""
Transfer files from one storage location to another, bypassing
volume painting. This enables using a single CloudVolume instance
to transfer big volumes. In some cases, gsutil or aws s3 cli tools
may be more appropriate. This method is provided for convenience. It
may be optimized for better performance over time as demand requires.
cloudpath (str): path to storage layer
bbox (Bbox object): ROI to transfer
block_size (int): number of file chunks to transfer per I/O batch.
compress (bool): Set to False to upload as uncompressed
"""
if type(bbox) is Bbox:
requested_bbox = bbox
else:
(requested_bbox, _, _) = self.__interpret_slices(bbox)
realized_bbox = self.__realized_bbox(requested_bbox)
if requested_bbox != realized_bbox:
raise exceptions.AlignmentError(
"Unable to transfer non-chunk aligned bounding boxes. Requested: {}, Realized: {}".format(
requested_bbox, realized_bbox
))
default_block_size_MB = 50 # MB
chunk_MB = self.underlying.rectVolume() * np.dtype(self.dtype).itemsize * self.num_channels
if self.layer_type == 'image':
# kind of an average guess for some EM datasets, have seen up to 1.9x and as low as 1.1
      # affinities are also images, but have very different compression ratios. e.g. 3x for kempressed
chunk_MB /= 1.3
else: # segmentation
chunk_MB /= 100.0 # compression ratios between 80 and 800....
chunk_MB /= 1024.0 * 1024.0
if block_size:
step = block_size
else:
step = int(default_block_size_MB // chunk_MB) + 1
try:
destvol = CloudVolume(cloudpath, mip=self.mip)
except exceptions.InfoUnavailableError:
destvol = CloudVolume(cloudpath, mip=self.mip, info=self.info, provenance=self.provenance.serialize())
destvol.commit_info()
destvol.commit_provenance()
except exceptions.ScaleUnavailableError:
destvol = CloudVolume(cloudpath)
for i in range(len(destvol.scales) + 1, len(self.scales)):
destvol.scales.append(
self.scales[i]
)
destvol.commit_info()
destvol.commit_provenance()
num_blocks = np.ceil(self.bounds.volume() / self.underlying.rectVolume()) / step
num_blocks = int(np.ceil(num_blocks))
cloudpaths = txrx.chunknames(realized_bbox, self.bounds, self.key, self.underlying)
pbar = tqdm(
desc='Transferring Blocks of {} Chunks'.format(step),
unit='blocks',
disable=(not self.progress),
total=num_blocks,
)
with pbar:
with Storage(self.layer_cloudpath) as src_stor:
with Storage(cloudpath) as dest_stor:
for _ in range(num_blocks, 0, -1):
srcpaths = list(itertools.islice(cloudpaths, step))
files = src_stor.get_files(srcpaths)
files = [ (f['filename'], f['content']) for f in files ]
dest_stor.put_files(
files=files,
compress=compress,
content_type=txrx.content_type(destvol),
)
pbar.update() | Transfer files from one storage location to another, bypassing
volume painting. This enables using a single CloudVolume instance
to transfer big volumes. In some cases, gsutil or aws s3 cli tools
may be more appropriate. This method is provided for convenience. It
may be optimized for better performance over time as demand requires.
cloudpath (str): path to storage layer
bbox (Bbox object): ROI to transfer
block_size (int): number of file chunks to transfer per I/O batch.
compress (bool): Set to False to upload as uncompressed | Below is the the instruction that describes the task:
### Input:
Transfer files from one storage location to another, bypassing
volume painting. This enables using a single CloudVolume instance
to transfer big volumes. In some cases, gsutil or aws s3 cli tools
may be more appropriate. This method is provided for convenience. It
may be optimized for better performance over time as demand requires.
cloudpath (str): path to storage layer
bbox (Bbox object): ROI to transfer
block_size (int): number of file chunks to transfer per I/O batch.
compress (bool): Set to False to upload as uncompressed
### Response:
def transfer_to(self, cloudpath, bbox, block_size=None, compress=True):
"""
Transfer files from one storage location to another, bypassing
volume painting. This enables using a single CloudVolume instance
to transfer big volumes. In some cases, gsutil or aws s3 cli tools
may be more appropriate. This method is provided for convenience. It
may be optimized for better performance over time as demand requires.
cloudpath (str): path to storage layer
bbox (Bbox object): ROI to transfer
block_size (int): number of file chunks to transfer per I/O batch.
compress (bool): Set to False to upload as uncompressed
"""
if type(bbox) is Bbox:
requested_bbox = bbox
else:
(requested_bbox, _, _) = self.__interpret_slices(bbox)
realized_bbox = self.__realized_bbox(requested_bbox)
if requested_bbox != realized_bbox:
raise exceptions.AlignmentError(
"Unable to transfer non-chunk aligned bounding boxes. Requested: {}, Realized: {}".format(
requested_bbox, realized_bbox
))
default_block_size_MB = 50 # MB
chunk_MB = self.underlying.rectVolume() * np.dtype(self.dtype).itemsize * self.num_channels
if self.layer_type == 'image':
# kind of an average guess for some EM datasets, have seen up to 1.9x and as low as 1.1
      # affinities are also images, but have very different compression ratios. e.g. 3x for kempressed
chunk_MB /= 1.3
else: # segmentation
chunk_MB /= 100.0 # compression ratios between 80 and 800....
chunk_MB /= 1024.0 * 1024.0
if block_size:
step = block_size
else:
step = int(default_block_size_MB // chunk_MB) + 1
try:
destvol = CloudVolume(cloudpath, mip=self.mip)
except exceptions.InfoUnavailableError:
destvol = CloudVolume(cloudpath, mip=self.mip, info=self.info, provenance=self.provenance.serialize())
destvol.commit_info()
destvol.commit_provenance()
except exceptions.ScaleUnavailableError:
destvol = CloudVolume(cloudpath)
for i in range(len(destvol.scales) + 1, len(self.scales)):
destvol.scales.append(
self.scales[i]
)
destvol.commit_info()
destvol.commit_provenance()
num_blocks = np.ceil(self.bounds.volume() / self.underlying.rectVolume()) / step
num_blocks = int(np.ceil(num_blocks))
cloudpaths = txrx.chunknames(realized_bbox, self.bounds, self.key, self.underlying)
pbar = tqdm(
desc='Transferring Blocks of {} Chunks'.format(step),
unit='blocks',
disable=(not self.progress),
total=num_blocks,
)
with pbar:
with Storage(self.layer_cloudpath) as src_stor:
with Storage(cloudpath) as dest_stor:
for _ in range(num_blocks, 0, -1):
srcpaths = list(itertools.islice(cloudpaths, step))
files = src_stor.get_files(srcpaths)
files = [ (f['filename'], f['content']) for f in files ]
dest_stor.put_files(
files=files,
compress=compress,
content_type=txrx.content_type(destvol),
)
pbar.update() |
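A usage sketch (bucket paths, mip level and the region of interest are invented): open the source volume, choose a chunk-aligned bounding box, and hand it to transfer_to.
from cloudvolume import CloudVolume, Bbox   # assumed public import path
src = CloudVolume('gs://my-bucket/dataset/image', mip=0, progress=True)
roi = Bbox((0, 0, 0), (1024, 1024, 64))     # must be chunk-aligned or AlignmentError is raised
src.transfer_to('gs://other-bucket/dataset/image', roi, block_size=64, compress=True)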
def _get(self, end_point, params=None, **kwargs):
"""Send a HTTP GET request to a Todoist API end-point.
:param end_point: The Todoist API end-point.
:type end_point: str
:param params: The required request parameters.
:type params: dict
:param kwargs: Any optional parameters.
:type kwargs: dict
:return: The HTTP response to the request.
:rtype: :class:`requests.Response`
"""
return self._request(requests.get, end_point, params, **kwargs) | Send a HTTP GET request to a Todoist API end-point.
:param end_point: The Todoist API end-point.
:type end_point: str
:param params: The required request parameters.
:type params: dict
:param kwargs: Any optional parameters.
:type kwargs: dict
:return: The HTTP response to the request.
:rtype: :class:`requests.Response` | Below is the the instruction that describes the task:
### Input:
Send a HTTP GET request to a Todoist API end-point.
:param end_point: The Todoist API end-point.
:type end_point: str
:param params: The required request parameters.
:type params: dict
:param kwargs: Any optional parameters.
:type kwargs: dict
:return: The HTTP response to the request.
:rtype: :class:`requests.Response`
### Response:
def _get(self, end_point, params=None, **kwargs):
"""Send a HTTP GET request to a Todoist API end-point.
:param end_point: The Todoist API end-point.
:type end_point: str
:param params: The required request parameters.
:type params: dict
:param kwargs: Any optional parameters.
:type kwargs: dict
:return: The HTTP response to the request.
:rtype: :class:`requests.Response`
"""
return self._request(requests.get, end_point, params, **kwargs) |
def hmm(args):
"""
%prog hmm workdir sample_key
Run CNV segmentation caller. The workdir must contain a subfolder called
`sample_key-cn` that contains CN for each chromosome. A `beta` directory
that contains scaler for each bin must also be present in the current
directory.
"""
p = OptionParser(hmm.__doc__)
p.add_option("--mu", default=.003, type="float",
help="Transition probability")
p.add_option("--sigma", default=.1, type="float",
help="Standard deviation of Gaussian emission distribution")
p.add_option("--threshold", default=1, type="float",
help="Standard deviation must be < this "
"in the baseline population")
opts, args = p.parse_args(args)
if len(args) != 2:
sys.exit(not p.print_help())
workdir, sample_key = args
model = CopyNumberHMM(workdir=workdir, mu=opts.mu, sigma=opts.sigma,
threshold=opts.threshold)
events = model.run(sample_key)
params = ".mu-{}.sigma-{}.threshold-{}"\
.format(opts.mu, opts.sigma, opts.threshold)
hmmfile = op.join(workdir, sample_key + params + ".seg")
fw = open(hmmfile, "w")
nevents = 0
for mean_cn, rr, event in events:
if event is None:
continue
print(" ".join((event.bedline, sample_key)), file=fw)
nevents += 1
fw.close()
logging.debug("A total of {} aberrant events written to `{}`"
.format(nevents, hmmfile))
return hmmfile | %prog hmm workdir sample_key
Run CNV segmentation caller. The workdir must contain a subfolder called
`sample_key-cn` that contains CN for each chromosome. A `beta` directory
that contains scaler for each bin must also be present in the current
directory. | Below is the the instruction that describes the task:
### Input:
%prog hmm workdir sample_key
Run CNV segmentation caller. The workdir must contain a subfolder called
`sample_key-cn` that contains CN for each chromosome. A `beta` directory
that contains scaler for each bin must also be present in the current
directory.
### Response:
def hmm(args):
"""
%prog hmm workdir sample_key
Run CNV segmentation caller. The workdir must contain a subfolder called
`sample_key-cn` that contains CN for each chromosome. A `beta` directory
that contains scaler for each bin must also be present in the current
directory.
"""
p = OptionParser(hmm.__doc__)
p.add_option("--mu", default=.003, type="float",
help="Transition probability")
p.add_option("--sigma", default=.1, type="float",
help="Standard deviation of Gaussian emission distribution")
p.add_option("--threshold", default=1, type="float",
help="Standard deviation must be < this "
"in the baseline population")
opts, args = p.parse_args(args)
if len(args) != 2:
sys.exit(not p.print_help())
workdir, sample_key = args
model = CopyNumberHMM(workdir=workdir, mu=opts.mu, sigma=opts.sigma,
threshold=opts.threshold)
events = model.run(sample_key)
params = ".mu-{}.sigma-{}.threshold-{}"\
.format(opts.mu, opts.sigma, opts.threshold)
hmmfile = op.join(workdir, sample_key + params + ".seg")
fw = open(hmmfile, "w")
nevents = 0
for mean_cn, rr, event in events:
if event is None:
continue
print(" ".join((event.bedline, sample_key)), file=fw)
nevents += 1
fw.close()
logging.debug("A total of {} aberrant events written to `{}`"
.format(nevents, hmmfile))
return hmmfile |
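A programmatic call mirroring the %prog usage above; args is the argv-style list handed to OptionParser, and the directory and sample key are placeholders.
hmmfile = hmm(['cnv-workdir', 'sample_42', '--mu', '0.003', '--sigma', '0.1'])
print(hmmfile)   # e.g. 'cnv-workdir/sample_42.mu-0.003.sigma-0.1.threshold-1.seg'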
def calc_tc_v1(self):
"""Adjust the measured air temperature to the altitude of the
individual zones.
Required control parameters:
|NmbZones|
|TCAlt|
|ZoneZ|
|ZRelT|
Required input sequence:
|hland_inputs.T|
Calculated flux sequences:
|TC|
Basic equation:
:math:`TC = T - TCAlt \\cdot (ZoneZ-ZRelT)`
Examples:
Prepare two zones, the first one lying at the reference
height and the second one 200 meters above:
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(2)
>>> zrelt(2.0)
>>> zonez(2.0, 4.0)
Applying the usual temperature lapse rate of 0.6°C/100m does
not affect the temperature of the first zone but reduces the
temperature of the second zone by 1.2°C:
>>> tcalt(0.6)
>>> inputs.t = 5.0
>>> model.calc_tc_v1()
>>> fluxes.tc
tc(5.0, 3.8)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nmbzones):
flu.tc[k] = inp.t-con.tcalt[k]*(con.zonez[k]-con.zrelt) | Adjust the measured air temperature to the altitude of the
individual zones.
Required control parameters:
|NmbZones|
|TCAlt|
|ZoneZ|
|ZRelT|
Required input sequence:
|hland_inputs.T|
Calculated flux sequences:
|TC|
Basic equation:
:math:`TC = T - TCAlt \\cdot (ZoneZ-ZRelT)`
Examples:
Prepare two zones, the first one lying at the reference
height and the second one 200 meters above:
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(2)
>>> zrelt(2.0)
>>> zonez(2.0, 4.0)
Applying the usual temperature lapse rate of 0.6°C/100m does
not affect the temperature of the first zone but reduces the
temperature of the second zone by 1.2°C:
>>> tcalt(0.6)
>>> inputs.t = 5.0
>>> model.calc_tc_v1()
>>> fluxes.tc
tc(5.0, 3.8) | Below is the the instruction that describes the task:
### Input:
Adjust the measured air temperature to the altitude of the
individual zones.
Required control parameters:
|NmbZones|
|TCAlt|
|ZoneZ|
|ZRelT|
Required input sequence:
|hland_inputs.T|
Calculated flux sequences:
|TC|
Basic equation:
:math:`TC = T - TCAlt \\cdot (ZoneZ-ZRelT)`
Examples:
Prepare two zones, the first one lying at the reference
height and the second one 200 meters above:
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(2)
>>> zrelt(2.0)
>>> zonez(2.0, 4.0)
Applying the usual temperature lapse rate of 0.6°C/100m does
not affect the temperature of the first zone but reduces the
temperature of the second zone by 1.2°C:
>>> tcalt(0.6)
>>> inputs.t = 5.0
>>> model.calc_tc_v1()
>>> fluxes.tc
tc(5.0, 3.8)
### Response:
def calc_tc_v1(self):
"""Adjust the measured air temperature to the altitude of the
individual zones.
Required control parameters:
|NmbZones|
|TCAlt|
|ZoneZ|
|ZRelT|
Required input sequence:
|hland_inputs.T|
Calculated flux sequences:
|TC|
Basic equation:
:math:`TC = T - TCAlt \\cdot (ZoneZ-ZRelT)`
Examples:
Prepare two zones, the first one lying at the reference
height and the second one 200 meters above:
>>> from hydpy.models.hland import *
>>> parameterstep('1d')
>>> nmbzones(2)
>>> zrelt(2.0)
>>> zonez(2.0, 4.0)
Applying the usual temperature lapse rate of 0.6°C/100m does
not affect the temperature of the first zone but reduces the
temperature of the second zone by 1.2°C:
>>> tcalt(0.6)
>>> inputs.t = 5.0
>>> model.calc_tc_v1()
>>> fluxes.tc
tc(5.0, 3.8)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nmbzones):
flu.tc[k] = inp.t-con.tcalt[k]*(con.zonez[k]-con.zrelt) |
def stop_button_click_handler(self):
"""Method to handle what to do when the stop button is pressed"""
self.stop_button.setDisabled(True)
# Interrupt computations or stop debugging
if not self.shellwidget._reading:
self.interrupt_kernel()
else:
self.shellwidget.write_to_stdin('exit') | Method to handle what to do when the stop button is pressed | Below is the the instruction that describes the task:
### Input:
Method to handle what to do when the stop button is pressed
### Response:
def stop_button_click_handler(self):
"""Method to handle what to do when the stop button is pressed"""
self.stop_button.setDisabled(True)
# Interrupt computations or stop debugging
if not self.shellwidget._reading:
self.interrupt_kernel()
else:
self.shellwidget.write_to_stdin('exit') |
def polyder(c, m=1, scl=1, axis=0):
'''not quite a copy of numpy's version because this was faster to implement.
'''
c = list(c)
cnt = int(m)
if cnt == 0:
return c
n = len(c)
if cnt >= n:
c = c[:1]*0
else:
for i in range(cnt):
n = n - 1
c *= scl
der = [0.0 for _ in range(n)]
for j in range(n, 0, -1):
der[j - 1] = j*c[j]
c = der
return c | not quite a copy of numpy's version because this was faster to implement. | Below is the the instruction that describes the task:
### Input:
not quite a copy of numpy's version because this was faster to implement.
### Response:
def polyder(c, m=1, scl=1, axis=0):
'''not quite a copy of numpy's version because this was faster to implement.
'''
c = list(c)
cnt = int(m)
if cnt == 0:
return c
n = len(c)
if cnt >= n:
c = c[:1]*0
else:
for i in range(cnt):
n = n - 1
c *= scl
der = [0.0 for _ in range(n)]
for j in range(n, 0, -1):
der[j - 1] = j*c[j]
c = der
return c |
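A worked example with the default scl=1 (the only scaling the list arithmetic above really supports): [1, 2, 3] encodes 1 + 2x + 3x², so the first derivative comes back as [2, 6] and the second as [6].
coeffs = [1, 2, 3]            # 1 + 2*x + 3*x**2
print(polyder(coeffs))        # [2, 6]  ->  2 + 6*x
print(polyder(coeffs, m=2))   # [6]     ->  constant second derivative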
def content(self):
"""Content of the response, in bytes or unicode
(if available).
"""
if self._content is not None:
return self._content
if self._content_consumed:
raise RuntimeError('The content for this response was '
'already consumed')
# Read the contents.
try:
self._content = self.raw.read()
except AttributeError:
return None
# Decode GZip'd content.
if 'gzip' in self.headers.get('content-encoding', ''):
try:
self._content = decode_gzip(self._content)
except zlib.error:
pass
# Decode unicode content.
if self.config.get('decode_unicode'):
self._content = get_unicode_from_response(self)
self._content_consumed = True
return self._content | Content of the response, in bytes or unicode
(if available). | Below is the the instruction that describes the task:
### Input:
Content of the response, in bytes or unicode
(if available).
### Response:
def content(self):
"""Content of the response, in bytes or unicode
(if available).
"""
if self._content is not None:
return self._content
if self._content_consumed:
raise RuntimeError('The content for this response was '
'already consumed')
# Read the contents.
try:
self._content = self.raw.read()
except AttributeError:
return None
# Decode GZip'd content.
if 'gzip' in self.headers.get('content-encoding', ''):
try:
self._content = decode_gzip(self._content)
except zlib.error:
pass
# Decode unicode content.
if self.config.get('decode_unicode'):
self._content = get_unicode_from_response(self)
self._content_consumed = True
return self._content |
def remove_empty(rec):
""" Deletes sequences that were marked for deletion by convert_to_IUPAC """
for header, sequence in rec.mapping.items():
if all(char == 'X' for char in sequence):
rec.headers.remove(header)
rec.sequences.remove(sequence)
rec.update()
return rec | Deletes sequences that were marked for deletion by convert_to_IUPAC | Below is the the instruction that describes the task:
### Input:
Deletes sequences that were marked for deletion by convert_to_IUPAC
### Response:
def remove_empty(rec):
""" Deletes sequences that were marked for deletion by convert_to_IUPAC """
for header, sequence in rec.mapping.items():
if all(char == 'X' for char in sequence):
rec.headers.remove(header)
rec.sequences.remove(sequence)
rec.update()
return rec |
def on_step_end(self, step, logs):
""" Update progression bar at the end of each step """
if self.info_names is None:
self.info_names = logs['info'].keys()
values = [('reward', logs['reward'])]
if KERAS_VERSION > '2.1.3':
self.progbar.update((self.step % self.interval) + 1, values=values)
else:
self.progbar.update((self.step % self.interval) + 1, values=values, force=True)
self.step += 1
self.metrics.append(logs['metrics'])
if len(self.info_names) > 0:
self.infos.append([logs['info'][k] for k in self.info_names]) | Update progression bar at the end of each step | Below is the the instruction that describes the task:
### Input:
Update progression bar at the end of each step
### Response:
def on_step_end(self, step, logs):
""" Update progression bar at the end of each step """
if self.info_names is None:
self.info_names = logs['info'].keys()
values = [('reward', logs['reward'])]
if KERAS_VERSION > '2.1.3':
self.progbar.update((self.step % self.interval) + 1, values=values)
else:
self.progbar.update((self.step % self.interval) + 1, values=values, force=True)
self.step += 1
self.metrics.append(logs['metrics'])
if len(self.info_names) > 0:
self.infos.append([logs['info'][k] for k in self.info_names]) |
def download_highlights(self,
user: Union[int, Profile],
fast_update: bool = False,
filename_target: Optional[str] = None,
storyitem_filter: Optional[Callable[[StoryItem], bool]] = None) -> None:
"""
Download available highlights from a user whose ID is given.
To use this, one needs to be logged in.
.. versionadded:: 4.1
:param user: ID or Profile of the user whose highlights should get downloaded.
:param fast_update: If true, abort when first already-downloaded picture is encountered
:param filename_target: Replacement for {target} in dirname_pattern and filename_pattern
or None if profile name and the highlights' titles should be used instead
:param storyitem_filter: function(storyitem), which returns True if given StoryItem should be downloaded
"""
for user_highlight in self.get_highlights(user):
name = user_highlight.owner_username
self.context.log("Retrieving highlights \"{}\" from profile {}".format(user_highlight.title, name))
totalcount = user_highlight.itemcount
count = 1
for item in user_highlight.get_items():
if storyitem_filter is not None and not storyitem_filter(item):
self.context.log("<{} skipped>".format(item), flush=True)
continue
self.context.log("[%3i/%3i] " % (count, totalcount), end="", flush=True)
count += 1
with self.context.error_catcher('Download highlights \"{}\" from user {}'.format(user_highlight.title, name)):
downloaded = self.download_storyitem(item, filename_target
if filename_target
else '{}/{}'.format(name, user_highlight.title))
if fast_update and not downloaded:
break | Download available highlights from a user whose ID is given.
To use this, one needs to be logged in.
.. versionadded:: 4.1
:param user: ID or Profile of the user whose highlights should get downloaded.
:param fast_update: If true, abort when first already-downloaded picture is encountered
:param filename_target: Replacement for {target} in dirname_pattern and filename_pattern
or None if profile name and the highlights' titles should be used instead
:param storyitem_filter: function(storyitem), which returns True if given StoryItem should be downloaded | Below is the the instruction that describes the task:
### Input:
Download available highlights from a user whose ID is given.
To use this, one needs to be logged in.
.. versionadded:: 4.1
:param user: ID or Profile of the user whose highlights should get downloaded.
:param fast_update: If true, abort when first already-downloaded picture is encountered
:param filename_target: Replacement for {target} in dirname_pattern and filename_pattern
or None if profile name and the highlights' titles should be used instead
:param storyitem_filter: function(storyitem), which returns True if given StoryItem should be downloaded
### Response:
def download_highlights(self,
user: Union[int, Profile],
fast_update: bool = False,
filename_target: Optional[str] = None,
storyitem_filter: Optional[Callable[[StoryItem], bool]] = None) -> None:
"""
Download available highlights from a user whose ID is given.
To use this, one needs to be logged in.
.. versionadded:: 4.1
:param user: ID or Profile of the user whose highlights should get downloaded.
:param fast_update: If true, abort when first already-downloaded picture is encountered
:param filename_target: Replacement for {target} in dirname_pattern and filename_pattern
or None if profile name and the highlights' titles should be used instead
:param storyitem_filter: function(storyitem), which returns True if given StoryItem should be downloaded
"""
for user_highlight in self.get_highlights(user):
name = user_highlight.owner_username
self.context.log("Retrieving highlights \"{}\" from profile {}".format(user_highlight.title, name))
totalcount = user_highlight.itemcount
count = 1
for item in user_highlight.get_items():
if storyitem_filter is not None and not storyitem_filter(item):
self.context.log("<{} skipped>".format(item), flush=True)
continue
self.context.log("[%3i/%3i] " % (count, totalcount), end="", flush=True)
count += 1
with self.context.error_catcher('Download highlights \"{}\" from user {}'.format(user_highlight.title, name)):
downloaded = self.download_storyitem(item, filename_target
if filename_target
else '{}/{}'.format(name, user_highlight.title))
if fast_update and not downloaded:
break |
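Usage sketch with the public Instaloader API; the username and credentials are placeholders, and either a Profile object or a raw user ID can be passed as the user argument.
import instaloader
L = instaloader.Instaloader()
L.login('my_username', 'my_password')    # highlights require a logged-in session
profile = instaloader.Profile.from_username(L.context, 'natgeo')
L.download_highlights(profile, fast_update=True)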
def remove(self, task_cls):
""" Remove task from the storage. If task class are stored multiple times
(if :attr:`.WTaskRegistryStorage.__multiple_tasks_per_tag__` is True) - removes all of them.
:param task_cls: task to remove
:return: None
"""
registry_tag = task_cls.__registry_tag__
if registry_tag in self.__registry.keys():
self.__registry[registry_tag] = \
list(filter(lambda x: x != task_cls, self.__registry[registry_tag]))
if len(self.__registry[registry_tag]) == 0:
				self.__registry.pop(registry_tag) | Remove a task from the storage. If a task class is stored multiple times
(if :attr:`.WTaskRegistryStorage.__multiple_tasks_per_tag__` is True) - removes all of them.
:param task_cls: task to remove
:return: None | Below is the the instruction that describes the task:
### Input:
Remove a task from the storage. If a task class is stored multiple times
(if :attr:`.WTaskRegistryStorage.__multiple_tasks_per_tag__` is True) - removes all of them.
:param task_cls: task to remove
:return: None
### Response:
def remove(self, task_cls):
""" Remove task from the storage. If task class are stored multiple times
(if :attr:`.WTaskRegistryStorage.__multiple_tasks_per_tag__` is True) - removes all of them.
:param task_cls: task to remove
:return: None
"""
registry_tag = task_cls.__registry_tag__
if registry_tag in self.__registry.keys():
self.__registry[registry_tag] = \
list(filter(lambda x: x != task_cls, self.__registry[registry_tag]))
if len(self.__registry[registry_tag]) == 0:
self.__registry.pop(registry_tag) |
def service_delete(service_id=None, name=None, profile=None, **connection_args):
'''
Delete a service from Keystone service catalog
CLI Examples:
.. code-block:: bash
salt '*' keystone.service_delete c965f79c4f864eaaa9c3b41904e67082
salt '*' keystone.service_delete name=nova
'''
kstone = auth(profile, **connection_args)
if name:
service_id = service_get(name=name, profile=profile,
**connection_args)[name]['id']
kstone.services.delete(service_id)
return 'Keystone service ID "{0}" deleted'.format(service_id) | Delete a service from Keystone service catalog
CLI Examples:
.. code-block:: bash
salt '*' keystone.service_delete c965f79c4f864eaaa9c3b41904e67082
salt '*' keystone.service_delete name=nova | Below is the the instruction that describes the task:
### Input:
Delete a service from Keystone service catalog
CLI Examples:
.. code-block:: bash
salt '*' keystone.service_delete c965f79c4f864eaaa9c3b41904e67082
salt '*' keystone.service_delete name=nova
### Response:
def service_delete(service_id=None, name=None, profile=None, **connection_args):
'''
Delete a service from Keystone service catalog
CLI Examples:
.. code-block:: bash
salt '*' keystone.service_delete c965f79c4f864eaaa9c3b41904e67082
salt '*' keystone.service_delete name=nova
'''
kstone = auth(profile, **connection_args)
if name:
service_id = service_get(name=name, profile=profile,
**connection_args)[name]['id']
kstone.services.delete(service_id)
return 'Keystone service ID "{0}" deleted'.format(service_id) |
def _get_rate(self, mag):
"""
Calculate and return the annual occurrence rate for a specific bin.
:param mag:
Magnitude value corresponding to the center of the bin of interest.
:returns:
Float number, the annual occurrence rate for the :param mag value.
"""
mag_lo = mag - self.bin_width / 2.0
mag_hi = mag + self.bin_width / 2.0
if mag >= self.min_mag and mag < self.char_mag - DELTA_CHAR / 2:
# return rate according to exponential distribution
return (10 ** (self.a_val - self.b_val * mag_lo)
- 10 ** (self.a_val - self.b_val * mag_hi))
else:
# return characteristic rate (distributed over the characteristic
# range) for the given bin width
return (self.char_rate / DELTA_CHAR) * self.bin_width | Calculate and return the annual occurrence rate for a specific bin.
:param mag:
Magnitude value corresponding to the center of the bin of interest.
:returns:
Float number, the annual occurrence rate for the :param mag value. | Below is the the instruction that describes the task:
### Input:
Calculate and return the annual occurrence rate for a specific bin.
:param mag:
Magnitude value corresponding to the center of the bin of interest.
:returns:
Float number, the annual occurrence rate for the :param mag value.
### Response:
def _get_rate(self, mag):
"""
Calculate and return the annual occurrence rate for a specific bin.
:param mag:
Magnitude value corresponding to the center of the bin of interest.
:returns:
Float number, the annual occurrence rate for the :param mag value.
"""
mag_lo = mag - self.bin_width / 2.0
mag_hi = mag + self.bin_width / 2.0
if mag >= self.min_mag and mag < self.char_mag - DELTA_CHAR / 2:
# return rate according to exponential distribution
return (10 ** (self.a_val - self.b_val * mag_lo)
- 10 ** (self.a_val - self.b_val * mag_hi))
else:
# return characteristic rate (distributed over the characteristic
# range) for the given bin width
return (self.char_rate / DELTA_CHAR) * self.bin_width |
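A quick numeric check of the exponential branch with made-up parameters (a=4.0, b=1.0, 0.1-wide bin centred on magnitude 5.0, below the characteristic range):
a_val, b_val, bin_width, mag = 4.0, 1.0, 0.1, 5.0
mag_lo, mag_hi = mag - bin_width / 2.0, mag + bin_width / 2.0
rate = 10 ** (a_val - b_val * mag_lo) - 10 ** (a_val - b_val * mag_hi)
print(round(rate, 4))   # ~0.0231 occurrences per year in this bin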
def overall_serovar_call(serovar_prediction, antigen_predictor):
"""
Predict serovar from cgMLST cluster membership analysis and antigen BLAST results.
SerovarPrediction object is assigned H1, H2 and Serogroup from the antigen BLAST results.
Antigen BLAST results will predict a particular serovar or list of serovars, however,
the cgMLST membership may be able to help narrow down the list of potential serovars.
Notes:
If the cgMLST predicted serovar is within the list of antigen BLAST predicted serovars,
then the serovar is assigned the cgMLST predicted serovar.
If all antigens are found, but an antigen serovar is not found then the serovar is assigned
a pseudo-antigenic formula (Serogroup:H1:H2), otherwise the serovar is assigned the cgMLST prediction.
If the antigen predicted serovar does not match the cgMLST predicted serovar,
- the serovar is the cgMLST serovar if the cgMLST cluster level is <= 0.1 (10% or less)
- otherwise, the serovar is antigen predicted serovar(s)
Args:
serovar_prediction (src.serovar_prediction.SerovarPrediction): Serovar prediction results (antigen+cgMLST[+Mash])
antigen_predictor (src.serovar_prediction.SerovarPredictor): Antigen search results
Returns:
src.serovar_prediction.SerovarPrediction: Serovar prediction results with overall prediction from antigen + cgMLST
"""
assert isinstance(serovar_prediction, SerovarPrediction)
assert isinstance(antigen_predictor, SerovarPredictor)
h1 = antigen_predictor.h1
h2 = antigen_predictor.h2
sg = antigen_predictor.serogroup
spp = serovar_prediction.cgmlst_subspecies
if spp is None:
if 'mash_match' in serovar_prediction.__dict__:
spp = serovar_prediction.__dict__['mash_subspecies']
serovar_prediction.serovar_antigen = antigen_predictor.serovar
cgmlst_serovar = serovar_prediction.serovar_cgmlst
cgmlst_distance = float(serovar_prediction.cgmlst_distance)
null_result = '-:-:-'
try:
spp_roman = spp_name_to_roman[spp]
except:
spp_roman = None
is_antigen_null = lambda x: (x is None or x == '' or x == '-')
if antigen_predictor.serovar is None:
if is_antigen_null(sg) and is_antigen_null(h1) and is_antigen_null(h2):
if spp_roman is not None:
serovar_prediction.serovar = '{} {}:{}:{}'.format(spp_roman, sg, h1, h2)
else:
                serovar_prediction.serovar = '{}:{}:{}'.format(sg, h1, h2)
elif cgmlst_serovar is not None and cgmlst_distance <= CGMLST_DISTANCE_THRESHOLD:
serovar_prediction.serovar = cgmlst_serovar
else:
serovar_prediction.serovar = null_result
if 'mash_match' in serovar_prediction.__dict__:
spd = serovar_prediction.__dict__
mash_dist = float(spd['mash_distance'])
if mash_dist <= MASH_DISTANCE_THRESHOLD:
serovar_prediction.serovar = spd['mash_serovar']
else:
serovars_from_antigen = antigen_predictor.serovar.split('|')
if not isinstance(serovars_from_antigen, list):
serovars_from_antigen = [serovars_from_antigen]
if cgmlst_serovar is not None:
if cgmlst_serovar in serovars_from_antigen:
serovar_prediction.serovar = cgmlst_serovar
else:
if float(cgmlst_distance) <= CGMLST_DISTANCE_THRESHOLD:
serovar_prediction.serovar = cgmlst_serovar
elif 'mash_match' in serovar_prediction.__dict__:
spd = serovar_prediction.__dict__
mash_serovar = spd['mash_serovar']
mash_dist = float(spd['mash_distance'])
if mash_serovar in serovars_from_antigen:
serovar_prediction.serovar = mash_serovar
else:
if mash_dist <= MASH_DISTANCE_THRESHOLD:
serovar_prediction.serovar = mash_serovar
if serovar_prediction.serovar is None:
serovar_prediction.serovar = serovar_prediction.serovar_antigen
if serovar_prediction.h1 is None:
serovar_prediction.h1 = '-'
if serovar_prediction.h2 is None:
serovar_prediction.h2 = '-'
if serovar_prediction.serogroup is None:
serovar_prediction.serogroup = '-'
if serovar_prediction.serovar_antigen is None:
if spp_roman is not None:
serovar_prediction.serovar_antigen = '{} -:-:-'.format(spp_roman)
else:
serovar_prediction.serovar_antigen = '-:-:-'
if serovar_prediction.serovar is None:
serovar_prediction.serovar = serovar_prediction.serovar_antigen
return serovar_prediction | Predict serovar from cgMLST cluster membership analysis and antigen BLAST results.
SerovarPrediction object is assigned H1, H2 and Serogroup from the antigen BLAST results.
Antigen BLAST results will predict a particular serovar or list of serovars, however,
the cgMLST membership may be able to help narrow down the list of potential serovars.
Notes:
If the cgMLST predicted serovar is within the list of antigen BLAST predicted serovars,
then the serovar is assigned the cgMLST predicted serovar.
If all antigens are found, but an antigen serovar is not found then the serovar is assigned
a pseudo-antigenic formula (Serogroup:H1:H2), otherwise the serovar is assigned the cgMLST prediction.
If the antigen predicted serovar does not match the cgMLST predicted serovar,
- the serovar is the cgMLST serovar if the cgMLST cluster level is <= 0.1 (10% or less)
- otherwise, the serovar is antigen predicted serovar(s)
Args:
serovar_prediction (src.serovar_prediction.SerovarPrediction): Serovar prediction results (antigen+cgMLST[+Mash])
antigen_predictor (src.serovar_prediction.SerovarPredictor): Antigen search results
Returns:
src.serovar_prediction.SerovarPrediction: Serovar prediction results with overall prediction from antigen + cgMLST | Below is the the instruction that describes the task:
### Input:
Predict serovar from cgMLST cluster membership analysis and antigen BLAST results.
SerovarPrediction object is assigned H1, H2 and Serogroup from the antigen BLAST results.
Antigen BLAST results will predict a particular serovar or list of serovars, however,
the cgMLST membership may be able to help narrow down the list of potential serovars.
Notes:
If the cgMLST predicted serovar is within the list of antigen BLAST predicted serovars,
then the serovar is assigned the cgMLST predicted serovar.
If all antigens are found, but an antigen serovar is not found then the serovar is assigned
a pseudo-antigenic formula (Serogroup:H1:H2), otherwise the serovar is assigned the cgMLST prediction.
If the antigen predicted serovar does not match the cgMLST predicted serovar,
- the serovar is the cgMLST serovar if the cgMLST cluster level is <= 0.1 (10% or less)
- otherwise, the serovar is antigen predicted serovar(s)
Args:
serovar_prediction (src.serovar_prediction.SerovarPrediction): Serovar prediction results (antigen+cgMLST[+Mash])
antigen_predictor (src.serovar_prediction.SerovarPredictor): Antigen search results
Returns:
src.serovar_prediction.SerovarPrediction: Serovar prediction results with overall prediction from antigen + cgMLST
### Response:
def overall_serovar_call(serovar_prediction, antigen_predictor):
"""
Predict serovar from cgMLST cluster membership analysis and antigen BLAST results.
SerovarPrediction object is assigned H1, H2 and Serogroup from the antigen BLAST results.
Antigen BLAST results will predict a particular serovar or list of serovars, however,
the cgMLST membership may be able to help narrow down the list of potential serovars.
Notes:
If the cgMLST predicted serovar is within the list of antigen BLAST predicted serovars,
then the serovar is assigned the cgMLST predicted serovar.
If all antigens are found, but an antigen serovar is not found then the serovar is assigned
a pseudo-antigenic formula (Serogroup:H1:H2), otherwise the serovar is assigned the cgMLST prediction.
If the antigen predicted serovar does not match the cgMLST predicted serovar,
- the serovar is the cgMLST serovar if the cgMLST cluster level is <= 0.1 (10% or less)
- otherwise, the serovar is antigen predicted serovar(s)
Args:
serovar_prediction (src.serovar_prediction.SerovarPrediction): Serovar prediction results (antigen+cgMLST[+Mash])
antigen_predictor (src.serovar_prediction.SerovarPredictor): Antigen search results
Returns:
src.serovar_prediction.SerovarPrediction: Serovar prediction results with overall prediction from antigen + cgMLST
"""
assert isinstance(serovar_prediction, SerovarPrediction)
assert isinstance(antigen_predictor, SerovarPredictor)
h1 = antigen_predictor.h1
h2 = antigen_predictor.h2
sg = antigen_predictor.serogroup
spp = serovar_prediction.cgmlst_subspecies
if spp is None:
if 'mash_match' in serovar_prediction.__dict__:
spp = serovar_prediction.__dict__['mash_subspecies']
serovar_prediction.serovar_antigen = antigen_predictor.serovar
cgmlst_serovar = serovar_prediction.serovar_cgmlst
cgmlst_distance = float(serovar_prediction.cgmlst_distance)
null_result = '-:-:-'
try:
spp_roman = spp_name_to_roman[spp]
except:
spp_roman = None
is_antigen_null = lambda x: (x is None or x == '' or x == '-')
if antigen_predictor.serovar is None:
if is_antigen_null(sg) and is_antigen_null(h1) and is_antigen_null(h2):
if spp_roman is not None:
serovar_prediction.serovar = '{} {}:{}:{}'.format(spp_roman, sg, h1, h2)
else:
                serovar_prediction.serovar = '{}:{}:{}'.format(sg, h1, h2)
elif cgmlst_serovar is not None and cgmlst_distance <= CGMLST_DISTANCE_THRESHOLD:
serovar_prediction.serovar = cgmlst_serovar
else:
serovar_prediction.serovar = null_result
if 'mash_match' in serovar_prediction.__dict__:
spd = serovar_prediction.__dict__
mash_dist = float(spd['mash_distance'])
if mash_dist <= MASH_DISTANCE_THRESHOLD:
serovar_prediction.serovar = spd['mash_serovar']
else:
serovars_from_antigen = antigen_predictor.serovar.split('|')
if not isinstance(serovars_from_antigen, list):
serovars_from_antigen = [serovars_from_antigen]
if cgmlst_serovar is not None:
if cgmlst_serovar in serovars_from_antigen:
serovar_prediction.serovar = cgmlst_serovar
else:
if float(cgmlst_distance) <= CGMLST_DISTANCE_THRESHOLD:
serovar_prediction.serovar = cgmlst_serovar
elif 'mash_match' in serovar_prediction.__dict__:
spd = serovar_prediction.__dict__
mash_serovar = spd['mash_serovar']
mash_dist = float(spd['mash_distance'])
if mash_serovar in serovars_from_antigen:
serovar_prediction.serovar = mash_serovar
else:
if mash_dist <= MASH_DISTANCE_THRESHOLD:
serovar_prediction.serovar = mash_serovar
if serovar_prediction.serovar is None:
serovar_prediction.serovar = serovar_prediction.serovar_antigen
if serovar_prediction.h1 is None:
serovar_prediction.h1 = '-'
if serovar_prediction.h2 is None:
serovar_prediction.h2 = '-'
if serovar_prediction.serogroup is None:
serovar_prediction.serogroup = '-'
if serovar_prediction.serovar_antigen is None:
if spp_roman is not None:
serovar_prediction.serovar_antigen = '{} -:-:-'.format(spp_roman)
else:
serovar_prediction.serovar_antigen = '-:-:-'
if serovar_prediction.serovar is None:
serovar_prediction.serovar = serovar_prediction.serovar_antigen
return serovar_prediction |
def _instruction_to_superop(cls, instruction):
"""Convert a QuantumCircuit or Instruction to a SuperOp."""
# Convert circuit to an instruction
if isinstance(instruction, QuantumCircuit):
instruction = instruction.to_instruction()
# Initialize an identity superoperator of the correct size
# of the circuit
op = SuperOp(np.eye(4 ** instruction.num_qubits))
op._append_instruction(instruction)
return op | Convert a QuantumCircuit or Instruction to a SuperOp. | Below is the the instruction that describes the task:
### Input:
Convert a QuantumCircuit or Instruction to a SuperOp.
### Response:
def _instruction_to_superop(cls, instruction):
"""Convert a QuantumCircuit or Instruction to a SuperOp."""
# Convert circuit to an instruction
if isinstance(instruction, QuantumCircuit):
instruction = instruction.to_instruction()
# Initialize an identity superoperator of the correct size
# of the circuit
op = SuperOp(np.eye(4 ** instruction.num_qubits))
op._append_instruction(instruction)
return op |
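In practice the conversion is reached through the SuperOp constructor itself; a minimal sketch with a one-qubit Hadamard circuit, assuming the standard qiskit.quantum_info import paths.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SuperOp
qc = QuantumCircuit(1)
qc.h(0)
op = SuperOp(qc)    # circuit -> instruction -> superoperator, as in the method above
print(op.dim)       # (2, 2): input and output dimensions of the single-qubit channel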
def import_dashboard_config(modules):
"""Imports configuration from all the modules and merges it."""
config = collections.defaultdict(dict)
for module in modules:
for submodule in import_submodules(module).values():
if hasattr(submodule, 'DASHBOARD'):
dashboard = submodule.DASHBOARD
config[dashboard].update(submodule.__dict__)
elif (hasattr(submodule, 'PANEL') or
hasattr(submodule, 'PANEL_GROUP') or
hasattr(submodule, 'FEATURE')):
                # If enabled and local.enabled contains the same filename,
# the file loaded later (i.e., local.enabled) will be used.
name = submodule.__name__.rsplit('.', 1)[1]
config[name] = submodule.__dict__
else:
logging.warning("Skipping %s because it doesn't have DASHBOARD"
", PANEL, PANEL_GROUP, or FEATURE defined.",
submodule.__name__)
return sorted(config.items(),
key=lambda c: c[1]['__name__'].rsplit('.', 1)[1]) | Imports configuration from all the modules and merges it. | Below is the the instruction that describes the task:
### Input:
Imports configuration from all the modules and merges it.
### Response:
def import_dashboard_config(modules):
"""Imports configuration from all the modules and merges it."""
config = collections.defaultdict(dict)
for module in modules:
for submodule in import_submodules(module).values():
if hasattr(submodule, 'DASHBOARD'):
dashboard = submodule.DASHBOARD
config[dashboard].update(submodule.__dict__)
elif (hasattr(submodule, 'PANEL') or
hasattr(submodule, 'PANEL_GROUP') or
hasattr(submodule, 'FEATURE')):
                # If enabled and local.enabled contains the same filename,
# the file loaded later (i.e., local.enabled) will be used.
name = submodule.__name__.rsplit('.', 1)[1]
config[name] = submodule.__dict__
else:
logging.warning("Skipping %s because it doesn't have DASHBOARD"
", PANEL, PANEL_GROUP, or FEATURE defined.",
submodule.__name__)
return sorted(config.items(),
key=lambda c: c[1]['__name__'].rsplit('.', 1)[1]) |
def init_app(self, app):
"""Setup before_request, after_request handlers for tracing.
"""
app.config.setdefault("TRACY_REQUIRE_CLIENT", False)
if not hasattr(app, 'extensions'):
app.extensions = {}
app.extensions['restpoints'] = self
app.before_request(self._before)
app.after_request(self._after) | Setup before_request, after_request handlers for tracing. | Below is the the instruction that describes the task:
### Input:
Setup before_request, after_request handlers for tracing.
### Response:
def init_app(self, app):
"""Setup before_request, after_request handlers for tracing.
"""
app.config.setdefault("TRACY_REQUIRE_CLIENT", False)
if not hasattr(app, 'extensions'):
app.extensions = {}
app.extensions['restpoints'] = self
app.before_request(self._before)
app.after_request(self._after) |
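Typical Flask extension wiring with an application factory; the extension class name (Tracy, suggested by the TRACY_REQUIRE_CLIENT setting) is an assumption.
from flask import Flask

tracy = Tracy()                  # unbound extension instance

def create_app():
    app = Flask(__name__)
    app.config['TRACY_REQUIRE_CLIENT'] = True
    tracy.init_app(app)          # registers the before/after request hooks
    return app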
def _array_type_std_res(self, counts, total, colsum, rowsum):
"""Return ndarray containing standard residuals for array values.
The shape of the return value is the same as that of *counts*.
Array variables require special processing because of the
underlying math. Essentially, it boils down to the fact that the
variable dimensions are mutually independent, and standard residuals
are calculated for each of them separately, and then stacked together
in the resulting array.
"""
if self.mr_dim_ind == 0:
# --This is a special case where broadcasting cannot be
# --automatically done. We need to "inflate" the single dimensional
# --ndarrays, to be able to treat them as "columns" (essentially a
# --Nx1 ndarray). This is needed for subsequent multiplication
# --that needs to happen column wise (rowsum * colsum) / total.
total = total[:, np.newaxis]
rowsum = rowsum[:, np.newaxis]
expected_counts = rowsum * colsum / total
variance = rowsum * colsum * (total - rowsum) * (total - colsum) / total ** 3
return (counts - expected_counts) / np.sqrt(variance) | Return ndarray containing standard residuals for array values.
The shape of the return value is the same as that of *counts*.
Array variables require special processing because of the
underlying math. Essentially, it boils down to the fact that the
variable dimensions are mutually independent, and standard residuals
are calculated for each of them separately, and then stacked together
in the resulting array. | Below is the the instruction that describes the task:
### Input:
Return ndarray containing standard residuals for array values.
The shape of the return value is the same as that of *counts*.
Array variables require special processing because of the
underlying math. Essentially, it boils down to the fact that the
variable dimensions are mutually independent, and standard residuals
are calculated for each of them separately, and then stacked together
in the resulting array.
### Response:
def _array_type_std_res(self, counts, total, colsum, rowsum):
"""Return ndarray containing standard residuals for array values.
The shape of the return value is the same as that of *counts*.
Array variables require special processing because of the
underlying math. Essentially, it boils down to the fact that the
variable dimensions are mutually independent, and standard residuals
are calculated for each of them separately, and then stacked together
in the resulting array.
"""
if self.mr_dim_ind == 0:
# --This is a special case where broadcasting cannot be
# --automatically done. We need to "inflate" the single dimensional
# --ndarrays, to be able to treat them as "columns" (essentially a
# --Nx1 ndarray). This is needed for subsequent multiplication
# --that needs to happen column wise (rowsum * colsum) / total.
total = total[:, np.newaxis]
rowsum = rowsum[:, np.newaxis]
expected_counts = rowsum * colsum / total
variance = rowsum * colsum * (total - rowsum) * (total - colsum) / total ** 3
return (counts - expected_counts) / np.sqrt(variance) |
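The same standard-residual arithmetic applied to a toy 2x3 table, kept outside any class so it runs directly; the explicit [:, np.newaxis] reshape mirrors the mr_dim_ind == 0 branch above. The numbers are made up for illustration.

import numpy as np

counts = np.array([[10., 20., 30.],
                   [15., 25., 35.]])
total = counts.sum()
rowsum = counts.sum(axis=1)[:, np.newaxis]  # "inflate" to a column for row-wise broadcasting
colsum = counts.sum(axis=0)

expected_counts = rowsum * colsum / total
variance = rowsum * colsum * (total - rowsum) * (total - colsum) / total ** 3
std_res = (counts - expected_counts) / np.sqrt(variance)
print(std_res.round(3))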
def close_idle_connections(self, pool_id=None):
"""close idle connections to mongo"""
if not hasattr(self, '_pools'):
return
if pool_id:
if pool_id not in self._pools:
raise ProgrammingError("pool %r does not exist" % pool_id)
else:
pool = self._pools[pool_id]
pool.close()
else:
for pool_id, pool in self._pools.items():
pool.close() | close idle connections to mongo | Below is the the instruction that describes the task:
### Input:
close idle connections to mongo
### Response:
def close_idle_connections(self, pool_id=None):
"""close idle connections to mongo"""
if not hasattr(self, '_pools'):
return
if pool_id:
if pool_id not in self._pools:
raise ProgrammingError("pool %r does not exist" % pool_id)
else:
pool = self._pools[pool_id]
pool.close()
else:
for pool_id, pool in self._pools.items():
pool.close() |
def pos(self):
"""A dictionary of the current walker positions.
If the sampler hasn't been run yet, returns p0.
"""
pos = self._pos
if pos is None:
return self.p0
# convert to dict
pos = {param: self._pos[..., k]
for (k, param) in enumerate(self.sampling_params)}
return pos | A dictionary of the current walker positions.
If the sampler hasn't been run yet, returns p0. | Below is the the instruction that describes the task:
### Input:
A dictionary of the current walker positions.
If the sampler hasn't been run yet, returns p0.
### Response:
def pos(self):
"""A dictionary of the current walker positions.
If the sampler hasn't been run yet, returns p0.
"""
pos = self._pos
if pos is None:
return self.p0
# convert to dict
pos = {param: self._pos[..., k]
for (k, param) in enumerate(self.sampling_params)}
return pos |
def _init_journal(self, permissive=True):
"""Add the initialization lines to the journal.
By default adds JrnObj variable and timestamp to the journal contents.
Args:
permissive (bool): if True most errors in journal will not
cause Revit to stop journal execution.
Some still do.
"""
nowstamp = datetime.now().strftime("%d-%b-%Y %H:%M:%S.%f")[:-3]
self._add_entry(templates.INIT
.format(time_stamp=nowstamp))
if permissive:
self._add_entry(templates.INIT_DEBUG) | Add the initialization lines to the journal.
By default adds JrnObj variable and timestamp to the journal contents.
Args:
permissive (bool): if True most errors in journal will not
cause Revit to stop journal execution.
Some still do. | Below is the the instruction that describes the task:
### Input:
Add the initialization lines to the journal.
By default adds JrnObj variable and timestamp to the journal contents.
Args:
permissive (bool): if True most errors in journal will not
cause Revit to stop journal execution.
Some still do.
### Response:
def _init_journal(self, permissive=True):
"""Add the initialization lines to the journal.
By default adds JrnObj variable and timestamp to the journal contents.
Args:
permissive (bool): if True most errors in journal will not
cause Revit to stop journal execution.
Some still do.
"""
nowstamp = datetime.now().strftime("%d-%b-%Y %H:%M:%S.%f")[:-3]
self._add_entry(templates.INIT
.format(time_stamp=nowstamp))
if permissive:
self._add_entry(templates.INIT_DEBUG) |
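The timestamp formatting used when initialising the journal, shown on its own; slicing off the last three characters trims microseconds down to milliseconds.

from datetime import datetime

stamp = datetime.now().strftime("%d-%b-%Y %H:%M:%S.%f")[:-3]
print(stamp)  # e.g. 07-Mar-2024 14:05:31.042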
def increase(self, ubound=1, top_id=None):
"""
Increases a potential upper bound that can be imposed on the
literals in the sum of an existing :class:`ITotalizer` object to a
new value.
:param ubound: a new upper bound.
:param top_id: a new top variable identifier.
:type ubound: int
:type top_id: integer or None
The top identifier ``top_id`` applied only if it is greater than
the one used in ``self``.
This method creates additional clauses encoding the existing
totalizer tree up to the new upper bound given and appends them to
the list of clauses of :class:`.CNF` ``self.cnf``. The number of
newly created clauses is stored in variable ``self.nof_new``.
Also, a list of bounds ``self.rhs`` gets increased and its length
becomes ``ubound+1``.
The method can be used in the following way:
.. code-block:: python
>>> from pysat.card import ITotalizer
>>> t = ITotalizer(lits=[1, 2, 3], ubound=1)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7]]
>>> print t.rhs
[6, 7]
>>>
>>> t.increase(ubound=2)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7], [-3, -5, 8]]
>>> print t.cnf.clauses[-t.nof_new:]
[[-3, -5, 8]]
>>> print t.rhs
[6, 7, 8]
>>> t.delete()
"""
self.top_id = max(self.top_id, top_id if top_id != None else 0)
# do nothing if the bound is set incorrectly
if ubound <= self.ubound or self.ubound >= len(self.lits):
self.nof_new = 0
return
else:
self.ubound = ubound
# saving default SIGINT handler
def_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_DFL)
# updating the object and adding more variables and clauses
clauses, self.rhs, self.top_id = pycard.itot_inc(self.tobj,
self.ubound, self.top_id)
# recovering default SIGINT handler
def_sigint_handler = signal.signal(signal.SIGINT, def_sigint_handler)
# saving the result
self.cnf.clauses.extend(clauses)
self.cnf.nv = self.top_id
# keeping the number of newly added clauses
self.nof_new = len(clauses) | Increases a potential upper bound that can be imposed on the
literals in the sum of an existing :class:`ITotalizer` object to a
new value.
:param ubound: a new upper bound.
:param top_id: a new top variable identifier.
:type ubound: int
:type top_id: integer or None
The top identifier ``top_id`` applied only if it is greater than
the one used in ``self``.
This method creates additional clauses encoding the existing
totalizer tree up to the new upper bound given and appends them to
the list of clauses of :class:`.CNF` ``self.cnf``. The number of
newly created clauses is stored in variable ``self.nof_new``.
Also, a list of bounds ``self.rhs`` gets increased and its length
becomes ``ubound+1``.
The method can be used in the following way:
.. code-block:: python
>>> from pysat.card import ITotalizer
>>> t = ITotalizer(lits=[1, 2, 3], ubound=1)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7]]
>>> print t.rhs
[6, 7]
>>>
>>> t.increase(ubound=2)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7], [-3, -5, 8]]
>>> print t.cnf.clauses[-t.nof_new:]
[[-3, -5, 8]]
>>> print t.rhs
[6, 7, 8]
>>> t.delete() | Below is the the instruction that describes the task:
### Input:
Increases a potential upper bound that can be imposed on the
literals in the sum of an existing :class:`ITotalizer` object to a
new value.
:param ubound: a new upper bound.
:param top_id: a new top variable identifier.
:type ubound: int
:type top_id: integer or None
The top identifier ``top_id`` applied only if it is greater than
the one used in ``self``.
This method creates additional clauses encoding the existing
totalizer tree up to the new upper bound given and appends them to
the list of clauses of :class:`.CNF` ``self.cnf``. The number of
newly created clauses is stored in variable ``self.nof_new``.
Also, a list of bounds ``self.rhs`` gets increased and its length
becomes ``ubound+1``.
The method can be used in the following way:
.. code-block:: python
>>> from pysat.card import ITotalizer
>>> t = ITotalizer(lits=[1, 2, 3], ubound=1)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7]]
>>> print t.rhs
[6, 7]
>>>
>>> t.increase(ubound=2)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7], [-3, -5, 8]]
>>> print t.cnf.clauses[-t.nof_new:]
[[-3, -5, 8]]
>>> print t.rhs
[6, 7, 8]
>>> t.delete()
### Response:
def increase(self, ubound=1, top_id=None):
"""
Increases a potential upper bound that can be imposed on the
literals in the sum of an existing :class:`ITotalizer` object to a
new value.
:param ubound: a new upper bound.
:param top_id: a new top variable identifier.
:type ubound: int
:type top_id: integer or None
The top identifier ``top_id`` applied only if it is greater than
the one used in ``self``.
This method creates additional clauses encoding the existing
totalizer tree up to the new upper bound given and appends them to
the list of clauses of :class:`.CNF` ``self.cnf``. The number of
newly created clauses is stored in variable ``self.nof_new``.
Also, a list of bounds ``self.rhs`` gets increased and its length
becomes ``ubound+1``.
The method can be used in the following way:
.. code-block:: python
>>> from pysat.card import ITotalizer
>>> t = ITotalizer(lits=[1, 2, 3], ubound=1)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7]]
>>> print t.rhs
[6, 7]
>>>
>>> t.increase(ubound=2)
>>> print t.cnf.clauses
[[-2, 4], [-1, 4], [-1, -2, 5], [-4, 6], [-5, 7], [-3, 6], [-3, -4, 7], [-3, -5, 8]]
>>> print t.cnf.clauses[-t.nof_new:]
[[-3, -5, 8]]
>>> print t.rhs
[6, 7, 8]
>>> t.delete()
"""
self.top_id = max(self.top_id, top_id if top_id != None else 0)
# do nothing if the bound is set incorrectly
if ubound <= self.ubound or self.ubound >= len(self.lits):
self.nof_new = 0
return
else:
self.ubound = ubound
# saving default SIGINT handler
def_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_DFL)
# updating the object and adding more variables and clauses
clauses, self.rhs, self.top_id = pycard.itot_inc(self.tobj,
self.ubound, self.top_id)
# recovering default SIGINT handler
def_sigint_handler = signal.signal(signal.SIGINT, def_sigint_handler)
# saving the result
self.cnf.clauses.extend(clauses)
self.cnf.nv = self.top_id
# keeping the number of newly added clauses
self.nof_new = len(clauses) |
def _record2card(self, record):
"""
when we add new records they don't have a card,
this sort of fakes it up similar to what cfitsio
does, just for display purposes. e.g.
DBL = 23.299843
LNG = 3423432
KEYSNC = 'hello '
KEYSC = 'hello ' / a comment for string
KEYDC = 3.14159265358979 / a comment for pi
KEYLC = 323423432 / a comment for long
basically,
- 8 chars, left aligned, for the keyword name
- a space
- 20 chars for value, left aligned for strings, right aligned for
numbers
- if there is a comment, one space followed by / then another space
then the comment out to 80 chars
"""
name = record['name']
value = record['value']
v_isstring = isstring(value)
if name == 'COMMENT':
# card = 'COMMENT %s' % value
card = 'COMMENT %s' % value
elif name == 'CONTINUE':
card = 'CONTINUE %s' % value
elif name == 'HISTORY':
card = 'HISTORY %s' % value
else:
if len(name) > 8:
card = 'HIERARCH %s= ' % name
else:
card = '%-8s= ' % name[0:8]
# these may be string representations of data, or actual strings
if v_isstring:
value = str(value)
if len(value) > 0:
if value[0] != "'":
# this is a string representing a string header field
# make it look like it will look in the header
value = "'" + value + "'"
vstr = '%-20s' % value
else:
vstr = "%20s" % value
else:
vstr = "''"
else:
vstr = '%20s' % value
card += vstr
if 'comment' in record:
card += ' / %s' % record['comment']
if v_isstring and len(card) > 80:
card = card[0:79] + "'"
else:
card = card[0:80]
return card | when we add new records they don't have a card,
this sort of fakes it up similar to what cfitsio
does, just for display purposes. e.g.
DBL = 23.299843
LNG = 3423432
KEYSNC = 'hello '
KEYSC = 'hello ' / a comment for string
KEYDC = 3.14159265358979 / a comment for pi
KEYLC = 323423432 / a comment for long
basically,
- 8 chars, left aligned, for the keyword name
- a space
- 20 chars for value, left aligned for strings, right aligned for
numbers
- if there is a comment, one space followed by / then another space
then the comment out to 80 chars | Below is the the instruction that describes the task:
### Input:
when we add new records they don't have a card,
this sort of fakes it up similar to what cfitsio
does, just for display purposes. e.g.
DBL = 23.299843
LNG = 3423432
KEYSNC = 'hello '
KEYSC = 'hello ' / a comment for string
KEYDC = 3.14159265358979 / a comment for pi
KEYLC = 323423432 / a comment for long
basically,
- 8 chars, left aligned, for the keyword name
- a space
- 20 chars for value, left aligned for strings, right aligned for
numbers
- if there is a comment, one space followed by / then another space
then the comment out to 80 chars
### Response:
def _record2card(self, record):
"""
when we add new records they don't have a card,
this sort of fakes it up similar to what cfitsio
does, just for display purposes. e.g.
DBL = 23.299843
LNG = 3423432
KEYSNC = 'hello '
KEYSC = 'hello ' / a comment for string
KEYDC = 3.14159265358979 / a comment for pi
KEYLC = 323423432 / a comment for long
basically,
- 8 chars, left aligned, for the keyword name
- a space
- 20 chars for value, left aligned for strings, right aligned for
numbers
- if there is a comment, one space followed by / then another space
then the comment out to 80 chars
"""
name = record['name']
value = record['value']
v_isstring = isstring(value)
if name == 'COMMENT':
# card = 'COMMENT %s' % value
card = 'COMMENT %s' % value
elif name == 'CONTINUE':
card = 'CONTINUE %s' % value
elif name == 'HISTORY':
card = 'HISTORY %s' % value
else:
if len(name) > 8:
card = 'HIERARCH %s= ' % name
else:
card = '%-8s= ' % name[0:8]
# these may be string representations of data, or actual strings
if v_isstring:
value = str(value)
if len(value) > 0:
if value[0] != "'":
# this is a string representing a string header field
# make it look like it will look in the header
value = "'" + value + "'"
vstr = '%-20s' % value
else:
vstr = "%20s" % value
else:
vstr = "''"
else:
vstr = '%20s' % value
card += vstr
if 'comment' in record:
card += ' / %s' % record['comment']
if v_isstring and len(card) > 80:
card = card[0:79] + "'"
else:
card = card[0:80]
return card |
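A standalone sketch of the card layout rules described above: 8-char keyword, '= ', a 20-char value field that is left-aligned for strings and right-aligned for numbers, an optional ' / comment', truncated to 80 columns. format_card is a hypothetical helper added for illustration, not part of the original class.

def format_card(name, value, comment=None):
    card = '%-8s= ' % name[:8]
    if isinstance(value, str):
        card += '%-20s' % ("'" + value + "'")   # strings keep their quotes, left aligned
    else:
        card += '%20s' % value                  # numbers are right aligned
    if comment is not None:
        card += ' / %s' % comment
    return card[:80]

print(format_card('KEYDC', 3.14159265358979, 'a comment for pi'))
print(format_card('KEYSC', 'hello', 'a comment for string'))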
def obfuscate(cls, idStr):
"""
Mildly obfuscates the specified ID string in an easily reversible
fashion. This is not intended for security purposes, but rather to
dissuade users from depending on our internal ID structures.
"""
return unicode(base64.urlsafe_b64encode(
idStr.encode('utf-8')).replace(b'=', b'')) | Mildly obfuscates the specified ID string in an easily reversible
fashion. This is not intended for security purposes, but rather to
dissuade users from depending on our internal ID structures. | Below is the the instruction that describes the task:
### Input:
Mildly obfuscates the specified ID string in an easily reversible
fashion. This is not intended for security purposes, but rather to
dissuade users from depending on our internal ID structures.
### Response:
def obfuscate(cls, idStr):
"""
Mildly obfuscates the specified ID string in an easily reversible
fashion. This is not intended for security purposes, but rather to
dissuade users from depending on our internal ID structures.
"""
return unicode(base64.urlsafe_b64encode(
idStr.encode('utf-8')).replace(b'=', b'')) |
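A Python 3 round trip for the same scheme (the record above targets Python 2, hence unicode). The deobfuscate helper and the sample ID are assumptions added for illustration; it restores the '=' padding that obfuscate strips.

import base64

def obfuscate(id_str):
    return base64.urlsafe_b64encode(id_str.encode('utf-8')).replace(b'=', b'').decode('ascii')

def deobfuscate(token):
    padded = token + '=' * (-len(token) % 4)   # restore the stripped padding
    return base64.urlsafe_b64decode(padded.encode('ascii')).decode('utf-8')

token = obfuscate('project/42')
print(token)               # cHJvamVjdC80Mg
print(deobfuscate(token))  # project/42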
def locale_info():
'''
Provides
defaultlanguage
defaultencoding
'''
grains = {}
grains['locale_info'] = {}
if salt.utils.platform.is_proxy():
return grains
try:
(
grains['locale_info']['defaultlanguage'],
grains['locale_info']['defaultencoding']
) = locale.getdefaultlocale()
except Exception:
# locale.getdefaultlocale can ValueError!! Catch anything else it
# might do, per #2205
grains['locale_info']['defaultlanguage'] = 'unknown'
grains['locale_info']['defaultencoding'] = 'unknown'
grains['locale_info']['detectedencoding'] = __salt_system_encoding__
if _DATEUTIL_TZ:
grains['locale_info']['timezone'] = datetime.datetime.now(dateutil.tz.tzlocal()).tzname()
return grains | Provides
defaultlanguage
defaultencoding | Below is the the instruction that describes the task:
### Input:
Provides
defaultlanguage
defaultencoding
### Response:
def locale_info():
'''
Provides
defaultlanguage
defaultencoding
'''
grains = {}
grains['locale_info'] = {}
if salt.utils.platform.is_proxy():
return grains
try:
(
grains['locale_info']['defaultlanguage'],
grains['locale_info']['defaultencoding']
) = locale.getdefaultlocale()
except Exception:
# locale.getdefaultlocale can ValueError!! Catch anything else it
# might do, per #2205
grains['locale_info']['defaultlanguage'] = 'unknown'
grains['locale_info']['defaultencoding'] = 'unknown'
grains['locale_info']['detectedencoding'] = __salt_system_encoding__
if _DATEUTIL_TZ:
grains['locale_info']['timezone'] = datetime.datetime.now(dateutil.tz.tzlocal()).tzname()
return grains |
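The same defensive locale lookup outside Salt, runnable anywhere; locale.getdefaultlocale can itself raise, which is why the broad except is there.

import locale

info = {}
try:
    info['defaultlanguage'], info['defaultencoding'] = locale.getdefaultlocale()
except Exception:
    info['defaultlanguage'] = 'unknown'
    info['defaultencoding'] = 'unknown'
print(info)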
def select(self, template_name):
"""
Select a specific family from the party.
:type template_name: str
:param template_name: Template name of Family to select from a party.
:returns: Family
"""
return [fam for fam in self.families
if fam.template.name == template_name][0] | Select a specific family from the party.
:type template_name: str
:param template_name: Template name of Family to select from a party.
:returns: Family | Below is the the instruction that describes the task:
### Input:
Select a specific family from the party.
:type template_name: str
:param template_name: Template name of Family to select from a party.
:returns: Family
### Response:
def select(self, template_name):
"""
Select a specific family from the party.
:type template_name: str
:param template_name: Template name of Family to select from a party.
:returns: Family
"""
return [fam for fam in self.families
if fam.template.name == template_name][0] |
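A toy version with namedtuple stand-ins for Family and its template, enough to show the list-comprehension lookup (and that it raises IndexError when no family matches). The class shapes are assumptions, not the original library's types.

from collections import namedtuple

Template = namedtuple('Template', 'name')
Family = namedtuple('Family', 'template detections')

families = [Family(Template('tpl_A'), 3), Family(Template('tpl_B'), 1)]

def select(families, template_name):
    return [fam for fam in families if fam.template.name == template_name][0]

print(select(families, 'tpl_B'))  # Family(template=Template(name='tpl_B'), detections=1)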
def calc_tkor_v1(self):
"""Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.tkor[k] = con.kt[k] + inp.teml | Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0) | Below is the the instruction that describes the task:
### Input:
Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0)
### Response:
def calc_tkor_v1(self):
"""Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.tkor[k] = con.kt[k] + inp.teml |
def make_complete(self):
"""
Turns the site collection into a complete one, if needed
"""
# reset the site indices from 0 to N-1 and set self.complete to self
self.array['sids'] = numpy.arange(len(self), dtype=numpy.uint32)
self.complete = self | Turns the site collection into a complete one, if needed | Below is the the instruction that describes the task:
### Input:
Turns the site collection into a complete one, if needed
### Response:
def make_complete(self):
"""
Turns the site collection into a complete one, if needed
"""
# reset the site indices from 0 to N-1 and set self.complete to self
self.array['sids'] = numpy.arange(len(self), dtype=numpy.uint32)
self.complete = self |
def _parse_response_types(argspec, attrs):
"""
from the given parameters, return back the response type dictionaries.
"""
return_type = argspec.annotations.get("return") or None
type_description = attrs.parameter_descriptions.get("return", "")
response_types = attrs.response_types.copy()
if return_type or len(response_types) == 0:
response_types[attrs.success_code] = ResponseType(
type=return_type,
type_description=type_description,
description="success",
)
return response_types | from the given parameters, return back the response type dictionaries. | Below is the the instruction that describes the task:
### Input:
from the given parameters, return back the response type dictionaries.
### Response:
def _parse_response_types(argspec, attrs):
"""
from the given parameters, return back the response type dictionaries.
"""
return_type = argspec.annotations.get("return") or None
type_description = attrs.parameter_descriptions.get("return", "")
response_types = attrs.response_types.copy()
if return_type or len(response_types) == 0:
response_types[attrs.success_code] = ResponseType(
type=return_type,
type_description=type_description,
description="success",
)
return response_types |
def list_(
pkg=None,
user=None,
installed=False,
env=None):
'''
List packages matching a search string.
pkg
Search string for matching package names
user
The user to run cabal list with
installed
If True, only return installed packages.
env
Environment variables to set when invoking cabal. Uses the
same ``env`` format as the :py:func:`cmd.run
<salt.modules.cmdmod.run>` execution function
CLI example:
.. code-block:: bash
salt '*' cabal.list
salt '*' cabal.list ShellCheck
'''
cmd = ['cabal list --simple-output']
if installed:
cmd.append('--installed')
if pkg:
cmd.append('"{0}"'.format(pkg))
result = __salt__['cmd.run_all'](' '.join(cmd), runas=user, env=env)
packages = {}
for line in result['stdout'].splitlines():
data = line.split()
package_name = data[0]
package_version = data[1]
packages[package_name] = package_version
return packages | List packages matching a search string.
pkg
Search string for matching package names
user
The user to run cabal list with
installed
If True, only return installed packages.
env
Environment variables to set when invoking cabal. Uses the
same ``env`` format as the :py:func:`cmd.run
<salt.modules.cmdmod.run>` execution function
CLI example:
.. code-block:: bash
salt '*' cabal.list
salt '*' cabal.list ShellCheck | Below is the the instruction that describes the task:
### Input:
List packages matching a search string.
pkg
Search string for matching package names
user
The user to run cabal list with
installed
If True, only return installed packages.
env
Environment variables to set when invoking cabal. Uses the
same ``env`` format as the :py:func:`cmd.run
<salt.modules.cmdmod.run>` execution function
CLI example:
.. code-block:: bash
salt '*' cabal.list
salt '*' cabal.list ShellCheck
### Response:
def list_(
pkg=None,
user=None,
installed=False,
env=None):
'''
List packages matching a search string.
pkg
Search string for matching package names
user
The user to run cabal list with
installed
If True, only return installed packages.
env
Environment variables to set when invoking cabal. Uses the
same ``env`` format as the :py:func:`cmd.run
<salt.modules.cmdmod.run>` execution function
CLI example:
.. code-block:: bash
salt '*' cabal.list
salt '*' cabal.list ShellCheck
'''
cmd = ['cabal list --simple-output']
if installed:
cmd.append('--installed')
if pkg:
cmd.append('"{0}"'.format(pkg))
result = __salt__['cmd.run_all'](' '.join(cmd), runas=user, env=env)
packages = {}
for line in result['stdout'].splitlines():
data = line.split()
package_name = data[0]
package_version = data[1]
packages[package_name] = package_version
return packages |
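Just the parsing step in isolation, with made-up --simple-output lines so it runs without cabal or Salt installed.

sample_output = """\
ShellCheck 0.7.1
hlint 3.1.6
pandoc 2.9.2"""

packages = {}
for line in sample_output.splitlines():
    name, version = line.split()
    packages[name] = version
print(packages)  # {'ShellCheck': '0.7.1', 'hlint': '3.1.6', 'pandoc': '2.9.2'}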
def zip_file(self, app_path, app_name, tmp_path):
"""Zip the App with tcex extension.
Args:
app_path (str): The path of the current project.
app_name (str): The name of the App.
tmp_path (str): The temp output path for the zip.
"""
# zip build directory
zip_file = os.path.join(app_path, self.args.outdir, app_name)
zip_file_zip = '{}.zip'.format(zip_file)
zip_file_tcx = '{}.tcx'.format(zip_file)
shutil.make_archive(zip_file, 'zip', tmp_path, app_name)
shutil.move(zip_file_zip, zip_file_tcx)
self._app_packages.append(zip_file_tcx)
# update package data
self.package_data['package'].append({'action': 'App Package:', 'output': zip_file_tcx}) | Zip the App with tcex extension.
Args:
app_path (str): The path of the current project.
app_name (str): The name of the App.
tmp_path (str): The temp output path for the zip. | Below is the the instruction that describes the task:
### Input:
Zip the App with tcex extension.
Args:
app_path (str): The path of the current project.
app_name (str): The name of the App.
tmp_path (str): The temp output path for the zip.
### Response:
def zip_file(self, app_path, app_name, tmp_path):
"""Zip the App with tcex extension.
Args:
app_path (str): The path of the current project.
app_name (str): The name of the App.
tmp_path (str): The temp output path for the zip.
"""
# zip build directory
zip_file = os.path.join(app_path, self.args.outdir, app_name)
zip_file_zip = '{}.zip'.format(zip_file)
zip_file_tcx = '{}.tcx'.format(zip_file)
shutil.make_archive(zip_file, 'zip', tmp_path, app_name)
shutil.move(zip_file_zip, zip_file_tcx)
self._app_packages.append(zip_file_tcx)
# update package data
self.package_data['package'].append({'action': 'App Package:', 'output': zip_file_tcx}) |
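The make_archive/rename step on its own, using throwaway temp directories in place of the App build paths; the app name and file contents are placeholders.

import os
import shutil
import tempfile

staging = tempfile.mkdtemp()   # plays the role of tmp_path
outdir = tempfile.mkdtemp()    # plays the role of the build output directory
app_name = 'MyApp'

os.makedirs(os.path.join(staging, app_name))
with open(os.path.join(staging, app_name, 'install.json'), 'w') as fh:
    fh.write('{}')

base = os.path.join(outdir, app_name)
shutil.make_archive(base, 'zip', staging, app_name)  # writes MyApp.zip
shutil.move(base + '.zip', base + '.tcx')            # rename to the .tcx extension
print(os.listdir(outdir))                            # ['MyApp.tcx']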
def numericalize(self, arrs, device=None):
"""Convert a padded minibatch into a variable tensor.
Each item in the minibatch will be numericalized independently and the resulting
tensors will be stacked at the first dimension.
Arguments:
arr (List[List[str]]): List of tokenized and padded examples.
device (str or torch.device): A string or instance of `torch.device`
specifying which device the Variables are going to be created on.
If left as default, the tensors will be created on cpu. Default: None.
"""
numericalized = []
self.nesting_field.include_lengths = False
if self.include_lengths:
arrs, sentence_lengths, word_lengths = arrs
for arr in arrs:
numericalized_ex = self.nesting_field.numericalize(
arr, device=device)
numericalized.append(numericalized_ex)
padded_batch = torch.stack(numericalized)
self.nesting_field.include_lengths = True
if self.include_lengths:
sentence_lengths = \
torch.tensor(sentence_lengths, dtype=self.dtype, device=device)
word_lengths = torch.tensor(word_lengths, dtype=self.dtype, device=device)
return (padded_batch, sentence_lengths, word_lengths)
return padded_batch | Convert a padded minibatch into a variable tensor.
Each item in the minibatch will be numericalized independently and the resulting
tensors will be stacked at the first dimension.
Arguments:
arr (List[List[str]]): List of tokenized and padded examples.
device (str or torch.device): A string or instance of `torch.device`
specifying which device the Variables are going to be created on.
If left as default, the tensors will be created on cpu. Default: None. | Below is the the instruction that describes the task:
### Input:
Convert a padded minibatch into a variable tensor.
Each item in the minibatch will be numericalized independently and the resulting
tensors will be stacked at the first dimension.
Arguments:
arr (List[List[str]]): List of tokenized and padded examples.
device (str or torch.device): A string or instance of `torch.device`
specifying which device the Variables are going to be created on.
If left as default, the tensors will be created on cpu. Default: None.
### Response:
def numericalize(self, arrs, device=None):
"""Convert a padded minibatch into a variable tensor.
Each item in the minibatch will be numericalized independently and the resulting
tensors will be stacked at the first dimension.
Arguments:
arr (List[List[str]]): List of tokenized and padded examples.
device (str or torch.device): A string or instance of `torch.device`
specifying which device the Variables are going to be created on.
If left as default, the tensors will be created on cpu. Default: None.
"""
numericalized = []
self.nesting_field.include_lengths = False
if self.include_lengths:
arrs, sentence_lengths, word_lengths = arrs
for arr in arrs:
numericalized_ex = self.nesting_field.numericalize(
arr, device=device)
numericalized.append(numericalized_ex)
padded_batch = torch.stack(numericalized)
self.nesting_field.include_lengths = True
if self.include_lengths:
sentence_lengths = \
torch.tensor(sentence_lengths, dtype=self.dtype, device=device)
word_lengths = torch.tensor(word_lengths, dtype=self.dtype, device=device)
return (padded_batch, sentence_lengths, word_lengths)
return padded_batch |
def create_subject_access_review(self, body, **kwargs):
"""
create a SubjectAccessReview
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_subject_access_review(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1SubjectAccessReview body: (required)
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param str pretty: If 'true', then the output is pretty printed.
:return: V1SubjectAccessReview
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_subject_access_review_with_http_info(body, **kwargs)
else:
(data) = self.create_subject_access_review_with_http_info(body, **kwargs)
return data | create a SubjectAccessReview
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_subject_access_review(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1SubjectAccessReview body: (required)
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param str pretty: If 'true', then the output is pretty printed.
:return: V1SubjectAccessReview
If the method is called asynchronously,
returns the request thread. | Below is the the instruction that describes the task:
### Input:
create a SubjectAccessReview
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_subject_access_review(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1SubjectAccessReview body: (required)
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param str pretty: If 'true', then the output is pretty printed.
:return: V1SubjectAccessReview
If the method is called asynchronously,
returns the request thread.
### Response:
def create_subject_access_review(self, body, **kwargs):
"""
create a SubjectAccessReview
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_subject_access_review(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param V1SubjectAccessReview body: (required)
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param str pretty: If 'true', then the output is pretty printed.
:return: V1SubjectAccessReview
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_subject_access_review_with_http_info(body, **kwargs)
else:
(data) = self.create_subject_access_review_with_http_info(body, **kwargs)
return data |
def _validate_filters_ndb(cls, filters, model_class):
"""Validate ndb.Model filters."""
if not filters:
return
properties = model_class._properties
for idx, f in enumerate(filters):
prop, ineq, val = f
if prop not in properties:
raise errors.BadReaderParamsError(
"Property %s is not defined for entity type %s",
prop, model_class._get_kind())
# Attempt to cast the value to a KeyProperty if appropriate.
# This enables filtering against keys.
try:
if (isinstance(val, basestring) and
isinstance(properties[prop],
(ndb.KeyProperty, ndb.ComputedProperty))):
val = ndb.Key(urlsafe=val)
filters[idx] = [prop, ineq, val]
except:
pass
# Validate the value of each filter. We need to know filters have
# valid value to carry out splits.
try:
properties[prop]._do_validate(val)
except db.BadValueError, e:
raise errors.BadReaderParamsError(e) | Validate ndb.Model filters. | Below is the the instruction that describes the task:
### Input:
Validate ndb.Model filters.
### Response:
def _validate_filters_ndb(cls, filters, model_class):
"""Validate ndb.Model filters."""
if not filters:
return
properties = model_class._properties
for idx, f in enumerate(filters):
prop, ineq, val = f
if prop not in properties:
raise errors.BadReaderParamsError(
"Property %s is not defined for entity type %s",
prop, model_class._get_kind())
# Attempt to cast the value to a KeyProperty if appropriate.
# This enables filtering against keys.
try:
if (isinstance(val, basestring) and
isinstance(properties[prop],
(ndb.KeyProperty, ndb.ComputedProperty))):
val = ndb.Key(urlsafe=val)
filters[idx] = [prop, ineq, val]
except:
pass
# Validate the value of each filter. We need to know filters have
# valid value to carry out splits.
try:
properties[prop]._do_validate(val)
except db.BadValueError, e:
raise errors.BadReaderParamsError(e) |
def download_file(self, bucket, key, filename, extra_args=None,
expected_size=None):
"""Downloads the object's contents to a file
:type bucket: str
:param bucket: The name of the bucket to download from
:type key: str
:param key: The name of the key to download from
:type filename: str
:param filename: The name of a file to download to.
:type extra_args: dict
:param extra_args: Extra arguments that may be passed to the
client operation
:type expected_size: int
:param expected_size: The expected size in bytes of the download. If
provided, the downloader will not call HeadObject to determine the
object's size and use the provided value instead. The size is
needed to determine whether to do a multipart download.
:rtype: s3transfer.futures.TransferFuture
:returns: Transfer future representing the download
"""
self._start_if_needed()
if extra_args is None:
extra_args = {}
self._validate_all_known_args(extra_args)
transfer_id = self._transfer_monitor.notify_new_transfer()
download_file_request = DownloadFileRequest(
transfer_id=transfer_id, bucket=bucket, key=key,
filename=filename, extra_args=extra_args,
expected_size=expected_size,
)
logger.debug(
'Submitting download file request: %s.', download_file_request)
self._download_request_queue.put(download_file_request)
call_args = CallArgs(
bucket=bucket, key=key, filename=filename, extra_args=extra_args,
expected_size=expected_size)
future = self._get_transfer_future(transfer_id, call_args)
return future | Downloads the object's contents to a file
:type bucket: str
:param bucket: The name of the bucket to download from
:type key: str
:param key: The name of the key to download from
:type filename: str
:param filename: The name of a file to download to.
:type extra_args: dict
:param extra_args: Extra arguments that may be passed to the
client operation
:type expected_size: int
:param expected_size: The expected size in bytes of the download. If
provided, the downloader will not call HeadObject to determine the
object's size and use the provided value instead. The size is
needed to determine whether to do a multipart download.
:rtype: s3transfer.futures.TransferFuture
:returns: Transfer future representing the download | Below is the the instruction that describes the task:
### Input:
Downloads the object's contents to a file
:type bucket: str
:param bucket: The name of the bucket to download from
:type key: str
:param key: The name of the key to download from
:type filename: str
:param filename: The name of a file to download to.
:type extra_args: dict
:param extra_args: Extra arguments that may be passed to the
client operation
:type expected_size: int
:param expected_size: The expected size in bytes of the download. If
provided, the downloader will not call HeadObject to determine the
object's size and use the provided value instead. The size is
needed to determine whether to do a multipart download.
:rtype: s3transfer.futures.TransferFuture
:returns: Transfer future representing the download
### Response:
def download_file(self, bucket, key, filename, extra_args=None,
expected_size=None):
"""Downloads the object's contents to a file
:type bucket: str
:param bucket: The name of the bucket to download from
:type key: str
:param key: The name of the key to download from
:type filename: str
:param filename: The name of a file to download to.
:type extra_args: dict
:param extra_args: Extra arguments that may be passed to the
client operation
:type expected_size: int
:param expected_size: The expected size in bytes of the download. If
provided, the downloader will not call HeadObject to determine the
object's size and use the provided value instead. The size is
needed to determine whether to do a multipart download.
:rtype: s3transfer.futures.TransferFuture
:returns: Transfer future representing the download
"""
self._start_if_needed()
if extra_args is None:
extra_args = {}
self._validate_all_known_args(extra_args)
transfer_id = self._transfer_monitor.notify_new_transfer()
download_file_request = DownloadFileRequest(
transfer_id=transfer_id, bucket=bucket, key=key,
filename=filename, extra_args=extra_args,
expected_size=expected_size,
)
logger.debug(
'Submitting download file request: %s.', download_file_request)
self._download_request_queue.put(download_file_request)
call_args = CallArgs(
bucket=bucket, key=key, filename=filename, extra_args=extra_args,
expected_size=expected_size)
future = self._get_transfer_future(transfer_id, call_args)
return future |
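For comparison, the common single-call path in boto3, which wraps the same s3transfer machinery; the bucket, key and local path are placeholders and AWS credentials must already be configured for this to run.

import boto3

s3 = boto3.client('s3')
s3.download_file('my-bucket', 'path/to/object.bin', '/tmp/object.bin')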
def sflow_collector_collector_ip_address(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
sflow = ET.SubElement(config, "sflow", xmlns="urn:brocade.com:mgmt:brocade-sflow")
collector = ET.SubElement(sflow, "collector")
collector_port_number_key = ET.SubElement(collector, "collector-port-number")
collector_port_number_key.text = kwargs.pop('collector_port_number')
collector_ip_address = ET.SubElement(collector, "collector-ip-address")
collector_ip_address.text = kwargs.pop('collector_ip_address')
callback = kwargs.pop('callback', self._callback)
return callback(config) | Auto Generated Code | Below is the the instruction that describes the task:
### Input:
Auto Generated Code
### Response:
def sflow_collector_collector_ip_address(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
sflow = ET.SubElement(config, "sflow", xmlns="urn:brocade.com:mgmt:brocade-sflow")
collector = ET.SubElement(sflow, "collector")
collector_port_number_key = ET.SubElement(collector, "collector-port-number")
collector_port_number_key.text = kwargs.pop('collector_port_number')
collector_ip_address = ET.SubElement(collector, "collector-ip-address")
collector_ip_address.text = kwargs.pop('collector_ip_address')
callback = kwargs.pop('callback', self._callback)
return callback(config) |
def compress(self, d=DEFAULT):
"""Returns a copy of d with compressed leaves."""
if d is DEFAULT:
d = self
if isinstance(d, list):
l = [v for v in (self.compress(v) for v in d)]
try:
return list(set(l))
except TypeError:
# list contains not hashables
ret = []
for i in l:
if i not in ret:
ret.append(i)
return ret
elif isinstance(d, type(self)):
return type(self)({k: v for k, v in ((k, self.compress(v)) for k, v in d.items())})
elif isinstance(d, dict):
return {k: v for k, v in ((k, self.compress(v)) for k, v in d.items())}
return d | Returns a copy of d with compressed leaves. | Below is the the instruction that describes the task:
### Input:
Returns a copy of d with compressed leaves.
### Response:
def compress(self, d=DEFAULT):
"""Returns a copy of d with compressed leaves."""
if d is DEFAULT:
d = self
if isinstance(d, list):
l = [v for v in (self.compress(v) for v in d)]
try:
return list(set(l))
except TypeError:
# list contains not hashables
ret = []
for i in l:
if i not in ret:
ret.append(i)
return ret
elif isinstance(d, type(self)):
return type(self)({k: v for k, v in ((k, self.compress(v)) for k, v in d.items())})
elif isinstance(d, dict):
return {k: v for k, v in ((k, self.compress(v)) for k, v in d.items())}
return d |
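The leaf-level dedup behaviour on its own, including the fallback for unhashable items; compress_list is a hypothetical standalone helper, not part of the original class.

def compress_list(values):
    try:
        return list(set(values))
    except TypeError:               # list contains unhashables, e.g. dicts
        unique = []
        for v in values:
            if v not in unique:
                unique.append(v)
        return unique

print(sorted(compress_list([1, 2, 2, 3])))            # [1, 2, 3]
print(compress_list([{'a': 1}, {'a': 1}, {'b': 2}]))  # [{'a': 1}, {'b': 2}]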
def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
Message that a new role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Role Updated Messages
Message that a new role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | Below is the the instruction that describes the task:
### Input:
Role Updated Messages
Message that a new role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
### Response:
def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
Message that a new role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) |
def enable_one_shot_page_breakpoint(self, dwProcessId, address):
"""
Enables the page breakpoint at the given address for only one shot.
@see:
L{define_page_breakpoint},
L{has_page_breakpoint},
L{get_page_breakpoint},
L{enable_page_breakpoint},
L{disable_page_breakpoint}
L{erase_page_breakpoint},
@type dwProcessId: int
@param dwProcessId: Process global ID.
@type address: int
@param address: Memory address of breakpoint.
"""
p = self.system.get_process(dwProcessId)
bp = self.get_page_breakpoint(dwProcessId, address)
if bp.is_running():
self.__del_running_bp_from_all_threads(bp)
bp.one_shot(p, None) | Enables the page breakpoint at the given address for only one shot.
@see:
L{define_page_breakpoint},
L{has_page_breakpoint},
L{get_page_breakpoint},
L{enable_page_breakpoint},
L{disable_page_breakpoint}
L{erase_page_breakpoint},
@type dwProcessId: int
@param dwProcessId: Process global ID.
@type address: int
@param address: Memory address of breakpoint. | Below is the the instruction that describes the task:
### Input:
Enables the page breakpoint at the given address for only one shot.
@see:
L{define_page_breakpoint},
L{has_page_breakpoint},
L{get_page_breakpoint},
L{enable_page_breakpoint},
L{disable_page_breakpoint}
L{erase_page_breakpoint},
@type dwProcessId: int
@param dwProcessId: Process global ID.
@type address: int
@param address: Memory address of breakpoint.
### Response:
def enable_one_shot_page_breakpoint(self, dwProcessId, address):
"""
Enables the page breakpoint at the given address for only one shot.
@see:
L{define_page_breakpoint},
L{has_page_breakpoint},
L{get_page_breakpoint},
L{enable_page_breakpoint},
L{disable_page_breakpoint}
L{erase_page_breakpoint},
@type dwProcessId: int
@param dwProcessId: Process global ID.
@type address: int
@param address: Memory address of breakpoint.
"""
p = self.system.get_process(dwProcessId)
bp = self.get_page_breakpoint(dwProcessId, address)
if bp.is_running():
self.__del_running_bp_from_all_threads(bp)
bp.one_shot(p, None) |
async def api_request(self, url, params):
"""Make api fetch request."""
request = None
try:
with async_timeout.timeout(DEFAULT_TIMEOUT, loop=self._event_loop):
request = await self._api_session.get(
url, params=params)
if request.status != 200:
_LOGGER.error('Error fetching Emby data: %s', request.status)
return None
request_json = await request.json()
if 'error' in request_json:
_LOGGER.error('Error converting Emby data to json: %s: %s',
request_json['error']['code'],
request_json['error']['message'])
return None
return request_json
except (aiohttp.ClientError, asyncio.TimeoutError,
ConnectionRefusedError) as err:
_LOGGER.error('Error fetching Emby data: %s', err)
return None | Make api fetch request. | Below is the the instruction that describes the task:
### Input:
Make api fetch request.
### Response:
async def api_request(self, url, params):
"""Make api fetch request."""
request = None
try:
with async_timeout.timeout(DEFAULT_TIMEOUT, loop=self._event_loop):
request = await self._api_session.get(
url, params=params)
if request.status != 200:
_LOGGER.error('Error fetching Emby data: %s', request.status)
return None
request_json = await request.json()
if 'error' in request_json:
_LOGGER.error('Error converting Emby data to json: %s: %s',
request_json['error']['code'],
request_json['error']['message'])
return None
return request_json
except (aiohttp.ClientError, asyncio.TimeoutError,
ConnectionRefusedError) as err:
_LOGGER.error('Error fetching Emby data: %s', err)
return None |
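A self-contained variant of the fetch-and-validate pattern above, using aiohttp's built-in ClientTimeout instead of async_timeout; the Emby URL and params are placeholders.

import asyncio
import aiohttp

async def fetch_json(url, params=None):
    timeout = aiohttp.ClientTimeout(total=10)
    try:
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.get(url, params=params) as resp:
                if resp.status != 200:
                    return None          # non-200 means no usable payload
                data = await resp.json()
                return None if 'error' in data else data
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return None

# asyncio.run(fetch_json('http://emby.local:8096/emby/Sessions', {'api_key': '...'}))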
def predict(self, temp_type):
"""
Transpile the predict method.
Parameters
----------
:param temp_type : string
The kind of export type (embedded, separated, exported).
Returns
-------
:return : string
The transpiled predict method as string.
"""
# Exported:
if temp_type == 'exported':
temp = self.temp('exported.class')
return temp.format(class_name=self.class_name,
method_name=self.method_name,
n_features=self.n_features)
# Embedded:
if temp_type == 'embedded':
method = self.create_method_embedded()
return self.create_class_embedded(method) | Transpile the predict method.
Parameters
----------
:param temp_type : string
The kind of export type (embedded, separated, exported).
Returns
-------
:return : string
The transpiled predict method as string. | Below is the the instruction that describes the task:
### Input:
Transpile the predict method.
Parameters
----------
:param temp_type : string
The kind of export type (embedded, separated, exported).
Returns
-------
:return : string
The transpiled predict method as string.
### Response:
def predict(self, temp_type):
"""
Transpile the predict method.
Parameters
----------
:param temp_type : string
The kind of export type (embedded, separated, exported).
Returns
-------
:return : string
The transpiled predict method as string.
"""
# Exported:
if temp_type == 'exported':
temp = self.temp('exported.class')
return temp.format(class_name=self.class_name,
method_name=self.method_name,
n_features=self.n_features)
# Embedded:
if temp_type == 'embedded':
method = self.create_method_embedded()
return self.create_class_embedded(method) |
def prior(self):
"""
Model prior for particular model.
Product of eclipse probability (``self.prob``),
the fraction of scenario that is allowed by the various
constraints (``self.selectfrac``), and all additional
factors in ``self.priorfactors``.
"""
prior = self.prob * self.selectfrac
for f in self.priorfactors:
prior *= self.priorfactors[f]
return prior | Model prior for particular model.
Product of eclipse probability (``self.prob``),
the fraction of scenario that is allowed by the various
constraints (``self.selectfrac``), and all additional
factors in ``self.priorfactors``. | Below is the the instruction that describes the task:
### Input:
Model prior for particular model.
Product of eclipse probability (``self.prob``),
the fraction of scenario that is allowed by the various
constraints (``self.selectfrac``), and all additional
factors in ``self.priorfactors``.
### Response:
def prior(self):
"""
Model prior for particular model.
Product of eclipse probability (``self.prob``),
the fraction of scenario that is allowed by the various
constraints (``self.selectfrac``), and all additional
factors in ``self.priorfactors``.
"""
prior = self.prob * self.selectfrac
for f in self.priorfactors:
prior *= self.priorfactors[f]
return prior |
def Flash(self, partition, timeout_ms=0, info_cb=DEFAULT_MESSAGE_CALLBACK):
"""Flashes the last downloaded file to the given partition.
Args:
partition: Partition to overwrite with the new image.
timeout_ms: Optional timeout in milliseconds to wait for it to finish.
info_cb: See Download. Usually no messages.
Returns:
Response to a download request, normally nothing.
"""
return self._SimpleCommand(b'flash', arg=partition, info_cb=info_cb,
timeout_ms=timeout_ms) | Flashes the last downloaded file to the given partition.
Args:
partition: Partition to overwrite with the new image.
timeout_ms: Optional timeout in milliseconds to wait for it to finish.
info_cb: See Download. Usually no messages.
Returns:
Response to a download request, normally nothing. | Below is the the instruction that describes the task:
### Input:
Flashes the last downloaded file to the given partition.
Args:
partition: Partition to overwrite with the new image.
timeout_ms: Optional timeout in milliseconds to wait for it to finish.
info_cb: See Download. Usually no messages.
Returns:
Response to a download request, normally nothing.
### Response:
def Flash(self, partition, timeout_ms=0, info_cb=DEFAULT_MESSAGE_CALLBACK):
"""Flashes the last downloaded file to the given partition.
Args:
partition: Partition to overwrite with the new image.
timeout_ms: Optional timeout in milliseconds to wait for it to finish.
info_cb: See Download. Usually no messages.
Returns:
Response to a download request, normally nothing.
"""
return self._SimpleCommand(b'flash', arg=partition, info_cb=info_cb,
timeout_ms=timeout_ms) |
def _node_has_modifier(graph: BELGraph, node: BaseEntity, modifier: str) -> bool:
"""Return true if over any of a nodes edges, it has a given modifier.
Modifier can be one of:
- :data:`pybel.constants.ACTIVITY`,
- :data:`pybel.constants.DEGRADATION`
- :data:`pybel.constants.TRANSLOCATION`.
:param modifier: One of :data:`pybel.constants.ACTIVITY`, :data:`pybel.constants.DEGRADATION`, or
:data:`pybel.constants.TRANSLOCATION`
"""
modifier_in_subject = any(
part_has_modifier(d, SUBJECT, modifier)
for _, _, d in graph.out_edges(node, data=True)
)
modifier_in_object = any(
part_has_modifier(d, OBJECT, modifier)
for _, _, d in graph.in_edges(node, data=True)
)
return modifier_in_subject or modifier_in_object | Return true if over any of a nodes edges, it has a given modifier.
Modifier can be one of:
- :data:`pybel.constants.ACTIVITY`,
- :data:`pybel.constants.DEGRADATION`
- :data:`pybel.constants.TRANSLOCATION`.
:param modifier: One of :data:`pybel.constants.ACTIVITY`, :data:`pybel.constants.DEGRADATION`, or
:data:`pybel.constants.TRANSLOCATION` | Below is the the instruction that describes the task:
### Input:
Return true if over any of a nodes edges, it has a given modifier.
Modifier can be one of:
- :data:`pybel.constants.ACTIVITY`,
- :data:`pybel.constants.DEGRADATION`
- :data:`pybel.constants.TRANSLOCATION`.
:param modifier: One of :data:`pybel.constants.ACTIVITY`, :data:`pybel.constants.DEGRADATION`, or
:data:`pybel.constants.TRANSLOCATION`
### Response:
def _node_has_modifier(graph: BELGraph, node: BaseEntity, modifier: str) -> bool:
"""Return true if over any of a nodes edges, it has a given modifier.
Modifier can be one of:
- :data:`pybel.constants.ACTIVITY`,
- :data:`pybel.constants.DEGRADATION`
- :data:`pybel.constants.TRANSLOCATION`.
:param modifier: One of :data:`pybel.constants.ACTIVITY`, :data:`pybel.constants.DEGRADATION`, or
:data:`pybel.constants.TRANSLOCATION`
"""
modifier_in_subject = any(
part_has_modifier(d, SUBJECT, modifier)
for _, _, d in graph.out_edges(node, data=True)
)
modifier_in_object = any(
part_has_modifier(d, OBJECT, modifier)
for _, _, d in graph.in_edges(node, data=True)
)
return modifier_in_subject or modifier_in_object |
def has(self):
"""Whether the cache file exists in the file system."""
self._done = os.path.exists(self._cache_file)
return self._done or self._out is not None | Whether the cache file exists in the file system. | Below is the the instruction that describes the task:
### Input:
Whether the cache file exists in the file system.
### Response:
def has(self):
"""Whether the cache file exists in the file system."""
self._done = os.path.exists(self._cache_file)
return self._done or self._out is not None |
def makekey(self, *args):
""" return a binary key for the nodeid, tag and optional value """
if len(args) > 1:
args = args[:1] + (args[1].encode('utf-8'),) + args[2:]
if len(args) == 3 and type(args[-1]) == str:
# node.tag.string type keys
return struct.pack(self.keyfmt[:1 + len(args)], b'.', *args[:-1]) + args[-1].encode('utf-8')
elif len(args) == 3 and type(args[-1]) == type(-1) and args[-1] < 0:
# negative values -> need lowercase fmt char
return struct.pack(self.keyfmt[:1 + len(args)] + self.fmt.lower(), b'.', *args)
else:
# node.tag.value type keys
return struct.pack(self.keyfmt[:2 + len(args)], b'.', *args) | return a binary key for the nodeid, tag and optional value | Below is the the instruction that describes the task:
### Input:
return a binary key for the nodeid, tag and optional value
### Response:
def makekey(self, *args):
""" return a binary key for the nodeid, tag and optional value """
if len(args) > 1:
args = args[:1] + (args[1].encode('utf-8'),) + args[2:]
if len(args) == 3 and type(args[-1]) == str:
# node.tag.string type keys
return struct.pack(self.keyfmt[:1 + len(args)], b'.', *args[:-1]) + args[-1].encode('utf-8')
elif len(args) == 3 and type(args[-1]) == type(-1) and args[-1] < 0:
# negative values -> need lowercase fmt char
return struct.pack(self.keyfmt[:1 + len(args)] + self.fmt.lower(), b'.', *args)
else:
# node.tag.value type keys
return struct.pack(self.keyfmt[:2 + len(args)], b'.', *args) |
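To make the struct formats above concrete, a hedged sketch with an assumed key layout (keyfmt and fmt are illustrative values only; the real class defines them elsewhere, and makekey is borrowed from the record as-is, assuming it is available at module level):
import struct

class _KeyDemo:
    keyfmt = '>cQcQ'   # assumed: marker byte, 64-bit node id, tag byte, 64-bit value
    fmt = 'Q'
    makekey = makekey  # reuse the record's function as a method

demo = _KeyDemo()
print(demo.makekey(0x4010, 'N').hex())        # node.tag key
print(demo.makekey(0x4010, 'S', 32).hex())    # node.tag.value key, unsigned value
print(demo.makekey(0x4010, 'A', -1).hex())    # negative value -> lowercase (signed) format char
print(demo.makekey(0x4010, 'N', 'funcname'))  # node.tag.string key, raw utf-8 suffix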
def toc(self):
"""
Returns a rich list of texts in the catalog.
"""
output = []
for key in sorted(self.catalog.keys()):
edition = self.catalog[key]['edition']
length = len(self.catalog[key]['transliteration'])
output.append(
"Pnum: {key}, Edition: {edition}, length: {length} line(s)".format(
key=key, edition=edition, length=length))
return output | Returns a rich list of texts in the catalog. | Below is the the instruction that describes the task:
### Input:
Returns a rich list of texts in the catalog.
### Response:
def toc(self):
"""
Returns a rich list of texts in the catalog.
"""
output = []
for key in sorted(self.catalog.keys()):
edition = self.catalog[key]['edition']
length = len(self.catalog[key]['transliteration'])
output.append(
"Pnum: {key}, Edition: {edition}, length: {length} line(s)".format(
key=key, edition=edition, length=length))
return output |
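A small usage sketch showing the catalog shape the method above expects; the two entries are invented placeholders, and toc is borrowed from the record as a method:
class _CatalogDemo:
    catalog = {
        'P000001': {'edition': 'CDLI', 'transliteration': ['1. a-na', '2. be-li']},
        'P000002': {'edition': 'ETCSL', 'transliteration': ['1. lugal']},
    }
    toc = toc  # reuse the record's function

for line in _CatalogDemo().toc():
    print(line)
# Pnum: P000001, Edition: CDLI, length: 2 line(s)
# Pnum: P000002, Edition: ETCSL, length: 1 line(s)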
def get_assessment_lookup_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the bank
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.AssessmentLookupSession) - ``an
_assessment_lookup_session``
raise: NotFound - ``bank_id`` not found
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - ``unable to complete request``
raise: Unimplemented - ``supports_assessment_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_lookup()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_lookup():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentLookupSession(bank_id, proxy, self._runtime) | Gets the ``OsidSession`` associated with the assessment lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the bank
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.AssessmentLookupSession) - ``an
_assessment_lookup_session``
raise: NotFound - ``bank_id`` not found
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - ``unable to complete request``
raise: Unimplemented - ``supports_assessment_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_lookup()`` and
``supports_visible_federation()`` are ``true``.* | Below is the the instruction that describes the task:
### Input:
Gets the ``OsidSession`` associated with the assessment lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the bank
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.AssessmentLookupSession) - ``an
_assessment_lookup_session``
raise: NotFound - ``bank_id`` not found
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - ``unable to complete request``
raise: Unimplemented - ``supports_assessment_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_lookup()`` and
``supports_visible_federation()`` are ``true``.*
### Response:
def get_assessment_lookup_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the bank
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.AssessmentLookupSession) - ``an
_assessment_lookup_session``
raise: NotFound - ``bank_id`` not found
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - ``unable to complete request``
raise: Unimplemented - ``supports_assessment_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_lookup()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_lookup():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentLookupSession(bank_id, proxy, self._runtime) |
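A hedged usage sketch; manager, bank_id, and proxy are assumed to come from the surrounding OSID/dlkit runtime (an assessment proxy manager, an existing bank's Id, and an authenticated proxy), and errors is the same module referenced in the record:
def list_bank_assessments(manager, bank_id, proxy):
    # Sketch: enumerate assessments in one bank via the lookup session.
    try:
        session = manager.get_assessment_lookup_session_for_bank(bank_id, proxy)
    except errors.Unimplemented:
        return []  # lookup is an optional capability in this service
    return [a.display_name.text for a in session.get_assessments()]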
def convert_from_unicode(data):
"""
converts unicode data to a string
:param data: the data to convert
:return:
"""
# if isinstance(data, basestring):
if isinstance(data, str):
return str(data)
elif isinstance(data, collectionsAbc.Mapping):
return dict(map(convert_from_unicode, data.items()))
elif isinstance(data, collectionsAbc.Iterable):
return type(data)(map(convert_from_unicode, data))
else:
return data | converts unicode data to a string
:param data: the data to convert
:return: | Below is the the instruction that describes the task:
### Input:
converts unicode data to a string
:param data: the data to convert
:return:
### Response:
def convert_from_unicode(data):
"""
converts unicode data to a string
:param data: the data to convert
:return:
"""
# if isinstance(data, basestring):
if isinstance(data, str):
return str(data)
elif isinstance(data, collectionsAbc.Mapping):
return dict(map(convert_from_unicode, data.items()))
elif isinstance(data, collectionsAbc.Iterable):
return type(data)(map(convert_from_unicode, data))
else:
return data |
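The record above is easy to exercise directly; note that collectionsAbc is the original module's alias for collections.abc, so the sketch sets it up explicitly:
import collections.abc as collectionsAbc

payload = {u'name': u'café', u'tags': [u'a', u'b'], u'meta': {u'id': u'42'}}
print(convert_from_unicode(payload))
# On Python 3 this is essentially an identity transform:
# {'name': 'café', 'tags': ['a', 'b'], 'meta': {'id': '42'}}
# The commented-out basestring check shows it originally targeted Python 2 unicode objects.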
def from_blob(cls, blob):
"""
Return a new |Image| subclass instance parsed from the image binary
contained in *blob*.
"""
stream = BytesIO(blob)
return cls._from_stream(stream, blob) | Return a new |Image| subclass instance parsed from the image binary
contained in *blob*. | Below is the the instruction that describes the task:
### Input:
Return a new |Image| subclass instance parsed from the image binary
contained in *blob*.
### Response:
def from_blob(cls, blob):
"""
Return a new |Image| subclass instance parsed from the image binary
contained in *blob*.
"""
stream = BytesIO(blob)
return cls._from_stream(stream, blob) |
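A usage sketch, assuming this is the python-docx Image helper (the import path and the logo.png file are assumptions, not taken from the record):
from docx.image.image import Image

with open('logo.png', 'rb') as f:  # any supported image format
    image = Image.from_blob(f.read())

print(image.content_type, image.px_width, image.px_height)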
def get_buckets(self, timeout=None):
"""
Get the list of buckets under this bucket-type as
:class:`RiakBucket <riak.bucket.RiakBucket>` instances.
.. warning:: Do not use this in production, as it requires
traversing through all keys stored in a cluster.
.. note:: This request is automatically retried :attr:`retries`
times if it fails due to network error.
:param timeout: a timeout value in milliseconds
:type timeout: int
:rtype: list of :class:`RiakBucket <riak.bucket.RiakBucket>`
instances
"""
return self._client.get_buckets(bucket_type=self, timeout=timeout) | Get the list of buckets under this bucket-type as
:class:`RiakBucket <riak.bucket.RiakBucket>` instances.
.. warning:: Do not use this in production, as it requires
traversing through all keys stored in a cluster.
.. note:: This request is automatically retried :attr:`retries`
times if it fails due to network error.
:param timeout: a timeout value in milliseconds
:type timeout: int
:rtype: list of :class:`RiakBucket <riak.bucket.RiakBucket>`
instances | Below is the the instruction that describes the task:
### Input:
Get the list of buckets under this bucket-type as
:class:`RiakBucket <riak.bucket.RiakBucket>` instances.
.. warning:: Do not use this in production, as it requires
traversing through all keys stored in a cluster.
.. note:: This request is automatically retried :attr:`retries`
times if it fails due to network error.
:param timeout: a timeout value in milliseconds
:type timeout: int
:rtype: list of :class:`RiakBucket <riak.bucket.RiakBucket>`
instances
### Response:
def get_buckets(self, timeout=None):
"""
Get the list of buckets under this bucket-type as
:class:`RiakBucket <riak.bucket.RiakBucket>` instances.
.. warning:: Do not use this in production, as it requires
traversing through all keys stored in a cluster.
.. note:: This request is automatically retried :attr:`retries`
times if it fails due to network error.
:param timeout: a timeout value in milliseconds
:type timeout: int
:rtype: list of :class:`RiakBucket <riak.bucket.RiakBucket>`
instances
"""
return self._client.get_buckets(bucket_type=self, timeout=timeout) |
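A usage sketch against a local Riak node; the node address and the 'default' bucket type are placeholders, and the calls follow the riak-python-client API:
from riak import RiakClient

client = RiakClient(nodes=[{'host': '127.0.0.1', 'pb_port': 8087}])
bucket_type = client.bucket_type('default')

# As the docstring warns, this walks every key in the cluster -- fine for a
# dev node, not for production.
for bucket in bucket_type.get_buckets(timeout=2000):
    print(bucket.name)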
def list_themes(directory=None):
"""Gets a list of the installed themes."""
repo = require_repo(directory)
path = os.path.join(repo, themes_dir)
return os.listdir(path) if os.path.isdir(path) else None | Gets a list of the installed themes. | Below is the the instruction that describes the task:
### Input:
Gets a list of the installed themes.
### Response:
def list_themes(directory=None):
"""Gets a list of the installed themes."""
repo = require_repo(directory)
path = os.path.join(repo, themes_dir)
return os.listdir(path) if os.path.isdir(path) else None |
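A minimal usage sketch; require_repo and themes_dir are module-level helpers/constants from the same package (assumed to resolve the repo root and the themes folder name), and the path is a placeholder:
themes = list_themes('/path/to/site')
if themes is None:
    print('this repo has no themes directory')
else:
    print('installed themes:', ', '.join(sorted(themes)))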