def event(self, event_data, priority="normal", event_method="EVENT"):
    """This function will send event packets to the server. This is the
    main method you would use to send data from your application to the
    server.

    Whenever an event is sent to the server, a universally unique event id
    (euuid) is created for each event and stored in the "event_uuids"
    dictionary. This dictionary contains all events that are currently
    waiting for a response from the server. The event will only be removed
    from this dictionary if the server responds with LEGAL or ILLEGAL or if
    the request times out.

    Args:
        event_data (dict): The event data to send to the server. This data
            will be passed through the server's middleware to determine if
            the event is legal or not, and then processed by the server if
            it is legal.
        priority (string): The event's priority informs the server of
            whether or not the client is going to wait for a confirmation
            message from the server indicating whether its event was LEGAL
            or ILLEGAL. Setting this to "normal" informs the server that
            the client will wait for a response from the server before
            processing the event. Setting this to "high" informs the server
            that the client will NOT wait for a response. Defaults to
            "normal".
        event_method (string): The type of event to send to the server.
            Valid methods are "EVENT" and "AUTH". Defaults to "EVENT".

    Returns:
        A universally unique identifier (uuid) of the event.

    Examples:
        >>> event_data
        >>> priority

    """
    logger.debug("event: " + str(event_data))

    # Generate an event UUID for this event
    euuid = uuid.uuid1()
    logger.debug("<%s> <euuid:%s> Sending event data to server: "
                 "%s" % (str(self.cuuid), str(euuid), str(self.server)))

    if not self.listener.listening:
        logger.warning("Neteria client is not listening.")

    # If we're not even registered, don't even bother.
    if not self.registered:
        logger.warning("<%s> <euuid:%s> Client is currently not registered. "
                       "Event not sent." % (str(self.cuuid), str(euuid)))
        return False

    # Send the event data to the server
    packet = {"method": event_method,
              "cuuid": str(self.cuuid),
              "euuid": str(euuid),
              "event_data": event_data,
              "timestamp": str(datetime.now()),
              "retry": 0,
              "priority": priority}
    self.listener.send_datagram(
        serialize_data(packet, self.compression,
                       self.encryption, self.server_key),
        self.server)
    logger.debug("<%s> Sending EVENT Packet: %s" % (str(self.cuuid),
                                                    pformat(packet)))

    # Add the sent event to our event buffer so we can roll back or
    # retransmit it if needed
    self.event_uuids[str(euuid)] = packet

    # Now we need to schedule a timeout/retransmit check
    logger.debug("<%s> Scheduling retry in %s seconds" % (str(self.cuuid),
                                                          str(self.timeout)))
    self.listener.call_later(self.timeout, self.retransmit, packet)

    return euuid

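
# --- Usage sketch (illustrative only) ----------------------------------------
# A minimal, hypothetical example of calling event(). `client` stands in for an
# already registered Neteria-style client instance exposing the method above;
# the event payload and follow-up check are made up for illustration.
def _example_send_event(client):
    event_data = {"action": "move_player", "direction": "north"}
    euuid = client.event(event_data, priority="normal")
    if euuid is False:
        print("Client is not registered; the event was not sent.")
    else:
        # The packet waits in client.event_uuids[str(euuid)] until the server
        # answers LEGAL/ILLEGAL or the retransmit logic gives up.
        print("Event sent, awaiting verdict:", euuid)
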
def format_error_message(exception_message, task_exception=False):
    """Improve the formatting of an exception thrown by a remote function.

    This method takes a traceback from an exception and makes it nicer by
    removing a few uninformative lines and adding some space to indent the
    remaining lines nicely.

    Args:
        exception_message (str): A message generated by traceback.format_exc().
        task_exception (bool): If True, strip the worker-related lines that
            always appear at the top of task tracebacks.

    Returns:
        A string of the formatted exception message.
    """
    lines = exception_message.split("\n")
    if task_exception:
        # For errors that occur inside of tasks, remove lines 1 and 2, which
        # are always the same: they just contain information about the worker
        # code.
        lines = lines[0:1] + lines[3:]
    return "\n".join(lines)

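
# --- Usage sketch (illustrative only) ----------------------------------------
# Feeding format_error_message() a real traceback string produced by
# traceback.format_exc(); task_exception=False leaves the traceback untouched.
import traceback

def _example_format_error():
    try:
        1 / 0
    except ZeroDivisionError:
        raw = traceback.format_exc()
        return format_error_message(raw, task_exception=False)
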
def translate(offset, dtype=None):
    """Translate by an offset (x, y, z).

    Parameters
    ----------
    offset : array-like, shape (3,)
        Translation in x, y, z.
    dtype : dtype | None
        Output type (if None, don't cast).

    Returns
    -------
    M : ndarray
        Transformation matrix describing the translation.
    """
    assert len(offset) == 3
    x, y, z = offset
    M = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [x, y, z, 1.0]], dtype)
    return M

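
# --- Usage sketch (illustrative only) ----------------------------------------
# The translation components live in the *last row* of M, so this convention
# multiplies homogeneous row vectors on the left: moved = point @ M.
# numpy is imported here (as np) because this excerpt omits the module's own
# imports; it also supplies the `np` name the function above relies on.
import numpy as np

def _example_translate():
    M = translate((2.0, -1.0, 5.0))
    point = np.array([1.0, 1.0, 1.0, 1.0])   # homogeneous coordinates
    return point @ M                          # -> [3., 0., 6., 1.]
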
def find_reference_section_no_title_generic(docbody, marker_patterns):
    """This function would generally be used when it was not possible to locate
    the start of a document's reference section by means of its title.
    Instead, this function will look for reference lines that have numeric
    markers of the format [1], [2], {1}, {2}, etc.
    @param docbody: (list) of strings -each string is a line in the document.
    @return: (dictionary) :
        { 'start_line' : (integer) - index in docbody of 1st reference line,
          'title_string' : (None) - title of the reference section
                                    (None since no title),
          'marker' : (string) - the marker of the first reference line,
          'marker_pattern' : (string) - the regexp string used to find the
                                        marker,
          'title_marker_same_line' : (integer) 0 - to signal title not on same
                                     line as marker.
        }
        Much of this information is used by later functions to rebuild
        a reference section.
        -- OR --
        (None) - when the reference section could not be found.
    """
    if not docbody:
        return None
    ref_start_line = ref_line_marker = None

    # try to find first reference line in the reference section:
    found_ref_sect = False
    for reversed_index, line in enumerate(reversed(docbody)):
        mark_match = regex_match_list(line.strip(), marker_patterns)
        if mark_match and mark_match.group('marknum') == '1':
            # Get marker recognition pattern:
            mark_pattern = mark_match.re.pattern

            # Look for [2] in next 10 lines:
            next_test_lines = 10

            index = len(docbody) - reversed_index
            zone_to_check = docbody[index:index + next_test_lines]
            if len(zone_to_check) < 5:
                # We found a 1 towards the end, we assume
                # we only have one reference
                found = True
            else:
                # Check for number 2
                found = False
                for l in zone_to_check:
                    mark_match2 = regex_match_list(l.strip(), marker_patterns)
                    if mark_match2 and mark_match2.group('marknum') == '2':
                        found = True
                        break

            if found:
                # Found next reference line:
                found_ref_sect = True
                ref_start_line = len(docbody) - 1 - reversed_index
                ref_line_marker = mark_match.group('mark')
                ref_line_marker_pattern = mark_pattern
                break

    if found_ref_sect:
        ref_sectn_details = {
            'start_line': ref_start_line,
            'title_string': None,
            'marker': ref_line_marker.strip(),
            'marker_pattern': ref_line_marker_pattern,
            'title_marker_same_line': False,
        }
    else:
        # didn't manage to find the reference section
        ref_sectn_details = None
    return ref_sectn_details

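
# --- Usage sketch (illustrative only) ----------------------------------------
# The function expects compiled regexes exposing the named groups 'mark' and
# 'marknum', plus a regex_match_list() helper returning the first match. Both
# are sketched here as stand-ins; the real refextract patterns and helper are
# more elaborate.
import re

def regex_match_list(line, patterns):
    """Return the first pattern match against ``line``, or None (stand-in)."""
    for pattern in patterns:
        match = pattern.match(line)
        if match:
            return match
    return None

_example_marker_patterns = [
    re.compile(r'^\s*(?P<mark>\[(?P<marknum>\d+)\])'),   # "[1] ..."
    re.compile(r'^\s*(?P<mark>\{(?P<marknum>\d+)\})'),   # "{1} ..."
]

def _example_find_references():
    docbody = ["Some text.",
               "[1] A. Author, A paper.",
               "[2] B. Author, Another paper."]
    # Returns {'start_line': 1, 'marker': '[1]', ...}
    return find_reference_section_no_title_generic(docbody,
                                                   _example_marker_patterns)
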
def calc_missingremoterelease_v1(self):
    """Calculate the portion of the required remote demand that could not
    be met by the actual discharge release.

    Required flux sequences:
      |RequiredRemoteRelease|
      |ActualRelease|

    Calculated flux sequence:
      |MissingRemoteRelease|

    Basic equation:
      :math:`MissingRemoteRelease = max(
      RequiredRemoteRelease-ActualRelease, 0)`

    Example:
        >>> from hydpy.models.dam import *
        >>> parameterstep()
        >>> fluxes.requiredremoterelease = 2.0
        >>> fluxes.actualrelease = 1.0
        >>> model.calc_missingremoterelease_v1()
        >>> fluxes.missingremoterelease
        missingremoterelease(1.0)
        >>> fluxes.actualrelease = 3.0
        >>> model.calc_missingremoterelease_v1()
        >>> fluxes.missingremoterelease
        missingremoterelease(0.0)
    """
    flu = self.sequences.fluxes.fastaccess
    flu.missingremoterelease = max(
        flu.requiredremoterelease - flu.actualrelease, 0.)

def _init_file_logger(logger, level, log_path, log_size, log_count):
    """
    Each logger gets at most one RotatingFileHandler per level.
    """
    if level not in [logging.NOTSET, logging.DEBUG, logging.INFO,
                     logging.WARNING, logging.ERROR, logging.CRITICAL]:
        level = logging.DEBUG
    for h in logger.handlers:
        if isinstance(h, logging.handlers.RotatingFileHandler):
            if h.level == level:
                # A handler for this level already exists; nothing to do.
                return
    fh = logging.handlers.RotatingFileHandler(
        log_path, maxBytes=log_size, backupCount=log_count)
    fh.setLevel(level)
    fh.setFormatter(_formatter)
    logger.addHandler(fh)

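
# --- Usage sketch (illustrative only) ----------------------------------------
# The module-level _formatter is assumed by the function above; a stand-in is
# defined here (the real format string may differ) so the call is concrete.
import logging
import logging.handlers

_formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")

def _example_init_file_logger():
    log = logging.getLogger("demo")
    log.setLevel(logging.INFO)
    _init_file_logger(log, logging.INFO, "demo.log", 1000000, 3)
    # A second call with the same level is a no-op thanks to the handler scan.
    _init_file_logger(log, logging.INFO, "demo.log", 1000000, 3)
    log.info("rotated at ~1 MB, keeping 3 backups")
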
def phase_shifted_coefficients(amplitude_coefficients, form='cos',
                               shift=0.0):
    r"""
    Converts Fourier coefficients from the amplitude form to the
    phase-shifted form, as either a sine or cosine series.

    Amplitude form:

    .. math::
        m(t) = A_0 + \sum_{k=1}^n (a_k \sin(k \omega t)
                                   + b_k \cos(k \omega t))

    Sine form:

    .. math::
        m(t) = A_0 + \sum_{k=1}^n A_k \sin(k \omega t + \Phi_k)

    Cosine form:

    .. math::
        m(t) = A_0 + \sum_{k=1}^n A_k \cos(k \omega t + \Phi_k)

    **Parameters**

    amplitude_coefficients : array-like, shape = [:math:`2n+1`]
        Array of coefficients
        :math:`[ A_0, a_1, b_1, \ldots a_n, b_n ]`.
    form : str, optional
        Form of output coefficients, must be one of 'sin' or 'cos'
        (default 'cos').
    shift : number, optional
        Shift to apply to light curve (default 0.0).

    **Returns**

    out : array-like, shape = [:math:`2n+1`]
        Array of coefficients
        :math:`[ A_0, A_1, \Phi_1, \ldots, A_n, \Phi_n ]`.
    """
    if form != 'sin' and form != 'cos':
        raise NotImplementedError(
            'Fourier series must have form sin or cos')

    # separate array of coefficients into respective parts
    A_0 = amplitude_coefficients[0]
    a_k = amplitude_coefficients[1::2]
    b_k = amplitude_coefficients[2::2]
    degree = a_k.size
    k = numpy.arange(1, degree+1)

    # A_k and Phi_k are the hypotenuse and angle in the right triangles
    # pictured below. A_k is obtained with the Pythagorean theorem, and
    # Phi_k is obtained with the 2-argument inverse tangent.
    # The positions of a_k and b_k depend on whether it is a sin or cos
    # series.
    #
    #      Cos series                Sin series
    #
    #         b_k                              /|
    #      ---------                          / |
    #      \ Φ_k |_|                         /  |
    #       \    |                          /   |
    #        \   |                   A_k   /    | b_k
    #         \  | a_k                    /     |
    #    A_k   \ |                       /     _|
    #           \|                      / Φ_k | |
    #                                   ---------
    #                                      a_k
    #
    A_k = numpy.sqrt(a_k**2 + b_k**2)
    # phase coefficients are shifted to the left by optional ``shift``
    if form == 'cos':
        Phi_k = numpy.arctan2(-a_k, b_k) + 2*pi*k*shift
    elif form == 'sin':
        Phi_k = numpy.arctan2(b_k, a_k) + 2*pi*k*shift
    # constrain Phi between 0 and 2*pi
    Phi_k %= 2*pi

    phase_shifted_coefficients_ = numpy.empty(amplitude_coefficients.shape,
                                              dtype=float)
    phase_shifted_coefficients_[0] = A_0
    phase_shifted_coefficients_[1::2] = A_k
    phase_shifted_coefficients_[2::2] = Phi_k

    return phase_shifted_coefficients_

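
# --- Worked check (illustrative only) -----------------------------------------
# A pure cosine term (a_1 = 0, b_1 = 1) should come back with amplitude 1 and
# phase 0, and likewise for a pure sine term in 'sin' form. numpy and pi are
# imported here because this excerpt omits the module's own imports.
import numpy
from numpy import pi

def _example_phase_shift():
    cos_coeffs = numpy.array([5.0, 0.0, 1.0])   # [A_0, a_1, b_1]
    sin_coeffs = numpy.array([5.0, 1.0, 0.0])
    return (phase_shifted_coefficients(cos_coeffs, form='cos'),   # [5., 1., 0.]
            phase_shifted_coefficients(sin_coeffs, form='sin'))   # [5., 1., 0.]
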
def na_value_for_dtype(dtype, compat=True):
    """
    Return a dtype compat na value

    Parameters
    ----------
    dtype : string / dtype
    compat : boolean, default True

    Returns
    -------
    np.dtype or a pandas dtype

    Examples
    --------
    >>> na_value_for_dtype(np.dtype('int64'))
    0
    >>> na_value_for_dtype(np.dtype('int64'), compat=False)
    nan
    >>> na_value_for_dtype(np.dtype('float64'))
    nan
    >>> na_value_for_dtype(np.dtype('bool'))
    False
    >>> na_value_for_dtype(np.dtype('datetime64[ns]'))
    NaT
    """
    dtype = pandas_dtype(dtype)

    if is_extension_array_dtype(dtype):
        return dtype.na_value
    if (is_datetime64_dtype(dtype) or is_datetime64tz_dtype(dtype) or
            is_timedelta64_dtype(dtype) or is_period_dtype(dtype)):
        return NaT
    elif is_float_dtype(dtype):
        return np.nan
    elif is_integer_dtype(dtype):
        if compat:
            return 0
        return np.nan
    elif is_bool_dtype(dtype):
        return False
    return np.nan

def set(self, key, value, **kw):
    """Place a value in the cache.

    :param key: the value's key.
    :param value: the value.
    :param \**kw: cache configuration arguments.
    """
    self.impl.set(key, value, **self._get_cache_kw(kw, None))

def getElementsByTagName(self, tagName, root='root', useIndex=True):
    '''
    getElementsByTagName - Searches and returns all elements with a specific tag name.

    @param tagName <lowercase str> - A lowercase string of the tag name.
    @param root <AdvancedTag/'root'> - Search starting at a specific node, if provided.
        If string 'root', the root of the parsed tree will be used.
    @param useIndex - If True [default] and tag names are set to be indexed
        [default, see constructor], only the index will be used. If False,
        all tags will be searched.
    '''
    (root, isFromRoot) = self._handleRootArg(root)

    if useIndex is True and self.indexTagNames is True:
        # Use .get here as to not create a lot of extra indexes on the defaultdict for misses
        elements = self._tagNameMap.get(tagName, [])
        if isFromRoot is False:
            _hasTagInParentLine = self._hasTagInParentLine
            elements = [x for x in elements if _hasTagInParentLine(x, root)]
        return TagCollection(elements)

    return AdvancedHTMLParser.getElementsByTagName(self, tagName, root)

def with_(self, *relations):
    """
    Set the relationships that should be eager loaded.

    :return: The current Builder instance
    :rtype: Builder
    """
    if not relations:
        return self

    eagers = self._parse_relations(list(relations))
    self._eager_load.update(eagers)

    return self

def slim_frame_data(frames, frame_allowance=25):
    """
    Removes various excess metadata from middle frames which go beyond
    ``frame_allowance``.

    Returns ``frames``.
    """
    frames_len = 0
    app_frames = []
    system_frames = []
    for frame in frames:
        frames_len += 1
        if frame.get('in_app'):
            app_frames.append(frame)
        else:
            system_frames.append(frame)

    if frames_len <= frame_allowance:
        return frames

    remaining = frames_len - frame_allowance
    app_count = len(app_frames)
    system_allowance = max(frame_allowance - app_count, 0)
    if system_allowance:
        half_max = int(system_allowance / 2)
        # prioritize trimming system frames
        for frame in system_frames[half_max:-half_max]:
            frame.pop('vars', None)
            frame.pop('pre_context', None)
            frame.pop('post_context', None)
            remaining -= 1
    else:
        for frame in system_frames:
            frame.pop('vars', None)
            frame.pop('pre_context', None)
            frame.pop('post_context', None)
            remaining -= 1

    if remaining:
        app_allowance = app_count - remaining
        half_max = int(app_allowance / 2)
        for frame in app_frames[half_max:-half_max]:
            frame.pop('vars', None)
            frame.pop('pre_context', None)
            frame.pop('post_context', None)

    return frames

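
# --- Usage sketch (illustrative only) ----------------------------------------
# Thirty fabricated frames, every third one flagged as application code; the
# middle system frames lose their 'vars'/'pre_context'/'post_context' keys.
def _example_slim_frames():
    frames = [{'in_app': i % 3 == 0,
               'vars': {'i': i},
               'pre_context': ['...'],
               'post_context': ['...']} for i in range(30)]
    slimmed = slim_frame_data(frames, frame_allowance=25)
    return sum(1 for f in slimmed if 'vars' not in f)   # number of trimmed frames
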
def get_min_sec_from_morning(self):
    """Get the first second from midnight where a timerange is effective

    :return: smallest number of seconds from midnight of all timeranges
    :rtype: int
    """
    mins = []
    for timerange in self.timeranges:
        mins.append(timerange.get_sec_from_morning())
    return min(mins)

def init_logging(log_level):
    """
    Init logging settings with default set to INFO
    """
    log_level = log_level_to_string_map[min(log_level, 5)]

    msg = ("%(levelname)s - %(name)s:%(lineno)s - %(message)s"
           if log_level in os.environ
           else "%(levelname)s - %(message)s")

    logging_conf = {
        "version": 1,
        "root": {
            "level": log_level,
            "handlers": ["console"]
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "level": log_level,
                "formatter": "simple",
                "stream": "ext://sys.stdout"
            }
        },
        "formatters": {
            "simple": {
                "format": " {0}".format(msg)
            }
        }
    }

    logging.config.dictConfig(logging_conf)

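
# --- Usage sketch (illustrative only) ----------------------------------------
# log_level_to_string_map is assumed by the function above; a stand-in mapping
# from verbosity count to level name is defined here (the real one may differ).
import logging
import logging.config
import os

log_level_to_string_map = {0: "ERROR", 1: "WARNING", 2: "INFO",
                           3: "DEBUG", 4: "DEBUG", 5: "DEBUG"}

def _example_init_logging():
    init_logging(3)   # values above 5 are clamped by min(log_level, 5)
    logging.getLogger(__name__).debug("console logging is configured")
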
def parse(cls, resource, parent=None, _with_children=False):
    """ Parse a resource

    :param resource: Element representing a work
    :type resource: basestring, etree._Element
    :param parent: Parent of the object
    :type parent: XmlCtsTextgroupMetadata
    :param _cls_dict: Dictionary of classes to generate subclasses
    """
    xml = xmlparser(resource)
    o = cls(urn=xml.get("urn"), parent=parent)

    lang = xml.get("{http://www.w3.org/XML/1998/namespace}lang")
    if lang is not None:
        o.lang = lang

    for child in xml.xpath("ti:title", namespaces=XPATH_NAMESPACES):
        lg = child.get("{http://www.w3.org/XML/1998/namespace}lang")
        if lg is not None:
            o.set_cts_property("title", child.text, lg)

    # Parse children
    children = []
    children.extend(_xpathDict(
        xml=xml, xpath='ti:edition',
        cls=cls.CLASS_EDITION, parent=o
    ))
    children.extend(_xpathDict(
        xml=xml, xpath='ti:translation',
        cls=cls.CLASS_TRANSLATION, parent=o
    ))
    children.extend(_xpathDict(
        xml=xml, xpath='ti:commentary',
        cls=cls.CLASS_COMMENTARY, parent=o
    ))

    _parse_structured_metadata(o, xml)

    if _with_children:
        return o, children
    return o

def create_meta_data(cls, options, args, parser):
    """
    Override in subclass if required.
    """
    meta_data = []
    meta_data.append(('spiff_version', cls.get_version()))
    if options.target_engine:
        meta_data.append(('target_engine', options.target_engine))
    if options.target_engine:
        meta_data.append(
            ('target_engine_version', options.target_engine_version))
    return meta_data

def get_interface(iface):
    '''
    Return the contents of an interface script

    CLI Example:

    .. code-block:: bash

        salt '*' ip.get_interface eth0
    '''
    adapters = _parse_interfaces()
    if iface in adapters:
        try:
            if iface == 'source':
                template = JINJA.get_template('debian_source.jinja')
            else:
                template = JINJA.get_template('debian_eth.jinja')
        except jinja2.exceptions.TemplateNotFound:
            log.error('Could not load template debian_eth.jinja')
            return ''
        ifcfg = template.render({'name': iface, 'data': adapters[iface]})

        # ensure lines in list end with newline, so difflib works
        return [item + '\n' for item in ifcfg.split('\n')]
    else:
        return []

def fmt_type(data_type):
    """
    Returns a JSDoc annotation for a data type.
    May contain a union of enumerated subtypes.
    """
    if is_struct_type(data_type) and data_type.has_enumerated_subtypes():
        possible_types = []
        possible_subtypes = data_type.get_all_subtypes_with_tags()
        for _, subtype in possible_subtypes:
            possible_types.append(fmt_type_name(subtype))
        if data_type.is_catch_all():
            possible_types.append(fmt_type_name(data_type))
        return fmt_jsdoc_union(possible_types)
    else:
        return fmt_type_name(data_type)

def edit_encoding(cls, parent):
    """
    Static helper method that shows the encoding editor dialog.

    If the dialog was accepted, the new encodings are added to the settings.

    :param parent: parent widget
    :return: True in case of success, False otherwise
    """
    dlg = cls(parent)
    if dlg.exec_() == dlg.Accepted:
        settings = Cache()
        settings.preferred_encodings = dlg.get_preferred_encodings()
        return True
    return False

def get_area_url(location, distance):
    """Generate URL for downloading OSM data within a region.

    This function defines a boundary box where the edges touch a circle of
    ``distance`` kilometres in radius. It is important to note that the box is
    neither a square, nor bounded within the circle.

    The bounding box is strictly a trapezoid whose north and south edges are
    different lengths; which one is longer depends on whether the box is
    calculated for a location in the Northern or Southern hemisphere. You will
    get a shorter north edge in the Northern hemisphere, and vice versa. This
    is simply because we are applying a flat transformation to a spherical
    object, however for all general cases the difference will be negligible.

    Args:
        location (Point): Centre of the region
        distance (int): Boundary distance in kilometres

    Returns:
        str: URL that can be used to fetch the OSM data within ``distance`` of
            ``location``
    """
    locations = [location.destination(i, distance) for i in range(0, 360, 90)]
    latitudes = list(map(attrgetter('latitude'), locations))
    longitudes = list(map(attrgetter('longitude'), locations))

    bounds = (min(longitudes), min(latitudes), max(longitudes), max(latitudes))
    return ('http://api.openstreetmap.org/api/0.5/map?bbox='
            + ','.join(map(str, bounds)))

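
# --- Usage sketch (illustrative only) ----------------------------------------
# `location` must be a point object with latitude/longitude attributes and a
# destination(bearing, distance) method, as in the upoints package; the point
# is passed in by the caller rather than constructed from a guessed import
# path. attrgetter is imported because this excerpt omits the module's own
# imports.
from operator import attrgetter

def _example_area_url(home_point):
    # e.g. a Point created around latitude 52.015, longitude -0.221
    return get_area_url(home_point, 3)   # bbox touching a 3 km circle
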
def _handle_next_export_subtask(self, export_state=None):
    """
    Process the next export sub-task, if there is one.

    :param ExportState export_state:
        If provided, this is used instead of the database queue, in effect directing the exporter to process the
        previous export again. This is used to avoid having to query the database when we know already what needs
        to be done. It also maintains a cache of the entity so we don't have to re-acquire it on multiple exports.
    :return:
        A :class:`meteorpi_db.exporter.MeteorExporter.ExportStateCache` representing the state of the export, or
        None if there was nothing to do.
    """
    # Use a cached state, or generate a new one if required
    if export_state is None or export_state.export_task is None:
        export = self.db.get_next_entity_to_export()
        if export is not None:
            export_state = self.ExportState(export_task=export)
        else:
            return None
    try:
        auth = (export_state.export_task.target_user,
                export_state.export_task.target_password)
        target_url = export_state.export_task.target_url
        response = post(url=target_url, verify=False,
                        json=export_state.entity_dict,
                        auth=auth)
        response.raise_for_status()
        json = response.json()
        state = json['state']
        if state == 'complete':
            return export_state.fully_processed()
        elif state == 'need_file_data':
            file_id = json['file_id']
            file_record = self.db.get_file(repository_fname=file_id)
            if file_record is None:
                return export_state.failed()
            with open(self.db.file_path_for_id(file_id), 'rb') as file_content:
                multi = MultipartEncoder(fields={'file': ('file', file_content, file_record.mime_type)})
                post(url="{0}/data/{1}/{2}".format(target_url, file_id, file_record.file_md5),
                     data=multi, verify=False,
                     headers={'Content-Type': multi.content_type},
                     auth=auth)
            return export_state.partially_processed()
        elif state == 'continue':
            return export_state.partially_processed()
        else:
            return export_state.confused()
    except HTTPError:
        traceback.print_exc()
        return export_state.failed()
    except ConnectionError:
        traceback.print_exc()
        return export_state.failed()

:param ExportState export_state:
If provided, this is used instead of the database queue, in effect directing the exporter to process the
previous export again. This is used to avoid having to query the database when we know already what needs
to be done. It also maintains a cache of the entity so we don't have to re-acquire it on multiple exports.
:return:
A :class:`meteorpi_db.exporter.MeteorExporter.ExportStateCache` representing the state of the export, or
None if there was nothing to do. | Below is the the instruction that describes the task:
### Input:
Process the next export sub-task, if there is one.
:param ExportState export_state:
If provided, this is used instead of the database queue, in effect directing the exporter to process the
previous export again. This is used to avoid having to query the database when we know already what needs
to be done. It also maintains a cache of the entity so we don't have to re-acquire it on multiple exports.
:return:
A :class:`meteorpi_db.exporter.MeteorExporter.ExportStateCache` representing the state of the export, or
None if there was nothing to do.
### Response:
def _handle_next_export_subtask(self, export_state=None):
"""
Process the next export sub-task, if there is one.
:param ExportState export_state:
If provided, this is used instead of the database queue, in effect directing the exporter to process the
previous export again. This is used to avoid having to query the database when we know already what needs
to be done. It also maintains a cache of the entity so we don't have to re-acquire it on multiple exports.
:return:
A :class:`meteorpi_db.exporter.MeteorExporter.ExportStateCache` representing the state of the export, or
None if there was nothing to do.
"""
# Use a cached state, or generate a new one if required
if export_state is None or export_state.export_task is None:
export = self.db.get_next_entity_to_export()
if export is not None:
export_state = self.ExportState(export_task=export)
else:
return None
try:
auth = (export_state.export_task.target_user,
export_state.export_task.target_password)
target_url = export_state.export_task.target_url
response = post(url=target_url, verify=False,
json=export_state.entity_dict,
auth=auth)
response.raise_for_status()
json = response.json()
state = json['state']
if state == 'complete':
return export_state.fully_processed()
elif state == 'need_file_data':
file_id = json['file_id']
file_record = self.db.get_file(repository_fname=file_id)
if file_record is None:
return export_state.failed()
with open(self.db.file_path_for_id(file_id), 'rb') as file_content:
multi = MultipartEncoder(fields={'file': ('file', file_content, file_record.mime_type)})
post(url="{0}/data/{1}/{2}".format(target_url, file_id, file_record.file_md5),
data=multi, verify=False,
headers={'Content-Type': multi.content_type},
auth=auth)
return export_state.partially_processed()
elif state == 'continue':
return export_state.partially_processed()
else:
return export_state.confused()
except HTTPError:
traceback.print_exc()
return export_state.failed()
except ConnectionError:
traceback.print_exc()
return export_state.failed() |
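A usage note: the docstring above implies a small driver loop that feeds the returned state back into the next call, so the database queue is only consulted when a fresh export has to be started. The sketch below illustrates that pattern; the `exporter` object and the reuse of the return value as the next `export_state` argument are assumptions for illustration, not part of the original source.
def drain_export_queue(exporter, max_steps=1000):
    # Process export sub-tasks until the queue is empty, passing the cached
    # state back in so the entity is not re-acquired on every call.
    state = None
    for _ in range(max_steps):
        state = exporter._handle_next_export_subtask(export_state=state)
        if state is None:  # nothing left to do
            break
    return state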
def create(dataset, target, feature=None, model = 'resnet-50',
l2_penalty=0.01,
l1_penalty=0.0,
solver='auto', feature_rescaling=True,
convergence_threshold = _DEFAULT_SOLVER_OPTIONS['convergence_threshold'],
step_size = _DEFAULT_SOLVER_OPTIONS['step_size'],
lbfgs_memory_level = _DEFAULT_SOLVER_OPTIONS['lbfgs_memory_level'],
max_iterations = _DEFAULT_SOLVER_OPTIONS['max_iterations'],
class_weights = None,
validation_set = 'auto',
verbose=True,
seed=None,
batch_size=64):
"""
Create a :class:`ImageClassifier` model.
Parameters
----------
dataset : SFrame
Input data. The column named by the 'feature' parameter will be
extracted for modeling.
target : string, or int
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in the order in which they are provided.
For example, a target variable with 'cat' and 'dog' as possible
values is mapped to 0 and 1 respectively with 0 being the base class
and 1 being the reference class. Use `model.classes` to retrieve
the order in which the classes are mapped.
feature : string, optional
Name of the column containing the input images. 'None' (the default)
indicates the only image column in `dataset` should be used as the
feature.
l2_penalty : float, optional
Weight on l2 regularization of the model. The larger this weight, the
more the model coefficients shrink toward 0. This introduces bias into
the model but decreases variance, potentially leading to better
predictions. The default value is 0.01; setting this parameter to 0
corresponds to unregularized logistic regression. See the ridge
regression reference for more detail.
l1_penalty : float, optional
Weight on l1 regularization of the model. Like the l2 penalty, the
higher the l1 penalty, the more the estimated coefficients shrink toward
0. The l1 penalty, however, completely zeros out sufficiently small
coefficients, automatically indicating features that are not useful
for the model. The default weight of 0 prevents any features from
being discarded. See the LASSO regression reference for more detail.
solver : string, optional
Name of the solver to be used to solve the regression. See the
references for more detail on each solver. Available solvers are:
- *auto (default)*: automatically chooses the best solver for the data
and model parameters.
- *newton*: Newton-Raphson
- *lbfgs*: limited memory BFGS
- *fista*: accelerated gradient descent
For this model, the Newton-Raphson method is equivalent to the
iteratively re-weighted least squares algorithm. If the l1_penalty is
greater than 0, use the 'fista' solver.
The model is trained using a carefully engineered collection of methods
that are automatically picked based on the input data. The ``newton``
method works best for datasets with plenty of examples and few features
(long datasets). Limited memory BFGS (``lbfgs``) is a robust solver for
wide datasets (i.e. datasets with many coefficients). ``fista`` is the
default solver for l1-regularized linear regression. The solvers are all
automatically tuned and the default options should function well. See
the solver options guide for setting additional parameters for each of
the solvers.
See the user guide for additional details on how the solver is chosen.
(see `here
<https://apple.github.io/turicreate/docs/userguide/supervised-learning/linear-regression.html>`_)
feature_rescaling : boolean, optional
Feature rescaling is an important pre-processing step that ensures that
all features are on the same scale. An l2-norm rescaling is performed
to make sure that all features are of the same norm. Categorical
features are also rescaled by rescaling the dummy variables that are
used to represent them. The coefficients are returned in original scale
of the problem. This process is particularly useful when features
vary widely in their ranges.
convergence_threshold : float, optional
Convergence is tested using variation in the training objective. The
variation in the training objective is calculated using the difference
between the objective values between two steps. Consider reducing this
below the default value (0.01) for a more accurately trained model.
Beware of overfitting (i.e. a model that works well only on the training
data) if this parameter is set to a very low value.
lbfgs_memory_level : float, optional
The L-BFGS algorithm keeps track of gradient information from the
previous ``lbfgs_memory_level`` iterations. The storage requirement for
each of these gradients is the ``num_coefficients`` in the problem.
Increasing the ``lbfgs_memory_level`` can help improve the quality of
the model trained. Setting this to more than ``max_iterations`` has the
same effect as setting it to ``max_iterations``.
model : string, optional
Uses a pretrained model to bootstrap an image classifier:
- "resnet-50" : Uses a pretrained resnet model.
Exported Core ML model will be ~90M.
- "squeezenet_v1.1" : Uses a pretrained squeezenet model.
Exported Core ML model will be ~4.7M.
- "VisionFeaturePrint_Scene": Uses an OS internal feature extractor.
Only available on iOS 12.0+,
macOS 10.14+ and tvOS 12.0+.
Exported Core ML model will be ~41K.
Models are downloaded from the internet if not available locally. Once
downloaded, the models are cached for future use.
step_size : float, optional
The starting step size to use for the ``fista`` solver. The default is
set to 1.0, this is an aggressive setting. If the first iteration takes
a considerable amount of time, reducing this parameter may speed up
model training.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If set to `None`, all classes are supposed to have weight one. The
`auto` mode sets the class weight to be inversely proportional to the number of
examples in the training data with the given class.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
The format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. The default value is 'auto'.
max_iterations : int, optional
The maximum number of allowed passes through the data. More passes over
the data can result in a more accurately trained model. Consider
increasing this (the default value is 10) if the training accuracy is
low and the *Grad-Norm* in the display is large.
verbose : bool, optional
If True, prints progress updates and model details.
seed : int, optional
Seed for random number generation. Set this value to ensure that the
same model is created every time.
batch_size : int, optional
If you are getting memory errors, try decreasing this value. If you
have a powerful computer, increasing this value may improve performance.
Returns
-------
out : ImageClassifier
A trained :class:`ImageClassifier` model.
Examples
--------
.. sourcecode:: python
>>> model = turicreate.image_classifier.create(data, target='is_expensive')
# Make predictions (in various forms)
>>> predictions = model.predict(data) # predictions
>>> predictions = model.classify(data) # predictions with confidence
>>> predictions = model.predict_topk(data) # Top-5 predictions (multiclass)
# Evaluate the model with ground truth data
>>> results = model.evaluate(data)
See Also
--------
ImageClassifier
"""
start_time = _time.time()
# Check model parameter
allowed_models = list(_pre_trained_models.MODELS.keys())
if _mac_ver() >= (10,14):
allowed_models.append('VisionFeaturePrint_Scene')
# Also, to make sure existing code doesn't break, replace incorrect name
# with the correct name version
if model == "VisionFeaturePrint_Screen":
print("WARNING: Correct spelling of model name is VisionFeaturePrint_Scene; VisionFeaturePrint_Screen will be removed in subsequent versions.")
model = "VisionFeaturePrint_Scene"
_tkutl._check_categorical_option_type('model', model, allowed_models)
# Check dataset parameter
if len(dataset) == 0:
raise _ToolkitError('Unable to train on empty dataset')
if (feature is not None) and (feature not in dataset.column_names()):
raise _ToolkitError("Image feature column '%s' does not exist" % feature)
if target not in dataset.column_names():
raise _ToolkitError("Target column '%s' does not exist" % target)
if(batch_size < 1):
raise ValueError("'batch_size' must be greater than or equal to 1")
if not (isinstance(validation_set, _tc.SFrame) or validation_set == 'auto' or validation_set is None):
raise TypeError("Unrecognized value for 'validation_set'.")
if feature is None:
feature = _tkutl._find_only_image_column(dataset)
feature_extractor = _image_feature_extractor._create_feature_extractor(model)
# Extract features
extracted_features = _tc.SFrame({
target: dataset[target],
'__image_features__': feature_extractor.extract_features(dataset, feature, verbose=verbose, batch_size=batch_size),
})
if isinstance(validation_set, _tc.SFrame):
extracted_features_validation = _tc.SFrame({
target: validation_set[target],
'__image_features__': feature_extractor.extract_features(validation_set, feature, verbose=verbose, batch_size=batch_size),
})
else:
extracted_features_validation = validation_set
# Train a classifier using the extracted features
extracted_features[target] = dataset[target]
lr_model = _tc.logistic_classifier.create(extracted_features,
features=['__image_features__'],
target=target,
max_iterations=max_iterations,
validation_set=extracted_features_validation,
seed=seed,
verbose=verbose, l2_penalty=l2_penalty, l1_penalty=l1_penalty,
solver=solver, feature_rescaling=feature_rescaling,
convergence_threshold=convergence_threshold,
step_size=step_size,
lbfgs_memory_level=lbfgs_memory_level,
class_weights=class_weights)
# set input image shape
if model in _pre_trained_models.MODELS:
input_image_shape = _pre_trained_models.MODELS[model].input_image_shape
else: # model == VisionFeaturePrint_Scene
input_image_shape = (3, 299, 299)
# Save the model
state = {
'classifier': lr_model,
'model': model,
'max_iterations': max_iterations,
'feature_extractor': feature_extractor,
'input_image_shape': input_image_shape,
'target': target,
'feature': feature,
'num_features': 1,
'num_classes': lr_model.num_classes,
'classes': lr_model.classes,
'num_examples': lr_model.num_examples,
'training_time': _time.time() - start_time,
'training_loss': lr_model.training_loss,
}
return ImageClassifier(state) | Create a :class:`ImageClassifier` model.
Parameters
----------
dataset : SFrame
Input data. The column named by the 'feature' parameter will be
extracted for modeling.
target : string, or int
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in the order in which they are provided.
For example, a target variable with 'cat' and 'dog' as possible
values is mapped to 0 and 1 respectively with 0 being the base class
and 1 being the reference class. Use `model.classes` to retrieve
the order in which the classes are mapped.
feature : string, optional
Name of the column containing the input images. 'None' (the default)
indicates the only image column in `dataset` should be used as the
feature.
l2_penalty : float, optional
Weight on l2 regularization of the model. The larger this weight, the
more the model coefficients shrink toward 0. This introduces bias into
the model but decreases variance, potentially leading to better
predictions. The default value is 0.01; setting this parameter to 0
corresponds to unregularized logistic regression. See the ridge
regression reference for more detail.
l1_penalty : float, optional
Weight on l1 regularization of the model. Like the l2 penalty, the
higher the l1 penalty, the more the estimated coefficients shrink toward
0. The l1 penalty, however, completely zeros out sufficiently small
coefficients, automatically indicating features that are not useful
for the model. The default weight of 0 prevents any features from
being discarded. See the LASSO regression reference for more detail.
solver : string, optional
Name of the solver to be used to solve the regression. See the
references for more detail on each solver. Available solvers are:
- *auto (default)*: automatically chooses the best solver for the data
and model parameters.
- *newton*: Newton-Raphson
- *lbfgs*: limited memory BFGS
- *fista*: accelerated gradient descent
For this model, the Newton-Raphson method is equivalent to the
iteratively re-weighted least squares algorithm. If the l1_penalty is
greater than 0, use the 'fista' solver.
The model is trained using a carefully engineered collection of methods
that are automatically picked based on the input data. The ``newton``
method works best for datasets with plenty of examples and few features
(long datasets). Limited memory BFGS (``lbfgs``) is a robust solver for
wide datasets (i.e. datasets with many coefficients). ``fista`` is the
default solver for l1-regularized linear regression. The solvers are all
automatically tuned and the default options should function well. See
the solver options guide for setting additional parameters for each of
the solvers.
See the user guide for additional details on how the solver is chosen.
(see `here
<https://apple.github.io/turicreate/docs/userguide/supervised-learning/linear-regression.html>`_)
feature_rescaling : boolean, optional
Feature rescaling is an important pre-processing step that ensures that
all features are on the same scale. An l2-norm rescaling is performed
to make sure that all features are of the same norm. Categorical
features are also rescaled by rescaling the dummy variables that are
used to represent them. The coefficients are returned in original scale
of the problem. This process is particularly useful when features
vary widely in their ranges.
convergence_threshold : float, optional
Convergence is tested using variation in the training objective. The
variation in the training objective is calculated using the difference
between the objective values between two steps. Consider reducing this
below the default value (0.01) for a more accurately trained model.
Beware of overfitting (i.e. a model that works well only on the training
data) if this parameter is set to a very low value.
lbfgs_memory_level : float, optional
The L-BFGS algorithm keeps track of gradient information from the
previous ``lbfgs_memory_level`` iterations. The storage requirement for
each of these gradients is the ``num_coefficients`` in the problem.
Increasing the ``lbfgs_memory_level`` can help improve the quality of
the model trained. Setting this to more than ``max_iterations`` has the
same effect as setting it to ``max_iterations``.
model : string, optional
Uses a pretrained model to bootstrap an image classifier:
- "resnet-50" : Uses a pretrained resnet model.
Exported Core ML model will be ~90M.
- "squeezenet_v1.1" : Uses a pretrained squeezenet model.
Exported Core ML model will be ~4.7M.
- "VisionFeaturePrint_Scene": Uses an OS internal feature extractor.
Only available on iOS 12.0+,
macOS 10.14+ and tvOS 12.0+.
Exported Core ML model will be ~41K.
Models are downloaded from the internet if not available locally. Once
downloaded, the models are cached for future use.
step_size : float, optional
The starting step size to use for the ``fista`` solver. The default is
set to 1.0, this is an aggressive setting. If the first iteration takes
a considerable amount of time, reducing this parameter may speed up
model training.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If set to `None`, all classes are supposed to have weight one. The
`auto` mode sets the class weight to be inversely proportional to the number of
examples in the training data with the given class.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
The format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. The default value is 'auto'.
max_iterations : int, optional
The maximum number of allowed passes through the data. More passes over
the data can result in a more accurately trained model. Consider
increasing this (the default value is 10) if the training accuracy is
low and the *Grad-Norm* in the display is large.
verbose : bool, optional
If True, prints progress updates and model details.
seed : int, optional
Seed for random number generation. Set this value to ensure that the
same model is created every time.
batch_size : int, optional
If you are getting memory errors, try decreasing this value. If you
have a powerful computer, increasing this value may improve performance.
Returns
-------
out : ImageClassifier
A trained :class:`ImageClassifier` model.
Examples
--------
.. sourcecode:: python
>>> model = turicreate.image_classifier.create(data, target='is_expensive')
# Make predictions (in various forms)
>>> predictions = model.predict(data) # predictions
>>> predictions = model.classify(data) # predictions with confidence
>>> predictions = model.predict_topk(data) # Top-5 predictions (multiclass)
# Evaluate the model with ground truth data
>>> results = model.evaluate(data)
See Also
--------
ImageClassifier | Below is the instruction that describes the task:
### Input:
Create a :class:`ImageClassifier` model.
Parameters
----------
dataset : SFrame
Input data. The column named by the 'feature' parameter will be
extracted for modeling.
target : string, or int
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in the order in which they are provided.
For example, a target variable with 'cat' and 'dog' as possible
values is mapped to 0 and 1 respectively with 0 being the base class
and 1 being the reference class. Use `model.classes` to retrieve
the order in which the classes are mapped.
feature : string, optional
Name of the column containing the input images. 'None' (the default)
indicates the only image column in `dataset` should be used as the
feature.
l2_penalty : float, optional
Weight on l2 regularization of the model. The larger this weight, the
more the model coefficients shrink toward 0. This introduces bias into
the model but decreases variance, potentially leading to better
predictions. The default value is 0.01; setting this parameter to 0
corresponds to unregularized logistic regression. See the ridge
regression reference for more detail.
l1_penalty : float, optional
Weight on l1 regularization of the model. Like the l2 penalty, the
higher the l1 penalty, the more the estimated coefficients shrink toward
0. The l1 penalty, however, completely zeros out sufficiently small
coefficients, automatically indicating features that are not useful
for the model. The default weight of 0 prevents any features from
being discarded. See the LASSO regression reference for more detail.
solver : string, optional
Name of the solver to be used to solve the regression. See the
references for more detail on each solver. Available solvers are:
- *auto (default)*: automatically chooses the best solver for the data
and model parameters.
- *newton*: Newton-Raphson
- *lbfgs*: limited memory BFGS
- *fista*: accelerated gradient descent
For this model, the Newton-Raphson method is equivalent to the
iteratively re-weighted least squares algorithm. If the l1_penalty is
greater than 0, use the 'fista' solver.
The model is trained using a carefully engineered collection of methods
that are automatically picked based on the input data. The ``newton``
method works best for datasets with plenty of examples and few features
(long datasets). Limited memory BFGS (``lbfgs``) is a robust solver for
wide datasets (i.e. datasets with many coefficients). ``fista`` is the
default solver for l1-regularized linear regression. The solvers are all
automatically tuned and the default options should function well. See
the solver options guide for setting additional parameters for each of
the solvers.
See the user guide for additional details on how the solver is chosen.
(see `here
<https://apple.github.io/turicreate/docs/userguide/supervised-learning/linear-regression.html>`_)
feature_rescaling : boolean, optional
Feature rescaling is an important pre-processing step that ensures that
all features are on the same scale. An l2-norm rescaling is performed
to make sure that all features are of the same norm. Categorical
features are also rescaled by rescaling the dummy variables that are
used to represent them. The coefficients are returned in original scale
of the problem. This process is particularly useful when features
vary widely in their ranges.
convergence_threshold : float, optional
Convergence is tested using variation in the training objective. The
variation in the training objective is calculated using the difference
between the objective values between two steps. Consider reducing this
below the default value (0.01) for a more accurately trained model.
Beware of overfitting (i.e. a model that works well only on the training
data) if this parameter is set to a very low value.
lbfgs_memory_level : float, optional
The L-BFGS algorithm keeps track of gradient information from the
previous ``lbfgs_memory_level`` iterations. The storage requirement for
each of these gradients is the ``num_coefficients`` in the problem.
Increasing the ``lbfgs_memory_level`` can help improve the quality of
the model trained. Setting this to more than ``max_iterations`` has the
same effect as setting it to ``max_iterations``.
model : string, optional
Uses a pretrained model to bootstrap an image classifier:
- "resnet-50" : Uses a pretrained resnet model.
Exported Core ML model will be ~90M.
- "squeezenet_v1.1" : Uses a pretrained squeezenet model.
Exported Core ML model will be ~4.7M.
- "VisionFeaturePrint_Scene": Uses an OS internal feature extractor.
Only available on iOS 12.0+,
macOS 10.14+ and tvOS 12.0+.
Exported Core ML model will be ~41K.
Models are downloaded from the internet if not available locally. Once
downloaded, the models are cached for future use.
step_size : float, optional
The starting step size to use for the ``fista`` solver. The default is
set to 1.0, this is an aggressive setting. If the first iteration takes
a considerable amount of time, reducing this parameter may speed up
model training.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If set to `None`, all classes are supposed to have weight one. The
`auto` mode sets the class weight to be inversely proportional to the number of
examples in the training data with the given class.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
The format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. The default value is 'auto'.
max_iterations : int, optional
The maximum number of allowed passes through the data. More passes over
the data can result in a more accurately trained model. Consider
increasing this (the default value is 10) if the training accuracy is
low and the *Grad-Norm* in the display is large.
verbose : bool, optional
If True, prints progress updates and model details.
seed : int, optional
Seed for random number generation. Set this value to ensure that the
same model is created every time.
batch_size : int, optional
If you are getting memory errors, try decreasing this value. If you
have a powerful computer, increasing this value may improve performance.
Returns
-------
out : ImageClassifier
A trained :class:`ImageClassifier` model.
Examples
--------
.. sourcecode:: python
>>> model = turicreate.image_classifier.create(data, target='is_expensive')
# Make predictions (in various forms)
>>> predictions = model.predict(data) # predictions
>>> predictions = model.classify(data) # predictions with confidence
>>> predictions = model.predict_topk(data) # Top-5 predictions (multiclass)
# Evaluate the model with ground truth data
>>> results = model.evaluate(data)
See Also
--------
ImageClassifier
### Response:
def create(dataset, target, feature=None, model = 'resnet-50',
l2_penalty=0.01,
l1_penalty=0.0,
solver='auto', feature_rescaling=True,
convergence_threshold = _DEFAULT_SOLVER_OPTIONS['convergence_threshold'],
step_size = _DEFAULT_SOLVER_OPTIONS['step_size'],
lbfgs_memory_level = _DEFAULT_SOLVER_OPTIONS['lbfgs_memory_level'],
max_iterations = _DEFAULT_SOLVER_OPTIONS['max_iterations'],
class_weights = None,
validation_set = 'auto',
verbose=True,
seed=None,
batch_size=64):
"""
Create a :class:`ImageClassifier` model.
Parameters
----------
dataset : SFrame
Input data. The column named by the 'feature' parameter will be
extracted for modeling.
target : string, or int
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in the order in which they are provided.
For example, a target variable with 'cat' and 'dog' as possible
values is mapped to 0 and 1 respectively with 0 being the base class
and 1 being the reference class. Use `model.classes` to retrieve
the order in which the classes are mapped.
feature : string, optional
Name of the column containing the input images. 'None' (the default)
indicates the only image column in `dataset` should be used as the
feature.
l2_penalty : float, optional
Weight on l2 regularization of the model. The larger this weight, the
more the model coefficients shrink toward 0. This introduces bias into
the model but decreases variance, potentially leading to better
predictions. The default value is 0.01; setting this parameter to 0
corresponds to unregularized logistic regression. See the ridge
regression reference for more detail.
l1_penalty : float, optional
Weight on l1 regularization of the model. Like the l2 penalty, the
higher the l1 penalty, the more the estimated coefficients shrink toward
0. The l1 penalty, however, completely zeros out sufficiently small
coefficients, automatically indicating features that are not useful
for the model. The default weight of 0 prevents any features from
being discarded. See the LASSO regression reference for more detail.
solver : string, optional
Name of the solver to be used to solve the regression. See the
references for more detail on each solver. Available solvers are:
- *auto (default)*: automatically chooses the best solver for the data
and model parameters.
- *newton*: Newton-Raphson
- *lbfgs*: limited memory BFGS
- *fista*: accelerated gradient descent
For this model, the Newton-Raphson method is equivalent to the
iteratively re-weighted least squares algorithm. If the l1_penalty is
greater than 0, use the 'fista' solver.
The model is trained using a carefully engineered collection of methods
that are automatically picked based on the input data. The ``newton``
method works best for datasets with plenty of examples and few features
(long datasets). Limited memory BFGS (``lbfgs``) is a robust solver for
wide datasets (i.e. datasets with many coefficients). ``fista`` is the
default solver for l1-regularized linear regression. The solvers are all
automatically tuned and the default options should function well. See
the solver options guide for setting additional parameters for each of
the solvers.
See the user guide for additional details on how the solver is chosen.
(see `here
<https://apple.github.io/turicreate/docs/userguide/supervised-learning/linear-regression.html>`_)
feature_rescaling : boolean, optional
Feature rescaling is an important pre-processing step that ensures that
all features are on the same scale. An l2-norm rescaling is performed
to make sure that all features are of the same norm. Categorical
features are also rescaled by rescaling the dummy variables that are
used to represent them. The coefficients are returned in original scale
of the problem. This process is particularly useful when features
vary widely in their ranges.
convergence_threshold : float, optional
Convergence is tested using variation in the training objective. The
variation in the training objective is calculated using the difference
between the objective values between two steps. Consider reducing this
below the default value (0.01) for a more accurately trained model.
Beware of overfitting (i.e. a model that works well only on the training
data) if this parameter is set to a very low value.
lbfgs_memory_level : float, optional
The L-BFGS algorithm keeps track of gradient information from the
previous ``lbfgs_memory_level`` iterations. The storage requirement for
each of these gradients is the ``num_coefficients`` in the problem.
Increasing the ``lbfgs_memory_level`` can help improve the quality of
the model trained. Setting this to more than ``max_iterations`` has the
same effect as setting it to ``max_iterations``.
model : string, optional
Uses a pretrained model to bootstrap an image classifier:
- "resnet-50" : Uses a pretrained resnet model.
Exported Core ML model will be ~90M.
- "squeezenet_v1.1" : Uses a pretrained squeezenet model.
Exported Core ML model will be ~4.7M.
- "VisionFeaturePrint_Scene": Uses an OS internal feature extractor.
Only available on iOS 12.0+,
macOS 10.14+ and tvOS 12.0+.
Exported Core ML model will be ~41K.
Models are downloaded from the internet if not available locally. Once
downloaded, the models are cached for future use.
step_size : float, optional
The starting step size to use for the ``fista`` solver. The default is
set to 1.0, this is an aggressive setting. If the first iteration takes
a considerable amount of time, reducing this parameter may speed up
model training.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If set to `None`, all classes are supposed to have weight one. The
`auto` mode sets the class weight to be inversely proportional to the number of
examples in the training data with the given class.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
The format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. The default value is 'auto'.
max_iterations : int, optional
The maximum number of allowed passes through the data. More passes over
the data can result in a more accurately trained model. Consider
increasing this (the default value is 10) if the training accuracy is
low and the *Grad-Norm* in the display is large.
verbose : bool, optional
If True, prints progress updates and model details.
seed : int, optional
Seed for random number generation. Set this value to ensure that the
same model is created every time.
batch_size : int, optional
If you are getting memory errors, try decreasing this value. If you
have a powerful computer, increasing this value may improve performance.
Returns
-------
out : ImageClassifier
A trained :class:`ImageClassifier` model.
Examples
--------
.. sourcecode:: python
>>> model = turicreate.image_classifier.create(data, target='is_expensive')
# Make predictions (in various forms)
>>> predictions = model.predict(data) # predictions
>>> predictions = model.classify(data) # predictions with confidence
>>> predictions = model.predict_topk(data) # Top-5 predictions (multiclass)
# Evaluate the model with ground truth data
>>> results = model.evaluate(data)
See Also
--------
ImageClassifier
"""
start_time = _time.time()
# Check model parameter
allowed_models = list(_pre_trained_models.MODELS.keys())
if _mac_ver() >= (10,14):
allowed_models.append('VisionFeaturePrint_Scene')
# Also, to make sure existing code doesn't break, replace incorrect name
# with the correct name version
if model == "VisionFeaturePrint_Screen":
print("WARNING: Correct spelling of model name is VisionFeaturePrint_Scene; VisionFeaturePrint_Screen will be removed in subsequent versions.")
model = "VisionFeaturePrint_Scene"
_tkutl._check_categorical_option_type('model', model, allowed_models)
# Check dataset parameter
if len(dataset) == 0:
raise _ToolkitError('Unable to train on empty dataset')
if (feature is not None) and (feature not in dataset.column_names()):
raise _ToolkitError("Image feature column '%s' does not exist" % feature)
if target not in dataset.column_names():
raise _ToolkitError("Target column '%s' does not exist" % target)
if(batch_size < 1):
raise ValueError("'batch_size' must be greater than or equal to 1")
if not (isinstance(validation_set, _tc.SFrame) or validation_set == 'auto' or validation_set is None):
raise TypeError("Unrecognized value for 'validation_set'.")
if feature is None:
feature = _tkutl._find_only_image_column(dataset)
feature_extractor = _image_feature_extractor._create_feature_extractor(model)
# Extract features
extracted_features = _tc.SFrame({
target: dataset[target],
'__image_features__': feature_extractor.extract_features(dataset, feature, verbose=verbose, batch_size=batch_size),
})
if isinstance(validation_set, _tc.SFrame):
extracted_features_validation = _tc.SFrame({
target: validation_set[target],
'__image_features__': feature_extractor.extract_features(validation_set, feature, verbose=verbose, batch_size=batch_size),
})
else:
extracted_features_validation = validation_set
# Train a classifier using the extracted features
extracted_features[target] = dataset[target]
lr_model = _tc.logistic_classifier.create(extracted_features,
features=['__image_features__'],
target=target,
max_iterations=max_iterations,
validation_set=extracted_features_validation,
seed=seed,
verbose=verbose, l2_penalty=l2_penalty, l1_penalty=l1_penalty,
solver=solver, feature_rescaling=feature_rescaling,
convergence_threshold=convergence_threshold,
step_size=step_size,
lbfgs_memory_level=lbfgs_memory_level,
class_weights=class_weights)
# set input image shape
if model in _pre_trained_models.MODELS:
input_image_shape = _pre_trained_models.MODELS[model].input_image_shape
else: # model == VisionFeaturePrint_Scene
input_image_shape = (3, 299, 299)
# Save the model
state = {
'classifier': lr_model,
'model': model,
'max_iterations': max_iterations,
'feature_extractor': feature_extractor,
'input_image_shape': input_image_shape,
'target': target,
'feature': feature,
'num_features': 1,
'num_classes': lr_model.num_classes,
'classes': lr_model.classes,
'num_examples': lr_model.num_examples,
'training_time': _time.time() - start_time,
'training_loss': lr_model.training_loss,
}
return ImageClassifier(state) |
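As a complement to the Examples block above, here is a hedged end-to-end sketch; the SFrame path, the 'label' column name and the 80/20 split are illustrative assumptions, while the keyword arguments are the documented ones.
import turicreate as tc
data = tc.SFrame('photos.sframe')              # assumed to hold an Image column and a 'label' column
train, val = data.random_split(0.8, seed=42)   # hold out 20% for validation
model = tc.image_classifier.create(
    train,
    target='label',
    model='squeezenet_v1.1',   # smaller exported Core ML model (~4.7M)
    class_weights='auto',      # re-weight under-represented classes
    validation_set=val,
    max_iterations=20,
)
predictions = model.classify(val)              # predicted class plus confidence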
def check_output_format(fmt, nfiles):
"""Validate file format string.
Parameters
----------
fmt : `str`
File format string.
nfiles : `int`
Number of files.
Raises
------
ValueError
If nfiles < 0 or format string is invalid.
"""
if nfiles < 0:
raise ValueError('Invalid file count: ' + str(nfiles))
if nfiles == 1:
return
try:
fmt % nfiles
except TypeError as err:
raise ValueError(''.join(
('Invalid file format string: ', fmt, ': ', str(err))
)) | Validate file format string.
Parameters
----------
fmt : `str`
File format string.
nfiles : `int`
Number of files.
Raises
------
ValueError
If nfiles < 0 or format string is invalid. | Below is the instruction that describes the task:
### Input:
Validate file format string.
Parameters
----------
fmt : `str`
File format string.
nfiles : `int`
Number of files.
Raises
------
ValueError
If nfiles < 0 or format string is invalid.
### Response:
def check_output_format(fmt, nfiles):
"""Validate file format string.
Parameters
----------
fmt : `str`
File format string.
nfiles : `int`
Number of files.
Raises
------
ValueError
If nfiles < 0 or format string is invalid.
"""
if nfiles < 0:
raise ValueError('Invalid file count: ' + str(nfiles))
if nfiles == 1:
return
try:
fmt % nfiles
except TypeError as err:
raise ValueError(''.join(
('Invalid file format string: ', fmt, ': ', str(err))
)) |
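For illustration, a few calls showing how the '%'-style check above behaves (assuming check_output_format is in scope):
check_output_format('chunk_%03d.dat', 10)   # OK: the format string carries a placeholder
check_output_format('single.dat', 1)        # OK: a single file needs no placeholder
try:
    check_output_format('output.dat', 5)    # no placeholder, so '%'-formatting fails
except ValueError as err:
    print(err)                              # Invalid file format string: output.dat: ...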
def _get_x(self, kwargs):
'''
Returns x if it is explicitly defined in kwargs.
Otherwise, raises TypeError.
'''
if 'x' in kwargs:
return round(float(kwargs['x']), 6)
elif self._element_x in kwargs:
return round(float(kwargs[self._element_x]), 6)
elif self._type == 3 and self._element_1mx in kwargs:
return round(1. - float(kwargs[self._element_1mx]), 6)
else:
raise TypeError() | Returns x if it is explicitly defined in kwargs.
Otherwise, raises TypeError. | Below is the instruction that describes the task:
### Input:
Returns x if it is explicitly defined in kwargs.
Otherwise, raises TypeError.
### Response:
def _get_x(self, kwargs):
'''
Returns x if it is explicitly defined in kwargs.
Otherwise, raises TypeError.
'''
if 'x' in kwargs:
return round(float(kwargs['x']), 6)
elif self._element_x in kwargs:
return round(float(kwargs[self._element_x]), 6)
elif self._type == 3 and self._element_1mx in kwargs:
return round(1. - float(kwargs[self._element_1mx]), 6)
else:
raise TypeError() |
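To make the lookup order concrete, here is a self-contained sketch with a hypothetical host class; the element keywords ('Al', 'Ga') and the _type == 3 setting are invented purely for illustration.
class _CompositionStub(object):
    _element_x = 'Al'      # keyword for the x-fraction element
    _element_1mx = 'Ga'    # keyword for the (1 - x) element, used when _type == 3
    _type = 3
    def _get_x(self, kwargs):
        if 'x' in kwargs:
            return round(float(kwargs['x']), 6)
        elif self._element_x in kwargs:
            return round(float(kwargs[self._element_x]), 6)
        elif self._type == 3 and self._element_1mx in kwargs:
            return round(1. - float(kwargs[self._element_1mx]), 6)
        else:
            raise TypeError()
stub = _CompositionStub()
print(stub._get_x({'x': 0.3}))    # 0.3   -- explicit 'x' takes precedence
print(stub._get_x({'Al': 0.25}))  # 0.25  -- named element fraction
print(stub._get_x({'Ga': 0.75}))  # 0.25  -- complement: 1 - 0.75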
def GetUcsPropertyMetaAttributeList(classId):
""" Methods returns the class meta. """
if classId in _ManagedObjectMeta:
attrList = _ManagedObjectMeta[classId].keys()
attrList.remove("Meta")
return attrList
if classId in _MethodFactoryMeta:
attrList = _MethodFactoryMeta[classId].keys()
attrList.remove("Meta")
return attrList
# If the case of classId is not as in Meta
nci = UcsUtils.FindClassIdInMoMetaIgnoreCase(classId)
if (nci != None):
attrList = _ManagedObjectMeta[nci].keys()
attrList.remove("Meta")
return attrList
nci = UcsUtils.FindClassIdInMethodMetaIgnoreCase(classId)
if (nci != None):
attrList = _MethodFactoryMeta[nci].keys()
attrList.remove("Meta")
return attrList
return None | Method returns the class meta. | Below is the instruction that describes the task:
### Input:
Method returns the class meta.
### Response:
def GetUcsPropertyMetaAttributeList(classId):
""" Methods returns the class meta. """
if classId in _ManagedObjectMeta:
attrList = _ManagedObjectMeta[classId].keys()
attrList.remove("Meta")
return attrList
if classId in _MethodFactoryMeta:
attrList = _MethodFactoryMeta[classId].keys()
attrList.remove("Meta")
return attrList
# If the case of classId is not as in Meta
nci = UcsUtils.FindClassIdInMoMetaIgnoreCase(classId)
if (nci != None):
attrList = _ManagedObjectMeta[nci].keys()
attrList.remove("Meta")
return attrList
nci = UcsUtils.FindClassIdInMethodMetaIgnoreCase(classId)
if (nci != None):
attrList = _MethodFactoryMeta[nci].keys()
attrList.remove("Meta")
return attrList
return None |
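A hedged usage sketch; 'lsServer' is just an example UCS class id and the surrounding SDK module is assumed to be imported.
props = GetUcsPropertyMetaAttributeList('lsServer')   # hypothetical class id
if props is None:
    print('Unknown class id')
else:
    for prop in sorted(props):
        print(prop)   # every property name except the internal "Meta" entry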
def convolved_1d(iterable, kernel_size=1, stride=1, padding=0, default_value=None):
"""1D Iterable to get every convolution window per loop iteration.
For more information, refer to:
- https://github.com/guillaume-chevalier/python-conv-lib/blob/master/conv/conv.py
- https://github.com/guillaume-chevalier/python-conv-lib
- MIT License, Copyright (c) 2018 Guillaume Chevalier
"""
return convolved(iterable, kernel_size, stride, padding, default_value) | 1D Iterable to get every convolution window per loop iteration.
For more information, refer to:
- https://github.com/guillaume-chevalier/python-conv-lib/blob/master/conv/conv.py
- https://github.com/guillaume-chevalier/python-conv-lib
- MIT License, Copyright (c) 2018 Guillaume Chevalier | Below is the instruction that describes the task:
### Input:
1D Iterable to get every convolution window per loop iteration.
For more information, refer to:
- https://github.com/guillaume-chevalier/python-conv-lib/blob/master/conv/conv.py
- https://github.com/guillaume-chevalier/python-conv-lib
- MIT License, Copyright (c) 2018 Guillaume Chevalier
### Response:
def convolved_1d(iterable, kernel_size=1, stride=1, padding=0, default_value=None):
"""1D Iterable to get every convolution window per loop iteration.
For more information, refer to:
- https://github.com/guillaume-chevalier/python-conv-lib/blob/master/conv/conv.py
- https://github.com/guillaume-chevalier/python-conv-lib
- MIT License, Copyright (c) 2018 Guillaume Chevalier
"""
return convolved(iterable, kernel_size, stride, padding, default_value) |
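A short illustration of what each loop iteration yields, assuming the padding semantics of the linked python-conv-lib project (padding=0 means no padding):
data = [1, 2, 3, 4, 5]
for window in convolved_1d(data, kernel_size=3, stride=1):
    print(window)   # [1, 2, 3], then [2, 3, 4], then [3, 4, 5]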
def content():
"""Helper method that returns just the content.
This method was added so that the text could be reused in the
dock_help module.
.. versionadded:: 3.2.2
:returns: A message object without brand element.
:rtype: safe.messaging.message.Message
"""
message = m.Message()
paragraph = m.Paragraph(tr(
'InaSAFE is free software that produces realistic natural hazard '
'impact scenarios for better planning, preparedness and response '
'activities. It provides a simple but rigorous way to combine data '
'from scientists, local governments and communities to provide '
'insights into the likely impacts of future disaster events.'
))
message.add(paragraph)
paragraph = m.Paragraph(tr(
'The InaSAFE \'dock panel\' helps you to run hazard impact analysis '
'within the QGIS environment. It helps you create your hazard impact '
'analysis question and shows the results of this analysis. If you are '
'a new user, you may also consider using the \'Impact Function '
'Centric Wizard\' to run the analysis. This wizard will guide you '
'through the process of running an InaSAFE assessment, with '
'interactive step by step instructions. You can launch the wizard '
'by clicking on this icon in the toolbar:'),
m.Image(
'file:///%s/img/icons/'
'show-wizard.svg' % resources_path(),
**SMALL_ICON_STYLE),
)
message.add(paragraph)
paragraph = m.Paragraph(tr(
'You can drag and drop the dock panel to reposition it on the screen. '
'For example, dragging the panel towards the right margin of the QGIS '
'application will dock it to the right side of the screen.'
))
message.add(paragraph)
message.add(m.Paragraph(tr(
'There are three main areas to the dock panel:')))
bullets = m.BulletedList()
bullets.add(m.Text(
# format 'the __questions__ area' for proper i18n
tr('the %s area') % (
m.ImportantText(tr(
'questions')).to_html(),
)))
bullets.add(m.Text(
# format 'the __results__ area' for proper i18n
tr('the %s area') % (
m.ImportantText(tr(
'results')).to_html(),
)))
bullets.add(m.Text(
# format 'the __buttons__ area' for proper i18n
tr('the %s area') % (
m.ImportantText(tr(
'buttons')).to_html(),
)))
message.add(bullets)
message.add(m.Paragraph(tr(
'You can get help at any time in InaSAFE by clicking on the '
'help buttons provided on each dock and dialog.')))
header = m.Heading(tr('The questions area'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'The intention of InaSAFE is to make it easy to perform your impact '
'analysis. We start the analysis in the questions area. This area '
'contains three drop down menus. You create your question by using '
'these drop down menus to select the hazard and exposure data you '
'wish to perform the analysis on. '
'All questions follow this form:'),
m.EmphasizedText(tr(
'In the event of a [hazard], how many [exposure] might be '
'[impacted]?'))))
message.add(m.Paragraph(tr(
'For example: "If there is a flood, how many buildings might be '
'flooded?"')))
message.add(m.Paragraph(tr(
'InaSAFE can be used to answer such questions for hazards such as '
'flood, tsunami, volcanic ash fall and earthquake and exposures '
'such as population, roads, structures, land cover etc.')))
message.add(m.Paragraph(tr(
'The first step in answering these questions is to load layers that '
'represent either hazard scenarios or exposure data into QGIS. '
'A hazard, for example, may be represented as a raster layer in '
'QGIS where each pixel in the raster represents the flood depth '
'following an inundation event. An exposure layer could be '
'represented, for example, as vector polygon data representing '
'building outlines, or a raster outline where each pixel represents '
'the number of people thought to be living in that cell.')))
message.add(m.Paragraph(tr(
'InaSAFE will combine these two layers in a '
'mathematical model. The results of this model will show what the '
'effect of the hazard will be on the exposed infrastructure or '
'people. The plugin relies on simple keyword metadata '
'associated with each layer to determine what kind of information the '
'layer represents. You can define these keywords by '
'selecting a layer and then clicking the InaSAFE Keywords Wizard icon '
'on the toolbar: '),
m.Image(
'file:///%s/img/icons/'
'show-keyword-wizard.svg' % resources_path(),
**SMALL_ICON_STYLE),
tr(
'The wizard will guide you through the process of defining the '
'keywords for that layer.')))
message.add(m.Paragraph(tr(
'Aggregation is the process whereby we group the analysis results '
'by district so that you can see how many people, roads or '
'buildings were affected in each area. This will help you to '
'understand where the most critical needs are. Aggregation is '
'optional in InaSAFE - if you do not use aggregation, the entire '
'analysis area will be used for the data summaries. Typically '
'aggregation layers in InaSAFE have the name of the district or '
'reporting area as attributes. It is also possible to use extended '
'attributes to indicate the ratio of men and women; youth, adults '
'and elderly living in each area. Where these are provided and the '
'exposure layer is population, InaSAFE will provide a demographic '
'breakdown per aggregation area indicating how many men, women, etc. '
'were probably affected in that area.'
)))
header = m.Heading(tr('The results area'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'After running an analysis, the question area is hidden to maximise '
'the amount of space allocated to the results area. You can '
're-open the question area at any time by pressing the \'show '
'question form\' button.')))
message.add(m.Paragraph(tr(
'The results area is used to display various useful feedback items to '
'the user. Once an impact scenario has been run, a summary table will '
'be shown.')))
message.add(m.Paragraph(tr(
'If you select an impact layer (i.e. a layer that was produced using '
'an InaSAFE Impact Function), in the QGIS layers list, this summary '
'will also be displayed in the results area. When you select a hazard '
'or exposure layer in the QGIS layers list, the keywords for that '
'layer will be shown in the results area, making it easy to '
'understand what metadata exists for that layer.')))
message.add(m.Paragraph(tr(
'The results area is also used to display status information. For '
'example, during the analysis process, the status area will display '
'notes about each step in the analysis process. The \'Run\' '
'button will be activated when both a valid hazard and valid exposure '
'layer have been added in QGIS.'
)))
message.add(m.Paragraph(tr(
'Finally, the results area is also used to display any error messages '
'so that you can see what went wrong and why. You may need to '
'scroll down to view the message completely to see all of the error '
'message details.'
)))
message.add(m.Paragraph(tr(
'After running the impact scenario calculation, the question is '
'automatically hidden to make the results area as large as possible. '
'If you want to see what the question used in the analysis was, click '
'on the \'Show question form\' button at the top of the results area.'
)))
message.add(m.Paragraph(tr(
'If you want to hide the question area again to have more space to '
'display the results, click on the layer you just calculated '
'with InaSAFE in the Layers list of QGIS to make it active.'
)))
header = m.Heading(tr('The buttons area'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'The buttons area contains four buttons:')))
bullets = m.BulletedList()
bullets.add(m.Text(
m.ImportantText(tr('Help')),
tr(
'- click on this if you need context help, such as the document '
'you are reading right now!')))
bullets.add(m.Text(
m.ImportantText(tr('About')),
tr(
'- click on this to see short credits for the InaSAFE project.')))
bullets.add(m.Text(
m.ImportantText(tr('Print')),
tr(
'... - click on this if you wish to create a pdf of your '
'impact scenario project or generate a report to open in '
'composer for further tuning. An impact layer must be active '
'before the \'Print\' button will be enabled.')))
bullets.add(m.Text(
m.ImportantText(tr('Run')),
tr(
'- this button is enabled when the combination of hazard and '
'exposure selected in the questions area\'s drop down menus will '
'allow you to run a scenario.')))
message.add(bullets)
header = m.Heading(tr('Data conversions'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'When running a scenario, the data being used needs to be processed '
'into a state where it is acceptable for use by InaSAFE. '
'In particular it should be noted that:')))
bullets = m.BulletedList()
bullets.add(tr(
'Remote datasets will be copied locally before processing.'))
bullets.add(m.Text(
tr(
'All datasets will be clipped to the behaviours defined in the '
'analysis extents dialog if you do not use an aggregation layer.'),
m.Image(
'file:///%s/img/icons/'
'set-extents-tool.svg' % resources_path(),
**SMALL_ICON_STYLE)
))
bullets.add(m.Text(
tr(
'You can visualise the area that will be used for the analysis '
'by enabling the "Toggle Scenario Outlines" tool. When this tool '
'is enabled, a line (green by default) will be drawn around the '
'outermost boundary of the analysis area.'),
m.Image(
'file:///%s/img/icons/'
'toggle-rubber-bands.svg' % resources_path(),
**SMALL_ICON_STYLE)
))
bullets.add(m.Text(
tr(
'When you have selected an aggregation layer the analysis area '
'will be the outline of the aggregation layer. If you select one '
'or more polygons in the aggregation layer (by using the QGIS '
'feature selection tools), the analysis boundary will be reduced '
'to just the outline of these selected polygons. If the "Toggle '
'Scenario Outlines" tool is enabled, the preview of the effective '
'analysis area will be updated to reflect the selected features.'),
))
bullets.add(tr(
'All clipped datasets will be converted (reprojected) to the '
'Coordinate Reference System of the exposure layer '
'before analysis.'))
message.add(bullets)
header = m.Heading(tr('Generating impact reports'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'When the impact analysis has completed you may want to generate a '
'report. Usually the \'Print...\' button will be enabled immediately '
'after analysis. Selecting an InaSAFE impact layer in QGIS Layers '
'panel will also enable it.'
)))
# This adds the help content of the print dialog
message.add(report())
return message | Helper method that returns just the content.
This method was added so that the text could be reused in the
dock_help module.
.. versionadded:: 3.2.2
:returns: A message object without brand element.
:rtype: safe.messaging.message.Message | Below is the instruction that describes the task:
### Input:
Helper method that returns just the content.
This method was added so that the text could be reused in the
dock_help module.
.. versionadded:: 3.2.2
:returns: A message object without brand element.
:rtype: safe.messaging.message.Message
### Response:
def content():
"""Helper method that returns just the content.
This method was added so that the text could be reused in the
dock_help module.
.. versionadded:: 3.2.2
:returns: A message object without brand element.
:rtype: safe.messaging.message.Message
"""
message = m.Message()
paragraph = m.Paragraph(tr(
'InaSAFE is free software that produces realistic natural hazard '
'impact scenarios for better planning, preparedness and response '
'activities. It provides a simple but rigorous way to combine data '
'from scientists, local governments and communities to provide '
'insights into the likely impacts of future disaster events.'
))
message.add(paragraph)
paragraph = m.Paragraph(tr(
'The InaSAFE \'dock panel\' helps you to run hazard impact analysis '
'within the QGIS environment. It helps you create your hazard impact '
'analysis question and shows the results of this analysis. If you are '
'a new user, you may also consider using the \'Impact Function '
'Centric Wizard\' to run the analysis. This wizard will guide you '
'through the process of running an InaSAFE assessment, with '
'interactive step by step instructions. You can launch the wizard '
'by clicking on this icon in the toolbar:'),
m.Image(
'file:///%s/img/icons/'
'show-wizard.svg' % resources_path(),
**SMALL_ICON_STYLE),
)
message.add(paragraph)
paragraph = m.Paragraph(tr(
'You can drag and drop the dock panel to reposition it on the screen. '
'For example, dragging the panel towards the right margin of the QGIS '
'application will dock it to the right side of the screen.'
))
message.add(paragraph)
message.add(m.Paragraph(tr(
'There are three main areas to the dock panel:')))
bullets = m.BulletedList()
bullets.add(m.Text(
# format 'the __questions__ area' for proper i18n
tr('the %s area') % (
m.ImportantText(tr(
'questions')).to_html(),
)))
bullets.add(m.Text(
# format 'the __results__ area' for proper i18n
tr('the %s area') % (
m.ImportantText(tr(
'results')).to_html(),
)))
bullets.add(m.Text(
# format 'the __buttons__ area' for proper i18n
tr('the %s area') % (
m.ImportantText(tr(
'buttons')).to_html(),
)))
message.add(bullets)
message.add(m.Paragraph(tr(
'You can get help at any time in InaSAFE by clicking on the '
'help buttons provided on each dock and dialog.')))
header = m.Heading(tr('The questions area'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'The intention of InaSAFE is to make it easy to perform your impact '
'analysis. We start the analysis in the questions area. This area '
'contains three drop down menus. You create your question by using '
'these drop down menus to select the hazard and exposure data you '
'wish to perform the analysis on. '
'All questions follow this form:'),
m.EmphasizedText(tr(
'In the event of a [hazard], how many [exposure] might be '
'[impacted]?'))))
message.add(m.Paragraph(tr(
'For example: "If there is a flood, how many buildings might be '
'flooded?"')))
message.add(m.Paragraph(tr(
'InaSAFE can be used to answer such questions for hazards such as '
'flood, tsunami, volcanic ash fall and earthquake and exposures '
'such as population, roads, structures, land cover etc.')))
message.add(m.Paragraph(tr(
'The first step in answering these questions is to load layers that '
'represent either hazard scenarios or exposure data into QGIS. '
'A hazard, for example, may be represented as a raster layer in '
'QGIS where each pixel in the raster represents the flood depth '
'following an inundation event. An exposure layer could be '
'represented, for example, as vector polygon data representing '
'building outlines, or a raster outline where each pixel represents '
'the number of people thought to be living in that cell.')))
message.add(m.Paragraph(tr(
'InaSAFE will combine these two layers in a '
'mathematical model. The results of this model will show what the '
'effect of the hazard will be on the exposed infrastructure or '
'people. The plugin relies on simple keyword metadata '
'associated with each layer to determine what kind of information the '
'layer represents. You can define these keywords by '
'selecting a layer and then clicking the InaSAFE Keywords Wizard icon '
'on the toolbar: '),
m.Image(
'file:///%s/img/icons/'
'show-keyword-wizard.svg' % resources_path(),
**SMALL_ICON_STYLE),
tr(
'The wizard will guide you through the process of defining the '
'keywords for that layer.')))
message.add(m.Paragraph(tr(
'Aggregation is the process whereby we group the analysis results '
'by district so that you can see how many people, roads or '
'buildings were affected in each area. This will help you to '
'understand where the most critical needs are. Aggregation is '
'optional in InaSAFE - if you do not use aggregation, the entire '
'analysis area will be used for the data summaries. Typically '
'aggregation layers in InaSAFE have the name of the district or '
'reporting area as attributes. It is also possible to use extended '
'attributes to indicate the ratio of men and women; youth, adults '
'and elderly living in each area. Where these are provided and the '
'exposure layer is population, InaSAFE will provide a demographic '
'breakdown per aggregation area indicating how many men, women, etc. '
'were probably affected in that area.'
)))
header = m.Heading(tr('The results area'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'After running an analysis, the question area is hidden to maximise '
'the amount of space allocated to the results area. You can '
're-open the question area at any time by pressing the \'show '
'question form\' button.')))
message.add(m.Paragraph(tr(
'The results area is used to display various useful feedback items to '
'the user. Once an impact scenario has been run, a summary table will '
'be shown.')))
message.add(m.Paragraph(tr(
'If you select an impact layer (i.e. a layer that was produced using '
'an InaSAFE Impact Function), in the QGIS layers list, this summary '
'will also be displayed in the results area. When you select a hazard '
'or exposure layer in the QGIS layers list, the keywords for that '
'layer will be shown in the results area, making it easy to '
'understand what metadata exists for that layer.')))
message.add(m.Paragraph(tr(
'The results area is also used to display status information. For '
'example, during the analysis process, the status area will display '
'notes about each step in the analysis process. The \'Run\' '
'button will be activated when both a valid hazard and valid exposure '
'layer have been added in QGIS.'
)))
message.add(m.Paragraph(tr(
'Finally, the results area is also used to display any error messages '
'so that you can see what went wrong and why. You may need to '
'scroll down to view the message completely to see all of the error '
'message details.'
)))
message.add(m.Paragraph(tr(
'After running the impact scenario calculation, the question is '
'automatically hidden to make the results area as large as possible. '
'If you want to see what the question used in the analysis was, click '
'on the \'Show question form\' button at the top of the results area.'
)))
message.add(m.Paragraph(tr(
'If you want to hide the question area again to have more space to '
'display the results, click on the layer you just calculated '
'with InaSAFE in the Layers list of QGIS to make it active.'
)))
header = m.Heading(tr('The buttons area'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'The buttons area contains four buttons:')))
bullets = m.BulletedList()
bullets.add(m.Text(
m.ImportantText(tr('Help')),
tr(
'- click on this if you need context help, such as the document '
'you are reading right now!')))
bullets.add(m.Text(
m.ImportantText(tr('About')),
tr(
'- click on this to see short credits for the InaSAFE project.')))
bullets.add(m.Text(
m.ImportantText(tr('Print')),
tr(
'... - click on this if you wish to create a pdf of your '
'impact scenario project or generate a report to open in '
'composer for further tuning. An impact layer must be active '
'before the \'Print\' button will be enabled.')))
bullets.add(m.Text(
m.ImportantText(tr('Run')),
tr(
'- this button is enabled when the combination of hazard and '
'exposure selected in the questions area\'s drop down menus will '
'allow you to run a scenario.')))
message.add(bullets)
header = m.Heading(tr('Data conversions'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'When running a scenario, the data being used needs to be processed '
'into a state where it is acceptable for use by InaSAFE. '
'In particular it should be noted that:')))
bullets = m.BulletedList()
bullets.add(tr(
'Remote datasets will be copied locally before processing.'))
bullets.add(m.Text(
tr(
'All datasets will be clipped to the behaviours defined in the '
'analysis extents dialog if you do not use an aggregation layer.'),
m.Image(
'file:///%s/img/icons/'
'set-extents-tool.svg' % resources_path(),
**SMALL_ICON_STYLE)
))
bullets.add(m.Text(
tr(
'You can visualise the area that will be used for the analysis '
'by enabling the "Toggle Scenario Outlines" tool. When this tool '
'is enabled, a line (green by default) will be drawn around the '
'outermost boundary of the analysis area.'),
m.Image(
'file:///%s/img/icons/'
'toggle-rubber-bands.svg' % resources_path(),
**SMALL_ICON_STYLE)
))
bullets.add(m.Text(
tr(
'When you have selected an aggregation layer the analysis area '
'will be the outline of the aggregation layer. If you select one '
'or more polygons in the aggregation layer (by using the QGIS '
'feature selection tools), the analysis boundary will be reduced '
'to just the outline of these selected polygons. If the "Toggle '
'Scenario Outlines" tool is enabled, the preview of the effective '
'analysis area will be updated to reflect the selected features.'),
))
bullets.add(tr(
'All clipped datasets will be converted (reprojected) to the '
'Coordinate Reference System of the exposure layer '
'before analysis.'))
message.add(bullets)
header = m.Heading(tr('Generating impact reports'), **INFO_STYLE)
message.add(header)
message.add(m.Paragraph(tr(
'When the impact analysis has completed you may want to generate a '
'report. Usually the \'Print...\' button will be enabled immediately '
'after analysis. Selecting an InaSAFE impact layer in QGIS Layers '
'panel will also enable it.'
)))
# This adds the help content of the print dialog
message.add(report())
return message |
def write_block_data(self, cmd, block):
"""
Writes a block of bytes to the bus using I2C format to the specified
command register
"""
self.bus.write_i2c_block_data(self.address, cmd, block)
self.log.debug(
"write_block_data: Wrote [%s] to command register 0x%02X" % (
', '.join(['0x%02X' % x for x in block]),
cmd
)
) | Writes a block of bytes to the bus using I2C format to the specified
command register | Below is the the instruction that describes the task:
### Input:
Writes a block of bytes to the bus using I2C format to the specified
command register
### Response:
def write_block_data(self, cmd, block):
"""
Writes a block of bytes to the bus using I2C format to the specified
command register
"""
self.bus.write_i2c_block_data(self.address, cmd, block)
self.log.debug(
"write_block_data: Wrote [%s] to command register 0x%02X" % (
', '.join(['0x%02X' % x for x in block]),
cmd
)
) |
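A minimal usage sketch of the same I2C call (not part of the original sample). It assumes the smbus2 package, a device at the made-up address 0x20, and an arbitrary command register; both must be adjusted for real hardware:

    from smbus2 import SMBus

    I2C_ADDRESS = 0x20   # hypothetical device address
    CMD_REGISTER = 0x01  # hypothetical command register

    with SMBus(1) as bus:  # bus 1 is the usual I2C bus on a Raspberry Pi
        # Write a block of bytes to the command register, as the method above does.
        bus.write_i2c_block_data(I2C_ADDRESS, CMD_REGISTER, [0x00, 0x0F, 0xFF])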
def restart(self, offset: int):
'''Send restart command.
Coroutine.
'''
yield from self._control_stream.write_command(Command('REST', str(offset)))
reply = yield from self._control_stream.read_reply()
self.raise_if_not_match('Restart', ReplyCodes.requested_file_action_pending_further_information, reply) | Send restart command.
Coroutine. | Below is the the instruction that describes the task:
### Input:
Send restart command.
Coroutine.
### Response:
def restart(self, offset: int):
'''Send restart command.
Coroutine.
'''
yield from self._control_stream.write_command(Command('REST', str(offset)))
reply = yield from self._control_stream.read_reply()
self.raise_if_not_match('Restart', ReplyCodes.requested_file_action_pending_further_information, reply) |
def move_group_in_parent(self, group = None, index = None):
"""Move group to another position in group's parent.
index must be a valid index of group.parent.groups
"""
if group is None or index is None:
raise KPError("group and index must be set")
elif type(group) is not v1Group or type(index) is not int:
raise KPError("group must be a v1Group-instance and index "
"must be an integer.")
elif group not in self.groups:
raise KPError("Given group doesn't exist")
elif index < 0 or index >= len(group.parent.children):
raise KPError("index must be a valid index if group.parent.groups")
else:
group_at_index = group.parent.children[index]
pos_in_parent = group.parent.children.index(group)
pos_in_groups = self.groups.index(group)
pos_in_groups2 = self.groups.index(group_at_index)
group.parent.children[index] = group
group.parent.children[pos_in_parent] = group_at_index
self.groups[pos_in_groups2] = group
self.groups[pos_in_groups] = group_at_index
if group.children:
self._move_group_helper(group)
if group_at_index.children:
self._move_group_helper(group_at_index)
group.last_mod = datetime.now().replace(microsecond=0)
return True | Move group to another position in group's parent.
index must be a valid index of group.parent.groups | Below is the the instruction that describes the task:
### Input:
Move group to another position in group's parent.
index must be a valid index of group.parent.groups
### Response:
def move_group_in_parent(self, group = None, index = None):
"""Move group to another position in group's parent.
index must be a valid index of group.parent.groups
"""
if group is None or index is None:
raise KPError("group and index must be set")
elif type(group) is not v1Group or type(index) is not int:
raise KPError("group must be a v1Group-instance and index "
"must be an integer.")
elif group not in self.groups:
raise KPError("Given group doesn't exist")
elif index < 0 or index >= len(group.parent.children):
raise KPError("index must be a valid index if group.parent.groups")
else:
group_at_index = group.parent.children[index]
pos_in_parent = group.parent.children.index(group)
pos_in_groups = self.groups.index(group)
pos_in_groups2 = self.groups.index(group_at_index)
group.parent.children[index] = group
group.parent.children[pos_in_parent] = group_at_index
self.groups[pos_in_groups2] = group
self.groups[pos_in_groups] = group_at_index
if group.children:
self._move_group_helper(group)
if group_at_index.children:
self._move_group_helper(group_at_index)
group.last_mod = datetime.now().replace(microsecond=0)
return True |
def is_blackout(self) -> bool:
"""Does this alert match a blackout period?"""
if not current_app.config['NOTIFICATION_BLACKOUT']:
if self.severity in current_app.config['BLACKOUT_ACCEPT']:
return False
return db.is_blackout_period(self) | Does this alert match a blackout period? | Below is the the instruction that describes the task:
### Input:
Does this alert match a blackout period?
### Response:
def is_blackout(self) -> bool:
"""Does this alert match a blackout period?"""
if not current_app.config['NOTIFICATION_BLACKOUT']:
if self.severity in current_app.config['BLACKOUT_ACCEPT']:
return False
return db.is_blackout_period(self) |
def authenticate(self, auth_token, auth_info, service_name):
"""Authenticates the current auth token.
Args:
auth_token: the auth token.
auth_info: the auth configurations of the API method being called.
service_name: the name of this service.
Returns:
A constructed UserInfo object representing the identity of the caller.
Raises:
UnauthenticatedException: When
* the issuer is not allowed;
* the audiences are not allowed;
* the auth token has already expired.
"""
try:
jwt_claims = self.get_jwt_claims(auth_token)
except Exception as error:
raise suppliers.UnauthenticatedException(u"Cannot decode the auth token",
error)
_check_jwt_claims(jwt_claims)
user_info = UserInfo(jwt_claims)
issuer = user_info.issuer
if issuer not in self._issuers_to_provider_ids:
raise suppliers.UnauthenticatedException(u"Unknown issuer: " + issuer)
provider_id = self._issuers_to_provider_ids[issuer]
if not auth_info.is_provider_allowed(provider_id):
raise suppliers.UnauthenticatedException(u"The requested method does not "
u"allow provider id: " + provider_id)
# Check the audiences decoded from the auth token. The auth token is
# allowed when 1) an audience is equal to the service name, or 2) at least
# one audience is allowed in the method configuration.
audiences = user_info.audiences
has_service_name = service_name in audiences
allowed_audiences = auth_info.get_allowed_audiences(provider_id)
intersected_audiences = set(allowed_audiences).intersection(audiences)
if not has_service_name and not intersected_audiences:
raise suppliers.UnauthenticatedException(u"Audiences not allowed")
return user_info | Authenticates the current auth token.
Args:
auth_token: the auth token.
auth_info: the auth configurations of the API method being called.
service_name: the name of this service.
Returns:
A constructed UserInfo object representing the identity of the caller.
Raises:
UnauthenticatedException: When
* the issuer is not allowed;
* the audiences are not allowed;
* the auth token has already expired. | Below is the the instruction that describes the task:
### Input:
Authenticates the current auth token.
Args:
auth_token: the auth token.
auth_info: the auth configurations of the API method being called.
service_name: the name of this service.
Returns:
A constructed UserInfo object representing the identity of the caller.
Raises:
UnauthenticatedException: When
* the issuer is not allowed;
* the audiences are not allowed;
* the auth token has already expired.
### Response:
def authenticate(self, auth_token, auth_info, service_name):
"""Authenticates the current auth token.
Args:
auth_token: the auth token.
auth_info: the auth configurations of the API method being called.
service_name: the name of this service.
Returns:
A constructed UserInfo object representing the identity of the caller.
Raises:
UnauthenticatedException: When
* the issuer is not allowed;
* the audiences are not allowed;
* the auth token has already expired.
"""
try:
jwt_claims = self.get_jwt_claims(auth_token)
except Exception as error:
raise suppliers.UnauthenticatedException(u"Cannot decode the auth token",
error)
_check_jwt_claims(jwt_claims)
user_info = UserInfo(jwt_claims)
issuer = user_info.issuer
if issuer not in self._issuers_to_provider_ids:
raise suppliers.UnauthenticatedException(u"Unknown issuer: " + issuer)
provider_id = self._issuers_to_provider_ids[issuer]
if not auth_info.is_provider_allowed(provider_id):
raise suppliers.UnauthenticatedException(u"The requested method does not "
u"allow provider id: " + provider_id)
# Check the audiences decoded from the auth token. The auth token is
# allowed when 1) an audience is equal to the service name, or 2) at least
# one audience is allowed in the method configuration.
audiences = user_info.audiences
has_service_name = service_name in audiences
allowed_audiences = auth_info.get_allowed_audiences(provider_id)
intersected_audiences = set(allowed_audiences).intersection(audiences)
if not has_service_name and not intersected_audiences:
raise suppliers.UnauthenticatedException(u"Audiences not allowed")
return user_info |
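The audience rule applied above can be exercised on its own; a small sketch with made-up audience values (the helper name audiences_allowed is illustrative, not from the original code):

    def audiences_allowed(token_audiences, allowed_audiences, service_name):
        # Accept when the service name itself is an audience, or when at least
        # one token audience appears in the configured allow-list.
        has_service_name = service_name in token_audiences
        intersected = set(allowed_audiences).intersection(token_audiences)
        return has_service_name or bool(intersected)

    print(audiences_allowed(["my-api.example.com"], [], "my-api.example.com"))  # True
    print(audiences_allowed(["other-aud"], ["other-aud"], "my-api"))            # True
    print(audiences_allowed(["other-aud"], [], "my-api"))                       # False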
def BLE(self, params):
"""
BLE label
Branch to the instruction at label if the Z flag is set or if the N flag is not the same as the V flag
"""
label = self.get_one_parameter(self.ONE_PARAMETER, params)
self.check_arguments(label_exists=(label,))
# BLE label
def BLE_func():
if self.is_Z_set() or (self.is_N_set() != self.is_V_set()):
self.register['PC'] = self.labels[label]
return BLE_func | BLE label
Branch to the instruction at label if the Z flag is set or if the N flag is not the same as the V flag | Below is the the instruction that describes the task:
### Input:
BLE label
Branch to the instruction at label if the Z flag is set or if the N flag is not the same as the V flag
### Response:
def BLE(self, params):
"""
BLE label
Branch to the instruction at label if the Z flag is set or if the N flag is not the same as the V flag
"""
label = self.get_one_parameter(self.ONE_PARAMETER, params)
self.check_arguments(label_exists=(label,))
# BLE label
def BLE_func():
if self.is_Z_set() or (self.is_N_set() != self.is_V_set()):
self.register['PC'] = self.labels[label]
return BLE_func |
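The method returns a closure that the simulator executes later; a standalone sketch of that deferred-branch pattern with a toy register file and flag set (the names below are illustrative, not the simulator's real API):

    def make_branch_if_le(registers, flags, labels, label):
        # Build the closure now; nothing happens until the simulator calls it.
        def branch():
            if flags["Z"] or (flags["N"] != flags["V"]):
                registers["PC"] = labels[label]
        return branch

    registers = {"PC": 0}
    flags = {"Z": False, "N": True, "V": False}  # N != V, so the branch is taken
    labels = {"loop_end": 42}

    make_branch_if_le(registers, flags, labels, "loop_end")()
    print(registers["PC"])  # 42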
def _function_magic_marker(magic_kind):
"""Decorator factory for standalone functions.
"""
validate_type(magic_kind)
# This is a closure to capture the magic_kind. We could also use a class,
# but it's overkill for just that one bit of state.
def magic_deco(arg):
call = lambda f, *a, **k: f(*a, **k)
# Find get_ipython() in the caller's namespace
caller = sys._getframe(1)
for ns in ['f_locals', 'f_globals', 'f_builtins']:
get_ipython = getattr(caller, ns).get('get_ipython')
if get_ipython is not None:
break
else:
raise NameError('Decorator can only run in context where '
'`get_ipython` exists')
ip = get_ipython()
if callable(arg):
# "Naked" decorator call (just @foo, no args)
func = arg
name = func.func_name
ip.register_magic_function(func, magic_kind, name)
retval = decorator(call, func)
elif isinstance(arg, basestring):
# Decorator called with arguments (@foo('bar'))
name = arg
def mark(func, *a, **kw):
ip.register_magic_function(func, magic_kind, name)
return decorator(call, func)
retval = mark
else:
raise TypeError("Decorator can only be called with "
"string or function")
return retval
# Ensure the resulting decorator has a usable docstring
ds = _docstring_template.format('function', magic_kind)
ds += dedent("""
Note: this decorator can only be used in a context where IPython is already
active, so that the `get_ipython()` call succeeds. You can therefore use
it in your startup files loaded after IPython initializes, but *not* in the
IPython configuration file itself, which is executed before IPython is
fully up and running. Any file located in the `startup` subdirectory of
your configuration profile will be OK in this sense.
""")
magic_deco.__doc__ = ds
return magic_deco | Decorator factory for standalone functions. | Below is the the instruction that describes the task:
### Input:
Decorator factory for standalone functions.
### Response:
def _function_magic_marker(magic_kind):
"""Decorator factory for standalone functions.
"""
validate_type(magic_kind)
# This is a closure to capture the magic_kind. We could also use a class,
# but it's overkill for just that one bit of state.
def magic_deco(arg):
call = lambda f, *a, **k: f(*a, **k)
# Find get_ipython() in the caller's namespace
caller = sys._getframe(1)
for ns in ['f_locals', 'f_globals', 'f_builtins']:
get_ipython = getattr(caller, ns).get('get_ipython')
if get_ipython is not None:
break
else:
raise NameError('Decorator can only run in context where '
'`get_ipython` exists')
ip = get_ipython()
if callable(arg):
# "Naked" decorator call (just @foo, no args)
func = arg
name = func.func_name
ip.register_magic_function(func, magic_kind, name)
retval = decorator(call, func)
elif isinstance(arg, basestring):
# Decorator called with arguments (@foo('bar'))
name = arg
def mark(func, *a, **kw):
ip.register_magic_function(func, magic_kind, name)
return decorator(call, func)
retval = mark
else:
raise TypeError("Decorator can only be called with "
"string or function")
return retval
# Ensure the resulting decorator has a usable docstring
ds = _docstring_template.format('function', magic_kind)
ds += dedent("""
Note: this decorator can only be used in a context where IPython is already
active, so that the `get_ipython()` call succeeds. You can therefore use
it in your startup files loaded after IPython initializes, but *not* in the
IPython configuration file itself, which is executed before IPython is
fully up and running. Any file located in the `startup` subdirectory of
your configuration profile will be OK in this sense.
""")
magic_deco.__doc__ = ds
return magic_deco |
def flatten_group(group_to_flatten, root, recursive=True,
group_filter=lambda x: True, path_filter=lambda x: True,
path_conversions=CONVERSIONS,
group_search_xpath=SVG_GROUP_TAG):
"""Flatten all the paths in a specific group.
The paths will be flattened into the 'root' frame. Note that root
needs to be an ancestor of the group that is being flattened.
Otherwise, no paths will be returned."""
if not any(group_to_flatten is descendant for descendant in root.iter()):
warnings.warn('The requested group_to_flatten is not a '
'descendant of root')
# We will shortcut here, because it is impossible for any paths
# to be returned anyhow.
return []
# We create a set of the unique IDs of each element that we wish to
# flatten, if those elements are groups. Any groups outside of this
# set will be skipped while we flatten the paths.
desired_groups = set()
if recursive:
for group in group_to_flatten.iter():
desired_groups.add(id(group))
else:
desired_groups.add(id(group_to_flatten))
def desired_group_filter(x):
return (id(x) in desired_groups) and group_filter(x)
return flatten_all_paths(root, desired_group_filter, path_filter,
path_conversions, group_search_xpath) | Flatten all the paths in a specific group.
The paths will be flattened into the 'root' frame. Note that root
needs to be an ancestor of the group that is being flattened.
Otherwise, no paths will be returned. | Below is the the instruction that describes the task:
### Input:
Flatten all the paths in a specific group.
The paths will be flattened into the 'root' frame. Note that root
needs to be an ancestor of the group that is being flattened.
Otherwise, no paths will be returned.
### Response:
def flatten_group(group_to_flatten, root, recursive=True,
group_filter=lambda x: True, path_filter=lambda x: True,
path_conversions=CONVERSIONS,
group_search_xpath=SVG_GROUP_TAG):
"""Flatten all the paths in a specific group.
The paths will be flattened into the 'root' frame. Note that root
needs to be an ancestor of the group that is being flattened.
Otherwise, no paths will be returned."""
if not any(group_to_flatten is descendant for descendant in root.iter()):
warnings.warn('The requested group_to_flatten is not a '
'descendant of root')
# We will shortcut here, because it is impossible for any paths
# to be returned anyhow.
return []
# We create a set of the unique IDs of each element that we wish to
# flatten, if those elements are groups. Any groups outside of this
# set will be skipped while we flatten the paths.
desired_groups = set()
if recursive:
for group in group_to_flatten.iter():
desired_groups.add(id(group))
else:
desired_groups.add(id(group_to_flatten))
def desired_group_filter(x):
return (id(x) in desired_groups) and group_filter(x)
return flatten_all_paths(root, desired_group_filter, path_filter,
path_conversions, group_search_xpath) |
def refresh(self):
'''
Refresh the list and the screen
'''
self._screen.force_update()
self._screen.refresh()
self._update(1) | Refresh the list and the screen | Below is the the instruction that describes the task:
### Input:
Refresh the list and the screen
### Response:
def refresh(self):
'''
Refresh the list and the screen
'''
self._screen.force_update()
self._screen.refresh()
self._update(1) |
def get_current_target(module, module_parameter=None, action_parameter=None):
'''
Get the currently selected target for the given module.
module
name of the module to be queried for its current target
module_parameter
additional params passed to the defined module
action_parameter
additional params passed to the 'show' action
CLI Example (current target of system-wide ``java-vm``):
.. code-block:: bash
salt '*' eselect.get_current_target java-vm action_parameter='system'
CLI Example (current target of ``kernel`` symlink):
.. code-block:: bash
salt '*' eselect.get_current_target kernel
'''
result = exec_action(module, 'show', module_parameter=module_parameter, action_parameter=action_parameter)[0]
if not result:
return None
if result == '(unset)':
return None
return result | Get the currently selected target for the given module.
module
name of the module to be queried for its current target
module_parameter
additional params passed to the defined module
action_parameter
additional params passed to the 'show' action
CLI Example (current target of system-wide ``java-vm``):
.. code-block:: bash
salt '*' eselect.get_current_target java-vm action_parameter='system'
CLI Example (current target of ``kernel`` symlink):
.. code-block:: bash
salt '*' eselect.get_current_target kernel | Below is the the instruction that describes the task:
### Input:
Get the currently selected target for the given module.
module
name of the module to be queried for its current target
module_parameter
additional params passed to the defined module
action_parameter
additional params passed to the 'show' action
CLI Example (current target of system-wide ``java-vm``):
.. code-block:: bash
salt '*' eselect.get_current_target java-vm action_parameter='system'
CLI Example (current target of ``kernel`` symlink):
.. code-block:: bash
salt '*' eselect.get_current_target kernel
### Response:
def get_current_target(module, module_parameter=None, action_parameter=None):
'''
Get the currently selected target for the given module.
module
name of the module to be queried for its current target
module_parameter
additional params passed to the defined module
action_parameter
additional params passed to the 'show' action
CLI Example (current target of system-wide ``java-vm``):
.. code-block:: bash
salt '*' eselect.get_current_target java-vm action_parameter='system'
CLI Example (current target of ``kernel`` symlink):
.. code-block:: bash
salt '*' eselect.get_current_target kernel
'''
result = exec_action(module, 'show', module_parameter=module_parameter, action_parameter=action_parameter)[0]
if not result:
return None
if result == '(unset)':
return None
return result |
def write_entries(self, entries, logger_name=None, resource=None, labels=None):
"""API call: log an entry resource via a POST request
:type entries: sequence of mapping
:param entries: the log entry resources to log.
:type logger_name: str
:param logger_name: name of default logger to which to log the entries;
individual entries may override.
:type resource: mapping
:param resource: default resource to associate with entries;
individual entries may override.
:type labels: mapping
:param labels: default labels to associate with entries;
individual entries may override.
"""
partial_success = False
entry_pbs = [_log_entry_mapping_to_pb(entry) for entry in entries]
self._gapic_api.write_log_entries(
entry_pbs,
log_name=logger_name,
resource=resource,
labels=labels,
partial_success=partial_success,
) | API call: log an entry resource via a POST request
:type entries: sequence of mapping
:param entries: the log entry resources to log.
:type logger_name: str
:param logger_name: name of default logger to which to log the entries;
individual entries may override.
:type resource: mapping
:param resource: default resource to associate with entries;
individual entries may override.
:type labels: mapping
:param labels: default labels to associate with entries;
individual entries may override. | Below is the the instruction that describes the task:
### Input:
API call: log an entry resource via a POST request
:type entries: sequence of mapping
:param entries: the log entry resources to log.
:type logger_name: str
:param logger_name: name of default logger to which to log the entries;
individual entries may override.
:type resource: mapping
:param resource: default resource to associate with entries;
individual entries may override.
:type labels: mapping
:param labels: default labels to associate with entries;
individual entries may override.
### Response:
def write_entries(self, entries, logger_name=None, resource=None, labels=None):
"""API call: log an entry resource via a POST request
:type entries: sequence of mapping
:param entries: the log entry resources to log.
:type logger_name: str
:param logger_name: name of default logger to which to log the entries;
individual entries may override.
:type resource: mapping
:param resource: default resource to associate with entries;
individual entries may override.
:type labels: mapping
:param labels: default labels to associate with entries;
individual entries may override.
"""
partial_success = False
entry_pbs = [_log_entry_mapping_to_pb(entry) for entry in entries]
self._gapic_api.write_log_entries(
entry_pbs,
log_name=logger_name,
resource=resource,
labels=labels,
partial_success=partial_success,
) |
def preprocess(self, dt):
""" Preprocess the `dt` with `localize()` and `astz()` """
# Process
try: # this block should not raise errors, and if it does -- they should not be wrapped with `Invalid`
# localize
if self.localize and dt.tzinfo is None:
dt = self.localize(dt)
# astimezone
if self.astz and dt.tzinfo is not None:
dt = self.astz(dt)
# Finish
return dt
except Exception as e:
if isinstance(e, Invalid):
raise
six.reraise(RuntimeError, e) | Preprocess the `dt` with `localize()` and `astz()` | Below is the the instruction that describes the task:
### Input:
Preprocess the `dt` with `localize()` and `astz()`
### Response:
def preprocess(self, dt):
""" Preprocess the `dt` with `localize()` and `astz()` """
# Process
try: # this block should not raise errors, and if it does -- they should not be wrapped with `Invalid`
# localize
if self.localize and dt.tzinfo is None:
dt = self.localize(dt)
# astimezone
if self.astz and dt.tzinfo is not None:
dt = self.astz(dt)
# Finish
return dt
except Exception as e:
if isinstance(e, Invalid):
raise
six.reraise(RuntimeError, e) |
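A hedged sketch of the localize-then-convert sequence the method performs, using the standard-library zoneinfo module in place of whatever localize/astz callables the instance was configured with:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    naive = datetime(2024, 1, 1, 12, 0)                          # no tzinfo yet
    localized = naive.replace(tzinfo=ZoneInfo("UTC"))            # "localize" step
    converted = localized.astimezone(ZoneInfo("Europe/Berlin"))  # "astz" step
    print(converted.isoformat())  # 2024-01-01T13:00:00+01:00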
def remote_styles(family_metadata):
"""Get a dictionary of TTFont objects of all font files of
a given family as currently hosted at Google Fonts.
"""
def download_family_from_Google_Fonts(family_name):
"""Return a zipfile containing a font family hosted on fonts.google.com"""
from zipfile import ZipFile
from fontbakery.utils import download_file
url_prefix = 'https://fonts.google.com/download?family='
url = '{}{}'.format(url_prefix, family_name.replace(' ', '+'))
return ZipFile(download_file(url))
def fonts_from_zip(zipfile):
'''return a list of fontTools TTFonts'''
from fontTools.ttLib import TTFont
from io import BytesIO
fonts = []
for file_name in zipfile.namelist():
if file_name.lower().endswith(".ttf"):
file_obj = BytesIO(zipfile.open(file_name).read())
fonts.append([file_name, TTFont(file_obj)])
return fonts
if (not listed_on_gfonts_api(family_metadata) or
not family_metadata):
return None
remote_fonts_zip = download_family_from_Google_Fonts(family_metadata.name)
rstyles = {}
for remote_filename, remote_font in fonts_from_zip(remote_fonts_zip):
remote_style = os.path.splitext(remote_filename)[0]
if '-' in remote_style:
remote_style = remote_style.split('-')[1]
rstyles[remote_style] = remote_font
return rstyles | Get a dictionary of TTFont objects of all font files of
a given family as currently hosted at Google Fonts. | Below is the the instruction that describes the task:
### Input:
Get a dictionary of TTFont objects of all font files of
a given family as currently hosted at Google Fonts.
### Response:
def remote_styles(family_metadata):
"""Get a dictionary of TTFont objects of all font files of
a given family as currently hosted at Google Fonts.
"""
def download_family_from_Google_Fonts(family_name):
"""Return a zipfile containing a font family hosted on fonts.google.com"""
from zipfile import ZipFile
from fontbakery.utils import download_file
url_prefix = 'https://fonts.google.com/download?family='
url = '{}{}'.format(url_prefix, family_name.replace(' ', '+'))
return ZipFile(download_file(url))
def fonts_from_zip(zipfile):
'''return a list of fontTools TTFonts'''
from fontTools.ttLib import TTFont
from io import BytesIO
fonts = []
for file_name in zipfile.namelist():
if file_name.lower().endswith(".ttf"):
file_obj = BytesIO(zipfile.open(file_name).read())
fonts.append([file_name, TTFont(file_obj)])
return fonts
if (not listed_on_gfonts_api(family_metadata) or
not family_metadata):
return None
remote_fonts_zip = download_family_from_Google_Fonts(family_metadata.name)
rstyles = {}
for remote_filename, remote_font in fonts_from_zip(remote_fonts_zip):
remote_style = os.path.splitext(remote_filename)[0]
if '-' in remote_style:
remote_style = remote_style.split('-')[1]
rstyles[remote_style] = remote_font
return rstyles |
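The inner fonts_from_zip helper reads TTFs straight out of a zip archive without unpacking it to disk; a sketch of the same pattern against a local file ("family.zip" is a placeholder path, not a real download):

    from io import BytesIO
    from zipfile import ZipFile
    from fontTools.ttLib import TTFont

    with ZipFile("family.zip") as archive:
        for file_name in archive.namelist():
            if file_name.lower().endswith(".ttf"):
                font = TTFont(BytesIO(archive.read(file_name)))
                # name ID 1 is the font family name record
                print(file_name, font["name"].getDebugName(1))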
def get_cv_idxs(n, cv_idx=0, val_pct=0.2, seed=42):
""" Get a list of index values for Validation set from a dataset
Arguments:
n : int, Total number of elements in the data set.
cv_idx : int, starting index [idx_start = cv_idx*int(val_pct*n)]
val_pct : (int, float), validation set percentage
seed : seed value for RandomState
Returns:
list of indexes
"""
np.random.seed(seed)
n_val = int(val_pct*n)
idx_start = cv_idx*n_val
idxs = np.random.permutation(n)
return idxs[idx_start:idx_start+n_val] | Get a list of index values for Validation set from a dataset
Arguments:
n : int, Total number of elements in the data set.
cv_idx : int, starting index [idx_start = cv_idx*int(val_pct*n)]
val_pct : (int, float), validation set percentage
seed : seed value for RandomState
Returns:
list of indexes | Below is the the instruction that describes the task:
### Input:
Get a list of index values for Validation set from a dataset
Arguments:
n : int, Total number of elements in the data set.
cv_idx : int, starting index [idx_start = cv_idx*int(val_pct*n)]
val_pct : (int, float), validation set percentage
seed : seed value for RandomState
Returns:
list of indexes
### Response:
def get_cv_idxs(n, cv_idx=0, val_pct=0.2, seed=42):
""" Get a list of index values for Validation set from a dataset
Arguments:
n : int, Total number of elements in the data set.
cv_idx : int, starting index [idx_start = cv_idx*int(val_pct*n)]
val_pct : (int, float), validation set percentage
seed : seed value for RandomState
Returns:
list of indexes
"""
np.random.seed(seed)
n_val = int(val_pct*n)
idx_start = cv_idx*n_val
idxs = np.random.permutation(n)
return idxs[idx_start:idx_start+n_val] |
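A short usage sketch showing how the returned indexes are typically turned into a train/validation split (the data values are illustrative):

    import numpy as np

    n = 10
    val_idxs = get_cv_idxs(n, val_pct=0.2, seed=42)  # 2 validation indexes
    mask = np.zeros(n, dtype=bool)
    mask[val_idxs] = True

    data = np.arange(100, 100 + n)
    train, val = data[~mask], data[mask]
    print(val_idxs, train, val)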
def main():
"""
generates a random world, sets terrain and runs agents in it
TODO - need to change pieces in multiple places (see worlds.py, cls_grid, world_generator)
(takes about 5 minutes to make 500x400 grid with 8% blockages)
"""
width = 20 # grid width
height = 10 # grid height
iterations = 20 # how many simulations to run
num_agents = 6 # number of agents to enter the world
w = build_world(height, width)
print(w)
a = create_random_agents(w, num_agents)
sim = my_world.WorldSimulation(w, a, 1)
sim.run(iterations, 'Y', log_folder + os.sep + 'agt_run')
sim.world.grd.save('test_world_traversed.txt') | generates a random world, sets terrain and runs agents in it
TODO - need to change pieces in multiple places (see worlds.py, cls_grid, world_generator)
(takes about 5 minutes to make 500x400 grid with 8% blockages) | Below is the the instruction that describes the task:
### Input:
generates a random world, sets terrain and runs agents in it
TODO - need to change pieces in multiple places (see worlds.py, cls_grid, world_generator)
(takes about 5 minutes to make 500x400 grid with 8% blockages)
### Response:
def main():
"""
generates a random world, sets terrain and runs agents in it
TODO - need to change pieces in multiple places (see worlds.py, cls_grid, world_generator)
(takes about 5 minutes to make 500x400 grid with 8% blockages)
"""
width = 20 # grid width
height = 10 # grid height
iterations = 20 # how many simulations to run
num_agents = 6 # number of agents to enter the world
w = build_world(height, width)
print(w)
a = create_random_agents(w, num_agents)
sim = my_world.WorldSimulation(w, a, 1)
sim.run(iterations, 'Y', log_folder + os.sep + 'agt_run')
sim.world.grd.save('test_world_traversed.txt') |
def com_google_fonts_check_metadata_regular_is_400(family_metadata):
"""METADATA.pb: Regular should be 400."""
badfonts = []
for f in family_metadata.fonts:
if f.full_name.endswith("Regular") and f.weight != 400:
badfonts.append(f"{f.filename} (weight: {f.weight})")
if len(badfonts) > 0:
yield FAIL, ("METADATA.pb: Regular font weight must be 400."
" Please fix these: {}").format(", ".join(badfonts))
else:
yield PASS, "Regular has weight = 400." | METADATA.pb: Regular should be 400. | Below is the the instruction that describes the task:
### Input:
METADATA.pb: Regular should be 400.
### Response:
def com_google_fonts_check_metadata_regular_is_400(family_metadata):
"""METADATA.pb: Regular should be 400."""
badfonts = []
for f in family_metadata.fonts:
if f.full_name.endswith("Regular") and f.weight != 400:
badfonts.append(f"{f.filename} (weight: {f.weight})")
if len(badfonts) > 0:
yield FAIL, ("METADATA.pb: Regular font weight must be 400."
" Please fix these: {}").format(", ".join(badfonts))
else:
yield PASS, "Regular has weight = 400." |
def _galaxy_association_cuts(
self,
matchedObjects,
catalogueName,
magnitudeLimitFilter,
upperMagnitudeLimit,
lowerMagnitudeLimit):
"""*perform a bright star match on the crossmatch results if required by the catalogue search*
**Key Arguments:**
- ``matchedObjects`` -- the list of matched sources from the catalogue crossmatch
- ``catalogueName`` -- the name of the catalogue the crossmatch results from
- ``magnitudeLimitFilter`` -- the name of the column containing the magnitude to filter on
- ``lowerMagnitudeLimit`` -- the lower magnitude limit to match general galaxies against
- ``upperMagnitudeLimit`` -- the upper magnitude limit to match general galaxies against
**Return:**
- ``galaxyMatches`` -- the trimmed matched sources (associated galaxies only)
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check redendering of this docstring
"""
self.log.debug('starting the ``_galaxy_association_cuts`` method')
import decimal
decimal.getcontext().prec = 10
# MATCH BRIGHT STAR ASSOCIATIONS
galaxyMatches = []
for row in matchedObjects:
if not magnitudeLimitFilter or row[magnitudeLimitFilter] == None:
galaxyMatches.append(row)
else:
mag = decimal.Decimal(row[magnitudeLimitFilter])
if mag and mag < lowerMagnitudeLimit and mag > upperMagnitudeLimit:
sep = decimal.Decimal(row["separationArcsec"])
if sep < decimal.Decimal(decimal.Decimal(10)**(decimal.Decimal((decimal.Decimal(25.) - mag) / decimal.Decimal(6.)))):
galaxyMatches.append(row)
self.log.debug('completed the ``_galaxy_association_cuts`` method')
return galaxyMatches | *perform a bright star match on the crossmatch results if required by the catalogue search*
**Key Arguments:**
- ``matchedObjects`` -- the list of matched sources from the catalogue crossmatch
- ``catalogueName`` -- the name of the catalogue the crossmatch results from
- ``magnitudeLimitFilter`` -- the name of the column containing the magnitude to filter on
- ``lowerMagnitudeLimit`` -- the lower magnitude limit to match general galaxies against
- ``upperMagnitudeLimit`` -- the upper magnitude limit to match general galaxies against
**Return:**
- ``galaxyMatches`` -- the trimmed matched sources (associated galaxies only)
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
    - regenerate the docs and check rendering of this docstring | Below is the the instruction that describes the task:
### Input:
*perform a bright star match on the crossmatch results if required by the catalogue search*
**Key Arguments:**
- ``matchedObjects`` -- the list of matched sources from the catalogue crossmatch
- ``catalogueName`` -- the name of the catalogue the crossmatch results from
- ``magnitudeLimitFilter`` -- the name of the column containing the magnitude to filter on
- ``lowerMagnitudeLimit`` -- the lower magnitude limit to match general galaxies against
- ``upperMagnitudeLimit`` -- the upper magnitude limit to match general galaxies against
**Return:**
- ``galaxyMatches`` -- the trimmed matched sources (associated galaxies only)
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
    - regenerate the docs and check rendering of this docstring
### Response:
def _galaxy_association_cuts(
self,
matchedObjects,
catalogueName,
magnitudeLimitFilter,
upperMagnitudeLimit,
lowerMagnitudeLimit):
"""*perform a bright star match on the crossmatch results if required by the catalogue search*
**Key Arguments:**
- ``matchedObjects`` -- the list of matched sources from the catalogue crossmatch
- ``catalogueName`` -- the name of the catalogue the crossmatch results from
- ``magnitudeLimitFilter`` -- the name of the column containing the magnitude to filter on
- ``lowerMagnitudeLimit`` -- the lower magnitude limit to match general galaxies against
- ``upperMagnitudeLimit`` -- the upper magnitude limit to match general galaxies against
**Return:**
- ``galaxyMatches`` -- the trimmed matched sources (associated galaxies only)
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
        - regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_galaxy_association_cuts`` method')
import decimal
decimal.getcontext().prec = 10
# MATCH BRIGHT STAR ASSOCIATIONS
galaxyMatches = []
for row in matchedObjects:
if not magnitudeLimitFilter or row[magnitudeLimitFilter] == None:
galaxyMatches.append(row)
else:
mag = decimal.Decimal(row[magnitudeLimitFilter])
if mag and mag < lowerMagnitudeLimit and mag > upperMagnitudeLimit:
sep = decimal.Decimal(row["separationArcsec"])
if sep < decimal.Decimal(decimal.Decimal(10)**(decimal.Decimal((decimal.Decimal(25.) - mag) / decimal.Decimal(6.)))):
galaxyMatches.append(row)
self.log.debug('completed the ``_galaxy_association_cuts`` method')
return galaxyMatches |
def do_help(self, arg):
"""Help command.
Usage:
help [command]
Parameters:
command: Optional - command name to display detailed help
"""
cmds = arg.split()
if cmds:
func = getattr(self, 'do_{}'.format(cmds[0]))
if func:
_LOGGING.info(func.__doc__)
else:
_LOGGING.error('Command %s not found', cmds[0])
else:
_LOGGING.info("Available command list: ")
for curr_cmd in dir(self.__class__):
if curr_cmd.startswith("do_") and not curr_cmd == 'do_test':
print(" - ", curr_cmd[3:])
_LOGGING.info("For help with a command type `help command`") | Help command.
Usage:
help [command]
Parameters:
command: Optional - command name to display detailed help | Below is the the instruction that describes the task:
### Input:
Help command.
Usage:
help [command]
Parameters:
command: Optional - command name to display detailed help
### Response:
def do_help(self, arg):
"""Help command.
Usage:
help [command]
Parameters:
command: Optional - command name to display detailed help
"""
cmds = arg.split()
if cmds:
func = getattr(self, 'do_{}'.format(cmds[0]))
if func:
_LOGGING.info(func.__doc__)
else:
_LOGGING.error('Command %s not found', cmds[0])
else:
_LOGGING.info("Available command list: ")
for curr_cmd in dir(self.__class__):
if curr_cmd.startswith("do_") and not curr_cmd == 'do_test':
print(" - ", curr_cmd[3:])
_LOGGING.info("For help with a command type `help command`") |
def read_chunk(stream):
"""Ignore whitespace outside of strings. If we hit a string, read it in
its entirety.
"""
chunk = stream.read(1)
while chunk in SKIP:
chunk = stream.read(1)
if chunk == "\"":
chunk += stream.read(1)
while not chunk.endswith("\""):
if chunk[-1] == ESCAPE:
chunk += stream.read(2)
else:
chunk += stream.read(1)
return chunk | Ignore whitespace outside of strings. If we hit a string, read it in
its entirety. | Below is the the instruction that describes the task:
### Input:
Ignore whitespace outside of strings. If we hit a string, read it in
its entirety.
### Response:
def read_chunk(stream):
"""Ignore whitespace outside of strings. If we hit a string, read it in
its entirety.
"""
chunk = stream.read(1)
while chunk in SKIP:
chunk = stream.read(1)
if chunk == "\"":
chunk += stream.read(1)
while not chunk.endswith("\""):
if chunk[-1] == ESCAPE:
chunk += stream.read(2)
else:
chunk += stream.read(1)
return chunk |
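The sample relies on SKIP and ESCAPE constants that are not shown; a sketch with assumed values, feeding read_chunk from an in-memory stream:

    import io

    SKIP = " \t\n\r,:"  # assumed: characters ignored between tokens
    ESCAPE = "\\"       # assumed: backslash escapes characters inside strings

    stream = io.StringIO('   "hello \\" world"   42')
    print(read_chunk(stream))  # the whole quoted string, escape included
    print(read_chunk(stream))  # a single non-string character: 4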
def _str_extract_frame(arr, pat, flags=0):
"""
For each subject string in the Series, extract groups from the
first match of regular expression pat. This function is called from
str_extract(expand=True), and always returns a DataFrame.
"""
from pandas import DataFrame
regex = re.compile(pat, flags=flags)
groups_or_na = _groups_or_na_fun(regex)
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
columns = [names.get(1 + i, i) for i in range(regex.groups)]
if len(arr) == 0:
return DataFrame(columns=columns, dtype=object)
try:
result_index = arr.index
except AttributeError:
result_index = None
return DataFrame(
[groups_or_na(val) for val in arr],
columns=columns,
index=result_index,
dtype=object) | For each subject string in the Series, extract groups from the
first match of regular expression pat. This function is called from
str_extract(expand=True), and always returns a DataFrame. | Below is the the instruction that describes the task:
### Input:
For each subject string in the Series, extract groups from the
first match of regular expression pat. This function is called from
str_extract(expand=True), and always returns a DataFrame.
### Response:
def _str_extract_frame(arr, pat, flags=0):
"""
For each subject string in the Series, extract groups from the
first match of regular expression pat. This function is called from
str_extract(expand=True), and always returns a DataFrame.
"""
from pandas import DataFrame
regex = re.compile(pat, flags=flags)
groups_or_na = _groups_or_na_fun(regex)
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
columns = [names.get(1 + i, i) for i in range(regex.groups)]
if len(arr) == 0:
return DataFrame(columns=columns, dtype=object)
try:
result_index = arr.index
except AttributeError:
result_index = None
return DataFrame(
[groups_or_na(val) for val in arr],
columns=columns,
index=result_index,
dtype=object) |
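The public entry point that ends up calling this helper is Series.str.extract with expand=True; a small runnable example of that API:

    import pandas as pd

    s = pd.Series(["a1", "b2", "c"])
    print(s.str.extract(r"(?P<letter>[ab])(?P<digit>\d)", expand=True))
    #   letter digit
    # 0      a     1
    # 1      b     2
    # 2    NaN   NaN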
def _HasTable(self, table_name):
"""Determines if a specific table exists.
Args:
table_name (str): name of the table.
Returns:
bool: True if the table exists, false otherwise.
"""
query = self._HAS_TABLE_QUERY.format(table_name)
self._cursor.execute(query)
return bool(self._cursor.fetchone()) | Determines if a specific table exists.
Args:
table_name (str): name of the table.
Returns:
bool: True if the table exists, false otherwise. | Below is the the instruction that describes the task:
### Input:
Determines if a specific table exists.
Args:
table_name (str): name of the table.
Returns:
bool: True if the table exists, false otherwise.
### Response:
def _HasTable(self, table_name):
"""Determines if a specific table exists.
Args:
table_name (str): name of the table.
Returns:
bool: True if the table exists, false otherwise.
"""
query = self._HAS_TABLE_QUERY.format(table_name)
self._cursor.execute(query)
return bool(self._cursor.fetchone()) |
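The class's _HAS_TABLE_QUERY constant is not shown here; a standalone sqlite3 sketch of the usual sqlite_master lookup, using parameter binding instead of string formatting:

    import sqlite3

    def has_table(cursor, table_name):
        cursor.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?",
            (table_name,))
        return cursor.fetchone() is not None

    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")
    print(has_table(connection.cursor(), "events"))   # True
    print(has_table(connection.cursor(), "missing"))  # False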
def surface_density(segmentation, voxelsize_mm, aoi=None, sond_raster_mm=None):
"""
:segmentation: is ndarray with 0 and 1
:voxelsize_mm: is array of three numbers specifiing size of voxel for each
axis
:aoi: is specify area of interest. It is ndarray with 0 and 1
:sond_raster_mm: unimplemented. It is parametr of sonds design
"""
axis = 0
if sond_raster_mm is None:
sond_raster_mm = voxelsize_mm
if aoi is None:
aoi = np.ones(segmentation.shape)
im_edg = find_edge(segmentation, axis=axis)
im_edg = im_edg * aoi
im_sond, aoi_sond = bufford_needle_sond(
im_edg, voxelsize_mm, sond_raster_mm, axis=axis, aoi=aoi)
# isotropic fakir - kubinova janacek
# est S = 2 \frac{1}{n} \sum_{i=1}^{n} \frac{v}{l_i} \cdot l_i
# celkova delka sond
# n_needle = (im_sond.shape[1] * im_sond.shape[2])
# one_needle_l = im_sond.shape[0] * voxelsize_mm[0]
# length = n_needle * one_needle_l
length = np.sum(aoi_sond > 0) * voxelsize_mm[0]
# inverse of the probe per unit volume v/l_i
# ippuv = (
# (np.prod(sond_raster_mm) * im_sond.shape[axis])
# /
# (sond_raster_mm[axis] * im_sond.shape[axis])
# )
    # number of intersections
# Ii = np.sum(np.abs(im_sond))
Ii = np.sum(np.abs(im_sond))
# import sed3
# ed = sed3.sed3(im_sond)
# ed.show()
# Kubinova2001
# print "Ii = ", Ii
Sv = 2.0 * Ii / length
# import ipdb; ipdb.set_trace() # noqa BREAKPOINT
return Sv | :segmentation: is ndarray with 0 and 1
:voxelsize_mm: is array of three numbers specifiing size of voxel for each
axis
:aoi: is specify area of interest. It is ndarray with 0 and 1
:sond_raster_mm: unimplemented. It is parametr of sonds design | Below is the the instruction that describes the task:
### Input:
:segmentation: is ndarray with 0 and 1
:voxelsize_mm: is array of three numbers specifiing size of voxel for each
axis
:aoi: is specify area of interest. It is ndarray with 0 and 1
:sond_raster_mm: unimplemented. It is parametr of sonds design
### Response:
def surface_density(segmentation, voxelsize_mm, aoi=None, sond_raster_mm=None):
"""
:segmentation: is ndarray with 0 and 1
:voxelsize_mm: is array of three numbers specifiing size of voxel for each
axis
:aoi: is specify area of interest. It is ndarray with 0 and 1
:sond_raster_mm: unimplemented. It is parametr of sonds design
"""
axis = 0
if sond_raster_mm is None:
sond_raster_mm = voxelsize_mm
if aoi is None:
aoi = np.ones(segmentation.shape)
im_edg = find_edge(segmentation, axis=axis)
im_edg = im_edg * aoi
im_sond, aoi_sond = bufford_needle_sond(
im_edg, voxelsize_mm, sond_raster_mm, axis=axis, aoi=aoi)
# isotropic fakir - kubinova janacek
# est S = 2 \frac{1}{n} \sum_{i=1}^{n} \frac{v}{l_i} \cdot l_i
    # total length of the probes
# n_needle = (im_sond.shape[1] * im_sond.shape[2])
# one_needle_l = im_sond.shape[0] * voxelsize_mm[0]
# length = n_needle * one_needle_l
length = np.sum(aoi_sond > 0) * voxelsize_mm[0]
# inverse of the probe per unit volume v/l_i
# ippuv = (
# (np.prod(sond_raster_mm) * im_sond.shape[axis])
# /
# (sond_raster_mm[axis] * im_sond.shape[axis])
# )
    # number of intersections
# Ii = np.sum(np.abs(im_sond))
Ii = np.sum(np.abs(im_sond))
# import sed3
# ed = sed3.sed3(im_sond)
# ed.show()
# Kubinova2001
# print "Ii = ", Ii
Sv = 2.0 * Ii / length
# import ipdb; ipdb.set_trace() # noqa BREAKPOINT
return Sv |
def write_bus_data(self, file):
""" Writes bus data in MATPOWER format.
"""
# I, 'NAME', BASKV, IDE, GL, BL, AREA, ZONE, VM, VA, OWNER
bus_attrs = ["_i", "name", "v_base", "type", "g_shunt", "b_shunt",
"area", "zone",
"v_magnitude", "v_angle"]
for bus in self.case.buses:
vals = [getattr(bus, a) for a in bus_attrs]
if float(vals[6]) == 0.0:
vals[6] = 1 # default AREA: 1
if float(vals[7])==0.0:
vals[7] = 1 # default ZONE: 1
d = {PQ: 1, PV: 2, REFERENCE: 3, ISOLATED: 4}
vals[3] = d[vals[3]]
vals.append(1)
# print len(vals), vals
file.write("%6d,'%-10s',%10.4f,%d,%10.3f,%10.3f,%4d,%4d,%10.3f,"
"%10.3f%4d\n" % tuple(vals))
file.write(" 0 / END OF BUS DATA, BEGIN LOAD DATA\n")
# I, ID, STATUS, AREA, ZONE, PL, QL, IP, IQ, YP, YQ, OWNER
load_attrs = ["_i", "area", "zone", "p_demand", "q_demand"]
for bus in self.case.buses:
if bus.p_demand > 0.0 or bus.q_demand > 0.0:
vals = [getattr(bus, a) for a in load_attrs]
if float(vals[1])==0.0:
vals[1] = 1 # default AREA: 1
if float(vals[2])==0.0:
vals[2] = 1 # default ZONE: 1
vals.insert(1, 1) # STATUS
vals.insert(1, "1 ") # ID
vals.extend([0., 0., 0., 0.])
vals.append(1) # OWNER
file.write("%6d,'%s',%2d,%2d,%2d,%10.3f,%10.3f,%10.3f,%10.3f,"
"%10.3f,%10.3f,%4d\n" % tuple(vals))
file.write(" 0 / END OF LOAD DATA, BEGIN GENERATOR DATA\n") | Writes bus data in MATPOWER format. | Below is the the instruction that describes the task:
### Input:
Writes bus data in MATPOWER format.
### Response:
def write_bus_data(self, file):
""" Writes bus data in MATPOWER format.
"""
# I, 'NAME', BASKV, IDE, GL, BL, AREA, ZONE, VM, VA, OWNER
bus_attrs = ["_i", "name", "v_base", "type", "g_shunt", "b_shunt",
"area", "zone",
"v_magnitude", "v_angle"]
for bus in self.case.buses:
vals = [getattr(bus, a) for a in bus_attrs]
if float(vals[6]) == 0.0:
vals[6] = 1 # default AREA: 1
if float(vals[7])==0.0:
vals[7] = 1 # default ZONE: 1
d = {PQ: 1, PV: 2, REFERENCE: 3, ISOLATED: 4}
vals[3] = d[vals[3]]
vals.append(1)
# print len(vals), vals
file.write("%6d,'%-10s',%10.4f,%d,%10.3f,%10.3f,%4d,%4d,%10.3f,"
"%10.3f%4d\n" % tuple(vals))
file.write(" 0 / END OF BUS DATA, BEGIN LOAD DATA\n")
# I, ID, STATUS, AREA, ZONE, PL, QL, IP, IQ, YP, YQ, OWNER
load_attrs = ["_i", "area", "zone", "p_demand", "q_demand"]
for bus in self.case.buses:
if bus.p_demand > 0.0 or bus.q_demand > 0.0:
vals = [getattr(bus, a) for a in load_attrs]
if float(vals[1])==0.0:
vals[1] = 1 # default AREA: 1
if float(vals[2])==0.0:
vals[2] = 1 # default ZONE: 1
vals.insert(1, 1) # STATUS
vals.insert(1, "1 ") # ID
vals.extend([0., 0., 0., 0.])
vals.append(1) # OWNER
file.write("%6d,'%s',%2d,%2d,%2d,%10.3f,%10.3f,%10.3f,%10.3f,"
"%10.3f,%10.3f,%4d\n" % tuple(vals))
file.write(" 0 / END OF LOAD DATA, BEGIN GENERATOR DATA\n") |
def trace_toolchain(toolchain):
"""
Trace the versions of the involved packages for the provided
toolchain instance.
"""
pkgs = []
for cls in getmro(type(toolchain)):
if not issubclass(cls, Toolchain):
continue
dist = _cls_lookup_dist(cls)
value = {
'project_name': dist.project_name,
'version': dist.version,
} if dist else {}
key = '%s:%s' % (cls.__module__, cls.__name__)
pkgs.append({key: value})
return pkgs | Trace the versions of the involved packages for the provided
toolchain instance. | Below is the the instruction that describes the task:
### Input:
Trace the versions of the involved packages for the provided
toolchain instance.
### Response:
def trace_toolchain(toolchain):
"""
Trace the versions of the involved packages for the provided
toolchain instance.
"""
pkgs = []
for cls in getmro(type(toolchain)):
if not issubclass(cls, Toolchain):
continue
dist = _cls_lookup_dist(cls)
value = {
'project_name': dist.project_name,
'version': dist.version,
} if dist else {}
key = '%s:%s' % (cls.__module__, cls.__name__)
pkgs.append({key: value})
return pkgs |
def fullmatch(self, string, *args, **kwargs):
"""Apply `fullmatch`."""
return self._pattern.fullmatch(string, *args, **kwargs) | Apply `fullmatch`. | Below is the the instruction that describes the task:
### Input:
Apply `fullmatch`.
### Response:
def fullmatch(self, string, *args, **kwargs):
"""Apply `fullmatch`."""
return self._pattern.fullmatch(string, *args, **kwargs) |
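The wrapper simply forwards to the compiled pattern's fullmatch, so its behaviour is that of plain re:

    import re

    pattern = re.compile(r"\d{4}-\d{2}")
    print(bool(pattern.fullmatch("2024-05")))     # True: the whole string matches
    print(bool(pattern.fullmatch("2024-05-01")))  # False: trailing text is rejected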
def write_iocs(self, directory=None, source=None):
"""
Serializes IOCs to a directory.
:param directory: Directory to write IOCs to. If not provided, the current working directory is used.
        :param source: Dictionary containing iocid -> IOC mapping. Defaults to self.iocs_10. This is not normally modified by a user for this class.
:return:
"""
"""
if directory is None, write the iocs to the current working directory
        source: allows specifying a different dictionary of ElementTree ioc objects
"""
if not source:
source = self.iocs_10
if len(source) < 1:
log.error('no iocs available to write out')
return False
if not directory:
directory = os.getcwd()
if os.path.isfile(directory):
            log.error('cannot write iocs to a directory')
return False
source_iocs = set(source.keys())
source_iocs = source_iocs.difference(self.pruned_11_iocs)
source_iocs = source_iocs.difference(self.null_pruned_iocs)
if not source_iocs:
log.error('no iocs available to write out after removing pruned/null iocs')
return False
utils.safe_makedirs(directory)
output_dir = os.path.abspath(directory)
log.info('Writing IOCs to %s' % (str(output_dir)))
# serialize the iocs
for iocid in source_iocs:
ioc_obj = source[iocid]
ioc_obj.write_ioc_to_file(output_dir=output_dir, force=True)
return True | Serializes IOCs to a directory.
:param directory: Directory to write IOCs to. If not provided, the current working directory is used.
    :param source: Dictionary containing iocid -> IOC mapping. Defaults to self.iocs_10. This is not normally modified by a user for this class.
:return: | Below is the the instruction that describes the task:
### Input:
Serializes IOCs to a directory.
:param directory: Directory to write IOCs to. If not provided, the current working directory is used.
    :param source: Dictionary containing iocid -> IOC mapping. Defaults to self.iocs_10. This is not normally modified by a user for this class.
:return:
### Response:
def write_iocs(self, directory=None, source=None):
"""
Serializes IOCs to a directory.
:param directory: Directory to write IOCs to. If not provided, the current working directory is used.
    :param source: Dictionary containing iocid -> IOC mapping. Defaults to self.iocs_10. This is not normally modified by a user for this class.
:return:
"""
"""
if directory is None, write the iocs to the current working directory
    source: allows specifying a different dictionary of elementTree ioc objects
"""
if not source:
source = self.iocs_10
if len(source) < 1:
log.error('no iocs available to write out')
return False
if not directory:
directory = os.getcwd()
if os.path.isfile(directory):
        log.error('cannot write iocs to a directory')
return False
source_iocs = set(source.keys())
source_iocs = source_iocs.difference(self.pruned_11_iocs)
source_iocs = source_iocs.difference(self.null_pruned_iocs)
if not source_iocs:
log.error('no iocs available to write out after removing pruned/null iocs')
return False
utils.safe_makedirs(directory)
output_dir = os.path.abspath(directory)
log.info('Writing IOCs to %s' % (str(output_dir)))
# serialize the iocs
for iocid in source_iocs:
ioc_obj = source[iocid]
ioc_obj.write_ioc_to_file(output_dir=output_dir, force=True)
return True |
def getAdministrator(self, email, returned_properties=None):
"""Get the :class:`rtcclient.models.Administrator` object
by the email address
:param email: the email address (e.g. [email protected])
:param returned_properties: the returned properties that you want.
Refer to :class:`rtcclient.client.RTCClient` for more explanations
:return: the :class:`rtcclient.models.Administrator` object
:rtype: rtcclient.models.Administrator
"""
if not isinstance(email, six.string_types) or "@" not in email:
excp_msg = "Please specify a valid email address name"
self.log.error(excp_msg)
raise exception.BadValue(excp_msg)
self.log.debug("Try to get Administrator whose email is %s",
email)
rp = returned_properties
administrators = self._getAdministrators(returned_properties=rp,
email=email)
if administrators is not None:
administrator = administrators[0]
self.log.info("Get <Administrator %s> in <ProjectArea %s>",
administrator, self)
return administrator
msg = "No administrator's email is %s in <ProjectArea %s>" % (email,
self)
self.log.error(msg)
raise exception.NotFound(msg) | Get the :class:`rtcclient.models.Administrator` object
by the email address
:param email: the email address (e.g. [email protected])
:param returned_properties: the returned properties that you want.
Refer to :class:`rtcclient.client.RTCClient` for more explanations
:return: the :class:`rtcclient.models.Administrator` object
        :rtype: rtcclient.models.Administrator | Below is the instruction that describes the task:
### Input:
Get the :class:`rtcclient.models.Administrator` object
by the email address
:param email: the email address (e.g. [email protected])
:param returned_properties: the returned properties that you want.
Refer to :class:`rtcclient.client.RTCClient` for more explanations
:return: the :class:`rtcclient.models.Administrator` object
:rtype: rtcclient.models.Administrator
### Response:
def getAdministrator(self, email, returned_properties=None):
"""Get the :class:`rtcclient.models.Administrator` object
by the email address
:param email: the email address (e.g. [email protected])
:param returned_properties: the returned properties that you want.
Refer to :class:`rtcclient.client.RTCClient` for more explanations
:return: the :class:`rtcclient.models.Administrator` object
:rtype: rtcclient.models.Administrator
"""
if not isinstance(email, six.string_types) or "@" not in email:
excp_msg = "Please specify a valid email address name"
self.log.error(excp_msg)
raise exception.BadValue(excp_msg)
self.log.debug("Try to get Administrator whose email is %s",
email)
rp = returned_properties
administrators = self._getAdministrators(returned_properties=rp,
email=email)
if administrators is not None:
administrator = administrators[0]
self.log.info("Get <Administrator %s> in <ProjectArea %s>",
administrator, self)
return administrator
msg = "No administrator's email is %s in <ProjectArea %s>" % (email,
self)
self.log.error(msg)
raise exception.NotFound(msg) |
def check(text):
"""Suggest the preferred forms."""
err = "redundancy.wallace"
msg = "Redundancy. Use '{}' instead of '{}'."
redundancies = [
["rectangular", ["rectangular in shape"]],
["audible", ["audible to the ear"]],
]
    return preferred_forms_check(text, redundancies, err, msg) | Suggest the preferred forms. | Below is the instruction that describes the task:
### Input:
Suggest the preferred forms.
### Response:
def check(text):
"""Suggest the preferred forms."""
err = "redundancy.wallace"
msg = "Redundancy. Use '{}' instead of '{}'."
redundancies = [
["rectangular", ["rectangular in shape"]],
["audible", ["audible to the ear"]],
]
return preferred_forms_check(text, redundancies, err, msg) |
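preferred_forms_check itself is not shown in this excerpt; the stand-in below is only a rough sketch of what such a helper might do with the (preferred, variants) pairs, so the err/msg plumbing above has something concrete to run against.
import re

def simple_preferred_forms_check(text, redundancies, err, msg):
    """Simplified stand-in: report each redundant phrase found in the text."""
    issues = []
    for preferred, variants in redundancies:
        for variant in variants:
            for m in re.finditer(re.escape(variant), text, re.IGNORECASE):
                issues.append((err, msg.format(preferred, variant), m.start()))
    return issues

sample = "The box was rectangular in shape."
pairs = [["rectangular", ["rectangular in shape"]]]
print(simple_preferred_forms_check(
    sample, pairs, "redundancy.wallace",
    "Redundancy. Use '{}' instead of '{}'."))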
def Friedel(m, x, rhol, rhog, mul, mug, sigma, D, roughness=0, L=1):
r'''Calculates two-phase pressure drop with the Friedel correlation.
.. math::
\Delta P_{friction} = \Delta P_{lo} \phi_{lo}^2
.. math::
\phi_{lo}^2 = E + \frac{3.24FH}{Fr^{0.0454} We^{0.035}}
.. math::
H = \left(\frac{\rho_l}{\rho_g}\right)^{0.91}\left(\frac{\mu_g}{\mu_l}
\right)^{0.19}\left(1 - \frac{\mu_g}{\mu_l}\right)^{0.7}
.. math::
F = x^{0.78}(1 - x)^{0.224}
.. math::
E = (1-x)^2 + x^2\left(\frac{\rho_l f_{d,go}}{\rho_g f_{d,lo}}\right)
.. math::
Fr = \frac{G_{tp}^2}{gD\rho_H^2}
.. math::
We = \frac{G_{tp}^2 D}{\sigma \rho_H}
.. math::
\rho_H = \left(\frac{x}{\rho_g} + \frac{1-x}{\rho_l}\right)^{-1}
Parameters
----------
m : float
Mass flow rate of fluid, [kg/s]
x : float
Quality of fluid, [-]
rhol : float
Liquid density, [kg/m^3]
rhog : float
Gas density, [kg/m^3]
mul : float
Viscosity of liquid, [Pa*s]
mug : float
Viscosity of gas, [Pa*s]
sigma : float
Surface tension, [N/m]
D : float
Diameter of pipe, [m]
roughness : float, optional
Roughness of pipe for use in calculating friction factor, [m]
L : float, optional
Length of pipe, [m]
Returns
-------
dP : float
Pressure drop of the two-phase flow, [Pa]
Notes
-----
Applicable to vertical upflow and horizontal flow. Known to work poorly
when mul/mug > 1000. Gives mean errors on the order of 40%. Tested on data
with diameters as small as 4 mm.
The power of 0.0454 is given as 0.045 in [2]_, [3]_, [4]_, and [5]_; [6]_
and [2]_ give 0.0454 and [2]_ also gives a similar correlation said to be
presented in [1]_, so it is believed this 0.0454 was the original power.
[6]_ also gives an expression for friction factor claimed to be presented
in [1]_; it is not used here.
Examples
--------
Example 4 in [6]_:
>>> Friedel(m=0.6, x=0.1, rhol=915., rhog=2.67, mul=180E-6, mug=14E-6,
... sigma=0.0487, D=0.05, roughness=0, L=1)
738.6500525002245
References
----------
.. [1] Friedel, L. "Improved Friction Pressure Drop Correlations for
Horizontal and Vertical Two-Phase Pipe Flow." , in: Proceedings,
European Two Phase Flow Group Meeting, Ispra, Italy, 1979: 485-481.
.. [2] Whalley, P. B. Boiling, Condensation, and Gas-Liquid Flow. Oxford:
Oxford University Press, 1987.
.. [3] Triplett, K. A., S. M. Ghiaasiaan, S. I. Abdel-Khalik, A. LeMouel,
and B. N. McCord. "Gas-liquid Two-Phase Flow in Microchannels: Part II:
Void Fraction and Pressure Drop.” International Journal of Multiphase
Flow 25, no. 3 (April 1999): 395-410. doi:10.1016/S0301-9322(98)00055-X.
.. [4] Mekisso, Henock Mateos. "Comparison of Frictional Pressure Drop
Correlations for Isothermal Two-Phase Horizontal Flow." Thesis, Oklahoma
State University, 2013. https://shareok.org/handle/11244/11109.
.. [5] Thome, John R. "Engineering Data Book III." Wolverine Tube Inc
(2004). http://www.wlv.com/heat-transfer-databook/
.. [6] Ghiaasiaan, S. Mostafa. Two-Phase Flow, Boiling, and Condensation:
In Conventional and Miniature Systems. Cambridge University Press, 2007.
'''
# Liquid-only properties, for calculation of E, dP_lo
v_lo = m/rhol/(pi/4*D**2)
Re_lo = Reynolds(V=v_lo, rho=rhol, mu=mul, D=D)
fd_lo = friction_factor(Re=Re_lo, eD=roughness/D)
dP_lo = fd_lo*L/D*(0.5*rhol*v_lo**2)
# Gas-only properties, for calculation of E
v_go = m/rhog/(pi/4*D**2)
Re_go = Reynolds(V=v_go, rho=rhog, mu=mug, D=D)
fd_go = friction_factor(Re=Re_go, eD=roughness/D)
F = x**0.78*(1-x)**0.224
H = (rhol/rhog)**0.91*(mug/mul)**0.19*(1 - mug/mul)**0.7
E = (1-x)**2 + x**2*(rhol*fd_go/(rhog*fd_lo))
# Homogeneous properties, for Froude/Weber numbers
voidage_h = homogeneous(x, rhol, rhog)
rho_h = rhol*(1-voidage_h) + rhog*voidage_h
Q_h = m/rho_h
v_h = Q_h/(pi/4*D**2)
Fr = Froude(V=v_h, L=D, squared=True) # checked with (m/(pi/4*D**2))**2/g/D/rho_h**2
We = Weber(V=v_h, L=D, rho=rho_h, sigma=sigma) # checked with (m/(pi/4*D**2))**2*D/sigma/rho_h
phi_lo2 = E + 3.24*F*H/(Fr**0.0454*We**0.035)
return phi_lo2*dP_lo | r'''Calculates two-phase pressure drop with the Friedel correlation.
.. math::
\Delta P_{friction} = \Delta P_{lo} \phi_{lo}^2
.. math::
\phi_{lo}^2 = E + \frac{3.24FH}{Fr^{0.0454} We^{0.035}}
.. math::
H = \left(\frac{\rho_l}{\rho_g}\right)^{0.91}\left(\frac{\mu_g}{\mu_l}
\right)^{0.19}\left(1 - \frac{\mu_g}{\mu_l}\right)^{0.7}
.. math::
F = x^{0.78}(1 - x)^{0.224}
.. math::
E = (1-x)^2 + x^2\left(\frac{\rho_l f_{d,go}}{\rho_g f_{d,lo}}\right)
.. math::
Fr = \frac{G_{tp}^2}{gD\rho_H^2}
.. math::
We = \frac{G_{tp}^2 D}{\sigma \rho_H}
.. math::
\rho_H = \left(\frac{x}{\rho_g} + \frac{1-x}{\rho_l}\right)^{-1}
Parameters
----------
m : float
Mass flow rate of fluid, [kg/s]
x : float
Quality of fluid, [-]
rhol : float
Liquid density, [kg/m^3]
rhog : float
Gas density, [kg/m^3]
mul : float
Viscosity of liquid, [Pa*s]
mug : float
Viscosity of gas, [Pa*s]
sigma : float
Surface tension, [N/m]
D : float
Diameter of pipe, [m]
roughness : float, optional
Roughness of pipe for use in calculating friction factor, [m]
L : float, optional
Length of pipe, [m]
Returns
-------
dP : float
Pressure drop of the two-phase flow, [Pa]
Notes
-----
Applicable to vertical upflow and horizontal flow. Known to work poorly
when mul/mug > 1000. Gives mean errors on the order of 40%. Tested on data
with diameters as small as 4 mm.
The power of 0.0454 is given as 0.045 in [2]_, [3]_, [4]_, and [5]_; [6]_
and [2]_ give 0.0454 and [2]_ also gives a similar correlation said to be
presented in [1]_, so it is believed this 0.0454 was the original power.
[6]_ also gives an expression for friction factor claimed to be presented
in [1]_; it is not used here.
Examples
--------
Example 4 in [6]_:
>>> Friedel(m=0.6, x=0.1, rhol=915., rhog=2.67, mul=180E-6, mug=14E-6,
... sigma=0.0487, D=0.05, roughness=0, L=1)
738.6500525002245
References
----------
.. [1] Friedel, L. "Improved Friction Pressure Drop Correlations for
Horizontal and Vertical Two-Phase Pipe Flow." , in: Proceedings,
European Two Phase Flow Group Meeting, Ispra, Italy, 1979: 485-481.
.. [2] Whalley, P. B. Boiling, Condensation, and Gas-Liquid Flow. Oxford:
Oxford University Press, 1987.
.. [3] Triplett, K. A., S. M. Ghiaasiaan, S. I. Abdel-Khalik, A. LeMouel,
and B. N. McCord. "Gas-liquid Two-Phase Flow in Microchannels: Part II:
Void Fraction and Pressure Drop.” International Journal of Multiphase
Flow 25, no. 3 (April 1999): 395-410. doi:10.1016/S0301-9322(98)00055-X.
.. [4] Mekisso, Henock Mateos. "Comparison of Frictional Pressure Drop
Correlations for Isothermal Two-Phase Horizontal Flow." Thesis, Oklahoma
State University, 2013. https://shareok.org/handle/11244/11109.
.. [5] Thome, John R. "Engineering Data Book III." Wolverine Tube Inc
(2004). http://www.wlv.com/heat-transfer-databook/
.. [6] Ghiaasiaan, S. Mostafa. Two-Phase Flow, Boiling, and Condensation:
       In Conventional and Miniature Systems. Cambridge University Press, 2007. | Below is the instruction that describes the task:
### Input:
r'''Calculates two-phase pressure drop with the Friedel correlation.
.. math::
\Delta P_{friction} = \Delta P_{lo} \phi_{lo}^2
.. math::
\phi_{lo}^2 = E + \frac{3.24FH}{Fr^{0.0454} We^{0.035}}
.. math::
H = \left(\frac{\rho_l}{\rho_g}\right)^{0.91}\left(\frac{\mu_g}{\mu_l}
\right)^{0.19}\left(1 - \frac{\mu_g}{\mu_l}\right)^{0.7}
.. math::
F = x^{0.78}(1 - x)^{0.224}
.. math::
E = (1-x)^2 + x^2\left(\frac{\rho_l f_{d,go}}{\rho_g f_{d,lo}}\right)
.. math::
Fr = \frac{G_{tp}^2}{gD\rho_H^2}
.. math::
We = \frac{G_{tp}^2 D}{\sigma \rho_H}
.. math::
\rho_H = \left(\frac{x}{\rho_g} + \frac{1-x}{\rho_l}\right)^{-1}
Parameters
----------
m : float
Mass flow rate of fluid, [kg/s]
x : float
Quality of fluid, [-]
rhol : float
Liquid density, [kg/m^3]
rhog : float
Gas density, [kg/m^3]
mul : float
Viscosity of liquid, [Pa*s]
mug : float
Viscosity of gas, [Pa*s]
sigma : float
Surface tension, [N/m]
D : float
Diameter of pipe, [m]
roughness : float, optional
Roughness of pipe for use in calculating friction factor, [m]
L : float, optional
Length of pipe, [m]
Returns
-------
dP : float
Pressure drop of the two-phase flow, [Pa]
Notes
-----
Applicable to vertical upflow and horizontal flow. Known to work poorly
when mul/mug > 1000. Gives mean errors on the order of 40%. Tested on data
with diameters as small as 4 mm.
The power of 0.0454 is given as 0.045 in [2]_, [3]_, [4]_, and [5]_; [6]_
and [2]_ give 0.0454 and [2]_ also gives a similar correlation said to be
presented in [1]_, so it is believed this 0.0454 was the original power.
[6]_ also gives an expression for friction factor claimed to be presented
in [1]_; it is not used here.
Examples
--------
Example 4 in [6]_:
>>> Friedel(m=0.6, x=0.1, rhol=915., rhog=2.67, mul=180E-6, mug=14E-6,
... sigma=0.0487, D=0.05, roughness=0, L=1)
738.6500525002245
References
----------
.. [1] Friedel, L. "Improved Friction Pressure Drop Correlations for
Horizontal and Vertical Two-Phase Pipe Flow." , in: Proceedings,
European Two Phase Flow Group Meeting, Ispra, Italy, 1979: 485-481.
.. [2] Whalley, P. B. Boiling, Condensation, and Gas-Liquid Flow. Oxford:
Oxford University Press, 1987.
.. [3] Triplett, K. A., S. M. Ghiaasiaan, S. I. Abdel-Khalik, A. LeMouel,
and B. N. McCord. "Gas-liquid Two-Phase Flow in Microchannels: Part II:
Void Fraction and Pressure Drop.” International Journal of Multiphase
Flow 25, no. 3 (April 1999): 395-410. doi:10.1016/S0301-9322(98)00055-X.
.. [4] Mekisso, Henock Mateos. "Comparison of Frictional Pressure Drop
Correlations for Isothermal Two-Phase Horizontal Flow." Thesis, Oklahoma
State University, 2013. https://shareok.org/handle/11244/11109.
.. [5] Thome, John R. "Engineering Data Book III." Wolverine Tube Inc
(2004). http://www.wlv.com/heat-transfer-databook/
.. [6] Ghiaasiaan, S. Mostafa. Two-Phase Flow, Boiling, and Condensation:
In Conventional and Miniature Systems. Cambridge University Press, 2007.
### Response:
def Friedel(m, x, rhol, rhog, mul, mug, sigma, D, roughness=0, L=1):
r'''Calculates two-phase pressure drop with the Friedel correlation.
.. math::
\Delta P_{friction} = \Delta P_{lo} \phi_{lo}^2
.. math::
\phi_{lo}^2 = E + \frac{3.24FH}{Fr^{0.0454} We^{0.035}}
.. math::
H = \left(\frac{\rho_l}{\rho_g}\right)^{0.91}\left(\frac{\mu_g}{\mu_l}
\right)^{0.19}\left(1 - \frac{\mu_g}{\mu_l}\right)^{0.7}
.. math::
F = x^{0.78}(1 - x)^{0.224}
.. math::
E = (1-x)^2 + x^2\left(\frac{\rho_l f_{d,go}}{\rho_g f_{d,lo}}\right)
.. math::
Fr = \frac{G_{tp}^2}{gD\rho_H^2}
.. math::
We = \frac{G_{tp}^2 D}{\sigma \rho_H}
.. math::
\rho_H = \left(\frac{x}{\rho_g} + \frac{1-x}{\rho_l}\right)^{-1}
Parameters
----------
m : float
Mass flow rate of fluid, [kg/s]
x : float
Quality of fluid, [-]
rhol : float
Liquid density, [kg/m^3]
rhog : float
Gas density, [kg/m^3]
mul : float
Viscosity of liquid, [Pa*s]
mug : float
Viscosity of gas, [Pa*s]
sigma : float
Surface tension, [N/m]
D : float
Diameter of pipe, [m]
roughness : float, optional
Roughness of pipe for use in calculating friction factor, [m]
L : float, optional
Length of pipe, [m]
Returns
-------
dP : float
Pressure drop of the two-phase flow, [Pa]
Notes
-----
Applicable to vertical upflow and horizontal flow. Known to work poorly
when mul/mug > 1000. Gives mean errors on the order of 40%. Tested on data
with diameters as small as 4 mm.
The power of 0.0454 is given as 0.045 in [2]_, [3]_, [4]_, and [5]_; [6]_
and [2]_ give 0.0454 and [2]_ also gives a similar correlation said to be
presented in [1]_, so it is believed this 0.0454 was the original power.
[6]_ also gives an expression for friction factor claimed to be presented
in [1]_; it is not used here.
Examples
--------
Example 4 in [6]_:
>>> Friedel(m=0.6, x=0.1, rhol=915., rhog=2.67, mul=180E-6, mug=14E-6,
... sigma=0.0487, D=0.05, roughness=0, L=1)
738.6500525002245
References
----------
.. [1] Friedel, L. "Improved Friction Pressure Drop Correlations for
Horizontal and Vertical Two-Phase Pipe Flow." , in: Proceedings,
European Two Phase Flow Group Meeting, Ispra, Italy, 1979: 485-481.
.. [2] Whalley, P. B. Boiling, Condensation, and Gas-Liquid Flow. Oxford:
Oxford University Press, 1987.
.. [3] Triplett, K. A., S. M. Ghiaasiaan, S. I. Abdel-Khalik, A. LeMouel,
and B. N. McCord. "Gas-liquid Two-Phase Flow in Microchannels: Part II:
Void Fraction and Pressure Drop.” International Journal of Multiphase
Flow 25, no. 3 (April 1999): 395-410. doi:10.1016/S0301-9322(98)00055-X.
.. [4] Mekisso, Henock Mateos. "Comparison of Frictional Pressure Drop
Correlations for Isothermal Two-Phase Horizontal Flow." Thesis, Oklahoma
State University, 2013. https://shareok.org/handle/11244/11109.
.. [5] Thome, John R. "Engineering Data Book III." Wolverine Tube Inc
(2004). http://www.wlv.com/heat-transfer-databook/
.. [6] Ghiaasiaan, S. Mostafa. Two-Phase Flow, Boiling, and Condensation:
In Conventional and Miniature Systems. Cambridge University Press, 2007.
'''
# Liquid-only properties, for calculation of E, dP_lo
v_lo = m/rhol/(pi/4*D**2)
Re_lo = Reynolds(V=v_lo, rho=rhol, mu=mul, D=D)
fd_lo = friction_factor(Re=Re_lo, eD=roughness/D)
dP_lo = fd_lo*L/D*(0.5*rhol*v_lo**2)
# Gas-only properties, for calculation of E
v_go = m/rhog/(pi/4*D**2)
Re_go = Reynolds(V=v_go, rho=rhog, mu=mug, D=D)
fd_go = friction_factor(Re=Re_go, eD=roughness/D)
F = x**0.78*(1-x)**0.224
H = (rhol/rhog)**0.91*(mug/mul)**0.19*(1 - mug/mul)**0.7
E = (1-x)**2 + x**2*(rhol*fd_go/(rhog*fd_lo))
# Homogeneous properties, for Froude/Weber numbers
voidage_h = homogeneous(x, rhol, rhog)
rho_h = rhol*(1-voidage_h) + rhog*voidage_h
Q_h = m/rho_h
v_h = Q_h/(pi/4*D**2)
Fr = Froude(V=v_h, L=D, squared=True) # checked with (m/(pi/4*D**2))**2/g/D/rho_h**2
We = Weber(V=v_h, L=D, rho=rho_h, sigma=sigma) # checked with (m/(pi/4*D**2))**2*D/sigma/rho_h
phi_lo2 = E + 3.24*F*H/(Fr**0.0454*We**0.035)
return phi_lo2*dP_lo |
def import_parms(self, args):
"""Import external dict to internal dict"""
for key, val in args.items():
        self.set_parm(key, val) | Import external dict to internal dict | Below is the instruction that describes the task:
### Input:
Import external dict to internal dict
### Response:
def import_parms(self, args):
"""Import external dict to internal dict"""
for key, val in args.items():
self.set_parm(key, val) |
def daemonize(redirect_out=True):
'''
Daemonize a process
'''
# Avoid circular import
import salt.utils.crypt
try:
pid = os.fork()
if pid > 0:
# exit first parent
salt.utils.crypt.reinit_crypto()
os._exit(salt.defaults.exitcodes.EX_OK)
except OSError as exc:
log.error('fork #1 failed: %s (%s)', exc.errno, exc)
sys.exit(salt.defaults.exitcodes.EX_GENERIC)
# decouple from parent environment
os.chdir('/')
# noinspection PyArgumentList
os.setsid()
os.umask(0o022) # pylint: disable=blacklisted-function
# do second fork
try:
pid = os.fork()
if pid > 0:
salt.utils.crypt.reinit_crypto()
sys.exit(salt.defaults.exitcodes.EX_OK)
except OSError as exc:
log.error('fork #2 failed: %s (%s)', exc.errno, exc)
sys.exit(salt.defaults.exitcodes.EX_GENERIC)
salt.utils.crypt.reinit_crypto()
# A normal daemonization redirects the process output to /dev/null.
# Unfortunately when a python multiprocess is called the output is
# not cleanly redirected and the parent process dies when the
# multiprocessing process attempts to access stdout or err.
if redirect_out:
with salt.utils.files.fopen('/dev/null', 'r+') as dev_null:
# Redirect python stdin/out/err
# and the os stdin/out/err which can be different
os.dup2(dev_null.fileno(), sys.stdin.fileno())
os.dup2(dev_null.fileno(), sys.stdout.fileno())
os.dup2(dev_null.fileno(), sys.stderr.fileno())
os.dup2(dev_null.fileno(), 0)
os.dup2(dev_null.fileno(), 1)
            os.dup2(dev_null.fileno(), 2) | Daemonize a process | Below is the instruction that describes the task:
### Input:
Daemonize a process
### Response:
def daemonize(redirect_out=True):
'''
Daemonize a process
'''
# Avoid circular import
import salt.utils.crypt
try:
pid = os.fork()
if pid > 0:
# exit first parent
salt.utils.crypt.reinit_crypto()
os._exit(salt.defaults.exitcodes.EX_OK)
except OSError as exc:
log.error('fork #1 failed: %s (%s)', exc.errno, exc)
sys.exit(salt.defaults.exitcodes.EX_GENERIC)
# decouple from parent environment
os.chdir('/')
# noinspection PyArgumentList
os.setsid()
os.umask(0o022) # pylint: disable=blacklisted-function
# do second fork
try:
pid = os.fork()
if pid > 0:
salt.utils.crypt.reinit_crypto()
sys.exit(salt.defaults.exitcodes.EX_OK)
except OSError as exc:
log.error('fork #2 failed: %s (%s)', exc.errno, exc)
sys.exit(salt.defaults.exitcodes.EX_GENERIC)
salt.utils.crypt.reinit_crypto()
# A normal daemonization redirects the process output to /dev/null.
# Unfortunately when a python multiprocess is called the output is
# not cleanly redirected and the parent process dies when the
# multiprocessing process attempts to access stdout or err.
if redirect_out:
with salt.utils.files.fopen('/dev/null', 'r+') as dev_null:
# Redirect python stdin/out/err
# and the os stdin/out/err which can be different
os.dup2(dev_null.fileno(), sys.stdin.fileno())
os.dup2(dev_null.fileno(), sys.stdout.fileno())
os.dup2(dev_null.fileno(), sys.stderr.fileno())
os.dup2(dev_null.fileno(), 0)
os.dup2(dev_null.fileno(), 1)
os.dup2(dev_null.fileno(), 2) |
def powerlaw(f, log10_A=-16, gamma=5):
"""Power-law PSD.
:param f: Sampling frequencies
:param log10_A: log10 of red noise Amplitude [GW units]
:param gamma: Spectral index of red noise process
"""
fyr = 1 / 3.16e7
return (10**log10_A)**2 / 12.0 / np.pi**2 * fyr**(gamma-3) * f**(-gamma) | Power-law PSD.
:param f: Sampling frequencies
:param log10_A: log10 of red noise Amplitude [GW units]
    :param gamma: Spectral index of red noise process | Below is the instruction that describes the task:
### Input:
Power-law PSD.
:param f: Sampling frequencies
:param log10_A: log10 of red noise Amplitude [GW units]
:param gamma: Spectral index of red noise process
### Response:
def powerlaw(f, log10_A=-16, gamma=5):
"""Power-law PSD.
:param f: Sampling frequencies
:param log10_A: log10 of red noise Amplitude [GW units]
:param gamma: Spectral index of red noise process
"""
fyr = 1 / 3.16e7
return (10**log10_A)**2 / 12.0 / np.pi**2 * fyr**(gamma-3) * f**(-gamma) |
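A small numeric check of the formula above (same function body, repeated here so the snippet runs on its own; frequencies are assumed to be in Hz).
import numpy as np

def powerlaw(f, log10_A=-16, gamma=5):
    fyr = 1 / 3.16e7  # roughly one cycle per year, in Hz
    return (10**log10_A)**2 / 12.0 / np.pi**2 * fyr**(gamma - 3) * f**(-gamma)

f = np.array([1e-9, 1e-8, 1e-7])  # sample frequencies in the nanohertz band
print(powerlaw(f))  # PSD falls steeply with f for gamma = 5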
def over(self, name='usd'):
'''Returns a new currency pair with the *over* currency as
second part of the pair (Foreign currency).'''
name = name.upper()
if self.ccy1.code == name.upper():
return ccy_pair(self.ccy2, self.ccy1)
else:
return self | Returns a new currency pair with the *over* currency as
    second part of the pair (Foreign currency). | Below is the instruction that describes the task:
### Input:
Returns a new currency pair with the *over* currency as
second part of the pair (Foreign currency).
### Response:
def over(self, name='usd'):
'''Returns a new currency pair with the *over* currency as
second part of the pair (Foreign currency).'''
name = name.upper()
if self.ccy1.code == name.upper():
return ccy_pair(self.ccy2, self.ccy1)
else:
return self |
def configure_roles_on_host(api, host):
"""
Go through all the roles on this host, and configure them if they
match the role types that we care about.
"""
for role_ref in host.roleRefs:
# Mgmt service/role has no cluster name. Skip over those.
if role_ref.get('clusterName') is None:
continue
# Get the role and inspect the role type
role = api.get_cluster(role_ref['clusterName'])\
.get_service(role_ref['serviceName'])\
.get_role(role_ref['roleName'])
LOG.debug("Evaluating %s (%s)" % (role.name, host.hostname))
config = None
if role.type == 'DATANODE':
config = DATANODE_CONF
elif role.type == 'TASKTRACKER':
config = TASKTRACKER_CONF
elif role.type == 'REGIONSERVER':
config = REGIONSERVER_CONF
else:
continue
# Set the config
LOG.info("Configuring %s (%s)" % (role.name, host.hostname))
role.update_config(config) | Go through all the roles on this host, and configure them if they
    match the role types that we care about. | Below is the instruction that describes the task:
### Input:
Go through all the roles on this host, and configure them if they
match the role types that we care about.
### Response:
def configure_roles_on_host(api, host):
"""
Go through all the roles on this host, and configure them if they
match the role types that we care about.
"""
for role_ref in host.roleRefs:
# Mgmt service/role has no cluster name. Skip over those.
if role_ref.get('clusterName') is None:
continue
# Get the role and inspect the role type
role = api.get_cluster(role_ref['clusterName'])\
.get_service(role_ref['serviceName'])\
.get_role(role_ref['roleName'])
LOG.debug("Evaluating %s (%s)" % (role.name, host.hostname))
config = None
if role.type == 'DATANODE':
config = DATANODE_CONF
elif role.type == 'TASKTRACKER':
config = TASKTRACKER_CONF
elif role.type == 'REGIONSERVER':
config = REGIONSERVER_CONF
else:
continue
# Set the config
LOG.info("Configuring %s (%s)" % (role.name, host.hostname))
role.update_config(config) |
def get(sub_array_id):
"""Sub array detail resource.
This method will list scheduling blocks and processing blocks
in the specified sub-array.
"""
if not re.match(r'^subarray-0[0-9]|subarray-1[0-5]$', sub_array_id):
response = dict(error='Invalid sub-array ID specified "{}" does not '
'match sub-array ID naming convention '
'(ie. subarray-[00-15]).'.
format(sub_array_id))
return response, HTTPStatus.BAD_REQUEST
if sub_array_id not in DB.get_sub_array_ids():
response = dict(error='Sub-array "{}" does not currently exist. '
'Known sub-arrays = {}'
.format(sub_array_id, DB.get_sub_array_ids()))
return response, HTTPStatus.NOT_FOUND
block_ids = DB.get_sub_array_sbi_ids(sub_array_id)
_blocks = [b for b in DB.get_block_details(block_ids)]
response = dict(scheduling_blocks=[])
_url = get_root_url()
for block in _blocks:
block['links'] = {
'self': '{}/scheduling-block/{}'.format(_url, block['id'])
}
response['scheduling_blocks'].append(block)
response['links'] = {
'self': '{}'.format(request.url),
'list': '{}/sub-arrays'.format(_url),
'home': '{}'.format(_url),
}
return response, HTTPStatus.OK | Sub array detail resource.
This method will list scheduling blocks and processing blocks
    in the specified sub-array. | Below is the instruction that describes the task:
### Input:
Sub array detail resource.
This method will list scheduling blocks and processing blocks
in the specified sub-array.
### Response:
def get(sub_array_id):
"""Sub array detail resource.
This method will list scheduling blocks and processing blocks
in the specified sub-array.
"""
if not re.match(r'^subarray-0[0-9]|subarray-1[0-5]$', sub_array_id):
response = dict(error='Invalid sub-array ID specified "{}" does not '
'match sub-array ID naming convention '
'(ie. subarray-[00-15]).'.
format(sub_array_id))
return response, HTTPStatus.BAD_REQUEST
if sub_array_id not in DB.get_sub_array_ids():
response = dict(error='Sub-array "{}" does not currently exist. '
'Known sub-arrays = {}'
.format(sub_array_id, DB.get_sub_array_ids()))
return response, HTTPStatus.NOT_FOUND
block_ids = DB.get_sub_array_sbi_ids(sub_array_id)
_blocks = [b for b in DB.get_block_details(block_ids)]
response = dict(scheduling_blocks=[])
_url = get_root_url()
for block in _blocks:
block['links'] = {
'self': '{}/scheduling-block/{}'.format(_url, block['id'])
}
response['scheduling_blocks'].append(block)
response['links'] = {
'self': '{}'.format(request.url),
'list': '{}/sub-arrays'.format(_url),
'home': '{}'.format(_url),
}
return response, HTTPStatus.OK |
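The ID check above hinges on that regular expression; the standalone snippet below (pattern copied verbatim) shows which IDs pass, and notes one quirk of the ungrouped alternation.
import re

PATTERN = r'^subarray-0[0-9]|subarray-1[0-5]$'  # copied from the handler above

for candidate in ['subarray-03', 'subarray-15', 'subarray-16', 'subarray-0123']:
    print(candidate, bool(re.match(PATTERN, candidate)))
# Because the alternation is not grouped, '^' binds only to the first branch
# and '$' only to the second, so 'subarray-0123' also slips through; grouping
# as r'^subarray-(0[0-9]|1[0-5])$' would make the check strict.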
def run(self):
"""Launch filtering, sorting and paging to output results."""
query = self.query
# count before filtering
self.cardinality = query.add_columns(self.columns[0].sqla_expr).count()
self._set_column_filter_expressions()
self._set_global_filter_expression()
self._set_sort_expressions()
self._set_yadcf_data(query)
# apply filters
query = query.filter(
*[e for e in self.filter_expressions if e is not None])
self.cardinality_filtered = query.add_columns(
self.columns[0].sqla_expr).count()
# apply sorts
query = query.order_by(
*[e for e in self.sort_expressions if e is not None])
# add paging options
length = int(self.params.get('length'))
if length >= 0:
query = query.limit(length)
elif length == -1:
pass
else:
raise (ValueError(
'Length should be a positive integer or -1 to disable'))
query = query.offset(int(self.params.get('start')))
# add columns to query
query = query.add_columns(*[c.sqla_expr for c in self.columns])
# fetch the result of the queries
column_names = [
col.mData if col.mData else str(i)
for i, col in enumerate(self.columns)
]
self.results = [{k: v
for k, v in zip(column_names, row)}
                        for row in query.all()] | Launch filtering, sorting and paging to output results. | Below is the instruction that describes the task:
### Input:
Launch filtering, sorting and paging to output results.
### Response:
def run(self):
"""Launch filtering, sorting and paging to output results."""
query = self.query
# count before filtering
self.cardinality = query.add_columns(self.columns[0].sqla_expr).count()
self._set_column_filter_expressions()
self._set_global_filter_expression()
self._set_sort_expressions()
self._set_yadcf_data(query)
# apply filters
query = query.filter(
*[e for e in self.filter_expressions if e is not None])
self.cardinality_filtered = query.add_columns(
self.columns[0].sqla_expr).count()
# apply sorts
query = query.order_by(
*[e for e in self.sort_expressions if e is not None])
# add paging options
length = int(self.params.get('length'))
if length >= 0:
query = query.limit(length)
elif length == -1:
pass
else:
raise (ValueError(
'Length should be a positive integer or -1 to disable'))
query = query.offset(int(self.params.get('start')))
# add columns to query
query = query.add_columns(*[c.sqla_expr for c in self.columns])
# fetch the result of the queries
column_names = [
col.mData if col.mData else str(i)
for i, col in enumerate(self.columns)
]
self.results = [{k: v
for k, v in zip(column_names, row)}
for row in query.all()] |
def clear(self):
"""Resets Bloch sphere data sets to empty.
"""
self.points = []
self.vectors = []
self.point_style = []
    self.annotations = [] | Resets Bloch sphere data sets to empty. | Below is the instruction that describes the task:
### Input:
Resets Bloch sphere data sets to empty.
### Response:
def clear(self):
"""Resets Bloch sphere data sets to empty.
"""
self.points = []
self.vectors = []
self.point_style = []
self.annotations = [] |
def accept(self):
"""
Update `show_errors` and hide dialog box.
Overrides method of `QDialogBox`.
"""
AutosaveErrorDialog.show_errors = not self.dismiss_box.isChecked()
return QDialog.accept(self) | Update `show_errors` and hide dialog box.
    Overrides method of `QDialogBox`. | Below is the instruction that describes the task:
### Input:
Update `show_errors` and hide dialog box.
Overrides method of `QDialogBox`.
### Response:
def accept(self):
"""
Update `show_errors` and hide dialog box.
Overrides method of `QDialogBox`.
"""
AutosaveErrorDialog.show_errors = not self.dismiss_box.isChecked()
return QDialog.accept(self) |
def eni_present(
name,
subnet_id=None,
subnet_name=None,
private_ip_address=None,
description=None,
groups=None,
source_dest_check=True,
allocate_eip=None,
arecords=None,
region=None,
key=None,
keyid=None,
profile=None):
'''
Ensure the EC2 ENI exists.
.. versionadded:: 2016.3.0
name
Name tag associated with the ENI.
subnet_id
The VPC subnet ID the ENI will exist within.
subnet_name
The VPC subnet name the ENI will exist within.
private_ip_address
The private ip address to use for this ENI. If this is not specified
AWS will automatically assign a private IP address to the ENI. Must be
specified at creation time; will be ignored afterward.
description
Description of the key.
groups
A list of security groups to apply to the ENI.
source_dest_check
Boolean specifying whether source/destination checking is enabled on
the ENI.
allocate_eip
allocate and associate an EIP to the ENI. Could be 'standard' to
allocate Elastic IP to EC2 region or 'vpc' to get it for a
particular VPC
.. versionchanged:: 2016.11.0
arecords
A list of arecord dicts with attributes needed for the DNS add_record state.
By default the boto_route53.add_record state will be used, which requires: name, zone, ttl, and identifier.
See the boto_route53 state for information about these attributes.
Other DNS modules can be called by specifying the provider keyword.
By default, the private ENI IP address will be used, set 'public: True' in the arecord dict to use the ENI's public IP address
.. versionadded:: 2016.3.0
region
Region to connect to.
key
Secret key to be used.
keyid
Access key to be used.
profile
A dict with region, key and keyid, or a pillar key (string)
that contains a dict with region, key and keyid.
'''
if not salt.utils.data.exactly_one((subnet_id, subnet_name)):
raise SaltInvocationError('One (but not both) of subnet_id or '
'subnet_name must be provided.')
if not groups:
raise SaltInvocationError('groups is a required argument.')
if not isinstance(groups, list):
raise SaltInvocationError('groups must be a list.')
if not isinstance(source_dest_check, bool):
raise SaltInvocationError('source_dest_check must be a bool.')
ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
r = __salt__['boto_ec2.get_network_interface'](
name=name, region=region, key=key, keyid=keyid, profile=profile
)
if 'error' in r:
ret['result'] = False
ret['comment'] = 'Error when attempting to find eni: {0}.'.format(
r['error']['message']
)
return ret
if not r['result']:
if __opts__['test']:
ret['comment'] = 'ENI is set to be created.'
if allocate_eip:
                ret['comment'] = ' '.join([ret['comment'], 'An EIP is set to be allocated/associated to the ENI.'])
if arecords:
ret['comment'] = ' '.join([ret['comment'], 'A records are set to be created.'])
ret['result'] = None
return ret
result_create = __salt__['boto_ec2.create_network_interface'](
name, subnet_id=subnet_id, subnet_name=subnet_name, private_ip_address=private_ip_address,
description=description, groups=groups, region=region, key=key,
keyid=keyid, profile=profile
)
if 'error' in result_create:
ret['result'] = False
ret['comment'] = 'Failed to create ENI: {0}'.format(
result_create['error']['message']
)
return ret
r['result'] = result_create['result']
ret['comment'] = 'Created ENI {0}'.format(name)
ret['changes']['id'] = r['result']['id']
else:
_ret = _eni_attribute(
r['result'], 'description', description, region, key, keyid,
profile
)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = _ret['comment']
if not _ret['result']:
ret['result'] = _ret['result']
if ret['result'] is False:
return ret
_ret = _eni_groups(
r['result'], groups, region, key, keyid, profile
)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = ' '.join([ret['comment'], _ret['comment']])
if not _ret['result']:
ret['result'] = _ret['result']
if ret['result'] is False:
return ret
# Actions that need to occur whether creating or updating
_ret = _eni_attribute(
r['result'], 'source_dest_check', source_dest_check, region, key,
keyid, profile
)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = ' '.join([ret['comment'], _ret['comment']])
if not _ret['result']:
ret['result'] = _ret['result']
return ret
if allocate_eip:
if 'allocationId' not in r['result']:
if __opts__['test']:
                ret['comment'] = ' '.join([ret['comment'], 'An EIP is set to be allocated and associated to the ENI.'])
else:
domain = 'vpc' if allocate_eip == 'vpc' else None
eip_alloc = __salt__['boto_ec2.allocate_eip_address'](domain=domain,
region=region,
key=key,
keyid=keyid,
profile=profile)
if eip_alloc:
_ret = __salt__['boto_ec2.associate_eip_address'](instance_id=None,
instance_name=None,
public_ip=None,
allocation_id=eip_alloc['allocation_id'],
network_interface_id=r['result']['id'],
private_ip_address=None,
allow_reassociation=False,
region=region,
key=key,
keyid=keyid,
profile=profile)
if not _ret:
_ret = __salt__['boto_ec2.release_eip_address'](public_ip=None,
allocation_id=eip_alloc['allocation_id'],
region=region,
key=key,
keyid=keyid,
profile=profile)
ret['result'] = False
                        msg = 'Failed to associate the allocated EIP address with the ENI. The EIP {0}'.format('was successfully released.' if _ret else 'was NOT RELEASED.')
ret['comment'] = ' '.join([ret['comment'], msg])
return ret
else:
ret['result'] = False
ret['comment'] = ' '.join([ret['comment'], 'Failed to allocate an EIP address'])
return ret
else:
            ret['comment'] = ' '.join([ret['comment'], 'An EIP is already allocated/associated to the ENI'])
if arecords:
for arecord in arecords:
if 'name' not in arecord:
msg = 'The arecord must contain a "name" property.'
raise SaltInvocationError(msg)
log.debug('processing arecord %s', arecord)
_ret = None
dns_provider = 'boto_route53'
arecord['record_type'] = 'A'
public_ip_arecord = False
if 'public' in arecord:
public_ip_arecord = arecord.pop('public')
if public_ip_arecord:
if 'publicIp' in r['result']:
arecord['value'] = r['result']['publicIp']
elif 'public_ip' in eip_alloc:
arecord['value'] = eip_alloc['public_ip']
else:
msg = 'Unable to add an A record for the public IP address, a public IP address does not seem to be allocated to this ENI.'
raise CommandExecutionError(msg)
else:
arecord['value'] = r['result']['private_ip_address']
if 'provider' in arecord:
dns_provider = arecord.pop('provider')
if dns_provider == 'boto_route53':
if 'profile' not in arecord:
arecord['profile'] = profile
if 'key' not in arecord:
arecord['key'] = key
if 'keyid' not in arecord:
arecord['keyid'] = keyid
if 'region' not in arecord:
arecord['region'] = region
_ret = __states__['.'.join([dns_provider, 'present'])](**arecord)
log.debug('ret from dns_provider.present = %s', _ret)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = ' '.join([ret['comment'], _ret['comment']])
if not _ret['result']:
ret['result'] = _ret['result']
if ret['result'] is False:
return ret
return ret | Ensure the EC2 ENI exists.
.. versionadded:: 2016.3.0
name
Name tag associated with the ENI.
subnet_id
The VPC subnet ID the ENI will exist within.
subnet_name
The VPC subnet name the ENI will exist within.
private_ip_address
The private ip address to use for this ENI. If this is not specified
AWS will automatically assign a private IP address to the ENI. Must be
specified at creation time; will be ignored afterward.
description
Description of the key.
groups
A list of security groups to apply to the ENI.
source_dest_check
Boolean specifying whether source/destination checking is enabled on
the ENI.
allocate_eip
allocate and associate an EIP to the ENI. Could be 'standard' to
allocate Elastic IP to EC2 region or 'vpc' to get it for a
particular VPC
.. versionchanged:: 2016.11.0
arecords
A list of arecord dicts with attributes needed for the DNS add_record state.
By default the boto_route53.add_record state will be used, which requires: name, zone, ttl, and identifier.
See the boto_route53 state for information about these attributes.
Other DNS modules can be called by specifying the provider keyword.
By default, the private ENI IP address will be used, set 'public: True' in the arecord dict to use the ENI's public IP address
.. versionadded:: 2016.3.0
region
Region to connect to.
key
Secret key to be used.
keyid
Access key to be used.
profile
A dict with region, key and keyid, or a pillar key (string)
        that contains a dict with region, key and keyid. | Below is the instruction that describes the task:
### Input:
Ensure the EC2 ENI exists.
.. versionadded:: 2016.3.0
name
Name tag associated with the ENI.
subnet_id
The VPC subnet ID the ENI will exist within.
subnet_name
The VPC subnet name the ENI will exist within.
private_ip_address
The private ip address to use for this ENI. If this is not specified
AWS will automatically assign a private IP address to the ENI. Must be
specified at creation time; will be ignored afterward.
description
Description of the key.
groups
A list of security groups to apply to the ENI.
source_dest_check
Boolean specifying whether source/destination checking is enabled on
the ENI.
allocate_eip
allocate and associate an EIP to the ENI. Could be 'standard' to
allocate Elastic IP to EC2 region or 'vpc' to get it for a
particular VPC
.. versionchanged:: 2016.11.0
arecords
A list of arecord dicts with attributes needed for the DNS add_record state.
By default the boto_route53.add_record state will be used, which requires: name, zone, ttl, and identifier.
See the boto_route53 state for information about these attributes.
Other DNS modules can be called by specifying the provider keyword.
By default, the private ENI IP address will be used, set 'public: True' in the arecord dict to use the ENI's public IP address
.. versionadded:: 2016.3.0
region
Region to connect to.
key
Secret key to be used.
keyid
Access key to be used.
profile
A dict with region, key and keyid, or a pillar key (string)
that contains a dict with region, key and keyid.
### Response:
def eni_present(
name,
subnet_id=None,
subnet_name=None,
private_ip_address=None,
description=None,
groups=None,
source_dest_check=True,
allocate_eip=None,
arecords=None,
region=None,
key=None,
keyid=None,
profile=None):
'''
Ensure the EC2 ENI exists.
.. versionadded:: 2016.3.0
name
Name tag associated with the ENI.
subnet_id
The VPC subnet ID the ENI will exist within.
subnet_name
The VPC subnet name the ENI will exist within.
private_ip_address
The private ip address to use for this ENI. If this is not specified
AWS will automatically assign a private IP address to the ENI. Must be
specified at creation time; will be ignored afterward.
description
Description of the key.
groups
A list of security groups to apply to the ENI.
source_dest_check
Boolean specifying whether source/destination checking is enabled on
the ENI.
allocate_eip
allocate and associate an EIP to the ENI. Could be 'standard' to
allocate Elastic IP to EC2 region or 'vpc' to get it for a
particular VPC
.. versionchanged:: 2016.11.0
arecords
A list of arecord dicts with attributes needed for the DNS add_record state.
By default the boto_route53.add_record state will be used, which requires: name, zone, ttl, and identifier.
See the boto_route53 state for information about these attributes.
Other DNS modules can be called by specifying the provider keyword.
By default, the private ENI IP address will be used, set 'public: True' in the arecord dict to use the ENI's public IP address
.. versionadded:: 2016.3.0
region
Region to connect to.
key
Secret key to be used.
keyid
Access key to be used.
profile
A dict with region, key and keyid, or a pillar key (string)
that contains a dict with region, key and keyid.
'''
if not salt.utils.data.exactly_one((subnet_id, subnet_name)):
raise SaltInvocationError('One (but not both) of subnet_id or '
'subnet_name must be provided.')
if not groups:
raise SaltInvocationError('groups is a required argument.')
if not isinstance(groups, list):
raise SaltInvocationError('groups must be a list.')
if not isinstance(source_dest_check, bool):
raise SaltInvocationError('source_dest_check must be a bool.')
ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
r = __salt__['boto_ec2.get_network_interface'](
name=name, region=region, key=key, keyid=keyid, profile=profile
)
if 'error' in r:
ret['result'] = False
ret['comment'] = 'Error when attempting to find eni: {0}.'.format(
r['error']['message']
)
return ret
if not r['result']:
if __opts__['test']:
ret['comment'] = 'ENI is set to be created.'
if allocate_eip:
                ret['comment'] = ' '.join([ret['comment'], 'An EIP is set to be allocated/associated to the ENI.'])
if arecords:
ret['comment'] = ' '.join([ret['comment'], 'A records are set to be created.'])
ret['result'] = None
return ret
result_create = __salt__['boto_ec2.create_network_interface'](
name, subnet_id=subnet_id, subnet_name=subnet_name, private_ip_address=private_ip_address,
description=description, groups=groups, region=region, key=key,
keyid=keyid, profile=profile
)
if 'error' in result_create:
ret['result'] = False
ret['comment'] = 'Failed to create ENI: {0}'.format(
result_create['error']['message']
)
return ret
r['result'] = result_create['result']
ret['comment'] = 'Created ENI {0}'.format(name)
ret['changes']['id'] = r['result']['id']
else:
_ret = _eni_attribute(
r['result'], 'description', description, region, key, keyid,
profile
)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = _ret['comment']
if not _ret['result']:
ret['result'] = _ret['result']
if ret['result'] is False:
return ret
_ret = _eni_groups(
r['result'], groups, region, key, keyid, profile
)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = ' '.join([ret['comment'], _ret['comment']])
if not _ret['result']:
ret['result'] = _ret['result']
if ret['result'] is False:
return ret
# Actions that need to occur whether creating or updating
_ret = _eni_attribute(
r['result'], 'source_dest_check', source_dest_check, region, key,
keyid, profile
)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = ' '.join([ret['comment'], _ret['comment']])
if not _ret['result']:
ret['result'] = _ret['result']
return ret
if allocate_eip:
if 'allocationId' not in r['result']:
if __opts__['test']:
                ret['comment'] = ' '.join([ret['comment'], 'An EIP is set to be allocated and associated to the ENI.'])
else:
domain = 'vpc' if allocate_eip == 'vpc' else None
eip_alloc = __salt__['boto_ec2.allocate_eip_address'](domain=domain,
region=region,
key=key,
keyid=keyid,
profile=profile)
if eip_alloc:
_ret = __salt__['boto_ec2.associate_eip_address'](instance_id=None,
instance_name=None,
public_ip=None,
allocation_id=eip_alloc['allocation_id'],
network_interface_id=r['result']['id'],
private_ip_address=None,
allow_reassociation=False,
region=region,
key=key,
keyid=keyid,
profile=profile)
if not _ret:
_ret = __salt__['boto_ec2.release_eip_address'](public_ip=None,
allocation_id=eip_alloc['allocation_id'],
region=region,
key=key,
keyid=keyid,
profile=profile)
ret['result'] = False
                        msg = 'Failed to associate the allocated EIP address with the ENI. The EIP {0}'.format('was successfully released.' if _ret else 'was NOT RELEASED.')
ret['comment'] = ' '.join([ret['comment'], msg])
return ret
else:
ret['result'] = False
ret['comment'] = ' '.join([ret['comment'], 'Failed to allocate an EIP address'])
return ret
else:
            ret['comment'] = ' '.join([ret['comment'], 'An EIP is already allocated/associated to the ENI'])
if arecords:
for arecord in arecords:
if 'name' not in arecord:
msg = 'The arecord must contain a "name" property.'
raise SaltInvocationError(msg)
log.debug('processing arecord %s', arecord)
_ret = None
dns_provider = 'boto_route53'
arecord['record_type'] = 'A'
public_ip_arecord = False
if 'public' in arecord:
public_ip_arecord = arecord.pop('public')
if public_ip_arecord:
if 'publicIp' in r['result']:
arecord['value'] = r['result']['publicIp']
elif 'public_ip' in eip_alloc:
arecord['value'] = eip_alloc['public_ip']
else:
msg = 'Unable to add an A record for the public IP address, a public IP address does not seem to be allocated to this ENI.'
raise CommandExecutionError(msg)
else:
arecord['value'] = r['result']['private_ip_address']
if 'provider' in arecord:
dns_provider = arecord.pop('provider')
if dns_provider == 'boto_route53':
if 'profile' not in arecord:
arecord['profile'] = profile
if 'key' not in arecord:
arecord['key'] = key
if 'keyid' not in arecord:
arecord['keyid'] = keyid
if 'region' not in arecord:
arecord['region'] = region
_ret = __states__['.'.join([dns_provider, 'present'])](**arecord)
log.debug('ret from dns_provider.present = %s', _ret)
ret['changes'] = dictupdate.update(ret['changes'], _ret['changes'])
ret['comment'] = ' '.join([ret['comment'], _ret['comment']])
if not _ret['result']:
ret['result'] = _ret['result']
if ret['result'] is False:
return ret
return ret |
def smooth_angle_channels(self, channels):
"""Remove discontinuities in angle channels so that they don't cause artifacts in algorithms that rely on the smoothness of the functions."""
for vertex in self.vertices:
for col in vertex.meta['rot_ind']:
if col:
for k in range(1, channels.shape[0]):
diff=channels[k, col]-channels[k-1, col]
if abs(diff+360.)<abs(diff):
channels[k:, col]=channels[k:, col]+360.
elif abs(diff-360.)<abs(diff):
                        channels[k:, col]=channels[k:, col]-360. | Remove discontinuities in angle channels so that they don't cause artifacts in algorithms that rely on the smoothness of the functions. | Below is the instruction that describes the task:
### Input:
Remove discontinuities in angle channels so that they don't cause artifacts in algorithms that rely on the smoothness of the functions.
### Response:
def smooth_angle_channels(self, channels):
"""Remove discontinuities in angle channels so that they don't cause artifacts in algorithms that rely on the smoothness of the functions."""
for vertex in self.vertices:
for col in vertex.meta['rot_ind']:
if col:
for k in range(1, channels.shape[0]):
diff=channels[k, col]-channels[k-1, col]
if abs(diff+360.)<abs(diff):
channels[k:, col]=channels[k:, col]+360.
elif abs(diff-360.)<abs(diff):
channels[k:, col]=channels[k:, col]-360. |
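The same +/-360 correction can be seen on a plain one-dimensional array; the sketch below lifts the inner loop above into a standalone helper (numpy assumed) and shows a wrap from +180 to -180 degrees being smoothed out.
import numpy as np

def smooth_channel(col):
    """Apply the +/-360 degree correction from above to one angle channel."""
    col = np.asarray(col, dtype=float).copy()
    for k in range(1, col.shape[0]):
        diff = col[k] - col[k - 1]
        if abs(diff + 360.) < abs(diff):
            col[k:] = col[k:] + 360.
        elif abs(diff - 360.) < abs(diff):
            col[k:] = col[k:] - 360.
    return col

angles = [170., 175., -178., -170.]   # jumps across the +/-180 boundary
print(smooth_channel(angles))         # [170. 175. 182. 190.]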
def normalize_flags(self, flags):
"""normalize the flags to make sure needed values are there
after this method is called self.flags is available
:param flags: the flags that will be normalized
"""
flags['type'] = flags.get('type', None)
paction = flags.get('action', 'store')
if paction == 'store_false':
flags['default'] = True
flags['type'] = bool
elif paction == 'store_true':
flags['default'] = False
flags['type'] = bool
prequired = False if 'default' in flags else flags.get('required', True)
flags["action"] = paction
flags["required"] = prequired
self.flags = flags | normalize the flags to make sure needed values are there
after this method is called self.flags is available
    :param flags: the flags that will be normalized | Below is the instruction that describes the task:
### Input:
normalize the flags to make sure needed values are there
after this method is called self.flags is available
:param flags: the flags that will be normalized
### Response:
def normalize_flags(self, flags):
"""normalize the flags to make sure needed values are there
after this method is called self.flags is available
:param flags: the flags that will be normalized
"""
flags['type'] = flags.get('type', None)
paction = flags.get('action', 'store')
if paction == 'store_false':
flags['default'] = True
flags['type'] = bool
elif paction == 'store_true':
flags['default'] = False
flags['type'] = bool
prequired = False if 'default' in flags else flags.get('required', True)
flags["action"] = paction
flags["required"] = prequired
self.flags = flags |
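A standalone sketch of the same normalization rules (hypothetical helper mirroring the method body above), showing what a bare store_true flag turns into.
def normalize(flags):
    """Standalone copy of the normalization rules above (does not mutate the input)."""
    flags = dict(flags)
    flags['type'] = flags.get('type', None)
    paction = flags.get('action', 'store')
    if paction == 'store_false':
        flags['default'] = True
        flags['type'] = bool
    elif paction == 'store_true':
        flags['default'] = False
        flags['type'] = bool
    prequired = False if 'default' in flags else flags.get('required', True)
    flags['action'] = paction
    flags['required'] = prequired
    return flags

print(normalize({'action': 'store_true'}))
# -> action='store_true', type=bool, default=False, required=False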
def redirect_stderr(self, enabled=True, log_level=logging.ERROR):
"""
Redirect sys.stderr to file-like object.
"""
if enabled:
if self.__stderr_wrapper:
self.__stderr_wrapper.update_log_level(log_level=log_level)
else:
self.__stderr_wrapper = StdErrWrapper(logger=self, log_level=log_level)
self.__stderr_stream = self.__stderr_wrapper
else:
self.__stderr_stream = _original_stderr
# Assign the new stream to sys.stderr
        sys.stderr = self.__stderr_stream | Redirect sys.stderr to file-like object. | Below is the instruction that describes the task:
### Input:
Redirect sys.stderr to file-like object.
### Response:
def redirect_stderr(self, enabled=True, log_level=logging.ERROR):
"""
Redirect sys.stderr to file-like object.
"""
if enabled:
if self.__stderr_wrapper:
self.__stderr_wrapper.update_log_level(log_level=log_level)
else:
self.__stderr_wrapper = StdErrWrapper(logger=self, log_level=log_level)
self.__stderr_stream = self.__stderr_wrapper
else:
self.__stderr_stream = _original_stderr
# Assign the new stream to sys.stderr
sys.stderr = self.__stderr_stream |
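StdErrWrapper is not shown in this excerpt; the sketch below is a hypothetical minimal file-like object that forwards writes to a logger, which is one way such a redirect can be built (remember to restore sys.stderr afterwards).
import logging
import sys

logging.basicConfig(level=logging.ERROR)  # handler keeps the real stderr
log = logging.getLogger("stderr")

class LoggingWriter:
    """Hypothetical minimal file-like wrapper that logs whatever is written."""
    def __init__(self, logger, log_level=logging.ERROR):
        self.logger = logger
        self.log_level = log_level

    def write(self, message):
        message = message.rstrip()
        if message:  # skip the bare newlines print() emits
            self.logger.log(self.log_level, message)

    def flush(self):
        pass  # nothing buffered, but the file-like protocol expects it

_original = sys.stderr
sys.stderr = LoggingWriter(log)
print("something went wrong", file=sys.stderr)  # routed through logging
sys.stderr = _original  # restore the real stream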
def get_hubs(self):
"""Get a list of hubs names.
Returns
-------
hubs : list
List of hub names
"""
# Use helm to get a list of hubs.
output = helm(
'list',
'-q'
)
# Check if an error occurred.
if output.returncode != 0:
print("Something went wrong!")
print(output.stderr)
else:
hubs = output.stdout.split()
return hubs | Get a list of hubs names.
Returns
-------
hubs : list
            List of hub names | Below is the instruction that describes the task:
### Input:
Get a list of hubs names.
Returns
-------
hubs : list
List of hub names
### Response:
def get_hubs(self):
"""Get a list of hubs names.
Returns
-------
hubs : list
List of hub names
"""
# Use helm to get a list of hubs.
output = helm(
'list',
'-q'
)
# Check if an error occurred.
if output.returncode != 0:
print("Something went wrong!")
print(output.stderr)
else:
hubs = output.stdout.split()
return hubs |
def parse_nodes_coords(osm_response):
"""
Parse node coordinates from OSM response. Some nodes are
standalone points of interest, others are vertices in
polygonal (areal) POIs.
Parameters
----------
osm_response : string
OSM response JSON string
Returns
-------
coords : dict
dict of node IDs and their lat, lon coordinates
"""
coords = {}
for result in osm_response['elements']:
if 'type' in result and result['type'] == 'node':
coords[result['id']] = {'lat': result['lat'],
'lon': result['lon']}
return coords | Parse node coordinates from OSM response. Some nodes are
standalone points of interest, others are vertices in
polygonal (areal) POIs.
Parameters
----------
osm_response : string
OSM response JSON string
Returns
-------
coords : dict
        dict of node IDs and their lat, lon coordinates | Below is the instruction that describes the task:
### Input:
Parse node coordinates from OSM response. Some nodes are
standalone points of interest, others are vertices in
polygonal (areal) POIs.
Parameters
----------
osm_response : string
OSM response JSON string
Returns
-------
coords : dict
dict of node IDs and their lat, lon coordinates
### Response:
def parse_nodes_coords(osm_response):
"""
Parse node coordinates from OSM response. Some nodes are
standalone points of interest, others are vertices in
polygonal (areal) POIs.
Parameters
----------
osm_response : string
OSM response JSON string
Returns
-------
coords : dict
dict of node IDs and their lat, lon coordinates
"""
coords = {}
for result in osm_response['elements']:
if 'type' in result and result['type'] == 'node':
coords[result['id']] = {'lat': result['lat'],
'lon': result['lon']}
return coords |
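A quick example with a hand-made, Overpass-style response (the helper below repeats the parsing logic above so the snippet runs on its own).
def parse_nodes_coords(osm_response):
    """Same logic as above: collect lat/lon for 'node' elements only."""
    coords = {}
    for result in osm_response['elements']:
        if result.get('type') == 'node':
            coords[result['id']] = {'lat': result['lat'], 'lon': result['lon']}
    return coords

sample = {
    'elements': [
        {'type': 'node', 'id': 1, 'lat': 52.52, 'lon': 13.405},
        {'type': 'way', 'id': 2, 'nodes': [1]},  # ways carry no lat/lon here
    ]
}
print(parse_nodes_coords(sample))  # {1: {'lat': 52.52, 'lon': 13.405}}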
def parent(self):
"""Return a Key constructed from all but the last (kind, id) pairs.
If there is only one (kind, id) pair, return None.
"""
pairs = self.__pairs
if len(pairs) <= 1:
return None
return Key(pairs=pairs[:-1], app=self.__app, namespace=self.__namespace) | Return a Key constructed from all but the last (kind, id) pairs.
    If there is only one (kind, id) pair, return None. | Below is the instruction that describes the task:
### Input:
Return a Key constructed from all but the last (kind, id) pairs.
If there is only one (kind, id) pair, return None.
### Response:
def parent(self):
"""Return a Key constructed from all but the last (kind, id) pairs.
If there is only one (kind, id) pair, return None.
"""
pairs = self.__pairs
if len(pairs) <= 1:
return None
return Key(pairs=pairs[:-1], app=self.__app, namespace=self.__namespace) |
def register_watchers(self, type_callback_dict, register_timeout=None):
"""
Register multiple callback/event type pairs, expressed as a dict.
"""
for event_type, callback in type_callback_dict.items():
self._push_watchers[event_type].add(callback)
self.wait_for_response(
RegisterMessage(event_list=type_callback_dict.keys()),
            timeout=register_timeout) | Register multiple callback/event type pairs, expressed as a dict. | Below is the instruction that describes the task:
### Input:
Register multiple callback/event type pairs, expressed as a dict.
### Response:
def register_watchers(self, type_callback_dict, register_timeout=None):
"""
Register multiple callback/event type pairs, expressed as a dict.
"""
for event_type, callback in type_callback_dict.items():
self._push_watchers[event_type].add(callback)
self.wait_for_response(
RegisterMessage(event_list=type_callback_dict.keys()),
timeout=register_timeout) |
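A hedged sketch of the call shape: `connection` is an already-established instance of the class above, and the event type strings are placeholders rather than values taken from this record.
def on_topology_change(event):
    print('topology changed:', event)

def on_status_change(event):
    print('status changed:', event)

# `connection` and the event-type names below are assumed for illustration only.
connection.register_watchers(
    {'TOPOLOGY_CHANGE': on_topology_change, 'STATUS_CHANGE': on_status_change},
    register_timeout=10,
)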
def index(self, value, floating=False):
"""Return the index of a value in the domain.
Parameters
----------
value : ``self.set`` element
Point whose index to find.
floating : bool, optional
If True, then the index should also give the position inside the
voxel. This is given by returning the integer valued index of the
voxel plus the distance from the left cell boundary as a fraction
of the full cell size.
Returns
-------
index : int, float, tuple of int or tuple of float
Index of the value, as counted from the left.
If ``self.ndim > 1`` the result is a tuple, else a scalar.
If ``floating=True`` the scalar is a float, else an int.
Examples
--------
Get the indices of start and end:
>>> p = odl.uniform_partition(0, 2, 5)
>>> p.index(0)
0
>>> p.index(2)
4
For points inside voxels, the index of the containing cell is returned:
>>> p.index(0.2)
0
By using the ``floating`` argument, partial positions inside the voxels
can instead be determined:
>>> p.index(0.2, floating=True)
0.5
These indices work with indexing, extracting the voxel in which the
point lies:
>>> p[p.index(0.1)]
uniform_partition(0.0, 0.4, 1)
The same principle also works in higher dimensions:
>>> p = uniform_partition([0, -1], [1, 2], (4, 1))
>>> p.index([0.5, 2])
(2, 0)
>>> p[p.index([0.5, 2])]
uniform_partition([ 0.5, -1. ], [ 0.75, 2. ], (1, 1))
"""
value = np.atleast_1d(self.set.element(value))
result = []
for val, cell_bdry_vec in zip(value, self.cell_boundary_vecs):
ind = np.searchsorted(cell_bdry_vec, val)
if floating:
if cell_bdry_vec[ind] == val:
# Value is on top of edge
result.append(float(ind))
else:
# interpolate between
csize = float(cell_bdry_vec[ind] - cell_bdry_vec[ind - 1])
result.append(ind - (cell_bdry_vec[ind] - val) / csize)
else:
if cell_bdry_vec[ind] == val and ind != len(cell_bdry_vec) - 1:
# Value is on top of edge, but not last edge
result.append(ind)
else:
result.append(ind - 1)
if self.ndim == 1:
result = result[0]
else:
result = tuple(result)
return result | Return the index of a value in the domain.
Parameters
----------
value : ``self.set`` element
Point whose index to find.
floating : bool, optional
If True, then the index should also give the position inside the
voxel. This is given by returning the integer valued index of the
voxel plus the distance from the left cell boundary as a fraction
of the full cell size.
Returns
-------
index : int, float, tuple of int or tuple of float
Index of the value, as counted from the left.
If ``self.ndim > 1`` the result is a tuple, else a scalar.
If ``floating=True`` the scalar is a float, else an int.
Examples
--------
Get the indices of start and end:
>>> p = odl.uniform_partition(0, 2, 5)
>>> p.index(0)
0
>>> p.index(2)
4
For points inside voxels, the index of the containing cell is returned:
>>> p.index(0.2)
0
By using the ``floating`` argument, partial positions inside the voxels
can instead be determined:
>>> p.index(0.2, floating=True)
0.5
These indices work with indexing, extracting the voxel in which the
point lies:
>>> p[p.index(0.1)]
uniform_partition(0.0, 0.4, 1)
The same principle also works in higher dimensions:
>>> p = uniform_partition([0, -1], [1, 2], (4, 1))
>>> p.index([0.5, 2])
(2, 0)
>>> p[p.index([0.5, 2])]
uniform_partition([ 0.5, -1. ], [ 0.75, 2. ], (1, 1)) | Below is the instruction that describes the task:
### Input:
Return the index of a value in the domain.
Parameters
----------
value : ``self.set`` element
Point whose index to find.
floating : bool, optional
If True, then the index should also give the position inside the
voxel. This is given by returning the integer valued index of the
voxel plus the distance from the left cell boundary as a fraction
of the full cell size.
Returns
-------
index : int, float, tuple of int or tuple of float
Index of the value, as counted from the left.
If ``self.ndim > 1`` the result is a tuple, else a scalar.
If ``floating=True`` the scalar is a float, else an int.
Examples
--------
Get the indices of start and end:
>>> p = odl.uniform_partition(0, 2, 5)
>>> p.index(0)
0
>>> p.index(2)
4
For points inside voxels, the index of the containing cell is returned:
>>> p.index(0.2)
0
By using the ``floating`` argument, partial positions inside the voxels
can instead be determined:
>>> p.index(0.2, floating=True)
0.5
These indices work with indexing, extracting the voxel in which the
point lies:
>>> p[p.index(0.1)]
uniform_partition(0.0, 0.4, 1)
The same principle also works in higher dimensions:
>>> p = uniform_partition([0, -1], [1, 2], (4, 1))
>>> p.index([0.5, 2])
(2, 0)
>>> p[p.index([0.5, 2])]
uniform_partition([ 0.5, -1. ], [ 0.75, 2. ], (1, 1))
### Response:
def index(self, value, floating=False):
"""Return the index of a value in the domain.
Parameters
----------
value : ``self.set`` element
Point whose index to find.
floating : bool, optional
If True, then the index should also give the position inside the
voxel. This is given by returning the integer valued index of the
voxel plus the distance from the left cell boundary as a fraction
of the full cell size.
Returns
-------
index : int, float, tuple of int or tuple of float
Index of the value, as counted from the left.
If ``self.ndim > 1`` the result is a tuple, else a scalar.
If ``floating=True`` the scalar is a float, else an int.
Examples
--------
Get the indices of start and end:
>>> p = odl.uniform_partition(0, 2, 5)
>>> p.index(0)
0
>>> p.index(2)
4
For points inside voxels, the index of the containing cell is returned:
>>> p.index(0.2)
0
By using the ``floating`` argument, partial positions inside the voxels
can instead be determined:
>>> p.index(0.2, floating=True)
0.5
These indices work with indexing, extracting the voxel in which the
point lies:
>>> p[p.index(0.1)]
uniform_partition(0.0, 0.4, 1)
The same principle also works in higher dimensions:
>>> p = uniform_partition([0, -1], [1, 2], (4, 1))
>>> p.index([0.5, 2])
(2, 0)
>>> p[p.index([0.5, 2])]
uniform_partition([ 0.5, -1. ], [ 0.75, 2. ], (1, 1))
"""
value = np.atleast_1d(self.set.element(value))
result = []
for val, cell_bdry_vec in zip(value, self.cell_boundary_vecs):
ind = np.searchsorted(cell_bdry_vec, val)
if floating:
if cell_bdry_vec[ind] == val:
# Value is on top of edge
result.append(float(ind))
else:
# interpolate between
csize = float(cell_bdry_vec[ind] - cell_bdry_vec[ind - 1])
result.append(ind - (cell_bdry_vec[ind] - val) / csize)
else:
if cell_bdry_vec[ind] == val and ind != len(cell_bdry_vec) - 1:
# Value is on top of edge, but not last edge
result.append(ind)
else:
result.append(ind - 1)
if self.ndim == 1:
result = result[0]
else:
result = tuple(result)
return result |
def parse(self, lines):
'''
Parse signature file lines.
@lines - A list of lines from a signature file.
Returns None.
'''
signature = None
for line in lines:
# Split at the first comment delimiter (if any) and strip the
# result
line = line.split('#')[0].strip()
# Ignore blank lines and lines that are nothing but comments.
# We also don't support the '!mime' style line entries.
if line and line[0] != '!':
# Parse this signature line
sigline = SignatureLine(line)
# Level 0 means the first line of a signature entry
if sigline.level == 0:
# If there is an existing signature, append it to the signature list,
# unless the text in its title field has been filtered by user-defined
# filter rules.
if signature and not self._filtered(signature.title):
self.signatures.append(signature)
# Create a new signature object; use the size of self.signatures to
# assign each signature a unique ID.
signature = Signature(len(self.signatures), sigline)
# Else, just append this line to the existing signature
elif signature:
# signature.append(sigline)
signature.lines.append(sigline)
# If this is not the first line of a signature entry and there is no other
# existing signature entry, something is very wrong with the
# signature file.
else:
raise ParserException("Invalid signature line: '%s'" % line)
# Add the final signature to the signature list
if signature:
if not self._filtered(signature.lines[0].format):
self.signatures.append(signature)
# Sort signatures by confidence (aka, length of their magic bytes),
# largest first
self.signatures.sort(key=lambda x: x.confidence, reverse=True) | Parse signature file lines.
@lines - A list of lines from a signature file.
Returns None. | Below is the instruction that describes the task:
### Input:
Parse signature file lines.
@lines - A list of lines from a signature file.
Returns None.
### Response:
def parse(self, lines):
'''
Parse signature file lines.
@lines - A list of lines from a signature file.
Returns None.
'''
signature = None
for line in lines:
# Split at the first comment delimiter (if any) and strip the
# result
line = line.split('#')[0].strip()
# Ignore blank lines and lines that are nothing but comments.
# We also don't support the '!mime' style line entries.
if line and line[0] != '!':
# Parse this signature line
sigline = SignatureLine(line)
# Level 0 means the first line of a signature entry
if sigline.level == 0:
# If there is an existing signature, append it to the signature list,
# unless the text in its title field has been filtered by user-defined
# filter rules.
if signature and not self._filtered(signature.title):
self.signatures.append(signature)
# Create a new signature object; use the size of self.signatures to
# assign each signature a unique ID.
signature = Signature(len(self.signatures), sigline)
# Else, just append this line to the existing signature
elif signature:
# signature.append(sigline)
signature.lines.append(sigline)
# If this is not the first line of a signature entry and there is no other
# existing signature entry, something is very wrong with the
# signature file.
else:
raise ParserException("Invalid signature line: '%s'" % line)
# Add the final signature to the signature list
if signature:
if not self._filtered(signature.lines[0].format):
self.signatures.append(signature)
# Sort signatures by confidence (aka, length of their magic bytes),
# largest first
self.signatures.sort(key=lambda x: x.confidence, reverse=True) |
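A minimal usage sketch: `parser` is assumed to be an already-constructed instance of the class above, and the file name is hypothetical; the point is only that parse() takes raw lines and leaves the sorted result in parser.signatures.
with open('magic.signatures') as handle:   # file name is illustrative only
    parser.parse(handle.readlines())
print('%d signatures loaded, sorted by confidence' % len(parser.signatures))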
def expose_finish(self, *args):
"""Finish drawing process
"""
# Obtain a reference to the OpenGL drawable
# and rendering context.
gldrawable = self.get_gl_drawable()
# glcontext = self.get_gl_context()
if not gldrawable:
return
# Put the buffer on the screen!
if gldrawable.is_double_buffered():
gldrawable.swap_buffers()
else:
glFlush()
# OpenGL end
gldrawable.gl_end() | Finish drawing process | Below is the the instruction that describes the task:
### Input:
Finish drawing process
### Response:
def expose_finish(self, *args):
"""Finish drawing process
"""
# Obtain a reference to the OpenGL drawable
# and rendering context.
gldrawable = self.get_gl_drawable()
# glcontext = self.get_gl_context()
if not gldrawable:
return
# Put the buffer on the screen!
if gldrawable.is_double_buffered():
gldrawable.swap_buffers()
else:
glFlush()
# OpenGL end
gldrawable.gl_end() |
def torus(script, major_radius=3.0, minor_radius=1.0, inner_diameter=None,
outer_diameter=None, major_segments=48, minor_segments=12,
color=None):
"""Create a torus mesh
Args:
major_radius (float, (optional)): radius from the origin to the
center of the cross sections
minor_radius (float, (optional)): radius of the torus cross
section
inner_diameter (float, (optional)): inner diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.,
outer_diameter (float, (optional)): outer diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.
major_segments (int (optional)): number of segments for the main
ring of the torus
minor_segments (int (optional)): number of segments for the minor
ring of the torus
color (str (optional)): color name to apply vertex colors to the
newly created mesh
Returns:
None
"""
if inner_diameter is not None and outer_diameter is not None:
major_radius = (inner_diameter + outer_diameter) / 4
minor_radius = major_radius - inner_diameter / 2
# Ref: inner_diameter = 2 * (major_radius - minor_radius)
# Ref: outer_diameter = 2 * (major_radius + minor_radius)
filter_xml = ''.join([
' <filter name="Torus">\n',
' <Param name="hRadius" ',
'value="%s" ' % major_radius,
'description="Horizontal Radius" ',
'type="RichFloat" ',
'/>\n',
' <Param name="vRadius" ',
'value="%s" ' % minor_radius,
'description="Vertical Radius" ',
'type="RichFloat" ',
'/>\n',
' <Param name="hSubdiv" ',
'value="%d" ' % major_segments,
'description="Horizontal Subdivision" ',
'type="RichInt" ',
'/>\n',
' <Param name="vSubdiv" ',
'value="%d" ' % minor_segments,
'description="Vertical Subdivision" ',
'type="RichInt" ',
'/>\n',
' </filter>\n'])
util.write_filter(script, filter_xml)
if isinstance(script, FilterScript):
script.add_layer('Torus', change_layer=True)
if color is not None:
vert_color.function(script, color=color)
return None | Create a torus mesh
Args:
major_radius (float, (optional)): radius from the origin to the
center of the cross sections
minor_radius (float, (optional)): radius of the torus cross
section
inner_diameter (float, (optional)): inner diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.,
outer_diameter (float, (optional)): outer diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.
major_segments (int (optional)): number of segments for the main
ring of the torus
minor_segments (int (optional)): number of segments for the minor
ring of the torus
color (str (optional)): color name to apply vertex colors to the
newly created mesh
Returns:
None | Below is the instruction that describes the task:
### Input:
Create a torus mesh
Args:
major_radius (float, (optional)): radius from the origin to the
center of the cross sections
minor_radius (float, (optional)): radius of the torus cross
section
inner_diameter (float, (optional)): inner diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.,
outer_diameter (float, (optional)): outer diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.
major_segments (int (optional)): number of segments for the main
ring of the torus
minor_segments (int (optional)): number of segments for the minor
ring of the torus
color (str (optional)): color name to apply vertex colors to the
newly created mesh
Returns:
None
### Response:
def torus(script, major_radius=3.0, minor_radius=1.0, inner_diameter=None,
outer_diameter=None, major_segments=48, minor_segments=12,
color=None):
"""Create a torus mesh
Args:
major_radius (float, (optional)): radius from the origin to the
center of the cross sections
minor_radius (float, (optional)): radius of the torus cross
section
inner_diameter (float, (optional)): inner diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.,
outer_diameter (float, (optional)): outer diameter of torus. If
both inner_diameter and outer_diameter are provided then
these will override major_radius and minor_radius.
major_segments (int (optional)): number of segments for the main
ring of the torus
minor_segments (int (optional)): number of segments for the minor
ring of the torus
color (str (optional)): color name to apply vertex colors to the
newly created mesh
Returns:
None
"""
if inner_diameter is not None and outer_diameter is not None:
major_radius = (inner_diameter + outer_diameter) / 4
minor_radius = major_radius - inner_diameter / 2
# Ref: inner_diameter = 2 * (major_radius - minor_radius)
# Ref: outer_diameter = 2 * (major_radius + minor_radius)
filter_xml = ''.join([
' <filter name="Torus">\n',
' <Param name="hRadius" ',
'value="%s" ' % major_radius,
'description="Horizontal Radius" ',
'type="RichFloat" ',
'/>\n',
' <Param name="vRadius" ',
'value="%s" ' % minor_radius,
'description="Vertical Radius" ',
'type="RichFloat" ',
'/>\n',
' <Param name="hSubdiv" ',
'value="%d" ' % major_segments,
'description="Horizontal Subdivision" ',
'type="RichInt" ',
'/>\n',
' <Param name="vSubdiv" ',
'value="%d" ' % minor_segments,
'description="Vertical Subdivision" ',
'type="RichInt" ',
'/>\n',
' </filter>\n'])
util.write_filter(script, filter_xml)
if isinstance(script, FilterScript):
script.add_layer('Torus', change_layer=True)
if color is not None:
vert_color.function(script, color=color)
return None |
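The only arithmetic in the function is the diameter-to-radius conversion; this standalone check (pure Python, independent of MeshLab) confirms the two reference identities quoted in the comments.
inner_diameter, outer_diameter = 4.0, 8.0
major_radius = (inner_diameter + outer_diameter) / 4    # 3.0
minor_radius = major_radius - inner_diameter / 2        # 1.0
assert 2 * (major_radius - minor_radius) == inner_diameter
assert 2 * (major_radius + minor_radius) == outer_diameter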
def safejoin(base, *elements):
"""Safely joins paths together.
The result will always be a subdirectory under `base`, otherwise ValueError
is raised.
Args:
base (str): base path
elements (list of strings): path elements to join to base
Returns:
elements joined to base
"""
# TODO: do we really want to be absolute here?
base = os.path.abspath(base)
path = os.path.join(base, *elements)
path = os.path.normpath(path)
if not path_is_inside(path, base):
raise ValueError('target path is outside of the base path')
return path | Safely joins paths together.
The result will always be a subdirectory under `base`, otherwise ValueError
is raised.
Args:
base (str): base path
elements (list of strings): path elements to join to base
Returns:
elements joined to base | Below is the instruction that describes the task:
### Input:
Safely joins paths together.
The result will always be a subdirectory under `base`, otherwise ValueError
is raised.
Args:
base (str): base path
elements (list of strings): path elements to join to base
Returns:
elements joined to base
### Response:
def safejoin(base, *elements):
"""Safely joins paths together.
The result will always be a subdirectory under `base`, otherwise ValueError
is raised.
Args:
base (str): base path
elements (list of strings): path elements to join to base
Returns:
elements joined to base
"""
# TODO: do we really want to be absolute here?
base = os.path.abspath(base)
path = os.path.join(base, *elements)
path = os.path.normpath(path)
if not path_is_inside(path, base):
raise ValueError('target path is outside of the base path')
return path |
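A behavioural sketch on a POSIX path layout (the paths are illustrative; path_is_inside is the helper the function already relies on):
print(safejoin('/srv/data', 'user', 'report.txt'))   # /srv/data/user/report.txt
try:
    safejoin('/srv/data', '..', 'etc', 'passwd')
except ValueError:
    print('traversal outside the base path is rejected')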
def ellipsis(text, length, symbol="..."):
"""Present a block of text of given length.
If the length of available text exceeds the requested length, truncate and
intelligently append an ellipsis.
"""
if len(text) > length:
pos = text.rfind(" ", 0, length)
if pos < 0:
return text[:length].rstrip(".") + symbol
else:
return text[:pos].rstrip(".") + symbol
else:
return text | Present a block of text of given length.
If the length of available text exceeds the requested length, truncate and
intelligently append an ellipsis. | Below is the instruction that describes the task:
### Input:
Present a block of text of given length.
If the length of available text exceeds the requested length, truncate and
intelligently append an ellipsis.
### Response:
def ellipsis(text, length, symbol="..."):
"""Present a block of text of given length.
If the length of available text exceeds the requested length, truncate and
intelligently append an ellipsis.
"""
if len(text) > length:
pos = text.rfind(" ", 0, length)
if pos < 0:
return text[:length].rstrip(".") + symbol
else:
return text[:pos].rstrip(".") + symbol
else:
return text |
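A quick illustration of the word-boundary truncation (the sample sentence is arbitrary):
text = "The quick brown fox jumps over the lazy dog."
print(ellipsis(text, 20))     # 'The quick brown fox...'
print(ellipsis("short", 20))  # 'short' (returned unchanged)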
def install(feature, recurse=False, restart=False, source=None, exclude=None):
r'''
Install a feature
.. note::
Some features require reboot after un/installation, if so until the
server is restarted other features can not be installed!
.. note::
Some features take a long time to complete un/installation, set -t with
a long timeout
Args:
feature (str, list):
The name of the feature(s) to install. This can be a single feature,
a string of features in a comma delimited list (no spaces), or a
list of features.
.. versionadded:: 2018.3.0
Added the ability to pass a list of features to be installed.
recurse (Options[bool]):
Install all sub-features. Default is False
restart (Optional[bool]):
Restarts the computer when installation is complete, if required by
the role/feature installed. Will also trigger a reboot if an item
in ``exclude`` requires a reboot to be properly removed. Default is
False
source (Optional[str]):
Path to the source files if missing from the target system. None
means that the system will use windows update services to find the
required files. Default is None
exclude (Optional[str]):
The name of the feature to exclude when installing the named
feature. This can be a single feature, a string of features in a
comma-delimited list (no spaces), or a list of features.
.. warning::
As there is no exclude option for the ``Add-WindowsFeature``
or ``Install-WindowsFeature`` PowerShell commands the features
named in ``exclude`` will be installed with other sub-features
and will then be removed. **If the feature named in ``exclude``
is not a sub-feature of one of the installed items it will still
be removed.**
Returns:
dict: A dictionary containing the results of the install
CLI Example:
.. code-block:: bash
# Install the Telnet Client passing a single string
salt '*' win_servermanager.install Telnet-Client
# Install the TFTP Client and the SNMP Service passing a comma-delimited
# string. Install all sub-features
salt '*' win_servermanager.install TFTP-Client,SNMP-Service recurse=True
# Install the TFTP Client from d:\side-by-side
salt '*' win_servermanager.install TFTP-Client source=d:\\side-by-side
# Install the XPS Viewer, SNMP Service, and Remote Access passing a
# list. Install all sub-features, but exclude the Web Server
salt '*' win_servermanager.install "['XPS-Viewer', 'SNMP-Service', 'RemoteAccess']" True recurse=True exclude="Web-Server"
'''
# If it is a list of features, make it a comma delimited string
if isinstance(feature, list):
feature = ','.join(feature)
# Use Install-WindowsFeature on Windows 2012 (osversion 6.2) and later
# minions. Default to Add-WindowsFeature for earlier releases of Windows.
# The newer command makes management tools optional so add them for parity
# with old behavior.
command = 'Add-WindowsFeature'
management_tools = ''
if salt.utils.versions.version_cmp(__grains__['osversion'], '6.2') >= 0:
command = 'Install-WindowsFeature'
management_tools = '-IncludeManagementTools'
cmd = '{0} -Name {1} {2} {3} {4} ' \
'-WarningAction SilentlyContinue'\
.format(command, _cmd_quote(feature), management_tools,
'-IncludeAllSubFeature' if recurse else '',
'' if source is None else '-Source {0}'.format(source))
out = _pshell_json(cmd)
# Uninstall items in the exclude list
# The Install-WindowsFeature command doesn't have the concept of an exclude
# list. So you install first, then remove
if exclude is not None:
removed = remove(exclude)
# Results are stored in a list of dictionaries in `FeatureResult`
if out['FeatureResult']:
ret = {'ExitCode': out['ExitCode'],
'RestartNeeded': False,
'Restarted': False,
'Features': {},
'Success': out['Success']}
# FeatureResult is a list of dicts, so each item is a dict
for item in out['FeatureResult']:
ret['Features'][item['Name']] = {
'DisplayName': item['DisplayName'],
'Message': item['Message'],
'RestartNeeded': item['RestartNeeded'],
'SkipReason': item['SkipReason'],
'Success': item['Success']
}
if item['RestartNeeded']:
ret['RestartNeeded'] = True
# Only items that installed are in the list of dictionaries
# Add 'Already installed' for features that aren't in the list of dicts
for item in feature.split(','):
if item not in ret['Features']:
ret['Features'][item] = {'Message': 'Already installed'}
# Some items in the exclude list were removed after installation
# Show what was done, update the dict
if exclude is not None:
# Features is a dict, so it only iterates over the keys
for item in removed['Features']:
if item in ret['Features']:
ret['Features'][item] = {
'Message': 'Removed after installation (exclude)',
'DisplayName': removed['Features'][item]['DisplayName'],
'RestartNeeded': removed['Features'][item]['RestartNeeded'],
'SkipReason': removed['Features'][item]['SkipReason'],
'Success': removed['Features'][item]['Success']
}
# Exclude items might need a restart
if removed['Features'][item]['RestartNeeded']:
ret['RestartNeeded'] = True
# Restart here if needed
if restart:
if ret['RestartNeeded']:
if __salt__['system.restart'](in_seconds=True):
ret['Restarted'] = True
return ret
else:
# If we get here then all features were already installed
ret = {'ExitCode': out['ExitCode'],
'Features': {},
'RestartNeeded': False,
'Restarted': False,
'Success': out['Success']}
for item in feature.split(','):
ret['Features'][item] = {'Message': 'Already installed'}
return ret | r'''
Install a feature
.. note::
Some features require reboot after un/installation, if so until the
server is restarted other features can not be installed!
.. note::
Some features take a long time to complete un/installation, set -t with
a long timeout
Args:
feature (str, list):
The name of the feature(s) to install. This can be a single feature,
a string of features in a comma delimited list (no spaces), or a
list of features.
.. versionadded:: 2018.3.0
Added the ability to pass a list of features to be installed.
recurse (Options[bool]):
Install all sub-features. Default is False
restart (Optional[bool]):
Restarts the computer when installation is complete, if required by
the role/feature installed. Will also trigger a reboot if an item
in ``exclude`` requires a reboot to be properly removed. Default is
False
source (Optional[str]):
Path to the source files if missing from the target system. None
means that the system will use windows update services to find the
required files. Default is None
exclude (Optional[str]):
The name of the feature to exclude when installing the named
feature. This can be a single feature, a string of features in a
comma-delimited list (no spaces), or a list of features.
.. warning::
As there is no exclude option for the ``Add-WindowsFeature``
or ``Install-WindowsFeature`` PowerShell commands the features
named in ``exclude`` will be installed with other sub-features
and will then be removed. **If the feature named in ``exclude``
is not a sub-feature of one of the installed items it will still
be removed.**
Returns:
dict: A dictionary containing the results of the install
CLI Example:
.. code-block:: bash
# Install the Telnet Client passing a single string
salt '*' win_servermanager.install Telnet-Client
# Install the TFTP Client and the SNMP Service passing a comma-delimited
# string. Install all sub-features
salt '*' win_servermanager.install TFTP-Client,SNMP-Service recurse=True
# Install the TFTP Client from d:\side-by-side
salt '*' win_servermanager.install TFTP-Client source=d:\\side-by-side
# Install the XPS Viewer, SNMP Service, and Remote Access passing a
# list. Install all sub-features, but exclude the Web Server
salt '*' win_servermanager.install "['XPS-Viewer', 'SNMP-Service', 'RemoteAccess']" True recurse=True exclude="Web-Server" | Below is the instruction that describes the task:
### Input:
r'''
Install a feature
.. note::
Some features require reboot after un/installation, if so until the
server is restarted other features can not be installed!
.. note::
Some features take a long time to complete un/installation, set -t with
a long timeout
Args:
feature (str, list):
The name of the feature(s) to install. This can be a single feature,
a string of features in a comma delimited list (no spaces), or a
list of features.
.. versionadded:: 2018.3.0
Added the ability to pass a list of features to be installed.
recurse (Options[bool]):
Install all sub-features. Default is False
restart (Optional[bool]):
Restarts the computer when installation is complete, if required by
the role/feature installed. Will also trigger a reboot if an item
in ``exclude`` requires a reboot to be properly removed. Default is
False
source (Optional[str]):
Path to the source files if missing from the target system. None
means that the system will use windows update services to find the
required files. Default is None
exclude (Optional[str]):
The name of the feature to exclude when installing the named
feature. This can be a single feature, a string of features in a
comma-delimited list (no spaces), or a list of features.
.. warning::
As there is no exclude option for the ``Add-WindowsFeature``
or ``Install-WindowsFeature`` PowerShell commands the features
named in ``exclude`` will be installed with other sub-features
and will then be removed. **If the feature named in ``exclude``
is not a sub-feature of one of the installed items it will still
be removed.**
Returns:
dict: A dictionary containing the results of the install
CLI Example:
.. code-block:: bash
# Install the Telnet Client passing a single string
salt '*' win_servermanager.install Telnet-Client
# Install the TFTP Client and the SNMP Service passing a comma-delimited
# string. Install all sub-features
salt '*' win_servermanager.install TFTP-Client,SNMP-Service recurse=True
# Install the TFTP Client from d:\side-by-side
salt '*' win_servermanager.install TFTP-Client source=d:\\side-by-side
# Install the XPS Viewer, SNMP Service, and Remote Access passing a
# list. Install all sub-features, but exclude the Web Server
salt '*' win_servermanager.install "['XPS-Viewer', 'SNMP-Service', 'RemoteAccess']" True recurse=True exclude="Web-Server"
### Response:
def install(feature, recurse=False, restart=False, source=None, exclude=None):
r'''
Install a feature
.. note::
Some features require reboot after un/installation, if so until the
server is restarted other features can not be installed!
.. note::
Some features take a long time to complete un/installation, set -t with
a long timeout
Args:
feature (str, list):
The name of the feature(s) to install. This can be a single feature,
a string of features in a comma delimited list (no spaces), or a
list of features.
.. versionadded:: 2018.3.0
Added the ability to pass a list of features to be installed.
recurse (Options[bool]):
Install all sub-features. Default is False
restart (Optional[bool]):
Restarts the computer when installation is complete, if required by
the role/feature installed. Will also trigger a reboot if an item
in ``exclude`` requires a reboot to be properly removed. Default is
False
source (Optional[str]):
Path to the source files if missing from the target system. None
means that the system will use windows update services to find the
required files. Default is None
exclude (Optional[str]):
The name of the feature to exclude when installing the named
feature. This can be a single feature, a string of features in a
comma-delimited list (no spaces), or a list of features.
.. warning::
As there is no exclude option for the ``Add-WindowsFeature``
or ``Install-WindowsFeature`` PowerShell commands the features
named in ``exclude`` will be installed with other sub-features
and will then be removed. **If the feature named in ``exclude``
is not a sub-feature of one of the installed items it will still
be removed.**
Returns:
dict: A dictionary containing the results of the install
CLI Example:
.. code-block:: bash
# Install the Telnet Client passing a single string
salt '*' win_servermanager.install Telnet-Client
# Install the TFTP Client and the SNMP Service passing a comma-delimited
# string. Install all sub-features
salt '*' win_servermanager.install TFTP-Client,SNMP-Service recurse=True
# Install the TFTP Client from d:\side-by-side
salt '*' win_servermanager.install TFTP-Client source=d:\\side-by-side
# Install the XPS Viewer, SNMP Service, and Remote Access passing a
# list. Install all sub-features, but exclude the Web Server
salt '*' win_servermanager.install "['XPS-Viewer', 'SNMP-Service', 'RemoteAccess']" True recurse=True exclude="Web-Server"
'''
# If it is a list of features, make it a comma delimited string
if isinstance(feature, list):
feature = ','.join(feature)
# Use Install-WindowsFeature on Windows 2012 (osversion 6.2) and later
# minions. Default to Add-WindowsFeature for earlier releases of Windows.
# The newer command makes management tools optional so add them for parity
# with old behavior.
command = 'Add-WindowsFeature'
management_tools = ''
if salt.utils.versions.version_cmp(__grains__['osversion'], '6.2') >= 0:
command = 'Install-WindowsFeature'
management_tools = '-IncludeManagementTools'
cmd = '{0} -Name {1} {2} {3} {4} ' \
'-WarningAction SilentlyContinue'\
.format(command, _cmd_quote(feature), management_tools,
'-IncludeAllSubFeature' if recurse else '',
'' if source is None else '-Source {0}'.format(source))
out = _pshell_json(cmd)
# Uninstall items in the exclude list
# The Install-WindowsFeature command doesn't have the concept of an exclude
# list. So you install first, then remove
if exclude is not None:
removed = remove(exclude)
# Results are stored in a list of dictionaries in `FeatureResult`
if out['FeatureResult']:
ret = {'ExitCode': out['ExitCode'],
'RestartNeeded': False,
'Restarted': False,
'Features': {},
'Success': out['Success']}
# FeatureResult is a list of dicts, so each item is a dict
for item in out['FeatureResult']:
ret['Features'][item['Name']] = {
'DisplayName': item['DisplayName'],
'Message': item['Message'],
'RestartNeeded': item['RestartNeeded'],
'SkipReason': item['SkipReason'],
'Success': item['Success']
}
if item['RestartNeeded']:
ret['RestartNeeded'] = True
# Only items that installed are in the list of dictionaries
# Add 'Already installed' for features that aren't in the list of dicts
for item in feature.split(','):
if item not in ret['Features']:
ret['Features'][item] = {'Message': 'Already installed'}
# Some items in the exclude list were removed after installation
# Show what was done, update the dict
if exclude is not None:
# Features is a dict, so it only iterates over the keys
for item in removed['Features']:
if item in ret['Features']:
ret['Features'][item] = {
'Message': 'Removed after installation (exclude)',
'DisplayName': removed['Features'][item]['DisplayName'],
'RestartNeeded': removed['Features'][item]['RestartNeeded'],
'SkipReason': removed['Features'][item]['SkipReason'],
'Success': removed['Features'][item]['Success']
}
# Exclude items might need a restart
if removed['Features'][item]['RestartNeeded']:
ret['RestartNeeded'] = True
# Restart here if needed
if restart:
if ret['RestartNeeded']:
if __salt__['system.restart'](in_seconds=True):
ret['Restarted'] = True
return ret
else:
# If we get here then all features were already installed
ret = {'ExitCode': out['ExitCode'],
'Features': {},
'RestartNeeded': False,
'Restarted': False,
'Success': out['Success']}
for item in feature.split(','):
ret['Features'][item] = {'Message': 'Already installed'}
return ret |
def run_callbacks(obj, log=None):
"""Run callbacks."""
def run_callback(callback, args):
return callback(*args)
return walk_callbacks(obj, run_callback, log) | Run callbacks. | Below is the the instruction that describes the task:
### Input:
Run callbacks.
### Response:
def run_callbacks(obj, log=None):
"""Run callbacks."""
def run_callback(callback, args):
return callback(*args)
return walk_callbacks(obj, run_callback, log) |
def get_properties_by_type(self, type, recursive=True, parent_path=""):
"""
Returns a sorted list of fields that match the type.
:param type the type of the field "string","integer" or a list of types
:param recursive recurse to sub object
:returns a sorted list of fields that match the type
"""
if parent_path:
parent_path += "."
if isinstance(type, str):
if type == "*":
type = set(MAPPING_NAME_TYPE.keys()) - set(["nested", "multi_field", "multifield"])
else:
type = [type]
properties = []
for prop in list(self.properties.values()):
if prop.type in type:
properties.append((parent_path + prop.name, prop))
continue
elif prop.type == "multi_field" and prop.name in prop.fields and prop.fields[prop.name].type in type:
properties.append((parent_path + prop.name, prop))
continue
if not recursive:
continue
if prop.type in ["nested", "object"]:
properties.extend(
prop.get_properties_by_type(type, recursive=recursive, parent_path=parent_path + prop.name))
return sorted(properties) | Returns a sorted list of fields that match the type.
:param type the type of the field "string","integer" or a list of types
:param recursive recurse to sub object
:returns a sorted list of fields that match the type | Below is the instruction that describes the task:
### Input:
Returns a sorted list of fields that match the type.
:param type the type of the field "string","integer" or a list of types
:param recursive recurse to sub object
:returns a sorted list of fields that match the type
### Response:
def get_properties_by_type(self, type, recursive=True, parent_path=""):
"""
Returns a sorted list of fields that match the type.
:param type the type of the field "string","integer" or a list of types
:param recursive recurse to sub object
:returns a sorted list of fields that match the type
"""
if parent_path:
parent_path += "."
if isinstance(type, str):
if type == "*":
type = set(MAPPING_NAME_TYPE.keys()) - set(["nested", "multi_field", "multifield"])
else:
type = [type]
properties = []
for prop in list(self.properties.values()):
if prop.type in type:
properties.append((parent_path + prop.name, prop))
continue
elif prop.type == "multi_field" and prop.name in prop.fields and prop.fields[prop.name].type in type:
properties.append((parent_path + prop.name, prop))
continue
if not recursive:
continue
if prop.type in ["nested", "object"]:
properties.extend(
prop.get_properties_by_type(type, recursive=recursive, parent_path=parent_path + prop.name))
return sorted(properties) |
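A hedged usage sketch: `mapping` stands for an already-built instance of the class above, and the property path in the comment is invented for illustration.
for path, prop in mapping.get_properties_by_type(["string", "integer"]):
    print(path, prop.type)
# Nested/object sub-properties are walked too, yielding dotted paths such as 'address.city'.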
def header(self):
'''
This returns the first header in the data file
'''
if self._header is None:
self._header = self._read_half_frame_header(self.data)
return self._header | This returns the first header in the data file | Below is the the instruction that describes the task:
### Input:
This returns the first header in the data file
### Response:
def header(self):
'''
This returns the first header in the data file
'''
if self._header is None:
self._header = self._read_half_frame_header(self.data)
return self._header |
def transform_audio(self, y):
'''Compute the HCQT with unwrapped phase
Parameters
----------
y : np.ndarray
The audio buffer
Returns
-------
data : dict
data['mag'] : np.ndarray, shape=(n_frames, n_bins)
CQT magnitude
data['dphase'] : np.ndarray, shape=(n_frames, n_bins)
Unwrapped phase differential
'''
data = super(HCQTPhaseDiff, self).transform_audio(y)
data['dphase'] = self.phase_diff(data.pop('phase'))
return data | Compute the HCQT with unwrapped phase
Parameters
----------
y : np.ndarray
The audio buffer
Returns
-------
data : dict
data['mag'] : np.ndarray, shape=(n_frames, n_bins)
CQT magnitude
data['dphase'] : np.ndarray, shape=(n_frames, n_bins)
Unwrapped phase differential | Below is the instruction that describes the task:
### Input:
Compute the HCQT with unwrapped phase
Parameters
----------
y : np.ndarray
The audio buffer
Returns
-------
data : dict
data['mag'] : np.ndarray, shape=(n_frames, n_bins)
CQT magnitude
data['dphase'] : np.ndarray, shape=(n_frames, n_bins)
Unwrapped phase differential
### Response:
def transform_audio(self, y):
'''Compute the HCQT with unwrapped phase
Parameters
----------
y : np.ndarray
The audio buffer
Returns
-------
data : dict
data['mag'] : np.ndarray, shape=(n_frames, n_bins)
CQT magnitude
data['dphase'] : np.ndarray, shape=(n_frames, n_bins)
Unwrapped phase differential
'''
data = super(HCQTPhaseDiff, self).transform_audio(y)
data['dphase'] = self.phase_diff(data.pop('phase'))
return data |
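A hedged usage sketch of the feature extractor above: `extractor` is assumed to be an already-constructed HCQTPhaseDiff instance and `y` a mono audio buffer at the matching sample rate; neither is defined in this record.
data = extractor.transform_audio(y)             # `extractor` and `y` are assumed
print(sorted(data.keys()))                      # ['dphase', 'mag']
print(data['mag'].shape, data['dphase'].shape)  # magnitude and unwrapped-phase differential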
def environments(self):
"""
Access the environments
:returns: twilio.rest.serverless.v1.service.environment.EnvironmentList
:rtype: twilio.rest.serverless.v1.service.environment.EnvironmentList
"""
if self._environments is None:
self._environments = EnvironmentList(self._version, service_sid=self._solution['sid'], )
return self._environments | Access the environments
:returns: twilio.rest.serverless.v1.service.environment.EnvironmentList
:rtype: twilio.rest.serverless.v1.service.environment.EnvironmentList | Below is the instruction that describes the task:
### Input:
Access the environments
:returns: twilio.rest.serverless.v1.service.environment.EnvironmentList
:rtype: twilio.rest.serverless.v1.service.environment.EnvironmentList
### Response:
def environments(self):
"""
Access the environments
:returns: twilio.rest.serverless.v1.service.environment.EnvironmentList
:rtype: twilio.rest.serverless.v1.service.environment.EnvironmentList
"""
if self._environments is None:
self._environments = EnvironmentList(self._version, service_sid=self._solution['sid'], )
return self._environments |
def groupBy(groups_in, classifier, fun_desc='?', keep_uniques=False,
*args, **kwargs):
"""Subdivide groups of paths according to a function.
:param groups_in: Grouped sets of paths.
:type groups_in: :class:`~__builtins__.dict` of iterables
:param classifier: Function to group a list of paths by some attribute.
:type classifier: ``function(list, *args, **kwargs) -> str``
:param fun_desc: Human-readable term for what the classifier operates on.
(Used in log messages)
:type fun_desc: :class:`~__builtins__.str`
:param keep_uniques: If ``False``, discard groups with only one member.
:type keep_uniques: :class:`~__builtins__.bool`
:returns: A dict mapping classifier keys to groups of matches.
:rtype: :class:`~__builtins__.dict`
:attention: Grouping functions generally use a :class:`~__builtins__.set`
``groups`` as extra protection against accidentally counting a given
file twice. (Complimentary to use of :func:`os.path.realpath` in
:func:`~fastdupes.getPaths`)
.. todo:: Find some way to bring back the file-by-file status text
"""
groups, count, group_count = {}, 0, len(groups_in)
for pos, paths in enumerate(groups_in.values()):
out.write("Subdividing group %d of %d by %s... (%d files examined, %d "
"in current group)" % (
pos + 1, group_count, fun_desc, count, len(paths)
))
for key, group in classifier(paths, *args, **kwargs).items():
groups.setdefault(key, set()).update(group)
count += len(group)
if not keep_uniques:
# Return only the groups with more than one file.
groups = dict([(x, groups[x]) for x in groups if len(groups[x]) > 1])
out.write("Found %s sets of files with identical %s. (%d files examined)"
% (len(groups), fun_desc, count), newline=True)
return groups | Subdivide groups of paths according to a function.
:param groups_in: Grouped sets of paths.
:type groups_in: :class:`~__builtins__.dict` of iterables
:param classifier: Function to group a list of paths by some attribute.
:type classifier: ``function(list, *args, **kwargs) -> str``
:param fun_desc: Human-readable term for what the classifier operates on.
(Used in log messages)
:type fun_desc: :class:`~__builtins__.str`
:param keep_uniques: If ``False``, discard groups with only one member.
:type keep_uniques: :class:`~__builtins__.bool`
:returns: A dict mapping classifier keys to groups of matches.
:rtype: :class:`~__builtins__.dict`
:attention: Grouping functions generally use a :class:`~__builtins__.set`
``groups`` as extra protection against accidentally counting a given
file twice. (Complimentary to use of :func:`os.path.realpath` in
:func:`~fastdupes.getPaths`)
.. todo:: Find some way to bring back the file-by-file status text | Below is the instruction that describes the task:
### Input:
Subdivide groups of paths according to a function.
:param groups_in: Grouped sets of paths.
:type groups_in: :class:`~__builtins__.dict` of iterables
:param classifier: Function to group a list of paths by some attribute.
:type classifier: ``function(list, *args, **kwargs) -> str``
:param fun_desc: Human-readable term for what the classifier operates on.
(Used in log messages)
:type fun_desc: :class:`~__builtins__.str`
:param keep_uniques: If ``False``, discard groups with only one member.
:type keep_uniques: :class:`~__builtins__.bool`
:returns: A dict mapping classifier keys to groups of matches.
:rtype: :class:`~__builtins__.dict`
:attention: Grouping functions generally use a :class:`~__builtins__.set`
``groups`` as extra protection against accidentally counting a given
file twice. (Complimentary to use of :func:`os.path.realpath` in
:func:`~fastdupes.getPaths`)
.. todo:: Find some way to bring back the file-by-file status text
### Response:
def groupBy(groups_in, classifier, fun_desc='?', keep_uniques=False,
*args, **kwargs):
"""Subdivide groups of paths according to a function.
:param groups_in: Grouped sets of paths.
:type groups_in: :class:`~__builtins__.dict` of iterables
:param classifier: Function to group a list of paths by some attribute.
:type classifier: ``function(list, *args, **kwargs) -> str``
:param fun_desc: Human-readable term for what the classifier operates on.
(Used in log messages)
:type fun_desc: :class:`~__builtins__.str`
:param keep_uniques: If ``False``, discard groups with only one member.
:type keep_uniques: :class:`~__builtins__.bool`
:returns: A dict mapping classifier keys to groups of matches.
:rtype: :class:`~__builtins__.dict`
:attention: Grouping functions generally use a :class:`~__builtins__.set`
``groups`` as extra protection against accidentally counting a given
file twice. (Complimentary to use of :func:`os.path.realpath` in
:func:`~fastdupes.getPaths`)
.. todo:: Find some way to bring back the file-by-file status text
"""
groups, count, group_count = {}, 0, len(groups_in)
for pos, paths in enumerate(groups_in.values()):
out.write("Subdividing group %d of %d by %s... (%d files examined, %d "
"in current group)" % (
pos + 1, group_count, fun_desc, count, len(paths)
))
for key, group in classifier(paths, *args, **kwargs).items():
groups.setdefault(key, set()).update(group)
count += len(group)
if not keep_uniques:
# Return only the groups with more than one file.
groups = dict([(x, groups[x]) for x in groups if len(groups[x]) > 1])
out.write("Found %s sets of files with identical %s. (%d files examined)"
% (len(groups), fun_desc, count), newline=True)
return groups |
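A small self-contained classifier makes the contract clearer: the classifier returns a dict of key -> group, and groupBy merges those per input group, discarding singletons by default. This sketch assumes the module-level `out` status writer used by groupBy is available alongside it; the file names are made up.
import os

def by_extension(paths):
    # classifier: bucket each path under its file extension
    buckets = {}
    for p in paths:
        buckets.setdefault(os.path.splitext(p)[1], set()).add(p)
    return buckets

groups_in = {None: {'a.txt', 'b.txt', 'c.png'}}
print(groupBy(groups_in, by_extension, fun_desc='extension'))
# singletons dropped by default: {'.txt': {'a.txt', 'b.txt'}}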
def _find_highest_supported_command(self, *segment_classes, **kwargs):
"""Search the BPD for the highest supported version of a segment."""
return_parameter_segment = kwargs.get("return_parameter_segment", False)
parameter_segment_name = "{}I{}S".format(segment_classes[0].TYPE[0], segment_classes[0].TYPE[2:])
version_map = dict((clazz.VERSION, clazz) for clazz in segment_classes)
max_version = self.bpd.find_segment_highest_version(parameter_segment_name, version_map.keys())
if not max_version:
raise FinTSUnsupportedOperation('No supported {} version found. I support {}, bank supports {}.'.format(
parameter_segment_name,
tuple(version_map.keys()),
tuple(v.header.version for v in self.bpd.find_segments(parameter_segment_name))
))
if return_parameter_segment:
return max_version, version_map.get(max_version.header.version)
else:
return version_map.get(max_version.header.version) | Search the BPD for the highest supported version of a segment. | Below is the the instruction that describes the task:
### Input:
Search the BPD for the highest supported version of a segment. | Below is the instruction that describes the task:
### Response:
def _find_highest_supported_command(self, *segment_classes, **kwargs):
"""Search the BPD for the highest supported version of a segment."""
return_parameter_segment = kwargs.get("return_parameter_segment", False)
parameter_segment_name = "{}I{}S".format(segment_classes[0].TYPE[0], segment_classes[0].TYPE[2:])
version_map = dict((clazz.VERSION, clazz) for clazz in segment_classes)
max_version = self.bpd.find_segment_highest_version(parameter_segment_name, version_map.keys())
if not max_version:
raise FinTSUnsupportedOperation('No supported {} version found. I support {}, bank supports {}.'.format(
parameter_segment_name,
tuple(version_map.keys()),
tuple(v.header.version for v in self.bpd.find_segments(parameter_segment_name))
))
if return_parameter_segment:
return max_version, version_map.get(max_version.header.version)
else:
return version_map.get(max_version.header.version) |
def set_main_video_stream_type(self, streamtype, callback=None):
'''
Set the stream type of main stream
'''
params = {'streamType': streamtype}
return self.execute_command('setMainVideoStreamType',
params, callback=callback) | Set the stream type of main stream | Below is the the instruction that describes the task:
### Input:
Set the stream type of main stream | Below is the instruction that describes the task:
### Response:
def set_main_video_stream_type(self, streamtype, callback=None):
'''
Set the stream type of main stream
'''
params = {'streamType': streamtype}
return self.execute_command('setMainVideoStreamType',
params, callback=callback) |
def system_describe_projects(input_params={}, always_retry=True, **kwargs):
"""
Invokes the /system/describeProjects API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/System-Methods#API-method:-/system/describeProjects
"""
return DXHTTPRequest('/system/describeProjects', input_params, always_retry=always_retry, **kwargs) | Invokes the /system/describeProjects API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/System-Methods#API-method:-/system/describeProjects | Below is the the instruction that describes the task:
### Input:
Invokes the /system/describeProjects API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/System-Methods#API-method:-/system/describeProjects | Below is the instruction that describes the task:
### Response:
def system_describe_projects(input_params={}, always_retry=True, **kwargs):
"""
Invokes the /system/describeProjects API method.
For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/System-Methods#API-method:-/system/describeProjects
"""
return DXHTTPRequest('/system/describeProjects', input_params, always_retry=always_retry, **kwargs) |
def launch(self):
"""Launch and synchronously write metadata.
This is possible due to watchman's built-in async server startup - no double-forking required.
"""
cmd = self._construct_cmd((self._watchman_path, 'get-pid'),
state_file=self._state_file,
sock_file=self._sock_file,
pid_file=self._pid_file,
log_file=self._log_file,
log_level=str(self._log_level))
self._logger.debug('watchman cmd is: {}'.format(' '.join(cmd)))
self._maybe_init_metadata()
# Watchman is launched via its cli. By running the 'get-pid' command on the client we implicitly
# launch the Watchman daemon. This approach is somewhat error-prone - in some cases the client
# can successfully trigger the launch of the Watchman daemon, but fail to return successfully
# for the 'get-pid' result due to server <-> daemon socket issues - these can look like:
#
# 2016-04-01T17:31:23,820: [cli] unable to talk to your watchman
# on .../watchman.sock! (Permission denied)
#
# This results in a subprocess execution failure and leaves us with no pid information to write
# to the metadata directory - while in reality a Watchman daemon is actually running but now
# untracked. To safeguard against this, we retry the (idempotent) 'get-pid' command a few times
# to give the server-side socket setup a few chances to quiesce before potentially orphaning it.
get_output = functools.partial(self.get_subprocess_output, cmd)
output = retry_on_exception(get_output, 3, (ProcessManager.ExecutionError,), lambda n: n * .5)
# Parse the watchman PID from the cli output.
pid = self._parse_pid_from_output(output)
# Write the process metadata to disk.
self.write_pid(pid)
self.write_socket(self._sock_file) | Launch and synchronously write metadata.
This is possible due to watchman's built-in async server startup - no double-forking required. | Below is the the instruction that describes the task:
### Input:
Launch and synchronously write metadata.
This is possible due to watchman's built-in async server startup - no double-forking required. | Below is the instruction that describes the task:
### Response:
def launch(self):
"""Launch and synchronously write metadata.
This is possible due to watchman's built-in async server startup - no double-forking required.
"""
cmd = self._construct_cmd((self._watchman_path, 'get-pid'),
state_file=self._state_file,
sock_file=self._sock_file,
pid_file=self._pid_file,
log_file=self._log_file,
log_level=str(self._log_level))
self._logger.debug('watchman cmd is: {}'.format(' '.join(cmd)))
self._maybe_init_metadata()
# Watchman is launched via its cli. By running the 'get-pid' command on the client we implicitly
# launch the Watchman daemon. This approach is somewhat error-prone - in some cases the client
# can successfully trigger the launch of the Watchman daemon, but fail to return successfully
# for the 'get-pid' result due to server <-> daemon socket issues - these can look like:
#
# 2016-04-01T17:31:23,820: [cli] unable to talk to your watchman
# on .../watchman.sock! (Permission denied)
#
# This results in a subprocess execution failure and leaves us with no pid information to write
# to the metadata directory - while in reality a Watchman daemon is actually running but now
# untracked. To safeguard against this, we retry the (idempotent) 'get-pid' command a few times
# to give the server-side socket setup a few chances to quiesce before potentially orphaning it.
get_output = functools.partial(self.get_subprocess_output, cmd)
output = retry_on_exception(get_output, 3, (ProcessManager.ExecutionError,), lambda n: n * .5)
# Parse the watchman PID from the cli output.
pid = self._parse_pid_from_output(output)
# Write the process metadata to disk.
self.write_pid(pid)
self.write_socket(self._sock_file) |
def get_magicc6_to_magicc7_variable_mapping(inverse=False):
"""Get the mappings from MAGICC6 to MAGICC7 variables.
Note that this mapping is not one to one. For example, "HFC4310", "HFC43-10" and
"HFC-43-10" in MAGICC6 both map to "HFC4310" in MAGICC7 but "HFC4310" in
MAGICC7 maps back to "HFC4310".
Note that HFC-245fa was mistakenly labelled as HFC-245ca in MAGICC6. In reality,
they are not the same thing. However, the MAGICC6 labelling was merely a typo so
the mapping between the two is one-to-one.
Parameters
----------
inverse : bool
If True, return the inverse mappings i.e. MAGICC7 to MAGICC6 mappings
Returns
-------
dict
Dictionary of mappings
"""
# we generate the mapping dynamically, the first name in the list
# is the one which will be used for inverse mappings
magicc6_simple_mapping_vars = [
"KYOTO-CO2EQ",
"CO2I",
"CO2B",
"CH4",
"N2O",
"BC",
"OC",
"SOx",
"NOx",
"NMVOC",
"CO",
"SF6",
"NH3",
"CF4",
"C2F6",
"HFC4310",
"HFC43-10",
"HFC-43-10",
"HFC4310",
"HFC134a",
"HFC143a",
"HFC227ea",
"CCl4",
"CH3CCl3",
"HFC245fa",
"Halon 1211",
"Halon 1202",
"Halon 1301",
"Halon 2402",
"Halon1211",
"Halon1202",
"Halon1301",
"Halon2402",
"CH3Br",
"CH3Cl",
"C6F14",
]
magicc6_sometimes_hyphen_vars = [
"CFC-11",
"CFC-12",
"CFC-113",
"CFC-114",
"CFC-115",
"HCFC-22",
"HFC-23",
"HFC-32",
"HFC-125",
"HFC-134a",
"HFC-143a",
"HCFC-141b",
"HCFC-142b",
"HFC-227ea",
"HFC-245fa",
]
magicc6_sometimes_hyphen_vars = [
v.replace("-", "") for v in magicc6_sometimes_hyphen_vars
] + magicc6_sometimes_hyphen_vars
magicc6_sometimes_underscore_vars = [
"HFC43_10",
"CFC_11",
"CFC_12",
"CFC_113",
"CFC_114",
"CFC_115",
"HCFC_22",
"HCFC_141b",
"HCFC_142b",
]
magicc6_sometimes_underscore_replacements = {
v: v.replace("_", "") for v in magicc6_sometimes_underscore_vars
}
special_case_replacements = {
"FossilCO2": "CO2I",
"OtherCO2": "CO2B",
"MCF": "CH3CCL3",
"CARB_TET": "CCL4",
"MHALOSUMCFC12EQ": "MHALOSUMCFC12EQ", # special case to avoid confusion with MCF
}
one_way_replacements = {"HFC-245ca": "HFC245FA", "HFC245ca": "HFC245FA"}
all_possible_magicc6_vars = (
magicc6_simple_mapping_vars
+ magicc6_sometimes_hyphen_vars
+ magicc6_sometimes_underscore_vars
+ list(special_case_replacements.keys())
+ list(one_way_replacements.keys())
)
replacements = {}
for m6v in all_possible_magicc6_vars:
if m6v in special_case_replacements:
replacements[m6v] = special_case_replacements[m6v]
elif (
m6v in magicc6_sometimes_underscore_vars and not inverse
): # underscores one way
replacements[m6v] = magicc6_sometimes_underscore_replacements[m6v]
elif (m6v in one_way_replacements) and not inverse:
replacements[m6v] = one_way_replacements[m6v]
else:
m7v = m6v.replace("-", "").replace(" ", "").upper()
# i.e. if we've already got a value for the inverse, we don't
# want to overwrite it
if (m7v in replacements.values()) and inverse:
continue
replacements[m6v] = m7v
if inverse:
return {v: k for k, v in replacements.items()}
else:
return replacements | Get the mappings from MAGICC6 to MAGICC7 variables.
Note that this mapping is not one to one. For example, "HFC4310", "HFC43-10" and
"HFC-43-10" in MAGICC6 both map to "HFC4310" in MAGICC7 but "HFC4310" in
MAGICC7 maps back to "HFC4310".
Note that HFC-245fa was mistakenly labelled as HFC-245ca in MAGICC6. In reality,
they are not the same thing. However, the MAGICC6 labelling was merely a typo so
the mapping between the two is one-to-one.
Parameters
----------
inverse : bool
If True, return the inverse mappings i.e. MAGICC7 to MAGICC6 mappings
Returns
-------
dict
Dictionary of mappings | Below is the instruction that describes the task:
### Input:
Get the mappings from MAGICC6 to MAGICC7 variables.
Note that this mapping is not one to one. For example, "HFC4310", "HFC43-10" and
"HFC-43-10" in MAGICC6 both map to "HFC4310" in MAGICC7 but "HFC4310" in
MAGICC7 maps back to "HFC4310".
Note that HFC-245fa was mistakenly labelled as HFC-245ca in MAGICC6. In reality,
they are not the same thing. However, the MAGICC6 labelling was merely a typo so
the mapping between the two is one-to-one.
Parameters
----------
inverse : bool
If True, return the inverse mappings i.e. MAGICC7 to MAGICC6 mappings
Returns
-------
dict
Dictionary of mappings
### Response:
def get_magicc6_to_magicc7_variable_mapping(inverse=False):
"""Get the mappings from MAGICC6 to MAGICC7 variables.
Note that this mapping is not one to one. For example, "HFC4310", "HFC43-10" and
"HFC-43-10" in MAGICC6 both map to "HFC4310" in MAGICC7 but "HFC4310" in
MAGICC7 maps back to "HFC4310".
Note that HFC-245fa was mistakenly labelled as HFC-245ca in MAGICC6. In reality,
they are not the same thing. However, the MAGICC6 labelling was merely a typo so
the mapping between the two is one-to-one.
Parameters
----------
inverse : bool
If True, return the inverse mappings i.e. MAGICC7 to MAGICC6 mappings
Returns
-------
dict
Dictionary of mappings
"""
# we generate the mapping dynamically, the first name in the list
# is the one which will be used for inverse mappings
magicc6_simple_mapping_vars = [
"KYOTO-CO2EQ",
"CO2I",
"CO2B",
"CH4",
"N2O",
"BC",
"OC",
"SOx",
"NOx",
"NMVOC",
"CO",
"SF6",
"NH3",
"CF4",
"C2F6",
"HFC4310",
"HFC43-10",
"HFC-43-10",
"HFC4310",
"HFC134a",
"HFC143a",
"HFC227ea",
"CCl4",
"CH3CCl3",
"HFC245fa",
"Halon 1211",
"Halon 1202",
"Halon 1301",
"Halon 2402",
"Halon1211",
"Halon1202",
"Halon1301",
"Halon2402",
"CH3Br",
"CH3Cl",
"C6F14",
]
magicc6_sometimes_hyphen_vars = [
"CFC-11",
"CFC-12",
"CFC-113",
"CFC-114",
"CFC-115",
"HCFC-22",
"HFC-23",
"HFC-32",
"HFC-125",
"HFC-134a",
"HFC-143a",
"HCFC-141b",
"HCFC-142b",
"HFC-227ea",
"HFC-245fa",
]
magicc6_sometimes_hyphen_vars = [
v.replace("-", "") for v in magicc6_sometimes_hyphen_vars
] + magicc6_sometimes_hyphen_vars
magicc6_sometimes_underscore_vars = [
"HFC43_10",
"CFC_11",
"CFC_12",
"CFC_113",
"CFC_114",
"CFC_115",
"HCFC_22",
"HCFC_141b",
"HCFC_142b",
]
magicc6_sometimes_underscore_replacements = {
v: v.replace("_", "") for v in magicc6_sometimes_underscore_vars
}
special_case_replacements = {
"FossilCO2": "CO2I",
"OtherCO2": "CO2B",
"MCF": "CH3CCL3",
"CARB_TET": "CCL4",
"MHALOSUMCFC12EQ": "MHALOSUMCFC12EQ", # special case to avoid confusion with MCF
}
one_way_replacements = {"HFC-245ca": "HFC245FA", "HFC245ca": "HFC245FA"}
all_possible_magicc6_vars = (
magicc6_simple_mapping_vars
+ magicc6_sometimes_hyphen_vars
+ magicc6_sometimes_underscore_vars
+ list(special_case_replacements.keys())
+ list(one_way_replacements.keys())
)
replacements = {}
for m6v in all_possible_magicc6_vars:
if m6v in special_case_replacements:
replacements[m6v] = special_case_replacements[m6v]
elif (
m6v in magicc6_sometimes_underscore_vars and not inverse
): # underscores one way
replacements[m6v] = magicc6_sometimes_underscore_replacements[m6v]
elif (m6v in one_way_replacements) and not inverse:
replacements[m6v] = one_way_replacements[m6v]
else:
m7v = m6v.replace("-", "").replace(" ", "").upper()
# i.e. if we've already got a value for the inverse, we don't
# want to overwrite it
if (m7v in replacements.values()) and inverse:
continue
replacements[m6v] = m7v
if inverse:
return {v: k for k, v in replacements.items()}
else:
return replacements |
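For readers trying the mapping out, here is a minimal usage sketch based only on the function as defined above; the spot checks simply follow the replacement rules in the code, and no particular module path is assumed.

# Usage sketch (assumes the function above is in scope; module path not shown here).
forward = get_magicc6_to_magicc7_variable_mapping()
assert forward["HFC-43-10"] == "HFC4310"    # hyphens/spaces stripped, upper-cased
assert forward["MCF"] == "CH3CCL3"          # special-case replacement
assert forward["HFC-245ca"] == "HFC245FA"   # one-way fix of the MAGICC6 typo
inverse = get_magicc6_to_magicc7_variable_mapping(inverse=True)
assert inverse["HFC245FA"] == "HFC245fa"    # first spelling in the list wins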
def backfill_history(self, num_days, available_table_names):
"""
Backfill historical data for days that are missing.
:param num_days: number of days of historical data to backfill,
if missing
:type num_days: int
:param available_table_names: names of available per-date tables
:type available_table_names: ``list``
"""
if num_days == -1:
# skip the first date, under the assumption that data may be
# incomplete
logger.info('Backfilling all available history')
start_table = available_table_names[1]
else:
logger.info('Backfilling %d days of history', num_days)
start_table = available_table_names[-1 * num_days]
start_date = self._datetime_for_table_name(start_table)
end_table = available_table_names[-3]
end_date = self._datetime_for_table_name(end_table)
logger.debug(
'Backfilling history from %s (%s) to %s (%s)', start_table,
start_date.strftime('%Y-%m-%d'), end_table,
end_date.strftime('%Y-%m-%d')
)
for days in range((end_date - start_date).days + 1):
backfill_dt = start_date + timedelta(days=days)
if self._have_cache_for_date(backfill_dt):
logger.info('Cache present for all projects for %s; skipping',
backfill_dt.strftime('%Y-%m-%d'))
continue
backfill_table = self._table_name_for_datetime(backfill_dt)
logger.info('Backfilling %s (%s)', backfill_table,
backfill_dt.strftime('%Y-%m-%d'))
self.query_one_table(backfill_table) | Backfill historical data for days that are missing.
:param num_days: number of days of historical data to backfill,
if missing
:type num_days: int
:param available_table_names: names of available per-date tables
:type available_table_names: ``list`` | Below is the instruction that describes the task:
### Input:
Backfill historical data for days that are missing.
:param num_days: number of days of historical data to backfill,
if missing
:type num_days: int
:param available_table_names: names of available per-date tables
:type available_table_names: ``list``
### Response:
def backfill_history(self, num_days, available_table_names):
"""
Backfill historical data for days that are missing.
:param num_days: number of days of historical data to backfill,
if missing
:type num_days: int
:param available_table_names: names of available per-date tables
:type available_table_names: ``list``
"""
if num_days == -1:
# skip the first date, under the assumption that data may be
# incomplete
logger.info('Backfilling all available history')
start_table = available_table_names[1]
else:
logger.info('Backfilling %d days of history', num_days)
start_table = available_table_names[-1 * num_days]
start_date = self._datetime_for_table_name(start_table)
end_table = available_table_names[-3]
end_date = self._datetime_for_table_name(end_table)
logger.debug(
'Backfilling history from %s (%s) to %s (%s)', start_table,
start_date.strftime('%Y-%m-%d'), end_table,
end_date.strftime('%Y-%m-%d')
)
for days in range((end_date - start_date).days + 1):
backfill_dt = start_date + timedelta(days=days)
if self._have_cache_for_date(backfill_dt):
logger.info('Cache present for all projects for %s; skipping',
backfill_dt.strftime('%Y-%m-%d'))
continue
backfill_table = self._table_name_for_datetime(backfill_dt)
logger.info('Backfilling %s (%s)', backfill_table,
backfill_dt.strftime('%Y-%m-%d'))
self.query_one_table(backfill_table) |
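The window arithmetic above relies on class helpers that are not shown; the following standalone sketch reproduces just the date-window selection, assuming (purely for illustration) that per-date table names end in YYYYMMDD and that the last two tables are skipped because they may be incomplete.

from datetime import datetime, timedelta

def _dt_for_table(name):  # stand-in for self._datetime_for_table_name
    return datetime.strptime(name.split("_")[-1], "%Y%m%d")

tables = ["data_20240101", "data_20240102", "data_20240103",
          "data_20240104", "data_20240105", "data_20240106"]
num_days = 4
start = _dt_for_table(tables[1] if num_days == -1 else tables[-num_days])
end = _dt_for_table(tables[-3])
for d in range((end - start).days + 1):
    print("would backfill", (start + timedelta(days=d)).strftime("%Y-%m-%d"))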
def read_chunks(stream, block_size=2**10):
"""
Given a byte stream with reader, yield chunks of block_size
until the stream is consumed.
"""
while True:
chunk = stream.read(block_size)
if not chunk:
break
yield chunk | Given a byte stream with reader, yield chunks of block_size
until the stream is consumed. | Below is the instruction that describes the task:
### Input:
Given a byte stream with reader, yield chunks of block_size
until the stream is consumed.
### Response:
def read_chunks(stream, block_size=2**10):
"""
Given a byte stream with reader, yield chunks of block_size
until the stream is consumed.
"""
while True:
chunk = stream.read(block_size)
if not chunk:
break
yield chunk |
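A quick usage example: because read_chunks only needs an object with a read() method, any byte stream works, for instance hashing data without loading it all into memory.

import hashlib
import io

stream = io.BytesIO(b"example payload " * 1000)  # any object with .read() works
digest = hashlib.sha256()
for chunk in read_chunks(stream, block_size=4096):
    digest.update(chunk)
print(digest.hexdigest())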
def resolve_class(class_path):
"""
Load a class by a fully qualified class_path,
eg. myapp.models.ModelName
"""
modulepath, classname = class_path.rsplit('.', 1)
module = __import__(modulepath, fromlist=[classname])
return getattr(module, classname) | Load a class by a fully qualified class_path,
eg. myapp.models.ModelName | Below is the instruction that describes the task:
### Input:
Load a class by a fully qualified class_path,
eg. myapp.models.ModelName
### Response:
def resolve_class(class_path):
"""
Load a class by a fully qualified class_path,
eg. myapp.models.ModelName
"""
modulepath, classname = class_path.rsplit('.', 1)
module = __import__(modulepath, fromlist=[classname])
return getattr(module, classname) |
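For example, any importable class can be resolved the same way as the Django-style path in the docstring; a standard-library class is used here only so the snippet runs anywhere.

cls = resolve_class("collections.OrderedDict")
print(cls(a=1, b=2))  # OrderedDict([('a', 1), ('b', 2)])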
def generate_ansible_command(self):
"""
Given that the ``RunnerConfig`` preparation methods have been run to gather the inputs this method
will generate the ``ansible`` or ``ansible-playbook`` command that will be used by the
:py:class:`ansible_runner.runner.Runner` object to start the process
"""
if self.binary is not None:
base_command = self.binary
self.execution_mode = ExecutionMode.RAW
elif self.module is not None:
base_command = 'ansible'
self.execution_mode = ExecutionMode.ANSIBLE
else:
base_command = 'ansible-playbook'
self.execution_mode = ExecutionMode.ANSIBLE_PLAYBOOK
exec_list = [base_command]
try:
cmdline_args = self.loader.load_file('env/cmdline', string_types, encoding=None)
args = shlex.split(cmdline_args)
exec_list.extend(args)
except ConfigurationError:
pass
if isinstance(self.inventory, list):
for i in self.inventory:
exec_list.append("-i")
exec_list.append(i)
else:
exec_list.append("-i")
exec_list.append(self.inventory)
if self.limit is not None:
exec_list.append("--limit")
exec_list.append(self.limit)
if self.loader.isfile('env/extravars'):
exec_list.extend(['-e', '@{}'.format(self.loader.abspath('env/extravars'))])
if isinstance(self.extra_vars, dict) and self.extra_vars:
exec_list.extend(
[
'-e',
'%s' % ' '.join(
["{}=\"{}\"".format(k, self.extra_vars[k]) for k in self.extra_vars]
)
]
)
if self.verbosity:
v = 'v' * self.verbosity
exec_list.append('-{}'.format(v))
if self.tags:
exec_list.extend(['--tags', '{}'.format(self.tags)])
if self.skip_tags:
exec_list.extend(['--skip-tags', '{}'.format(self.skip_tags)])
if self.forks:
exec_list.extend(['--forks', '{}'.format(self.forks)])
# Other parameters
if self.execution_mode == ExecutionMode.ANSIBLE_PLAYBOOK:
exec_list.append(self.playbook)
elif self.execution_mode == ExecutionMode.ANSIBLE:
exec_list.append("-m")
exec_list.append(self.module)
if self.module_args is not None:
exec_list.append("-a")
exec_list.append(self.module_args)
if self.host_pattern is not None:
exec_list.append(self.host_pattern)
return exec_list | Given that the ``RunnerConfig`` preparation methods have been run to gather the inputs this method
will generate the ``ansible`` or ``ansible-playbook`` command that will be used by the
:py:class:`ansible_runner.runner.Runner` object to start the process | Below is the instruction that describes the task:
### Input:
Given that the ``RunnerConfig`` preparation methods have been run to gather the inputs this method
will generate the ``ansible`` or ``ansible-playbook`` command that will be used by the
:py:class:`ansible_runner.runner.Runner` object to start the process
### Response:
def generate_ansible_command(self):
"""
Given that the ``RunnerConfig`` preparation methods have been run to gather the inputs this method
will generate the ``ansible`` or ``ansible-playbook`` command that will be used by the
:py:class:`ansible_runner.runner.Runner` object to start the process
"""
if self.binary is not None:
base_command = self.binary
self.execution_mode = ExecutionMode.RAW
elif self.module is not None:
base_command = 'ansible'
self.execution_mode = ExecutionMode.ANSIBLE
else:
base_command = 'ansible-playbook'
self.execution_mode = ExecutionMode.ANSIBLE_PLAYBOOK
exec_list = [base_command]
try:
cmdline_args = self.loader.load_file('env/cmdline', string_types, encoding=None)
args = shlex.split(cmdline_args)
exec_list.extend(args)
except ConfigurationError:
pass
if isinstance(self.inventory, list):
for i in self.inventory:
exec_list.append("-i")
exec_list.append(i)
else:
exec_list.append("-i")
exec_list.append(self.inventory)
if self.limit is not None:
exec_list.append("--limit")
exec_list.append(self.limit)
if self.loader.isfile('env/extravars'):
exec_list.extend(['-e', '@{}'.format(self.loader.abspath('env/extravars'))])
if isinstance(self.extra_vars, dict) and self.extra_vars:
exec_list.extend(
[
'-e',
'%s' % ' '.join(
["{}=\"{}\"".format(k, self.extra_vars[k]) for k in self.extra_vars]
)
]
)
if self.verbosity:
v = 'v' * self.verbosity
exec_list.append('-{}'.format(v))
if self.tags:
exec_list.extend(['--tags', '{}'.format(self.tags)])
if self.skip_tags:
exec_list.extend(['--skip-tags', '{}'.format(self.skip_tags)])
if self.forks:
exec_list.extend(['--forks', '{}'.format(self.forks)])
# Other parameters
if self.execution_mode == ExecutionMode.ANSIBLE_PLAYBOOK:
exec_list.append(self.playbook)
elif self.execution_mode == ExecutionMode.ANSIBLE:
exec_list.append("-m")
exec_list.append(self.module)
if self.module_args is not None:
exec_list.append("-a")
exec_list.append(self.module_args)
if self.host_pattern is not None:
exec_list.append(self.host_pattern)
return exec_list |
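To make the flag assembly above concrete without the surrounding RunnerConfig machinery, here is a simplified standalone sketch of the same idea; the helper name and reduced option set are illustrative only, and the real method additionally consults env/cmdline and env/extravars through self.loader.

def build_playbook_command(playbook, inventory, limit=None, extra_vars=None,
                           verbosity=0, tags=None):
    # hypothetical helper mirroring the ansible-playbook branch above
    cmd = ["ansible-playbook"]
    for i in (inventory if isinstance(inventory, list) else [inventory]):
        cmd.extend(["-i", i])
    if limit:
        cmd.extend(["--limit", limit])
    if extra_vars:
        cmd.extend(["-e", " ".join('{}="{}"'.format(k, v)
                                   for k, v in extra_vars.items())])
    if verbosity:
        cmd.append("-" + "v" * verbosity)
    if tags:
        cmd.extend(["--tags", tags])
    cmd.append(playbook)
    return cmd

print(build_playbook_command("site.yml", ["hosts.ini"],
                             extra_vars={"env": "staging"}, verbosity=2))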
def process_file(self):
""" process_file: Writes base64 encoding to file
Args: None
Returns: filename
"""
self.filename = self.convert_base64_to_file()
config.LOGGER.info("\t--- Converted base64 image to {}".format(self.filename))
return self.filename | process_file: Writes base64 encoding to file
Args: None
Returns: filename | Below is the instruction that describes the task:
### Input:
process_file: Writes base64 encoding to file
Args: None
Returns: filename
### Response:
def process_file(self):
""" process_file: Writes base64 encoding to file
Args: None
Returns: filename
"""
self.filename = self.convert_base64_to_file()
config.LOGGER.info("\t--- Converted base64 image to {}".format(self.filename))
return self.filename |
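The convert_base64_to_file helper is not shown above, so the following is only a rough sketch of what such a conversion might look like; the data-URI handling and the default output name are assumptions, not the library's actual behaviour.

import base64

def base64_to_file(encoding, filename="decoded.png"):  # hypothetical helper
    payload = encoding.split(",", 1)[-1]  # drop an optional "data:...;base64," prefix
    with open(filename, "wb") as handle:
        handle.write(base64.b64decode(payload))
    return filename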
def get_property_names(obj):
"""
Gets names of all properties implemented in specified object.
:param obj: an object to introspect.
:return: a list with property names.
"""
property_names = []
for property_name in dir(obj):
property = getattr(obj, property_name)
if PropertyReflector._is_property(property, property_name):
property_names.append(property_name)
return property_names | Gets names of all properties implemented in specified object.
:param obj: an object to introspect.
:return: a list with property names. | Below is the instruction that describes the task:
### Input:
Gets names of all properties implemented in specified object.
:param obj: an object to introspect.
:return: a list with property names.
### Response:
def get_property_names(obj):
"""
Gets names of all properties implemented in specified object.
:param obj: an object to introspect.
:return: a list with property names.
"""
property_names = []
for property_name in dir(obj):
property = getattr(obj, property_name)
if PropertyReflector._is_property(property, property_name):
property_names.append(property_name)
return property_names |
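As a usage illustration, the snippet below introspects a small object; note that the _is_property filter is not shown above, so exactly which names are reported depends on its implementation.

class Point(object):
    def __init__(self):
        self.x = 0
        self.y = 0

# Expected to report something like ['x', 'y'] if plain public attributes
# count as properties under the unseen _is_property check.
print(get_property_names(Point()))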
def process_core(self):
"""
The method deals with a core found previously in
:func:`get_core`. Clause selectors ``self.core_sels`` and
sum assumptions involved in the core are treated
separately of each other. This is handled by calling
methods :func:`process_sels` and :func:`process_sums`,
respectively. Whenever necessary, both methods relax the
core literals, which is followed by creating a new
totalizer object encoding the sum of the new relaxation
variables. The totalizer object can be "exhausted"
depending on the option.
"""
# updating the cost
self.cost += self.minw
# assumptions to remove
self.garbage = set()
if len(self.core_sels) != 1 or len(self.core_sums) > 0:
# process selectors in the core
self.process_sels()
# process previously introduced sums in the core
self.process_sums()
if len(self.rels) > 1:
# create a new cardinality constraint
t = self.create_sum()
# apply core exhaustion if required
b = self.exhaust_core(t) if self.exhaust else 1
if b:
# save the info about this sum and
# add its assumption literal
self.set_bound(t, b)
else:
# impossible to satisfy any of these clauses
# they must become hard
for relv in self.rels:
self.oracle.add_clause([relv])
else:
# unit cores are treated differently
# (their negation is added to the hard part)
self.oracle.add_clause([-self.core_sels[0]])
self.garbage.add(self.core_sels[0])
# remove unnecessary assumptions
self.filter_assumps() | The method deals with a core found previously in
:func:`get_core`. Clause selectors ``self.core_sels`` and
sum assumptions involved in the core are treated
separately of each other. This is handled by calling
methods :func:`process_sels` and :func:`process_sums`,
respectively. Whenever necessary, both methods relax the
core literals, which is followed by creating a new
totalizer object encoding the sum of the new relaxation
variables. The totalizer object can be "exhausted"
depending on the option. | Below is the instruction that describes the task:
### Input:
The method deals with a core found previously in
:func:`get_core`. Clause selectors ``self.core_sels`` and
sum assumptions involved in the core are treated
separately of each other. This is handled by calling
methods :func:`process_sels` and :func:`process_sums`,
respectively. Whenever necessary, both methods relax the
core literals, which is followed by creating a new
totalizer object encoding the sum of the new relaxation
variables. The totalizer object can be "exhausted"
depending on the option.
### Response:
def process_core(self):
"""
The method deals with a core found previously in
:func:`get_core`. Clause selectors ``self.core_sels`` and
sum assumptions involved in the core are treated
separately of each other. This is handled by calling
methods :func:`process_sels` and :func:`process_sums`,
respectively. Whenever necessary, both methods relax the
core literals, which is followed by creating a new
totalizer object encoding the sum of the new relaxation
variables. The totalizer object can be "exhausted"
depending on the option.
"""
# updating the cost
self.cost += self.minw
# assumptions to remove
self.garbage = set()
if len(self.core_sels) != 1 or len(self.core_sums) > 0:
# process selectors in the core
self.process_sels()
# process previously introduced sums in the core
self.process_sums()
if len(self.rels) > 1:
# create a new cardinality constraint
t = self.create_sum()
# apply core exhaustion if required
b = self.exhaust_core(t) if self.exhaust else 1
if b:
# save the info about this sum and
# add its assumption literal
self.set_bound(t, b)
else:
# impossible to satisfy any of these clauses
# they must become hard
for relv in self.rels:
self.oracle.add_clause([relv])
else:
# unit cores are treated differently
# (their negation is added to the hard part)
self.oracle.add_clause([-self.core_sels[0]])
self.garbage.add(self.core_sels[0])
# remove unnecessary assumptions
self.filter_assumps() |
def init_argparser(self, argparser):
"""
Other runtimes (or users of ArgumentParser) can pass their
subparser into here to collect the arguments here for a
subcommand.
"""
super(ToolchainRuntime, self).init_argparser(argparser)
# it is possible for subclasses to fully override this, but if
# they are using this as the runtime to drive the toolchain they
# should be prepared to follow the layout, but if they omit them
# it should only result in the spec omitting these arguments.
self.init_argparser_export_target(argparser)
self.init_argparser_working_dir(argparser)
self.init_argparser_build_dir(argparser)
self.init_argparser_optional_advice(argparser) | Other runtimes (or users of ArgumentParser) can pass their
subparser into here to collect the arguments here for a
subcommand. | Below is the instruction that describes the task:
### Input:
Other runtimes (or users of ArgumentParser) can pass their
subparser into here to collect the arguments here for a
subcommand.
### Response:
def init_argparser(self, argparser):
"""
Other runtimes (or users of ArgumentParser) can pass their
subparser into here to collect the arguments here for a
subcommand.
"""
super(ToolchainRuntime, self).init_argparser(argparser)
# it is possible for subclasses to fully override this, but if
# they are using this as the runtime to drive the toolchain they
# should be prepared to follow the layout, but if they omit them
# it should only result in the spec omitting these arguments.
self.init_argparser_export_target(argparser)
self.init_argparser_working_dir(argparser)
self.init_argparser_build_dir(argparser)
self.init_argparser_optional_advice(argparser) |
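The pattern described here, a runtime object populating an argparse subparser for its subcommand, can be illustrated generically; the class and option names below are made up for the example and are not the real ToolchainRuntime API.

import argparse

class DemoRuntime(object):
    def init_argparser(self, argparser):
        # each subcommand registers only its own flags on the subparser it is given
        argparser.add_argument("--export-target")
        argparser.add_argument("--working-dir", default=".")
        argparser.add_argument("--build-dir")

parser = argparse.ArgumentParser(prog="tool")
subparsers = parser.add_subparsers(dest="command")
DemoRuntime().init_argparser(subparsers.add_parser("build"))
print(parser.parse_args(["build", "--export-target", "out.js"]))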
def _get_equivalent_distances_east(wid, lng, mag, repi, focal_depth=10.,
ab06=False):
"""
Computes equivalent values of Joyner-Boore and closest distance to the
rupture given epicentral distance. The procedure is described in
Atkinson (2012) - Appendix A (page 32).
:param float wid:
Width of rectangular rupture
:param float lng:
Length of rectangular rupture
:param float mag:
Magnitude
:param repi:
A :class:`numpy.ndarray` instance containing repi values
:param float focal_depth:
Focal depth
:param boolean ab06:
When true a minimum ztor value is set to force near-source saturation
"""
dtop = focal_depth - 0.5*wid
# this computes a minimum ztor value - used for AB2006
if ab06:
ztor_ab06 = 21-2.5*mag
dtop = np.max([ztor_ab06, dtop])
ztor = max(0, dtop)
# find the average distance to the fault projection
dsurf = np.max([repi-0.3*lng, 0.1*np.ones_like(repi)], axis=0)
# rrup
rrup = (dsurf**2+ztor**2)**0.5
# return rjb and rrup
return dsurf, rrup | Computes equivalent values of Joyner-Boore and closest distance to the
rupture given epicentral distance. The procedure is described in
Atkinson (2012) - Appendix A (page 32).
:param float wid:
Width of rectangular rupture
:param float lng:
Length of rectangular rupture
:param float mag:
Magnitude
:param repi:
A :class:`numpy.ndarray` instance containing repi values
:param float focal_depth:
Focal depth
:param boolean ab06:
When true a minimum ztor value is set to force near-source saturation | Below is the instruction that describes the task:
### Input:
Computes equivalent values of Joyner-Boore and closest distance to the
rupture given epicentral distance. The procedure is described in
Atkinson (2012) - Appendix A (page 32).
:param float wid:
Width of rectangular rupture
:param float lng:
Length of rectangular rupture
:param float mag:
Magnitude
:param repi:
A :class:`numpy.ndarray` instance containing repi values
:param float focal_depth:
Focal depth
:param boolean ab06:
When true a minimum ztor value is set to force near-source saturation
### Response:
def _get_equivalent_distances_east(wid, lng, mag, repi, focal_depth=10.,
ab06=False):
"""
Computes equivalent values of Joyner-Boore and closest distance to the
rupture given epicentral distance. The procedure is described in
Atkinson (2012) - Appendix A (page 32).
:param float wid:
Width of rectangular rupture
:param float lng:
Length of rectangular rupture
:param float mag:
Magnitude
:param repi:
A :class:`numpy.ndarray` instance containing repi values
:param float focal_depth:
Focal depth
:param boolean ab06:
When true a minimum ztor value is set to force near-source saturation
"""
dtop = focal_depth - 0.5*wid
# this computes a minimum ztor value - used for AB2006
if ab06:
ztor_ab06 = 21-2.5*mag
dtop = np.max([ztor_ab06, dtop])
ztor = max(0, dtop)
# find the average distance to the fault projection
dsurf = np.max([repi-0.3*lng, 0.1*np.ones_like(repi)], axis=0)
# rrup
rrup = (dsurf**2+ztor**2)**0.5
# return rjb and rrup
return dsurf, rrup |
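A small worked example of the conversion, using illustrative numbers only: with wid=10 and focal_depth=10 the depth to the rupture top is 5 km, so ztor=5; rjb is the epicentral distance shortened by 0.3*lng (floored at 0.1 km) and rrup follows by Pythagoras.

import numpy as np

repi = np.array([5.0, 20.0, 100.0])
rjb, rrup = _get_equivalent_distances_east(wid=10.0, lng=15.0, mag=6.0,
                                           repi=repi, focal_depth=10.0)
print(rjb)   # [ 0.5 15.5 95.5]
print(rrup)  # sqrt(rjb**2 + 5**2)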