Dataset columns: code (string, lengths 75 to 104k), docstring (string, lengths 1 to 46.9k), text (string, lengths 164 to 112k).
def area_field(key='area'): """Provides a select box for area (subdivision) selection""" area_list = list(subdivisions) title_map = [] for item in area_list: title_map.append({'value': item.code, 'name': item.name}) widget = { 'key': key, 'type': 'uiselect', 'titleMap': title_map } return widget
Provides a select box for area (subdivision) selection
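A minimal usage sketch of the helper above, restated in runnable form. It assumes that `subdivisions` comes from the pycountry package; that import is not shown in the original, so it is an assumption here, as are the example key and printed values.

from pycountry import subdivisions  # assumed source of `subdivisions`

def area_field(key='area'):
    """Provides a select box for area (subdivision) selection"""
    # Build the titleMap entries from every known subdivision.
    title_map = [{'value': item.code, 'name': item.name} for item in subdivisions]
    return {'key': key, 'type': 'uiselect', 'titleMap': title_map}

widget = area_field('region')
print(widget['type'])          # uiselect
print(widget['titleMap'][0])   # e.g. {'value': 'AD-02', 'name': 'Canillo'}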
def extract_flags_from_text(self, text): """ Extract the flags from the given text and return a :class:`set` of flag values. See :class:`~taxi.timesheet.lines.Entry` for a list of existing flags. """ flags = set() reversed_flags_repr = {v: k for k, v in self.flags_repr.items()} for flag_repr in text: if flag_repr not in reversed_flags_repr: raise KeyError("Flag '%s' is not recognized" % flag_repr) else: flags.add(reversed_flags_repr[flag_repr]) return flags
Extract the flags from the given text and return a :class:`set` of flag values. See :class:`~taxi.timesheet.lines.Entry` for a list of existing flags.
def make_variant(cls, converters, re_opts=None, compiled=False, strict=True): """ Creates a type converter for a number of type converter alternatives. The first matching type converter is used. REQUIRES: type_converter.pattern attribute :param converters: List of type converters as alternatives. :param re_opts: Regular expression options to use (=default_re_opts). :param compiled: Use compiled regexp matcher, if true (=False). :param strict: Enable assertion checks. :return: Type converter function object. .. note:: Works only with named fields in :class:`parse.Parser`. Parser needs group_index delta for unnamed/fixed fields. This is not supported for user-defined types. Otherwise, you need to use :class:`parse_type.parse.Parser` (patched version of the :mod:`parse` module). """ # -- NOTE: Uses double-dispatch with regex pattern rematch because # match is not passed through to primary type converter. assert converters, "REQUIRE: Non-empty list." if len(converters) == 1: return converters[0] if re_opts is None: re_opts = cls.default_re_opts pattern = r")|(".join([tc.pattern for tc in converters]) pattern = r"("+ pattern + ")" group_count = len(converters) for converter in converters: group_count += pattern_group_count(converter.pattern) if compiled: convert_variant = cls.__create_convert_variant_compiled(converters, re_opts, strict) else: convert_variant = cls.__create_convert_variant(re_opts, strict) convert_variant.pattern = pattern convert_variant.converters = tuple(converters) # OLD: convert_variant.group_count = group_count convert_variant.regex_group_count = group_count return convert_variant
Creates a type converter for a number of type converter alternatives. The first matching type converter is used. REQUIRES: type_converter.pattern attribute :param converters: List of type converters as alternatives. :param re_opts: Regular expression options to use (=default_re_opts). :param compiled: Use compiled regexp matcher, if true (=False). :param strict: Enable assertion checks. :return: Type converter function object. .. note:: Works only with named fields in :class:`parse.Parser`. Parser needs group_index delta for unnamed/fixed fields. This is not supported for user-defined types. Otherwise, you need to use :class:`parse_type.parse.Parser` (patched version of the :mod:`parse` module).
def set(container, azure_secret_access_key): """Set/update the access key for the specified Azure storage container.""" click.secho(dtool_config.utils.set_azure_secret_access_key( CONFIG_PATH, container, azure_secret_access_key ))
Set/update the access key for the specified Azure storage container.
def tar_and_upload_dir(session, bucket, s3_key_prefix, script, directory=None, dependencies=None, kms_key=None): """Package source files and upload a compress tar file to S3. The S3 location will be ``s3://<bucket>/s3_key_prefix/sourcedir.tar.gz``. If directory is an S3 URI, an UploadedCode object will be returned, but nothing will be uploaded to S3 (this allow reuse of code already in S3). If directory is None, the script will be added to the archive at ``./<basename of script>``. If directory is not None, the (recursive) contents of the directory will be added to the archive. directory is treated as the base path of the archive, and the script name is assumed to be a filename or relative path inside the directory. Args: session (boto3.Session): Boto session used to access S3. bucket (str): S3 bucket to which the compressed file is uploaded. s3_key_prefix (str): Prefix for the S3 key. script (str): Script filename or path. directory (str): Optional. Directory containing the source file. If it starts with "s3://", no action is taken. dependencies (List[str]): Optional. A list of paths to directories (absolute or relative) containing additional libraries that will be copied into /opt/ml/lib kms_key (str): Optional. KMS key ID used to upload objects to the bucket (default: None). Returns: sagemaker.fw_utils.UserCode: An object with the S3 bucket and key (S3 prefix) and script name. """ if directory and directory.lower().startswith('s3://'): return UploadedCode(s3_prefix=directory, script_name=os.path.basename(script)) script_name = script if directory else os.path.basename(script) dependencies = dependencies or [] key = '%s/sourcedir.tar.gz' % s3_key_prefix tmp = tempfile.mkdtemp() try: source_files = _list_files_to_compress(script, directory) + dependencies tar_file = sagemaker.utils.create_tar_file(source_files, os.path.join(tmp, _TAR_SOURCE_FILENAME)) if kms_key: extra_args = {'ServerSideEncryption': 'aws:kms', 'SSEKMSKeyId': kms_key} else: extra_args = None session.resource('s3').Object(bucket, key).upload_file(tar_file, ExtraArgs=extra_args) finally: shutil.rmtree(tmp) return UploadedCode(s3_prefix='s3://%s/%s' % (bucket, key), script_name=script_name)
Package source files and upload a compress tar file to S3. The S3 location will be ``s3://<bucket>/s3_key_prefix/sourcedir.tar.gz``. If directory is an S3 URI, an UploadedCode object will be returned, but nothing will be uploaded to S3 (this allow reuse of code already in S3). If directory is None, the script will be added to the archive at ``./<basename of script>``. If directory is not None, the (recursive) contents of the directory will be added to the archive. directory is treated as the base path of the archive, and the script name is assumed to be a filename or relative path inside the directory. Args: session (boto3.Session): Boto session used to access S3. bucket (str): S3 bucket to which the compressed file is uploaded. s3_key_prefix (str): Prefix for the S3 key. script (str): Script filename or path. directory (str): Optional. Directory containing the source file. If it starts with "s3://", no action is taken. dependencies (List[str]): Optional. A list of paths to directories (absolute or relative) containing additional libraries that will be copied into /opt/ml/lib kms_key (str): Optional. KMS key ID used to upload objects to the bucket (default: None). Returns: sagemaker.fw_utils.UserCode: An object with the S3 bucket and key (S3 prefix) and script name.
def strings_to_list_string(strings): '''Takes a list of strings presumably containing words and phrases, and returns a "list" form of those strings, like: >>> strings_to_list_string(('cats', 'dogs')) >>> 'cats and dogs' or >>> strings_to_list_string(('pizza', 'pop', 'chips')) >>> 'pizza, pop, and chips' Raises ValueError if strings is empty. ''' if isinstance(strings, six.string_types): raise TypeError('strings must be an iterable of strings, not a string ' 'itself') if len(strings) == 0: raise ValueError('strings may not be empty') elif len(strings) == 1: return strings[0] elif len(strings) == 2: return ' and '.join(strings) else: return '{0}, and {1}'.format(', '.join(strings[:-1]), strings[-1])
Takes a list of strings presumably containing words and phrases, and returns a "list" form of those strings, like: >>> strings_to_list_string(('cats', 'dogs')) >>> 'cats and dogs' or >>> strings_to_list_string(('pizza', 'pop', 'chips')) >>> 'pizza, pop, and chips' Raises ValueError if strings is empty.
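As a quick check of the joining rules above, here is a self-contained sketch; it swaps the six string-type guard for a plain str check so it runs without the six dependency, but is otherwise the same logic.

def strings_to_list_string(strings):
    # Standalone restatement of the helper above (plain str check instead of six).
    if isinstance(strings, str):
        raise TypeError('strings must be an iterable of strings, not a string itself')
    if len(strings) == 0:
        raise ValueError('strings may not be empty')
    elif len(strings) == 1:
        return strings[0]
    elif len(strings) == 2:
        return ' and '.join(strings)
    else:
        return '{0}, and {1}'.format(', '.join(strings[:-1]), strings[-1])

print(strings_to_list_string(('cats', 'dogs')))           # cats and dogs
print(strings_to_list_string(('pizza', 'pop', 'chips')))  # pizza, pop, and chips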
def report(self, format=ReportFormat.printout, output_path=None): """ Returns a report of this class. :param format: The format of the report. :param output_path: The path to the file the report is written to. If None, then the report is not written to a file. :returns: The rendered report. """ rpt = GlsRpt(self, output_path) return rpt.render(format)
Returns a report of this class. :param format: The format of the report. :param output_path: The path to the file the report is written to. If None, then the report is not written to a file. :returns: The rendered report.
def retrieve(self, request, _id): """ Returns the document containing the given _id or 404 """ _id = deserialize(_id) retrieved = self.collection.find_one({'_id': _id}) if retrieved: return Response(serialize(retrieved)) else: return Response( response=serialize( DocumentNotFoundError(self.collection.__name__, _id) ), status=404 )
Returns the document containing the given _id or 404
def _parse_mtllibs(self): """Load mtl files""" for mtllib in self.meta.mtllibs: try: materials = self.material_parser_cls( os.path.join(self.path, mtllib), encoding=self.encoding, strict=self.strict).materials except IOError: raise IOError("Failed to load mtl file: {}".format(os.path.join(self.path, mtllib))) for name, material in materials.items(): self.wavefront.materials[name] = material
Load mtl files
def init_app(self, app, conf_key=None): """ :type app: flask.Flask :param str conf_key: Key of flask config. """ conf_key = conf_key or self.conf_key or 'PYMEMCACHE' self.conf_key = conf_key conf = app.config[conf_key] if not isinstance(conf, dict): raise TypeError("Flask-PyMemcache conf should be dict") close_on_teardown = conf.pop('close_on_teardown', False) if isinstance(conf['server'], list): conf['servers'] = conf.pop('server') client = pymemcache.client.hash.HashClient(**conf) elif isinstance(conf['server'], tuple): client = pymemcache.client.Client(**conf) else: raise TypeError("Flask-PyMemcache conf['server'] should be tuple or list of tuples") app.extensions.setdefault('pymemcache', {}) app.extensions['pymemcache'][self] = client if close_on_teardown: @app.teardown_appcontext def close_connection(exc=None): client.close()
:type app: flask.Flask :param str conf_key: Key of flask config.
def cache_train(self): """ Loads the data for this classifier from a cache file :return: whether or not we were successful :rtype: bool """ filename = self.get_cache_location() if not os.path.exists(filename): return False categories = pickle.load(open(filename, 'rb')) assert isinstance(categories, BayesCategories), \ "Cache data is either corrupt or invalid" self.categories = categories # Updating our per-category overall probabilities self.calculate_category_probability() return True
Loads the data for this classifier from a cache file :return: whether or not we were successful :rtype: bool
def print_tree(self, maxresults=100, maxdepth=None): """Walk the object tree, pretty-printing each branch.""" self.ignore_caller() for depth, refid, rep in self.walk(maxresults, maxdepth): print(("%9d" % refid), (" " * depth * 2), rep)
Walk the object tree, pretty-printing each branch.
def get_transports(): """ get all known transports from Ariane Server :return: """ LOGGER.debug("TransportService.get_transports") params = SessionService.complete_transactional_req(None) if params is None: if MappingService.driver_type != DriverFactory.DRIVER_REST: params = {'OPERATION': 'getTransports'} args = {'properties': params} else: args = {'http_operation': 'GET', 'operation_path': ''} else: if MappingService.driver_type != DriverFactory.DRIVER_REST: params['OPERATION'] = 'getTransports' args = {'properties': params} else: args = {'http_operation': 'GET', 'operation_path': '', 'parameters': params} response = TransportService.requester.call(args) if MappingService.driver_type != DriverFactory.DRIVER_REST: response = response.get() ret = None if response.rc == 0: ret = [] for transport in response.response_content['transports']: ret.append(Transport.json_2_transport(transport)) elif response.rc != 404: err_msg = 'TransportService.get_transports - Problem while getting transports. ' \ 'Reason: ' + str(response.response_content) + ' - ' + str(response.error_message) + \ " (" + str(response.rc) + ")" LOGGER.warning(err_msg) if response.rc == 500 and ArianeMappingOverloadError.ERROR_MSG in response.error_message: raise ArianeMappingOverloadError("Transport.get_transports", ArianeMappingOverloadError.ERROR_MSG) # traceback.print_stack() return ret
get all known transports from Ariane Server :return:
def batch_augment(x, func, device='/CPU:0'): """ Apply dataset augmentation to a batch of examples. :param x: Tensor representing a batch of examples. :param func: Callable implementing dataset augmentation, operating on a single image. :param device: String specifying which device to use. """ with tf.device(device): return tf.map_fn(func, x)
Apply dataset augmentation to a batch of examples. :param x: Tensor representing a batch of examples. :param func: Callable implementing dataset augmentation, operating on a single image. :param device: String specifying which device to use.
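A hedged usage sketch for the augmentation helper above, written against TensorFlow 1.x-style graph mode (which the tf.device/tf.map_fn pattern suggests); the placeholder shape and the choice of tf.image.random_flip_left_right as the per-image function are assumptions for illustration only.

import tensorflow as tf

def batch_augment(x, func, device='/CPU:0'):
    # Same body as above: map the per-image augmentation over the batch on the given device.
    with tf.device(device):
        return tf.map_fn(func, x)

images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # hypothetical input batch
augmented = batch_augment(images, tf.image.random_flip_left_right)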
def add_headers(vcf_obj, nr_cases=None, sv=False): """Add loqus specific information to a VCF header Args: vcf_obj(cyvcf2.VCF) """ vcf_obj.add_info_to_header( { 'ID':"Obs", 'Number': '1', 'Type': 'Integer', 'Description': "The number of observations for the variant"} ) if not sv: vcf_obj.add_info_to_header( { 'ID':"Hom", 'Number': '1', 'Type': 'Integer', 'Description': "The number of observed homozygotes"} ) vcf_obj.add_info_to_header( { 'ID':"Hem", 'Number': '1', 'Type': 'Integer', 'Description': "The number of observed hemizygotes"} ) if nr_cases: case_header = "##NrCases={}".format(nr_cases) vcf_obj.add_to_header(case_header) # head.add_version_tracking("loqusdb", version, datetime.now().strftime("%Y-%m-%d %H:%M")) return
Add loqus specific information to a VCF header Args: vcf_obj(cyvcf2.VCF)
def hexblock_byte(cls, data, address = None, bits = None, separator = ' ', width = 16): """ Dump a block of hexadecimal BYTEs from binary data. @type data: str @param data: Binary data. @type address: str @param address: Memory address where the data was read from. @type bits: int @param bits: (Optional) Number of bits of the target architecture. The default is platform dependent. See: L{HexDump.address_size} @type separator: str @param separator: Separator between the hexadecimal representation of each BYTE. @type width: int @param width: (Optional) Maximum number of BYTEs to convert per text line. @rtype: str @return: Multiline output text. """ return cls.hexblock_cb(cls.hexadecimal, data, address, bits, width, cb_kwargs = {'separator': separator})
Dump a block of hexadecimal BYTEs from binary data. @type data: str @param data: Binary data. @type address: str @param address: Memory address where the data was read from. @type bits: int @param bits: (Optional) Number of bits of the target architecture. The default is platform dependent. See: L{HexDump.address_size} @type separator: str @param separator: Separator between the hexadecimal representation of each BYTE. @type width: int @param width: (Optional) Maximum number of BYTEs to convert per text line. @rtype: str @return: Multiline output text.
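Assuming the surrounding class is winappdbg's HexDump (the import below is an assumption about the package layout), a usage sketch could look like this; the byte values and address are made up.

from winappdbg import HexDump  # assumed import

data = bytes(range(48))  # 48 arbitrary bytes to dump
print(HexDump.hexblock_byte(data, address=0x00401000, separator=' ', width=16))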
def login(self, username=None, password=None): """ Before doing any remote operation, the user has to log in to the GMQL service. This can be done in the following two ways: * Guest mode: the user has no credentials and uses the system only as a temporary guest * Authenticated mode: the user has credentials and a stable remote account If neither username nor password is specified, the user enters the system as a guest. If both are specified and they correspond to an existing user, the user enters as an authenticated user :param username: (optional) :param password: (optional) :return: None """ if (username is None) and (password is None): auth_token = self.__login_guest() elif (username is not None) and (password is not None): auth_token, fullName = self.__login_credentials(username, password) self.logger.info("You are logged in as {}".format(fullName)) else: raise ValueError("you have to specify both username and password or nothing") if auth_token is not None: self.auth_token = auth_token else: raise ConnectionError("Impossible to retrieve the authentication token")
Before doing any remote operation, the user has to log in to the GMQL service. This can be done in the following two ways: * Guest mode: the user has no credentials and uses the system only as a temporary guest * Authenticated mode: the user has credentials and a stable remote account If neither username nor password is specified, the user enters the system as a guest. If both are specified and they correspond to an existing user, the user enters as an authenticated user :param username: (optional) :param password: (optional) :return: None
def from_array(array): """ Deserialize a new KeyboardButton from a given dictionary. :return: new KeyboardButton instance. :rtype: KeyboardButton """ if array is None or not array: return None # end if assert_type_or_raise(array, dict, parameter_name="array") data = {} data['text'] = u(array.get('text')) data['request_contact'] = bool(array.get('request_contact')) if array.get('request_contact') is not None else None data['request_location'] = bool(array.get('request_location')) if array.get('request_location') is not None else None instance = KeyboardButton(**data) instance._raw = array return instance
Deserialize a new KeyboardButton from a given dictionary. :return: new KeyboardButton instance. :rtype: KeyboardButton
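A hedged round-trip sketch for the deserializer above; the pytgbot import path and the payload values are assumptions made for illustration.

from pytgbot.api_types.sendable.reply_markup import KeyboardButton  # assumed path

payload = {'text': 'Share my contact', 'request_contact': True}
button = KeyboardButton.from_array(payload)
print(button.text, button.request_contact)  # Share my contact True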
def order(self, order=None): """ If order is given, modify the URL correspondingly, return the current order otherwise. """ if order is None: return int(self.url.order) self.url.order = str(order)
If order is given, modify the URL correspondingly, return the current order otherwise.
def focusOutEvent( self, event ): """ Overloads the focus out event to cancel editing when the widget loses focus. :param event | <QFocusEvent> """ super(XNavigationEdit, self).focusOutEvent(event) self.cancelEdit()
Overloads the focus out event to cancel editing when the widget loses focus. :param event | <QFocusEvent>
def _get_distribution_indexes(catalog, dataset_identifier, dataset_title, distribution_identifier, distribution_title, logger=None): """Returns the index of a distribution within its dataset, based on its title, together with the index of its parent dataset in the catalog, based on its identifier""" logger = logger or pydj_logger dataset_index = _get_dataset_index( catalog, dataset_identifier, dataset_title) if dataset_index is None: return None, None else: dataset = catalog["catalog_dataset"][dataset_index] matching_distributions = [] for idx, distribution in enumerate(dataset["dataset_distribution"]): if distribution["distribution_identifier"] == distribution_identifier: if distribution["distribution_title"] == distribution_title: matching_distributions.append(idx) else: logger.warning( ce.DistributionUnexpectedTitle( distribution_identifier, distribution["distribution_title"], distribution_title ) ) # There must be exactly one distribution with the given identifiers if len(matching_distributions) == 0: logger.warning( ce.DistributionTitleNonExistentError( distribution_title, dataset_identifier ) ) return dataset_index, None elif len(matching_distributions) > 1: logger.warning( ce.DistributionTitleRepetitionError( distribution_title, dataset_identifier, matching_distributions) ) return dataset_index, None else: distribution_index = matching_distributions[0] return dataset_index, distribution_index
Returns the index of a distribution within its dataset, based on its title, together with the index of its parent dataset in the catalog, based on its identifier
def children(self, node_parent): """! @brief Yields the children of the given node. @param[in] node_parent (node): Node whose children are required. @return (generator) Children of the node. If the node has no children, nothing is yielded. """ if node_parent.left is not None: yield node_parent.left if node_parent.right is not None: yield node_parent.right
! @brief Yields the children of the given node. @param[in] node_parent (node): Node whose children are required. @return (generator) Children of the node. If the node has no children, nothing is yielded.
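To show what the generator above yields, here is a self-contained sketch with a minimal, hypothetical binary-tree node; only the .left and .right attributes matter.

class Node:
    def __init__(self, payload, left=None, right=None):
        self.payload = payload
        self.left = left
        self.right = right

def children(node_parent):
    # Module-level copy of the method above, for demonstration only.
    if node_parent.left is not None:
        yield node_parent.left
    if node_parent.right is not None:
        yield node_parent.right

root = Node('root', left=Node('L'), right=Node('R'))
print([child.payload for child in children(root)])  # ['L', 'R']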
def _normalize_lang_attrs(self, text, strip): """Remove embedded bracketed attributes. This (potentially) bitwise-ands bracketed attributes together and adds to the end. This is applied to a single alternative at a time -- not to a parenthesized list. It removes all embedded bracketed attributes, logically-ands them together, and places them at the end. However if strip is true, this can indeed remove embedded bracketed attributes from a parenthesized list. Parameters ---------- text : str A Beider-Morse phonetic encoding (in progress) strip : bool Remove the bracketed attributes (and throw away) Returns ------- str A Beider-Morse phonetic code Raises ------ ValueError No closing square bracket """ uninitialized = -1 # all 1's attrib = uninitialized while '[' in text: bracket_start = text.find('[') bracket_end = text.find(']', bracket_start) if bracket_end == -1: raise ValueError( 'No closing square bracket: text=(' + text + ') strip=(' + text_type(strip) + ')' ) attrib &= int(text[bracket_start + 1 : bracket_end]) text = text[:bracket_start] + text[bracket_end + 1 :] if attrib == uninitialized or strip: return text elif attrib == 0: # means that the attributes were incompatible and there is no # alternative here return '[0]' return text + '[' + str(attrib) + ']'
Remove embedded bracketed attributes. This (potentially) bitwise-ands bracketed attributes together and adds to the end. This is applied to a single alternative at a time -- not to a parenthesized list. It removes all embedded bracketed attributes, logically-ands them together, and places them at the end. However if strip is true, this can indeed remove embedded bracketed attributes from a parenthesized list. Parameters ---------- text : str A Beider-Morse phonetic encoding (in progress) strip : bool Remove the bracketed attributes (and throw away) Returns ------- str A Beider-Morse phonetic code Raises ------ ValueError No closing square bracket
def add_data_flow_to_state(from_port, to_port): """Interface method between Gaphas and RAFCON core for adding data flows The method checks the types of the given ports and their relation. From this the necessary parameters for the add_data_flow method of the RAFCON core are determined. Also the parent state is derived from the ports. :param from_port: Port from which the data flow starts :param to_port: Port to which the data flow goes to :return: True if a data flow was added, False if an error occurred """ from rafcon.gui.mygaphas.items.ports import InputPortView, OutputPortView, ScopedVariablePortView from rafcon.gui.models.container_state import ContainerStateModel from_state_v = from_port.parent to_state_v = to_port.parent from_state_m = from_state_v.model to_state_m = to_state_v.model from_state_id = from_state_m.state.state_id to_state_id = to_state_m.state.state_id from_port_id = from_port.port_id to_port_id = to_port.port_id if not isinstance(from_port, (InputPortView, OutputPortView, ScopedVariablePortView)) or \ not isinstance(to_port, (InputPortView, OutputPortView, ScopedVariablePortView)): logger.error("Data flows only exist between data ports (input, output, scope). Given: {0} and {1}".format(type( from_port), type(to_port))) return False responsible_parent_m = None # from parent to child if isinstance(from_state_m, ContainerStateModel) and \ check_if_dict_contains_object_reference_in_values(to_state_m.state, from_state_m.state.states): responsible_parent_m = from_state_m # from child to parent elif isinstance(to_state_m, ContainerStateModel) and \ check_if_dict_contains_object_reference_in_values(from_state_m.state, to_state_m.state.states): responsible_parent_m = to_state_m # from parent to parent elif isinstance(from_state_m, ContainerStateModel) and from_state_m.state is to_state_m.state: responsible_parent_m = from_state_m # == to_state_m # from child to child elif (not from_state_m.state.is_root_state) and (not to_state_m.state.is_root_state) \ and from_state_m.state is not to_state_m.state \ and from_state_m.parent.state.state_id and to_state_m.parent.state.state_id: responsible_parent_m = from_state_m.parent if not isinstance(responsible_parent_m, ContainerStateModel): logger.error("Data flows only exist in container states (e.g. hierarchy states)") return False try: responsible_parent_m.state.add_data_flow(from_state_id, from_port_id, to_state_id, to_port_id) return True except (ValueError, AttributeError, TypeError) as e: logger.error("Data flow couldn't be added: {0}".format(e)) return False
Interface method between Gaphas and RAFCON core for adding data flows The method checks the types of the given ports and their relation. From this the necessary parameters for the add_data_flow method of the RAFCON core are determined. Also the parent state is derived from the ports. :param from_port: Port from which the data flow starts :param to_port: Port to which the data flow goes to :return: True if a data flow was added, False if an error occurred
def _worker_handler(future, worker, pipe, timeout): """Worker lifecycle manager. Waits for the worker to perform its task, collects the result, runs the callback and cleans up the process. """ result = _get_result(future, pipe, timeout) if isinstance(result, BaseException): if isinstance(result, ProcessExpired): result.exitcode = worker.exitcode future.set_exception(result) else: future.set_result(result) if worker.is_alive(): stop_process(worker)
Worker lifecycle manager. Waits for the worker to perform its task, collects the result, runs the callback and cleans up the process.
def get_authorize_url(self, state=None): """ Gets the URL to use to authorize this app """ payload = {'client_id': self.client_id, 'response_type': 'code', 'redirect_uri': self.redirect_uri, 'scope': self.scope} urlparams = urllib.parse.urlencode(payload) return "%s?%s" % (self.OAUTH_AUTHORIZE_URL, urlparams)
Gets the URL to use to authorize this app
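The method above simply URL-encodes the standard OAuth2 authorization-code parameters; this standalone sketch shows the same idea outside the class, with a placeholder endpoint and made-up credentials.

import urllib.parse

OAUTH_AUTHORIZE_URL = 'https://example.com/oauth/authorize'  # placeholder endpoint

def get_authorize_url(client_id, redirect_uri, scope):
    payload = {'client_id': client_id, 'response_type': 'code',
               'redirect_uri': redirect_uri, 'scope': scope}
    return '%s?%s' % (OAUTH_AUTHORIZE_URL, urllib.parse.urlencode(payload))

print(get_authorize_url('my-client-id', 'https://localhost/callback', 'read'))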
def zones(self): """ :class:`list` of :class:`stravalib.model.ActivityZone` objects for this activity. """ if self._zones is None: self.assert_bind_client() self._zones = self.bind_client.get_activity_zones(self.id) return self._zones
:class:`list` of :class:`stravalib.model.ActivityZone` objects for this activity.
def _createFromLocal(self, data, schema): """ Create an RDD for DataFrame from a list or pandas.DataFrame, returns the RDD and schema. """ # make sure data could consumed multiple times if not isinstance(data, list): data = list(data) if schema is None or isinstance(schema, (list, tuple)): struct = self._inferSchemaFromList(data, names=schema) converter = _create_converter(struct) data = map(converter, data) if isinstance(schema, (list, tuple)): for i, name in enumerate(schema): struct.fields[i].name = name struct.names[i] = name schema = struct elif not isinstance(schema, StructType): raise TypeError("schema should be StructType or list or None, but got: %s" % schema) # convert python objects to sql data data = [schema.toInternal(row) for row in data] return self._sc.parallelize(data), schema
Create an RDD for DataFrame from a list or pandas.DataFrame, returns the RDD and schema.
def ReportConfiguration(self, file): """ :param file: Destination for report details :return: None """ global encodingpar print >> file, BuildReportLine("FAM FILE", self.fam_details) print >> file, BuildReportLine("IMPUTE_ARCHIVES", "%s:%s" % (str(self.chroms[0]), self.archives[0])) idx = 0 for arch in self.archives[1:]: print >> file, BuildReportLine("", "%s:%s" % (str(self.chroms[idx+1]), arch)) idx += 1 print >> file, BuildReportLine("ENCODING", ["Additive", "Dominant", "Recessive", "Genotype", "Raw"][encoding]) print >> file, BuildReportLine("INFO-EXT", Parser.info_ext) print >> file, BuildReportLine("INFO-THRESH", Parser.info_threshold)
:param file: Destination for report details :return: None
Below is the the instruction that describes the task: ### Input: :param file: Destination for report details :return: None ### Response: def ReportConfiguration(self, file): """ :param file: Destination for report details :return: None """ global encodingpar print >> file, BuildReportLine("FAM FILE", self.fam_details) print >> file, BuildReportLine("IMPUTE_ARCHIVES", "%s:%s" % (str(self.chroms[0]), self.archives[0])) idx = 0 for arch in self.archives[1:]: print >> file, BuildReportLine("", "%s:%s" % (str(self.chroms[idx+1]), arch)) idx += 1 print >> file, BuildReportLine("ENCODING", ["Additive", "Dominant", "Recessive", "Genotype", "Raw"][encoding]) print >> file, BuildReportLine("INFO-EXT", Parser.info_ext) print >> file, BuildReportLine("INFO-THRESH", Parser.info_threshold)
def get(self, key, default=None): ''' get - Gets an attribute by key with the chance to provide a default value @param key <str> - The key to query @param default <Anything> Default None - The value to return if key is not found @return - The value of attribute at #key, or #default if not present. ''' key = key.lower() if key == 'class': return self.tag.className if key in ('style', 'class') or key in self.keys(): return self[key] return default
get - Gets an attribute by key with the chance to provide a default value @param key <str> - The key to query @param default <Anything> Default None - The value to return if key is not found @return - The value of attribute at #key, or #default if not present.
Below is the the instruction that describes the task: ### Input: get - Gets an attribute by key with the chance to provide a default value @param key <str> - The key to query @param default <Anything> Default None - The value to return if key is not found @return - The value of attribute at #key, or #default if not present. ### Response: def get(self, key, default=None): ''' get - Gets an attribute by key with the chance to provide a default value @param key <str> - The key to query @param default <Anything> Default None - The value to return if key is not found @return - The value of attribute at #key, or #default if not present. ''' key = key.lower() if key == 'class': return self.tag.className if key in ('style', 'class') or key in self.keys(): return self[key] return default
def delete_reference_image( self, location, product_id, reference_image_id, project_id=None, retry=None, timeout=None, metadata=None, ): """ For the documentation see: :py:class:`~airflow.contrib.operators.gcp_vision_operator.CloudVisionReferenceImageCreateOperator` """ client = self.get_conn() self.log.info('Deleting ReferenceImage') name = ProductSearchClient.reference_image_path( project=project_id, location=location, product=product_id, reference_image=reference_image_id ) response = client.delete_reference_image(name=name, retry=retry, timeout=timeout, metadata=metadata) self.log.info('ReferenceImage with the name [%s] deleted.', name) return MessageToDict(response)
For the documentation see: :py:class:`~airflow.contrib.operators.gcp_vision_operator.CloudVisionReferenceImageCreateOperator`
Below is the the instruction that describes the task: ### Input: For the documentation see: :py:class:`~airflow.contrib.operators.gcp_vision_operator.CloudVisionReferenceImageCreateOperator` ### Response: def delete_reference_image( self, location, product_id, reference_image_id, project_id=None, retry=None, timeout=None, metadata=None, ): """ For the documentation see: :py:class:`~airflow.contrib.operators.gcp_vision_operator.CloudVisionReferenceImageCreateOperator` """ client = self.get_conn() self.log.info('Deleting ReferenceImage') name = ProductSearchClient.reference_image_path( project=project_id, location=location, product=product_id, reference_image=reference_image_id ) response = client.delete_reference_image(name=name, retry=retry, timeout=timeout, metadata=metadata) self.log.info('ReferenceImage with the name [%s] deleted.', name) return MessageToDict(response)
def percolate_declares(program: Program) -> Program:
    """
    Move all the DECLARE statements to the top of the program. Return a fresh object.

    :param program: Perhaps jumbled program.
    :return: Program with DECLAREs all at the top and otherwise the same sorted contents.
    """
    declare_program = Program()
    instrs_program = Program()

    for instr in program:
        if isinstance(instr, Declare):
            declare_program += instr
        else:
            instrs_program += instr

    p = declare_program + instrs_program
    p._defined_gates = program._defined_gates
    return p
Move all the DECLARE statements to the top of the program. Return a fresh object.

:param program: Perhaps jumbled program.
:return: Program with DECLAREs all at the top and otherwise the same sorted contents.
Below is the the instruction that describes the task: ### Input: Move all the DECLARE statements to the top of the program. Return a fresh obejct. :param program: Perhaps jumbled program. :return: Program with DECLAREs all at the top and otherwise the same sorted contents. ### Response: def percolate_declares(program: Program) -> Program: """ Move all the DECLARE statements to the top of the program. Return a fresh obejct. :param program: Perhaps jumbled program. :return: Program with DECLAREs all at the top and otherwise the same sorted contents. """ declare_program = Program() instrs_program = Program() for instr in program: if isinstance(instr, Declare): declare_program += instr else: instrs_program += instr p = declare_program + instrs_program p._defined_gates = program._defined_gates return p
def _rolling_window(a, window, axis=-1): """ Make an ndarray with a rolling window along axis. Parameters ---------- a : array_like Array to add rolling window to axis: int axis position along which rolling window will be applied. window : int Size of rolling window Returns ------- Array that is a view of the original array with a added dimension of size w. Examples -------- >>> x=np.arange(10).reshape((2,5)) >>> np.rolling_window(x, 3, axis=-1) array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]], [[5, 6, 7], [6, 7, 8], [7, 8, 9]]]) Calculate rolling mean of last dimension: >>> np.mean(np.rolling_window(x, 3, axis=-1), -1) array([[ 1., 2., 3.], [ 6., 7., 8.]]) This function is taken from https://github.com/numpy/numpy/pull/31 but slightly modified to accept axis option. """ axis = _validate_axis(a, axis) a = np.swapaxes(a, axis, -1) if window < 1: raise ValueError( "`window` must be at least 1. Given : {}".format(window)) if window > a.shape[-1]: raise ValueError("`window` is too long. Given : {}".format(window)) shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) rolling = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides, writeable=False) return np.swapaxes(rolling, -2, axis)
Make an ndarray with a rolling window along axis. Parameters ---------- a : array_like Array to add rolling window to axis: int axis position along which rolling window will be applied. window : int Size of rolling window Returns ------- Array that is a view of the original array with a added dimension of size w. Examples -------- >>> x=np.arange(10).reshape((2,5)) >>> np.rolling_window(x, 3, axis=-1) array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]], [[5, 6, 7], [6, 7, 8], [7, 8, 9]]]) Calculate rolling mean of last dimension: >>> np.mean(np.rolling_window(x, 3, axis=-1), -1) array([[ 1., 2., 3.], [ 6., 7., 8.]]) This function is taken from https://github.com/numpy/numpy/pull/31 but slightly modified to accept axis option.
Below is the the instruction that describes the task: ### Input: Make an ndarray with a rolling window along axis. Parameters ---------- a : array_like Array to add rolling window to axis: int axis position along which rolling window will be applied. window : int Size of rolling window Returns ------- Array that is a view of the original array with a added dimension of size w. Examples -------- >>> x=np.arange(10).reshape((2,5)) >>> np.rolling_window(x, 3, axis=-1) array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]], [[5, 6, 7], [6, 7, 8], [7, 8, 9]]]) Calculate rolling mean of last dimension: >>> np.mean(np.rolling_window(x, 3, axis=-1), -1) array([[ 1., 2., 3.], [ 6., 7., 8.]]) This function is taken from https://github.com/numpy/numpy/pull/31 but slightly modified to accept axis option. ### Response: def _rolling_window(a, window, axis=-1): """ Make an ndarray with a rolling window along axis. Parameters ---------- a : array_like Array to add rolling window to axis: int axis position along which rolling window will be applied. window : int Size of rolling window Returns ------- Array that is a view of the original array with a added dimension of size w. Examples -------- >>> x=np.arange(10).reshape((2,5)) >>> np.rolling_window(x, 3, axis=-1) array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]], [[5, 6, 7], [6, 7, 8], [7, 8, 9]]]) Calculate rolling mean of last dimension: >>> np.mean(np.rolling_window(x, 3, axis=-1), -1) array([[ 1., 2., 3.], [ 6., 7., 8.]]) This function is taken from https://github.com/numpy/numpy/pull/31 but slightly modified to accept axis option. """ axis = _validate_axis(a, axis) a = np.swapaxes(a, axis, -1) if window < 1: raise ValueError( "`window` must be at least 1. Given : {}".format(window)) if window > a.shape[-1]: raise ValueError("`window` is too long. Given : {}".format(window)) shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) rolling = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides, writeable=False) return np.swapaxes(rolling, -2, axis)
def intersect(self, range_): self.solver.intersection_broad_tests_count += 1 """Remove variants whose version fall outside of the given range.""" if range_.is_any(): return self if self.solver.optimised: if range_ in self.been_intersected_with: return self if self.pr: self.pr.passive("intersecting %s wrt range '%s'...", self, range_) self.solver.intersection_tests_count += 1 with self.solver.timed(self.solver.intersection_time): # this is faster than iter_intersecting :( entries = [x for x in self.entries if x.version in range_] if not entries: return None elif len(entries) < len(self.entries): copy_ = self._copy(entries) copy_.been_intersected_with.add(range_) return copy_ else: self.been_intersected_with.add(range_) return self
Remove variants whose version fall outside of the given range.
Below is the the instruction that describes the task: ### Input: Remove variants whose version fall outside of the given range. ### Response: def intersect(self, range_): self.solver.intersection_broad_tests_count += 1 """Remove variants whose version fall outside of the given range.""" if range_.is_any(): return self if self.solver.optimised: if range_ in self.been_intersected_with: return self if self.pr: self.pr.passive("intersecting %s wrt range '%s'...", self, range_) self.solver.intersection_tests_count += 1 with self.solver.timed(self.solver.intersection_time): # this is faster than iter_intersecting :( entries = [x for x in self.entries if x.version in range_] if not entries: return None elif len(entries) < len(self.entries): copy_ = self._copy(entries) copy_.been_intersected_with.add(range_) return copy_ else: self.been_intersected_with.add(range_) return self
def insert(self, context): """ Add Vagrant box to the calling user. :param resort.engine.execution.Context context: Current execution context. """ self.write([ "box", "add", "--name", context.resolve(self.__name), self.__path(context) ])
Add Vagrant box to the calling user. :param resort.engine.execution.Context context: Current execution context.
Below is the the instruction that describes the task: ### Input: Add Vagrant box to the calling user. :param resort.engine.execution.Context context: Current execution context. ### Response: def insert(self, context): """ Add Vagrant box to the calling user. :param resort.engine.execution.Context context: Current execution context. """ self.write([ "box", "add", "--name", context.resolve(self.__name), self.__path(context) ])
def show_channel(self, channel, owner): '''List the channels for owner If owner is none, the currently logged in user is used ''' url = '%s/channels/%s/%s' % (self.domain, owner, channel) res = self.session.get(url) self._check_response(res, [200]) return res.json()
List the channels for owner If owner is none, the currently logged in user is used
Below is the the instruction that describes the task: ### Input: List the channels for owner If owner is none, the currently logged in user is used ### Response: def show_channel(self, channel, owner): '''List the channels for owner If owner is none, the currently logged in user is used ''' url = '%s/channels/%s/%s' % (self.domain, owner, channel) res = self.session.get(url) self._check_response(res, [200]) return res.json()
def hparams_to_batching_scheme(hparams, drop_long_sequences=False, shard_multiplier=1, length_multiplier=1): """Wrapper around _batching_scheme with hparams.""" return batching_scheme( batch_size=hparams.batch_size, min_length=hparams.min_length, max_length=hparams.max_length, min_length_bucket=hparams.min_length_bucket, length_bucket_step=hparams.length_bucket_step, drop_long_sequences=drop_long_sequences, shard_multiplier=shard_multiplier, length_multiplier=length_multiplier)
Wrapper around _batching_scheme with hparams.
Below is the the instruction that describes the task: ### Input: Wrapper around _batching_scheme with hparams. ### Response: def hparams_to_batching_scheme(hparams, drop_long_sequences=False, shard_multiplier=1, length_multiplier=1): """Wrapper around _batching_scheme with hparams.""" return batching_scheme( batch_size=hparams.batch_size, min_length=hparams.min_length, max_length=hparams.max_length, min_length_bucket=hparams.min_length_bucket, length_bucket_step=hparams.length_bucket_step, drop_long_sequences=drop_long_sequences, shard_multiplier=shard_multiplier, length_multiplier=length_multiplier)
def delayed_unpacking(self, container, fun, *args, **kwargs): """Should be used when unpacking mutable values. This allows circular references resolution by pausing serialization.""" try: self._delayed += 1 blob = self._begin() try: fun(*args, **kwargs) self._commit(blob) return container except DelayPacking: self._rollback(blob) continuation = (fun, args, kwargs) self._pending.append(continuation) return container finally: self._delayed -= 1
Should be used when unpacking mutable values. This allows circular references resolution by pausing serialization.
Below is the the instruction that describes the task: ### Input: Should be used when unpacking mutable values. This allows circular references resolution by pausing serialization. ### Response: def delayed_unpacking(self, container, fun, *args, **kwargs): """Should be used when unpacking mutable values. This allows circular references resolution by pausing serialization.""" try: self._delayed += 1 blob = self._begin() try: fun(*args, **kwargs) self._commit(blob) return container except DelayPacking: self._rollback(blob) continuation = (fun, args, kwargs) self._pending.append(continuation) return container finally: self._delayed -= 1
def _compute_initial_out_degree(self): """The number of operations which use each tensor as input. Returns: a {string, int} mapping tensor name to the number of operations which use it as input, or one plus that quantity if the tensor is final. """ out_degree = collections.defaultdict(int) # Pretend that final tensors have an additional degree so they are not # freed. for tensor_name in self.get_all_tensor_names(): if self.is_tensor_final(tensor_name): out_degree[tensor_name] = 1 for operation_name in self.get_all_operation_names(): for input_name in self.get_operation_input_names(operation_name): out_degree[input_name] += 1 return out_degree
The number of operations which use each tensor as input. Returns: a {string, int} mapping tensor name to the number of operations which use it as input, or one plus that quantity if the tensor is final.
Below is the the instruction that describes the task: ### Input: The number of operations which use each tensor as input. Returns: a {string, int} mapping tensor name to the number of operations which use it as input, or one plus that quantity if the tensor is final. ### Response: def _compute_initial_out_degree(self): """The number of operations which use each tensor as input. Returns: a {string, int} mapping tensor name to the number of operations which use it as input, or one plus that quantity if the tensor is final. """ out_degree = collections.defaultdict(int) # Pretend that final tensors have an additional degree so they are not # freed. for tensor_name in self.get_all_tensor_names(): if self.is_tensor_final(tensor_name): out_degree[tensor_name] = 1 for operation_name in self.get_all_operation_names(): for input_name in self.get_operation_input_names(operation_name): out_degree[input_name] += 1 return out_degree
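The counting idea above can be seen in isolation with a toy operation-to-inputs mapping; the graph below is invented for illustration and does not come from the original class.

import collections

operation_inputs = {          # toy graph: each operation lists the tensors it reads
    "matmul": ["x", "w"],
    "add": ["matmul_out", "b"],
}
final_tensors = {"add_out"}   # pretend these are outputs that must never be freed

out_degree = collections.defaultdict(int)
for tensor in final_tensors:
    out_degree[tensor] = 1    # extra degree so final tensors are not freed
for op, inputs in operation_inputs.items():
    for name in inputs:
        out_degree[name] += 1

print(dict(out_degree))
# {'add_out': 1, 'x': 1, 'w': 1, 'matmul_out': 1, 'b': 1}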
def copy(self): ''' :return: a copy of the container ''' dup = super(Container, self).copy() dup._fields = [field.copy() for field in self._fields] dup._fields_dict = {field.get_name(): field for field in dup._fields if field.get_name() is not None} dup._containers = [] for container in self._containers: idx = self._fields.index(container) dup._containers.append(dup._fields[idx]) for field in dup._fields: field.enclosing = dup return dup
:return: a copy of the container
Below is the the instruction that describes the task: ### Input: :return: a copy of the container ### Response: def copy(self): ''' :return: a copy of the container ''' dup = super(Container, self).copy() dup._fields = [field.copy() for field in self._fields] dup._fields_dict = {field.get_name(): field for field in dup._fields if field.get_name() is not None} dup._containers = [] for container in self._containers: idx = self._fields.index(container) dup._containers.append(dup._fields[idx]) for field in dup._fields: field.enclosing = dup return dup
def decrypt(self, message): """ Decrypt a PGPMessage using this key. :param message: An encrypted :py:obj:`PGPMessage` :raises: :py:exc:`~errors.PGPError` if the key is not private, or protected but not unlocked. :raises: :py:exc:`~errors.PGPDecryptionError` if decryption fails for any other reason. :returns: A new :py:obj:`PGPMessage` with the decrypted contents of ``message``. """ if not message.is_encrypted: warnings.warn("This message is not encrypted", stacklevel=3) return message if self.fingerprint.keyid not in message.encrypters: sks = set(self.subkeys) mis = set(message.encrypters) if sks & mis: skid = list(sks & mis)[0] warnings.warn("Message was encrypted with this key's subkey: {:s}. " "Decrypting with that...".format(skid), stacklevel=2) return self.subkeys[skid].decrypt(message) raise PGPError("Cannot decrypt the provided message with this key") pkesk = next(pk for pk in message._sessionkeys if pk.pkalg == self.key_algorithm and pk.encrypter == self.fingerprint.keyid) alg, key = pkesk.decrypt_sk(self._key) # now that we have the symmetric cipher used and the key, we can decrypt the actual message decmsg = PGPMessage() decmsg.parse(message.message.decrypt(key, alg)) return decmsg
Decrypt a PGPMessage using this key. :param message: An encrypted :py:obj:`PGPMessage` :raises: :py:exc:`~errors.PGPError` if the key is not private, or protected but not unlocked. :raises: :py:exc:`~errors.PGPDecryptionError` if decryption fails for any other reason. :returns: A new :py:obj:`PGPMessage` with the decrypted contents of ``message``.
Below is the the instruction that describes the task: ### Input: Decrypt a PGPMessage using this key. :param message: An encrypted :py:obj:`PGPMessage` :raises: :py:exc:`~errors.PGPError` if the key is not private, or protected but not unlocked. :raises: :py:exc:`~errors.PGPDecryptionError` if decryption fails for any other reason. :returns: A new :py:obj:`PGPMessage` with the decrypted contents of ``message``. ### Response: def decrypt(self, message): """ Decrypt a PGPMessage using this key. :param message: An encrypted :py:obj:`PGPMessage` :raises: :py:exc:`~errors.PGPError` if the key is not private, or protected but not unlocked. :raises: :py:exc:`~errors.PGPDecryptionError` if decryption fails for any other reason. :returns: A new :py:obj:`PGPMessage` with the decrypted contents of ``message``. """ if not message.is_encrypted: warnings.warn("This message is not encrypted", stacklevel=3) return message if self.fingerprint.keyid not in message.encrypters: sks = set(self.subkeys) mis = set(message.encrypters) if sks & mis: skid = list(sks & mis)[0] warnings.warn("Message was encrypted with this key's subkey: {:s}. " "Decrypting with that...".format(skid), stacklevel=2) return self.subkeys[skid].decrypt(message) raise PGPError("Cannot decrypt the provided message with this key") pkesk = next(pk for pk in message._sessionkeys if pk.pkalg == self.key_algorithm and pk.encrypter == self.fingerprint.keyid) alg, key = pkesk.decrypt_sk(self._key) # now that we have the symmetric cipher used and the key, we can decrypt the actual message decmsg = PGPMessage() decmsg.parse(message.message.decrypt(key, alg)) return decmsg
def document_delete(index, doc_type, id, hosts=None, profile=None): ''' Delete a document from an index index Index name where the document resides doc_type Type of the document id Document identifier CLI example:: salt myminion elasticsearch.document_delete testindex doctype1 AUx-384m0Bug_8U80wQZ ''' es = _get_instance(hosts, profile) try: return es.delete(index=index, doc_type=doc_type, id=id) except elasticsearch.exceptions.NotFoundError: return None except elasticsearch.TransportError as e: raise CommandExecutionError("Cannot delete document {0} in index {1}, server returned code {2} with message {3}".format(id, index, e.status_code, e.error))
Delete a document from an index index Index name where the document resides doc_type Type of the document id Document identifier CLI example:: salt myminion elasticsearch.document_delete testindex doctype1 AUx-384m0Bug_8U80wQZ
Below is the the instruction that describes the task: ### Input: Delete a document from an index index Index name where the document resides doc_type Type of the document id Document identifier CLI example:: salt myminion elasticsearch.document_delete testindex doctype1 AUx-384m0Bug_8U80wQZ ### Response: def document_delete(index, doc_type, id, hosts=None, profile=None): ''' Delete a document from an index index Index name where the document resides doc_type Type of the document id Document identifier CLI example:: salt myminion elasticsearch.document_delete testindex doctype1 AUx-384m0Bug_8U80wQZ ''' es = _get_instance(hosts, profile) try: return es.delete(index=index, doc_type=doc_type, id=id) except elasticsearch.exceptions.NotFoundError: return None except elasticsearch.TransportError as e: raise CommandExecutionError("Cannot delete document {0} in index {1}, server returned code {2} with message {3}".format(id, index, e.status_code, e.error))
def likelihood2(args): """ %prog likelihood2 100_20.json Plot the likelihood surface and marginal distributions. """ from matplotlib import gridspec p = OptionParser(likelihood2.__doc__) opts, args, iopts = p.set_image_options(args, figsize="10x5", style="white", cmap="coolwarm") if len(args) != 1: sys.exit(not p.print_help()) jsonfile, = args fig = plt.figure(figsize=(iopts.w, iopts.h)) gs = gridspec.GridSpec(2, 2) ax1 = fig.add_subplot(gs[:, 0]) ax2 = fig.add_subplot(gs[0, 1]) ax3 = fig.add_subplot(gs[1, 1]) plt.tight_layout(pad=3) pf = plot_panel(jsonfile, ax1, ax2, ax3, opts.cmap) root = fig.add_axes([0, 0, 1, 1]) normalize_axes(root) image_name = "likelihood2.{}.".format(pf) + iopts.format savefig(image_name, dpi=iopts.dpi, iopts=iopts)
%prog likelihood2 100_20.json Plot the likelihood surface and marginal distributions.
Below is the the instruction that describes the task: ### Input: %prog likelihood2 100_20.json Plot the likelihood surface and marginal distributions. ### Response: def likelihood2(args): """ %prog likelihood2 100_20.json Plot the likelihood surface and marginal distributions. """ from matplotlib import gridspec p = OptionParser(likelihood2.__doc__) opts, args, iopts = p.set_image_options(args, figsize="10x5", style="white", cmap="coolwarm") if len(args) != 1: sys.exit(not p.print_help()) jsonfile, = args fig = plt.figure(figsize=(iopts.w, iopts.h)) gs = gridspec.GridSpec(2, 2) ax1 = fig.add_subplot(gs[:, 0]) ax2 = fig.add_subplot(gs[0, 1]) ax3 = fig.add_subplot(gs[1, 1]) plt.tight_layout(pad=3) pf = plot_panel(jsonfile, ax1, ax2, ax3, opts.cmap) root = fig.add_axes([0, 0, 1, 1]) normalize_axes(root) image_name = "likelihood2.{}.".format(pf) + iopts.format savefig(image_name, dpi=iopts.dpi, iopts=iopts)
def settings(cls): """ Find the settings for the current class inside the platforms configuration. """ from bernard.platforms.management import get_platform_settings for platform in get_platform_settings(): candidate = import_class(platform['class']) if candidate == cls: return platform.get('settings', {})
Find the settings for the current class inside the platforms configuration.
Below is the the instruction that describes the task: ### Input: Find the settings for the current class inside the platforms configuration. ### Response: def settings(cls): """ Find the settings for the current class inside the platforms configuration. """ from bernard.platforms.management import get_platform_settings for platform in get_platform_settings(): candidate = import_class(platform['class']) if candidate == cls: return platform.get('settings', {})
def block_widths(self): """Gets the widths of the blocks. Note: This works with the property structure `_widths_cache` to avoid having to recompute these values each time they are needed. """ if self._widths_cache is None: # The first column will have the correct lengths. We have an # invariant that requires that all blocks be the same width in a # column of blocks. self._widths_cache = ( [obj.width() for obj in self._partitions_cache[0]] if len(self._partitions_cache) > 0 else [] ) return self._widths_cache
Gets the widths of the blocks. Note: This works with the property structure `_widths_cache` to avoid having to recompute these values each time they are needed.
Below is the the instruction that describes the task: ### Input: Gets the widths of the blocks. Note: This works with the property structure `_widths_cache` to avoid having to recompute these values each time they are needed. ### Response: def block_widths(self): """Gets the widths of the blocks. Note: This works with the property structure `_widths_cache` to avoid having to recompute these values each time they are needed. """ if self._widths_cache is None: # The first column will have the correct lengths. We have an # invariant that requires that all blocks be the same width in a # column of blocks. self._widths_cache = ( [obj.width() for obj in self._partitions_cache[0]] if len(self._partitions_cache) > 0 else [] ) return self._widths_cache
def patch(self, item, byte_order=BYTEORDER): """ Returns a memory :class:`Patch` for the given *item* that shall be patched in the `data source`. :param item: item to patch. :param byte_order: encoding :class:`Byteorder` for the item. :type byte_order: :class:`Byteorder`, :class:`str` """ # Re-index the data object self.index_data() if is_container(item): length = item.container_size() if length[1] is not 0: # Incomplete container raise ContainerLengthError(item, length) field = item.first_field() if field is None: # Empty container? return None index = field.index if index.bit is not 0: # Bad placed container raise FieldIndexError(field, index) # Create a dummy byte array filled with zero bytes. # The dummy byte array is necessary because the length of # the buffer must correlate to the field indexes of the # appending fields. buffer = bytearray(b'\x00' * index.byte) # Append to the buffer the content mapped by the container fields item.serialize(buffer, index, byte_order=byte_order) # Content of the buffer mapped by the container fields content = buffer[index.byte:] if len(content) != length[0]: # Not correct filled buffer! raise BufferError(len(content), length[0]) return Patch(content, index.address, byte_order, length[0] * 8, 0, False) elif is_field(item): # Field index index = item.index # Field alignment alignment = item.alignment if index.bit != alignment.bit_offset: # Bad aligned field? raise FieldGroupOffsetError( item, index, Alignment(alignment.byte_size, index.bit)) # Create a dummy byte array filled with zero bytes. # The dummy byte array is necessary because the length of # the buffer must correlate to the field index of the # appending field group. buffer = bytearray(b'\x00' * index.byte) # Append to the buffer the content mapped by the field item.serialize(buffer, index, byte_order=byte_order) # Content of the buffer mapped by the field group content = buffer[index.byte:] if len(content) != alignment.byte_size: # Not correct filled buffer! raise BufferError(len(content), alignment.byte_size) # Patch size in bytes for the field in the content buffer patch_size, bit_offset = divmod(item.bit_size, 8) if bit_offset is not 0: inject = True patch_size += 1 else: inject = False # Patch offset in bytes for the field in the content buffer patch_offset, bit_offset = divmod(alignment.bit_offset, 8) if bit_offset is not 0: inject = True if byte_order is Byteorder.big: start = alignment.byte_size - (patch_offset + patch_size) stop = alignment.byte_size - patch_offset else: start = patch_offset stop = patch_offset + patch_size return Patch(content[start:stop], index.address + start, byte_order, item.bit_size, bit_offset, inject) else: raise MemberTypeError(self, item)
Returns a memory :class:`Patch` for the given *item* that shall be patched in the `data source`. :param item: item to patch. :param byte_order: encoding :class:`Byteorder` for the item. :type byte_order: :class:`Byteorder`, :class:`str`
Below is the the instruction that describes the task: ### Input: Returns a memory :class:`Patch` for the given *item* that shall be patched in the `data source`. :param item: item to patch. :param byte_order: encoding :class:`Byteorder` for the item. :type byte_order: :class:`Byteorder`, :class:`str` ### Response: def patch(self, item, byte_order=BYTEORDER): """ Returns a memory :class:`Patch` for the given *item* that shall be patched in the `data source`. :param item: item to patch. :param byte_order: encoding :class:`Byteorder` for the item. :type byte_order: :class:`Byteorder`, :class:`str` """ # Re-index the data object self.index_data() if is_container(item): length = item.container_size() if length[1] is not 0: # Incomplete container raise ContainerLengthError(item, length) field = item.first_field() if field is None: # Empty container? return None index = field.index if index.bit is not 0: # Bad placed container raise FieldIndexError(field, index) # Create a dummy byte array filled with zero bytes. # The dummy byte array is necessary because the length of # the buffer must correlate to the field indexes of the # appending fields. buffer = bytearray(b'\x00' * index.byte) # Append to the buffer the content mapped by the container fields item.serialize(buffer, index, byte_order=byte_order) # Content of the buffer mapped by the container fields content = buffer[index.byte:] if len(content) != length[0]: # Not correct filled buffer! raise BufferError(len(content), length[0]) return Patch(content, index.address, byte_order, length[0] * 8, 0, False) elif is_field(item): # Field index index = item.index # Field alignment alignment = item.alignment if index.bit != alignment.bit_offset: # Bad aligned field? raise FieldGroupOffsetError( item, index, Alignment(alignment.byte_size, index.bit)) # Create a dummy byte array filled with zero bytes. # The dummy byte array is necessary because the length of # the buffer must correlate to the field index of the # appending field group. buffer = bytearray(b'\x00' * index.byte) # Append to the buffer the content mapped by the field item.serialize(buffer, index, byte_order=byte_order) # Content of the buffer mapped by the field group content = buffer[index.byte:] if len(content) != alignment.byte_size: # Not correct filled buffer! raise BufferError(len(content), alignment.byte_size) # Patch size in bytes for the field in the content buffer patch_size, bit_offset = divmod(item.bit_size, 8) if bit_offset is not 0: inject = True patch_size += 1 else: inject = False # Patch offset in bytes for the field in the content buffer patch_offset, bit_offset = divmod(alignment.bit_offset, 8) if bit_offset is not 0: inject = True if byte_order is Byteorder.big: start = alignment.byte_size - (patch_offset + patch_size) stop = alignment.byte_size - patch_offset else: start = patch_offset stop = patch_offset + patch_size return Patch(content[start:stop], index.address + start, byte_order, item.bit_size, bit_offset, inject) else: raise MemberTypeError(self, item)
def mset(self, *args, **kwargs): """ Sets key/values based on a mapping. Mapping can be supplied as a single dictionary argument or as kwargs. """ mapping = kwargs if args: if len(args) != 1 or not isinstance(args[0], dict): raise RedisError('MSET requires **kwargs or a single dict arg') mapping.update(args[0]) if len(mapping) == 0: raise ResponseError("wrong number of arguments for 'mset' command") for key, value in mapping.items(): self.set(key, value) return True
Sets key/values based on a mapping. Mapping can be supplied as a single dictionary argument or as kwargs.
Below is the the instruction that describes the task: ### Input: Sets key/values based on a mapping. Mapping can be supplied as a single dictionary argument or as kwargs. ### Response: def mset(self, *args, **kwargs): """ Sets key/values based on a mapping. Mapping can be supplied as a single dictionary argument or as kwargs. """ mapping = kwargs if args: if len(args) != 1 or not isinstance(args[0], dict): raise RedisError('MSET requires **kwargs or a single dict arg') mapping.update(args[0]) if len(mapping) == 0: raise ResponseError("wrong number of arguments for 'mset' command") for key, value in mapping.items(): self.set(key, value) return True
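The argument normalization is the non-obvious part of mset; a standalone sketch of just that logic follows, with plain TypeError/ValueError standing in for the Redis-specific exceptions used above.

def normalize_mset_args(*args, **kwargs):
    """Mirror of the argument handling above: accept either a single dict or kwargs."""
    mapping = kwargs
    if args:
        if len(args) != 1 or not isinstance(args[0], dict):
            raise TypeError('MSET requires **kwargs or a single dict arg')
        mapping.update(args[0])
    if len(mapping) == 0:
        raise ValueError("wrong number of arguments for 'mset' command")
    return mapping

print(normalize_mset_args({'a': 1}, b=2))   # {'b': 2, 'a': 1}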
def qdict_get_list(qdict, k):
    """
    get list from QueryDict and remove blank data from list.
    """
    pks = qdict.getlist(k)
    return [e for e in pks if e]
get list from QueryDict and remove blank data from list.
Below is the the instruction that describes the task: ### Input: get list from QueryDict and remove blank date from list. ### Response: def qdict_get_list(qdict, k): """ get list from QueryDict and remove blank date from list. """ pks = qdict.getlist(k) return [e for e in pks if e]
def collect_params(self, select=None):
    """Returns a :py:class:`ParameterDict` containing this :py:class:`Block`'s and all of its
    children's Parameters (default); it can also return a selected :py:class:`ParameterDict`
    that matches some given regular expressions.

    For example, collect the specified parameters in ['conv1_weight', 'conv1_bias', 'fc_weight',
    'fc_bias']::

        model.collect_params('conv1_weight|conv1_bias|fc_weight|fc_bias')

    or collect all parameters whose names end with 'weight' or 'bias', this can be done
    using regular expressions::

        model.collect_params('.*weight|.*bias')

    Parameters
    ----------
    select : str
        regular expressions

    Returns
    -------
    The selected :py:class:`ParameterDict`
    """
    # We need to check here because blocks inside containers are not supported.
    self._check_container_with_block()
    ret = ParameterDict(self._params.prefix)
    if not select:
        ret.update(self.params)
    else:
        pattern = re.compile(select)
        ret.update({name:value for name, value in self.params.items() if pattern.match(name)})
    for cld in self._children.values():
        ret.update(cld.collect_params(select=select))
    return ret
Returns a :py:class:`ParameterDict` containing this :py:class:`Block`'s and all of its
children's Parameters (default); it can also return a selected :py:class:`ParameterDict`
that matches some given regular expressions.

For example, collect the specified parameters in ['conv1_weight', 'conv1_bias', 'fc_weight',
'fc_bias']::

    model.collect_params('conv1_weight|conv1_bias|fc_weight|fc_bias')

or collect all parameters whose names end with 'weight' or 'bias', this can be done
using regular expressions::

    model.collect_params('.*weight|.*bias')

Parameters
----------
select : str
    regular expressions

Returns
-------
The selected :py:class:`ParameterDict`
Below is the the instruction that describes the task: ### Input: Returns a :py:class:`ParameterDict` containing this :py:class:`Block` and all of its children's Parameters(default), also can returns the select :py:class:`ParameterDict` which match some given regular expressions. For example, collect the specified parameters in ['conv1_weight', 'conv1_bias', 'fc_weight', 'fc_bias']:: model.collect_params('conv1_weight|conv1_bias|fc_weight|fc_bias') or collect all parameters whose names end with 'weight' or 'bias', this can be done using regular expressions:: model.collect_params('.*weight|.*bias') Parameters ---------- select : str regular expressions Returns ------- The selected :py:class:`ParameterDict` ### Response: def collect_params(self, select=None): """Returns a :py:class:`ParameterDict` containing this :py:class:`Block` and all of its children's Parameters(default), also can returns the select :py:class:`ParameterDict` which match some given regular expressions. For example, collect the specified parameters in ['conv1_weight', 'conv1_bias', 'fc_weight', 'fc_bias']:: model.collect_params('conv1_weight|conv1_bias|fc_weight|fc_bias') or collect all parameters whose names end with 'weight' or 'bias', this can be done using regular expressions:: model.collect_params('.*weight|.*bias') Parameters ---------- select : str regular expressions Returns ------- The selected :py:class:`ParameterDict` """ # We need to check here because blocks inside containers are not supported. self._check_container_with_block() ret = ParameterDict(self._params.prefix) if not select: ret.update(self.params) else: pattern = re.compile(select) ret.update({name:value for name, value in self.params.items() if pattern.match(name)}) for cld in self._children.values(): ret.update(cld.collect_params(select=select)) return ret
def return_page(page): """Return a rendered template.""" try: hit_id = request.args['hit_id'] assignment_id = request.args['assignment_id'] worker_id = request.args['worker_id'] mode = request.args['mode'] return render_template( page, hit_id=hit_id, assignment_id=assignment_id, worker_id=worker_id, mode=mode ) except: try: participant_id = request.args['participant_id'] return render_template(page, participant_id=participant_id) except: return error_response(error_type="{} args missing".format(page))
Return a rendered template.
Below is the the instruction that describes the task: ### Input: Return a rendered template. ### Response: def return_page(page): """Return a rendered template.""" try: hit_id = request.args['hit_id'] assignment_id = request.args['assignment_id'] worker_id = request.args['worker_id'] mode = request.args['mode'] return render_template( page, hit_id=hit_id, assignment_id=assignment_id, worker_id=worker_id, mode=mode ) except: try: participant_id = request.args['participant_id'] return render_template(page, participant_id=participant_id) except: return error_response(error_type="{} args missing".format(page))
async def _async_register(self): # pragma: no cover """ Register the agent in the XMPP server from a coroutine. """ metadata = aioxmpp.make_security_layer(None, no_verify=not self.verify_security) query = ibr.Query(self.jid.localpart, self.password) _, stream, features = await aioxmpp.node.connect_xmlstream(self.jid, metadata, loop=self.loop) await ibr.register(stream, query)
Register the agent in the XMPP server from a coroutine.
Below is the the instruction that describes the task: ### Input: Register the agent in the XMPP server from a coroutine. ### Response: async def _async_register(self): # pragma: no cover """ Register the agent in the XMPP server from a coroutine. """ metadata = aioxmpp.make_security_layer(None, no_verify=not self.verify_security) query = ibr.Query(self.jid.localpart, self.password) _, stream, features = await aioxmpp.node.connect_xmlstream(self.jid, metadata, loop=self.loop) await ibr.register(stream, query)
def import_rsa_key(pem_data): """ Extract an RSA key from a PEM-encoded X.509 certificate :param pem_data: RSA key encoded in standard form :return: rsa.RSAPublicKey instance """ if not pem_data.startswith(PREFIX): pem_data = bytes('{}\n{}\n{}'.format(PREFIX, pem_data, POSTFIX), 'utf-8') else: pem_data = bytes(pem_data, 'utf-8') cert = x509.load_pem_x509_certificate(pem_data, default_backend()) return cert.public_key()
Extract an RSA key from a PEM-encoded X.509 certificate :param pem_data: RSA key encoded in standard form :return: rsa.RSAPublicKey instance
Below is the the instruction that describes the task: ### Input: Extract an RSA key from a PEM-encoded X.509 certificate :param pem_data: RSA key encoded in standard form :return: rsa.RSAPublicKey instance ### Response: def import_rsa_key(pem_data): """ Extract an RSA key from a PEM-encoded X.509 certificate :param pem_data: RSA key encoded in standard form :return: rsa.RSAPublicKey instance """ if not pem_data.startswith(PREFIX): pem_data = bytes('{}\n{}\n{}'.format(PREFIX, pem_data, POSTFIX), 'utf-8') else: pem_data = bytes(pem_data, 'utf-8') cert = x509.load_pem_x509_certificate(pem_data, default_backend()) return cert.public_key()
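One illustrative way to exercise the same extraction path is to build a throwaway self-signed certificate with the cryptography package; this usage sketch is not part of the original module, it only shows the load_pem_x509_certificate(...).public_key() round trip that import_rsa_key relies on.

import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                               backend=default_backend())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.test")])
cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
        .sign(key, hashes.SHA256(), default_backend()))

pem_text = cert.public_bytes(serialization.Encoding.PEM).decode()
public_key = x509.load_pem_x509_certificate(pem_text.encode(), default_backend()).public_key()
print(type(public_key))   # an RSAPublicKey instance, as import_rsa_key() would return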
def find_connection(self): '''find an antenna tracker connection if possible''' if self.connection is not None: return self.connection for m in self.mpstate.mav_master: if 'HEARTBEAT' in m.messages: if m.messages['HEARTBEAT'].type == mavutil.mavlink.MAV_TYPE_ANTENNA_TRACKER: return m return None
find an antenna tracker connection if possible
Below is the the instruction that describes the task: ### Input: find an antenna tracker connection if possible ### Response: def find_connection(self): '''find an antenna tracker connection if possible''' if self.connection is not None: return self.connection for m in self.mpstate.mav_master: if 'HEARTBEAT' in m.messages: if m.messages['HEARTBEAT'].type == mavutil.mavlink.MAV_TYPE_ANTENNA_TRACKER: return m return None
def filter(self, data, collection, **kwargs): """Filter given collection.""" if not data or self.filters is None: return None, collection filters = {} for f in self.filters: if f.name not in data: continue ops, collection = f.filter(collection, data, **kwargs) filters[f.name] = ops return filters, collection
Filter given collection.
Below is the the instruction that describes the task: ### Input: Filter given collection. ### Response: def filter(self, data, collection, **kwargs): """Filter given collection.""" if not data or self.filters is None: return None, collection filters = {} for f in self.filters: if f.name not in data: continue ops, collection = f.filter(collection, data, **kwargs) filters[f.name] = ops return filters, collection
def clear_items_sequential(self): """Clears the items sequential flag. raise: NoAccess - ``Metadata.isRequired()`` or ``Metadata.isReadOnly()`` is ``true`` *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for osid.resource.ResourceForm.clear_group_template if (self.get_items_sequential_metadata().is_read_only() or self.get_items_sequential_metadata().is_required()): raise errors.NoAccess() self._my_map['itemsSequential'] = self._items_sequential_default
Clears the items sequential flag. raise: NoAccess - ``Metadata.isRequired()`` or ``Metadata.isReadOnly()`` is ``true`` *compliance: mandatory -- This method must be implemented.*
Below is the the instruction that describes the task: ### Input: Clears the items sequential flag. raise: NoAccess - ``Metadata.isRequired()`` or ``Metadata.isReadOnly()`` is ``true`` *compliance: mandatory -- This method must be implemented.* ### Response: def clear_items_sequential(self): """Clears the items sequential flag. raise: NoAccess - ``Metadata.isRequired()`` or ``Metadata.isReadOnly()`` is ``true`` *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for osid.resource.ResourceForm.clear_group_template if (self.get_items_sequential_metadata().is_read_only() or self.get_items_sequential_metadata().is_required()): raise errors.NoAccess() self._my_map['itemsSequential'] = self._items_sequential_default
def _compute_symbolic_link_mapping( directory: str, extensions: Iterable[str] ) -> Dict[str, str]: """ Given a shared analysis directory, produce a mapping from actual source files to files contained within this directory. Only includes files which have one of the provided extensions. Watchman watches actual source files, so when a change is detected to a file, this mapping can be used to identify what file changed from Pyre's perspective. """ symbolic_links = {} try: for symbolic_link in find_paths_with_extensions(directory, extensions): symbolic_links[os.path.realpath(symbolic_link)] = symbolic_link except subprocess.CalledProcessError as error: LOG.warning( "Exception encountered trying to find source files " "in the analysis directory: `%s`", error, ) LOG.warning("Starting with an empty set of tracked files.") return symbolic_links
Given a shared analysis directory, produce a mapping from actual source files to files contained within this directory. Only includes files which have one of the provided extensions. Watchman watches actual source files, so when a change is detected to a file, this mapping can be used to identify what file changed from Pyre's perspective.
Below is the the instruction that describes the task: ### Input: Given a shared analysis directory, produce a mapping from actual source files to files contained within this directory. Only includes files which have one of the provided extensions. Watchman watches actual source files, so when a change is detected to a file, this mapping can be used to identify what file changed from Pyre's perspective. ### Response: def _compute_symbolic_link_mapping( directory: str, extensions: Iterable[str] ) -> Dict[str, str]: """ Given a shared analysis directory, produce a mapping from actual source files to files contained within this directory. Only includes files which have one of the provided extensions. Watchman watches actual source files, so when a change is detected to a file, this mapping can be used to identify what file changed from Pyre's perspective. """ symbolic_links = {} try: for symbolic_link in find_paths_with_extensions(directory, extensions): symbolic_links[os.path.realpath(symbolic_link)] = symbolic_link except subprocess.CalledProcessError as error: LOG.warning( "Exception encountered trying to find source files " "in the analysis directory: `%s`", error, ) LOG.warning("Starting with an empty set of tracked files.") return symbolic_links
def report_usage_to_host(host_ip, vmid): #base value cpu_usage = 0.0 os_mem_usage = 0.0 task_mem_usage = 0.0 io_usage = 0.0 cpu_usage = get_cpu_usage() os_mem_usage = get_os_mem_usage() task_mem_usage = get_task_mem_usage() io_usage = get_io_usage() usage = str(vmid.strip())+' | '+str(cpu_usage)+' | '+str(os_mem_usage)+' | '+str(task_mem_usage)+' | '+str(io_usage) #usage = "'cpu |sdbfsj |sdfsdhf |sdfvsdvfgdfvj'" #cmd = 'python /var/lib/virtdc/vmonere/host/vmonere_listener.py '+usage '''cmd = '/bin/ssh -n -q -o StrictHostKeyChecking=no root@host_ip \"/bin/nohup /bin/python /var/lib/virtdc/vmonere/host/vmonere_listener.py '+usage+' &\"' cmd = cmd.replace("host_ip",str(host_ip).strip())''' #report usage via socket start_client_socket(host_ip, usage)
cmd = '/bin/ssh -n -q -o StrictHostKeyChecking=no root@host_ip \"/bin/nohup /bin/python /var/lib/virtdc/vmonere/host/vmonere_listener.py '+usage+' &\"' cmd = cmd.replace("host_ip",str(host_ip).strip())
Below is the the instruction that describes the task: ### Input: cmd = '/bin/ssh -n -q -o StrictHostKeyChecking=no root@host_ip \"/bin/nohup /bin/python /var/lib/virtdc/vmonere/host/vmonere_listener.py '+usage+' &\"' cmd = cmd.replace("host_ip",str(host_ip).strip()) ### Response: def report_usage_to_host(host_ip, vmid): #base value cpu_usage = 0.0 os_mem_usage = 0.0 task_mem_usage = 0.0 io_usage = 0.0 cpu_usage = get_cpu_usage() os_mem_usage = get_os_mem_usage() task_mem_usage = get_task_mem_usage() io_usage = get_io_usage() usage = str(vmid.strip())+' | '+str(cpu_usage)+' | '+str(os_mem_usage)+' | '+str(task_mem_usage)+' | '+str(io_usage) #usage = "'cpu |sdbfsj |sdfsdhf |sdfvsdvfgdfvj'" #cmd = 'python /var/lib/virtdc/vmonere/host/vmonere_listener.py '+usage '''cmd = '/bin/ssh -n -q -o StrictHostKeyChecking=no root@host_ip \"/bin/nohup /bin/python /var/lib/virtdc/vmonere/host/vmonere_listener.py '+usage+' &\"' cmd = cmd.replace("host_ip",str(host_ip).strip())''' #report usage via socket start_client_socket(host_ip, usage)
def gen_front_term(self, x, dmp_num): """Generates the front term on the forcing term. For rhythmic DMPs it's non-diminishing, so this function is just a placeholder to return 1. x float: the current value of the canonical system dmp_num int: the index of the current dmp """ if isinstance(x, np.ndarray): return np.ones(x.shape) return 1
Generates the front term on the forcing term. For rhythmic DMPs it's non-diminishing, so this function is just a placeholder to return 1. x float: the current value of the canonical system dmp_num int: the index of the current dmp
Below is the the instruction that describes the task: ### Input: Generates the front term on the forcing term. For rhythmic DMPs it's non-diminishing, so this function is just a placeholder to return 1. x float: the current value of the canonical system dmp_num int: the index of the current dmp ### Response: def gen_front_term(self, x, dmp_num): """Generates the front term on the forcing term. For rhythmic DMPs it's non-diminishing, so this function is just a placeholder to return 1. x float: the current value of the canonical system dmp_num int: the index of the current dmp """ if isinstance(x, np.ndarray): return np.ones(x.shape) return 1
def reindex_model_on_save(sender, document, **kwargs): '''(Re/Un)Index Mongo document on post_save''' if current_app.config.get('AUTO_INDEX'): reindex.delay(document)
(Re/Un)Index Mongo document on post_save
Below is the the instruction that describes the task: ### Input: (Re/Un)Index Mongo document on post_save ### Response: def reindex_model_on_save(sender, document, **kwargs): '''(Re/Un)Index Mongo document on post_save''' if current_app.config.get('AUTO_INDEX'): reindex.delay(document)
def interleave(infile_1, infile_2, outfile, suffix1=None, suffix2=None):
    '''Makes interleaved file from two sequence files. If used, will append suffix1 onto end
    of every sequence name in infile_1, unless it already ends with suffix1. Similar for suffix2.'''
    seq_reader_1 = sequences.file_reader(infile_1)
    seq_reader_2 = sequences.file_reader(infile_2)
    f_out = utils.open_file_write(outfile)

    for seq_1 in seq_reader_1:
        try:
            seq_2 = next(seq_reader_2)
        except:
            utils.close(f_out)
            raise Error('Error getting mate for sequence', seq_1.id, ' ... cannot continue')

        if suffix1 is not None and not seq_1.id.endswith(suffix1):
            seq_1.id += suffix1
        if suffix2 is not None and not seq_2.id.endswith(suffix2):
            seq_2.id += suffix2

        print(seq_1, file=f_out)
        print(seq_2, file=f_out)

    try:
        seq_2 = next(seq_reader_2)
    except:
        seq_2 = None

    if seq_2 is not None:
        utils.close(f_out)
        raise Error('Error getting mate for sequence', seq_2.id, ' ... cannot continue')

    utils.close(f_out)
Makes interleaved file from two sequence files. If used, will append suffix1 onto end
of every sequence name in infile_1, unless it already ends with suffix1. Similar for suffix2.
Below is the the instruction that describes the task: ### Input: Makes interleaved file from two sequence files. If used, will append suffix1 onto end of every sequence name in infile_1, unless it already ends with suffix1. Similar for sufffix2. ### Response: def interleave(infile_1, infile_2, outfile, suffix1=None, suffix2=None): '''Makes interleaved file from two sequence files. If used, will append suffix1 onto end of every sequence name in infile_1, unless it already ends with suffix1. Similar for sufffix2.''' seq_reader_1 = sequences.file_reader(infile_1) seq_reader_2 = sequences.file_reader(infile_2) f_out = utils.open_file_write(outfile) for seq_1 in seq_reader_1: try: seq_2 = next(seq_reader_2) except: utils.close(f_out) raise Error('Error getting mate for sequence', seq_1.id, ' ... cannot continue') if suffix1 is not None and not seq_1.id.endswith(suffix1): seq_1.id += suffix1 if suffix2 is not None and not seq_2.id.endswith(suffix2): seq_2.id += suffix2 print(seq_1, file=f_out) print(seq_2, file=f_out) try: seq_2 = next(seq_reader_2) except: seq_2 = None if seq_2 is not None: utils.close(f_out) raise Error('Error getting mate for sequence', seq_2.id, ' ... cannot continue') utils.close(f_out)
def _update_zone(self, zone, status=None):
    """
    Updates a zone's status.

    :param zone: zone number
    :type zone: int
    :param status: zone status
    :type status: int

    :raises: IndexError
    """
    if not zone in self._zones:
        raise IndexError('Zone does not exist and cannot be updated: %d', zone)

    old_status = self._zones[zone].status
    if status is None:
        status = old_status

    self._zones[zone].status = status
    self._zones[zone].timestamp = time.time()

    if status == Zone.CLEAR:
        if zone in self._zones_faulted:
            self._zones_faulted.remove(zone)

        self.on_restore(zone=zone)
    else:
        if old_status != status and status is not None:
            self.on_fault(zone=zone)
Updates a zone's status.

:param zone: zone number
:type zone: int
:param status: zone status
:type status: int

:raises: IndexError
Below is the the instruction that describes the task: ### Input: Updates a zones status. :param zone: zone number :type zone: int :param status: zone status :type status: int :raises: IndexError ### Response: def _update_zone(self, zone, status=None): """ Updates a zones status. :param zone: zone number :type zone: int :param status: zone status :type status: int :raises: IndexError """ if not zone in self._zones: raise IndexError('Zone does not exist and cannot be updated: %d', zone) old_status = self._zones[zone].status if status is None: status = old_status self._zones[zone].status = status self._zones[zone].timestamp = time.time() if status == Zone.CLEAR: if zone in self._zones_faulted: self._zones_faulted.remove(zone) self.on_restore(zone=zone) else: if old_status != status and status is not None: self.on_fault(zone=zone)
def fix_config(self, options): """ Fixes the options, if necessary. I.e., it adds all required elements to the dictionary. :param options: the options to fix :type options: dict :return: the (potentially) fixed options :rtype: dict """ options = super(ROC, self).fix_config(options) opt = "class_index" if opt not in options: options[opt] = [0] if opt not in self.help: self.help[opt] = "The list of 0-based class-label indices to display (list)." opt = "key_loc" if opt not in options: options[opt] = "lower right" if opt not in self.help: self.help[opt] = "The location of the key in the plot (str)." opt = "title" if opt not in options: options[opt] = None if opt not in self.help: self.help[opt] = "The title for the plot (str)." opt = "outfile" if opt not in options: options[opt] = None if opt not in self.help: self.help[opt] = "The file to store the plot in (str)." opt = "wait" if opt not in options: options[opt] = True if opt not in self.help: self.help[opt] = "Whether to wait for user to close the plot window (bool)." return options
Fixes the options, if necessary. I.e., it adds all required elements to the dictionary. :param options: the options to fix :type options: dict :return: the (potentially) fixed options :rtype: dict
Below is the the instruction that describes the task: ### Input: Fixes the options, if necessary. I.e., it adds all required elements to the dictionary. :param options: the options to fix :type options: dict :return: the (potentially) fixed options :rtype: dict ### Response: def fix_config(self, options): """ Fixes the options, if necessary. I.e., it adds all required elements to the dictionary. :param options: the options to fix :type options: dict :return: the (potentially) fixed options :rtype: dict """ options = super(ROC, self).fix_config(options) opt = "class_index" if opt not in options: options[opt] = [0] if opt not in self.help: self.help[opt] = "The list of 0-based class-label indices to display (list)." opt = "key_loc" if opt not in options: options[opt] = "lower right" if opt not in self.help: self.help[opt] = "The location of the key in the plot (str)." opt = "title" if opt not in options: options[opt] = None if opt not in self.help: self.help[opt] = "The title for the plot (str)." opt = "outfile" if opt not in options: options[opt] = None if opt not in self.help: self.help[opt] = "The file to store the plot in (str)." opt = "wait" if opt not in options: options[opt] = True if opt not in self.help: self.help[opt] = "Whether to wait for user to close the plot window (bool)." return options
def _get_png_size(version, scale, quiet_zone=4): """See: QRCode.get_png_size This function was abstracted away from QRCode to allow for the output of QR codes during the build process, i.e. for debugging. It works just the same except you must specify the code's version. This is needed to calculate the PNG's size. """ #Formula: scale times number of modules plus the border on each side return (int(scale) * tables.version_size[version]) + (2 * quiet_zone * int(scale))
See: QRCode.get_png_size This function was abstracted away from QRCode to allow for the output of QR codes during the build process, i.e. for debugging. It works just the same except you must specify the code's version. This is needed to calculate the PNG's size.
Below is the the instruction that describes the task: ### Input: See: QRCode.get_png_size This function was abstracted away from QRCode to allow for the output of QR codes during the build process, i.e. for debugging. It works just the same except you must specify the code's version. This is needed to calculate the PNG's size. ### Response: def _get_png_size(version, scale, quiet_zone=4): """See: QRCode.get_png_size This function was abstracted away from QRCode to allow for the output of QR codes during the build process, i.e. for debugging. It works just the same except you must specify the code's version. This is needed to calculate the PNG's size. """ #Formula: scale times number of modules plus the border on each side return (int(scale) * tables.version_size[version]) + (2 * quiet_zone * int(scale))
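A quick worked check of the formula in _get_png_size above. It assumes version 1 maps to 21 modules per side (standard QR geometry; the actual contents of tables.version_size are not shown here):

# Hypothetical sanity check of the PNG size formula: scale * modules + border on both sides.
scale, quiet_zone = 10, 4
modules = 21                                    # assumed: a version-1 QR code is 21x21 modules
size = (scale * modules) + (2 * quiet_zone * scale)
print(size)                                     # 210 + 80 = 290 pixels per side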
def BetaPrime(alpha, beta, tag=None): """ A BetaPrime random variate Parameters ---------- alpha : scalar The first shape parameter beta : scalar The second shape parameter """ assert ( alpha > 0 and beta > 0 ), 'BetaPrime "alpha" and "beta" parameters must be greater than zero' x = Beta(alpha, beta, tag) return x / (1 - x)
A BetaPrime random variate Parameters ---------- alpha : scalar The first shape parameter beta : scalar The second shape parameter
Below is the the instruction that describes the task: ### Input: A BetaPrime random variate Parameters ---------- alpha : scalar The first shape parameter beta : scalar The second shape parameter ### Response: def BetaPrime(alpha, beta, tag=None): """ A BetaPrime random variate Parameters ---------- alpha : scalar The first shape parameter beta : scalar The second shape parameter """ assert ( alpha > 0 and beta > 0 ), 'BetaPrime "alpha" and "beta" parameters must be greater than zero' x = Beta(alpha, beta, tag) return x / (1 - x)
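A short usage sketch for the BetaPrime record above, assuming Beta and BetaPrime are importable from the same module; the mean noted in the comment is the textbook Beta-prime identity, not something the code asserts:

x = Beta(2, 3)                     # X ~ Beta(2, 3)
y = x / (1 - x)                    # the same transform BetaPrime applies internally
z = BetaPrime(2, 3, tag='ratio')   # equivalent variate; mean should be 2 / (3 - 1) = 1 for beta > 1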
def get_real_percent(self): """get_real_percent() Returns the unmodified percentage of the score based on a 0-point scale.""" if not (self.votes and self.score): return 0 return 100 * (self.get_real_rating() / self.field.range)
get_real_percent() Returns the unmodified percentage of the score based on a 0-point scale.
Below is the the instruction that describes the task: ### Input: get_real_percent() Returns the unmodified percentage of the score based on a 0-point scale. ### Response: def get_real_percent(self): """get_real_percent() Returns the unmodified percentage of the score based on a 0-point scale.""" if not (self.votes and self.score): return 0 return 100 * (self.get_real_rating() / self.field.range)
def remove_values(self, keys): """Remove values from data""" data = self.model.get_data() for key in sorted(keys, reverse=True): data.pop(key) self.set_data(data)
Remove values from data
Below is the the instruction that describes the task: ### Input: Remove values from data ### Response: def remove_values(self, keys): """Remove values from data""" data = self.model.get_data() for key in sorted(keys, reverse=True): data.pop(key) self.set_data(data)
def edit_team_push_restrictions(self, *teams):
    """
    :calls: `POST /repos/:owner/:repo/branches/:branch/protection/restrictions <https://developer.github.com/v3/repos/branches>`_
    :teams: list of strings
    """
    assert all(isinstance(element, (str, unicode)) for element in teams), teams
    headers, data = self._requester.requestJsonAndCheck(
        "POST",
        self.protection_url + "/restrictions/teams",
        input=teams
    )
:calls: `POST /repos/:owner/:repo/branches/:branch/protection/restrictions <https://developer.github.com/v3/repos/branches>`_ :teams: list of strings
Below is the the instruction that describes the task: ### Input: :calls: `POST /repos/:owner/:repo/branches/:branch/protection/restrictions <https://developer.github.com/v3/repos/branches>`_ :teams: list of strings ### Response: def edit_team_push_restrictions(self, *teams):
    """
    :calls: `POST /repos/:owner/:repo/branches/:branch/protection/restrictions <https://developer.github.com/v3/repos/branches>`_
    :teams: list of strings
    """
    assert all(isinstance(element, (str, unicode)) for element in teams), teams
    headers, data = self._requester.requestJsonAndCheck(
        "POST",
        self.protection_url + "/restrictions/teams",
        input=teams
    )
def get_params(self): """Manually pull params defined in config from OnShape and return a python representation of the params. Quantities are converted to pint quantities, Bools are converted to python bools and Enums are converted to strings. Note that Enum names are autogenerated by OnShape and do not match the name on the OnShape UI.""" self.res = c.get_configuration(self.parent.uri.as_dict())
Manually pull params defined in config from OnShape and return a python representation of the params. Quantities are converted to pint quantities, Bools are converted to python bools and Enums are converted to strings. Note that Enum names are autogenerated by OnShape and do not match the name on the OnShape UI.
Below is the the instruction that describes the task: ### Input: Manually pull params defined in config from OnShape and return a python representation of the params. Quantities are converted to pint quantities, Bools are converted to python bools and Enums are converted to strings. Note that Enum names are autogenerated by OnShape and do not match the name on the OnShape UI. ### Response: def get_params(self): """Manually pull params defined in config from OnShape and return a python representation of the params. Quantities are converted to pint quantities, Bools are converted to python bools and Enums are converted to strings. Note that Enum names are autogenerated by OnShape and do not match the name on the OnShape UI.""" self.res = c.get_configuration(self.parent.uri.as_dict())
def set_camera(self, camera_id): """ Set the camera view to the specified camera ID. """ self.viewer.cam.fixedcamid = camera_id self.viewer.cam.type = const.CAMERA_FIXED
Set the camera view to the specified camera ID.
Below is the the instruction that describes the task: ### Input: Set the camera view to the specified camera ID. ### Response: def set_camera(self, camera_id): """ Set the camera view to the specified camera ID. """ self.viewer.cam.fixedcamid = camera_id self.viewer.cam.type = const.CAMERA_FIXED
def get_feature(vector, feature): """Get a feature vector. This returns a list of ints, equal in length to the vector input, representing presence/absence/neutrality with respect to a particular phonetic feature. Parameters ---------- vector : list A tuple or list of ints representing the phonetic features of a phone or series of phones (such as is returned by the ipa_to_features function) feature : str A feature name from the set: - ``consonantal`` - ``sonorant`` - ``syllabic`` - ``labial`` - ``round`` - ``coronal`` - ``anterior`` - ``distributed`` - ``dorsal`` - ``high`` - ``low`` - ``back`` - ``tense`` - ``pharyngeal`` - ``ATR`` - ``voice`` - ``spread_glottis`` - ``constricted_glottis`` - ``continuant`` - ``strident`` - ``lateral`` - ``delayed_release`` - ``nasal`` Returns ------- list of ints A list indicating presence/absence/neutrality with respect to the feature Raises ------ AttributeError feature must be one of ... Examples -------- >>> tails = ipa_to_features('telz') >>> get_feature(tails, 'consonantal') [1, -1, 1, 1] >>> get_feature(tails, 'sonorant') [-1, 1, 1, -1] >>> get_feature(tails, 'nasal') [-1, -1, -1, -1] >>> get_feature(tails, 'coronal') [1, -1, 1, 1] """ # :param bool binary: if False, -1, 0, & 1 represent -, 0, & + # if True, only binary oppositions are allowed: # 0 & 1 represent - & + and 0s are mapped to - if feature not in _FEATURE_MASK: raise AttributeError( "feature must be one of: '" + "', '".join( ( 'consonantal', 'sonorant', 'syllabic', 'labial', 'round', 'coronal', 'anterior', 'distributed', 'dorsal', 'high', 'low', 'back', 'tense', 'pharyngeal', 'ATR', 'voice', 'spread_glottis', 'constricted_glottis', 'continuant', 'strident', 'lateral', 'delayed_release', 'nasal', ) ) + "'" ) # each feature mask contains two bits, one each for - and + mask = _FEATURE_MASK[feature] # the lower bit represents + pos_mask = mask >> 1 retvec = [] for char in vector: if char < 0: retvec.append(float('NaN')) else: masked = char & mask if masked == 0: retvec.append(0) # 0 elif masked == mask: retvec.append(2) # +/- elif masked & pos_mask: retvec.append(1) # + else: retvec.append(-1) # - return retvec
Get a feature vector. This returns a list of ints, equal in length to the vector input, representing presence/absence/neutrality with respect to a particular phonetic feature. Parameters ---------- vector : list A tuple or list of ints representing the phonetic features of a phone or series of phones (such as is returned by the ipa_to_features function) feature : str A feature name from the set: - ``consonantal`` - ``sonorant`` - ``syllabic`` - ``labial`` - ``round`` - ``coronal`` - ``anterior`` - ``distributed`` - ``dorsal`` - ``high`` - ``low`` - ``back`` - ``tense`` - ``pharyngeal`` - ``ATR`` - ``voice`` - ``spread_glottis`` - ``constricted_glottis`` - ``continuant`` - ``strident`` - ``lateral`` - ``delayed_release`` - ``nasal`` Returns ------- list of ints A list indicating presence/absence/neutrality with respect to the feature Raises ------ AttributeError feature must be one of ... Examples -------- >>> tails = ipa_to_features('telz') >>> get_feature(tails, 'consonantal') [1, -1, 1, 1] >>> get_feature(tails, 'sonorant') [-1, 1, 1, -1] >>> get_feature(tails, 'nasal') [-1, -1, -1, -1] >>> get_feature(tails, 'coronal') [1, -1, 1, 1]
Below is the the instruction that describes the task: ### Input: Get a feature vector. This returns a list of ints, equal in length to the vector input, representing presence/absence/neutrality with respect to a particular phonetic feature. Parameters ---------- vector : list A tuple or list of ints representing the phonetic features of a phone or series of phones (such as is returned by the ipa_to_features function) feature : str A feature name from the set: - ``consonantal`` - ``sonorant`` - ``syllabic`` - ``labial`` - ``round`` - ``coronal`` - ``anterior`` - ``distributed`` - ``dorsal`` - ``high`` - ``low`` - ``back`` - ``tense`` - ``pharyngeal`` - ``ATR`` - ``voice`` - ``spread_glottis`` - ``constricted_glottis`` - ``continuant`` - ``strident`` - ``lateral`` - ``delayed_release`` - ``nasal`` Returns ------- list of ints A list indicating presence/absence/neutrality with respect to the feature Raises ------ AttributeError feature must be one of ... Examples -------- >>> tails = ipa_to_features('telz') >>> get_feature(tails, 'consonantal') [1, -1, 1, 1] >>> get_feature(tails, 'sonorant') [-1, 1, 1, -1] >>> get_feature(tails, 'nasal') [-1, -1, -1, -1] >>> get_feature(tails, 'coronal') [1, -1, 1, 1] ### Response: def get_feature(vector, feature): """Get a feature vector. This returns a list of ints, equal in length to the vector input, representing presence/absence/neutrality with respect to a particular phonetic feature. Parameters ---------- vector : list A tuple or list of ints representing the phonetic features of a phone or series of phones (such as is returned by the ipa_to_features function) feature : str A feature name from the set: - ``consonantal`` - ``sonorant`` - ``syllabic`` - ``labial`` - ``round`` - ``coronal`` - ``anterior`` - ``distributed`` - ``dorsal`` - ``high`` - ``low`` - ``back`` - ``tense`` - ``pharyngeal`` - ``ATR`` - ``voice`` - ``spread_glottis`` - ``constricted_glottis`` - ``continuant`` - ``strident`` - ``lateral`` - ``delayed_release`` - ``nasal`` Returns ------- list of ints A list indicating presence/absence/neutrality with respect to the feature Raises ------ AttributeError feature must be one of ... Examples -------- >>> tails = ipa_to_features('telz') >>> get_feature(tails, 'consonantal') [1, -1, 1, 1] >>> get_feature(tails, 'sonorant') [-1, 1, 1, -1] >>> get_feature(tails, 'nasal') [-1, -1, -1, -1] >>> get_feature(tails, 'coronal') [1, -1, 1, 1] """ # :param bool binary: if False, -1, 0, & 1 represent -, 0, & + # if True, only binary oppositions are allowed: # 0 & 1 represent - & + and 0s are mapped to - if feature not in _FEATURE_MASK: raise AttributeError( "feature must be one of: '" + "', '".join( ( 'consonantal', 'sonorant', 'syllabic', 'labial', 'round', 'coronal', 'anterior', 'distributed', 'dorsal', 'high', 'low', 'back', 'tense', 'pharyngeal', 'ATR', 'voice', 'spread_glottis', 'constricted_glottis', 'continuant', 'strident', 'lateral', 'delayed_release', 'nasal', ) ) + "'" ) # each feature mask contains two bits, one each for - and + mask = _FEATURE_MASK[feature] # the lower bit represents + pos_mask = mask >> 1 retvec = [] for char in vector: if char < 0: retvec.append(float('NaN')) else: masked = char & mask if masked == 0: retvec.append(0) # 0 elif masked == mask: retvec.append(2) # +/- elif masked & pos_mask: retvec.append(1) # + else: retvec.append(-1) # - return retvec
def caltrack_usage_per_day_predict( model_type, model_params, prediction_index, temperature_data, degree_day_method="daily", with_disaggregated=False, with_design_matrix=False, ): """ CalTRACK predict method. Given a model type, parameters, hourly temperatures, a :any:`pandas.DatetimeIndex` index over which to predict meter usage, return model predictions as totals for the period (so billing period totals, daily totals, etc.). Optionally include the computed design matrix or disaggregated usage in the output dataframe. Parameters ---------- model_type : :any:`str` Model type (e.g., ``'cdd_hdd'``). model_params : :any:`dict` Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`. temperature_data : :any:`pandas.DataFrame` Hourly temperature data to use for prediction. Time period should match the ``prediction_index`` argument. prediction_index : :any:`pandas.DatetimeIndex` Time period over which to predict. with_disaggregated : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'base_load'``, ``'heating_load'``, and ``'cooling_load'``. with_design_matrix : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'n_days'``, ``'n_days_dropped'``, ``n_days_kept``, and ``temperature_mean``. Returns ------- prediction : :any:`pandas.DataFrame` Columns are as follows: - ``predicted_usage``: Predicted usage values computed to match ``prediction_index``. - ``base_load``: modeled base load (only for ``with_disaggregated=True``). - ``cooling_load``: modeled cooling load (only for ``with_disaggregated=True``). - ``heating_load``: modeled heating load (only for ``with_disaggregated=True``). - ``n_days``: number of days in period (only for ``with_design_matrix=True``). - ``n_days_dropped``: number of days dropped because of insufficient data (only for ``with_design_matrix=True``). - ``n_days_kept``: number of days kept because of sufficient data (only for ``with_design_matrix=True``). - ``temperature_mean``: mean temperature during given period. (only for ``with_design_matrix=True``). predict_warnings: :any: list of EEMeterWarning if any. 
""" if model_params is None: raise MissingModelParameterError("model_params is None.") predict_warnings = [] cooling_balance_points = [] heating_balance_points = [] if "cooling_balance_point" in model_params: cooling_balance_points.append(model_params["cooling_balance_point"]) if "heating_balance_point" in model_params: heating_balance_points.append(model_params["heating_balance_point"]) design_matrix = compute_temperature_features( prediction_index, temperature_data, heating_balance_points=heating_balance_points, cooling_balance_points=cooling_balance_points, degree_day_method=degree_day_method, use_mean_daily_values=False, ) if design_matrix.dropna().empty: if with_disaggregated: empty_columns = { "predicted_usage": [], "base_load": [], "heating_load": [], "cooling_load": [], } else: empty_columns = {"predicted_usage": []} predict_warnings.append( EEMeterWarning( qualified_name=("eemeter.caltrack.compute_temperature_features"), description=( "Design matrix empty, compute_temperature_features failed" ), data={"temperature_data": temperature_data}, ) ) return ModelPrediction( pd.DataFrame(empty_columns), design_matrix=pd.DataFrame(), warnings=predict_warnings, ) if degree_day_method == "daily": design_matrix["n_days"] = ( design_matrix.n_days_kept + design_matrix.n_days_dropped ) else: design_matrix["n_days"] = ( design_matrix.n_hours_kept + design_matrix.n_hours_dropped ) / 24 results = _caltrack_predict_design_matrix( model_type, model_params, design_matrix, input_averages=False, output_averages=False, ).to_frame("predicted_usage") if with_disaggregated: disaggregated = _caltrack_predict_design_matrix( model_type, model_params, design_matrix, disaggregated=True, input_averages=False, output_averages=False, ) results = results.join(disaggregated) if with_design_matrix: results = results.join(design_matrix) return ModelPrediction( result=results, design_matrix=design_matrix, warnings=predict_warnings )
CalTRACK predict method. Given a model type, parameters, hourly temperatures, a :any:`pandas.DatetimeIndex` index over which to predict meter usage, return model predictions as totals for the period (so billing period totals, daily totals, etc.). Optionally include the computed design matrix or disaggregated usage in the output dataframe. Parameters ---------- model_type : :any:`str` Model type (e.g., ``'cdd_hdd'``). model_params : :any:`dict` Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`. temperature_data : :any:`pandas.DataFrame` Hourly temperature data to use for prediction. Time period should match the ``prediction_index`` argument. prediction_index : :any:`pandas.DatetimeIndex` Time period over which to predict. with_disaggregated : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'base_load'``, ``'heating_load'``, and ``'cooling_load'``. with_design_matrix : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'n_days'``, ``'n_days_dropped'``, ``n_days_kept``, and ``temperature_mean``. Returns ------- prediction : :any:`pandas.DataFrame` Columns are as follows: - ``predicted_usage``: Predicted usage values computed to match ``prediction_index``. - ``base_load``: modeled base load (only for ``with_disaggregated=True``). - ``cooling_load``: modeled cooling load (only for ``with_disaggregated=True``). - ``heating_load``: modeled heating load (only for ``with_disaggregated=True``). - ``n_days``: number of days in period (only for ``with_design_matrix=True``). - ``n_days_dropped``: number of days dropped because of insufficient data (only for ``with_design_matrix=True``). - ``n_days_kept``: number of days kept because of sufficient data (only for ``with_design_matrix=True``). - ``temperature_mean``: mean temperature during given period. (only for ``with_design_matrix=True``). predict_warnings: :any: list of EEMeterWarning if any.
Below is the the instruction that describes the task: ### Input: CalTRACK predict method. Given a model type, parameters, hourly temperatures, a :any:`pandas.DatetimeIndex` index over which to predict meter usage, return model predictions as totals for the period (so billing period totals, daily totals, etc.). Optionally include the computed design matrix or disaggregated usage in the output dataframe. Parameters ---------- model_type : :any:`str` Model type (e.g., ``'cdd_hdd'``). model_params : :any:`dict` Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`. temperature_data : :any:`pandas.DataFrame` Hourly temperature data to use for prediction. Time period should match the ``prediction_index`` argument. prediction_index : :any:`pandas.DatetimeIndex` Time period over which to predict. with_disaggregated : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'base_load'``, ``'heating_load'``, and ``'cooling_load'``. with_design_matrix : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'n_days'``, ``'n_days_dropped'``, ``n_days_kept``, and ``temperature_mean``. Returns ------- prediction : :any:`pandas.DataFrame` Columns are as follows: - ``predicted_usage``: Predicted usage values computed to match ``prediction_index``. - ``base_load``: modeled base load (only for ``with_disaggregated=True``). - ``cooling_load``: modeled cooling load (only for ``with_disaggregated=True``). - ``heating_load``: modeled heating load (only for ``with_disaggregated=True``). - ``n_days``: number of days in period (only for ``with_design_matrix=True``). - ``n_days_dropped``: number of days dropped because of insufficient data (only for ``with_design_matrix=True``). - ``n_days_kept``: number of days kept because of sufficient data (only for ``with_design_matrix=True``). - ``temperature_mean``: mean temperature during given period. (only for ``with_design_matrix=True``). predict_warnings: :any: list of EEMeterWarning if any. ### Response: def caltrack_usage_per_day_predict( model_type, model_params, prediction_index, temperature_data, degree_day_method="daily", with_disaggregated=False, with_design_matrix=False, ): """ CalTRACK predict method. Given a model type, parameters, hourly temperatures, a :any:`pandas.DatetimeIndex` index over which to predict meter usage, return model predictions as totals for the period (so billing period totals, daily totals, etc.). Optionally include the computed design matrix or disaggregated usage in the output dataframe. Parameters ---------- model_type : :any:`str` Model type (e.g., ``'cdd_hdd'``). model_params : :any:`dict` Parameters as stored in :any:`eemeter.CalTRACKUsagePerDayCandidateModel.model_params`. temperature_data : :any:`pandas.DataFrame` Hourly temperature data to use for prediction. Time period should match the ``prediction_index`` argument. prediction_index : :any:`pandas.DatetimeIndex` Time period over which to predict. with_disaggregated : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'base_load'``, ``'heating_load'``, and ``'cooling_load'``. with_design_matrix : :any:`bool`, optional If True, return results as a :any:`pandas.DataFrame` with columns ``'n_days'``, ``'n_days_dropped'``, ``n_days_kept``, and ``temperature_mean``. Returns ------- prediction : :any:`pandas.DataFrame` Columns are as follows: - ``predicted_usage``: Predicted usage values computed to match ``prediction_index``. 
- ``base_load``: modeled base load (only for ``with_disaggregated=True``). - ``cooling_load``: modeled cooling load (only for ``with_disaggregated=True``). - ``heating_load``: modeled heating load (only for ``with_disaggregated=True``). - ``n_days``: number of days in period (only for ``with_design_matrix=True``). - ``n_days_dropped``: number of days dropped because of insufficient data (only for ``with_design_matrix=True``). - ``n_days_kept``: number of days kept because of sufficient data (only for ``with_design_matrix=True``). - ``temperature_mean``: mean temperature during given period. (only for ``with_design_matrix=True``). predict_warnings: :any: list of EEMeterWarning if any. """ if model_params is None: raise MissingModelParameterError("model_params is None.") predict_warnings = [] cooling_balance_points = [] heating_balance_points = [] if "cooling_balance_point" in model_params: cooling_balance_points.append(model_params["cooling_balance_point"]) if "heating_balance_point" in model_params: heating_balance_points.append(model_params["heating_balance_point"]) design_matrix = compute_temperature_features( prediction_index, temperature_data, heating_balance_points=heating_balance_points, cooling_balance_points=cooling_balance_points, degree_day_method=degree_day_method, use_mean_daily_values=False, ) if design_matrix.dropna().empty: if with_disaggregated: empty_columns = { "predicted_usage": [], "base_load": [], "heating_load": [], "cooling_load": [], } else: empty_columns = {"predicted_usage": []} predict_warnings.append( EEMeterWarning( qualified_name=("eemeter.caltrack.compute_temperature_features"), description=( "Design matrix empty, compute_temperature_features failed" ), data={"temperature_data": temperature_data}, ) ) return ModelPrediction( pd.DataFrame(empty_columns), design_matrix=pd.DataFrame(), warnings=predict_warnings, ) if degree_day_method == "daily": design_matrix["n_days"] = ( design_matrix.n_days_kept + design_matrix.n_days_dropped ) else: design_matrix["n_days"] = ( design_matrix.n_hours_kept + design_matrix.n_hours_dropped ) / 24 results = _caltrack_predict_design_matrix( model_type, model_params, design_matrix, input_averages=False, output_averages=False, ).to_frame("predicted_usage") if with_disaggregated: disaggregated = _caltrack_predict_design_matrix( model_type, model_params, design_matrix, disaggregated=True, input_averages=False, output_averages=False, ) results = results.join(disaggregated) if with_design_matrix: results = results.join(design_matrix) return ModelPrediction( result=results, design_matrix=design_matrix, warnings=predict_warnings )
def ResetConsoleColor() -> bool:
    """
    Reset to the default text color on console window.
    Return bool, True if succeeded otherwise False.
    """
    if sys.stdout:
        sys.stdout.flush()
    return bool(ctypes.windll.kernel32.SetConsoleTextAttribute(_ConsoleOutputHandle, _DefaultConsoleColor))
Reset to the default text color on console window. Return bool, True if succeeded otherwise False.
Below is the the instruction that describes the task: ### Input: Reset to the default text color on console window. Return bool, True if succeeded otherwise False. ### Response: def ResetConsoleColor() -> bool:
    """
    Reset to the default text color on console window.
    Return bool, True if succeeded otherwise False.
    """
    if sys.stdout:
        sys.stdout.flush()
    return bool(ctypes.windll.kernel32.SetConsoleTextAttribute(_ConsoleOutputHandle, _DefaultConsoleColor))
def resume(self): """Sends Play Directive to resume playback at the paused offset""" directive = self._play_directive('REPLACE_ALL') directive['audioItem'] = self._audio_item() self._response['directives'].append(directive) return self
Sends Play Directive to resume playback at the paused offset
Below is the the instruction that describes the task: ### Input: Sends Play Directive to resume playback at the paused offset ### Response: def resume(self): """Sends Play Directive to resume playback at the paused offset""" directive = self._play_directive('REPLACE_ALL') directive['audioItem'] = self._audio_item() self._response['directives'].append(directive) return self
def find_ruuvitags(bt_device=''):
    """
    Find all RuuviTags. Function will print the mac and the state of the sensors when found.
    Function will execute until it is stopped. Stop execution with Ctrl+C.

    Returns:
        dict: MAC and state of found sensors
    """

    log.info('Finding RuuviTags. Stop with Ctrl+C.')

    datas = dict()
    for new_data in RuuviTagSensor._get_ruuvitag_datas(bt_device=bt_device):
        if new_data[0] in datas:
            continue
        datas[new_data[0]] = new_data[1]
        log.info(new_data[0])
        log.info(new_data[1])

    return datas
Find all RuuviTags. Function will print the mac and the state of the sensors when found. Function will execute until it is stopped. Stop execution with Ctrl+C. Returns: dict: MAC and state of found sensors
Below is the the instruction that describes the task: ### Input: Find all RuuviTags. Function will print the mac and the state of the sensors when found. Function will execute until it is stopped. Stop execution with Ctrl+C. Returns: dict: MAC and state of found sensors ### Response: def find_ruuvitags(bt_device=''):
    """
    Find all RuuviTags. Function will print the mac and the state of the sensors when found.
    Function will execute until it is stopped. Stop execution with Ctrl+C.

    Returns:
        dict: MAC and state of found sensors
    """

    log.info('Finding RuuviTags. Stop with Ctrl+C.')

    datas = dict()
    for new_data in RuuviTagSensor._get_ruuvitag_datas(bt_device=bt_device):
        if new_data[0] in datas:
            continue
        datas[new_data[0]] = new_data[1]
        log.info(new_data[0])
        log.info(new_data[1])

    return datas
def optional_data_connections(self): '''Finds all data connections in which one or more components are not required. If all the components involved in a connection are required, that connection is also required. If one or more are not required, that connection is optional. Example: >>> s = RtsProfile(xml_spec=open('test/rtsystem.xml').read()) >>> len(s.optional_data_connections()) 0 ''' result = [] for conn in self._data_port_connectors: source_comp = self.find_comp_by_target(conn.source_data_port) target_comp = self.find_comp_by_target(conn.target_data_port) if not source_comp.is_required or not target_comp.is_required: result.append(conn) return result
Finds all data connections in which one or more components are not required. If all the components involved in a connection are required, that connection is also required. If one or more are not required, that connection is optional. Example: >>> s = RtsProfile(xml_spec=open('test/rtsystem.xml').read()) >>> len(s.optional_data_connections()) 0
Below is the the instruction that describes the task: ### Input: Finds all data connections in which one or more components are not required. If all the components involved in a connection are required, that connection is also required. If one or more are not required, that connection is optional. Example: >>> s = RtsProfile(xml_spec=open('test/rtsystem.xml').read()) >>> len(s.optional_data_connections()) 0 ### Response: def optional_data_connections(self): '''Finds all data connections in which one or more components are not required. If all the components involved in a connection are required, that connection is also required. If one or more are not required, that connection is optional. Example: >>> s = RtsProfile(xml_spec=open('test/rtsystem.xml').read()) >>> len(s.optional_data_connections()) 0 ''' result = [] for conn in self._data_port_connectors: source_comp = self.find_comp_by_target(conn.source_data_port) target_comp = self.find_comp_by_target(conn.target_data_port) if not source_comp.is_required or not target_comp.is_required: result.append(conn) return result
def gather(self, *futures: Union[asyncio.Future, asyncio.coroutine]): """Gather list of futures/coros and return single Task ready to schedule. :Example: Prepare all futures to execution .. code-block:: python >>> async def do_something(): ... return 'something' ... >>> async def do_something_else(): ... return 'something_else' ... Gather all tasks and then pass to context loop .. code-block:: python >>> loop = Loop(return_exceptions=True) >>> loop.gather(do_something(), do_something_else()) >>> with loop as l: ... result = l.run_until_complete() ... :param futures: One or more coroutine or future. :type futures: asyncio.Future, asyncio.coroutine :return: Futures grouped into single future :rtype: asyncio.Task, asyncio.Future """ self.ft_count = len(futures) self.futures = asyncio.gather(*futures, loop=self.loop, return_exceptions=self.return_exceptions)
Gather list of futures/coros and return single Task ready to schedule. :Example: Prepare all futures to execution .. code-block:: python >>> async def do_something(): ... return 'something' ... >>> async def do_something_else(): ... return 'something_else' ... Gather all tasks and then pass to context loop .. code-block:: python >>> loop = Loop(return_exceptions=True) >>> loop.gather(do_something(), do_something_else()) >>> with loop as l: ... result = l.run_until_complete() ... :param futures: One or more coroutine or future. :type futures: asyncio.Future, asyncio.coroutine :return: Futures grouped into single future :rtype: asyncio.Task, asyncio.Future
Below is the the instruction that describes the task: ### Input: Gather list of futures/coros and return single Task ready to schedule. :Example: Prepare all futures to execution .. code-block:: python >>> async def do_something(): ... return 'something' ... >>> async def do_something_else(): ... return 'something_else' ... Gather all tasks and then pass to context loop .. code-block:: python >>> loop = Loop(return_exceptions=True) >>> loop.gather(do_something(), do_something_else()) >>> with loop as l: ... result = l.run_until_complete() ... :param futures: One or more coroutine or future. :type futures: asyncio.Future, asyncio.coroutine :return: Futures grouped into single future :rtype: asyncio.Task, asyncio.Future ### Response: def gather(self, *futures: Union[asyncio.Future, asyncio.coroutine]): """Gather list of futures/coros and return single Task ready to schedule. :Example: Prepare all futures to execution .. code-block:: python >>> async def do_something(): ... return 'something' ... >>> async def do_something_else(): ... return 'something_else' ... Gather all tasks and then pass to context loop .. code-block:: python >>> loop = Loop(return_exceptions=True) >>> loop.gather(do_something(), do_something_else()) >>> with loop as l: ... result = l.run_until_complete() ... :param futures: One or more coroutine or future. :type futures: asyncio.Future, asyncio.coroutine :return: Futures grouped into single future :rtype: asyncio.Task, asyncio.Future """ self.ft_count = len(futures) self.futures = asyncio.gather(*futures, loop=self.loop, return_exceptions=self.return_exceptions)
def redirect_stdout(self, enabled=True, log_level=logging.INFO): """ Redirect sys.stdout to file-like object. """ if enabled: if self.__stdout_wrapper: self.__stdout_wrapper.update_log_level(log_level=log_level) else: self.__stdout_wrapper = StdOutWrapper(logger=self, log_level=log_level) self.__stdout_stream = self.__stdout_wrapper else: self.__stdout_stream = _original_stdout # Assign the new stream to sys.stdout sys.stdout = self.__stdout_stream
Redirect sys.stdout to file-like object.
Below is the the instruction that describes the task: ### Input: Redirect sys.stdout to file-like object. ### Response: def redirect_stdout(self, enabled=True, log_level=logging.INFO): """ Redirect sys.stdout to file-like object. """ if enabled: if self.__stdout_wrapper: self.__stdout_wrapper.update_log_level(log_level=log_level) else: self.__stdout_wrapper = StdOutWrapper(logger=self, log_level=log_level) self.__stdout_stream = self.__stdout_wrapper else: self.__stdout_stream = _original_stdout # Assign the new stream to sys.stdout sys.stdout = self.__stdout_stream
def import_upload(self, nvrea, ftype='rpm', rpm_name='', desc=None, htype='md5', lic=None, group=None, vendor=None, req=None): """ import the completed upload into pulp `ftype` - the type of the upload `rpm_name` - the name of the uploaded rpm `desc` - description of the rpm `htype` - checksum type `lic` - license used in the packaged software `group` - package group `vendor` - software vendor `req` - dependencies """ query = '/repositories/%s/actions/import_upload/' % self.repoid data = {'upload_id': self.uid, 'unit_type_id': ftype, 'unit_key': { 'name': rpm_name, 'version': nvrea[1], 'release': nvrea[2], 'epoch': nvrea[3], 'arch': nvrea[4], 'checksumtype': htype, 'checksum': self.cksum, }, 'unit_metadata': { 'filename': self.pkg_name, 'license': lic if lic else '', 'requires': req if req else '', # 'type': ftype, 'description': desc if desc else '', # 'size': self.size, 'vendor': vendor if vendor else '', 'relativepath': self.pkg_name, } } _r = self.connector.post(query, data) if _r.status_code not in [Constants.PULP_POST_OK, Constants.PULP_POST_ACCEPTED]: juicer.utils.Log.log_error("Import error importing '%s'... server said: \n %s", (self.pkg_name, juicer.utils.load_json_str(_r.content))) _r.raise_for_status() juicer.utils.Log.log_debug("Finalized upload id %s" % self.uid)
import the completed upload into pulp `ftype` - the type of the upload `rpm_name` - the name of the uploaded rpm `desc` - description of the rpm `htype` - checksum type `lic` - license used in the packaged software `group` - package group `vendor` - software vendor `req` - dependencies
Below is the the instruction that describes the task: ### Input: import the completed upload into pulp `ftype` - the type of the upload `rpm_name` - the name of the uploaded rpm `desc` - description of the rpm `htype` - checksum type `lic` - license used in the packaged software `group` - package group `vendor` - software vendor `req` - dependencies ### Response: def import_upload(self, nvrea, ftype='rpm', rpm_name='', desc=None, htype='md5', lic=None, group=None, vendor=None, req=None): """ import the completed upload into pulp `ftype` - the type of the upload `rpm_name` - the name of the uploaded rpm `desc` - description of the rpm `htype` - checksum type `lic` - license used in the packaged software `group` - package group `vendor` - software vendor `req` - dependencies """ query = '/repositories/%s/actions/import_upload/' % self.repoid data = {'upload_id': self.uid, 'unit_type_id': ftype, 'unit_key': { 'name': rpm_name, 'version': nvrea[1], 'release': nvrea[2], 'epoch': nvrea[3], 'arch': nvrea[4], 'checksumtype': htype, 'checksum': self.cksum, }, 'unit_metadata': { 'filename': self.pkg_name, 'license': lic if lic else '', 'requires': req if req else '', # 'type': ftype, 'description': desc if desc else '', # 'size': self.size, 'vendor': vendor if vendor else '', 'relativepath': self.pkg_name, } } _r = self.connector.post(query, data) if _r.status_code not in [Constants.PULP_POST_OK, Constants.PULP_POST_ACCEPTED]: juicer.utils.Log.log_error("Import error importing '%s'... server said: \n %s", (self.pkg_name, juicer.utils.load_json_str(_r.content))) _r.raise_for_status() juicer.utils.Log.log_debug("Finalized upload id %s" % self.uid)
def nom_diam_pipe(self): """The nominal diameter of the LFOM pipe""" ID = pc.diam_circle(self.area_pipe_min) return pipe.ND_SDR_available(ID, self.sdr)
The nominal diameter of the LFOM pipe
Below is the the instruction that describes the task: ### Input: The nominal diameter of the LFOM pipe ### Response: def nom_diam_pipe(self): """The nominal diameter of the LFOM pipe""" ID = pc.diam_circle(self.area_pipe_min) return pipe.ND_SDR_available(ID, self.sdr)
def academic_degree(self) -> str: """Get a random academic degree. :return: Degree. :Example: Bachelor. """ degrees = self._data['academic_degree'] return self.random.choice(degrees)
Get a random academic degree. :return: Degree. :Example: Bachelor.
Below is the the instruction that describes the task: ### Input: Get a random academic degree. :return: Degree. :Example: Bachelor. ### Response: def academic_degree(self) -> str: """Get a random academic degree. :return: Degree. :Example: Bachelor. """ degrees = self._data['academic_degree'] return self.random.choice(degrees)
def parse(self, data, lexer=None, *args, **kwargs): """Parse the input JSON data string into a python data structure. Args: data: An input data string lexer: An optional ply.lex instance that overrides the default lexer. Returns: A python dict or list representing the input JSON data. """ if lexer is None: lexer = self.lexer return self.parser.parse(data, lexer=lexer, *args, **kwargs)
Parse the input JSON data string into a python data structure. Args: data: An input data string lexer: An optional ply.lex instance that overrides the default lexer. Returns: A python dict or list representing the input JSON data.
Below is the the instruction that describes the task: ### Input: Parse the input JSON data string into a python data structure. Args: data: An input data string lexer: An optional ply.lex instance that overrides the default lexer. Returns: A python dict or list representing the input JSON data. ### Response: def parse(self, data, lexer=None, *args, **kwargs): """Parse the input JSON data string into a python data structure. Args: data: An input data string lexer: An optional ply.lex instance that overrides the default lexer. Returns: A python dict or list representing the input JSON data. """ if lexer is None: lexer = self.lexer return self.parser.parse(data, lexer=lexer, *args, **kwargs)
def _load_file(path): """ Loads a file from the local filesystem """ if not os.path.exists(path): parser.error("{} was not found!".format(path)) if USING_PYTHON2: mode = "r" else: mode = "rb" try: f = open(path, mode) return f except IOError as ex: parser.error("{path} could not be read due to an I/O error! ({ex})".format(path=path, ex=ex))
Loads a file from the local filesystem
Below is the the instruction that describes the task: ### Input: Loads a file from the local filesystem ### Response: def _load_file(path): """ Loads a file from the local filesystem """ if not os.path.exists(path): parser.error("{} was not found!".format(path)) if USING_PYTHON2: mode = "r" else: mode = "rb" try: f = open(path, mode) return f except IOError as ex: parser.error("{path} could not be read due to an I/O error! ({ex})".format(path=path, ex=ex))
def date_range(start, end, length, time_unit='us'): """ Computes a date range given a start date, end date and the number of samples. """ step = (1./compute_density(start, end, length, time_unit)) if pd and isinstance(start, pd.Timestamp): start = start.to_datetime64() step = np.timedelta64(int(round(step)), time_unit) return start+step/2.+np.arange(length)*step
Computes a date range given a start date, end date and the number of samples.
Below is the the instruction that describes the task: ### Input: Computes a date range given a start date, end date and the number of samples. ### Response: def date_range(start, end, length, time_unit='us'): """ Computes a date range given a start date, end date and the number of samples. """ step = (1./compute_density(start, end, length, time_unit)) if pd and isinstance(start, pd.Timestamp): start = start.to_datetime64() step = np.timedelta64(int(round(step)), time_unit) return start+step/2.+np.arange(length)*step
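A small numeric sketch of the midpoint logic in date_range above, using plain floats instead of timestamps and assuming compute_density returns samples per time unit (so 1/density is the sample spacing):

import numpy as np

start, end, length = 0.0, 10.0, 5
step = (end - start) / length                  # 2.0, i.e. 1 / density
samples = start + step / 2 + np.arange(length) * step
print(samples)                                 # [1. 3. 5. 7. 9.] -- bin midpoints, not bin edges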
def cee_map_remap_lossless_priority_lossless_remapped_priority(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") cee_map = ET.SubElement(config, "cee-map", xmlns="urn:brocade.com:mgmt:brocade-cee-map") name_key = ET.SubElement(cee_map, "name") name_key.text = kwargs.pop('name') remap = ET.SubElement(cee_map, "remap") lossless_priority = ET.SubElement(remap, "lossless-priority") lossless_remapped_priority = ET.SubElement(lossless_priority, "lossless-remapped-priority") lossless_remapped_priority.text = kwargs.pop('lossless_remapped_priority') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the the instruction that describes the task: ### Input: Auto Generated Code ### Response: def cee_map_remap_lossless_priority_lossless_remapped_priority(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") cee_map = ET.SubElement(config, "cee-map", xmlns="urn:brocade.com:mgmt:brocade-cee-map") name_key = ET.SubElement(cee_map, "name") name_key.text = kwargs.pop('name') remap = ET.SubElement(cee_map, "remap") lossless_priority = ET.SubElement(remap, "lossless-priority") lossless_remapped_priority = ET.SubElement(lossless_priority, "lossless-remapped-priority") lossless_remapped_priority.text = kwargs.pop('lossless_remapped_priority') callback = kwargs.pop('callback', self._callback) return callback(config)
def json(self, align_threshold: float = 0.0) -> Dict: """ Returns a dictionary suitable for json.dumps() representing all the information in the class. It is initialized with any keys present in the corresponding `TranslatorInput` object's pass_through_dict. Keys from here that are not overwritten by Sockeye will thus be passed through to the output. :param align_threshold: If alignments are defined, only print ones over this threshold. :return: A dictionary. """ _d = self.pass_through_dict # type: Dict[str, Any] _d['sentence_id'] = self.sentence_id _d['translation'] = self.translation _d['score'] = self.score if self.nbest_translations is not None and len(self.nbest_translations) > 1: _d['translations'] = self.nbest_translations _d['scores'] = self.nbest_scores if self.nbest_attention_matrices: extracted_alignments = [] for alignment_matrix in self.nbest_attention_matrices: extracted_alignments.append(list(utils.get_alignments(alignment_matrix, threshold=align_threshold))) _d['alignments'] = extracted_alignments return _d
Returns a dictionary suitable for json.dumps() representing all the information in the class. It is initialized with any keys present in the corresponding `TranslatorInput` object's pass_through_dict. Keys from here that are not overwritten by Sockeye will thus be passed through to the output. :param align_threshold: If alignments are defined, only print ones over this threshold. :return: A dictionary.
Below is the the instruction that describes the task: ### Input: Returns a dictionary suitable for json.dumps() representing all the information in the class. It is initialized with any keys present in the corresponding `TranslatorInput` object's pass_through_dict. Keys from here that are not overwritten by Sockeye will thus be passed through to the output. :param align_threshold: If alignments are defined, only print ones over this threshold. :return: A dictionary. ### Response: def json(self, align_threshold: float = 0.0) -> Dict: """ Returns a dictionary suitable for json.dumps() representing all the information in the class. It is initialized with any keys present in the corresponding `TranslatorInput` object's pass_through_dict. Keys from here that are not overwritten by Sockeye will thus be passed through to the output. :param align_threshold: If alignments are defined, only print ones over this threshold. :return: A dictionary. """ _d = self.pass_through_dict # type: Dict[str, Any] _d['sentence_id'] = self.sentence_id _d['translation'] = self.translation _d['score'] = self.score if self.nbest_translations is not None and len(self.nbest_translations) > 1: _d['translations'] = self.nbest_translations _d['scores'] = self.nbest_scores if self.nbest_attention_matrices: extracted_alignments = [] for alignment_matrix in self.nbest_attention_matrices: extracted_alignments.append(list(utils.get_alignments(alignment_matrix, threshold=align_threshold))) _d['alignments'] = extracted_alignments return _d
def _check_ubridge_version(self, env=None):
    """
    Checks the ubridge executable version.
    """
    try:
        output = yield from subprocess_check_output(self._path, "-v", cwd=self._working_dir, env=env)
        match = re.search("ubridge version ([0-9a-z\.]+)", output)
        if match:
            self._version = match.group(1)
            if sys.platform.startswith("win") or sys.platform.startswith("darwin"):
                minimum_required_version = "0.9.12"
            else:
                # uBridge version 0.9.14 is required for packet filters
                # to work for IOU nodes.
                minimum_required_version = "0.9.14"
            if parse_version(self._version) < parse_version(minimum_required_version):
                raise UbridgeError("uBridge executable version must be >= {}".format(minimum_required_version))
        else:
            raise UbridgeError("Could not determine uBridge version for {}".format(self._path))
    except (OSError, subprocess.SubprocessError) as e:
        raise UbridgeError("Error while looking for uBridge version: {}".format(e))
Checks the ubridge executable version.
Below is the the instruction that describes the task: ### Input: Checks the ubridge executable version. ### Response: def _check_ubridge_version(self, env=None):
    """
    Checks the ubridge executable version.
    """
    try:
        output = yield from subprocess_check_output(self._path, "-v", cwd=self._working_dir, env=env)
        match = re.search("ubridge version ([0-9a-z\.]+)", output)
        if match:
            self._version = match.group(1)
            if sys.platform.startswith("win") or sys.platform.startswith("darwin"):
                minimum_required_version = "0.9.12"
            else:
                # uBridge version 0.9.14 is required for packet filters
                # to work for IOU nodes.
                minimum_required_version = "0.9.14"
            if parse_version(self._version) < parse_version(minimum_required_version):
                raise UbridgeError("uBridge executable version must be >= {}".format(minimum_required_version))
        else:
            raise UbridgeError("Could not determine uBridge version for {}".format(self._path))
    except (OSError, subprocess.SubprocessError) as e:
        raise UbridgeError("Error while looking for uBridge version: {}".format(e))
def and_(*fs):
    """Creates a function that returns true for given arguments
    iff every given function evaluates to true for those arguments.

    :param fs: Functions to combine

    :return: Short-circuiting function performing logical conjunction
             on results of ``fs`` applied to its arguments
    """
    ensure_argcount(fs, min_=1)
    fs = list(imap(ensure_callable, fs))

    if len(fs) == 1:
        return fs[0]
    if len(fs) == 2:
        f1, f2 = fs
        return lambda *args, **kwargs: (
            f1(*args, **kwargs) and f2(*args, **kwargs))
    if len(fs) == 3:
        f1, f2, f3 = fs
        return lambda *args, **kwargs: (
            f1(*args, **kwargs) and f2(*args, **kwargs) and f3(*args, **kwargs))

    def g(*args, **kwargs):
        for f in fs:
            if not f(*args, **kwargs):
                return False
        return True

    return g
Creates a function that returns true for given arguments iff every given function evaluates to true for those arguments. :param fs: Functions to combine :return: Short-circuiting function performing logical conjunction on results of ``fs`` applied to its arguments
Below is the the instruction that describes the task: ### Input: Creates a function that returns true for given arguments iff every given function evaluates to true for those arguments. :param fs: Functions to combine :return: Short-circuiting function performing logical conjunction on results of ``fs`` applied to its arguments ### Response: def and_(*fs):
    """Creates a function that returns true for given arguments
    iff every given function evaluates to true for those arguments.

    :param fs: Functions to combine

    :return: Short-circuiting function performing logical conjunction
             on results of ``fs`` applied to its arguments
    """
    ensure_argcount(fs, min_=1)
    fs = list(imap(ensure_callable, fs))

    if len(fs) == 1:
        return fs[0]
    if len(fs) == 2:
        f1, f2 = fs
        return lambda *args, **kwargs: (
            f1(*args, **kwargs) and f2(*args, **kwargs))
    if len(fs) == 3:
        f1, f2, f3 = fs
        return lambda *args, **kwargs: (
            f1(*args, **kwargs) and f2(*args, **kwargs) and f3(*args, **kwargs))

    def g(*args, **kwargs):
        for f in fs:
            if not f(*args, **kwargs):
                return False
        return True

    return g
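A usage sketch for the and_ combinator above, with made-up predicates:

is_positive = lambda x: x > 0
is_even = lambda x: x % 2 == 0
is_small = lambda x: x < 100

check = and_(is_positive, is_even, is_small)   # hits the unrolled three-function branch
check(10)       # True
check(-2)       # False -- fails the first predicate
check(10**6)    # False -- positive and even, but not small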
def ports(self): '''The list of all ports belonging to this component.''' with self._mutex: if not self._ports: self._ports = [ports.parse_port(port, self) \ for port in self._obj.get_ports()] return self._ports
The list of all ports belonging to this component.
Below is the the instruction that describes the task: ### Input: The list of all ports belonging to this component. ### Response: def ports(self): '''The list of all ports belonging to this component.''' with self._mutex: if not self._ports: self._ports = [ports.parse_port(port, self) \ for port in self._obj.get_ports()] return self._ports
def _serialize_model_helper(self, model, field_dict=None): """ A recursive function for serializing a model into a json ready format. """ field_dict = field_dict or self.dot_field_list_to_dict() if model is None: return None if isinstance(model, Query): model = model.all() if isinstance(model, (list, set)): return [self.serialize_model(m, field_dict=field_dict) for m in model] model_dict = {} for name, sub in six.iteritems(field_dict): value = getattr(model, name) if sub: value = self.serialize_model(value, field_dict=sub) model_dict[name] = value return model_dict
A recursive function for serializing a model into a json ready format.
Below is the the instruction that describes the task: ### Input: A recursive function for serializing a model into a json ready format. ### Response: def _serialize_model_helper(self, model, field_dict=None): """ A recursive function for serializing a model into a json ready format. """ field_dict = field_dict or self.dot_field_list_to_dict() if model is None: return None if isinstance(model, Query): model = model.all() if isinstance(model, (list, set)): return [self.serialize_model(m, field_dict=field_dict) for m in model] model_dict = {} for name, sub in six.iteritems(field_dict): value = getattr(model, name) if sub: value = self.serialize_model(value, field_dict=sub) model_dict[name] = value return model_dict
def _kl_bernoulli_bernoulli(a, b, name=None): """Calculate the batched KL divergence KL(a || b) with a and b Bernoulli. Args: a: instance of a Bernoulli distribution object. b: instance of a Bernoulli distribution object. name: (optional) Name to use for created operations. default is "kl_bernoulli_bernoulli". Returns: Batchwise KL(a || b) """ with tf.name_scope(name or "kl_bernoulli_bernoulli"): delta_probs0 = tf.nn.softplus(-b.logits) - tf.nn.softplus(-a.logits) delta_probs1 = tf.nn.softplus(b.logits) - tf.nn.softplus(a.logits) return (tf.sigmoid(a.logits) * delta_probs0 + tf.sigmoid(-a.logits) * delta_probs1)
Calculate the batched KL divergence KL(a || b) with a and b Bernoulli. Args: a: instance of a Bernoulli distribution object. b: instance of a Bernoulli distribution object. name: (optional) Name to use for created operations. default is "kl_bernoulli_bernoulli". Returns: Batchwise KL(a || b)
Below is the the instruction that describes the task: ### Input: Calculate the batched KL divergence KL(a || b) with a and b Bernoulli. Args: a: instance of a Bernoulli distribution object. b: instance of a Bernoulli distribution object. name: (optional) Name to use for created operations. default is "kl_bernoulli_bernoulli". Returns: Batchwise KL(a || b) ### Response: def _kl_bernoulli_bernoulli(a, b, name=None): """Calculate the batched KL divergence KL(a || b) with a and b Bernoulli. Args: a: instance of a Bernoulli distribution object. b: instance of a Bernoulli distribution object. name: (optional) Name to use for created operations. default is "kl_bernoulli_bernoulli". Returns: Batchwise KL(a || b) """ with tf.name_scope(name or "kl_bernoulli_bernoulli"): delta_probs0 = tf.nn.softplus(-b.logits) - tf.nn.softplus(-a.logits) delta_probs1 = tf.nn.softplus(b.logits) - tf.nn.softplus(a.logits) return (tf.sigmoid(a.logits) * delta_probs0 + tf.sigmoid(-a.logits) * delta_probs1)
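The softplus expression above is the usual closed-form Bernoulli KL rewritten in terms of logits, using log sigmoid(l) = -softplus(-l) and 1 - sigmoid(l) = sigmoid(-l). A NumPy sketch (not the TensorFlow code, with made-up logits) checking that the two forms agree:

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

la, lb = 0.3, -1.2                 # hypothetical logits for distributions a and b
p, q = sigmoid(la), sigmoid(lb)

kl_closed = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
kl_logits = (sigmoid(la) * (softplus(-lb) - softplus(-la))
             + sigmoid(-la) * (softplus(lb) - softplus(la)))

assert np.isclose(kl_closed, kl_logits)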
def write_csv(filename, data, delimiter=CSV_DELIMITER): """ Write image data to CSV file :param filename: name of CSV file to write data to :type filename: str :param data: image data to write to CSV file :type data: numpy array :param delimiter: delimiter used in CSV file. Default is ``;`` :type delimiter: str """ with open(filename, 'w') as file: csv_writer = csv.writer(file, delimiter=delimiter) for line in data: csv_writer.writerow(line)
Write image data to CSV file :param filename: name of CSV file to write data to :type filename: str :param data: image data to write to CSV file :type data: numpy array :param delimiter: delimiter used in CSV file. Default is ``;`` :type delimiter: str
Below is the the instruction that describes the task: ### Input: Write image data to CSV file :param filename: name of CSV file to write data to :type filename: str :param data: image data to write to CSV file :type data: numpy array :param delimiter: delimiter used in CSV file. Default is ``;`` :type delimiter: str ### Response: def write_csv(filename, data, delimiter=CSV_DELIMITER): """ Write image data to CSV file :param filename: name of CSV file to write data to :type filename: str :param data: image data to write to CSV file :type data: numpy array :param delimiter: delimiter used in CSV file. Default is ``;`` :type delimiter: str """ with open(filename, 'w') as file: csv_writer = csv.writer(file, delimiter=delimiter) for line in data: csv_writer.writerow(line)
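A minimal usage sketch for write_csv above (file paths are made up):

import numpy as np

data = np.array([[1.0, 2.5], [3.0, 4.5]])
write_csv('/tmp/example.csv', data)                       # default ';' delimiter
write_csv('/tmp/example_commas.csv', data, delimiter=',')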
def colorize(txt, fg=None, bg=None): """ Print escape codes to set the terminal color. fg and bg are indices into the color palette for the foreground and background colors. """ setting = '' setting += _SET_FG.format(fg) if fg else '' setting += _SET_BG.format(bg) if bg else '' return setting + str(txt) + _STYLE_RESET
Print escape codes to set the terminal color. fg and bg are indices into the color palette for the foreground and background colors.
Below is the the instruction that describes the task: ### Input: Print escape codes to set the terminal color. fg and bg are indices into the color palette for the foreground and background colors. ### Response: def colorize(txt, fg=None, bg=None): """ Print escape codes to set the terminal color. fg and bg are indices into the color palette for the foreground and background colors. """ setting = '' setting += _SET_FG.format(fg) if fg else '' setting += _SET_BG.format(bg) if bg else '' return setting + str(txt) + _STYLE_RESET
def sources_remove(source_uri, ruby=None, runas=None, gem_bin=None): ''' Remove a gem source. :param source_uri: string The source URI to remove. :param gem_bin: string : None Full path to ``gem`` binary to use. :param ruby: string : None If RVM or rbenv are installed, the ruby version and gemset to use. Ignored if ``gem_bin`` is specified. :param runas: string : None The user to run gem as. CLI Example: .. code-block:: bash salt '*' gem.sources_remove http://rubygems.org/ ''' return _gem(['sources', '--remove', source_uri], ruby, gem_bin=gem_bin, runas=runas)
Remove a gem source.

:param source_uri: string
    The source URI to remove.
:param gem_bin: string : None
    Full path to ``gem`` binary to use.
:param ruby: string : None
    If RVM or rbenv are installed, the ruby version and gemset to use.
    Ignored if ``gem_bin`` is specified.
:param runas: string : None
    The user to run gem as.

CLI Example:

.. code-block:: bash

    salt '*' gem.sources_remove http://rubygems.org/
Below is the the instruction that describes the task: ### Input: Remove a gem source. :param source_uri: string The source URI to remove. :param gem_bin: string : None Full path to ``gem`` binary to use. :param ruby: string : None If RVM or rbenv are installed, the ruby version and gemset to use. Ignored if ``gem_bin`` is specified. :param runas: string : None The user to run gem as. CLI Example: .. code-block:: bash salt '*' gem.sources_remove http://rubygems.org/ ### Response: def sources_remove(source_uri, ruby=None, runas=None, gem_bin=None): ''' Remove a gem source. :param source_uri: string The source URI to remove. :param gem_bin: string : None Full path to ``gem`` binary to use. :param ruby: string : None If RVM or rbenv are installed, the ruby version and gemset to use. Ignored if ``gem_bin`` is specified. :param runas: string : None The user to run gem as. CLI Example: .. code-block:: bash salt '*' gem.sources_remove http://rubygems.org/ ''' return _gem(['sources', '--remove', source_uri], ruby, gem_bin=gem_bin, runas=runas)
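Beyond the CLI example in the docstring, the function can also be reached from other Salt code through the loader. The sketch assumes the module is exposed under its usual 'gem' virtual name; the runas and ruby values are placeholders.

# From another execution or state module:
__salt__['gem.sources_remove']('http://rubygems.org/', runas='deploy')

# Target a specific RVM/rbenv-managed interpreter instead:
__salt__['gem.sources_remove']('http://rubygems.org/', ruby='2.6.5')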
def find_vlans( self, number, name, iexact, environment, net_type, network, ip_version, subnet, acl, pagination): """ Find vlans by all search parameters :param number: Filter by vlan number column :param name: Filter by vlan name column :param iexact: Filter by name will be exact? :param environment: Filter by environment ID related :param net_type: Filter by network_type ID related :param network: Filter by each octs in network :param ip_version: Get only version (0:ipv4, 1:ipv6, 2:all) :param subnet: Filter by octs will search by subnets? :param acl: Filter by vlan acl column :param pagination: Class with all data needed to paginate :return: Following dictionary: :: {'vlan': {'id': < id_vlan >, 'nome': < nome_vlan >, 'num_vlan': < num_vlan >, 'id_ambiente': < id_ambiente >, 'descricao': < descricao >, 'acl_file_name': < acl_file_name >, 'acl_valida': < acl_valida >, 'acl_file_name_v6': < acl_file_name_v6 >, 'acl_valida_v6': < acl_valida_v6 >, 'ativada': < ativada >, 'ambiente_name': < divisao_dc-ambiente_logico-grupo_l3 > 'redeipv4': [ { all networkipv4 related } ], 'redeipv6': [ { all networkipv6 related } ] }, 'total': {< total_registros >} } :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response. """ if not isinstance(pagination, Pagination): raise InvalidParameterError( u"Invalid parameter: pagination must be a class of type 'Pagination'.") vlan_map = dict() vlan_map['start_record'] = pagination.start_record vlan_map['end_record'] = pagination.end_record vlan_map['asorting_cols'] = pagination.asorting_cols vlan_map['searchable_columns'] = pagination.searchable_columns vlan_map['custom_search'] = pagination.custom_search vlan_map['numero'] = number vlan_map['nome'] = name vlan_map['exato'] = iexact vlan_map['ambiente'] = environment vlan_map['tipo_rede'] = net_type vlan_map['rede'] = network vlan_map['versao'] = ip_version vlan_map['subrede'] = subnet vlan_map['acl'] = acl url = 'vlan/find/' code, xml = self.submit({'vlan': vlan_map}, 'POST', url) key = 'vlan' return get_list_map( self.response( code, xml, [ key, 'redeipv4', 'redeipv6', 'equipamentos']), key)
Find vlans by all search parameters

:param number: Filter by vlan number column
:param name: Filter by vlan name column
:param iexact: Filter by name will be exact?
:param environment: Filter by environment ID related
:param net_type: Filter by network_type ID related
:param network: Filter by each octs in network
:param ip_version: Get only version (0:ipv4, 1:ipv6, 2:all)
:param subnet: Filter by octs will search by subnets?
:param acl: Filter by vlan acl column
:param pagination: Class with all data needed to paginate

:return: Following dictionary:

::

    {'vlan': {'id': < id_vlan >,
              'nome': < nome_vlan >,
              'num_vlan': < num_vlan >,
              'id_ambiente': < id_ambiente >,
              'descricao': < descricao >,
              'acl_file_name': < acl_file_name >,
              'acl_valida': < acl_valida >,
              'acl_file_name_v6': < acl_file_name_v6 >,
              'acl_valida_v6': < acl_valida_v6 >,
              'ativada': < ativada >,
              'ambiente_name': < divisao_dc-ambiente_logico-grupo_l3 >
              'redeipv4': [ { all networkipv4 related } ],
              'redeipv6': [ { all networkipv6 related } ] },
     'total': {< total_registros >} }

:raise InvalidParameterError: Some parameter was invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
Below is the the instruction that describes the task: ### Input: Find vlans by all search parameters :param number: Filter by vlan number column :param name: Filter by vlan name column :param iexact: Filter by name will be exact? :param environment: Filter by environment ID related :param net_type: Filter by network_type ID related :param network: Filter by each octs in network :param ip_version: Get only version (0:ipv4, 1:ipv6, 2:all) :param subnet: Filter by octs will search by subnets? :param acl: Filter by vlan acl column :param pagination: Class with all data needed to paginate :return: Following dictionary: :: {'vlan': {'id': < id_vlan >, 'nome': < nome_vlan >, 'num_vlan': < num_vlan >, 'id_ambiente': < id_ambiente >, 'descricao': < descricao >, 'acl_file_name': < acl_file_name >, 'acl_valida': < acl_valida >, 'acl_file_name_v6': < acl_file_name_v6 >, 'acl_valida_v6': < acl_valida_v6 >, 'ativada': < ativada >, 'ambiente_name': < divisao_dc-ambiente_logico-grupo_l3 > 'redeipv4': [ { all networkipv4 related } ], 'redeipv6': [ { all networkipv6 related } ] }, 'total': {< total_registros >} } :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response. ### Response: def find_vlans( self, number, name, iexact, environment, net_type, network, ip_version, subnet, acl, pagination): """ Find vlans by all search parameters :param number: Filter by vlan number column :param name: Filter by vlan name column :param iexact: Filter by name will be exact? :param environment: Filter by environment ID related :param net_type: Filter by network_type ID related :param network: Filter by each octs in network :param ip_version: Get only version (0:ipv4, 1:ipv6, 2:all) :param subnet: Filter by octs will search by subnets? :param acl: Filter by vlan acl column :param pagination: Class with all data needed to paginate :return: Following dictionary: :: {'vlan': {'id': < id_vlan >, 'nome': < nome_vlan >, 'num_vlan': < num_vlan >, 'id_ambiente': < id_ambiente >, 'descricao': < descricao >, 'acl_file_name': < acl_file_name >, 'acl_valida': < acl_valida >, 'acl_file_name_v6': < acl_file_name_v6 >, 'acl_valida_v6': < acl_valida_v6 >, 'ativada': < ativada >, 'ambiente_name': < divisao_dc-ambiente_logico-grupo_l3 > 'redeipv4': [ { all networkipv4 related } ], 'redeipv6': [ { all networkipv6 related } ] }, 'total': {< total_registros >} } :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response. """ if not isinstance(pagination, Pagination): raise InvalidParameterError( u"Invalid parameter: pagination must be a class of type 'Pagination'.") vlan_map = dict() vlan_map['start_record'] = pagination.start_record vlan_map['end_record'] = pagination.end_record vlan_map['asorting_cols'] = pagination.asorting_cols vlan_map['searchable_columns'] = pagination.searchable_columns vlan_map['custom_search'] = pagination.custom_search vlan_map['numero'] = number vlan_map['nome'] = name vlan_map['exato'] = iexact vlan_map['ambiente'] = environment vlan_map['tipo_rede'] = net_type vlan_map['rede'] = network vlan_map['versao'] = ip_version vlan_map['subrede'] = subnet vlan_map['acl'] = acl url = 'vlan/find/' code, xml = self.submit({'vlan': vlan_map}, 'POST', url) key = 'vlan' return get_list_map( self.response( code, xml, [ key, 'redeipv4', 'redeipv6', 'equipamentos']), key)
def get_pool(cls) -> Pool: """ Yields: existing db connection pool """ if len(cls._connection_params) < 5: raise ConnectionError('Please call SQLStore.connect before calling this method') if not cls._pool: cls._pool = yield from create_pool(**cls._connection_params) return cls._pool
Yields: existing db connection pool
Below is the the instruction that describes the task: ### Input: Yields: existing db connection pool ### Response: def get_pool(cls) -> Pool: """ Yields: existing db connection pool """ if len(cls._connection_params) < 5: raise ConnectionError('Please call SQLStore.connect before calling this method') if not cls._pool: cls._pool = yield from create_pool(**cls._connection_params) return cls._pool
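The yield-from style and the create_pool import point at the pre-async/await aiopg (or aiomysql) pool API. A minimal caller sketch, under that assumption and assuming SQLStore.connect(...) has already filled _connection_params:

import asyncio

@asyncio.coroutine
def fetch_one():
    pool = yield from SQLStore.get_pool()
    with (yield from pool) as conn:          # acquire and auto-release a connection
        cur = yield from conn.cursor()
        yield from cur.execute('SELECT 1')
        row = yield from cur.fetchone()
    return row

print(asyncio.get_event_loop().run_until_complete(fetch_one()))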
def add_child(self, child): '''Add child to ``Node`` object Args: ``child`` (``Node``): The child ``Node`` to be added ''' if not isinstance(child, Node): raise TypeError("child must be a Node") self.children.append(child); child.parent = self
Add child to ``Node`` object

Args:
    ``child`` (``Node``): The child ``Node`` to be added
Below is the the instruction that describes the task: ### Input: Add child to ``Node`` object Args: ``child`` (``Node``): The child ``Node`` to be added ### Response: def add_child(self, child): '''Add child to ``Node`` object Args: ``child`` (``Node``): The child ``Node`` to be added ''' if not isinstance(child, Node): raise TypeError("child must be a Node") self.children.append(child); child.parent = self
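A small sketch of building a tree with add_child. The Node constructor is not shown in this record; the sketch assumes a no-argument Node() that starts with an empty children list and parent set to None.

root, left, right = Node(), Node(), Node()
root.add_child(left)
root.add_child(right)

assert left.parent is root
assert root.children == [left, right]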
def list_alarms(self, limit=None, marker=None, return_next=False): """ Returns a list of all the alarms created on this entity. """ return self._alarm_manager.list(limit=limit, marker=marker, return_next=return_next)
Returns a list of all the alarms created on this entity.
Below is the the instruction that describes the task: ### Input: Returns a list of all the alarms created on this entity. ### Response: def list_alarms(self, limit=None, marker=None, return_next=False): """ Returns a list of all the alarms created on this entity. """ return self._alarm_manager.list(limit=limit, marker=marker, return_next=return_next)
def summary(args): """ %prog summary old.new.chain old.fasta new.fasta Provide stats of the chain file. """ from jcvi.formats.fasta import summary as fsummary from jcvi.utils.cbook import percentage, human_size p = OptionParser(summary.__doc__) opts, args = p.parse_args(args) if len(args) != 3: sys.exit(not p.print_help()) chainfile, oldfasta, newfasta = args chain = Chain(chainfile) ungapped, dt, dq = chain.ungapped, chain.dt, chain.dq print("File `{0}` contains {1} chains.".\ format(chainfile, len(chain)), file=sys.stderr) print("ungapped={0} dt={1} dq={2}".\ format(human_size(ungapped), human_size(dt), human_size(dq)), file=sys.stderr) oldreal, oldnn, oldlen = fsummary([oldfasta, "--outfile=/dev/null"]) print("Old fasta (`{0}`) mapped: {1}".\ format(oldfasta, percentage(ungapped, oldreal)), file=sys.stderr) newreal, newnn, newlen = fsummary([newfasta, "--outfile=/dev/null"]) print("New fasta (`{0}`) mapped: {1}".\ format(newfasta, percentage(ungapped, newreal)), file=sys.stderr)
%prog summary old.new.chain old.fasta new.fasta

Provide stats of the chain file.
Below is the the instruction that describes the task: ### Input: %prog summary old.new.chain old.fasta new.fasta Provide stats of the chain file. ### Response: def summary(args): """ %prog summary old.new.chain old.fasta new.fasta Provide stats of the chain file. """ from jcvi.formats.fasta import summary as fsummary from jcvi.utils.cbook import percentage, human_size p = OptionParser(summary.__doc__) opts, args = p.parse_args(args) if len(args) != 3: sys.exit(not p.print_help()) chainfile, oldfasta, newfasta = args chain = Chain(chainfile) ungapped, dt, dq = chain.ungapped, chain.dt, chain.dq print("File `{0}` contains {1} chains.".\ format(chainfile, len(chain)), file=sys.stderr) print("ungapped={0} dt={1} dq={2}".\ format(human_size(ungapped), human_size(dt), human_size(dq)), file=sys.stderr) oldreal, oldnn, oldlen = fsummary([oldfasta, "--outfile=/dev/null"]) print("Old fasta (`{0}`) mapped: {1}".\ format(oldfasta, percentage(ungapped, oldreal)), file=sys.stderr) newreal, newnn, newlen = fsummary([newfasta, "--outfile=/dev/null"]) print("New fasta (`{0}`) mapped: {1}".\ format(newfasta, percentage(ungapped, newreal)), file=sys.stderr)
def _generate_move( cls, char, width=None, fill_char=None, bounce=False, reverse=True, back_char=None): """ Yields strings that simulate movement of a character from left to right. For use with `BarSet.from_char`. Arguments: char : Character to move across the progress bar. width : Width for the progress bar. Default: cls.default_width fill_char : String for empty space. Default: cls.default_fill_char bounce : Whether to move the character in both directions. reverse : Whether to start on the right side. back_char : Character to use for the bounce's backward movement. Default: `char` """ width = width or cls.default_width char = str(char) filler = str(fill_char or cls.default_fill_char) * (width - len(char)) rangeargs = RangeMoveArgs( (0, width, 1), (width, 0, -1), ) if reverse: # Reverse the arguments for range to start from the right. # Not using swap, because the stopping point is different. rangeargs = RangeMoveArgs( (width, -1, -1), (0, width - 1, 1), ) yield from ( ''.join((filler[:i], char, filler[i:])) for i in range(*rangeargs.forward) ) if bounce: bouncechar = char if back_char is None else back_char yield from ( ''.join((filler[:i], str(bouncechar), filler[i:])) for i in range(*rangeargs.backward) )
Yields strings that simulate movement of a character from left to right.
For use with `BarSet.from_char`.

Arguments:
    char      : Character to move across the progress bar.
    width     : Width for the progress bar.
                Default: cls.default_width
    fill_char : String for empty space.
                Default: cls.default_fill_char
    bounce    : Whether to move the character in both directions.
    reverse   : Whether to start on the right side.
    back_char : Character to use for the bounce's backward movement.
                Default: `char`
Below is the the instruction that describes the task: ### Input: Yields strings that simulate movement of a character from left to right. For use with `BarSet.from_char`. Arguments: char : Character to move across the progress bar. width : Width for the progress bar. Default: cls.default_width fill_char : String for empty space. Default: cls.default_fill_char bounce : Whether to move the character in both directions. reverse : Whether to start on the right side. back_char : Character to use for the bounce's backward movement. Default: `char` ### Response: def _generate_move( cls, char, width=None, fill_char=None, bounce=False, reverse=True, back_char=None): """ Yields strings that simulate movement of a character from left to right. For use with `BarSet.from_char`. Arguments: char : Character to move across the progress bar. width : Width for the progress bar. Default: cls.default_width fill_char : String for empty space. Default: cls.default_fill_char bounce : Whether to move the character in both directions. reverse : Whether to start on the right side. back_char : Character to use for the bounce's backward movement. Default: `char` """ width = width or cls.default_width char = str(char) filler = str(fill_char or cls.default_fill_char) * (width - len(char)) rangeargs = RangeMoveArgs( (0, width, 1), (width, 0, -1), ) if reverse: # Reverse the arguments for range to start from the right. # Not using swap, because the stopping point is different. rangeargs = RangeMoveArgs( (width, -1, -1), (0, width - 1, 1), ) yield from ( ''.join((filler[:i], char, filler[i:])) for i in range(*rangeargs.forward) ) if bounce: bouncechar = char if back_char is None else back_char yield from ( ''.join((filler[:i], str(bouncechar), filler[i:])) for i in range(*rangeargs.backward) )
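The generator can be consumed directly to animate a character, for example as frames of a simple terminal progress indicator. The sketch assumes the enclosing class (called BarSet in the docstring) defines default_width and default_fill_char, which the generator falls back to.

import time

frames = BarSet._generate_move('<=>', width=20, fill_char=' ',
                               bounce=True, reverse=False)
for frame in frames:
    print('\r[{}]'.format(frame), end='', flush=True)
    time.sleep(0.05)
print()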
def is_npvalue(u, dtype): ''' is_npvalue(u, dtype) yields True if u is a member of the given dtype according to numpy. The dtype may be specified as a string (see numpy_type) or a type. Note that dtype may be None, 'any', or np.generic, but this will always return True if so. Note that is_npvalue(1, 'int') will yield True, while is_npvalue(np.array(1), 'int'), is_npvalue(np.array([1]), 'int'), and is_npvalue([1], 'int') will yield False (because lists and numpy arrays of ints aren't ints). See also is_nparray, is_npscalar, is_scalar. ''' if is_quantity(u): return is_npvalue(mag(u), dtype=dtype) return any(np.issubdtype(type(u), np.generic if d is None else d) for d in numpy_type(dtype))
is_npvalue(u, dtype) yields True if u is a member of the given dtype according to numpy. The dtype may be specified as a string (see numpy_type) or a type. Note that dtype may be None, 'any', or np.generic, but this will always return True if so. Note that is_npvalue(1, 'int') will yield True, while is_npvalue(np.array(1), 'int'), is_npvalue(np.array([1]), 'int'), and is_npvalue([1], 'int') will yield False (because lists and numpy arrays of ints aren't ints). See also is_nparray, is_npscalar, is_scalar.
Below is the the instruction that describes the task: ### Input: is_npvalue(u, dtype) yields True if u is a member of the given dtype according to numpy. The dtype may be specified as a string (see numpy_type) or a type. Note that dtype may be None, 'any', or np.generic, but this will always return True if so. Note that is_npvalue(1, 'int') will yield True, while is_npvalue(np.array(1), 'int'), is_npvalue(np.array([1]), 'int'), and is_npvalue([1], 'int') will yield False (because lists and numpy arrays of ints aren't ints). See also is_nparray, is_npscalar, is_scalar. ### Response: def is_npvalue(u, dtype): ''' is_npvalue(u, dtype) yields True if u is a member of the given dtype according to numpy. The dtype may be specified as a string (see numpy_type) or a type. Note that dtype may be None, 'any', or np.generic, but this will always return True if so. Note that is_npvalue(1, 'int') will yield True, while is_npvalue(np.array(1), 'int'), is_npvalue(np.array([1]), 'int'), and is_npvalue([1], 'int') will yield False (because lists and numpy arrays of ints aren't ints). See also is_nparray, is_npscalar, is_scalar. ''' if is_quantity(u): return is_npvalue(mag(u), dtype=dtype) return any(np.issubdtype(type(u), np.generic if d is None else d) for d in numpy_type(dtype))
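The contrasts spelled out in the is_npvalue docstring, written as runnable checks against the function above:

import numpy as np

assert is_npvalue(1, 'int')
assert not is_npvalue(np.array(1), 'int')     # a 0-d array is not an int
assert not is_npvalue(np.array([1]), 'int')   # nor is an array of ints
assert not is_npvalue([1], 'int')
assert is_npvalue(1.5, 'any')                 # None / 'any' / np.generic always match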