Columns:
code: string, lengths 75 to 104k
docstring: string, lengths 1 to 46.9k
text: string, lengths 164 to 112k
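A minimal sketch of reading such a dump back in Python, assuming the records are stored as a JSON Lines file with one object per line and the three keys listed above (the filename rows.jsonl is a placeholder, not part of the dataset):

```python
# Minimal sketch: load the dump and inspect one record.
# Assumes a JSON Lines layout; "rows.jsonl" is a placeholder filename.
import json

with open("rows.jsonl", "r", encoding="utf-8") as fh:
    rows = [json.loads(line) for line in fh]

record = rows[0]
print(record["docstring"][:120])  # natural-language description of the function
print(record["code"][:120])       # the corresponding Python source
```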
def listFileArray(self): """ API to list files in DBS. Either non-wildcarded logical_file_name, non-wildcarded dataset, non-wildcarded block_name or non-wildcarded lfn list is required. The combination of a non-wildcarded dataset or block_name with an wildcarded logical_file_name is supported. * For lumi_list the following two json formats are supported: - [a1, a2, a3,] - [[a,b], [c, d],] * lumi_list can be either a list of lumi section numbers as [a1, a2, a3,] or a list of lumi section range as [[a,b], [c, d],]. Thay cannot be mixed. * If lumi_list is provided run only run_num=single-run-number is allowed * When lfn list is present, no run or lumi list is allowed. * When run_num =1 is present, logical_file_name should be present too. :param logical_file_name: logical_file_name of the file :type logical_file_name: str, list :param dataset: dataset :type dataset: str :param block_name: block name :type block_name: str :param release_version: release version :type release_version: str :param pset_hash: parameter set hash :type pset_hash: str :param app_name: Name of the application :type app_name: str :param output_module_label: name of the used output module :type output_module_label: str :param run_num: run , run ranges, and run list. Possible format are: run_num, 'run_min-run_max' or ['run_min-run_max', run1, run2, ...]. Max length 1000. :type run_num: int, list, string :param origin_site_name: site where the file was created :type origin_site_name: str :param lumi_list: List containing luminosity sections. Max length 1000. :type lumi_list: list :param detail: Get detailed information about a file :type detail: bool :param validFileOnly: default=0 return all the files. when =1, only return files with is_file_valid=1 or dataset_access_type=PRODUCTION or VALID :type validFileOnly: int :param sumOverLumi: default=0 event_count is the event_count/file, when=1 and run_num is specified, the event_count is sum of the event_count/lumi for that run; When sumOverLumi = 1, no other input can be a list, for example no run_num list, lumi list or lfn list. :type sumOverLumi: int :returns: List of dictionaries containing the following keys (logical_file_name). If detail parameter is true, the dictionaries contain the following keys (check_sum, branch_hash_id, adler32, block_id, event_count, file_type, create_by, logical_file_name, creation_date, last_modified_by, dataset, block_name, file_id, file_size, last_modification_date, dataset_id, file_type_id, auto_cross_section, md5, is_file_valid) :rtype: list of dicts """ ret = [] try : body = request.body.read() if body: data = cjson.decode(body) data = validateJSONInputNoCopy("files", data, True) if 'sumOverLumi' in data and data['sumOverLumi'] ==1: if ('logical_file_name' in data and isinstance(data['logical_file_name'], list)) \ or ('run_num' in data and isinstance(data['run_num'], list)): dbsExceptionHandler("dbsException-invalid-input", "When sumOverLumi=1, no input can be a list becaue nesting of WITH clause within WITH clause not supported yet by Oracle. 
", self.logger.exception) if 'lumi_list' in data and data['lumi_list']: if 'sumOverLumi' in data and data['sumOverLumi'] ==1: dbsExceptionHandler("dbsException-invalid-input", "When lumi_list is given, sumOverLumi must set to 0 becaue nesting of WITH clause within WITH clause not supported yet by Oracle.", self.logger.exception) data['lumi_list'] = self.dbsUtils2.decodeLumiIntervals(data['lumi_list']) if 'run_num' not in data.keys() or not data['run_num'] or data['run_num'] ==-1 : dbsExceptionHandler("dbsException-invalid-input", "When lumi_list is given, require a single run_num.", self.logger.exception) #check if run_num =1 w/o lfn if ('logical_file_name' not in data or not data['logical_file_name']) and 'run_num' in data: if isinstance(data['run_num'], list): if 1 in data['run_num'] or '1' in data['run_num']: raise dbsExceptionHandler("dbsException-invalid-input", 'files API does not supprt run_num=1 without logical_file_name.', self.logger.exception) else: if data['run_num'] == 1 or data['run_num'] == '1': raise dbsExceptionHandler("dbsException-invalid-input", 'files API does not supprt run_num=1 without logical_file_name.', self.logger.exception) #Because CMSWEB has a 300 seconds responding time. We have to limit the array siz to make sure that #the API can be finished in 300 second. See github issues #465 for tests' results. # YG May-20-2015 max_array_size = 1000 if ( 'run_num' in data.keys() and isinstance(data['run_num'], list) and len(data['run_num'])>max_array_size)\ or ('lumi_list' in data.keys() and isinstance(data['lumi_list'], list) and len(data['lumi_list'])>max_array_size)\ or ('logical_file_name' in data.keys() and isinstance(data['logical_file_name'], list) and len(data['logical_file_name'])>max_array_size): dbsExceptionHandler("dbsException-invalid-input", "The Max list length supported in listFileArray is %s." %max_array_size, self.logger.exception) # ret = self.dbsFile.listFiles(input_body=data) except cjson.DecodeError as De: dbsExceptionHandler('dbsException-invalid-input2', "Invalid input", self.logger.exception, str(De)) except dbsException as de: dbsExceptionHandler(de.eCode, de.message, self.logger.exception, de.serverError) except HTTPError as he: raise he except Exception as ex: sError = "DBSReaderModel/listFileArray. %s \n Exception trace: \n %s" \ % (ex, traceback.format_exc()) dbsExceptionHandler('dbsException-server-error', ex.message, self.logger.exception, sError) for item in ret: yield item
API to list files in DBS. Either non-wildcarded logical_file_name, non-wildcarded dataset, non-wildcarded block_name or non-wildcarded lfn list is required. The combination of a non-wildcarded dataset or block_name with an wildcarded logical_file_name is supported. * For lumi_list the following two json formats are supported: - [a1, a2, a3,] - [[a,b], [c, d],] * lumi_list can be either a list of lumi section numbers as [a1, a2, a3,] or a list of lumi section range as [[a,b], [c, d],]. Thay cannot be mixed. * If lumi_list is provided run only run_num=single-run-number is allowed * When lfn list is present, no run or lumi list is allowed. * When run_num =1 is present, logical_file_name should be present too. :param logical_file_name: logical_file_name of the file :type logical_file_name: str, list :param dataset: dataset :type dataset: str :param block_name: block name :type block_name: str :param release_version: release version :type release_version: str :param pset_hash: parameter set hash :type pset_hash: str :param app_name: Name of the application :type app_name: str :param output_module_label: name of the used output module :type output_module_label: str :param run_num: run , run ranges, and run list. Possible format are: run_num, 'run_min-run_max' or ['run_min-run_max', run1, run2, ...]. Max length 1000. :type run_num: int, list, string :param origin_site_name: site where the file was created :type origin_site_name: str :param lumi_list: List containing luminosity sections. Max length 1000. :type lumi_list: list :param detail: Get detailed information about a file :type detail: bool :param validFileOnly: default=0 return all the files. when =1, only return files with is_file_valid=1 or dataset_access_type=PRODUCTION or VALID :type validFileOnly: int :param sumOverLumi: default=0 event_count is the event_count/file, when=1 and run_num is specified, the event_count is sum of the event_count/lumi for that run; When sumOverLumi = 1, no other input can be a list, for example no run_num list, lumi list or lfn list. :type sumOverLumi: int :returns: List of dictionaries containing the following keys (logical_file_name). If detail parameter is true, the dictionaries contain the following keys (check_sum, branch_hash_id, adler32, block_id, event_count, file_type, create_by, logical_file_name, creation_date, last_modified_by, dataset, block_name, file_id, file_size, last_modification_date, dataset_id, file_type_id, auto_cross_section, md5, is_file_valid) :rtype: list of dicts
def _publish_message(self, exchange, routing_key, message, properties): """Publish the message to RabbitMQ :param str exchange: The exchange to publish to :param str routing_key: The routing key to publish with :param str message: The message body :param pika.BasicProperties: The message properties """ if self._rabbitmq_is_closed or not self._rabbitmq_channel: LOGGER.warning('Temporarily buffering message to publish') self._add_to_publish_stack(exchange, routing_key, message, properties) return self._rabbitmq_channel.basic_publish(exchange, routing_key, message, properties)
Publish the message to RabbitMQ :param str exchange: The exchange to publish to :param str routing_key: The routing key to publish with :param str message: The message body :param pika.BasicProperties: The message properties
def deleteEdge(self, networkId, edgeId, verbose=None): """ Deletes the edge specified by the `edgeId` and `networkId` parameters. :param networkId: SUID of the network containing the edge. :param edgeId: SUID of the edge :param verbose: print more :returns: default: successful operation """ response=api(url=self.___url+'networks/'+str(networkId)+'/edges/'+str(edgeId)+'', method="DELETE", verbose=verbose) return response
Deletes the edge specified by the `edgeId` and `networkId` parameters. :param networkId: SUID of the network containing the edge. :param edgeId: SUID of the edge :param verbose: print more :returns: default: successful operation
def expand_source_files(filenames, cwd=None): """Expand a list of filenames passed in as sources. This is a helper function for handling command line arguments that specify a list of source files and directories. Any directories in filenames will be scanned recursively for .py files. Any files that do not end with ".py" will be dropped. Args: filenames: A list of filenames to process. cwd: An optional working directory to expand relative paths Returns: A list of sorted full paths to .py files """ out = [] for f in expand_paths(filenames, cwd): if os.path.isdir(f): # If we have a directory, collect all the .py files within it. out += collect_files(f, ".py") else: if f.endswith(".py"): out.append(f) return sorted(set(out))
Expand a list of filenames passed in as sources. This is a helper function for handling command line arguments that specify a list of source files and directories. Any directories in filenames will be scanned recursively for .py files. Any files that do not end with ".py" will be dropped. Args: filenames: A list of filenames to process. cwd: An optional working directory to expand relative paths Returns: A list of sorted full paths to .py files
def known_context_patterns(self, val): ''' val must be an ArticleContextPattern, a dictionary, or list of \ dictionaries e.g., {'attr': 'class', 'value': 'my-article-class'} or [{'attr': 'class', 'value': 'my-article-class'}, {'attr': 'id', 'value': 'my-article-id'}] ''' def create_pat_from_dict(val): '''Helper function used to create an ArticleContextPattern from a dictionary ''' if "tag" in val: pat = ArticleContextPattern(tag=val["tag"]) if "attr" in val: pat.attr = val["attr"] pat.value = val["value"] elif "attr" in val: pat = ArticleContextPattern(attr=val["attr"], value=val["value"]) if "domain" in val: pat.domain = val["domain"] return pat if isinstance(val, list): self._known_context_patterns = [ x if isinstance(x, ArticleContextPattern) else create_pat_from_dict(x) for x in val ] + self.known_context_patterns elif isinstance(val, ArticleContextPattern): self._known_context_patterns.insert(0, val) elif isinstance(val, dict): self._known_context_patterns.insert(0, create_pat_from_dict(val)) else: raise Exception("Unknown type: {}. Use a ArticleContextPattern.".format(type(val)))
val must be an ArticleContextPattern, a dictionary, or list of \ dictionaries e.g., {'attr': 'class', 'value': 'my-article-class'} or [{'attr': 'class', 'value': 'my-article-class'}, {'attr': 'id', 'value': 'my-article-id'}]
def load_segment(self, f, is_irom_segment=False): """ Load the next segment from the image file """ file_offs = f.tell() (offset, size) = struct.unpack('<II', f.read(8)) self.warn_if_unusual_segment(offset, size, is_irom_segment) segment_data = f.read(size) if len(segment_data) < size: raise FatalError('End of file reading segment 0x%x, length %d (actual length %d)' % (offset, size, len(segment_data))) segment = ImageSegment(offset, segment_data, file_offs) self.segments.append(segment) return segment
Load the next segment from the image file
def is_valid(self): """ Tests if the dependency is in a valid state """ return super(SimpleDependency, self).is_valid() or ( self.requirement.immediate_rebind and self._pending_ref is not None )
Tests if the dependency is in a valid state
def sanitize_loginurl (self): """Make login configuration consistent.""" url = self["loginurl"] disable = False if not self["loginpasswordfield"]: log.warn(LOG_CHECK, _("no CGI password fieldname given for login URL.")) disable = True if not self["loginuserfield"]: log.warn(LOG_CHECK, _("no CGI user fieldname given for login URL.")) disable = True if self.get_user_password(url) == (None, None): log.warn(LOG_CHECK, _("no user/password authentication data found for login URL.")) disable = True if not url.lower().startswith(("http:", "https:")): log.warn(LOG_CHECK, _("login URL is not a HTTP URL.")) disable = True urlparts = urlparse.urlsplit(url) if not urlparts[0] or not urlparts[1] or not urlparts[2]: log.warn(LOG_CHECK, _("login URL is incomplete.")) disable = True if disable: log.warn(LOG_CHECK, _("disabling login URL %(url)s.") % {"url": url}) self["loginurl"] = None
Make login configuration consistent.
def as_minimized(values: List[float], maximized: List[bool]) -> List[float]: """ Return vector values as minimized """ return [v * -1. if m else v for v, m in zip(values, maximized)]
Return vector values as minimized
def set_project_status(project_id, status, **kwargs): """ Set the status of a project to 'X' """ user_id = kwargs.get('user_id') #check_perm(user_id, 'delete_project') project = _get_project(project_id) project.check_write_permission(user_id) project.status = status db.DBSession.flush()
Set the status of a project to 'X'
def multi_post(self, urls, query_params=None, data=None, to_json=True, send_as_file=False): """Issue multiple POST requests. Args: urls - A string URL or list of string URLs query_params - None, a dict, or a list of dicts representing the query params data - None, a dict or string, or a list of dicts and strings representing the data body. to_json - A boolean, should the responses be returned as JSON blobs send_as_file - A boolean, should the data be sent as a file. Returns: a list of dicts if to_json is set of requests.response otherwise. Raises: InvalidRequestError - Can not decide how many requests to issue. """ return self._multi_request( MultiRequest._VERB_POST, urls, query_params, data, to_json=to_json, send_as_file=send_as_file, )
Issue multiple POST requests. Args: urls - A string URL or list of string URLs query_params - None, a dict, or a list of dicts representing the query params data - None, a dict or string, or a list of dicts and strings representing the data body. to_json - A boolean, should the responses be returned as JSON blobs send_as_file - A boolean, should the data be sent as a file. Returns: a list of dicts if to_json is set of requests.response otherwise. Raises: InvalidRequestError - Can not decide how many requests to issue.
def getdrawings(): """Get all the drawings.""" infos = Info.query.all() sketches = [json.loads(info.contents) for info in infos] return jsonify(drawings=sketches)
Get all the drawings.
async def _on_trace_notification(self, trace_event): """Callback function called when a trace chunk is received. Args: trace_chunk (dict): The received trace chunk information """ conn_string = trace_event.get('connection_string') payload = trace_event.get('payload') await self.notify_event(conn_string, 'trace', payload)
Callback function called when a trace chunk is received. Args: trace_chunk (dict): The received trace chunk information
def reset(ctx): """ Reset OpenPGP application. This action will wipe all OpenPGP data, and set all PINs to their default values. """ click.echo("Resetting OpenPGP data, don't remove your YubiKey...") ctx.obj['controller'].reset() click.echo('Success! All data has been cleared and default PINs are set.') echo_default_pins()
Reset OpenPGP application. This action will wipe all OpenPGP data, and set all PINs to their default values.
def fetch_existing_token_of_user(self, client_id, grant_type, user_id): """ Retrieve an access token issued to a client and user for a specific grant. :param client_id: The identifier of a client as a `str`. :param grant_type: The type of grant. :param user_id: The identifier of the user the access token has been issued to. :return: An instance of :class:`oauth2.datatype.AccessToken`. :raises: :class:`oauth2.error.AccessTokenNotFound` if not access token could be retrieved. """ token_data = self.fetchone(self.fetch_existing_token_of_user_query, client_id, grant_type, user_id) if token_data is None: raise AccessTokenNotFound scopes = self._fetch_scopes(access_token_id=token_data[0]) data = self._fetch_data(access_token_id=token_data[0]) return self._row_to_token(data=data, scopes=scopes, row=token_data)
Retrieve an access token issued to a client and user for a specific grant. :param client_id: The identifier of a client as a `str`. :param grant_type: The type of grant. :param user_id: The identifier of the user the access token has been issued to. :return: An instance of :class:`oauth2.datatype.AccessToken`. :raises: :class:`oauth2.error.AccessTokenNotFound` if not access token could be retrieved.
def load(cls, filename, schema_only=False): """ Load an ARFF File from a file. """ o = open(filename) s = o.read() a = cls.parse(s, schema_only=schema_only) if not schema_only: a._filename = filename o.close() return a
Load an ARFF File from a file.
def del_repo(repo, **kwargs): ''' Delete a repo from the sources.list / sources.list.d If the .list file is in the sources.list.d directory and the file that the repo exists in does not contain any other repo configuration, the file itself will be deleted. The repo passed in must be a fully formed repository definition string. CLI Examples: .. code-block:: bash salt '*' pkg.del_repo "myrepo definition" ''' _check_apt() is_ppa = False if repo.startswith('ppa:') and __grains__['os'] in ('Ubuntu', 'Mint', 'neon'): # This is a PPA definition meaning special handling is needed # to derive the name. is_ppa = True dist = __grains__['lsb_distrib_codename'] if not HAS_SOFTWAREPROPERTIES: _warn_software_properties(repo) owner_name, ppa_name = repo[4:].split('/') if 'ppa_auth' in kwargs: auth_info = '{0}@'.format(kwargs['ppa_auth']) repo = LP_PVT_SRC_FORMAT.format(auth_info, dist, owner_name, ppa_name) else: repo = LP_SRC_FORMAT.format(owner_name, ppa_name, dist) else: if hasattr(softwareproperties.ppa, 'PPAShortcutHandler'): repo = softwareproperties.ppa.PPAShortcutHandler(repo).expand(dist)[0] else: repo = softwareproperties.ppa.expand_ppa_line(repo, dist)[0] sources = sourceslist.SourcesList() repos = [s for s in sources.list if not s.invalid] if repos: deleted_from = dict() try: repo_type, \ repo_architectures, \ repo_uri, \ repo_dist, \ repo_comps = _split_repo_str(repo) except SyntaxError: raise SaltInvocationError( 'Error: repo \'{0}\' not a well formatted definition' .format(repo) ) for source in repos: if (source.type == repo_type and source.architectures == repo_architectures and source.uri == repo_uri and source.dist == repo_dist): s_comps = set(source.comps) r_comps = set(repo_comps) if s_comps.intersection(r_comps): deleted_from[source.file] = 0 source.comps = list(s_comps.difference(r_comps)) if not source.comps: try: sources.remove(source) except ValueError: pass # PPAs are special and can add deb-src where expand_ppa_line # doesn't always reflect this. Lets just cleanup here for good # measure if (is_ppa and repo_type == 'deb' and source.type == 'deb-src' and source.uri == repo_uri and source.dist == repo_dist): s_comps = set(source.comps) r_comps = set(repo_comps) if s_comps.intersection(r_comps): deleted_from[source.file] = 0 source.comps = list(s_comps.difference(r_comps)) if not source.comps: try: sources.remove(source) except ValueError: pass sources.save() if deleted_from: ret = '' for source in sources: if source.file in deleted_from: deleted_from[source.file] += 1 for repo_file, count in six.iteritems(deleted_from): msg = 'Repo \'{0}\' has been removed from {1}.\n' if count == 0 and 'sources.list.d/' in repo_file: if os.path.isfile(repo_file): msg = ('File {1} containing repo \'{0}\' has been ' 'removed.') try: os.remove(repo_file) except OSError: pass ret += msg.format(repo, repo_file) # explicit refresh after a repo is deleted refresh_db() return ret raise CommandExecutionError( 'Repo {0} doesn\'t exist in the sources.list(s)'.format(repo) )
Delete a repo from the sources.list / sources.list.d If the .list file is in the sources.list.d directory and the file that the repo exists in does not contain any other repo configuration, the file itself will be deleted. The repo passed in must be a fully formed repository definition string. CLI Examples: .. code-block:: bash salt '*' pkg.del_repo "myrepo definition"
def _squeeze(self, var_tbl: dict) -> dict: """ Makes sure no extra dimensions are floating around in the input arrays Returns inferred dict of variable: val pairs """ var_names = var_tbl.copy() # squeeze any numpy arrays for v in var_tbl: val = var_tbl[v] if isinstance(val, np.ndarray): var_names[v] = val.squeeze() return var_names
Makes sure no extra dimensions are floating around in the input arrays Returns inferred dict of variable: val pairs
def _get_ns(self, reply): '''_get_ns Low-level api: Return a dict of nsmap. Parameters ---------- reply : `Element` rpc-reply as an instance of Element. Returns ------- dict A dict of nsmap. ''' def get_prefix(url): if url in special_prefixes: return special_prefixes[url] for i in self.namespaces: if url == i[2]: return i[1] return None root = reply.getroottree() urls = set() for node in root.iter(): urls.update([u for p, u in node.nsmap.items()]) ret = {url: get_prefix(url) for url in urls} i = 0 for url in [url for url in ret if ret[url] is None]: logger.warning('{} cannot be found in namespaces of any ' \ 'models'.format(url)) ret[url] = 'ns{:02d}'.format(i) i += 1 return {p: u for u, p in ret.items()}
_get_ns Low-level api: Return a dict of nsmap. Parameters ---------- reply : `Element` rpc-reply as an instance of Element. Returns ------- dict A dict of nsmap.
def SCISetStylingEx(self, line: int, col: int, style: bytearray): """ Pythonic wrapper for the SCI_SETSTYLINGEX command. For example, the following code will fetch the styling for the first five characters applies it verbatim to the next five characters. text, style = SCIGetStyledText((0, 0, 0, 5)) SCISetStylingEx((0, 5), style) |Args| * ``line`` (**int**): line number where to start styling. * ``col`` (**int**): column number where to start styling. * ``style`` (**bytearray**): Scintilla style bits. |Returns| **None** |Raises| * **QtmacsArgumentError** if at least one argument has an invalid type. """ if not self.isPositionValid(line, col): return pos = self.positionFromLineIndex(line, col) self.SendScintilla(self.SCI_STARTSTYLING, pos, 0xFF) self.SendScintilla(self.SCI_SETSTYLINGEX, len(style), style)
Pythonic wrapper for the SCI_SETSTYLINGEX command. For example, the following code will fetch the styling for the first five characters applies it verbatim to the next five characters. text, style = SCIGetStyledText((0, 0, 0, 5)) SCISetStylingEx((0, 5), style) |Args| * ``line`` (**int**): line number where to start styling. * ``col`` (**int**): column number where to start styling. * ``style`` (**bytearray**): Scintilla style bits. |Returns| **None** |Raises| * **QtmacsArgumentError** if at least one argument has an invalid type.
def update_parser(self, parser): """Update config dictionary with declared arguments in an argparse.parser New variables will be created, and existing ones overridden. Args: parser (argparse.ArgumentParser): parser to read variables from """ self._parser = parser ini_str = argparse_to_ini(parser) configp = configparser.ConfigParser(allow_no_value=True) configp.read_dict(self._config) configp.read_string(ini_str) self._config.update( {s: dict(configp.items(s)) for s in configp.sections()} )
Update config dictionary with declared arguments in an argparse.parser New variables will be created, and existing ones overridden. Args: parser (argparse.ArgumentParser): parser to read variables from
def _decode_value(self, value): """ Decodes the value by turning any binary data back into Python objects. The method searches for ObjectId values, loads the associated binary data from GridFS and returns the decoded Python object. Args: value (object): The value that should be decoded. Raises: DataStoreDecodingError: An ObjectId was found but the id is not a valid GridFS id. DataStoreDecodeUnknownType: The type of the specified value is unknown. Returns: object: The decoded value as a valid Python object. """ if isinstance(value, (int, float, str, bool, datetime)): return value elif isinstance(value, list): return [self._decode_value(item) for item in value] elif isinstance(value, dict): result = {} for key, item in value.items(): result[key] = self._decode_value(item) return result elif isinstance(value, ObjectId): if self._gridfs.exists({"_id": value}): return pickle.loads(self._gridfs.get(value).read()) else: raise DataStoreGridfsIdInvalid() else: raise DataStoreDecodeUnknownType()
Decodes the value by turning any binary data back into Python objects. The method searches for ObjectId values, loads the associated binary data from GridFS and returns the decoded Python object. Args: value (object): The value that should be decoded. Raises: DataStoreDecodingError: An ObjectId was found but the id is not a valid GridFS id. DataStoreDecodeUnknownType: The type of the specified value is unknown. Returns: object: The decoded value as a valid Python object.
def deserialize(self): """Turns a sequence of bytes into a message dictionary.""" if self.msgbytes is None: raise LLRPError('No message bytes to deserialize.') data = self.msgbytes msgtype, length, msgid = struct.unpack(self.full_hdr_fmt, data[:self.full_hdr_len]) ver = (msgtype >> 10) & BITMASK(3) msgtype = msgtype & BITMASK(10) try: name = Message_Type2Name[msgtype] logger.debug('deserializing %s command', name) decoder = Message_struct[name]['decode'] except KeyError: raise LLRPError('Cannot find decoder for message type ' '{}'.format(msgtype)) body = data[self.full_hdr_len:length] try: self.msgdict = { name: dict(decoder(body)) } self.msgdict[name]['Ver'] = ver self.msgdict[name]['Type'] = msgtype self.msgdict[name]['ID'] = msgid logger.debug('done deserializing %s command', name) except ValueError: logger.exception('Unable to decode body %s, %s', body, decoder(body)) except LLRPError: logger.exception('Problem with %s message format', name) return '' return ''
Turns a sequence of bytes into a message dictionary.
def set_index(self, index): """Display the data of the given index :param index: the index to paint :type index: QtCore.QModelIndex :returns: None :rtype: None :raises: None """ item = index.internalPointer() note = item.internal_data() self.content_lb.setText(note.content) self.created_dte.setDateTime(dt_to_qdatetime(note.date_created)) self.updated_dte.setDateTime(dt_to_qdatetime(note.date_updated)) self.username_lb.setText(note.user.username)
Display the data of the given index :param index: the index to paint :type index: QtCore.QModelIndex :returns: None :rtype: None :raises: None
Below is the the instruction that describes the task: ### Input: Display the data of the given index :param index: the index to paint :type index: QtCore.QModelIndex :returns: None :rtype: None :raises: None ### Response: def set_index(self, index): """Display the data of the given index :param index: the index to paint :type index: QtCore.QModelIndex :returns: None :rtype: None :raises: None """ item = index.internalPointer() note = item.internal_data() self.content_lb.setText(note.content) self.created_dte.setDateTime(dt_to_qdatetime(note.date_created)) self.updated_dte.setDateTime(dt_to_qdatetime(note.date_updated)) self.username_lb.setText(note.user.username)
def format_general_name(name): """Format a single general name. >>> import ipaddress >>> format_general_name(x509.DNSName('example.com')) 'DNS:example.com' >>> format_general_name(x509.IPAddress(ipaddress.IPv4Address('127.0.0.1'))) 'IP:127.0.0.1' """ if isinstance(name, x509.DirectoryName): value = format_name(name.value) else: value = name.value return '%s:%s' % (SAN_NAME_MAPPINGS[type(name)], value)
Format a single general name. >>> import ipaddress >>> format_general_name(x509.DNSName('example.com')) 'DNS:example.com' >>> format_general_name(x509.IPAddress(ipaddress.IPv4Address('127.0.0.1'))) 'IP:127.0.0.1'
Below is the the instruction that describes the task: ### Input: Format a single general name. >>> import ipaddress >>> format_general_name(x509.DNSName('example.com')) 'DNS:example.com' >>> format_general_name(x509.IPAddress(ipaddress.IPv4Address('127.0.0.1'))) 'IP:127.0.0.1' ### Response: def format_general_name(name): """Format a single general name. >>> import ipaddress >>> format_general_name(x509.DNSName('example.com')) 'DNS:example.com' >>> format_general_name(x509.IPAddress(ipaddress.IPv4Address('127.0.0.1'))) 'IP:127.0.0.1' """ if isinstance(name, x509.DirectoryName): value = format_name(name.value) else: value = name.value return '%s:%s' % (SAN_NAME_MAPPINGS[type(name)], value)
def refresh_existing_encodings(self, encodings_from_file): """ Refresh existing encodings for messages, when encoding was changed by user in dialog :return: """ update = False for msg in self.table_model.protocol.messages: i = next((i for i, d in enumerate(encodings_from_file) if d.name == msg.decoder.name), 0) if msg.decoder != encodings_from_file[i]: update = True msg.decoder = encodings_from_file[i] msg.clear_decoded_bits() msg.clear_encoded_bits() if update: self.refresh_table() self.refresh_estimated_time()
Refresh existing encodings for messages, when encoding was changed by user in dialog :return:
Below is the the instruction that describes the task: ### Input: Refresh existing encodings for messages, when encoding was changed by user in dialog :return: ### Response: def refresh_existing_encodings(self, encodings_from_file): """ Refresh existing encodings for messages, when encoding was changed by user in dialog :return: """ update = False for msg in self.table_model.protocol.messages: i = next((i for i, d in enumerate(encodings_from_file) if d.name == msg.decoder.name), 0) if msg.decoder != encodings_from_file[i]: update = True msg.decoder = encodings_from_file[i] msg.clear_decoded_bits() msg.clear_encoded_bits() if update: self.refresh_table() self.refresh_estimated_time()
def _convert_type(self, data_type): ''' CDF data types to python struct data types ''' if (data_type == 1) or (data_type == 41): dt_string = 'b' elif data_type == 2: dt_string = 'h' elif data_type == 4: dt_string = 'i' elif (data_type == 8) or (data_type == 33): dt_string = 'q' elif data_type == 11: dt_string = 'B' elif data_type == 12: dt_string = 'H' elif data_type == 14: dt_string = 'I' elif (data_type == 21) or (data_type == 44): dt_string = 'f' elif (data_type == 22) or (data_type == 45) or (data_type == 31): dt_string = 'd' elif (data_type == 32): dt_string = 'd' elif (data_type == 51) or (data_type == 52): dt_string = 's' return dt_string
CDF data types to python struct data types
Below is the the instruction that describes the task: ### Input: CDF data types to python struct data types ### Response: def _convert_type(self, data_type): ''' CDF data types to python struct data types ''' if (data_type == 1) or (data_type == 41): dt_string = 'b' elif data_type == 2: dt_string = 'h' elif data_type == 4: dt_string = 'i' elif (data_type == 8) or (data_type == 33): dt_string = 'q' elif data_type == 11: dt_string = 'B' elif data_type == 12: dt_string = 'H' elif data_type == 14: dt_string = 'I' elif (data_type == 21) or (data_type == 44): dt_string = 'f' elif (data_type == 22) or (data_type == 45) or (data_type == 31): dt_string = 'd' elif (data_type == 32): dt_string = 'd' elif (data_type == 51) or (data_type == 52): dt_string = 's' return dt_string
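The single-character strings returned by _convert_type are standard struct format codes, so a caller can pass them straight to struct.unpack. A small sketch under that assumption; the byte string and the little-endian prefix are illustrative, not taken from the CDF reader itself.

import struct

dt_string = 'i'                                  # what _convert_type(4) would return
value, = struct.unpack('<' + dt_string, b'\x2a\x00\x00\x00')
print(value)                                     # 42, a little-endian 32-bit signed int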
def add_tab_stop(self, position, alignment=WD_TAB_ALIGNMENT.LEFT, leader=WD_TAB_LEADER.SPACES): """ Add a new tab stop at *position*, a |Length| object specifying the location of the tab stop relative to the paragraph edge. A negative *position* value is valid and appears in hanging indentation. Tab alignment defaults to left, but may be specified by passing a member of the :ref:`WdTabAlignment` enumeration as *alignment*. An optional leader character can be specified by passing a member of the :ref:`WdTabLeader` enumeration as *leader*. """ tabs = self._pPr.get_or_add_tabs() tab = tabs.insert_tab_in_order(position, alignment, leader) return TabStop(tab)
Add a new tab stop at *position*, a |Length| object specifying the location of the tab stop relative to the paragraph edge. A negative *position* value is valid and appears in hanging indentation. Tab alignment defaults to left, but may be specified by passing a member of the :ref:`WdTabAlignment` enumeration as *alignment*. An optional leader character can be specified by passing a member of the :ref:`WdTabLeader` enumeration as *leader*.
Below is the the instruction that describes the task: ### Input: Add a new tab stop at *position*, a |Length| object specifying the location of the tab stop relative to the paragraph edge. A negative *position* value is valid and appears in hanging indentation. Tab alignment defaults to left, but may be specified by passing a member of the :ref:`WdTabAlignment` enumeration as *alignment*. An optional leader character can be specified by passing a member of the :ref:`WdTabLeader` enumeration as *leader*. ### Response: def add_tab_stop(self, position, alignment=WD_TAB_ALIGNMENT.LEFT, leader=WD_TAB_LEADER.SPACES): """ Add a new tab stop at *position*, a |Length| object specifying the location of the tab stop relative to the paragraph edge. A negative *position* value is valid and appears in hanging indentation. Tab alignment defaults to left, but may be specified by passing a member of the :ref:`WdTabAlignment` enumeration as *alignment*. An optional leader character can be specified by passing a member of the :ref:`WdTabLeader` enumeration as *leader*. """ tabs = self._pPr.get_or_add_tabs() tab = tabs.insert_tab_in_order(position, alignment, leader) return TabStop(tab)
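For context, python-docx exposes this method through a paragraph's tab-stops collection. A usage sketch assuming the stock python-docx API (Document, ParagraphFormat.tab_stops, and the WD_TAB_* enums); the file name and position are arbitrary.

from docx import Document
from docx.shared import Inches
from docx.enum.text import WD_TAB_ALIGNMENT, WD_TAB_LEADER

doc = Document()
para = doc.add_paragraph('Chapter 1\tPage 7')
# Right-aligned, dot-leader tab stop four inches from the left edge.
para.paragraph_format.tab_stops.add_tab_stop(
    Inches(4.0), WD_TAB_ALIGNMENT.RIGHT, WD_TAB_LEADER.DOTS)
doc.save('tabs.docx')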
def execute_query(self, payload): '''Execute the query and return the response''' if vars(self).get('oauth'): if not self.oauth.token_is_valid(): # Refresh token if token has expired self.oauth.refresh_token() response = self.oauth.session.get(self.PRIVATE_URL, params= payload, header_auth=True) else: response = requests.get(self.PUBLIC_URL, params= payload) self._response = response # Saving last response object. return response
Execute the query and return the response
Below is the the instruction that describes the task: ### Input: Execute the query and return the response ### Response: def execute_query(self, payload): '''Execute the query and return the response''' if vars(self).get('oauth'): if not self.oauth.token_is_valid(): # Refresh token if token has expired self.oauth.refresh_token() response = self.oauth.session.get(self.PRIVATE_URL, params= payload, header_auth=True) else: response = requests.get(self.PUBLIC_URL, params= payload) self._response = response # Saving last response object. return response
def delete_network(self, name, tenant_id, subnet_id, net_id): """Delete the openstack subnet and network. """ try: self.neutronclient.delete_subnet(subnet_id) except Exception as exc: LOG.error("Failed to delete subnet %(sub)s exc %(exc)s", {'sub': subnet_id, 'exc': str(exc)}) return try: self.neutronclient.delete_network(net_id) except Exception as exc: LOG.error("Failed to delete network %(name)s exc %(exc)s", {'name': name, 'exc': str(exc)})
Delete the openstack subnet and network.
Below is the the instruction that describes the task: ### Input: Delete the openstack subnet and network. ### Response: def delete_network(self, name, tenant_id, subnet_id, net_id): """Delete the openstack subnet and network. """ try: self.neutronclient.delete_subnet(subnet_id) except Exception as exc: LOG.error("Failed to delete subnet %(sub)s exc %(exc)s", {'sub': subnet_id, 'exc': str(exc)}) return try: self.neutronclient.delete_network(net_id) except Exception as exc: LOG.error("Failed to delete network %(name)s exc %(exc)s", {'name': name, 'exc': str(exc)})
def off(self, dev_id): """Turn OFF all features of the device: schedules, weather intelligence, water budget, etc. """ path = 'device/off' payload = {'id': dev_id} return self.rachio.put(path, payload)
Turn OFF all features of the device: schedules, weather intelligence, water budget, etc.
Below is the the instruction that describes the task: ### Input: Turn OFF all features of the device: schedules, weather intelligence, water budget, etc. ### Response: def off(self, dev_id): """Turn OFF all features of the device: schedules, weather intelligence, water budget, etc. """ path = 'device/off' payload = {'id': dev_id} return self.rachio.put(path, payload)
def get_level_fmt(self, level): """Get format for log level.""" key = None if level == logging.DEBUG: key = 'debug' elif level == logging.INFO: key = 'info' elif level == logging.WARNING: key = 'warning' elif level == logging.ERROR: key = 'error' elif level == logging.CRITICAL: key = 'critical' return self.overwrites.get(key, self.fmt)
Get format for log level.
Below is the the instruction that describes the task: ### Input: Get format for log level. ### Response: def get_level_fmt(self, level): """Get format for log level.""" key = None if level == logging.DEBUG: key = 'debug' elif level == logging.INFO: key = 'info' elif level == logging.WARNING: key = 'warning' elif level == logging.ERROR: key = 'error' elif level == logging.CRITICAL: key = 'critical' return self.overwrites.get(key, self.fmt)
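The if/elif ladder is equivalent to a small lookup table keyed by the standard logging levels. A table-driven sketch of the same behaviour, assuming self.overwrites and self.fmt exist exactly as in the method above; unknown levels map to None and fall back to self.fmt just like the original.

import logging

LEVEL_KEYS = {
    logging.DEBUG: 'debug',
    logging.INFO: 'info',
    logging.WARNING: 'warning',
    logging.ERROR: 'error',
    logging.CRITICAL: 'critical',
}

def get_level_fmt(self, level):
    # Same fallback as the original: missing key -> None -> self.fmt.
    return self.overwrites.get(LEVEL_KEYS.get(level), self.fmt)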
def _set_properties(self, api_response): """Update properties from resource in body of ``api_response`` :type api_response: dict :param api_response: response returned from an API call """ job_id_present = ( "jobReference" in api_response and "jobId" in api_response["jobReference"] and "projectId" in api_response["jobReference"] ) if not job_id_present: raise ValueError("QueryResult requires a job reference") self._properties.clear() self._properties.update(copy.deepcopy(api_response))
Update properties from resource in body of ``api_response`` :type api_response: dict :param api_response: response returned from an API call
Below is the the instruction that describes the task: ### Input: Update properties from resource in body of ``api_response`` :type api_response: dict :param api_response: response returned from an API call ### Response: def _set_properties(self, api_response): """Update properties from resource in body of ``api_response`` :type api_response: dict :param api_response: response returned from an API call """ job_id_present = ( "jobReference" in api_response and "jobId" in api_response["jobReference"] and "projectId" in api_response["jobReference"] ) if not job_id_present: raise ValueError("QueryResult requires a job reference") self._properties.clear() self._properties.update(copy.deepcopy(api_response))
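The guard only checks that the response carries a complete job reference. A minimal dictionary that passes the check; the field values are invented for illustration.

api_response = {
    'jobReference': {'jobId': 'job_123', 'projectId': 'my-project'},
    'totalRows': '42',
}
# Anything missing jobReference, jobId or projectId raises
# ValueError('QueryResult requires a job reference').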
def AddForwardedIp(self, address, interface): """Configure a new IP address on the network interface. Args: address: string, the IP address to configure. interface: string, the output device to use. """ for ip in list(netaddr.IPNetwork(address)): self._RunIfconfig(args=[interface, 'alias', '%s/32' % str(ip)])
Configure a new IP address on the network interface. Args: address: string, the IP address to configure. interface: string, the output device to use.
Below is the the instruction that describes the task: ### Input: Configure a new IP address on the network interface. Args: address: string, the IP address to configure. interface: string, the output device to use. ### Response: def AddForwardedIp(self, address, interface): """Configure a new IP address on the network interface. Args: address: string, the IP address to configure. interface: string, the output device to use. """ for ip in list(netaddr.IPNetwork(address)): self._RunIfconfig(args=[interface, 'alias', '%s/32' % str(ip)])
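The loop handles both single addresses and CIDR ranges because netaddr.IPNetwork iterates over every address it contains. A quick demonstration with documentation-range addresses chosen for the example.

import netaddr

print(list(netaddr.IPNetwork('192.0.2.1/32')))   # [IPAddress('192.0.2.1')]
print(list(netaddr.IPNetwork('192.0.2.0/30')))   # 192.0.2.0 through 192.0.2.3
# Each resulting address is then aliased onto the interface as a /32 via ifconfig.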
def get_changeform_initial_data(self, request): '''Copy initial data from parent''' initial = super(PageAdmin, self).get_changeform_initial_data(request) if ('translation_of' in request.GET): original = self.model._tree_manager.get( pk=request.GET.get('translation_of')) initial['layout'] = original.layout initial['theme'] = original.theme initial['color_scheme'] = original.color_scheme # optionally translate title and make slug old_lang = translation.get_language() translation.activate(request.GET.get('language')) title = _(original.title) if title != original.title: initial['title'] = title initial['slug'] = slugify(title) translation.activate(old_lang) return initial
Copy initial data from parent
Below is the the instruction that describes the task: ### Input: Copy initial data from parent ### Response: def get_changeform_initial_data(self, request): '''Copy initial data from parent''' initial = super(PageAdmin, self).get_changeform_initial_data(request) if ('translation_of' in request.GET): original = self.model._tree_manager.get( pk=request.GET.get('translation_of')) initial['layout'] = original.layout initial['theme'] = original.theme initial['color_scheme'] = original.color_scheme # optionally translate title and make slug old_lang = translation.get_language() translation.activate(request.GET.get('language')) title = _(original.title) if title != original.title: initial['title'] = title initial['slug'] = slugify(title) translation.activate(old_lang) return initial
def _run(cmd, cwd=None, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, output_encoding=None, output_loglevel='debug', log_callback=None, runas=None, group=None, shell=DEFAULT_SHELL, python_shell=False, env=None, clean_env=False, prepend_path=None, rstrip=True, template=None, umask=None, timeout=None, with_communicate=True, reset_system_locale=True, ignore_retcode=False, saltenv='base', pillarenv=None, pillar_override=None, use_vt=False, password=None, bg=False, encoded_cmd=False, success_retcodes=None, success_stdout=None, success_stderr=None, **kwargs): ''' Do the DRY thing and only call subprocess.Popen() once ''' if 'pillar' in kwargs and not pillar_override: pillar_override = kwargs['pillar'] if output_loglevel != 'quiet' and _is_valid_shell(shell) is False: log.warning( 'Attempt to run a shell command with what may be an invalid shell! ' 'Check to ensure that the shell <%s> is valid for this user.', shell ) output_loglevel = _check_loglevel(output_loglevel) log_callback = _check_cb(log_callback) use_sudo = False if runas is None and '__context__' in globals(): runas = __context__.get('runas') if password is None and '__context__' in globals(): password = __context__.get('runas_password') # Set the default working directory to the home directory of the user # salt-minion is running as. Defaults to home directory of user under which # the minion is running. if not cwd: cwd = os.path.expanduser('~{0}'.format('' if not runas else runas)) # make sure we can access the cwd # when run from sudo or another environment where the euid is # changed ~ will expand to the home of the original uid and # the euid might not have access to it. See issue #1844 if not os.access(cwd, os.R_OK): cwd = '/' if salt.utils.platform.is_windows(): cwd = os.path.abspath(os.sep) else: # Handle edge cases where numeric/other input is entered, and would be # yaml-ified into non-string types cwd = six.text_type(cwd) if bg: ignore_retcode = True use_vt = False if not salt.utils.platform.is_windows(): if not os.path.isfile(shell) or not os.access(shell, os.X_OK): msg = 'The shell {0} is not available'.format(shell) raise CommandExecutionError(msg) if salt.utils.platform.is_windows() and use_vt: # Memozation so not much overhead raise CommandExecutionError('VT not available on windows') if shell.lower().strip() == 'powershell': # Strip whitespace if isinstance(cmd, six.string_types): cmd = cmd.strip() # If we were called by script(), then fakeout the Windows # shell to run a Powershell script. # Else just run a Powershell command. stack = traceback.extract_stack(limit=2) # extract_stack() returns a list of tuples. # The last item in the list [-1] is the current method. # The third item[2] in each tuple is the name of that method. 
if stack[-2][2] == 'script': cmd = 'Powershell -NonInteractive -NoProfile -ExecutionPolicy Bypass -File ' + cmd elif encoded_cmd: cmd = 'Powershell -NonInteractive -EncodedCommand {0}'.format(cmd) else: cmd = 'Powershell -NonInteractive -NoProfile "{0}"'.format(cmd.replace('"', '\\"')) # munge the cmd and cwd through the template (cmd, cwd) = _render_cmd(cmd, cwd, template, saltenv, pillarenv, pillar_override) ret = {} # If the pub jid is here then this is a remote ex or salt call command and needs to be # checked if blacklisted if '__pub_jid' in kwargs: if not _check_avail(cmd): raise CommandExecutionError( 'The shell command "{0}" is not permitted'.format(cmd) ) env = _parse_env(env) for bad_env_key in (x for x, y in six.iteritems(env) if y is None): log.error('Environment variable \'%s\' passed without a value. ' 'Setting value to an empty string', bad_env_key) env[bad_env_key] = '' def _get_stripped(cmd): # Return stripped command string copies to improve logging. if isinstance(cmd, list): return [x.strip() if isinstance(x, six.string_types) else x for x in cmd] elif isinstance(cmd, six.string_types): return cmd.strip() else: return cmd if output_loglevel is not None: # Always log the shell commands at INFO unless quiet logging is # requested. The command output is what will be controlled by the # 'loglevel' parameter. msg = ( 'Executing command {0}{1}{0} {2}{3}in directory \'{4}\'{5}'.format( '\'' if not isinstance(cmd, list) else '', _get_stripped(cmd), 'as user \'{0}\' '.format(runas) if runas else '', 'in group \'{0}\' '.format(group) if group else '', cwd, '. Executing command in the background, no output will be ' 'logged.' if bg else '' ) ) log.info(log_callback(msg)) if runas and salt.utils.platform.is_windows(): if not HAS_WIN_RUNAS: msg = 'missing salt/utils/win_runas.py' raise CommandExecutionError(msg) if isinstance(cmd, (list, tuple)): cmd = ' '.join(cmd) return win_runas(cmd, runas, password, cwd) if runas and salt.utils.platform.is_darwin(): # we need to insert the user simulation into the command itself and not # just run it from the environment on macOS as that # method doesn't work properly when run as root for certain commands. if isinstance(cmd, (list, tuple)): cmd = ' '.join(map(_cmd_quote, cmd)) cmd = 'su -l {0} -c "cd {1}; {2}"'.format(runas, cwd, cmd) # set runas to None, because if you try to run `su -l` as well as # simulate the environment macOS will prompt for the password of the # user and will cause salt to hang. runas = None if runas: # Save the original command before munging it try: pwd.getpwnam(runas) except KeyError: raise CommandExecutionError( 'User \'{0}\' is not available'.format(runas) ) if group: if salt.utils.platform.is_windows(): msg = 'group is not currently available on Windows' raise SaltInvocationError(msg) if not which_bin(['sudo']): msg = 'group argument requires sudo but not found' raise CommandExecutionError(msg) try: grp.getgrnam(group) except KeyError: raise CommandExecutionError( 'Group \'{0}\' is not available'.format(runas) ) else: use_sudo = True if runas or group: try: # Getting the environment for the runas user # Use markers to thwart any stdout noise # There must be a better way to do this. 
import uuid marker = '<<<' + str(uuid.uuid4()) + '>>>' marker_b = marker.encode(__salt_system_encoding__) py_code = ( 'import sys, os, itertools; ' 'sys.stdout.write(\"' + marker + '\"); ' 'sys.stdout.write(\"\\0\".join(itertools.chain(*os.environ.items()))); ' 'sys.stdout.write(\"' + marker + '\");' ) if use_sudo or __grains__['os'] in ['MacOS', 'Darwin']: env_cmd = ['sudo'] # runas is optional if use_sudo is set. if runas: env_cmd.extend(['-u', runas]) if group: env_cmd.extend(['-g', group]) if shell != DEFAULT_SHELL: env_cmd.extend(['-s', '--', shell, '-c']) else: env_cmd.extend(['-i', '--']) env_cmd.extend([sys.executable]) elif __grains__['os'] in ['FreeBSD']: env_cmd = ('su', '-', runas, '-c', "{0} -c {1}".format(shell, sys.executable)) elif __grains__['os_family'] in ['Solaris']: env_cmd = ('su', '-', runas, '-c', sys.executable) elif __grains__['os_family'] in ['AIX']: env_cmd = ('su', '-', runas, '-c', sys.executable) else: env_cmd = ('su', '-s', shell, '-', runas, '-c', sys.executable) msg = 'env command: {0}'.format(env_cmd) log.debug(log_callback(msg)) env_bytes, env_encoded_err = subprocess.Popen( env_cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE ).communicate(salt.utils.stringutils.to_bytes(py_code)) marker_count = env_bytes.count(marker_b) if marker_count == 0: # Possibly PAM prevented the login log.error( 'Environment could not be retrieved for user \'%s\': ' 'stderr=%r stdout=%r', runas, env_encoded_err, env_bytes ) # Ensure that we get an empty env_runas dict below since we # were not able to get the environment. env_bytes = b'' elif marker_count != 2: raise CommandExecutionError( 'Environment could not be retrieved for user \'{0}\'', info={'stderr': repr(env_encoded_err), 'stdout': repr(env_bytes)} ) else: # Strip the marker env_bytes = env_bytes.split(marker_b)[1] if six.PY2: import itertools env_runas = dict(itertools.izip(*[iter(env_bytes.split(b'\0'))]*2)) elif six.PY3: env_runas = dict(list(zip(*[iter(env_bytes.split(b'\0'))]*2))) env_runas = dict( (salt.utils.stringutils.to_str(k), salt.utils.stringutils.to_str(v)) for k, v in six.iteritems(env_runas) ) env_runas.update(env) # Fix platforms like Solaris that don't set a USER env var in the # user's default environment as obtained above. if env_runas.get('USER') != runas: env_runas['USER'] = runas # Fix some corner cases where shelling out to get the user's # environment returns the wrong home directory. runas_home = os.path.expanduser('~{0}'.format(runas)) if env_runas.get('HOME') != runas_home: env_runas['HOME'] = runas_home env = env_runas except ValueError as exc: log.exception('Error raised retrieving environment for user %s', runas) raise CommandExecutionError( 'Environment could not be retrieved for user \'{0}\': {1}'.format( runas, exc ) ) if reset_system_locale is True: if not salt.utils.platform.is_windows(): # Default to C! # Salt only knows how to parse English words # Don't override if the user has passed LC_ALL env.setdefault('LC_CTYPE', 'C') env.setdefault('LC_NUMERIC', 'C') env.setdefault('LC_TIME', 'C') env.setdefault('LC_COLLATE', 'C') env.setdefault('LC_MONETARY', 'C') env.setdefault('LC_MESSAGES', 'C') env.setdefault('LC_PAPER', 'C') env.setdefault('LC_NAME', 'C') env.setdefault('LC_ADDRESS', 'C') env.setdefault('LC_TELEPHONE', 'C') env.setdefault('LC_MEASUREMENT', 'C') env.setdefault('LC_IDENTIFICATION', 'C') env.setdefault('LANGUAGE', 'C') else: # On Windows set the codepage to US English. 
if python_shell: cmd = 'chcp 437 > nul & ' + cmd if clean_env: run_env = env else: run_env = os.environ.copy() run_env.update(env) if prepend_path: run_env['PATH'] = ':'.join((prepend_path, run_env['PATH'])) if python_shell is None: python_shell = False new_kwargs = {'cwd': cwd, 'shell': python_shell, 'env': run_env if six.PY3 else salt.utils.data.encode(run_env), 'stdin': six.text_type(stdin) if stdin is not None else stdin, 'stdout': stdout, 'stderr': stderr, 'with_communicate': with_communicate, 'timeout': timeout, 'bg': bg, } if 'stdin_raw_newlines' in kwargs: new_kwargs['stdin_raw_newlines'] = kwargs['stdin_raw_newlines'] if umask is not None: _umask = six.text_type(umask).lstrip('0') if _umask == '': msg = 'Zero umask is not allowed.' raise CommandExecutionError(msg) try: _umask = int(_umask, 8) except ValueError: raise CommandExecutionError("Invalid umask: '{0}'".format(umask)) else: _umask = None if runas or group or umask: new_kwargs['preexec_fn'] = functools.partial( salt.utils.user.chugid_and_umask, runas, _umask, group) if not salt.utils.platform.is_windows(): # close_fds is not supported on Windows platforms if you redirect # stdin/stdout/stderr if new_kwargs['shell'] is True: new_kwargs['executable'] = shell new_kwargs['close_fds'] = True if not os.path.isabs(cwd) or not os.path.isdir(cwd): raise CommandExecutionError( 'Specified cwd \'{0}\' either not absolute or does not exist' .format(cwd) ) if python_shell is not True \ and not salt.utils.platform.is_windows() \ and not isinstance(cmd, list): cmd = salt.utils.args.shlex_split(cmd) if success_retcodes is None: success_retcodes = [0] else: try: success_retcodes = [int(i) for i in salt.utils.args.split_input( success_retcodes )] except ValueError: raise SaltInvocationError( 'success_retcodes must be a list of integers' ) if success_stdout is None: success_stdout = [] else: try: success_stdout = [i for i in salt.utils.args.split_input( success_stdout )] except ValueError: raise SaltInvocationError( 'success_stdout must be a list of integers' ) if success_stderr is None: success_stderr = [] else: try: success_stderr = [i for i in salt.utils.args.split_input( success_stderr )] except ValueError: raise SaltInvocationError( 'success_stderr must be a list of integers' ) if not use_vt: # This is where the magic happens try: proc = salt.utils.timed_subprocess.TimedProc(cmd, **new_kwargs) except (OSError, IOError) as exc: msg = ( 'Unable to run command \'{0}\' with the context \'{1}\', ' 'reason: {2}'.format( cmd if output_loglevel is not None else 'REDACTED', new_kwargs, exc ) ) raise CommandExecutionError(msg) try: proc.run() except TimedProcTimeoutError as exc: ret['stdout'] = six.text_type(exc) ret['stderr'] = '' ret['retcode'] = None ret['pid'] = proc.process.pid # ok return code for timeouts? 
ret['retcode'] = 1 return ret if output_loglevel != 'quiet' and output_encoding is not None: log.debug('Decoding output from command %s using %s encoding', cmd, output_encoding) try: out = salt.utils.stringutils.to_unicode( proc.stdout, encoding=output_encoding) except TypeError: # stdout is None out = '' except UnicodeDecodeError: out = salt.utils.stringutils.to_unicode( proc.stdout, encoding=output_encoding, errors='replace') if output_loglevel != 'quiet': log.error( 'Failed to decode stdout from command %s, non-decodable ' 'characters have been replaced', cmd ) try: err = salt.utils.stringutils.to_unicode( proc.stderr, encoding=output_encoding) except TypeError: # stderr is None err = '' except UnicodeDecodeError: err = salt.utils.stringutils.to_unicode( proc.stderr, encoding=output_encoding, errors='replace') if output_loglevel != 'quiet': log.error( 'Failed to decode stderr from command %s, non-decodable ' 'characters have been replaced', cmd ) if rstrip: if out is not None: out = out.rstrip() if err is not None: err = err.rstrip() ret['pid'] = proc.process.pid ret['retcode'] = proc.process.returncode if ret['retcode'] in success_retcodes: ret['retcode'] = 0 ret['stdout'] = out ret['stderr'] = err if ret['stdout'] in success_stdout or ret['stderr'] in success_stderr: ret['retcode'] = 0 else: formatted_timeout = '' if timeout: formatted_timeout = ' (timeout: {0}s)'.format(timeout) if output_loglevel is not None: msg = 'Running {0} in VT{1}'.format(cmd, formatted_timeout) log.debug(log_callback(msg)) stdout, stderr = '', '' now = time.time() if timeout: will_timeout = now + timeout else: will_timeout = -1 try: proc = salt.utils.vt.Terminal( cmd, shell=True, log_stdout=True, log_stderr=True, cwd=cwd, preexec_fn=new_kwargs.get('preexec_fn', None), env=run_env, log_stdin_level=output_loglevel, log_stdout_level=output_loglevel, log_stderr_level=output_loglevel, stream_stdout=True, stream_stderr=True ) ret['pid'] = proc.pid while proc.has_unread_data: try: try: time.sleep(0.5) try: cstdout, cstderr = proc.recv() except IOError: cstdout, cstderr = '', '' if cstdout: stdout += cstdout else: cstdout = '' if cstderr: stderr += cstderr else: cstderr = '' if timeout and (time.time() > will_timeout): ret['stderr'] = ( 'SALT: Timeout after {0}s\n{1}').format( timeout, stderr) ret['retcode'] = None break except KeyboardInterrupt: ret['stderr'] = 'SALT: User break\n{0}'.format(stderr) ret['retcode'] = 1 break except salt.utils.vt.TerminalException as exc: log.error('VT: %s', exc, exc_info_on_loglevel=logging.DEBUG) ret = {'retcode': 1, 'pid': '2'} break # only set stdout on success as we already mangled in other # cases ret['stdout'] = stdout if not proc.isalive(): # Process terminated, i.e., not canceled by the user or by # the timeout ret['stderr'] = stderr ret['retcode'] = proc.exitstatus if ret['retcode'] in success_retcodes: ret['retcode'] = 0 if ret['stdout'] in success_stdout or ret['stderr'] in success_stderr: ret['retcode'] = 0 ret['pid'] = proc.pid finally: proc.close(terminate=True, kill=True) try: if ignore_retcode: __context__['retcode'] = 0 else: __context__['retcode'] = ret['retcode'] except NameError: # Ignore the context error during grain generation pass # Log the output if output_loglevel is not None: if not ignore_retcode and ret['retcode'] != 0: if output_loglevel < LOG_LEVELS['error']: output_loglevel = LOG_LEVELS['error'] msg = ( 'Command \'{0}\' failed with return code: {1}'.format( cmd, ret['retcode'] ) ) log.error(log_callback(msg)) if ret['stdout']: log.log(output_loglevel, 
'stdout: %s', log_callback(ret['stdout'])) if ret['stderr']: log.log(output_loglevel, 'stderr: %s', log_callback(ret['stderr'])) if ret['retcode']: log.log(output_loglevel, 'retcode: %s', ret['retcode']) return ret
Do the DRY thing and only call subprocess.Popen() once
Below is the the instruction that describes the task: ### Input: Do the DRY thing and only call subprocess.Popen() once ### Response: def _run(cmd, cwd=None, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, output_encoding=None, output_loglevel='debug', log_callback=None, runas=None, group=None, shell=DEFAULT_SHELL, python_shell=False, env=None, clean_env=False, prepend_path=None, rstrip=True, template=None, umask=None, timeout=None, with_communicate=True, reset_system_locale=True, ignore_retcode=False, saltenv='base', pillarenv=None, pillar_override=None, use_vt=False, password=None, bg=False, encoded_cmd=False, success_retcodes=None, success_stdout=None, success_stderr=None, **kwargs): ''' Do the DRY thing and only call subprocess.Popen() once ''' if 'pillar' in kwargs and not pillar_override: pillar_override = kwargs['pillar'] if output_loglevel != 'quiet' and _is_valid_shell(shell) is False: log.warning( 'Attempt to run a shell command with what may be an invalid shell! ' 'Check to ensure that the shell <%s> is valid for this user.', shell ) output_loglevel = _check_loglevel(output_loglevel) log_callback = _check_cb(log_callback) use_sudo = False if runas is None and '__context__' in globals(): runas = __context__.get('runas') if password is None and '__context__' in globals(): password = __context__.get('runas_password') # Set the default working directory to the home directory of the user # salt-minion is running as. Defaults to home directory of user under which # the minion is running. if not cwd: cwd = os.path.expanduser('~{0}'.format('' if not runas else runas)) # make sure we can access the cwd # when run from sudo or another environment where the euid is # changed ~ will expand to the home of the original uid and # the euid might not have access to it. See issue #1844 if not os.access(cwd, os.R_OK): cwd = '/' if salt.utils.platform.is_windows(): cwd = os.path.abspath(os.sep) else: # Handle edge cases where numeric/other input is entered, and would be # yaml-ified into non-string types cwd = six.text_type(cwd) if bg: ignore_retcode = True use_vt = False if not salt.utils.platform.is_windows(): if not os.path.isfile(shell) or not os.access(shell, os.X_OK): msg = 'The shell {0} is not available'.format(shell) raise CommandExecutionError(msg) if salt.utils.platform.is_windows() and use_vt: # Memozation so not much overhead raise CommandExecutionError('VT not available on windows') if shell.lower().strip() == 'powershell': # Strip whitespace if isinstance(cmd, six.string_types): cmd = cmd.strip() # If we were called by script(), then fakeout the Windows # shell to run a Powershell script. # Else just run a Powershell command. stack = traceback.extract_stack(limit=2) # extract_stack() returns a list of tuples. # The last item in the list [-1] is the current method. # The third item[2] in each tuple is the name of that method. 
if stack[-2][2] == 'script': cmd = 'Powershell -NonInteractive -NoProfile -ExecutionPolicy Bypass -File ' + cmd elif encoded_cmd: cmd = 'Powershell -NonInteractive -EncodedCommand {0}'.format(cmd) else: cmd = 'Powershell -NonInteractive -NoProfile "{0}"'.format(cmd.replace('"', '\\"')) # munge the cmd and cwd through the template (cmd, cwd) = _render_cmd(cmd, cwd, template, saltenv, pillarenv, pillar_override) ret = {} # If the pub jid is here then this is a remote ex or salt call command and needs to be # checked if blacklisted if '__pub_jid' in kwargs: if not _check_avail(cmd): raise CommandExecutionError( 'The shell command "{0}" is not permitted'.format(cmd) ) env = _parse_env(env) for bad_env_key in (x for x, y in six.iteritems(env) if y is None): log.error('Environment variable \'%s\' passed without a value. ' 'Setting value to an empty string', bad_env_key) env[bad_env_key] = '' def _get_stripped(cmd): # Return stripped command string copies to improve logging. if isinstance(cmd, list): return [x.strip() if isinstance(x, six.string_types) else x for x in cmd] elif isinstance(cmd, six.string_types): return cmd.strip() else: return cmd if output_loglevel is not None: # Always log the shell commands at INFO unless quiet logging is # requested. The command output is what will be controlled by the # 'loglevel' parameter. msg = ( 'Executing command {0}{1}{0} {2}{3}in directory \'{4}\'{5}'.format( '\'' if not isinstance(cmd, list) else '', _get_stripped(cmd), 'as user \'{0}\' '.format(runas) if runas else '', 'in group \'{0}\' '.format(group) if group else '', cwd, '. Executing command in the background, no output will be ' 'logged.' if bg else '' ) ) log.info(log_callback(msg)) if runas and salt.utils.platform.is_windows(): if not HAS_WIN_RUNAS: msg = 'missing salt/utils/win_runas.py' raise CommandExecutionError(msg) if isinstance(cmd, (list, tuple)): cmd = ' '.join(cmd) return win_runas(cmd, runas, password, cwd) if runas and salt.utils.platform.is_darwin(): # we need to insert the user simulation into the command itself and not # just run it from the environment on macOS as that # method doesn't work properly when run as root for certain commands. if isinstance(cmd, (list, tuple)): cmd = ' '.join(map(_cmd_quote, cmd)) cmd = 'su -l {0} -c "cd {1}; {2}"'.format(runas, cwd, cmd) # set runas to None, because if you try to run `su -l` as well as # simulate the environment macOS will prompt for the password of the # user and will cause salt to hang. runas = None if runas: # Save the original command before munging it try: pwd.getpwnam(runas) except KeyError: raise CommandExecutionError( 'User \'{0}\' is not available'.format(runas) ) if group: if salt.utils.platform.is_windows(): msg = 'group is not currently available on Windows' raise SaltInvocationError(msg) if not which_bin(['sudo']): msg = 'group argument requires sudo but not found' raise CommandExecutionError(msg) try: grp.getgrnam(group) except KeyError: raise CommandExecutionError( 'Group \'{0}\' is not available'.format(runas) ) else: use_sudo = True if runas or group: try: # Getting the environment for the runas user # Use markers to thwart any stdout noise # There must be a better way to do this. 
import uuid marker = '<<<' + str(uuid.uuid4()) + '>>>' marker_b = marker.encode(__salt_system_encoding__) py_code = ( 'import sys, os, itertools; ' 'sys.stdout.write(\"' + marker + '\"); ' 'sys.stdout.write(\"\\0\".join(itertools.chain(*os.environ.items()))); ' 'sys.stdout.write(\"' + marker + '\");' ) if use_sudo or __grains__['os'] in ['MacOS', 'Darwin']: env_cmd = ['sudo'] # runas is optional if use_sudo is set. if runas: env_cmd.extend(['-u', runas]) if group: env_cmd.extend(['-g', group]) if shell != DEFAULT_SHELL: env_cmd.extend(['-s', '--', shell, '-c']) else: env_cmd.extend(['-i', '--']) env_cmd.extend([sys.executable]) elif __grains__['os'] in ['FreeBSD']: env_cmd = ('su', '-', runas, '-c', "{0} -c {1}".format(shell, sys.executable)) elif __grains__['os_family'] in ['Solaris']: env_cmd = ('su', '-', runas, '-c', sys.executable) elif __grains__['os_family'] in ['AIX']: env_cmd = ('su', '-', runas, '-c', sys.executable) else: env_cmd = ('su', '-s', shell, '-', runas, '-c', sys.executable) msg = 'env command: {0}'.format(env_cmd) log.debug(log_callback(msg)) env_bytes, env_encoded_err = subprocess.Popen( env_cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE ).communicate(salt.utils.stringutils.to_bytes(py_code)) marker_count = env_bytes.count(marker_b) if marker_count == 0: # Possibly PAM prevented the login log.error( 'Environment could not be retrieved for user \'%s\': ' 'stderr=%r stdout=%r', runas, env_encoded_err, env_bytes ) # Ensure that we get an empty env_runas dict below since we # were not able to get the environment. env_bytes = b'' elif marker_count != 2: raise CommandExecutionError( 'Environment could not be retrieved for user \'{0}\'', info={'stderr': repr(env_encoded_err), 'stdout': repr(env_bytes)} ) else: # Strip the marker env_bytes = env_bytes.split(marker_b)[1] if six.PY2: import itertools env_runas = dict(itertools.izip(*[iter(env_bytes.split(b'\0'))]*2)) elif six.PY3: env_runas = dict(list(zip(*[iter(env_bytes.split(b'\0'))]*2))) env_runas = dict( (salt.utils.stringutils.to_str(k), salt.utils.stringutils.to_str(v)) for k, v in six.iteritems(env_runas) ) env_runas.update(env) # Fix platforms like Solaris that don't set a USER env var in the # user's default environment as obtained above. if env_runas.get('USER') != runas: env_runas['USER'] = runas # Fix some corner cases where shelling out to get the user's # environment returns the wrong home directory. runas_home = os.path.expanduser('~{0}'.format(runas)) if env_runas.get('HOME') != runas_home: env_runas['HOME'] = runas_home env = env_runas except ValueError as exc: log.exception('Error raised retrieving environment for user %s', runas) raise CommandExecutionError( 'Environment could not be retrieved for user \'{0}\': {1}'.format( runas, exc ) ) if reset_system_locale is True: if not salt.utils.platform.is_windows(): # Default to C! # Salt only knows how to parse English words # Don't override if the user has passed LC_ALL env.setdefault('LC_CTYPE', 'C') env.setdefault('LC_NUMERIC', 'C') env.setdefault('LC_TIME', 'C') env.setdefault('LC_COLLATE', 'C') env.setdefault('LC_MONETARY', 'C') env.setdefault('LC_MESSAGES', 'C') env.setdefault('LC_PAPER', 'C') env.setdefault('LC_NAME', 'C') env.setdefault('LC_ADDRESS', 'C') env.setdefault('LC_TELEPHONE', 'C') env.setdefault('LC_MEASUREMENT', 'C') env.setdefault('LC_IDENTIFICATION', 'C') env.setdefault('LANGUAGE', 'C') else: # On Windows set the codepage to US English. 
if python_shell: cmd = 'chcp 437 > nul & ' + cmd if clean_env: run_env = env else: run_env = os.environ.copy() run_env.update(env) if prepend_path: run_env['PATH'] = ':'.join((prepend_path, run_env['PATH'])) if python_shell is None: python_shell = False new_kwargs = {'cwd': cwd, 'shell': python_shell, 'env': run_env if six.PY3 else salt.utils.data.encode(run_env), 'stdin': six.text_type(stdin) if stdin is not None else stdin, 'stdout': stdout, 'stderr': stderr, 'with_communicate': with_communicate, 'timeout': timeout, 'bg': bg, } if 'stdin_raw_newlines' in kwargs: new_kwargs['stdin_raw_newlines'] = kwargs['stdin_raw_newlines'] if umask is not None: _umask = six.text_type(umask).lstrip('0') if _umask == '': msg = 'Zero umask is not allowed.' raise CommandExecutionError(msg) try: _umask = int(_umask, 8) except ValueError: raise CommandExecutionError("Invalid umask: '{0}'".format(umask)) else: _umask = None if runas or group or umask: new_kwargs['preexec_fn'] = functools.partial( salt.utils.user.chugid_and_umask, runas, _umask, group) if not salt.utils.platform.is_windows(): # close_fds is not supported on Windows platforms if you redirect # stdin/stdout/stderr if new_kwargs['shell'] is True: new_kwargs['executable'] = shell new_kwargs['close_fds'] = True if not os.path.isabs(cwd) or not os.path.isdir(cwd): raise CommandExecutionError( 'Specified cwd \'{0}\' either not absolute or does not exist' .format(cwd) ) if python_shell is not True \ and not salt.utils.platform.is_windows() \ and not isinstance(cmd, list): cmd = salt.utils.args.shlex_split(cmd) if success_retcodes is None: success_retcodes = [0] else: try: success_retcodes = [int(i) for i in salt.utils.args.split_input( success_retcodes )] except ValueError: raise SaltInvocationError( 'success_retcodes must be a list of integers' ) if success_stdout is None: success_stdout = [] else: try: success_stdout = [i for i in salt.utils.args.split_input( success_stdout )] except ValueError: raise SaltInvocationError( 'success_stdout must be a list of integers' ) if success_stderr is None: success_stderr = [] else: try: success_stderr = [i for i in salt.utils.args.split_input( success_stderr )] except ValueError: raise SaltInvocationError( 'success_stderr must be a list of integers' ) if not use_vt: # This is where the magic happens try: proc = salt.utils.timed_subprocess.TimedProc(cmd, **new_kwargs) except (OSError, IOError) as exc: msg = ( 'Unable to run command \'{0}\' with the context \'{1}\', ' 'reason: {2}'.format( cmd if output_loglevel is not None else 'REDACTED', new_kwargs, exc ) ) raise CommandExecutionError(msg) try: proc.run() except TimedProcTimeoutError as exc: ret['stdout'] = six.text_type(exc) ret['stderr'] = '' ret['retcode'] = None ret['pid'] = proc.process.pid # ok return code for timeouts? 
ret['retcode'] = 1 return ret if output_loglevel != 'quiet' and output_encoding is not None: log.debug('Decoding output from command %s using %s encoding', cmd, output_encoding) try: out = salt.utils.stringutils.to_unicode( proc.stdout, encoding=output_encoding) except TypeError: # stdout is None out = '' except UnicodeDecodeError: out = salt.utils.stringutils.to_unicode( proc.stdout, encoding=output_encoding, errors='replace') if output_loglevel != 'quiet': log.error( 'Failed to decode stdout from command %s, non-decodable ' 'characters have been replaced', cmd ) try: err = salt.utils.stringutils.to_unicode( proc.stderr, encoding=output_encoding) except TypeError: # stderr is None err = '' except UnicodeDecodeError: err = salt.utils.stringutils.to_unicode( proc.stderr, encoding=output_encoding, errors='replace') if output_loglevel != 'quiet': log.error( 'Failed to decode stderr from command %s, non-decodable ' 'characters have been replaced', cmd ) if rstrip: if out is not None: out = out.rstrip() if err is not None: err = err.rstrip() ret['pid'] = proc.process.pid ret['retcode'] = proc.process.returncode if ret['retcode'] in success_retcodes: ret['retcode'] = 0 ret['stdout'] = out ret['stderr'] = err if ret['stdout'] in success_stdout or ret['stderr'] in success_stderr: ret['retcode'] = 0 else: formatted_timeout = '' if timeout: formatted_timeout = ' (timeout: {0}s)'.format(timeout) if output_loglevel is not None: msg = 'Running {0} in VT{1}'.format(cmd, formatted_timeout) log.debug(log_callback(msg)) stdout, stderr = '', '' now = time.time() if timeout: will_timeout = now + timeout else: will_timeout = -1 try: proc = salt.utils.vt.Terminal( cmd, shell=True, log_stdout=True, log_stderr=True, cwd=cwd, preexec_fn=new_kwargs.get('preexec_fn', None), env=run_env, log_stdin_level=output_loglevel, log_stdout_level=output_loglevel, log_stderr_level=output_loglevel, stream_stdout=True, stream_stderr=True ) ret['pid'] = proc.pid while proc.has_unread_data: try: try: time.sleep(0.5) try: cstdout, cstderr = proc.recv() except IOError: cstdout, cstderr = '', '' if cstdout: stdout += cstdout else: cstdout = '' if cstderr: stderr += cstderr else: cstderr = '' if timeout and (time.time() > will_timeout): ret['stderr'] = ( 'SALT: Timeout after {0}s\n{1}').format( timeout, stderr) ret['retcode'] = None break except KeyboardInterrupt: ret['stderr'] = 'SALT: User break\n{0}'.format(stderr) ret['retcode'] = 1 break except salt.utils.vt.TerminalException as exc: log.error('VT: %s', exc, exc_info_on_loglevel=logging.DEBUG) ret = {'retcode': 1, 'pid': '2'} break # only set stdout on success as we already mangled in other # cases ret['stdout'] = stdout if not proc.isalive(): # Process terminated, i.e., not canceled by the user or by # the timeout ret['stderr'] = stderr ret['retcode'] = proc.exitstatus if ret['retcode'] in success_retcodes: ret['retcode'] = 0 if ret['stdout'] in success_stdout or ret['stderr'] in success_stderr: ret['retcode'] = 0 ret['pid'] = proc.pid finally: proc.close(terminate=True, kill=True) try: if ignore_retcode: __context__['retcode'] = 0 else: __context__['retcode'] = ret['retcode'] except NameError: # Ignore the context error during grain generation pass # Log the output if output_loglevel is not None: if not ignore_retcode and ret['retcode'] != 0: if output_loglevel < LOG_LEVELS['error']: output_loglevel = LOG_LEVELS['error'] msg = ( 'Command \'{0}\' failed with return code: {1}'.format( cmd, ret['retcode'] ) ) log.error(log_callback(msg)) if ret['stdout']: log.log(output_loglevel, 
'stdout: %s', log_callback(ret['stdout'])) if ret['stderr']: log.log(output_loglevel, 'stderr: %s', log_callback(ret['stderr'])) if ret['retcode']: log.log(output_loglevel, 'retcode: %s', ret['retcode']) return ret
def cli(env, name, public): """List images.""" image_mgr = SoftLayer.ImageManager(env.client) images = [] if public in [False, None]: for image in image_mgr.list_private_images(name=name, mask=image_mod.MASK): images.append(image) if public in [True, None]: for image in image_mgr.list_public_images(name=name, mask=image_mod.MASK): images.append(image) table = formatting.Table(['id', 'name', 'type', 'visibility', 'account']) images = [image for image in images if image['parentId'] == ''] for image in images: visibility = (image_mod.PUBLIC_TYPE if image['publicFlag'] else image_mod.PRIVATE_TYPE) table.add_row([ image.get('id', formatting.blank()), formatting.FormattedItem(image['name'], click.wrap_text(image['name'], width=50)), formatting.FormattedItem( utils.lookup(image, 'imageType', 'keyName'), utils.lookup(image, 'imageType', 'name')), visibility, image.get('accountId', formatting.blank()), ]) env.fout(table)
List images.
Below is the the instruction that describes the task: ### Input: List images. ### Response: def cli(env, name, public): """List images.""" image_mgr = SoftLayer.ImageManager(env.client) images = [] if public in [False, None]: for image in image_mgr.list_private_images(name=name, mask=image_mod.MASK): images.append(image) if public in [True, None]: for image in image_mgr.list_public_images(name=name, mask=image_mod.MASK): images.append(image) table = formatting.Table(['id', 'name', 'type', 'visibility', 'account']) images = [image for image in images if image['parentId'] == ''] for image in images: visibility = (image_mod.PUBLIC_TYPE if image['publicFlag'] else image_mod.PRIVATE_TYPE) table.add_row([ image.get('id', formatting.blank()), formatting.FormattedItem(image['name'], click.wrap_text(image['name'], width=50)), formatting.FormattedItem( utils.lookup(image, 'imageType', 'keyName'), utils.lookup(image, 'imageType', 'name')), visibility, image.get('accountId', formatting.blank()), ]) env.fout(table)
def _reduce_datetimes(row): """Receives a row, converts datetimes to strings.""" row = list(row) for i in range(len(row)): if hasattr(row[i], 'isoformat'): row[i] = row[i].isoformat() return tuple(row)
Receives a row, converts datetimes to strings.
Below is the the instruction that describes the task: ### Input: Receives a row, converts datetimes to strings. ### Response: def _reduce_datetimes(row): """Receives a row, converts datetimes to strings.""" row = list(row) for i in range(len(row)): if hasattr(row[i], 'isoformat'): row[i] = row[i].isoformat() return tuple(row)
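A worked example of the conversion, assuming the _reduce_datetimes function above is in scope: only values exposing an isoformat method (datetimes, dates, times) are touched, everything else passes through unchanged.

from datetime import datetime

row = (1, datetime(2024, 5, 1, 12, 30), 'ok')
print(_reduce_datetimes(row))   # (1, '2024-05-01T12:30:00', 'ok')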
def qn(tag): """ Stands for "qualified name", a utility function to turn a namespace prefixed tag name into a Clark-notation qualified tag name for lxml. For example, ``qn('p:cSld')`` returns ``'{http://schemas.../main}cSld'``. """ prefix, tagroot = tag.split(':') uri = nsmap[prefix] return '{%s}%s' % (uri, tagroot)
Stands for "qualified name", a utility function to turn a namespace prefixed tag name into a Clark-notation qualified tag name for lxml. For example, ``qn('p:cSld')`` returns ``'{http://schemas.../main}cSld'``.
Below is the the instruction that describes the task: ### Input: Stands for "qualified name", a utility function to turn a namespace prefixed tag name into a Clark-notation qualified tag name for lxml. For example, ``qn('p:cSld')`` returns ``'{http://schemas.../main}cSld'``. ### Response: def qn(tag): """ Stands for "qualified name", a utility function to turn a namespace prefixed tag name into a Clark-notation qualified tag name for lxml. For example, ``qn('p:cSld')`` returns ``'{http://schemas.../main}cSld'``. """ prefix, tagroot = tag.split(':') uri = nsmap[prefix] return '{%s}%s' % (uri, tagroot)
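A worked example of the prefix expansion. The nsmap table is module-level state in the original; the single entry below is illustrative, though the URI shown is the usual WordprocessingML main namespace.

nsmap = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'}

qn('w:p')
# -> '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}p'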
def get_by_name(self, name, clip_to_valid = False): """ Retrieve the active segmentlists whose name equals name. The result is a segmentlistdict indexed by instrument. All segmentlist objects within it will be copies of the contents of this object, modifications will not affect the contents of this object. If clip_to_valid is True then the segmentlists will be intersected with their respective intervals of validity, otherwise they will be the verbatim active segments. NOTE: the intersection operation required by clip_to_valid will yield undefined results unless the active and valid segmentlist objects are coalesced. """ result = segments.segmentlistdict() for seglist in self: if seglist.name != name: continue segs = seglist.active if clip_to_valid: # do not use in-place intersection segs = segs & seglist.valid for instrument in seglist.instruments: if instrument in result: raise ValueError("multiple '%s' segmentlists for instrument '%s'" % (name, instrument)) result[instrument] = segs.copy() return result
Retrieve the active segmentlists whose name equals name. The result is a segmentlistdict indexed by instrument. All segmentlist objects within it will be copies of the contents of this object, modifications will not affect the contents of this object. If clip_to_valid is True then the segmentlists will be intersected with their respective intervals of validity, otherwise they will be the verbatim active segments. NOTE: the intersection operation required by clip_to_valid will yield undefined results unless the active and valid segmentlist objects are coalesced.
Below is the the instruction that describes the task: ### Input: Retrieve the active segmentlists whose name equals name. The result is a segmentlistdict indexed by instrument. All segmentlist objects within it will be copies of the contents of this object, modifications will not affect the contents of this object. If clip_to_valid is True then the segmentlists will be intersected with their respective intervals of validity, otherwise they will be the verbatim active segments. NOTE: the intersection operation required by clip_to_valid will yield undefined results unless the active and valid segmentlist objects are coalesced. ### Response: def get_by_name(self, name, clip_to_valid = False): """ Retrieve the active segmentlists whose name equals name. The result is a segmentlistdict indexed by instrument. All segmentlist objects within it will be copies of the contents of this object, modifications will not affect the contents of this object. If clip_to_valid is True then the segmentlists will be intersected with their respective intervals of validity, otherwise they will be the verbatim active segments. NOTE: the intersection operation required by clip_to_valid will yield undefined results unless the active and valid segmentlist objects are coalesced. """ result = segments.segmentlistdict() for seglist in self: if seglist.name != name: continue segs = seglist.active if clip_to_valid: # do not use in-place intersection segs = segs & seglist.valid for instrument in seglist.instruments: if instrument in result: raise ValueError("multiple '%s' segmentlists for instrument '%s'" % (name, instrument)) result[instrument] = segs.copy() return result
def validate_dict(in_dict, **kwargs): """ Returns Boolean of whether given dict conforms to type specifications given in kwargs. """ if not isinstance(in_dict, dict): raise ValueError('requires a dictionary') for key, value in kwargs.iteritems(): if key == 'required': for required_key in value: if required_key not in in_dict: return False elif key not in in_dict: continue elif value == bool: in_dict[key] = (True if str(in_dict[key]).lower() == 'true' else False) else: if (isinstance(in_dict[key], list) and len(in_dict[key]) == 1 and value != list): in_dict[key] = in_dict[key][0] try: if key in in_dict: in_dict[key] = value(in_dict[key]) except ValueError: return False return True
Returns Boolean of whether given dict conforms to type specifications given in kwargs.
Below is the the instruction that describes the task: ### Input: Returns Boolean of whether given dict conforms to type specifications given in kwargs. ### Response: def validate_dict(in_dict, **kwargs): """ Returns Boolean of whether given dict conforms to type specifications given in kwargs. """ if not isinstance(in_dict, dict): raise ValueError('requires a dictionary') for key, value in kwargs.iteritems(): if key == 'required': for required_key in value: if required_key not in in_dict: return False elif key not in in_dict: continue elif value == bool: in_dict[key] = (True if str(in_dict[key]).lower() == 'true' else False) else: if (isinstance(in_dict[key], list) and len(in_dict[key]) == 1 and value != list): in_dict[key] = in_dict[key][0] try: if key in in_dict: in_dict[key] = value(in_dict[key]) except ValueError: return False return True
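A hypothetical call showing that the function both validates and coerces its argument in place (note the dict.iteritems call, so this is Python 2 code and the validate_dict definition above is assumed to be in scope).

params = {'count': ['7'], 'active': 'true'}
ok = validate_dict(params, count=int, active=bool, required=['count'])
# ok is True; params has been mutated to {'count': 7, 'active': True}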
def add_bias(self, name, b, input_name, output_name, shape_bias = [1]): """ Add bias layer to the model. Parameters ---------- name: str The name of this layer. b: int | numpy.array Bias to add to the input. input_name: str The input blob name of this layer. output_name: str The output blob name of this layer. shape_bias: [int] List of ints that specifies the shape of the bias parameter (if present). Can be [1] or [C] or [1,H,W] or [C,H,W]. See Also -------- add_scale """ spec = self.spec nn_spec = self.nn_spec spec_layer = nn_spec.layers.add() spec_layer.name = name spec_layer.input.append(input_name) spec_layer.output.append(output_name) spec_layer_params = spec_layer.bias #add bias and its shape bias = spec_layer_params.bias spec_layer_params.shape.extend(shape_bias) if isinstance(b, int): bias.floatValue.append(float(b)) else: bias.floatValue.extend(map(float, b.flatten())) if len(bias.floatValue) != np.prod(shape_bias): raise ValueError("Dimensions of 'shape_bias' do not match the size of the provided 'b' parameter")
Add bias layer to the model. Parameters ---------- name: str The name of this layer. b: int | numpy.array Bias to add to the input. input_name: str The input blob name of this layer. output_name: str The output blob name of this layer. shape_bias: [int] List of ints that specifies the shape of the bias parameter (if present). Can be [1] or [C] or [1,H,W] or [C,H,W]. See Also -------- add_scale
Below is the the instruction that describes the task: ### Input: Add bias layer to the model. Parameters ---------- name: str The name of this layer. b: int | numpy.array Bias to add to the input. input_name: str The input blob name of this layer. output_name: str The output blob name of this layer. shape_bias: [int] List of ints that specifies the shape of the bias parameter (if present). Can be [1] or [C] or [1,H,W] or [C,H,W]. See Also -------- add_scale ### Response: def add_bias(self, name, b, input_name, output_name, shape_bias = [1]): """ Add bias layer to the model. Parameters ---------- name: str The name of this layer. b: int | numpy.array Bias to add to the input. input_name: str The input blob name of this layer. output_name: str The output blob name of this layer. shape_bias: [int] List of ints that specifies the shape of the bias parameter (if present). Can be [1] or [C] or [1,H,W] or [C,H,W]. See Also -------- add_scale """ spec = self.spec nn_spec = self.nn_spec spec_layer = nn_spec.layers.add() spec_layer.name = name spec_layer.input.append(input_name) spec_layer.output.append(output_name) spec_layer_params = spec_layer.bias #add bias and its shape bias = spec_layer_params.bias spec_layer_params.shape.extend(shape_bias) if isinstance(b, int): bias.floatValue.append(float(b)) else: bias.floatValue.extend(map(float, b.flatten())) if len(bias.floatValue) != np.prod(shape_bias): raise ValueError("Dimensions of 'shape_bias' do not match the size of the provided 'b' parameter")
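A usage sketch assuming the coremltools NeuralNetworkBuilder API; the feature names, shapes and bias values are invented for the example.

import numpy as np
from coremltools.models import datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder

input_features = [('data', datatypes.Array(3))]
output_features = [('out', datatypes.Array(3))]
builder = NeuralNetworkBuilder(input_features, output_features)

# Adds a bias of shape [3] between 'data' and 'out'.
builder.add_bias(name='add_b', b=np.array([0.5, -1.0, 2.0]),
                 input_name='data', output_name='out', shape_bias=[3])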
def sh(self, *command, **kwargs): """Run a shell command with the given arguments.""" self.log.debug('shell: %s', ' '.join(command)) return subprocess.check_call(' '.join(command), stdout=sys.stdout, stderr=sys.stderr, stdin=sys.stdin, shell=True, **kwargs)
Run a shell command with the given arguments.
Below is the the instruction that describes the task: ### Input: Run a shell command with the given arguments. ### Response: def sh(self, *command, **kwargs): """Run a shell command with the given arguments.""" self.log.debug('shell: %s', ' '.join(command)) return subprocess.check_call(' '.join(command), stdout=sys.stdout, stderr=sys.stderr, stdin=sys.stdin, shell=True, **kwargs)
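A small standalone sketch of the same pattern as the sh entry above: join the arguments and delegate to subprocess.check_call with shell=True. The logger setup is illustrative, not part of the original class.
import logging
import subprocess
import sys

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('sh-demo')

def sh(*command, **kwargs):
    """Run a shell command built from the given arguments (illustrative standalone version)."""
    log.debug('shell: %s', ' '.join(command))
    return subprocess.check_call(' '.join(command),
                                 stdout=sys.stdout, stderr=sys.stderr,
                                 stdin=sys.stdin, shell=True, **kwargs)

sh('echo', 'hello from the shell')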
def getLogger(name=None, **kwargs): """Build a logger with the given name. :param name: The name for the logger. This is usually the module name, ``__name__``. :type name: string """ adapter = _LOGGERS.get(name) if not adapter: # NOTE(jd) Keep using the `adapter' variable here because so it's not # collected by Python since _LOGGERS contains only a weakref adapter = KeywordArgumentAdapter(logging.getLogger(name), kwargs) _LOGGERS[name] = adapter return adapter
Build a logger with the given name. :param name: The name for the logger. This is usually the module name, ``__name__``. :type name: string
Below is the the instruction that describes the task: ### Input: Build a logger with the given name. :param name: The name for the logger. This is usually the module name, ``__name__``. :type name: string ### Response: def getLogger(name=None, **kwargs): """Build a logger with the given name. :param name: The name for the logger. This is usually the module name, ``__name__``. :type name: string """ adapter = _LOGGERS.get(name) if not adapter: # NOTE(jd) Keep using the `adapter' variable here because so it's not # collected by Python since _LOGGERS contains only a weakref adapter = KeywordArgumentAdapter(logging.getLogger(name), kwargs) _LOGGERS[name] = adapter return adapter
def release_client(self, cb): """ Return a Connection object to the pool :param Connection cb: the client to release """ if cb: self._q.put(cb, True) self._clients_in_use -= 1
Return a Connection object to the pool :param Connection cb: the client to release
Below is the the instruction that describes the task: ### Input: Return a Connection object to the pool :param Connection cb: the client to release ### Response: def release_client(self, cb): """ Return a Connection object to the pool :param Connection cb: the client to release """ if cb: self._q.put(cb, True) self._clients_in_use -= 1
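The release_client entry above only shows the return side of a pool; a minimal sketch of the matching acquire/release pattern built on queue.Queue, with a hypothetical Connection placeholder standing in for the real client class.
import queue

class Connection:
    """Hypothetical placeholder for a pooled client."""

class ConnectionPool:
    def __init__(self, size=4):
        self._q = queue.Queue()
        self._clients_in_use = 0
        for _ in range(size):
            self._q.put(Connection())

    def get_client(self):
        cb = self._q.get(True)   # block until a client is free
        self._clients_in_use += 1
        return cb

    def release_client(self, cb):
        if cb:
            self._q.put(cb, True)
            self._clients_in_use -= 1

pool = ConnectionPool(size=2)
client = pool.get_client()
pool.release_client(client)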
def wrap_lines(content, length=80): """Wraps long lines to a maximum length of 80. :param content: the content to wrap. :param legnth: the maximum length to wrap the content. :type content: str :type length: int :returns: a string containing the wrapped content. :rtype: str """ return "\n".join(textwrap.wrap(content, length, break_long_words=False))
Wraps long lines to a maximum length of 80. :param content: the content to wrap. :param legnth: the maximum length to wrap the content. :type content: str :type length: int :returns: a string containing the wrapped content. :rtype: str
Below is the the instruction that describes the task: ### Input: Wraps long lines to a maximum length of 80. :param content: the content to wrap. :param legnth: the maximum length to wrap the content. :type content: str :type length: int :returns: a string containing the wrapped content. :rtype: str ### Response: def wrap_lines(content, length=80): """Wraps long lines to a maximum length of 80. :param content: the content to wrap. :param legnth: the maximum length to wrap the content. :type content: str :type length: int :returns: a string containing the wrapped content. :rtype: str """ return "\n".join(textwrap.wrap(content, length, break_long_words=False))
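A quick usage sketch for the wrap_lines entry above, using the function exactly as defined there.
import textwrap

def wrap_lines(content, length=80):
    """Wrap long lines to a maximum length (copied from the entry above)."""
    return "\n".join(textwrap.wrap(content, length, break_long_words=False))

text = "word " * 30
print(wrap_lines(text, length=40))   # each printed line stays within 40 characters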
def write(self, default: bool=False): """Restore B6/M8 entry to original format Args: default (bool): output entry in default BLAST+ B6 format Returns: str: properly formatted string containing the B6/M8 entry """ none_type = type(None) if default: # Default order of format specifiers ordered_vals = ['query', 'subject', 'identity', 'length', 'mismatches', 'gaps', 'query_start', 'query_end', 'subject_start', 'subject_end', 'evalue', 'bitscore'] else: # Original order of B6 entry format specifiers try: ordered_vals = [self.custom_fs[i] if i in self.custom_fs else getattr(self, i) for i in self.fs_order] except TypeError: ordered_vals = [getattr(self, i) for i in self.fs_order] # Format entry for writing fstr = "\t".join(['-' if type(i) == none_type else str(i) for i in ordered_vals]) return '{}{}'.format(fstr, os.linesep)
Restore B6/M8 entry to original format Args: default (bool): output entry in default BLAST+ B6 format Returns: str: properly formatted string containing the B6/M8 entry
Below is the the instruction that describes the task: ### Input: Restore B6/M8 entry to original format Args: default (bool): output entry in default BLAST+ B6 format Returns: str: properly formatted string containing the B6/M8 entry ### Response: def write(self, default: bool=False): """Restore B6/M8 entry to original format Args: default (bool): output entry in default BLAST+ B6 format Returns: str: properly formatted string containing the B6/M8 entry """ none_type = type(None) if default: # Default order of format specifiers ordered_vals = ['query', 'subject', 'identity', 'length', 'mismatches', 'gaps', 'query_start', 'query_end', 'subject_start', 'subject_end', 'evalue', 'bitscore'] else: # Original order of B6 entry format specifiers try: ordered_vals = [self.custom_fs[i] if i in self.custom_fs else getattr(self, i) for i in self.fs_order] except TypeError: ordered_vals = [getattr(self, i) for i in self.fs_order] # Format entry for writing fstr = "\t".join(['-' if type(i) == none_type else str(i) for i in ordered_vals]) return '{}{}'.format(fstr, os.linesep)
def write_uint64(self, value, little_endian=True): """ Pack the value as an unsigned integer and write 8 bytes to the stream. Args: value: little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the number of bytes written. """ if little_endian: endian = "<" else: endian = ">" return self.pack('%sQ' % endian, value)
Pack the value as an unsigned integer and write 8 bytes to the stream. Args: value: little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the number of bytes written.
Below is the the instruction that describes the task: ### Input: Pack the value as an unsigned integer and write 8 bytes to the stream. Args: value: little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the number of bytes written. ### Response: def write_uint64(self, value, little_endian=True): """ Pack the value as an unsigned integer and write 8 bytes to the stream. Args: value: little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the number of bytes written. """ if little_endian: endian = "<" else: endian = ">" return self.pack('%sQ' % endian, value)
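The write_uint64 entry above ultimately packs with the struct formats '<Q' and '>Q'; a standalone sketch of that packing, independent of the stream class.
import struct

value = 2 ** 40 + 5

little = struct.pack('<Q', value)   # 8 bytes, least-significant byte first
big = struct.pack('>Q', value)      # 8 bytes, most-significant byte first

assert len(little) == len(big) == 8
assert struct.unpack('<Q', little)[0] == value
print(little.hex(), big.hex())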
def get_teams_of_org(self): """ Retrieves the number of teams of the organization. """ print 'Getting teams.' counter = 0 for team in self.org_retrieved.iter_teams(): self.teams_json[team.id] = team.to_json() counter += 1 return counter
Retrieves the number of teams of the organization.
Below is the the instruction that describes the task: ### Input: Retrieves the number of teams of the organization. ### Response: def get_teams_of_org(self): """ Retrieves the number of teams of the organization. """ print 'Getting teams.' counter = 0 for team in self.org_retrieved.iter_teams(): self.teams_json[team.id] = team.to_json() counter += 1 return counter
def load(self, config, file_object, prefer=None): """ An abstract method that loads from a given file object. :param class config: The config class to load into :param file file_object: The file object to load from :param str prefer: The preferred serialization module name :returns: A dictionary converted from the content of the given file object :rtype: dict """ return self.loads(config, file_object.read(), prefer=prefer)
An abstract method that loads from a given file object. :param class config: The config class to load into :param file file_object: The file object to load from :param str prefer: The preferred serialization module name :returns: A dictionary converted from the content of the given file object :rtype: dict
Below is the the instruction that describes the task: ### Input: An abstract method that loads from a given file object. :param class config: The config class to load into :param file file_object: The file object to load from :param str prefer: The preferred serialization module name :returns: A dictionary converted from the content of the given file object :rtype: dict ### Response: def load(self, config, file_object, prefer=None): """ An abstract method that loads from a given file object. :param class config: The config class to load into :param file file_object: The file object to load from :param str prefer: The preferred serialization module name :returns: A dictionary converted from the content of the given file object :rtype: dict """ return self.loads(config, file_object.read(), prefer=prefer)
def show(): """ Show the modifiers and colors """ # modifiers sys.stdout.write(colorful.bold('bold') + ' ') sys.stdout.write(colorful.dimmed('dimmed') + ' ') sys.stdout.write(colorful.italic('italic') + ' ') sys.stdout.write(colorful.underlined('underlined') + ' ') sys.stdout.write(colorful.inversed('inversed') + ' ') sys.stdout.write(colorful.concealed('concealed') + ' ') sys.stdout.write(colorful.struckthrough('struckthrough') + '\n') # foreground colors sys.stdout.write(colorful.red('red') + ' ') sys.stdout.write(colorful.green('green') + ' ') sys.stdout.write(colorful.yellow('yellow') + ' ') sys.stdout.write(colorful.blue('blue') + ' ') sys.stdout.write(colorful.magenta('magenta') + ' ') sys.stdout.write(colorful.cyan('cyan') + ' ') sys.stdout.write(colorful.white('white') + '\n') # background colors sys.stdout.write(colorful.on_red('red') + ' ') sys.stdout.write(colorful.on_green('green') + ' ') sys.stdout.write(colorful.on_yellow('yellow') + ' ') sys.stdout.write(colorful.on_blue('blue') + ' ') sys.stdout.write(colorful.on_magenta('magenta') + ' ') sys.stdout.write(colorful.on_cyan('cyan') + ' ') sys.stdout.write(colorful.on_white('white') + '\n')
Show the modifiers and colors
Below is the the instruction that describes the task: ### Input: Show the modifiers and colors ### Response: def show(): """ Show the modifiers and colors """ # modifiers sys.stdout.write(colorful.bold('bold') + ' ') sys.stdout.write(colorful.dimmed('dimmed') + ' ') sys.stdout.write(colorful.italic('italic') + ' ') sys.stdout.write(colorful.underlined('underlined') + ' ') sys.stdout.write(colorful.inversed('inversed') + ' ') sys.stdout.write(colorful.concealed('concealed') + ' ') sys.stdout.write(colorful.struckthrough('struckthrough') + '\n') # foreground colors sys.stdout.write(colorful.red('red') + ' ') sys.stdout.write(colorful.green('green') + ' ') sys.stdout.write(colorful.yellow('yellow') + ' ') sys.stdout.write(colorful.blue('blue') + ' ') sys.stdout.write(colorful.magenta('magenta') + ' ') sys.stdout.write(colorful.cyan('cyan') + ' ') sys.stdout.write(colorful.white('white') + '\n') # background colors sys.stdout.write(colorful.on_red('red') + ' ') sys.stdout.write(colorful.on_green('green') + ' ') sys.stdout.write(colorful.on_yellow('yellow') + ' ') sys.stdout.write(colorful.on_blue('blue') + ' ') sys.stdout.write(colorful.on_magenta('magenta') + ' ') sys.stdout.write(colorful.on_cyan('cyan') + ' ') sys.stdout.write(colorful.on_white('white') + '\n')
def save_json(self, frames): """ Saves frames data into a json file at the specified json_path, named with the widget uuid. """ if self.json_save_path is None: return path = os.path.join(self.json_save_path, '%s.json' % self.id) if not os.path.isdir(self.json_save_path): os.mkdir(self.json_save_path) with open(path, 'w') as f: json.dump(frames, f) self.json_data = frames
Saves frames data into a json file at the specified json_path, named with the widget uuid.
Below is the the instruction that describes the task: ### Input: Saves frames data into a json file at the specified json_path, named with the widget uuid. ### Response: def save_json(self, frames): """ Saves frames data into a json file at the specified json_path, named with the widget uuid. """ if self.json_save_path is None: return path = os.path.join(self.json_save_path, '%s.json' % self.id) if not os.path.isdir(self.json_save_path): os.mkdir(self.json_save_path) with open(path, 'w') as f: json.dump(frames, f) self.json_data = frames
def _send_command(self, command): """ Send a command to the Chromecast on media channel. """ if self.status is None or self.status.media_session_id is None: self.logger.warning( "%s command requested but no session is active.", command[MESSAGE_TYPE]) return command['mediaSessionId'] = self.status.media_session_id self.send_message(command, inc_session_id=True)
Send a command to the Chromecast on media channel.
Below is the the instruction that describes the task: ### Input: Send a command to the Chromecast on media channel. ### Response: def _send_command(self, command): """ Send a command to the Chromecast on media channel. """ if self.status is None or self.status.media_session_id is None: self.logger.warning( "%s command requested but no session is active.", command[MESSAGE_TYPE]) return command['mediaSessionId'] = self.status.media_session_id self.send_message(command, inc_session_id=True)
def namedtuple_row_strategy(column_names): """ Namedtuple row strategy, rows returned as named tuples Column names that are not valid Python identifiers will be replaced with col<number>_ """ import collections # replace empty column names with placeholders column_names = [name if is_valid_identifier(name) else 'col%s_' % idx for idx, name in enumerate(column_names)] row_class = collections.namedtuple('Row', column_names) def row_factory(row): return row_class(*row) return row_factory
Namedtuple row strategy, rows returned as named tuples Column names that are not valid Python identifiers will be replaced with col<number>_
Below is the the instruction that describes the task: ### Input: Namedtuple row strategy, rows returned as named tuples Column names that are not valid Python identifiers will be replaced with col<number>_ ### Response: def namedtuple_row_strategy(column_names): """ Namedtuple row strategy, rows returned as named tuples Column names that are not valid Python identifiers will be replaced with col<number>_ """ import collections # replace empty column names with placeholders column_names = [name if is_valid_identifier(name) else 'col%s_' % idx for idx, name in enumerate(column_names)] row_class = collections.namedtuple('Row', column_names) def row_factory(row): return row_class(*row) return row_factory
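A usage sketch for the namedtuple_row_strategy entry above; is_valid_identifier is an external helper in the original module, so a simple stand-in is assumed here.
import collections

def is_valid_identifier(name):
    # Stand-in for the helper used by the original module.
    return name.isidentifier()

def namedtuple_row_strategy(column_names):
    column_names = [name if is_valid_identifier(name) else 'col%s_' % idx
                    for idx, name in enumerate(column_names)]
    row_class = collections.namedtuple('Row', column_names)
    def row_factory(row):
        return row_class(*row)
    return row_factory

factory = namedtuple_row_strategy(['id', 'name', '2fast'])  # '2fast' is not a valid identifier
row = factory((1, 'alice', 'x'))
print(row.id, row.name, row.col2_)   # the invalid column name becomes col2_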
def validate(self, value): """Validate field value.""" if value is not None and not isinstance(value, bool): raise ValidationError("field must be a boolean") super().validate(value)
Validate field value.
Below is the the instruction that describes the task: ### Input: Validate field value. ### Response: def validate(self, value): """Validate field value.""" if value is not None and not isinstance(value, bool): raise ValidationError("field must be a boolean") super().validate(value)
def get_file_type(self, abs_real_path_and_basename, _cache_file_type=_CACHE_FILE_TYPE): ''' :param abs_real_path_and_basename: The result from get_abs_path_real_path_and_base_from_file or get_abs_path_real_path_and_base_from_frame. :return _pydevd_bundle.pydevd_dont_trace_files.PYDEV_FILE: If it's a file internal to the debugger which shouldn't be traced nor shown to the user. _pydevd_bundle.pydevd_dont_trace_files.LIB_FILE: If it's a file in a library which shouldn't be traced. None: If it's a regular user file which should be traced. ''' try: return _cache_file_type[abs_real_path_and_basename[0]] except: file_type = self._internal_get_file_type(abs_real_path_and_basename) if file_type is None: file_type = PYDEV_FILE if self.dont_trace_external_files(abs_real_path_and_basename[0]) else None _cache_file_type[abs_real_path_and_basename[0]] = file_type return file_type
:param abs_real_path_and_basename: The result from get_abs_path_real_path_and_base_from_file or get_abs_path_real_path_and_base_from_frame. :return _pydevd_bundle.pydevd_dont_trace_files.PYDEV_FILE: If it's a file internal to the debugger which shouldn't be traced nor shown to the user. _pydevd_bundle.pydevd_dont_trace_files.LIB_FILE: If it's a file in a library which shouldn't be traced. None: If it's a regular user file which should be traced.
Below is the the instruction that describes the task: ### Input: :param abs_real_path_and_basename: The result from get_abs_path_real_path_and_base_from_file or get_abs_path_real_path_and_base_from_frame. :return _pydevd_bundle.pydevd_dont_trace_files.PYDEV_FILE: If it's a file internal to the debugger which shouldn't be traced nor shown to the user. _pydevd_bundle.pydevd_dont_trace_files.LIB_FILE: If it's a file in a library which shouldn't be traced. None: If it's a regular user file which should be traced. ### Response: def get_file_type(self, abs_real_path_and_basename, _cache_file_type=_CACHE_FILE_TYPE): ''' :param abs_real_path_and_basename: The result from get_abs_path_real_path_and_base_from_file or get_abs_path_real_path_and_base_from_frame. :return _pydevd_bundle.pydevd_dont_trace_files.PYDEV_FILE: If it's a file internal to the debugger which shouldn't be traced nor shown to the user. _pydevd_bundle.pydevd_dont_trace_files.LIB_FILE: If it's a file in a library which shouldn't be traced. None: If it's a regular user file which should be traced. ''' try: return _cache_file_type[abs_real_path_and_basename[0]] except: file_type = self._internal_get_file_type(abs_real_path_and_basename) if file_type is None: file_type = PYDEV_FILE if self.dont_trace_external_files(abs_real_path_and_basename[0]) else None _cache_file_type[abs_real_path_and_basename[0]] = file_type return file_type
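The get_file_type entry above leans on a mutable default argument (_cache_file_type=_CACHE_FILE_TYPE) as a module-wide cache; a tiny standalone sketch of that caching pattern, with made-up file-type constants rather than the real pydevd values.
PYDEV_FILE = 'PYDEV_FILE'   # illustrative constants only
LIB_FILE = 'LIB_FILE'

_CACHE_FILE_TYPE = {}

def get_file_type(path, _cache=_CACHE_FILE_TYPE):
    try:
        return _cache[path]
    except KeyError:
        # Pretend anything under 'site-packages' is library code.
        file_type = LIB_FILE if 'site-packages' in path else None
        _cache[path] = file_type
        return file_type

print(get_file_type('/usr/lib/python3/site-packages/foo.py'))  # LIB_FILE, now cached
print(_CACHE_FILE_TYPE)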
def _updateFrame(self): """ Updates the frame for the given sender. """ for col, mov in self._movies.items(): self.setIcon(col, QtGui.QIcon(mov.currentPixmap()))
Updates the frame for the given sender.
Below is the the instruction that describes the task: ### Input: Updates the frame for the given sender. ### Response: def _updateFrame(self): """ Updates the frame for the given sender. """ for col, mov in self._movies.items(): self.setIcon(col, QtGui.QIcon(mov.currentPixmap()))
def max_date(self, symbol): """ Return the maximum datetime stored for a particular symbol Parameters ---------- symbol : `str` symbol name for the item """ res = self._collection.find_one({SYMBOL: symbol}, projection={ID: 0, END: 1}, sort=[(START, pymongo.DESCENDING)]) if res is None: raise NoDataFoundException("No Data found for {}".format(symbol)) return utc_dt_to_local_dt(res[END])
Return the maximum datetime stored for a particular symbol Parameters ---------- symbol : `str` symbol name for the item
Below is the the instruction that describes the task: ### Input: Return the maximum datetime stored for a particular symbol Parameters ---------- symbol : `str` symbol name for the item ### Response: def max_date(self, symbol): """ Return the maximum datetime stored for a particular symbol Parameters ---------- symbol : `str` symbol name for the item """ res = self._collection.find_one({SYMBOL: symbol}, projection={ID: 0, END: 1}, sort=[(START, pymongo.DESCENDING)]) if res is None: raise NoDataFoundException("No Data found for {}".format(symbol)) return utc_dt_to_local_dt(res[END])
def start_blocking(self): """ Start the advertiser in the background, but wait until it is ready """ self._cav_started.clear() self.start() self._cav_started.wait()
Start the advertiser in the background, but wait until it is ready
Below is the the instruction that describes the task: ### Input: Start the advertiser in the background, but wait until it is ready ### Response: def start_blocking(self): """ Start the advertiser in the background, but wait until it is ready """ self._cav_started.clear() self.start() self._cav_started.wait()
def genome_coverage(genomes, scaffold_coverage, total_bases): """ coverage = (number of bases / length of genome) * 100 """ coverage = {} custom = {} std = {} for genome in genomes: for sequence in parse_fasta(genome): scaffold = sequence[0].split('>')[1].split()[0] coverage, std = sum_coverage(coverage, std, genome, scaffold, sequence, scaffold_coverage) custom = calc_custom(custom, genome, scaffold, sequence, scaffold_coverage, total_bases) std = calc_std(std) custom_std = calc_std(custom) custom_av = {} for genome in custom: custom_av[genome] = [] for sample in custom[genome]: custom_av[genome].append(numpy.mean(sample)) for genome in coverage: print('%s\t%s' % (genome, coverage[genome][0][1])) if total_bases is True: total_bases = calc_total_mapped_bases(coverage) absolute = absolute_abundance(coverage, total_bases) for genome in coverage: calculated = [] for calc in coverage[genome]: calculated.append(calc[0] / calc[1]) coverage[genome] = calculated relative = relative_abundance(coverage) return coverage, std, absolute, relative, custom_av, custom_std
coverage = (number of bases / length of genome) * 100
Below is the the instruction that describes the task: ### Input: coverage = (number of bases / length of genome) * 100 ### Response: def genome_coverage(genomes, scaffold_coverage, total_bases): """ coverage = (number of bases / length of genome) * 100 """ coverage = {} custom = {} std = {} for genome in genomes: for sequence in parse_fasta(genome): scaffold = sequence[0].split('>')[1].split()[0] coverage, std = sum_coverage(coverage, std, genome, scaffold, sequence, scaffold_coverage) custom = calc_custom(custom, genome, scaffold, sequence, scaffold_coverage, total_bases) std = calc_std(std) custom_std = calc_std(custom) custom_av = {} for genome in custom: custom_av[genome] = [] for sample in custom[genome]: custom_av[genome].append(numpy.mean(sample)) for genome in coverage: print('%s\t%s' % (genome, coverage[genome][0][1])) if total_bases is True: total_bases = calc_total_mapped_bases(coverage) absolute = absolute_abundance(coverage, total_bases) for genome in coverage: calculated = [] for calc in coverage[genome]: calculated.append(calc[0] / calc[1]) coverage[genome] = calculated relative = relative_abundance(coverage) return coverage, std, absolute, relative, custom_av, custom_std
def get_hardware(self, hardware_id, **kwargs): """Get details about a hardware device. :param integer id: the hardware ID :returns: A dictionary containing a large amount of information about the specified server. Example:: object_mask = "mask[id,networkVlans[vlanNumber]]" # Object masks are optional result = mgr.get_hardware(hardware_id=1234,mask=object_mask) """ if 'mask' not in kwargs: kwargs['mask'] = ( 'id,' 'globalIdentifier,' 'fullyQualifiedDomainName,' 'hostname,' 'domain,' 'provisionDate,' 'hardwareStatus,' 'processorPhysicalCoreAmount,' 'memoryCapacity,' 'notes,' 'privateNetworkOnlyFlag,' 'primaryBackendIpAddress,' 'primaryIpAddress,' 'networkManagementIpAddress,' 'userData,' 'datacenter,' '''networkComponents[id, status, speed, maxSpeed, name, ipmiMacAddress, ipmiIpAddress, macAddress, primaryIpAddress, port, primarySubnet[id, netmask, broadcastAddress, networkIdentifier, gateway]],''' 'hardwareChassis[id,name],' 'activeTransaction[id, transactionStatus[friendlyName,name]],' '''operatingSystem[ softwareLicense[softwareDescription[manufacturer, name, version, referenceCode]], passwords[username,password]],''' '''softwareComponents[ softwareLicense[softwareDescription[manufacturer, name, version, referenceCode]], passwords[username,password]],''' 'billingItem[' 'id,nextInvoiceTotalRecurringAmount,' 'children[nextInvoiceTotalRecurringAmount],' 'orderItem.order.userRecord[username]' '],' 'hourlyBillingFlag,' 'tagReferences[id,tag[name,id]],' 'networkVlans[id,vlanNumber,networkSpace],' 'remoteManagementAccounts[username,password]' ) return self.hardware.getObject(id=hardware_id, **kwargs)
Get details about a hardware device. :param integer id: the hardware ID :returns: A dictionary containing a large amount of information about the specified server. Example:: object_mask = "mask[id,networkVlans[vlanNumber]]" # Object masks are optional result = mgr.get_hardware(hardware_id=1234,mask=object_mask)
Below is the the instruction that describes the task: ### Input: Get details about a hardware device. :param integer id: the hardware ID :returns: A dictionary containing a large amount of information about the specified server. Example:: object_mask = "mask[id,networkVlans[vlanNumber]]" # Object masks are optional result = mgr.get_hardware(hardware_id=1234,mask=object_mask) ### Response: def get_hardware(self, hardware_id, **kwargs): """Get details about a hardware device. :param integer id: the hardware ID :returns: A dictionary containing a large amount of information about the specified server. Example:: object_mask = "mask[id,networkVlans[vlanNumber]]" # Object masks are optional result = mgr.get_hardware(hardware_id=1234,mask=object_mask) """ if 'mask' not in kwargs: kwargs['mask'] = ( 'id,' 'globalIdentifier,' 'fullyQualifiedDomainName,' 'hostname,' 'domain,' 'provisionDate,' 'hardwareStatus,' 'processorPhysicalCoreAmount,' 'memoryCapacity,' 'notes,' 'privateNetworkOnlyFlag,' 'primaryBackendIpAddress,' 'primaryIpAddress,' 'networkManagementIpAddress,' 'userData,' 'datacenter,' '''networkComponents[id, status, speed, maxSpeed, name, ipmiMacAddress, ipmiIpAddress, macAddress, primaryIpAddress, port, primarySubnet[id, netmask, broadcastAddress, networkIdentifier, gateway]],''' 'hardwareChassis[id,name],' 'activeTransaction[id, transactionStatus[friendlyName,name]],' '''operatingSystem[ softwareLicense[softwareDescription[manufacturer, name, version, referenceCode]], passwords[username,password]],''' '''softwareComponents[ softwareLicense[softwareDescription[manufacturer, name, version, referenceCode]], passwords[username,password]],''' 'billingItem[' 'id,nextInvoiceTotalRecurringAmount,' 'children[nextInvoiceTotalRecurringAmount],' 'orderItem.order.userRecord[username]' '],' 'hourlyBillingFlag,' 'tagReferences[id,tag[name,id]],' 'networkVlans[id,vlanNumber,networkSpace],' 'remoteManagementAccounts[username,password]' ) return self.hardware.getObject(id=hardware_id, **kwargs)
def get_redirect_url(self, url, encrypt_code, card_id): """ 获取卡券跳转外链 """ from wechatpy.utils import WeChatSigner code = self.decrypt_code(encrypt_code) signer = WeChatSigner() signer.add_data(self.secret) signer.add_data(code) signer.add_data(card_id) signature = signer.signature r = '{url}?encrypt_code={code}&card_id={card_id}&signature={signature}' return r.format( url=url, code=encrypt_code, card_id=card_id, signature=signature )
获取卡券跳转外链
Below is the the instruction that describes the task: ### Input: 获取卡券跳转外链 ### Response: def get_redirect_url(self, url, encrypt_code, card_id): """ 获取卡券跳转外链 """ from wechatpy.utils import WeChatSigner code = self.decrypt_code(encrypt_code) signer = WeChatSigner() signer.add_data(self.secret) signer.add_data(code) signer.add_data(card_id) signature = signer.signature r = '{url}?encrypt_code={code}&card_id={card_id}&signature={signature}' return r.format( url=url, code=encrypt_code, card_id=card_id, signature=signature )
def process_tomography_set(meas_qubits, meas_basis='Pauli', prep_qubits=None, prep_basis='SIC'): """ Generate a dictionary of process tomography experiment configurations. This returns a data structure that is used by other tomography functions to generate state and process tomography circuits, and extract tomography data from results after execution on a backend. A quantum process tomography set is created by specifying a preparation basis along with a measurement basis. The preparation basis may be a user defined `tomography_basis`, or one of the two built in basis 'SIC' or 'Pauli'. - SIC: Is a minimal symmetric informationally complete preparation basis for 4 states for each qubit (4 ^ number of qubits total preparation states). These correspond to the |0> state and the 3 other vertices of a tetrahedron on the Bloch-sphere. - Pauli: Is a tomographically overcomplete preparation basis of the six eigenstates of the 3 Pauli operators (6 ^ number of qubits total preparation states). Args: meas_qubits (list): The qubits being measured. meas_basis (tomography_basis or str): The qubit measurement basis. The default value is 'Pauli'. prep_qubits (list or None): The qubits being prepared. If None then meas_qubits will be used for process tomography experiments. prep_basis (tomography_basis or str): The qubit preparation basis. The default value is 'SIC'. Returns: dict: A dict of tomography configurations that can be parsed by `create_tomography_circuits` and `tomography_data` functions for implementing quantum tomography experiments. This output contains fields "qubits", "meas_basis", "prep_basus", circuits". ``` { 'qubits': qubits (list[ints]), 'meas_basis': meas_basis (tomography_basis), 'prep_basis': prep_basis (tomography_basis), 'circuit_labels': (list[string]), 'circuits': (list[dict]) # prep and meas configurations } ``` """ return tomography_set(meas_qubits, meas_basis=meas_basis, prep_qubits=prep_qubits, prep_basis=prep_basis)
Generate a dictionary of process tomography experiment configurations. This returns a data structure that is used by other tomography functions to generate state and process tomography circuits, and extract tomography data from results after execution on a backend. A quantum process tomography set is created by specifying a preparation basis along with a measurement basis. The preparation basis may be a user defined `tomography_basis`, or one of the two built in basis 'SIC' or 'Pauli'. - SIC: Is a minimal symmetric informationally complete preparation basis for 4 states for each qubit (4 ^ number of qubits total preparation states). These correspond to the |0> state and the 3 other vertices of a tetrahedron on the Bloch-sphere. - Pauli: Is a tomographically overcomplete preparation basis of the six eigenstates of the 3 Pauli operators (6 ^ number of qubits total preparation states). Args: meas_qubits (list): The qubits being measured. meas_basis (tomography_basis or str): The qubit measurement basis. The default value is 'Pauli'. prep_qubits (list or None): The qubits being prepared. If None then meas_qubits will be used for process tomography experiments. prep_basis (tomography_basis or str): The qubit preparation basis. The default value is 'SIC'. Returns: dict: A dict of tomography configurations that can be parsed by `create_tomography_circuits` and `tomography_data` functions for implementing quantum tomography experiments. This output contains fields "qubits", "meas_basis", "prep_basus", circuits". ``` { 'qubits': qubits (list[ints]), 'meas_basis': meas_basis (tomography_basis), 'prep_basis': prep_basis (tomography_basis), 'circuit_labels': (list[string]), 'circuits': (list[dict]) # prep and meas configurations } ```
Below is the the instruction that describes the task: ### Input: Generate a dictionary of process tomography experiment configurations. This returns a data structure that is used by other tomography functions to generate state and process tomography circuits, and extract tomography data from results after execution on a backend. A quantum process tomography set is created by specifying a preparation basis along with a measurement basis. The preparation basis may be a user defined `tomography_basis`, or one of the two built in basis 'SIC' or 'Pauli'. - SIC: Is a minimal symmetric informationally complete preparation basis for 4 states for each qubit (4 ^ number of qubits total preparation states). These correspond to the |0> state and the 3 other vertices of a tetrahedron on the Bloch-sphere. - Pauli: Is a tomographically overcomplete preparation basis of the six eigenstates of the 3 Pauli operators (6 ^ number of qubits total preparation states). Args: meas_qubits (list): The qubits being measured. meas_basis (tomography_basis or str): The qubit measurement basis. The default value is 'Pauli'. prep_qubits (list or None): The qubits being prepared. If None then meas_qubits will be used for process tomography experiments. prep_basis (tomography_basis or str): The qubit preparation basis. The default value is 'SIC'. Returns: dict: A dict of tomography configurations that can be parsed by `create_tomography_circuits` and `tomography_data` functions for implementing quantum tomography experiments. This output contains fields "qubits", "meas_basis", "prep_basus", circuits". ``` { 'qubits': qubits (list[ints]), 'meas_basis': meas_basis (tomography_basis), 'prep_basis': prep_basis (tomography_basis), 'circuit_labels': (list[string]), 'circuits': (list[dict]) # prep and meas configurations } ``` ### Response: def process_tomography_set(meas_qubits, meas_basis='Pauli', prep_qubits=None, prep_basis='SIC'): """ Generate a dictionary of process tomography experiment configurations. This returns a data structure that is used by other tomography functions to generate state and process tomography circuits, and extract tomography data from results after execution on a backend. A quantum process tomography set is created by specifying a preparation basis along with a measurement basis. The preparation basis may be a user defined `tomography_basis`, or one of the two built in basis 'SIC' or 'Pauli'. - SIC: Is a minimal symmetric informationally complete preparation basis for 4 states for each qubit (4 ^ number of qubits total preparation states). These correspond to the |0> state and the 3 other vertices of a tetrahedron on the Bloch-sphere. - Pauli: Is a tomographically overcomplete preparation basis of the six eigenstates of the 3 Pauli operators (6 ^ number of qubits total preparation states). Args: meas_qubits (list): The qubits being measured. meas_basis (tomography_basis or str): The qubit measurement basis. The default value is 'Pauli'. prep_qubits (list or None): The qubits being prepared. If None then meas_qubits will be used for process tomography experiments. prep_basis (tomography_basis or str): The qubit preparation basis. The default value is 'SIC'. Returns: dict: A dict of tomography configurations that can be parsed by `create_tomography_circuits` and `tomography_data` functions for implementing quantum tomography experiments. This output contains fields "qubits", "meas_basis", "prep_basus", circuits". ``` { 'qubits': qubits (list[ints]), 'meas_basis': meas_basis (tomography_basis), 'prep_basis': prep_basis (tomography_basis), 'circuit_labels': (list[string]), 'circuits': (list[dict]) # prep and meas configurations } ``` """ return tomography_set(meas_qubits, meas_basis=meas_basis, prep_qubits=prep_qubits, prep_basis=prep_basis)
def connect(self, host: str = '192.168.0.3', port: Union[int, str] = 5555) -> None: '''Connect to a device via TCP/IP directly.''' self.device_sn = f'{host}:{port}' if not is_connectable(host, port): raise ConnectionError(f'Cannot connect to {self.device_sn}.') self._execute('connect', self.device_sn)
Connect to a device via TCP/IP directly.
Below is the the instruction that describes the task: ### Input: Connect to a device via TCP/IP directly. ### Response: def connect(self, host: str = '192.168.0.3', port: Union[int, str] = 5555) -> None: '''Connect to a device via TCP/IP directly.''' self.device_sn = f'{host}:{port}' if not is_connectable(host, port): raise ConnectionError(f'Cannot connect to {self.device_sn}.') self._execute('connect', self.device_sn)
def simplex(x, rho): """ Projection onto the probability simplex http://arxiv.org/pdf/1309.1541v1.pdf """ # sort the elements in descending order u = np.flipud(np.sort(x.ravel())) lambdas = (1 - np.cumsum(u)) / (1. + np.arange(u.size)) ix = np.where(u + lambdas > 0)[0].max() return np.maximum(x + lambdas[ix], 0)
Projection onto the probability simplex http://arxiv.org/pdf/1309.1541v1.pdf
Below is the the instruction that describes the task: ### Input: Projection onto the probability simplex http://arxiv.org/pdf/1309.1541v1.pdf ### Response: def simplex(x, rho): """ Projection onto the probability simplex http://arxiv.org/pdf/1309.1541v1.pdf """ # sort the elements in descending order u = np.flipud(np.sort(x.ravel())) lambdas = (1 - np.cumsum(u)) / (1. + np.arange(u.size)) ix = np.where(u + lambdas > 0)[0].max() return np.maximum(x + lambdas[ix], 0)
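A quick check of the simplex entry above: project an arbitrary vector and confirm the result is non-negative and sums to one (rho is unused by the projection, as in the original).
import numpy as np

def simplex(x, rho=None):
    u = np.flipud(np.sort(x.ravel()))
    lambdas = (1 - np.cumsum(u)) / (1. + np.arange(u.size))
    ix = np.where(u + lambdas > 0)[0].max()
    return np.maximum(x + lambdas[ix], 0)

x = np.array([0.4, -1.2, 3.0, 0.1])
p = simplex(x, rho=1.0)
print(p, p.sum())          # entries are >= 0 and sum to 1 (up to floating point)
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)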
def read_coils(slave_id, starting_address, quantity): """ Return ADU for Modbus function code 01: Read Coils. :param slave_id: Number of slave. :return: Byte array with ADU. """ function = ReadCoils() function.starting_address = starting_address function.quantity = quantity return _create_request_adu(slave_id, function.request_pdu)
Return ADU for Modbus function code 01: Read Coils. :param slave_id: Number of slave. :return: Byte array with ADU.
Below is the the instruction that describes the task: ### Input: Return ADU for Modbus function code 01: Read Coils. :param slave_id: Number of slave. :return: Byte array with ADU. ### Response: def read_coils(slave_id, starting_address, quantity): """ Return ADU for Modbus function code 01: Read Coils. :param slave_id: Number of slave. :return: Byte array with ADU. """ function = ReadCoils() function.starting_address = starting_address function.quantity = quantity return _create_request_adu(slave_id, function.request_pdu)
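For the read_coils entry above, the request PDU behind Modbus function code 01 is five big-endian bytes; a sketch of that layout with struct, independent of the umodbus helpers used in the original (which additionally wrap the PDU into an ADU with the slave id).
import struct

def read_coils_pdu(starting_address, quantity):
    # Modbus function code 0x01, then 16-bit starting address and quantity, big-endian.
    return struct.pack('>BHH', 0x01, starting_address, quantity)

pdu = read_coils_pdu(starting_address=100, quantity=16)
print(pdu.hex())   # '0100640010'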
def parse_output(host, output): """Parse the output of the dump tool and print warnings or error messages accordingly. :param host: the source :type host: str :param output: the output of the script on host :type output: list of str """ current_file = None for line in output.readlines(): file_name_search = FILE_PATH_REGEX.search(line) if file_name_search: current_file = file_name_search.group(1) continue if INVALID_MESSAGE_REGEX.match(line) or INVALID_BYTES_REGEX.match(line): print_line(host, current_file, line, "ERROR") elif VALID_MESSAGE_REGEX.match(line) or \ line.startswith('Starting offset:'): continue else: print_line(host, current_file, line, "UNEXPECTED OUTPUT")
Parse the output of the dump tool and print warnings or error messages accordingly. :param host: the source :type host: str :param output: the output of the script on host :type output: list of str
Below is the the instruction that describes the task: ### Input: Parse the output of the dump tool and print warnings or error messages accordingly. :param host: the source :type host: str :param output: the output of the script on host :type output: list of str ### Response: def parse_output(host, output): """Parse the output of the dump tool and print warnings or error messages accordingly. :param host: the source :type host: str :param output: the output of the script on host :type output: list of str """ current_file = None for line in output.readlines(): file_name_search = FILE_PATH_REGEX.search(line) if file_name_search: current_file = file_name_search.group(1) continue if INVALID_MESSAGE_REGEX.match(line) or INVALID_BYTES_REGEX.match(line): print_line(host, current_file, line, "ERROR") elif VALID_MESSAGE_REGEX.match(line) or \ line.startswith('Starting offset:'): continue else: print_line(host, current_file, line, "UNEXPECTED OUTPUT")
def send_to_back(self): """adjusts sprite's z-order so that the sprite is behind it's siblings""" if not self.parent: return self.z_order = self.parent._z_ordered_sprites[0].z_order - 1
adjusts sprite's z-order so that the sprite is behind it's siblings
Below is the the instruction that describes the task: ### Input: adjusts sprite's z-order so that the sprite is behind it's siblings ### Response: def send_to_back(self): """adjusts sprite's z-order so that the sprite is behind it's siblings""" if not self.parent: return self.z_order = self.parent._z_ordered_sprites[0].z_order - 1
def merge_extinfo(host, extinfo): """Merge extended host information into a host :param host: the host to edit :type host: alignak.objects.host.Host :param extinfo: the external info we get data from :type extinfo: alignak.objects.hostextinfo.HostExtInfo :return: None """ # Note that 2d_coords and 3d_coords are never merged, so not usable ! properties = ['notes', 'notes_url', 'icon_image', 'icon_image_alt', 'vrml_image', 'statusmap_image'] # host properties have precedence over hostextinfo properties for prop in properties: if getattr(host, prop) == '' and getattr(extinfo, prop) != '': setattr(host, prop, getattr(extinfo, prop))
Merge extended host information into a host :param host: the host to edit :type host: alignak.objects.host.Host :param extinfo: the external info we get data from :type extinfo: alignak.objects.hostextinfo.HostExtInfo :return: None
Below is the the instruction that describes the task: ### Input: Merge extended host information into a host :param host: the host to edit :type host: alignak.objects.host.Host :param extinfo: the external info we get data from :type extinfo: alignak.objects.hostextinfo.HostExtInfo :return: None ### Response: def merge_extinfo(host, extinfo): """Merge extended host information into a host :param host: the host to edit :type host: alignak.objects.host.Host :param extinfo: the external info we get data from :type extinfo: alignak.objects.hostextinfo.HostExtInfo :return: None """ # Note that 2d_coords and 3d_coords are never merged, so not usable ! properties = ['notes', 'notes_url', 'icon_image', 'icon_image_alt', 'vrml_image', 'statusmap_image'] # host properties have precedence over hostextinfo properties for prop in properties: if getattr(host, prop) == '' and getattr(extinfo, prop) != '': setattr(host, prop, getattr(extinfo, prop))
def get_log_tag(process_name): """method returns tag that all messages will be preceded with""" process_obj = context.process_context[process_name] if isinstance(process_obj, FreerunProcessEntry): return str(process_obj.token) elif isinstance(process_obj, ManagedProcessEntry): return str(process_obj.token) + str(process_obj.time_qualifier) elif isinstance(process_obj, DaemonProcessEntry): return str(process_obj.token) else: raise ValueError('Unknown process type: {0}'.format(process_obj.__class__.__name__))
method returns tag that all messages will be preceded with
Below is the the instruction that describes the task: ### Input: method returns tag that all messages will be preceded with ### Response: def get_log_tag(process_name): """method returns tag that all messages will be preceded with""" process_obj = context.process_context[process_name] if isinstance(process_obj, FreerunProcessEntry): return str(process_obj.token) elif isinstance(process_obj, ManagedProcessEntry): return str(process_obj.token) + str(process_obj.time_qualifier) elif isinstance(process_obj, DaemonProcessEntry): return str(process_obj.token) else: raise ValueError('Unknown process type: {0}'.format(process_obj.__class__.__name__))
def get_normalized_elevation_array(world): ''' Convert raw elevation into normalized values between 0 and 255, and return a numpy array of these values ''' e = world.layers['elevation'].data ocean = world.layers['ocean'].data mask = numpy.ma.array(e, mask=ocean) # only land min_elev_land = mask.min() max_elev_land = mask.max() elev_delta_land = max_elev_land - min_elev_land mask = numpy.ma.array(e, mask=numpy.logical_not(ocean)) # only ocean min_elev_sea = mask.min() max_elev_sea = mask.max() elev_delta_sea = max_elev_sea - min_elev_sea c = numpy.empty(e.shape, dtype=numpy.float) c[numpy.invert(ocean)] = (e[numpy.invert(ocean)] - min_elev_land) * 127 / elev_delta_land + 128 c[ocean] = (e[ocean] - min_elev_sea) * 127 / elev_delta_sea c = numpy.rint(c).astype(dtype=numpy.int32) # proper rounding return c
Convert raw elevation into normalized values between 0 and 255, and return a numpy array of these values
Below is the the instruction that describes the task: ### Input: Convert raw elevation into normalized values between 0 and 255, and return a numpy array of these values ### Response: def get_normalized_elevation_array(world): ''' Convert raw elevation into normalized values between 0 and 255, and return a numpy array of these values ''' e = world.layers['elevation'].data ocean = world.layers['ocean'].data mask = numpy.ma.array(e, mask=ocean) # only land min_elev_land = mask.min() max_elev_land = mask.max() elev_delta_land = max_elev_land - min_elev_land mask = numpy.ma.array(e, mask=numpy.logical_not(ocean)) # only ocean min_elev_sea = mask.min() max_elev_sea = mask.max() elev_delta_sea = max_elev_sea - min_elev_sea c = numpy.empty(e.shape, dtype=numpy.float) c[numpy.invert(ocean)] = (e[numpy.invert(ocean)] - min_elev_land) * 127 / elev_delta_land + 128 c[ocean] = (e[ocean] - min_elev_sea) * 127 / elev_delta_sea c = numpy.rint(c).astype(dtype=numpy.int32) # proper rounding return c
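A compact standalone sketch of the normalization idea in the get_normalized_elevation_array entry above — land mapped to 128..255 and ocean to 0..127 — using a small hand-made elevation grid and mask instead of the world object.
import numpy as np

e = np.array([[-3000., -1200.,   500.,  2000.],
              [ -800.,   150.,  2600.,   -50.],
              [ 1200., -2200.,    90.,  3100.],
              [  -10.,   700., -1500.,    40.]])
ocean = e < 0                      # fake ocean mask

land = np.ma.array(e, mask=ocean)  # statistics over land only
sea = np.ma.array(e, mask=~ocean)  # statistics over ocean only

c = np.empty(e.shape, dtype=float)
c[~ocean] = (e[~ocean] - land.min()) * 127 / (land.max() - land.min()) + 128
c[ocean] = (e[ocean] - sea.min()) * 127 / (sea.max() - sea.min())
c = np.rint(c).astype(np.int32)

print(c)   # land cells fall in 128..255, ocean cells in 0..127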
def load_graphdef(model_url, reset_device=True): """Load GraphDef from a binary proto file.""" graph_def = load(model_url) if reset_device: for n in graph_def.node: n.device = "" return graph_def
Load GraphDef from a binary proto file.
Below is the the instruction that describes the task: ### Input: Load GraphDef from a binary proto file. ### Response: def load_graphdef(model_url, reset_device=True): """Load GraphDef from a binary proto file.""" graph_def = load(model_url) if reset_device: for n in graph_def.node: n.device = "" return graph_def
def get_header(self, service_id, version_number, name): """Retrieves a Header object by name.""" content = self._fetch("/service/%s/version/%d/header/%s" % (service_id, version_number, name)) return FastlyHeader(self, content)
Retrieves a Header object by name.
Below is the the instruction that describes the task: ### Input: Retrieves a Header object by name. ### Response: def get_header(self, service_id, version_number, name): """Retrieves a Header object by name.""" content = self._fetch("/service/%s/version/%d/header/%s" % (service_id, version_number, name)) return FastlyHeader(self, content)
def _write_newick_internal_label(out, node_id, node, otu_group, label_key, needs_quotes_pattern): """`label_key` is a string (a key in the otu object) or a callable that takes two arguments: the node, and the otu (which may be None for an internal node) If `leaf_labels` is not None, it shoulr be a (list, dict) pair which will be filled. The list will hold the order encountered, and the dict will map name to index in the list """ otu_id = node.get('@otu') if is_str_type(label_key): if otu_id is None: return otu = otu_group[otu_id] label = otu.get(label_key) else: label = label_key(node_id, node, None) if label is not None: label = quote_newick_name(label, needs_quotes_pattern) out.write(label)
`label_key` is a string (a key in the otu object) or a callable that takes two arguments: the node, and the otu (which may be None for an internal node) If `leaf_labels` is not None, it shoulr be a (list, dict) pair which will be filled. The list will hold the order encountered, and the dict will map name to index in the list
Below is the the instruction that describes the task: ### Input: `label_key` is a string (a key in the otu object) or a callable that takes two arguments: the node, and the otu (which may be None for an internal node) If `leaf_labels` is not None, it shoulr be a (list, dict) pair which will be filled. The list will hold the order encountered, and the dict will map name to index in the list ### Response: def _write_newick_internal_label(out, node_id, node, otu_group, label_key, needs_quotes_pattern): """`label_key` is a string (a key in the otu object) or a callable that takes two arguments: the node, and the otu (which may be None for an internal node) If `leaf_labels` is not None, it shoulr be a (list, dict) pair which will be filled. The list will hold the order encountered, and the dict will map name to index in the list """ otu_id = node.get('@otu') if is_str_type(label_key): if otu_id is None: return otu = otu_group[otu_id] label = otu.get(label_key) else: label = label_key(node_id, node, None) if label is not None: label = quote_newick_name(label, needs_quotes_pattern) out.write(label)
def ohlcDF(symbol, token='', version=''): '''Returns the official open and close for a give symbol. https://iexcloud.io/docs/api/#news 9:30am-5pm ET Mon-Fri Args: symbol (string); Ticker to request token (string); Access token version (string); API version Returns: DataFrame: result ''' o = ohlc(symbol, token, version) if o: df = pd.io.json.json_normalize(o) _toDatetime(df) else: df = pd.DataFrame() return df
Returns the official open and close for a give symbol. https://iexcloud.io/docs/api/#news 9:30am-5pm ET Mon-Fri Args: symbol (string); Ticker to request token (string); Access token version (string); API version Returns: DataFrame: result
Below is the the instruction that describes the task: ### Input: Returns the official open and close for a give symbol. https://iexcloud.io/docs/api/#news 9:30am-5pm ET Mon-Fri Args: symbol (string); Ticker to request token (string); Access token version (string); API version Returns: DataFrame: result ### Response: def ohlcDF(symbol, token='', version=''): '''Returns the official open and close for a give symbol. https://iexcloud.io/docs/api/#news 9:30am-5pm ET Mon-Fri Args: symbol (string); Ticker to request token (string); Access token version (string); API version Returns: DataFrame: result ''' o = ohlc(symbol, token, version) if o: df = pd.io.json.json_normalize(o) _toDatetime(df) else: df = pd.DataFrame() return df
def energy_minimize(dirname='em', mdp=config.templates['em.mdp'], struct='solvate/ionized.gro', top='top/system.top', output='em.pdb', deffnm="em", mdrunner=None, mdrun_args=None, **kwargs): """Energy minimize the system. This sets up the system (creates run input files) and also runs ``mdrun_d``. Thus it can take a while. Additional itp files should be in the same directory as the top file. Many of the keyword arguments below already have sensible values. :Keywords: *dirname* set up under directory dirname [em] *struct* input structure (gro, pdb, ...) [solvate/ionized.gro] *output* output structure (will be put under dirname) [em.pdb] *deffnm* default name for mdrun-related files [em] *top* topology file [top/system.top] *mdp* mdp file (or use the template) [templates/em.mdp] *includes* additional directories to search for itp files *mdrunner* :class:`gromacs.run.MDrunner` instance; by default we just try :func:`gromacs.mdrun_d` and :func:`gromacs.mdrun` but a MDrunner instance gives the user the ability to run mpi jobs etc. [None] *mdrun_args* arguments for *mdrunner* (as a dict), e.g. ``{'nt': 2}``; empty by default .. versionaddedd:: 0.7.0 *kwargs* remaining key/value pairs that should be changed in the template mdp file, eg ``nstxtcout=250, nstfout=250``. .. note:: If :func:`~gromacs.mdrun_d` is not found, the function falls back to :func:`~gromacs.mdrun` instead. """ structure = realpath(struct) topology = realpath(top) mdp_template = config.get_template(mdp) deffnm = deffnm.strip() mdrun_args = {} if mdrun_args is None else mdrun_args # write the processed topology to the default output kwargs.setdefault('pp', 'processed.top') # filter some kwargs that might come through when feeding output # from previous stages such as solvate(); necessary because *all* # **kwargs must be *either* substitutions in the mdp file *or* valid # command line parameters for ``grompp``. kwargs.pop('ndx', None) # mainselection is not used but only passed through; right now we # set it to the default that is being used in all argument lists # but that is not pretty. TODO. mainselection = kwargs.pop('mainselection', '"Protein"') # only interesting when passed from solvate() qtot = kwargs.pop('qtot', 0) # mdp is now the *output* MDP that will be generated from mdp_template mdp = deffnm+'.mdp' tpr = deffnm+'.tpr' logger.info("[{dirname!s}] Energy minimization of struct={struct!r}, top={top!r}, mdp={mdp!r} ...".format(**vars())) cbook.add_mdp_includes(topology, kwargs) if qtot != 0: # At the moment this is purely user-reported and really only here because # it might get fed into the function when using the keyword-expansion pipeline # usage paradigm. wmsg = "Total charge was reported as qtot = {qtot:g} <> 0; probably a problem.".format(**vars()) logger.warn(wmsg) warnings.warn(wmsg, category=BadParameterWarning) with in_dir(dirname): unprocessed = cbook.edit_mdp(mdp_template, new_mdp=mdp, **kwargs) check_mdpargs(unprocessed) gromacs.grompp(f=mdp, o=tpr, c=structure, r=structure, p=topology, **unprocessed) mdrun_args.update(v=True, stepout=10, deffnm=deffnm, c=output) if mdrunner is None: mdrun = run.get_double_or_single_prec_mdrun() mdrun(**mdrun_args) else: if type(mdrunner) is type: # class # user wants full control and provides simulation.MDrunner **class** # NO CHECKING --- in principle user can supply any callback they like mdrun = mdrunner(**mdrun_args) mdrun.run() else: # anything with a run() method that takes mdrun arguments... 
try: mdrunner.run(mdrunargs=mdrun_args) except AttributeError: logger.error("mdrunner: Provide a gromacs.run.MDrunner class or instance or a callback with a run() method") raise TypeError("mdrunner: Provide a gromacs.run.MDrunner class or instance or a callback with a run() method") # em.gro --> gives 'Bad box in file em.gro' warning --- why?? # --> use em.pdb instead. if not os.path.exists(output): errmsg = "Energy minimized system NOT produced." logger.error(errmsg) raise GromacsError(errmsg) final_struct = realpath(output) logger.info("[{dirname!s}] energy minimized structure {final_struct!r}".format(**vars())) return {'struct': final_struct, 'top': topology, 'mainselection': mainselection, }
Energy minimize the system. This sets up the system (creates run input files) and also runs ``mdrun_d``. Thus it can take a while. Additional itp files should be in the same directory as the top file. Many of the keyword arguments below already have sensible values. :Keywords: *dirname* set up under directory dirname [em] *struct* input structure (gro, pdb, ...) [solvate/ionized.gro] *output* output structure (will be put under dirname) [em.pdb] *deffnm* default name for mdrun-related files [em] *top* topology file [top/system.top] *mdp* mdp file (or use the template) [templates/em.mdp] *includes* additional directories to search for itp files *mdrunner* :class:`gromacs.run.MDrunner` instance; by default we just try :func:`gromacs.mdrun_d` and :func:`gromacs.mdrun` but a MDrunner instance gives the user the ability to run mpi jobs etc. [None] *mdrun_args* arguments for *mdrunner* (as a dict), e.g. ``{'nt': 2}``; empty by default .. versionaddedd:: 0.7.0 *kwargs* remaining key/value pairs that should be changed in the template mdp file, eg ``nstxtcout=250, nstfout=250``. .. note:: If :func:`~gromacs.mdrun_d` is not found, the function falls back to :func:`~gromacs.mdrun` instead.
Below is the the instruction that describes the task: ### Input: Energy minimize the system. This sets up the system (creates run input files) and also runs ``mdrun_d``. Thus it can take a while. Additional itp files should be in the same directory as the top file. Many of the keyword arguments below already have sensible values. :Keywords: *dirname* set up under directory dirname [em] *struct* input structure (gro, pdb, ...) [solvate/ionized.gro] *output* output structure (will be put under dirname) [em.pdb] *deffnm* default name for mdrun-related files [em] *top* topology file [top/system.top] *mdp* mdp file (or use the template) [templates/em.mdp] *includes* additional directories to search for itp files *mdrunner* :class:`gromacs.run.MDrunner` instance; by default we just try :func:`gromacs.mdrun_d` and :func:`gromacs.mdrun` but a MDrunner instance gives the user the ability to run mpi jobs etc. [None] *mdrun_args* arguments for *mdrunner* (as a dict), e.g. ``{'nt': 2}``; empty by default .. versionaddedd:: 0.7.0 *kwargs* remaining key/value pairs that should be changed in the template mdp file, eg ``nstxtcout=250, nstfout=250``. .. note:: If :func:`~gromacs.mdrun_d` is not found, the function falls back to :func:`~gromacs.mdrun` instead. ### Response: def energy_minimize(dirname='em', mdp=config.templates['em.mdp'], struct='solvate/ionized.gro', top='top/system.top', output='em.pdb', deffnm="em", mdrunner=None, mdrun_args=None, **kwargs): """Energy minimize the system. This sets up the system (creates run input files) and also runs ``mdrun_d``. Thus it can take a while. Additional itp files should be in the same directory as the top file. Many of the keyword arguments below already have sensible values. :Keywords: *dirname* set up under directory dirname [em] *struct* input structure (gro, pdb, ...) [solvate/ionized.gro] *output* output structure (will be put under dirname) [em.pdb] *deffnm* default name for mdrun-related files [em] *top* topology file [top/system.top] *mdp* mdp file (or use the template) [templates/em.mdp] *includes* additional directories to search for itp files *mdrunner* :class:`gromacs.run.MDrunner` instance; by default we just try :func:`gromacs.mdrun_d` and :func:`gromacs.mdrun` but a MDrunner instance gives the user the ability to run mpi jobs etc. [None] *mdrun_args* arguments for *mdrunner* (as a dict), e.g. ``{'nt': 2}``; empty by default .. versionaddedd:: 0.7.0 *kwargs* remaining key/value pairs that should be changed in the template mdp file, eg ``nstxtcout=250, nstfout=250``. .. note:: If :func:`~gromacs.mdrun_d` is not found, the function falls back to :func:`~gromacs.mdrun` instead. """ structure = realpath(struct) topology = realpath(top) mdp_template = config.get_template(mdp) deffnm = deffnm.strip() mdrun_args = {} if mdrun_args is None else mdrun_args # write the processed topology to the default output kwargs.setdefault('pp', 'processed.top') # filter some kwargs that might come through when feeding output # from previous stages such as solvate(); necessary because *all* # **kwargs must be *either* substitutions in the mdp file *or* valid # command line parameters for ``grompp``. kwargs.pop('ndx', None) # mainselection is not used but only passed through; right now we # set it to the default that is being used in all argument lists # but that is not pretty. TODO. mainselection = kwargs.pop('mainselection', '"Protein"') # only interesting when passed from solvate() qtot = kwargs.pop('qtot', 0) # mdp is now the *output* MDP that will be generated from mdp_template mdp = deffnm+'.mdp' tpr = deffnm+'.tpr' logger.info("[{dirname!s}] Energy minimization of struct={struct!r}, top={top!r}, mdp={mdp!r} ...".format(**vars())) cbook.add_mdp_includes(topology, kwargs) if qtot != 0: # At the moment this is purely user-reported and really only here because # it might get fed into the function when using the keyword-expansion pipeline # usage paradigm. wmsg = "Total charge was reported as qtot = {qtot:g} <> 0; probably a problem.".format(**vars()) logger.warn(wmsg) warnings.warn(wmsg, category=BadParameterWarning) with in_dir(dirname): unprocessed = cbook.edit_mdp(mdp_template, new_mdp=mdp, **kwargs) check_mdpargs(unprocessed) gromacs.grompp(f=mdp, o=tpr, c=structure, r=structure, p=topology, **unprocessed) mdrun_args.update(v=True, stepout=10, deffnm=deffnm, c=output) if mdrunner is None: mdrun = run.get_double_or_single_prec_mdrun() mdrun(**mdrun_args) else: if type(mdrunner) is type: # class # user wants full control and provides simulation.MDrunner **class** # NO CHECKING --- in principle user can supply any callback they like mdrun = mdrunner(**mdrun_args) mdrun.run() else: # anything with a run() method that takes mdrun arguments... try: mdrunner.run(mdrunargs=mdrun_args) except AttributeError: logger.error("mdrunner: Provide a gromacs.run.MDrunner class or instance or a callback with a run() method") raise TypeError("mdrunner: Provide a gromacs.run.MDrunner class or instance or a callback with a run() method") # em.gro --> gives 'Bad box in file em.gro' warning --- why?? # --> use em.pdb instead. if not os.path.exists(output): errmsg = "Energy minimized system NOT produced." logger.error(errmsg) raise GromacsError(errmsg) final_struct = realpath(output) logger.info("[{dirname!s}] energy minimized structure {final_struct!r}".format(**vars())) return {'struct': final_struct, 'top': topology, 'mainselection': mainselection, }
def main(args=None): """Script body.""" if args is None: # parse command-line arguments parser = get_argument_parser() args = parser.parse_args() fasta_file = args.fasta_file species = args.species chrom_pat = args.chromosome_pattern output_file = args.output_file log_file = args.log_file quiet = args.quiet verbose = args.verbose # configure root logger log_stream = sys.stdout if output_file == '-': # if we print output to stdout, redirect log messages to stderr log_stream = sys.stderr logger = misc.get_logger(log_stream=log_stream, log_file=log_file, quiet=quiet, verbose=verbose) # generate regular expression object from the chromosome pattern if chrom_pat is None: chrom_pat = ensembl.SPECIES_CHROMPAT[species] chrom_re = re.compile(chrom_pat) # filter the FASTA file # note: each chromosome sequence is temporarily read into memory, # so this script has a large memory footprint with \ misc.smart_open_read( fasta_file, mode='r', encoding='ascii', try_gzip=True ) as fh, \ misc.smart_open_write( output_file, mode='w', encoding='ascii' ) as ofh: # inside = False reader = FastaReader(fh) for seq in reader: chrom = seq.name.split(' ', 1)[0] if chrom_re.match(chrom) is None: logger.info('Ignoring chromosome "%s"...', chrom) continue seq.name = chrom seq.append_fasta(ofh) return 0
Script body.
Below is the the instruction that describes the task: ### Input: Script body. ### Response: def main(args=None): """Script body.""" if args is None: # parse command-line arguments parser = get_argument_parser() args = parser.parse_args() fasta_file = args.fasta_file species = args.species chrom_pat = args.chromosome_pattern output_file = args.output_file log_file = args.log_file quiet = args.quiet verbose = args.verbose # configure root logger log_stream = sys.stdout if output_file == '-': # if we print output to stdout, redirect log messages to stderr log_stream = sys.stderr logger = misc.get_logger(log_stream=log_stream, log_file=log_file, quiet=quiet, verbose=verbose) # generate regular expression object from the chromosome pattern if chrom_pat is None: chrom_pat = ensembl.SPECIES_CHROMPAT[species] chrom_re = re.compile(chrom_pat) # filter the FASTA file # note: each chromosome sequence is temporarily read into memory, # so this script has a large memory footprint with \ misc.smart_open_read( fasta_file, mode='r', encoding='ascii', try_gzip=True ) as fh, \ misc.smart_open_write( output_file, mode='w', encoding='ascii' ) as ofh: # inside = False reader = FastaReader(fh) for seq in reader: chrom = seq.name.split(' ', 1)[0] if chrom_re.match(chrom) is None: logger.info('Ignoring chromosome "%s"...', chrom) continue seq.name = chrom seq.append_fasta(ofh) return 0
def auth_view(name, **kwargs): """ Shows an authorization group's content. """ ctx = Context(**kwargs) ctx.execute_action('auth:group:view', **{ 'storage': ctx.repo.create_secure_service('storage'), 'name': name, })
Shows an authorization group's content.
Below is the the instruction that describes the task: ### Input: Shows an authorization group's content. ### Response: def auth_view(name, **kwargs): """ Shows an authorization group's content. """ ctx = Context(**kwargs) ctx.execute_action('auth:group:view', **{ 'storage': ctx.repo.create_secure_service('storage'), 'name': name, })
def merge_or_link(self, input_args, raw_folder, local_base="sample"): """ This function standardizes various input possibilities by converting either .bam, .fastq, or .fastq.gz files into a local file; merging those if multiple files given. :param list input_args: This is a list of arguments, each one is a class of inputs (which can in turn be a string or a list). Typically, input_args is a list with 2 elements: first a list of read1 files; second an (optional!) list of read2 files. :param str raw_folder: Name/path of folder for the merge/link. :param str local_base: Usually the sample name. This (plus file extension) will be the name of the local file linked (or merged) by this function. """ self.make_sure_path_exists(raw_folder) if not isinstance(input_args, list): raise Exception("Input must be a list") if any(isinstance(i, list) for i in input_args): # We have a list of lists. Process each individually. local_input_files = list() n_input_files = len(filter(bool, input_args)) print("Number of input file sets:\t\t" + str(n_input_files)) for input_i, input_arg in enumerate(input_args): # Count how many non-null items there are in the list; # we only append _R1 (etc.) if there are multiple input files. if n_input_files > 1: local_base_extended = local_base + "_R" + str(input_i + 1) else: local_base_extended = local_base if input_arg: out = self.merge_or_link( input_arg, raw_folder, local_base_extended) print("Local input file: '{}'".format(out)) # Make sure file exists: if not os.path.isfile(out): print("Not a file: '{}'".format(out)) local_input_files.append(out) return local_input_files else: # We have a list of individual arguments. Merge them. if len(input_args) == 1: # Only one argument in this list. A single input file; we just link # it, regardless of file type: # Pull the value out of the list input_arg = input_args[0] input_ext = self.get_input_ext(input_arg) # Convert to absolute path if not os.path.isabs(input_arg): input_arg = os.path.abspath(input_arg) # Link it to into the raw folder local_input_abs = os.path.join(raw_folder, local_base + input_ext) self.pm.run( "ln -sf " + input_arg + " " + local_input_abs, target=local_input_abs, shell=True) # return the local (linked) filename absolute path return local_input_abs else: # Otherwise, there are multiple inputs. # If more than 1 input file is given, then these are to be merged # if they are in bam format. if all([self.get_input_ext(x) == ".bam" for x in input_args]): sample_merged = local_base + ".merged.bam" output_merge = os.path.join(raw_folder, sample_merged) cmd = self.merge_bams(input_args, output_merge) self.pm.run(cmd, output_merge) cmd2 = self.validate_bam(output_merge) self.pm.run(cmd, output_merge, nofail=True) return output_merge # if multiple fastq if all([self.get_input_ext(x) == ".fastq.gz" for x in input_args]): sample_merged_gz = local_base + ".merged.fastq.gz" output_merge_gz = os.path.join(raw_folder, sample_merged_gz) #cmd1 = self.ziptool + "-d -c " + " ".join(input_args) + " > " + output_merge #cmd2 = self.ziptool + " " + output_merge #self.pm.run([cmd1, cmd2], output_merge_gz) # you can save yourself the decompression/recompression: cmd = "cat " + " ".join(input_args) + " > " + output_merge_gz self.pm.run(cmd, output_merge_gz) return output_merge_gz if all([self.get_input_ext(x) == ".fastq" for x in input_args]): sample_merged = local_base + ".merged.fastq" output_merge = os.path.join(raw_folder, sample_merged) cmd = "cat " + " ".join(input_args) + " > " + output_merge self.pm.run(cmd, output_merge) return output_merge # At this point, we don't recognize the input file types or they # do not match. raise NotImplementedError( "Input files must be of the same type, and can only " "merge bam or fastq.")
This function standardizes various input possibilities by converting either .bam, .fastq, or .fastq.gz files into a local file; merging those if multiple files given. :param list input_args: This is a list of arguments, each one is a class of inputs (which can in turn be a string or a list). Typically, input_args is a list with 2 elements: first a list of read1 files; second an (optional!) list of read2 files. :param str raw_folder: Name/path of folder for the merge/link. :param str local_base: Usually the sample name. This (plus file extension) will be the name of the local file linked (or merged) by this function.
Below is the the instruction that describes the task: ### Input: This function standardizes various input possibilities by converting either .bam, .fastq, or .fastq.gz files into a local file; merging those if multiple files given. :param list input_args: This is a list of arguments, each one is a class of inputs (which can in turn be a string or a list). Typically, input_args is a list with 2 elements: first a list of read1 files; second an (optional!) list of read2 files. :param str raw_folder: Name/path of folder for the merge/link. :param str local_base: Usually the sample name. This (plus file extension) will be the name of the local file linked (or merged) by this function. ### Response: def merge_or_link(self, input_args, raw_folder, local_base="sample"): """ This function standardizes various input possibilities by converting either .bam, .fastq, or .fastq.gz files into a local file; merging those if multiple files given. :param list input_args: This is a list of arguments, each one is a class of inputs (which can in turn be a string or a list). Typically, input_args is a list with 2 elements: first a list of read1 files; second an (optional!) list of read2 files. :param str raw_folder: Name/path of folder for the merge/link. :param str local_base: Usually the sample name. This (plus file extension) will be the name of the local file linked (or merged) by this function. """ self.make_sure_path_exists(raw_folder) if not isinstance(input_args, list): raise Exception("Input must be a list") if any(isinstance(i, list) for i in input_args): # We have a list of lists. Process each individually. local_input_files = list() n_input_files = len(filter(bool, input_args)) print("Number of input file sets:\t\t" + str(n_input_files)) for input_i, input_arg in enumerate(input_args): # Count how many non-null items there are in the list; # we only append _R1 (etc.) if there are multiple input files. if n_input_files > 1: local_base_extended = local_base + "_R" + str(input_i + 1) else: local_base_extended = local_base if input_arg: out = self.merge_or_link( input_arg, raw_folder, local_base_extended) print("Local input file: '{}'".format(out)) # Make sure file exists: if not os.path.isfile(out): print("Not a file: '{}'".format(out)) local_input_files.append(out) return local_input_files else: # We have a list of individual arguments. Merge them. if len(input_args) == 1: # Only one argument in this list. A single input file; we just link # it, regardless of file type: # Pull the value out of the list input_arg = input_args[0] input_ext = self.get_input_ext(input_arg) # Convert to absolute path if not os.path.isabs(input_arg): input_arg = os.path.abspath(input_arg) # Link it to into the raw folder local_input_abs = os.path.join(raw_folder, local_base + input_ext) self.pm.run( "ln -sf " + input_arg + " " + local_input_abs, target=local_input_abs, shell=True) # return the local (linked) filename absolute path return local_input_abs else: # Otherwise, there are multiple inputs. # If more than 1 input file is given, then these are to be merged # if they are in bam format. if all([self.get_input_ext(x) == ".bam" for x in input_args]): sample_merged = local_base + ".merged.bam" output_merge = os.path.join(raw_folder, sample_merged) cmd = self.merge_bams(input_args, output_merge) self.pm.run(cmd, output_merge) cmd2 = self.validate_bam(output_merge) self.pm.run(cmd, output_merge, nofail=True) return output_merge # if multiple fastq if all([self.get_input_ext(x) == ".fastq.gz" for x in input_args]): sample_merged_gz = local_base + ".merged.fastq.gz" output_merge_gz = os.path.join(raw_folder, sample_merged_gz) #cmd1 = self.ziptool + "-d -c " + " ".join(input_args) + " > " + output_merge #cmd2 = self.ziptool + " " + output_merge #self.pm.run([cmd1, cmd2], output_merge_gz) # you can save yourself the decompression/recompression: cmd = "cat " + " ".join(input_args) + " > " + output_merge_gz self.pm.run(cmd, output_merge_gz) return output_merge_gz if all([self.get_input_ext(x) == ".fastq" for x in input_args]): sample_merged = local_base + ".merged.fastq" output_merge = os.path.join(raw_folder, sample_merged) cmd = "cat " + " ".join(input_args) + " > " + output_merge self.pm.run(cmd, output_merge) return output_merge # At this point, we don't recognize the input file types or they # do not match. raise NotImplementedError( "Input files must be of the same type, and can only " "merge bam or fastq.")
def s_find_first(pred, first, lst): """Evaluate `first`; if predicate `pred` succeeds on the result of `first`, return the result; otherwise recur on the first element of `lst`. :param pred: a predicate. :param first: a promise. :param lst: a list of quoted promises. :return: the first element for which predicate is true.""" if pred(first): return first elif lst: return s_find_first(pred, unquote(lst[0]), lst[1:]) else: return None
Evaluate `first`; if predicate `pred` succeeds on the result of `first`, return the result; otherwise recur on the first element of `lst`. :param pred: a predicate. :param first: a promise. :param lst: a list of quoted promises. :return: the first element for which predicate is true.
Below is the the instruction that describes the task: ### Input: Evaluate `first`; if predicate `pred` succeeds on the result of `first`, return the result; otherwise recur on the first element of `lst`. :param pred: a predicate. :param first: a promise. :param lst: a list of quoted promises. :return: the first element for which predicate is true. ### Response: def s_find_first(pred, first, lst): """Evaluate `first`; if predicate `pred` succeeds on the result of `first`, return the result; otherwise recur on the first element of `lst`. :param pred: a predicate. :param first: a promise. :param lst: a list of quoted promises. :return: the first element for which predicate is true.""" if pred(first): return first elif lst: return s_find_first(pred, unquote(lst[0]), lst[1:]) else: return None
def medium_integer(self, column, auto_increment=False, unsigned=False): """ Create a new medium integer column on the table. :param column: The column :type column: str :type auto_increment: bool :type unsigned: bool :rtype: Fluent """ return self._add_column('medium_integer', column, auto_increment=auto_increment, unsigned=unsigned)
Create a new medium integer column on the table. :param column: The column :type column: str :type auto_increment: bool :type unsigned: bool :rtype: Fluent
Below is the the instruction that describes the task: ### Input: Create a new medium integer column on the table. :param column: The column :type column: str :type auto_increment: bool :type unsigned: bool :rtype: Fluent ### Response: def medium_integer(self, column, auto_increment=False, unsigned=False): """ Create a new medium integer column on the table. :param column: The column :type column: str :type auto_increment: bool :type unsigned: bool :rtype: Fluent """ return self._add_column('medium_integer', column, auto_increment=auto_increment, unsigned=unsigned)
def Uniform(cls, low: 'TensorFluent', high: 'TensorFluent', batch_size: Optional[int] = None) -> Tuple[Distribution, 'TensorFluent']: '''Returns a TensorFluent for the Uniform sampling op with given low and high parameters. Args: low: The low parameter of the Uniform distribution. high: The high parameter of the Uniform distribution. batch_size: The size of the batch (optional). Returns: The Uniform distribution and a TensorFluent sample drawn from the distribution. Raises: ValueError: If parameters do not have the same scope. ''' if low.scope != high.scope: raise ValueError('Uniform distribution: parameters must have same scope!') dist = tf.distributions.Uniform(low.tensor, high.tensor) batch = low.batch or high.batch if not batch and batch_size is not None: t = dist.sample(batch_size) batch = True else: t = dist.sample() scope = low.scope.as_list() return (dist, TensorFluent(t, scope, batch=batch))
Returns a TensorFluent for the Uniform sampling op with given low and high parameters. Args: low: The low parameter of the Uniform distribution. high: The high parameter of the Uniform distribution. batch_size: The size of the batch (optional). Returns: The Uniform distribution and a TensorFluent sample drawn from the distribution. Raises: ValueError: If parameters do not have the same scope.
Below is the the instruction that describes the task: ### Input: Returns a TensorFluent for the Uniform sampling op with given low and high parameters. Args: low: The low parameter of the Uniform distribution. high: The high parameter of the Uniform distribution. batch_size: The size of the batch (optional). Returns: The Uniform distribution and a TensorFluent sample drawn from the distribution. Raises: ValueError: If parameters do not have the same scope. ### Response: def Uniform(cls, low: 'TensorFluent', high: 'TensorFluent', batch_size: Optional[int] = None) -> Tuple[Distribution, 'TensorFluent']: '''Returns a TensorFluent for the Uniform sampling op with given low and high parameters. Args: low: The low parameter of the Uniform distribution. high: The high parameter of the Uniform distribution. batch_size: The size of the batch (optional). Returns: The Uniform distribution and a TensorFluent sample drawn from the distribution. Raises: ValueError: If parameters do not have the same scope. ''' if low.scope != high.scope: raise ValueError('Uniform distribution: parameters must have same scope!') dist = tf.distributions.Uniform(low.tensor, high.tensor) batch = low.batch or high.batch if not batch and batch_size is not None: t = dist.sample(batch_size) batch = True else: t = dist.sample() scope = low.scope.as_list() return (dist, TensorFluent(t, scope, batch=batch))
def log(logger, level, message): """Logs message to stderr if logging isn't initialized.""" if logger.parent.name != 'root': logger.log(level, message) else: print(message, file=sys.stderr)
Logs message to stderr if logging isn't initialized.
Below is the the instruction that describes the task: ### Input: Logs message to stderr if logging isn't initialized. ### Response: def log(logger, level, message): """Logs message to stderr if logging isn't initialized.""" if logger.parent.name != 'root': logger.log(level, message) else: print(message, file=sys.stderr)
def _add_vxr_levels_r(self, f, vxrhead, numVXRs): ''' Build a new level of VXRs... make VXRs more tree-like From: VXR1 -> VXR2 -> VXR3 -> VXR4 -> ... -> VXRn To: new VXR1 / | \ VXR2 VXR3 VXR4 / | \ ... VXR5 .......... VXRn Parameters: f : file The open CDF file vxrhead : int The byte location of the first VXR for a variable numVXRs : int The total number of VXRs Returns: newVXRhead : int The byte location of the newest VXR head newvxroff : int The byte location of the last VXR head ''' newNumVXRs = int(numVXRs / CDF.NUM_VXRlvl_ENTRIES) remaining = int(numVXRs % CDF.NUM_VXRlvl_ENTRIES) vxroff = vxrhead prevxroff = -1 if (remaining != 0): newNumVXRs += 1 CDF.level += 1 for x in range(0, newNumVXRs): newvxroff = self._write_vxr(f, numEntries=CDF.NUM_VXRlvl_ENTRIES) if (x > 0): self._update_offset_value(f, prevxroff+12, 8, newvxroff) else: newvxrhead = newvxroff prevxroff = newvxroff if (x == (newNumVXRs - 1)): if (remaining == 0): endEntry = CDF.NUM_VXRlvl_ENTRIES else: endEntry = remaining else: endEntry = CDF.NUM_VXRlvl_ENTRIES for _ in range(0, endEntry): recFirst, recLast = self._get_recrange(f, vxroff) self._use_vxrentry(f, newvxroff, recFirst, recLast, vxroff) vxroff = self._read_offset_value(f, vxroff+12, 8) vxroff = vxrhead # Break the horizontal links for x in range(0, numVXRs): nvxroff = self._read_offset_value(f, vxroff+12, 8) self._update_offset_value(f, vxroff+12, 8, 0) vxroff = nvxroff # Iterate this process if we're over NUM_VXRlvl_ENTRIES if (newNumVXRs > CDF.NUM_VXRlvl_ENTRIES): return self._add_vxr_levels_r(f, newvxrhead, newNumVXRs) else: return newvxrhead, newvxroff
Build a new level of VXRs... make VXRs more tree-like From: VXR1 -> VXR2 -> VXR3 -> VXR4 -> ... -> VXRn To: new VXR1 / | \ VXR2 VXR3 VXR4 / | \ ... VXR5 .......... VXRn Parameters: f : file The open CDF file vxrhead : int The byte location of the first VXR for a variable numVXRs : int The total number of VXRs Returns: newVXRhead : int The byte location of the newest VXR head newvxroff : int The byte location of the last VXR head
Below is the the instruction that describes the task: ### Input: Build a new level of VXRs... make VXRs more tree-like From: VXR1 -> VXR2 -> VXR3 -> VXR4 -> ... -> VXRn To: new VXR1 / | \ VXR2 VXR3 VXR4 / | \ ... VXR5 .......... VXRn Parameters: f : file The open CDF file vxrhead : int The byte location of the first VXR for a variable numVXRs : int The total number of VXRs Returns: newVXRhead : int The byte location of the newest VXR head newvxroff : int The byte location of the last VXR head ### Response: def _add_vxr_levels_r(self, f, vxrhead, numVXRs): ''' Build a new level of VXRs... make VXRs more tree-like From: VXR1 -> VXR2 -> VXR3 -> VXR4 -> ... -> VXRn To: new VXR1 / | \ VXR2 VXR3 VXR4 / | \ ... VXR5 .......... VXRn Parameters: f : file The open CDF file vxrhead : int The byte location of the first VXR for a variable numVXRs : int The total number of VXRs Returns: newVXRhead : int The byte location of the newest VXR head newvxroff : int The byte location of the last VXR head ''' newNumVXRs = int(numVXRs / CDF.NUM_VXRlvl_ENTRIES) remaining = int(numVXRs % CDF.NUM_VXRlvl_ENTRIES) vxroff = vxrhead prevxroff = -1 if (remaining != 0): newNumVXRs += 1 CDF.level += 1 for x in range(0, newNumVXRs): newvxroff = self._write_vxr(f, numEntries=CDF.NUM_VXRlvl_ENTRIES) if (x > 0): self._update_offset_value(f, prevxroff+12, 8, newvxroff) else: newvxrhead = newvxroff prevxroff = newvxroff if (x == (newNumVXRs - 1)): if (remaining == 0): endEntry = CDF.NUM_VXRlvl_ENTRIES else: endEntry = remaining else: endEntry = CDF.NUM_VXRlvl_ENTRIES for _ in range(0, endEntry): recFirst, recLast = self._get_recrange(f, vxroff) self._use_vxrentry(f, newvxroff, recFirst, recLast, vxroff) vxroff = self._read_offset_value(f, vxroff+12, 8) vxroff = vxrhead # Break the horizontal links for x in range(0, numVXRs): nvxroff = self._read_offset_value(f, vxroff+12, 8) self._update_offset_value(f, vxroff+12, 8, 0) vxroff = nvxroff # Iterate this process if we're over NUM_VXRlvl_ENTRIES if (newNumVXRs > CDF.NUM_VXRlvl_ENTRIES): return self._add_vxr_levels_r(f, newvxrhead, newNumVXRs) else: return newvxrhead, newvxroff
def track(cls, obj, ptr): """ Track an object which needs destruction when it is garbage collected. """ cls._objects.add(cls(obj, ptr))
Track an object which needs destruction when it is garbage collected.
Below is the the instruction that describes the task: ### Input: Track an object which needs destruction when it is garbage collected. ### Response: def track(cls, obj, ptr): """ Track an object which needs destruction when it is garbage collected. """ cls._objects.add(cls(obj, ptr))
def cli(env, prop): """Find details about this machine.""" try: if prop == 'network': env.fout(get_network()) return meta_prop = META_MAPPING.get(prop) or prop env.fout(SoftLayer.MetadataManager().get(meta_prop)) except SoftLayer.TransportError: raise exceptions.CLIAbort( 'Cannot connect to the backend service address. Make sure ' 'this command is being ran from a device on the backend ' 'network.')
Find details about this machine.
Below is the the instruction that describes the task: ### Input: Find details about this machine. ### Response: def cli(env, prop): """Find details about this machine.""" try: if prop == 'network': env.fout(get_network()) return meta_prop = META_MAPPING.get(prop) or prop env.fout(SoftLayer.MetadataManager().get(meta_prop)) except SoftLayer.TransportError: raise exceptions.CLIAbort( 'Cannot connect to the backend service address. Make sure ' 'this command is being ran from a device on the backend ' 'network.')
def last_modified_version(self, **kwargs): """ Get the last modified version """ self.items(**kwargs) return int(self.request.headers.get("last-modified-version", 0))
Get the last modified version
Below is the the instruction that describes the task: ### Input: Get the last modified version ### Response: def last_modified_version(self, **kwargs): """ Get the last modified version """ self.items(**kwargs) return int(self.request.headers.get("last-modified-version", 0))
def create_physical_relationship(manager, physical_handle_id, other_handle_id, rel_type): """ Makes relationship between the two nodes and returns the relationship. If a relationship is not possible NoRelationshipPossible exception is raised. """ other_meta_type = get_node_meta_type(manager, other_handle_id) if other_meta_type == 'Physical': if rel_type == 'Has' or rel_type == 'Connected_to': return _create_relationship(manager, physical_handle_id, other_handle_id, rel_type) elif other_meta_type == 'Location' and rel_type == 'Located_in': return _create_relationship(manager, physical_handle_id, other_handle_id, rel_type) raise exceptions.NoRelationshipPossible(physical_handle_id, 'Physical', other_handle_id, other_meta_type, rel_type)
Makes relationship between the two nodes and returns the relationship. If a relationship is not possible NoRelationshipPossible exception is raised.
Below is the the instruction that describes the task: ### Input: Makes relationship between the two nodes and returns the relationship. If a relationship is not possible NoRelationshipPossible exception is raised. ### Response: def create_physical_relationship(manager, physical_handle_id, other_handle_id, rel_type): """ Makes relationship between the two nodes and returns the relationship. If a relationship is not possible NoRelationshipPossible exception is raised. """ other_meta_type = get_node_meta_type(manager, other_handle_id) if other_meta_type == 'Physical': if rel_type == 'Has' or rel_type == 'Connected_to': return _create_relationship(manager, physical_handle_id, other_handle_id, rel_type) elif other_meta_type == 'Location' and rel_type == 'Located_in': return _create_relationship(manager, physical_handle_id, other_handle_id, rel_type) raise exceptions.NoRelationshipPossible(physical_handle_id, 'Physical', other_handle_id, other_meta_type, rel_type)
def getApplicationSupportedMimeTypes(self, pchAppKey, pchMimeTypesBuffer, unMimeTypesBuffer): """Get the list of supported mime types for this application, comma-delimited""" fn = self.function_table.getApplicationSupportedMimeTypes result = fn(pchAppKey, pchMimeTypesBuffer, unMimeTypesBuffer) return result
Get the list of supported mime types for this application, comma-delimited
Below is the the instruction that describes the task: ### Input: Get the list of supported mime types for this application, comma-delimited ### Response: def getApplicationSupportedMimeTypes(self, pchAppKey, pchMimeTypesBuffer, unMimeTypesBuffer): """Get the list of supported mime types for this application, comma-delimited""" fn = self.function_table.getApplicationSupportedMimeTypes result = fn(pchAppKey, pchMimeTypesBuffer, unMimeTypesBuffer) return result
def value_counts(self, dropna=True): """ Returns a Series containing counts of unique values. Parameters ---------- dropna : boolean, default True Don't include counts of NaN, even if NaN is in sp_values. Returns ------- counts : Series """ from pandas import Index, Series keys, counts = algos._value_counts_arraylike(self.sp_values, dropna=dropna) fcounts = self.sp_index.ngaps if fcounts > 0: if self._null_fill_value and dropna: pass else: if self._null_fill_value: mask = isna(keys) else: mask = keys == self.fill_value if mask.any(): counts[mask] += fcounts else: keys = np.insert(keys, 0, self.fill_value) counts = np.insert(counts, 0, fcounts) if not isinstance(keys, ABCIndexClass): keys = Index(keys) result = Series(counts, index=keys) return result
Returns a Series containing counts of unique values. Parameters ---------- dropna : boolean, default True Don't include counts of NaN, even if NaN is in sp_values. Returns ------- counts : Series
Below is the the instruction that describes the task: ### Input: Returns a Series containing counts of unique values. Parameters ---------- dropna : boolean, default True Don't include counts of NaN, even if NaN is in sp_values. Returns ------- counts : Series ### Response: def value_counts(self, dropna=True): """ Returns a Series containing counts of unique values. Parameters ---------- dropna : boolean, default True Don't include counts of NaN, even if NaN is in sp_values. Returns ------- counts : Series """ from pandas import Index, Series keys, counts = algos._value_counts_arraylike(self.sp_values, dropna=dropna) fcounts = self.sp_index.ngaps if fcounts > 0: if self._null_fill_value and dropna: pass else: if self._null_fill_value: mask = isna(keys) else: mask = keys == self.fill_value if mask.any(): counts[mask] += fcounts else: keys = np.insert(keys, 0, self.fill_value) counts = np.insert(counts, 0, fcounts) if not isinstance(keys, ABCIndexClass): keys = Index(keys) result = Series(counts, index=keys) return result
def render_diagram(root_task, out_base, max_param_len=20, horizontal=False, colored=False): """Render a diagram of the ETL pipeline All upstream tasks (i.e. requirements) of :attr:`root_task` are rendered. Nodes are, by default, styled as simple rects. This style is augmented by any :attr:`diagram_style` attributes of the tasks. .. note:: This function requires the 'dot' executable from the GraphViz package to be installed and its location configured in your `project_config.py` variable :attr:`DOT_EXECUTABLE`. Args: root_task (luigi.Task): Task instance that defines the 'upstream root' of the pipeline out_base (str): base output file name (file endings will be appended) max_param_len (int): Maximum shown length of task parameter values horizontal (bool): If True, layout graph left-to-right instead of top-to-bottom colored (bool): If True, show task completion status by color of nodes """ import re import codecs import subprocess from ozelot import config from ozelot.etl.tasks import get_task_name, get_task_param_string # the graph - lines in dot file lines = [u"digraph G {"] if horizontal: lines.append(u"rankdir=LR;") # helper function: make unique task id from task name and parameters: # task name + parameter string, with spaces replaced with _ and all non-alphanumerical characters stripped def get_id(task): s = get_task_name(task) + "_" + get_task_param_string(task) return re.sub(r'\W+', '', re.sub(' ', '_', s)) # node names of tasks that have already been added to the graph existing_nodes = set() # edge sets (tuples of two node names) that have already been added existing_edges = set() # recursion function for generating the pipeline graph def _build(task, parent_id=None): tid = get_id(task) # add node if it's not already there if tid not in existing_nodes: # build task label: task name plus dictionary of parameters as table params = task.to_str_params() param_list = "" for k, v in params.items(): # truncate param value if necessary, and add "..." if len(v) > max_param_len: v = v[:max_param_len] + "..." param_list += "<TR><TD ALIGN=\"LEFT\">" \ "<FONT POINT-SIZE=\"10\">{:s}</FONT>" \ "</TD><TD ALIGN=\"LEFT\">" \ "<FONT POINT-SIZE=\"10\">{:s}</FONT>" \ "</TD></TR>".format(k, v) label = "<TABLE BORDER=\"0\" CELLSPACING=\"1\" CELLPADDING=\"1\">" \ "<TR><TD COLSPAN=\"2\" ALIGN=\"CENTER\">" \ "<FONT POINT-SIZE=\"12\">{:s}</FONT>" \ "</TD></TR>" \ "".format(get_task_name(task)) + param_list + "</TABLE>" style = getattr(task, 'diagram_style', []) if colored: color = ', color="{:s}"'.format("green" if task.complete() else "red") else: color = '' # add a node for the task lines.append(u"{name:s} [label=< {label:s} >, shape=\"rect\" {color:s}, style=\"{style:s}\"];\n" u"".format(name=tid, label=label, color=color, style=','.join(style))) existing_nodes.add(tid) # recurse over requirements for req in task.requires(): _build(req, parent_id=tid) # add edge from current node to (upstream) parent, if it doesn't already exist if parent_id is not None and (tid, parent_id) not in existing_edges: lines.append(u"{source:s} -> {target:s};\n".format(source=tid, target=parent_id)) # generate pipeline graph _build(root_task) # close the graph definition lines.append(u"}") # write description in DOT format with codecs.open(out_base + '.dot', 'w', encoding='utf-8') as f: f.write(u"\n".join(lines)) # check existence of DOT_EXECUTABLE variable and file if not hasattr(config, 'DOT_EXECUTABLE'): raise RuntimeError("Please configure the 'DOT_EXECUTABLE' variable in your 'project_config.py'") if not os.path.exists(config.DOT_EXECUTABLE): raise IOError("Could not find file pointed to by 'DOT_EXECUTABLE': " + str(config.DOT_EXECUTABLE)) # render to image using DOT # noinspection PyUnresolvedReferences subprocess.check_call([ config.DOT_EXECUTABLE, '-T', 'png', '-o', out_base + '.png', out_base + '.dot' ])
Render a diagram of the ETL pipeline All upstream tasks (i.e. requirements) of :attr:`root_task` are rendered. Nodes are, by default, styled as simple rects. This style is augmented by any :attr:`diagram_style` attributes of the tasks. .. note:: This function requires the 'dot' executable from the GraphViz package to be installed and its location configured in your `project_config.py` variable :attr:`DOT_EXECUTABLE`. Args: root_task (luigi.Task): Task instance that defines the 'upstream root' of the pipeline out_base (str): base output file name (file endings will be appended) max_param_len (int): Maximum shown length of task parameter values horizontal (bool): If True, layout graph left-to-right instead of top-to-bottom colored (bool): If True, show task completion status by color of nodes
Below is the the instruction that describes the task: ### Input: Render a diagram of the ETL pipeline All upstream tasks (i.e. requirements) of :attr:`root_task` are rendered. Nodes are, by default, styled as simple rects. This style is augmented by any :attr:`diagram_style` attributes of the tasks. .. note:: This function requires the 'dot' executable from the GraphViz package to be installed and its location configured in your `project_config.py` variable :attr:`DOT_EXECUTABLE`. Args: root_task (luigi.Task): Task instance that defines the 'upstream root' of the pipeline out_base (str): base output file name (file endings will be appended) max_param_len (int): Maximum shown length of task parameter values horizontal (bool): If True, layout graph left-to-right instead of top-to-bottom colored (bool): If True, show task completion status by color of nodes ### Response: def render_diagram(root_task, out_base, max_param_len=20, horizontal=False, colored=False): """Render a diagram of the ETL pipeline All upstream tasks (i.e. requirements) of :attr:`root_task` are rendered. Nodes are, by default, styled as simple rects. This style is augmented by any :attr:`diagram_style` attributes of the tasks. .. note:: This function requires the 'dot' executable from the GraphViz package to be installed and its location configured in your `project_config.py` variable :attr:`DOT_EXECUTABLE`. Args: root_task (luigi.Task): Task instance that defines the 'upstream root' of the pipeline out_base (str): base output file name (file endings will be appended) max_param_len (int): Maximum shown length of task parameter values horizontal (bool): If True, layout graph left-to-right instead of top-to-bottom colored (bool): If True, show task completion status by color of nodes """ import re import codecs import subprocess from ozelot import config from ozelot.etl.tasks import get_task_name, get_task_param_string # the graph - lines in dot file lines = [u"digraph G {"] if horizontal: lines.append(u"rankdir=LR;") # helper function: make unique task id from task name and parameters: # task name + parameter string, with spaces replaced with _ and all non-alphanumerical characters stripped def get_id(task): s = get_task_name(task) + "_" + get_task_param_string(task) return re.sub(r'\W+', '', re.sub(' ', '_', s)) # node names of tasks that have already been added to the graph existing_nodes = set() # edge sets (tuples of two node names) that have already been added existing_edges = set() # recursion function for generating the pipeline graph def _build(task, parent_id=None): tid = get_id(task) # add node if it's not already there if tid not in existing_nodes: # build task label: task name plus dictionary of parameters as table params = task.to_str_params() param_list = "" for k, v in params.items(): # truncate param value if necessary, and add "..." if len(v) > max_param_len: v = v[:max_param_len] + "..." param_list += "<TR><TD ALIGN=\"LEFT\">" \ "<FONT POINT-SIZE=\"10\">{:s}</FONT>" \ "</TD><TD ALIGN=\"LEFT\">" \ "<FONT POINT-SIZE=\"10\">{:s}</FONT>" \ "</TD></TR>".format(k, v) label = "<TABLE BORDER=\"0\" CELLSPACING=\"1\" CELLPADDING=\"1\">" \ "<TR><TD COLSPAN=\"2\" ALIGN=\"CENTER\">" \ "<FONT POINT-SIZE=\"12\">{:s}</FONT>" \ "</TD></TR>" \ "".format(get_task_name(task)) + param_list + "</TABLE>" style = getattr(task, 'diagram_style', []) if colored: color = ', color="{:s}"'.format("green" if task.complete() else "red") else: color = '' # add a node for the task lines.append(u"{name:s} [label=< {label:s} >, shape=\"rect\" {color:s}, style=\"{style:s}\"];\n" u"".format(name=tid, label=label, color=color, style=','.join(style))) existing_nodes.add(tid) # recurse over requirements for req in task.requires(): _build(req, parent_id=tid) # add edge from current node to (upstream) parent, if it doesn't already exist if parent_id is not None and (tid, parent_id) not in existing_edges: lines.append(u"{source:s} -> {target:s};\n".format(source=tid, target=parent_id)) # generate pipeline graph _build(root_task) # close the graph definition lines.append(u"}") # write description in DOT format with codecs.open(out_base + '.dot', 'w', encoding='utf-8') as f: f.write(u"\n".join(lines)) # check existence of DOT_EXECUTABLE variable and file if not hasattr(config, 'DOT_EXECUTABLE'): raise RuntimeError("Please configure the 'DOT_EXECUTABLE' variable in your 'project_config.py'") if not os.path.exists(config.DOT_EXECUTABLE): raise IOError("Could not find file pointed to by 'DOT_EXECUTABLE': " + str(config.DOT_EXECUTABLE)) # render to image using DOT # noinspection PyUnresolvedReferences subprocess.check_call([ config.DOT_EXECUTABLE, '-T', 'png', '-o', out_base + '.png', out_base + '.dot' ])
def run(self): """ Runs the simulation. """ self.init_run() if self.debug: self.dump("AfterInit: ") #print("++++++++++++++++ Time: %f"%self.current_time) while self.step(): #self.dump("Time: %f"%self.current_time) #print("++++++++++++++++ Time: %f"%self.current_time) pass
Runs the simulation.
Below is the the instruction that describes the task: ### Input: Runs the simulation. ### Response: def run(self): """ Runs the simulation. """ self.init_run() if self.debug: self.dump("AfterInit: ") #print("++++++++++++++++ Time: %f"%self.current_time) while self.step(): #self.dump("Time: %f"%self.current_time) #print("++++++++++++++++ Time: %f"%self.current_time) pass
def handle_items(repo, **kwargs): """:return: repo.files()""" log.info('items: %s %s' %(repo, kwargs)) if not hasattr(repo, 'items'): return [] return [i.serialize() for i in repo.items(**kwargs)]
:return: repo.files()
Below is the the instruction that describes the task: ### Input: :return: repo.files() ### Response: def handle_items(repo, **kwargs): """:return: repo.files()""" log.info('items: %s %s' %(repo, kwargs)) if not hasattr(repo, 'items'): return [] return [i.serialize() for i in repo.items(**kwargs)]
def _get_err_msg(row, col, fld, val, prt_flds): """Return an informative message with details of xlsx write attempt.""" import traceback traceback.print_exc() err_msg = ( "ROW({R}) COL({C}) FIELD({F}) VAL({V})\n".format(R=row, C=col, F=fld, V=val), "PRINT FIELDS({N}): {F}".format(N=len(prt_flds), F=" ".join(prt_flds))) return "\n".join(err_msg)
Return an informative message with details of xlsx write attempt.
Below is the the instruction that describes the task: ### Input: Return an informative message with details of xlsx write attempt. ### Response: def _get_err_msg(row, col, fld, val, prt_flds): """Return an informative message with details of xlsx write attempt.""" import traceback traceback.print_exc() err_msg = ( "ROW({R}) COL({C}) FIELD({F}) VAL({V})\n".format(R=row, C=col, F=fld, V=val), "PRINT FIELDS({N}): {F}".format(N=len(prt_flds), F=" ".join(prt_flds))) return "\n".join(err_msg)
def _resolve_periods_in_year(scale, frame): """ Convert the scale to an annualization factor. If scale is None then attempt to resolve from frame. If scale is a scalar then use it. If scale is a string then use it to lookup the annual factor """ if scale is None: return periodicity(frame) elif isinstance(scale, basestring): return periodicity(scale) elif np.isscalar(scale): return scale else: raise ValueError("scale must be None, scalar, or string, not %s" % type(scale))
Convert the scale to an annualization factor. If scale is None then attempt to resolve from frame. If scale is a scalar then use it. If scale is a string then use it to lookup the annual factor
Below is the the instruction that describes the task: ### Input: Convert the scale to an annualization factor. If scale is None then attempt to resolve from frame. If scale is a scalar then use it. If scale is a string then use it to lookup the annual factor ### Response: def _resolve_periods_in_year(scale, frame): """ Convert the scale to an annualization factor. If scale is None then attempt to resolve from frame. If scale is a scalar then use it. If scale is a string then use it to lookup the annual factor """ if scale is None: return periodicity(frame) elif isinstance(scale, basestring): return periodicity(scale) elif np.isscalar(scale): return scale else: raise ValueError("scale must be None, scalar, or string, not %s" % type(scale))
def handle_msg(self, msg): """handle the message *msg*. """ addr_ = msg.data["URI"] status = msg.data.get('status', True) if status: service = msg.data.get('service') for service in self.services: if not service or service in service: LOGGER.debug("Adding address %s %s", str(addr_), str(service)) self.subscriber.add(addr_) break else: LOGGER.debug("Removing address %s", str(addr_)) self.subscriber.remove(addr_)
handle the message *msg*.
Below is the the instruction that describes the task: ### Input: handle the message *msg*. ### Response: def handle_msg(self, msg): """handle the message *msg*. """ addr_ = msg.data["URI"] status = msg.data.get('status', True) if status: service = msg.data.get('service') for service in self.services: if not service or service in service: LOGGER.debug("Adding address %s %s", str(addr_), str(service)) self.subscriber.add(addr_) break else: LOGGER.debug("Removing address %s", str(addr_)) self.subscriber.remove(addr_)
def _ensure_worker(self): """Ensure there are enough workers available""" while len(self._workers) < self._min_workers or len(self._workers) < self._queue.qsize() < self._max_workers: worker = threading.Thread( target=self._execute_futures, name=self.identifier + '_%d' % time.time(), ) worker.daemon = True self._workers.add(worker) worker.start()
Ensure there are enough workers available
Below is the the instruction that describes the task: ### Input: Ensure there are enough workers available ### Response: def _ensure_worker(self): """Ensure there are enough workers available""" while len(self._workers) < self._min_workers or len(self._workers) < self._queue.qsize() < self._max_workers: worker = threading.Thread( target=self._execute_futures, name=self.identifier + '_%d' % time.time(), ) worker.daemon = True self._workers.add(worker) worker.start()
def wallet_representative(self, wallet): """ Returns the default representative for **wallet** :param wallet: Wallet to get default representative account for :type wallet: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.wallet_representative( ... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F" ... ) "xrb_3e3j5tkog48pnny9dmfzj1r16pg8t1e76dz5tmac6iq689wyjfpi00000000" """ wallet = self._process_value(wallet, 'wallet') payload = {"wallet": wallet} resp = self.call('wallet_representative', payload) return resp['representative']
Returns the default representative for **wallet** :param wallet: Wallet to get default representative account for :type wallet: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.wallet_representative( ... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F" ... ) "xrb_3e3j5tkog48pnny9dmfzj1r16pg8t1e76dz5tmac6iq689wyjfpi00000000"
Below is the the instruction that describes the task: ### Input: Returns the default representative for **wallet** :param wallet: Wallet to get default representative account for :type wallet: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.wallet_representative( ... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F" ... ) "xrb_3e3j5tkog48pnny9dmfzj1r16pg8t1e76dz5tmac6iq689wyjfpi00000000" ### Response: def wallet_representative(self, wallet): """ Returns the default representative for **wallet** :param wallet: Wallet to get default representative account for :type wallet: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.wallet_representative( ... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F" ... ) "xrb_3e3j5tkog48pnny9dmfzj1r16pg8t1e76dz5tmac6iq689wyjfpi00000000" """ wallet = self._process_value(wallet, 'wallet') payload = {"wallet": wallet} resp = self.call('wallet_representative', payload) return resp['representative']
def set_missing_value_policy(self, policy, target_attr_name=None): """ Sets the behavior for one or all attributes to use when traversing the tree using a query vector and it encounters a branch that does not exist. """ assert policy in MISSING_VALUE_POLICIES, \ "Unknown policy: %s" % (policy,) for attr_name in self.data.attribute_names: if target_attr_name is not None and target_attr_name != attr_name: continue self.missing_value_policy[attr_name] = policy
Sets the behavior for one or all attributes to use when traversing the tree using a query vector and it encounters a branch that does not exist.
Below is the the instruction that describes the task: ### Input: Sets the behavior for one or all attributes to use when traversing the tree using a query vector and it encounters a branch that does not exist. ### Response: def set_missing_value_policy(self, policy, target_attr_name=None): """ Sets the behavior for one or all attributes to use when traversing the tree using a query vector and it encounters a branch that does not exist. """ assert policy in MISSING_VALUE_POLICIES, \ "Unknown policy: %s" % (policy,) for attr_name in self.data.attribute_names: if target_attr_name is not None and target_attr_name != attr_name: continue self.missing_value_policy[attr_name] = policy
def write(self): """Restore GFF3 entry to original format Returns: str: properly formatted string containing the GFF3 entry """ none_type = type(None) # Format attributes for writing attrs = self.attribute_string() # Place holder if field value is NoneType for attr in self.__dict__.keys(): if type(attr) == none_type: setattr(self, attr, '.') # Format entry for writing fstr = '{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}\t{7}\t{8}{9}'\ .format(self.seqid, self.source, self.type, str(self.start), str(self.end), self._score_str, self.strand, self.phase, attrs, os.linesep) return fstr
Restore GFF3 entry to original format Returns: str: properly formatted string containing the GFF3 entry
Below is the the instruction that describes the task: ### Input: Restore GFF3 entry to original format Returns: str: properly formatted string containing the GFF3 entry ### Response: def write(self): """Restore GFF3 entry to original format Returns: str: properly formatted string containing the GFF3 entry """ none_type = type(None) # Format attributes for writing attrs = self.attribute_string() # Place holder if field value is NoneType for attr in self.__dict__.keys(): if type(attr) == none_type: setattr(self, attr, '.') # Format entry for writing fstr = '{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}\t{7}\t{8}{9}'\ .format(self.seqid, self.source, self.type, str(self.start), str(self.end), self._score_str, self.strand, self.phase, attrs, os.linesep) return fstr