Dataset columns: code (string, lengths 75 to 104k), docstring (string, lengths 1 to 46.9k), text (string, lengths 164 to 112k).
def bind(self, fn: Callable[[Any], 'Observable']) -> 'Observable': r"""Chain continuation passing functions. Haskell: m >>= k = Cont $ \c -> runCont m $ \a -> runCont (k a) c """ source = self return Observable(lambda on_next: source.subscribe(lambda a: fn(a).subscribe(on_next)))
r"""Chain continuation passing functions. Haskell: m >>= k = Cont $ \c -> runCont m $ \a -> runCont (k a) c
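For orientation, here is a minimal, self-contained sketch of the same continuation-passing chain; the Observable constructor, the subscribe attribute, and the unit helper below are assumptions for the demo, not part of the library excerpted above.

from typing import Any, Callable

class Observable:
    # Minimal continuation-passing Observable assumed for this demo.
    def __init__(self, subscribe: Callable[[Callable[[Any], None]], None]):
        self.subscribe = subscribe

    def bind(self, fn: Callable[[Any], 'Observable']) -> 'Observable':
        source = self
        return Observable(lambda on_next: source.subscribe(lambda a: fn(a).subscribe(on_next)))

def unit(value: Any) -> Observable:
    # Wrap a plain value, the monadic "return" for this continuation monad.
    return Observable(lambda on_next: on_next(value))

# Chain two steps: 3 -> 4 -> '4!'; subscribe supplies the final continuation.
unit(3).bind(lambda x: unit(x + 1)).bind(lambda x: unit('{0}!'.format(x))).subscribe(print)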
def eccentricity(self): """ The eccentricity of the 2D Gaussian function that has the same second-order moments as the source. The eccentricity is the fraction of the distance along the semimajor axis at which the focus lies. .. math:: e = \\sqrt{1 - \\frac{b^2}{a^2}} where :math:`a` and :math:`b` are the lengths of the semimajor and semiminor axes, respectively. """ l1, l2 = self.covariance_eigvals if l1 == 0: return 0. # pragma: no cover return np.sqrt(1. - (l2 / l1))
The eccentricity of the 2D Gaussian function that has the same second-order moments as the source. The eccentricity is the fraction of the distance along the semimajor axis at which the focus lies. .. math:: e = \\sqrt{1 - \\frac{b^2}{a^2}} where :math:`a` and :math:`b` are the lengths of the semimajor and semiminor axes, respectively.
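As a quick sanity check (not part of the original class), the expression sqrt(1 - l2/l1) reproduces sqrt(1 - b^2/a^2) when the covariance eigenvalues are a**2 and b**2 for an axis-aligned Gaussian; the axis lengths below are arbitrary.

import numpy as np

a, b = 3.0, 2.0                                    # semimajor / semiminor axis lengths
cov = np.diag([a ** 2, b ** 2])                    # second-order moments of the Gaussian
l1, l2 = np.sort(np.linalg.eigvalsh(cov))[::-1]    # eigenvalues, largest first
ecc = np.sqrt(1.0 - l2 / l1)
assert np.isclose(ecc, np.sqrt(1.0 - b ** 2 / a ** 2))
print(ecc)                                         # ~0.745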
def shrink(self, shrink):
    """
    Remove unnecessary parts

    :param shrink: Object to shrink
    :type shrink: dict | list
    :return: Shrunk object
    :rtype: dict | list
    """
    if isinstance(shrink, list):
        return self._shrink_list(shrink)
    if isinstance(shrink, dict):
        return self._shrink_dict(shrink)
    return shrink
Remove unnecessary parts :param shrink: Object to shrink :type shrink: dict | list :return: Shrunk object :rtype: dict | list
def submit_batch_prediction(job_request, job_id=None): """Submit a batch prediction job. Args: job_request: the arguments of the training job in a dict. For example, { 'version_name': 'projects/my-project/models/my-model/versions/my-version', 'data_format': 'TEXT', 'input_paths': ['gs://my_bucket/my_file.csv'], 'output_path': 'gs://my_bucket/predict_output', 'region': 'us-central1', 'max_worker_count': 1, } job_id: id for the training job. If None, an id based on timestamp will be generated. Returns: A Job object representing the batch prediction job. """ if job_id is None: job_id = 'prediction_' + datetime.datetime.now().strftime('%y%m%d_%H%M%S') job = { 'job_id': job_id, 'prediction_input': job_request, } context = datalab.Context.default() cloudml = discovery.build('ml', 'v1', credentials=context.credentials) request = cloudml.projects().jobs().create(body=job, parent='projects/' + context.project_id) request.headers['user-agent'] = 'GoogleCloudDataLab/1.0' request.execute() return Job(job_id)
Submit a batch prediction job. Args: job_request: the arguments of the training job in a dict. For example, { 'version_name': 'projects/my-project/models/my-model/versions/my-version', 'data_format': 'TEXT', 'input_paths': ['gs://my_bucket/my_file.csv'], 'output_path': 'gs://my_bucket/predict_output', 'region': 'us-central1', 'max_worker_count': 1, } job_id: id for the training job. If None, an id based on timestamp will be generated. Returns: A Job object representing the batch prediction job.
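A hypothetical call sketch built only from the example in the docstring above; the bucket, model, project, and job id are placeholder names.

job_request = {
    'version_name': 'projects/my-project/models/my-model/versions/my-version',
    'data_format': 'TEXT',
    'input_paths': ['gs://my_bucket/my_file.csv'],
    'output_path': 'gs://my_bucket/predict_output',
    'region': 'us-central1',
    'max_worker_count': 1,
}
# job_id is optional; a timestamp-based id is generated when it is omitted.
job = submit_batch_prediction(job_request, job_id='prediction_example_001')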
def select_connected_bonds(): '''Select the bonds connected to the currently selected atoms.''' s = current_system() start, end = s.bonds.transpose() selected = np.zeros(s.n_bonds, 'bool') for i in selected_atoms(): selected |= (i == start) | (i == end) csel = current_selection() bsel = csel['bonds'].add( Selection(selected.nonzero()[0], s.n_bonds)) ret = csel.copy() ret['bonds'] = bsel return select_selection(ret)
Select the bonds connected to the currently selected atoms.
def setActionManifestPath(self, pchActionManifestPath): """ Sets the path to the action manifest JSON file that is used by this application. If this information was set on the Steam partner site, calls to this function are ignored. If the Steam partner site setting and the path provided by this call are different, VRInputError_MismatchedActionManifest is returned. This call must be made before the first call to UpdateActionState or IVRSystem::PollNextEvent. """ fn = self.function_table.setActionManifestPath result = fn(pchActionManifestPath) return result
Sets the path to the action manifest JSON file that is used by this application. If this information was set on the Steam partner site, calls to this function are ignored. If the Steam partner site setting and the path provided by this call are different, VRInputError_MismatchedActionManifest is returned. This call must be made before the first call to UpdateActionState or IVRSystem::PollNextEvent.
def get_setting(key, *default):
    """Return specific search setting from Django conf."""
    if default:
        return get_settings().get(key, default[0])
    else:
        return get_settings()[key]
Return specific search setting from Django conf.
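A small usage sketch, assuming get_settings() (referenced above) returns a plain dict of search settings; the setting names are invented for illustration.

# Falls back to the supplied default when the key is missing.
page_size = get_setting('SEARCH_PAGE_SIZE', 25)

# With no default, a missing key raises KeyError from the settings dict.
index_name = get_setting('SEARCH_INDEX_NAME')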
def tree_model_natsort(model, row1, row2, user_data=None):
    '''Sort a TreeModel column using a natural-sorting algorithm.'''
    sort_column, sort_type = model.get_sort_column_id()
    value1 = model.get_value(row1, sort_column)
    value2 = model.get_value(row2, sort_column)
    sort_list1 = util.natsort(value1)
    sort_list2 = util.natsort(value2)
    if sort_list1 < sort_list2:
        return -1
    else:
        return 1
Sort a TreeModel column using a natural-sorting algorithm.
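If the surrounding code uses PyGObject, the comparator would typically be registered on a sortable model roughly as below; the column index, model contents, and the util.natsort dependency are assumptions for the sketch.

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

NAME_COLUMN = 0
store = Gtk.ListStore(str)
for name in ('file10', 'file2', 'file1'):
    store.append([name])

# Register the natural-sort comparator for the name column and sort by it.
store.set_sort_func(NAME_COLUMN, tree_model_natsort, None)
store.set_sort_column_id(NAME_COLUMN, Gtk.SortType.ASCENDING)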
def fit(self, X, y=None): ''' Learn the linear transformation to clipped eigenvalues. Note that if min_eig isn't zero and any of the original eigenvalues were exactly zero, this will leave those eigenvalues as zero. Parameters ---------- X : array, shape [n, n] The *symmetric* input similarities. If X is asymmetric, it will be treated as if it were symmetric based on its lower-triangular part. ''' n = X.shape[0] if X.shape != (n, n): raise TypeError("Input must be a square matrix.") # TODO: only get negative eigs somehow? memory = get_memory(self.memory) vals, vecs = memory.cache(scipy.linalg.eigh, ignore=['overwrite_a'])( X, overwrite_a=not self.copy) vals = vals.reshape(-1, 1) if self.min_eig == 0: inner = vals > self.min_eig else: with np.errstate(divide='ignore'): inner = np.where(vals >= self.min_eig, 1, np.where(vals == 0, 0, self.min_eig / vals)) self.clip_ = np.dot(vecs, inner * vecs.T) return self
Learn the linear transformation to clipped eigenvalues. Note that if min_eig isn't zero and any of the original eigenvalues were exactly zero, this will leave those eigenvalues as zero. Parameters ---------- X : array, shape [n, n] The *symmetric* input similarities. If X is asymmetric, it will be treated as if it were symmetric based on its lower-triangular part.
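For intuition only, a standalone numpy sketch of eigenvalue clipping: it clips the spectrum of a symmetric matrix directly rather than building the linear map stored in clip_ above, so it is a simplification, not the class's method.

import numpy as np

def clip_psd(X, min_eig=0.0):
    # Symmetrize, clip eigenvalues from below, and rebuild the matrix.
    X = (X + X.T) / 2.0
    vals, vecs = np.linalg.eigh(X)
    vals = np.maximum(vals, min_eig)
    return (vecs * vals) @ vecs.T

A = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3
B = clip_psd(A)
print(np.linalg.eigvalsh(B))             # [0., 3.]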
def keypair_delete(self, name):
    '''
    Delete a keypair
    '''
    nt_ks = self.compute_conn
    nt_ks.keypairs.delete(name)
    return 'Keypair deleted: {0}'.format(name)
Delete a keypair
def setEffort(self, vehID, edgeID, effort=None, begTime=None, endTime=None): """setEffort(string, string, double, int, int) -> None Inserts the information about the effort of edge "edgeID" valid from begin time to end time into the vehicle's internal edge weights container. If the time is not specified, any previously set values for that edge are removed. If begTime or endTime are not specified the value is set for the whole simulation duration. """ if type(edgeID) != str and type(begTime) == str: # legacy handling warnings.warn("Parameter order has changed for setEffort(). Attempting legacy ordering. Please update your code.", stacklevel=2) return self.setEffort(vehID, begTime, endTime, edgeID, effort) if effort is None: # reset self._connection._beginMessage(tc.CMD_SET_VEHICLE_VARIABLE, tc.VAR_EDGE_EFFORT, vehID, 1 + 4 + 1 + 4 + len(edgeID)) self._connection._string += struct.pack("!Bi", tc.TYPE_COMPOUND, 1) self._connection._packString(edgeID) self._connection._sendExact() elif begTime is None: # set value for the whole simulation self._connection._beginMessage(tc.CMD_SET_VEHICLE_VARIABLE, tc.VAR_EDGE_EFFORT, vehID, 1 + 4 + 1 + 4 + len(edgeID) + 1 + 8) self._connection._string += struct.pack("!Bi", tc.TYPE_COMPOUND, 2) self._connection._packString(edgeID) self._connection._string += struct.pack("!Bd", tc.TYPE_DOUBLE, effort) self._connection._sendExact() else: self._connection._beginMessage(tc.CMD_SET_VEHICLE_VARIABLE, tc.VAR_EDGE_EFFORT, vehID, 1 + 4 + 1 + 4 + 1 + 4 + 1 + 4 + len(edgeID) + 1 + 8) self._connection._string += struct.pack("!BiBiBi", tc.TYPE_COMPOUND, 4, tc.TYPE_INTEGER, begTime, tc.TYPE_INTEGER, endTime) self._connection._packString(edgeID) self._connection._string += struct.pack("!Bd", tc.TYPE_DOUBLE, effort) self._connection._sendExact()
setEffort(string, string, double, int, int) -> None Inserts the information about the effort of edge "edgeID" valid from begin time to end time into the vehicle's internal edge weights container. If the time is not specified, any previously set values for that edge are removed. If begTime or endTime are not specified the value is set for the whole simulation duration.
def wrap2stub(self, customfunc): """ Wrapping the inspector as a stub based on the type Args: customfunc: function that replaces the original Returns: function, the spy wrapper around the customfunc """ if self.args_type == "MODULE_FUNCTION": wrapper = Wrapper.wrap_spy(customfunc, self.obj) setattr(self.obj, self.prop, wrapper) elif self.args_type == "MODULE": wrapper = Wrapper.EmptyClass setattr(CPSCOPE, self.obj.__name__, wrapper) elif self.args_type == "FUNCTION": wrapper = Wrapper.wrap_spy(customfunc) setattr(CPSCOPE, self.obj.__name__, wrapper) elif self.args_type == "PURE": wrapper = Wrapper.wrap_spy(customfunc) setattr(self.pure, "func", wrapper) return wrapper
Wrapping the inspector as a stub based on the type Args: customfunc: function that replaces the original Returns: function, the spy wrapper around the customfunc
def max_dropback(self):
    """Maximum drawdown"""
    return round(
        float(
            max(
                [
                    (self.assets.iloc[idx] - self.assets.iloc[idx::].min())
                    / self.assets.iloc[idx]
                    for idx in range(len(self.assets))
                ]
            )
        ),
        2
    )
Maximum drawdown
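The same maximum-drawdown quantity can be checked on a toy equity curve; the only assumption is that self.assets behaves like a pandas Series of portfolio values.

import pandas as pd

assets = pd.Series([100.0, 120.0, 90.0, 105.0, 80.0])
max_dd = max(
    (assets.iloc[idx] - assets.iloc[idx:].min()) / assets.iloc[idx]
    for idx in range(len(assets))
)
print(round(max_dd, 2))   # 0.33, the worst peak-to-trough loss (120 down to 80)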
def profile(name):
    '''
    This state module allows you to modify system tuned parameters

    Example tuned.sls file to set profile to virtual-guest

    tuned:
      tuned:
        - profile
        - name: virtual-guest

    name
        tuned profile name to set the system to

    To see a valid list of states call execution module:
    :py:func:`tuned.list <salt.modules.tuned.list_>`
    '''
    # create data-structure to return with default value
    ret = {'name': '', 'changes': {}, 'result': False, 'comment': ''}
    ret['name'] = name
    profile = name

    # get the current state of tuned-adm
    current_state = __salt__['tuned.active']()
    valid_profiles = __salt__['tuned.list']()

    # check valid profiles, and return error if profile name is not valid
    if profile not in valid_profiles:
        raise salt.exceptions.SaltInvocationError('Invalid Profile Name')

    # if current state is same as requested state, return without doing much
    if profile in current_state:
        ret['result'] = True
        ret['comment'] = 'System already in the correct state'
        return ret

    # test mode
    if __opts__['test'] is True:
        ret['comment'] = 'The state of "{0}" will be changed.'.format(
            current_state)
        ret['changes'] = {
            'old': current_state,
            'new': 'Profile will be set to {0}'.format(profile),
        }
        # return None when testing
        ret['result'] = None
        return ret

    # we come to this stage if the current state is different from the requested state
    # we therefore have to set the newly requested state
    new_state = __salt__['tuned.profile'](profile)

    # create the comment data structure
    ret['comment'] = 'The state of "{0}" was changed!'.format(profile)

    # fill in the ret data structure
    ret['changes'] = {
        'old': current_state,
        'new': new_state,
    }
    ret['result'] = True

    # return with the dictionary data structure
    return ret
This state module allows you to modify system tuned parameters Example tuned.sls file to set profile to virtual-guest tuned: tuned: - profile - name: virtual-guest name tuned profile name to set the system to To see a valid list of states call execution module: :py:func:`tuned.list <salt.modules.tuned.list_>`
def cfunc(name, result, *args):
    """Build and apply a ctypes prototype complete with parameter flags."""
    atypes = []
    aflags = []
    for arg in args:
        atypes.append(arg[1])
        aflags.append((arg[2], arg[0]) + arg[3:])
    return CFUNCTYPE(result, *atypes)((name, _fl), tuple(aflags))
Build and apply a ctypes prototype complete with parameter flags.
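A usage sketch under the assumption that the module-level _fl handle is the library the prototype binds against; here it is pointed at the C runtime so the example can actually run, and the (name, ctype, flag) tuples follow the convention the function above unpacks (flag 1 marks an input parameter).

from ctypes import CDLL, c_int
from ctypes.util import find_library

_fl = CDLL(find_library('c'))                     # assumed shared-library handle used by cfunc

c_abs = cfunc('abs', c_int, ('value', c_int, 1))  # bind libc's abs with a named input parameter
print(c_abs(-7))                                  # 7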
def keep_absolute_resample__roc_auc(X, y, model_generator, method_name, num_fcounts=11): """ Keep Absolute (resample) xlabel = "Max fraction of features kept" ylabel = "ROC AUC" transform = "identity" sort_order = 12 """ return __run_measure(measures.keep_resample, X, y, model_generator, method_name, 0, num_fcounts, sklearn.metrics.roc_auc_score)
Keep Absolute (resample) xlabel = "Max fraction of features kept" ylabel = "ROC AUC" transform = "identity" sort_order = 12
def _list_items(action, key, profile=None, subdomain=None, api_key=None): ''' List items belonging to an API call. This method should be in utils.pagerduty. ''' items = _query( profile=profile, subdomain=subdomain, api_key=api_key, action=action ) ret = {} for item in items[action]: ret[item[key]] = item return ret
List items belonging to an API call. This method should be in utils.pagerduty.
def make_segments(x, y):
    """
    Create list of line segments from x and y coordinates, in the correct
    format for LineCollection: an array of the form
    numlines x (points per line) x 2 (x and y) array
    """
    points = np.array([x, y]).T.reshape(-1, 1, 2)
    segments = np.concatenate([points[:-1], points[1:]], axis=1)
    return segments
Create list of line segments from x and y coordinates, in the correct format for LineCollection: an array of the form numlines x (points per line) x 2 (x and y) array
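These segments are usually handed to matplotlib's LineCollection to draw a line whose colour varies along its length; the colormap and data below are arbitrary, and make_segments is assumed importable from the module above.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)
segments = make_segments(x, y)

lc = LineCollection(segments, cmap='viridis')
lc.set_array(x)                     # colour each segment by its x position
fig, ax = plt.subplots()
ax.add_collection(lc)
ax.set_xlim(x.min(), x.max())
ax.set_ylim(-1.1, 1.1)
plt.show()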
def to_json_file(file, data, pretty):
    """
    Writes an object instance as a JSON-formatted string to a file

    :param file: File to write the JSON string to
    :param data: Object to convert to JSON
    :param pretty: Use pretty formatting or not
    """
    json_string = to_json(data, pretty)
    file_utils.write_to_file(file, json_string)
Writes an object instance as a JSON-formatted string to a file :param file: File to write the JSON string to :param data: Object to convert to JSON :param pretty: Use pretty formatting or not
def clause_annotations(self): """The list of clause annotations in ``words`` layer.""" if not self.is_tagged(CLAUSE_ANNOTATION): self.tag_clause_annotations() return [word.get(CLAUSE_ANNOTATION, None) for word in self[WORDS]]
The list of clause annotations in ``words`` layer.
def read_data_to_asp(file: str) -> List[str]: """ Reads the given JSON file and generates the ASP definition. Args: file: the json data file Returns: the asp definition. """ if file.endswith(".json"): with open(file) as f: data = json.load(f) return schema2asp(data2schema(data)) elif file.endswith(".csv"): df = pd.read_csv(file) df = df.where((pd.notnull(df)), None) data = list(df.T.to_dict().values()) schema = data2schema(data) asp = schema2asp(schema) return asp else: raise Exception("invalid file type")
Reads the given JSON file and generates the ASP definition. Args: file: the json data file Returns: the asp definition.
def _calc_grad_tiled(self, img, t_grad, tile_size=512): '''Compute the value of tensor t_grad over the image in a tiled way. Random shifts are applied to the image to blur tile boundaries over multiple iterations.''' sz = tile_size h, w = img.shape[:2] sx, sy = np.random.randint(sz, size=2) img_shift = np.roll(np.roll(img, sx, 1), sy, 0) grad = np.zeros_like(img) for y in range(0, max(h-sz//2, sz),sz): for x in range(0, max(w-sz//2, sz),sz): sub = img_shift[y:y+sz,x:x+sz] g = self._session.run(t_grad, {self._t_input:sub}) grad[y:y+sz,x:x+sz] = g return np.roll(np.roll(grad, -sx, 1), -sy, 0)
Compute the value of tensor t_grad over the image in a tiled way. Random shifts are applied to the image to blur tile boundaries over multiple iterations.
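The random-shift idea can be isolated without TensorFlow: roll the image before tiling, process tiles, then roll the result back, so seams land in different places on every call. The sketch below replaces the gradient computation with an identity function purely to show the bookkeeping.

import numpy as np

def tiled_apply(img, fn, sz=4):
    h, w = img.shape[:2]
    sx, sy = np.random.randint(sz, size=2)
    shifted = np.roll(np.roll(img, sx, 1), sy, 0)      # random shift hides tile seams
    out = np.zeros_like(shifted)
    for y in range(0, max(h - sz // 2, sz), sz):
        for x in range(0, max(w - sz // 2, sz), sz):
            out[y:y + sz, x:x + sz] = fn(shifted[y:y + sz, x:x + sz])
    return np.roll(np.roll(out, -sx, 1), -sy, 0)       # undo the shift

img = np.arange(64, dtype=float).reshape(8, 8)
assert np.allclose(tiled_apply(img, lambda tile: tile), img)   # identity round-trips exactly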
def channel(self, rpc_timeout=60, lazy=False): """Open Channel. :param int rpc_timeout: Timeout before we give up waiting for an RPC response from the server. :raises AMQPInvalidArgument: Invalid Parameters :raises AMQPChannelError: Raises if the channel encountered an error. :raises AMQPConnectionError: Raises if the connection encountered an error. """ LOGGER.debug('Opening a new Channel') if not compatibility.is_integer(rpc_timeout): raise AMQPInvalidArgument('rpc_timeout should be an integer') elif self.is_closed: raise AMQPConnectionError('socket/connection closed') with self.lock: channel_id = self._get_next_available_channel_id() channel = Channel(channel_id, self, rpc_timeout, on_close_impl=self._cleanup_channel) self._channels[channel_id] = channel if not lazy: channel.open() LOGGER.debug('Channel #%d Opened', channel_id) return self._channels[channel_id]
Open Channel. :param int rpc_timeout: Timeout before we give up waiting for an RPC response from the server. :raises AMQPInvalidArgument: Invalid Parameters :raises AMQPChannelError: Raises if the channel encountered an error. :raises AMQPConnectionError: Raises if the connection encountered an error.
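A usage sketch, assuming connection is an already-opened instance of the class above; only methods visible in the excerpt (channel and Channel.open) are used.

channel = connection.channel(rpc_timeout=30)    # opened immediately
lazy_channel = connection.channel(lazy=True)    # allocated but not yet opened
lazy_channel.open()                             # open it later, on demand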
def query_subscriptions(self, subscription_query): """QuerySubscriptions. [Preview API] Query for subscriptions. A subscription is returned if it matches one or more of the specified conditions. :param :class:`<SubscriptionQuery> <azure.devops.v5_0.notification.models.SubscriptionQuery>` subscription_query: :rtype: [NotificationSubscription] """ content = self._serialize.body(subscription_query, 'SubscriptionQuery') response = self._send(http_method='POST', location_id='6864db85-08c0-4006-8e8e-cc1bebe31675', version='5.0-preview.1', content=content) return self._deserialize('[NotificationSubscription]', self._unwrap_collection(response))
QuerySubscriptions. [Preview API] Query for subscriptions. A subscription is returned if it matches one or more of the specified conditions. :param :class:`<SubscriptionQuery> <azure.devops.v5_0.notification.models.SubscriptionQuery>` subscription_query: :rtype: [NotificationSubscription]
def get_page_url_title(self): ''' Get the title and current url from the remote session. Return is a 2-tuple: (page_title, page_url). ''' cr_tab_id = self.transport._get_cr_tab_meta_for_key(self.tab_id)['id'] targets = self.Target_getTargets() assert 'result' in targets assert 'targetInfos' in targets['result'] for tgt in targets['result']['targetInfos']: if tgt['targetId'] == cr_tab_id: # { # 'title': 'Page Title 1', # 'targetId': '9d2c503c-e39e-42cc-b950-96db073918ee', # 'attached': True, # 'url': 'http://localhost:47181/with_title_1', # 'type': 'page' # } title = tgt['title'] cur_url = tgt['url'] return title, cur_url
Get the title and current url from the remote session. Return is a 2-tuple: (page_title, page_url).
def to_html(data):
    """
    Serializes a python object as HTML

    This method uses the to_json method to turn the given data object into
    formatted JSON that is displayed in an HTML page. If pygments is
    installed, syntax highlighting will also be applied to the JSON.
    """
    base_html_template = Template('''
        <html>
        <head>
        {% if style %}
        <style type="text/css">
        {{ style }}
        </style>
        {% endif %}
        </head>
        <body>
        {% if style %}
            {{ body|safe }}
        {% else %}
            <pre><code>{{ body }}</code></pre>
        {% endif %}
        </body>
        </html>
        ''')

    code = to_json(data, indent=4)

    if PYGMENTS_INSTALLED:
        c = Context({
            'body': highlight(code, JSONLexer(), HtmlFormatter()),
            'style': HtmlFormatter().get_style_defs('.highlight')
        })
        html = base_html_template.render(c)
    else:
        c = Context({'body': code})
        html = base_html_template.render(c)

    return html
Serializes a python object as HTML. This method uses the to_json method to turn the given data object into formatted JSON that is displayed in an HTML page. If pygments is installed, syntax highlighting will also be applied to the JSON.
def calculate(self, T, method):
    r'''Method to calculate low-pressure gas thermal conductivity at
    temperature `T` with a given method. This method has no exception
    handling; see `T_dependent_property` for that.

    Parameters
    ----------
    T : float
        Temperature of the gas, [K]
    method : str
        Name of the method to use

    Returns
    -------
    kg : float
        Thermal conductivity of the gas at T and a low pressure, [W/m/K]
    '''
    if method == GHARAGHEIZI_G:
        kg = Gharagheizi_gas(T, self.MW, self.Tb, self.Pc, self.omega)
    elif method == DIPPR_9B:
        Cvgm = self.Cvgm(T) if hasattr(self.Cvgm, '__call__') else self.Cvgm
        mug = self.mug(T) if hasattr(self.mug, '__call__') else self.mug
        kg = DIPPR9B(T, self.MW, Cvgm, mug, self.Tc)
    elif method == CHUNG:
        Cvgm = self.Cvgm(T) if hasattr(self.Cvgm, '__call__') else self.Cvgm
        mug = self.mug(T) if hasattr(self.mug, '__call__') else self.mug
        kg = Chung(T, self.MW, self.Tc, self.omega, Cvgm, mug)
    elif method == ELI_HANLEY:
        Cvgm = self.Cvgm(T) if hasattr(self.Cvgm, '__call__') else self.Cvgm
        kg = eli_hanley(T, self.MW, self.Tc, self.Vc, self.Zc, self.omega, Cvgm)
    elif method == EUCKEN_MOD:
        Cvgm = self.Cvgm(T) if hasattr(self.Cvgm, '__call__') else self.Cvgm
        mug = self.mug(T) if hasattr(self.mug, '__call__') else self.mug
        kg = Eucken_modified(self.MW, Cvgm, mug)
    elif method == EUCKEN:
        Cvgm = self.Cvgm(T) if hasattr(self.Cvgm, '__call__') else self.Cvgm
        mug = self.mug(T) if hasattr(self.mug, '__call__') else self.mug
        kg = Eucken(self.MW, Cvgm, mug)
    elif method == DIPPR_PERRY_8E:
        kg = EQ102(T, *self.Perrys2_314_coeffs)
    elif method == VDI_PPDS:
        kg = horner(self.VDI_PPDS_coeffs, T)
    elif method == BAHADORI_G:
        kg = Bahadori_gas(T, self.MW)
    elif method == COOLPROP:
        kg = CoolProp_T_dependent_property(T, self.CASRN, 'L', 'g')
    elif method in self.tabular_data:
        kg = self.interpolate(T, method)
    return kg
r'''Method to calculate low-pressure gas thermal conductivity at temperature `T` with a given method. This method has no exception handling; see `T_dependent_property` for that. Parameters ---------- T : float Temperature of the gas, [K] method : str Name of the method to use Returns ------- kg : float Thermal conductivity of the gas at T and a low pressure, [W/m/K]
def destination_encryption_configuration(self): """google.cloud.bigquery.table.EncryptionConfiguration: Custom encryption configuration for the destination table. Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None` if using default encryption. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.destinationEncryptionConfiguration """ prop = self._get_sub_prop("destinationEncryptionConfiguration") if prop is not None: prop = EncryptionConfiguration.from_api_repr(prop) return prop
google.cloud.bigquery.table.EncryptionConfiguration: Custom encryption configuration for the destination table. Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None` if using default encryption. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.destinationEncryptionConfiguration
def remove_user_permission(rid, uid, action='full'): """ Removes user permission on a given resource. :param uid: user id :type uid: str :param rid: resource ID :type rid: str :param action: read, write, update, delete or full :type action: str """ rid = rid.replace('/', '%252F') try: acl_url = urljoin(_acl_url(), 'acls/{}/users/{}/{}'.format(rid, uid, action)) r = http.delete(acl_url) assert r.status_code == 204 except DCOSHTTPException as e: if e.response.status_code != 400: raise
Removes user permission on a given resource. :param uid: user id :type uid: str :param rid: resource ID :type rid: str :param action: read, write, update, delete or full :type action: str
def info(name, root=None): ''' Return information for the specified user name User to get the information for root Directory to chroot into CLI Example: .. code-block:: bash salt '*' shadow.info root ''' if root is not None: getspnam = functools.partial(_getspnam, root=root) else: getspnam = functools.partial(spwd.getspnam) try: data = getspnam(name) ret = { 'name': data.sp_namp if hasattr(data, 'sp_namp') else data.sp_nam, 'passwd': data.sp_pwdp if hasattr(data, 'sp_pwdp') else data.sp_pwd, 'lstchg': data.sp_lstchg, 'min': data.sp_min, 'max': data.sp_max, 'warn': data.sp_warn, 'inact': data.sp_inact, 'expire': data.sp_expire} except KeyError: return { 'name': '', 'passwd': '', 'lstchg': '', 'min': '', 'max': '', 'warn': '', 'inact': '', 'expire': ''} return ret
Return information for the specified user name User to get the information for root Directory to chroot into CLI Example: .. code-block:: bash salt '*' shadow.info root
def average_fft(self, fftlength=None, overlap=0, window=None): """Compute the averaged one-dimensional DFT of this `TimeSeries`. This method computes a number of FFTs of duration ``fftlength`` and ``overlap`` (both given in seconds), and returns the mean average. This method is analogous to the Welch average method for power spectra. Parameters ---------- fftlength : `float` number of seconds in single FFT, default, use whole `TimeSeries` overlap : `float`, optional number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0 window : `str`, `numpy.ndarray`, optional window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats Returns ------- out : complex-valued `~gwpy.frequencyseries.FrequencySeries` the transformed output, with populated frequencies array metadata See Also -------- :mod:`scipy.fftpack` for the definition of the DFT and conventions used. """ from gwpy.spectrogram import Spectrogram # format lengths if fftlength is None: fftlength = self.duration if isinstance(fftlength, units.Quantity): fftlength = fftlength.value nfft = int((fftlength * self.sample_rate).decompose().value) noverlap = int((overlap * self.sample_rate).decompose().value) navg = divmod(self.size-noverlap, (nfft-noverlap))[0] # format window if window is None: window = 'boxcar' if isinstance(window, (str, tuple)): win = signal.get_window(window, nfft) else: win = numpy.asarray(window) if len(win.shape) != 1: raise ValueError('window must be 1-D') elif win.shape[0] != nfft: raise ValueError('Window is the wrong size.') win = win.astype(self.dtype) scaling = 1. / numpy.absolute(win).mean() if nfft % 2: nfreqs = (nfft + 1) // 2 else: nfreqs = nfft // 2 + 1 ffts = Spectrogram(numpy.zeros((navg, nfreqs), dtype=numpy.complex), channel=self.channel, epoch=self.epoch, f0=0, df=1 / fftlength, dt=1, copy=True) # stride through TimeSeries, recording FFTs as columns of Spectrogram idx = 0 for i in range(navg): # find step TimeSeries idx_end = idx + nfft if idx_end > self.size: continue stepseries = self[idx:idx_end].detrend() * win # calculated FFT, weight, and stack fft_ = stepseries.fft(nfft=nfft) * scaling ffts.value[i, :] = fft_.value idx += (nfft - noverlap) mean = ffts.mean(0) mean.name = self.name mean.epoch = self.epoch mean.channel = self.channel return mean
Compute the averaged one-dimensional DFT of this `TimeSeries`. This method computes a number of FFTs of duration ``fftlength`` and ``overlap`` (both given in seconds), and returns the mean average. This method is analogous to the Welch average method for power spectra. Parameters ---------- fftlength : `float` number of seconds in single FFT, default, use whole `TimeSeries` overlap : `float`, optional number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0 window : `str`, `numpy.ndarray`, optional window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats Returns ------- out : complex-valued `~gwpy.frequencyseries.FrequencySeries` the transformed output, with populated frequencies array metadata See Also -------- :mod:`scipy.fftpack` for the definition of the DFT and conventions used.
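A hedged usage sketch with a synthetic gwpy TimeSeries; the sample rate, durations, and window are arbitrary choices, and the construction assumes the usual gwpy TimeSeries(array, sample_rate=...) signature.

import numpy
from gwpy.timeseries import TimeSeries

data = TimeSeries(numpy.random.normal(size=4096), sample_rate=256)   # 16 s of noise
fft = data.average_fft(fftlength=2, overlap=1, window='hann')        # mean of 2 s FFTs
print(fft.frequencies[:5], abs(fft).max())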
def send(self, raw_data = None): """Prepare HID raw report (unless raw_data is provided) and send it to HID device """ if self.__report_kind != HidP_Output \ and self.__report_kind != HidP_Feature: raise HIDError("Only for output or feature reports") #valid length if raw_data and (len(raw_data) != self.__raw_report_size): raise HIDError("Report size has to be %d elements (bytes)" \ % self.__raw_report_size) #should be valid report id if raw_data and raw_data[0] != self.__report_id.value: #hint, raw_data should be a plain list of integer values raise HIDError("Not matching report id") # if self.__report_kind != HidP_Output and \ self.__report_kind != HidP_Feature: raise HIDError("Can only send output or feature reports") # if not raw_data: # we'll construct the raw report self.__prepare_raw_data() elif not ( isinstance(raw_data, ctypes.Array) and \ issubclass(raw_data._type_, c_ubyte) ): # pre-memory allocation for performance self.__alloc_raw_data(raw_data) #reference proper object raw_data = self.__raw_data if self.__report_kind == HidP_Output: return self.__hid_object.send_output_report(raw_data) elif self.__report_kind == HidP_Feature: return self.__hid_object.send_feature_report(raw_data) else: pass
Prepare HID raw report (unless raw_data is provided) and send it to HID device
Below is the the instruction that describes the task: ### Input: Prepare HID raw report (unless raw_data is provided) and send it to HID device ### Response: def send(self, raw_data = None): """Prepare HID raw report (unless raw_data is provided) and send it to HID device """ if self.__report_kind != HidP_Output \ and self.__report_kind != HidP_Feature: raise HIDError("Only for output or feature reports") #valid length if raw_data and (len(raw_data) != self.__raw_report_size): raise HIDError("Report size has to be %d elements (bytes)" \ % self.__raw_report_size) #should be valid report id if raw_data and raw_data[0] != self.__report_id.value: #hint, raw_data should be a plain list of integer values raise HIDError("Not matching report id") # if self.__report_kind != HidP_Output and \ self.__report_kind != HidP_Feature: raise HIDError("Can only send output or feature reports") # if not raw_data: # we'll construct the raw report self.__prepare_raw_data() elif not ( isinstance(raw_data, ctypes.Array) and \ issubclass(raw_data._type_, c_ubyte) ): # pre-memory allocation for performance self.__alloc_raw_data(raw_data) #reference proper object raw_data = self.__raw_data if self.__report_kind == HidP_Output: return self.__hid_object.send_output_report(raw_data) elif self.__report_kind == HidP_Feature: return self.__hid_object.send_feature_report(raw_data) else: pass
def victim_assets(self, asset_type=None, asset_id=None): """Victim Asset endpoint for this resource with optional asset type. This method will set the resource endpoint for working with Victim Assets. The HTTP GET method will return all Victim Assets associated with this resource or if a asset type is provided it will return the provided asset type if it has been associated. The provided asset type can be associated to this resource using the HTTP POST method. The HTTP DELETE method will remove the provided tag from this resource. **Example Endpoints URI's** +---------+--------------------------------------------------------------------------------+ | Method | API Endpoint URI's | +=========+================================================================================+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | DELETE | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | POST | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ Args: asset_type (Optional [string]): The asset type. asset_id (Optional [string]): The asset id. """ type_entity_map = { 'emailAddresses': 'victimEmailAddress', 'networkAccounts': 'victimNetworkAccount', 'phoneNumbers': 'victimPhone', 'socialNetworks': 'victimSocialNetwork', 'webSites': 'victimWebSite', } resource = self.copy() resource._request_entity = 'victimAsset' resource._request_uri = '{}/victimAssets'.format(resource._request_uri) if asset_type is not None: resource._request_entity = type_entity_map.get(asset_type, 'victimAsset') resource._request_uri = '{}/{}'.format(resource._request_uri, asset_type) if asset_id is not None: resource._request_uri = '{}/{}'.format(resource._request_uri, asset_id) return resource
Victim Asset endpoint for this resource with optional asset type. This method will set the resource endpoint for working with Victim Assets. The HTTP GET method will return all Victim Assets associated with this resource or if a asset type is provided it will return the provided asset type if it has been associated. The provided asset type can be associated to this resource using the HTTP POST method. The HTTP DELETE method will remove the provided tag from this resource. **Example Endpoints URI's** +---------+--------------------------------------------------------------------------------+ | Method | API Endpoint URI's | +=========+================================================================================+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | DELETE | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | POST | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ Args: asset_type (Optional [string]): The asset type. asset_id (Optional [string]): The asset id.
Below is the the instruction that describes the task: ### Input: Victim Asset endpoint for this resource with optional asset type. This method will set the resource endpoint for working with Victim Assets. The HTTP GET method will return all Victim Assets associated with this resource or if a asset type is provided it will return the provided asset type if it has been associated. The provided asset type can be associated to this resource using the HTTP POST method. The HTTP DELETE method will remove the provided tag from this resource. **Example Endpoints URI's** +---------+--------------------------------------------------------------------------------+ | Method | API Endpoint URI's | +=========+================================================================================+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | DELETE | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | POST | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ Args: asset_type (Optional [string]): The asset type. asset_id (Optional [string]): The asset id. ### Response: def victim_assets(self, asset_type=None, asset_id=None): """Victim Asset endpoint for this resource with optional asset type. This method will set the resource endpoint for working with Victim Assets. The HTTP GET method will return all Victim Assets associated with this resource or if a asset type is provided it will return the provided asset type if it has been associated. The provided asset type can be associated to this resource using the HTTP POST method. The HTTP DELETE method will remove the provided tag from this resource. 
**Example Endpoints URI's** +---------+--------------------------------------------------------------------------------+ | Method | API Endpoint URI's | +=========+================================================================================+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/indicators/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType} | +---------+--------------------------------------------------------------------------------+ | GET | /v2/victim/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | DELETE | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ | POST | /v2/groups/{resourceType}/{uniqueId}/victimAssets/{assetType}/{resourceId} | +---------+--------------------------------------------------------------------------------+ Args: asset_type (Optional [string]): The asset type. asset_id (Optional [string]): The asset id. """ type_entity_map = { 'emailAddresses': 'victimEmailAddress', 'networkAccounts': 'victimNetworkAccount', 'phoneNumbers': 'victimPhone', 'socialNetworks': 'victimSocialNetwork', 'webSites': 'victimWebSite', } resource = self.copy() resource._request_entity = 'victimAsset' resource._request_uri = '{}/victimAssets'.format(resource._request_uri) if asset_type is not None: resource._request_entity = type_entity_map.get(asset_type, 'victimAsset') resource._request_uri = '{}/{}'.format(resource._request_uri, asset_type) if asset_id is not None: resource._request_uri = '{}/{}'.format(resource._request_uri, asset_id) return resource
def spherical_angle( ra0, dec0, ra1, dec1, ra2, dec2 ): """ Returns the spherical angle distance between two sets of great circles defined by (ra0, dec0), (ra1, dec1) and (ra0, dec0), (ra2, dec2) :param ra0: array or float, longitude of intersection point(s) :param dec0: array or float, latitude of intersection point(s) :param ra1: array or float, longitude of first point(s) :param dec1: array or float, latitude of first point(s) :param ra2: array or float, longitude of second point(s) :param dec2: array or float, latitude of second point(s) :return: spherical angle in degrees """ a = np.deg2rad( angular_distance(ra0, dec0, ra1, dec1)) b = np.deg2rad( angular_distance(ra0, dec0, ra2, dec2)) c = np.deg2rad( angular_distance(ra2, dec2, ra1, dec1)) #use the spherical law of cosines: https://en.wikipedia.org/wiki/Spherical_law_of_cosines#Rearrangements numerator = np.atleast_1d( np.cos(c) - np.cos(a) * np.cos(b) ) denominator = np.atleast_1d( np.sin(a)*np.sin(b) ) return np.where( denominator == 0 , np.zeros( len(denominator)), np.rad2deg( np.arccos( numerator/denominator)) )
Returns the spherical angle distance between two sets of great circles defined by (ra0, dec0), (ra1, dec1) and (ra0, dec0), (ra2, dec2) :param ra0: array or float, longitude of intersection point(s) :param dec0: array or float, latitude of intersection point(s) :param ra1: array or float, longitude of first point(s) :param dec1: array or float, latitude of first point(s) :param ra2: array or float, longitude of second point(s) :param dec2: array or float, latitude of second point(s) :return: spherical angle in degrees
Below is the the instruction that describes the task: ### Input: Returns the spherical angle distance between two sets of great circles defined by (ra0, dec0), (ra1, dec1) and (ra0, dec0), (ra2, dec2) :param ra0: array or float, longitude of intersection point(s) :param dec0: array or float, latitude of intersection point(s) :param ra1: array or float, longitude of first point(s) :param dec1: array or float, latitude of first point(s) :param ra2: array or float, longitude of second point(s) :param dec2: array or float, latitude of second point(s) :return: spherical angle in degrees ### Response: def spherical_angle( ra0, dec0, ra1, dec1, ra2, dec2 ): """ Returns the spherical angle distance between two sets of great circles defined by (ra0, dec0), (ra1, dec1) and (ra0, dec0), (ra2, dec2) :param ra0: array or float, longitude of intersection point(s) :param dec0: array or float, latitude of intersection point(s) :param ra1: array or float, longitude of first point(s) :param dec1: array or float, latitude of first point(s) :param ra2: array or float, longitude of second point(s) :param dec2: array or float, latitude of second point(s) :return: spherical angle in degrees """ a = np.deg2rad( angular_distance(ra0, dec0, ra1, dec1)) b = np.deg2rad( angular_distance(ra0, dec0, ra2, dec2)) c = np.deg2rad( angular_distance(ra2, dec2, ra1, dec1)) #use the spherical law of cosines: https://en.wikipedia.org/wiki/Spherical_law_of_cosines#Rearrangements numerator = np.atleast_1d( np.cos(c) - np.cos(a) * np.cos(b) ) denominator = np.atleast_1d( np.sin(a)*np.sin(b) ) return np.where( denominator == 0 , np.zeros( len(denominator)), np.rad2deg( np.arccos( numerator/denominator)) )
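A small sanity check for the function above (hedged: it assumes the `angular_distance` helper it calls, which is not shown in this excerpt, returns great-circle separations in degrees). The great circle through (0, 0) and (10, 0) runs along the equator, the one through (0, 0) and (0, 10) runs along a meridian, so the angle between them at the intersection point should be 90 degrees.

import numpy as np

# angle at (ra0, dec0) = (0, 0) between the directions towards (10, 0) and (0, 10)
angle = spherical_angle(0.0, 0.0, 10.0, 0.0, 0.0, 10.0)
print(angle)  # expected: array([90.])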
def retrieve_crls(self, cert): """ :param cert: An asn1crypto.x509.Certificate object :param path: A certvalidator.path.ValidationPath object for the cert :return: A list of asn1crypto.crl.CertificateList objects """ if not self._allow_fetching: return self._crls if cert.issuer_serial not in self._fetched_crls: try: crls = crl_client.fetch( cert, **self._crl_fetch_params ) self._fetched_crls[cert.issuer_serial] = crls for crl_ in crls: try: certs = crl_client.fetch_certs( crl_, user_agent=self._crl_fetch_params.get('user_agent'), timeout=self._crl_fetch_params.get('timeout') ) for cert_ in certs: if self.certificate_registry.add_other_cert(cert_): self._revocation_certs[cert_.issuer_serial] = cert_ except (URLError, socket.error): pass except (URLError, socket.error) as e: self._fetched_crls[cert.issuer_serial] = [] if self._revocation_mode == "soft-fail": self._soft_fail_exceptions.append(e) raise SoftFailError() else: raise return self._fetched_crls[cert.issuer_serial]
:param cert: An asn1crypto.x509.Certificate object :param path: A certvalidator.path.ValidationPath object for the cert :return: A list of asn1crypto.crl.CertificateList objects
Below is the the instruction that describes the task: ### Input: :param cert: An asn1crypto.x509.Certificate object :param path: A certvalidator.path.ValidationPath object for the cert :return: A list of asn1crypto.crl.CertificateList objects ### Response: def retrieve_crls(self, cert): """ :param cert: An asn1crypto.x509.Certificate object :param path: A certvalidator.path.ValidationPath object for the cert :return: A list of asn1crypto.crl.CertificateList objects """ if not self._allow_fetching: return self._crls if cert.issuer_serial not in self._fetched_crls: try: crls = crl_client.fetch( cert, **self._crl_fetch_params ) self._fetched_crls[cert.issuer_serial] = crls for crl_ in crls: try: certs = crl_client.fetch_certs( crl_, user_agent=self._crl_fetch_params.get('user_agent'), timeout=self._crl_fetch_params.get('timeout') ) for cert_ in certs: if self.certificate_registry.add_other_cert(cert_): self._revocation_certs[cert_.issuer_serial] = cert_ except (URLError, socket.error): pass except (URLError, socket.error) as e: self._fetched_crls[cert.issuer_serial] = [] if self._revocation_mode == "soft-fail": self._soft_fail_exceptions.append(e) raise SoftFailError() else: raise return self._fetched_crls[cert.issuer_serial]
def inference(self, state_arr, limit=1000): ''' Infernce. Args: state_arr: `np.ndarray` of state. limit: The number of inferencing. Returns: `list of `np.ndarray` of an optimal route. ''' agent_x, agent_y = np.where(state_arr[0] == 1) agent_x, agent_y = agent_x[0], agent_y[0] result_list = [(agent_x, agent_y, 0.0)] self.t = 1 while self.t <= limit: next_action_arr = self.extract_possible_actions(state_arr) next_q_arr = self.function_approximator.inference_q(next_action_arr) action_arr, q = self.select_action(next_action_arr, next_q_arr) agent_x, agent_y = np.where(action_arr[0] == 1) agent_x, agent_y = agent_x[0], agent_y[0] result_list.append((agent_x, agent_y, q[0])) # Update State. state_arr = self.update_state(state_arr, action_arr) # Epsode. self.t += 1 # Check. end_flag = self.check_the_end_flag(state_arr) if end_flag is True: break return result_list
Inference.

Args:
    state_arr:    `np.ndarray` of state.
    limit:        The maximum number of inference steps.

Returns:
    `list` of `np.ndarray` of an optimal route.
Below is the the instruction that describes the task: ### Input: Infernce. Args: state_arr: `np.ndarray` of state. limit: The number of inferencing. Returns: `list of `np.ndarray` of an optimal route. ### Response: def inference(self, state_arr, limit=1000): ''' Infernce. Args: state_arr: `np.ndarray` of state. limit: The number of inferencing. Returns: `list of `np.ndarray` of an optimal route. ''' agent_x, agent_y = np.where(state_arr[0] == 1) agent_x, agent_y = agent_x[0], agent_y[0] result_list = [(agent_x, agent_y, 0.0)] self.t = 1 while self.t <= limit: next_action_arr = self.extract_possible_actions(state_arr) next_q_arr = self.function_approximator.inference_q(next_action_arr) action_arr, q = self.select_action(next_action_arr, next_q_arr) agent_x, agent_y = np.where(action_arr[0] == 1) agent_x, agent_y = agent_x[0], agent_y[0] result_list.append((agent_x, agent_y, q[0])) # Update State. state_arr = self.update_state(state_arr, action_arr) # Epsode. self.t += 1 # Check. end_flag = self.check_the_end_flag(state_arr) if end_flag is True: break return result_list
def get_version_info(): """ Return astropy and photutils versions. Returns ------- result : str The astropy and photutils versions. """ from astropy import __version__ astropy_version = __version__ from photutils import __version__ photutils_version = __version__ return 'astropy: {0}, photutils: {1}'.format(astropy_version, photutils_version)
Return astropy and photutils versions. Returns ------- result : str The astropy and photutils versions.
Below is the the instruction that describes the task: ### Input: Return astropy and photutils versions. Returns ------- result : str The astropy and photutils versions. ### Response: def get_version_info(): """ Return astropy and photutils versions. Returns ------- result : str The astropy and photutils versions. """ from astropy import __version__ astropy_version = __version__ from photutils import __version__ photutils_version = __version__ return 'astropy: {0}, photutils: {1}'.format(astropy_version, photutils_version)
def _init_vocab_from_file(self, filename): """Load vocab from a file. Args: filename: The file to load vocabulary from. """ with tf.gfile.Open(filename) as f: tokens = [token.strip() for token in f.readlines()] def token_gen(): for token in tokens: yield token self._init_vocab(token_gen(), add_reserved_tokens=False)
Load vocab from a file. Args: filename: The file to load vocabulary from.
Below is the the instruction that describes the task: ### Input: Load vocab from a file. Args: filename: The file to load vocabulary from. ### Response: def _init_vocab_from_file(self, filename): """Load vocab from a file. Args: filename: The file to load vocabulary from. """ with tf.gfile.Open(filename) as f: tokens = [token.strip() for token in f.readlines()] def token_gen(): for token in tokens: yield token self._init_vocab(token_gen(), add_reserved_tokens=False)
def schema_event_items(): """Schema for event items.""" return { 'timestamp': And(int, lambda n: n > 0), Optional('information', default={}): { Optional(Regex(r'([a-z][_a-z]*)')): object } }
Schema for event items.
Below is the the instruction that describes the task: ### Input: Schema for event items. ### Response: def schema_event_items(): """Schema for event items.""" return { 'timestamp': And(int, lambda n: n > 0), Optional('information', default={}): { Optional(Regex(r'([a-z][_a-z]*)')): object } }
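A brief usage sketch (hedged: it assumes the `schema` package that provides the `Schema`, `And`, `Optional`, and `Regex` names used above; the timestamp value is arbitrary):

from schema import Schema  # validation library assumed by the snippet above

validator = Schema(schema_event_items())

# minimal valid item: 'information' is filled in with its default
print(validator.validate({'timestamp': 1600000000}))
# -> {'timestamp': 1600000000, 'information': {}}

# keys inside 'information' must match the lowercase/underscore pattern
print(validator.validate({'timestamp': 1600000000,
                          'information': {'source_id': 42}}))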
def get_host_vsan_system(service_instance, host_ref, hostname=None): ''' Returns a host's vsan system service_instance Service instance to the host or vCenter host_ref Refernce to ESXi host hostname Name of ESXi host. Default value is None. ''' if not hostname: hostname = salt.utils.vmware.get_managed_object_name(host_ref) traversal_spec = vmodl.query.PropertyCollector.TraversalSpec( path='configManager.vsanSystem', type=vim.HostSystem, skip=False) objs = salt.utils.vmware.get_mors_with_properties( service_instance, vim.HostVsanSystem, property_list=['config.enabled'], container_ref=host_ref, traversal_spec=traversal_spec) if not objs: raise VMwareObjectRetrievalError('Host\'s \'{0}\' VSAN system was ' 'not retrieved'.format(hostname)) log.trace('[%s] Retrieved VSAN system', hostname) return objs[0]['object']
Returns a host's vsan system

service_instance
    Service instance to the host or vCenter

host_ref
    Reference to ESXi host

hostname
    Name of ESXi host. Default value is None.
Below is the the instruction that describes the task: ### Input: Returns a host's vsan system service_instance Service instance to the host or vCenter host_ref Refernce to ESXi host hostname Name of ESXi host. Default value is None. ### Response: def get_host_vsan_system(service_instance, host_ref, hostname=None): ''' Returns a host's vsan system service_instance Service instance to the host or vCenter host_ref Refernce to ESXi host hostname Name of ESXi host. Default value is None. ''' if not hostname: hostname = salt.utils.vmware.get_managed_object_name(host_ref) traversal_spec = vmodl.query.PropertyCollector.TraversalSpec( path='configManager.vsanSystem', type=vim.HostSystem, skip=False) objs = salt.utils.vmware.get_mors_with_properties( service_instance, vim.HostVsanSystem, property_list=['config.enabled'], container_ref=host_ref, traversal_spec=traversal_spec) if not objs: raise VMwareObjectRetrievalError('Host\'s \'{0}\' VSAN system was ' 'not retrieved'.format(hostname)) log.trace('[%s] Retrieved VSAN system', hostname) return objs[0]['object']
def add_training_data(self, environment_id, collection_id, natural_language_query=None, filter=None, examples=None, **kwargs): """ Add query to training data. Adds a query to the training data for this collection. The query can contain a filter and natural language query. :param str environment_id: The ID of the environment. :param str collection_id: The ID of the collection. :param str natural_language_query: The natural text query for the new training query. :param str filter: The filter used on the collection before the **natural_language_query** is applied. :param list[TrainingExample] examples: Array of training examples. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse """ if environment_id is None: raise ValueError('environment_id must be provided') if collection_id is None: raise ValueError('collection_id must be provided') if examples is not None: examples = [ self._convert_model(x, TrainingExample) for x in examples ] headers = {} if 'headers' in kwargs: headers.update(kwargs.get('headers')) sdk_headers = get_sdk_headers('discovery', 'V1', 'add_training_data') headers.update(sdk_headers) params = {'version': self.version} data = { 'natural_language_query': natural_language_query, 'filter': filter, 'examples': examples } url = '/v1/environments/{0}/collections/{1}/training_data'.format( *self._encode_path_vars(environment_id, collection_id)) response = self.request( method='POST', url=url, headers=headers, params=params, json=data, accept_json=True) return response
Add query to training data. Adds a query to the training data for this collection. The query can contain a filter and natural language query. :param str environment_id: The ID of the environment. :param str collection_id: The ID of the collection. :param str natural_language_query: The natural text query for the new training query. :param str filter: The filter used on the collection before the **natural_language_query** is applied. :param list[TrainingExample] examples: Array of training examples. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse
Below is the the instruction that describes the task: ### Input: Add query to training data. Adds a query to the training data for this collection. The query can contain a filter and natural language query. :param str environment_id: The ID of the environment. :param str collection_id: The ID of the collection. :param str natural_language_query: The natural text query for the new training query. :param str filter: The filter used on the collection before the **natural_language_query** is applied. :param list[TrainingExample] examples: Array of training examples. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse ### Response: def add_training_data(self, environment_id, collection_id, natural_language_query=None, filter=None, examples=None, **kwargs): """ Add query to training data. Adds a query to the training data for this collection. The query can contain a filter and natural language query. :param str environment_id: The ID of the environment. :param str collection_id: The ID of the collection. :param str natural_language_query: The natural text query for the new training query. :param str filter: The filter used on the collection before the **natural_language_query** is applied. :param list[TrainingExample] examples: Array of training examples. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse """ if environment_id is None: raise ValueError('environment_id must be provided') if collection_id is None: raise ValueError('collection_id must be provided') if examples is not None: examples = [ self._convert_model(x, TrainingExample) for x in examples ] headers = {} if 'headers' in kwargs: headers.update(kwargs.get('headers')) sdk_headers = get_sdk_headers('discovery', 'V1', 'add_training_data') headers.update(sdk_headers) params = {'version': self.version} data = { 'natural_language_query': natural_language_query, 'filter': filter, 'examples': examples } url = '/v1/environments/{0}/collections/{1}/training_data'.format( *self._encode_path_vars(environment_id, collection_id)) response = self.request( method='POST', url=url, headers=headers, params=params, json=data, accept_json=True) return response
def run_feature_selection(self, df_data, target, idx=0, **kwargs): """Run feature selection for one node: wrapper around ``self.predict_features``. Args: df_data (pandas.DataFrame): All the observational data target (str): Name of the target variable idx (int): (optional) For printing purposes Returns: list: scores of each feature relatively to the target """ list_features = list(df_data.columns.values) list_features.remove(target) df_target = pd.DataFrame(df_data[target], columns=[target]) df_features = df_data[list_features] return self.predict_features(df_features, df_target, idx=idx, **kwargs)
Run feature selection for one node: wrapper around ``self.predict_features``.

Args:
    df_data (pandas.DataFrame): All the observational data
    target (str): Name of the target variable
    idx (int): (optional) For printing purposes

Returns:
    list: scores of each feature relative to the target
Below is the the instruction that describes the task: ### Input: Run feature selection for one node: wrapper around ``self.predict_features``. Args: df_data (pandas.DataFrame): All the observational data target (str): Name of the target variable idx (int): (optional) For printing purposes Returns: list: scores of each feature relatively to the target ### Response: def run_feature_selection(self, df_data, target, idx=0, **kwargs): """Run feature selection for one node: wrapper around ``self.predict_features``. Args: df_data (pandas.DataFrame): All the observational data target (str): Name of the target variable idx (int): (optional) For printing purposes Returns: list: scores of each feature relatively to the target """ list_features = list(df_data.columns.values) list_features.remove(target) df_target = pd.DataFrame(df_data[target], columns=[target]) df_features = df_data[list_features] return self.predict_features(df_features, df_target, idx=idx, **kwargs)
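An illustrative call (hedged: `model` stands for an instance of whatever feature-selection class defines this method and its `predict_features`; the toy data and column names are made up):

import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({'x1': rng.normal(size=100),
                   'x2': rng.normal(size=100)})
df['y'] = 2 * df['x1'] + rng.normal(scale=0.1, size=100)

# scores of 'x1' and 'x2' with respect to the target column 'y'
scores = model.run_feature_selection(df, 'y', idx=0)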
def putResourceValue(self,ep,res,data,cbfn=""): """ Put a value to a resource on an endpoint :param str ep: name of endpoint :param str res: name of resource :param str data: data to send via PUT :param fnptr cbfn: Optional - callback funtion to call when operation is completed :return: successful ``.status_code`` / ``.is_done``. Check the ``.error`` :rtype: asyncResult """ result = asyncResult(callback=cbfn) result.endpoint = ep result.resource = res data = self._putURL("/endpoints/"+ep+res,payload=data) if data.status_code == 200: #immediate success result.error = False result.is_done = True elif data.status_code == 202: self.database['async-responses'][json.loads(data.content)["async-response-id"]]= result else: result.error = response_codes("resource",data.status_code) result.is_done = True result.raw_data = data.content result.status_code = data.status_code return result
Put a value to a resource on an endpoint

:param str ep: name of endpoint
:param str res: name of resource
:param str data: data to send via PUT
:param fnptr cbfn: Optional - callback function to call when operation is completed
:return: successful ``.status_code`` / ``.is_done``. Check the ``.error``
:rtype: asyncResult
Below is the the instruction that describes the task: ### Input: Put a value to a resource on an endpoint :param str ep: name of endpoint :param str res: name of resource :param str data: data to send via PUT :param fnptr cbfn: Optional - callback funtion to call when operation is completed :return: successful ``.status_code`` / ``.is_done``. Check the ``.error`` :rtype: asyncResult ### Response: def putResourceValue(self,ep,res,data,cbfn=""): """ Put a value to a resource on an endpoint :param str ep: name of endpoint :param str res: name of resource :param str data: data to send via PUT :param fnptr cbfn: Optional - callback funtion to call when operation is completed :return: successful ``.status_code`` / ``.is_done``. Check the ``.error`` :rtype: asyncResult """ result = asyncResult(callback=cbfn) result.endpoint = ep result.resource = res data = self._putURL("/endpoints/"+ep+res,payload=data) if data.status_code == 200: #immediate success result.error = False result.is_done = True elif data.status_code == 202: self.database['async-responses'][json.loads(data.content)["async-response-id"]]= result else: result.error = response_codes("resource",data.status_code) result.is_done = True result.raw_data = data.content result.status_code = data.status_code return result
def all_synsets(pos=None): """Return all the synsets which have the provided pos. Notes ----- Returns thousands or tens of thousands of synsets - first time will take significant time. Useful for initializing synsets as each returned synset is also stored in a global dictionary for fast retrieval the next time. Parameters ---------- pos : str Part-of-speech of the sought synsets. Sensible alternatives are wn.ADJ, wn.ADV, wn.VERB, wn.NOUN and `*`. If pos == `*`, all the synsets are retrieved and initialized for fast retrieval the next time. Returns ------- list of Synsets Lists the Synsets which have `pos` as part-of-speech. Empty list, if `pos` not in [wn.ADJ, wn.ADV, wn.VERB, wn.NOUN, `*`]. """ def _get_unique_synset_idxes(pos): idxes = [] with codecs.open(_LIT_POS_FILE,'rb', 'utf-8') as fin: if pos == None: for line in fin: split_line = line.strip().split(':') idxes.extend([int(x) for x in split_line[2].split()]) else: for line in fin: split_line = line.strip().split(':') if split_line[1] == pos: idxes.extend([int(x) for x in split_line[2].split()]) idxes = list(set(idxes)) idxes.sort() return idxes if pos in LOADED_POS: return [SYNSETS_DICT[idx] for lemma in LEM_POS_2_SS_IDX for idx in LEM_POS_2_SS_IDX[lemma][pos]] else: synset_idxes = _get_unique_synset_idxes(pos) if len(synset_idxes) == 0: return [] stored_synsets = [SYNSETS_DICT[synset_idxes[i]] for i in range(len(synset_idxes)) if synset_idxes[i] in SYNSETS_DICT] unstored_synset_idxes = [synset_idxes[i] for i in range(len(synset_idxes)) if synset_idxes[i] not in SYNSETS_DICT] synset_offsets = _get_synset_offsets(unstored_synset_idxes) synsets = _get_synsets(synset_offsets) for synset in synsets: for variant in synset.get_variants(): LEM_POS_2_SS_IDX[variant.literal][synset.pos].append(synset.id) LOADED_POS.add(pos) return stored_synsets + synsets
Return all the synsets which have the provided pos. Notes ----- Returns thousands or tens of thousands of synsets - first time will take significant time. Useful for initializing synsets as each returned synset is also stored in a global dictionary for fast retrieval the next time. Parameters ---------- pos : str Part-of-speech of the sought synsets. Sensible alternatives are wn.ADJ, wn.ADV, wn.VERB, wn.NOUN and `*`. If pos == `*`, all the synsets are retrieved and initialized for fast retrieval the next time. Returns ------- list of Synsets Lists the Synsets which have `pos` as part-of-speech. Empty list, if `pos` not in [wn.ADJ, wn.ADV, wn.VERB, wn.NOUN, `*`].
Below is the the instruction that describes the task: ### Input: Return all the synsets which have the provided pos. Notes ----- Returns thousands or tens of thousands of synsets - first time will take significant time. Useful for initializing synsets as each returned synset is also stored in a global dictionary for fast retrieval the next time. Parameters ---------- pos : str Part-of-speech of the sought synsets. Sensible alternatives are wn.ADJ, wn.ADV, wn.VERB, wn.NOUN and `*`. If pos == `*`, all the synsets are retrieved and initialized for fast retrieval the next time. Returns ------- list of Synsets Lists the Synsets which have `pos` as part-of-speech. Empty list, if `pos` not in [wn.ADJ, wn.ADV, wn.VERB, wn.NOUN, `*`]. ### Response: def all_synsets(pos=None): """Return all the synsets which have the provided pos. Notes ----- Returns thousands or tens of thousands of synsets - first time will take significant time. Useful for initializing synsets as each returned synset is also stored in a global dictionary for fast retrieval the next time. Parameters ---------- pos : str Part-of-speech of the sought synsets. Sensible alternatives are wn.ADJ, wn.ADV, wn.VERB, wn.NOUN and `*`. If pos == `*`, all the synsets are retrieved and initialized for fast retrieval the next time. Returns ------- list of Synsets Lists the Synsets which have `pos` as part-of-speech. Empty list, if `pos` not in [wn.ADJ, wn.ADV, wn.VERB, wn.NOUN, `*`]. """ def _get_unique_synset_idxes(pos): idxes = [] with codecs.open(_LIT_POS_FILE,'rb', 'utf-8') as fin: if pos == None: for line in fin: split_line = line.strip().split(':') idxes.extend([int(x) for x in split_line[2].split()]) else: for line in fin: split_line = line.strip().split(':') if split_line[1] == pos: idxes.extend([int(x) for x in split_line[2].split()]) idxes = list(set(idxes)) idxes.sort() return idxes if pos in LOADED_POS: return [SYNSETS_DICT[idx] for lemma in LEM_POS_2_SS_IDX for idx in LEM_POS_2_SS_IDX[lemma][pos]] else: synset_idxes = _get_unique_synset_idxes(pos) if len(synset_idxes) == 0: return [] stored_synsets = [SYNSETS_DICT[synset_idxes[i]] for i in range(len(synset_idxes)) if synset_idxes[i] in SYNSETS_DICT] unstored_synset_idxes = [synset_idxes[i] for i in range(len(synset_idxes)) if synset_idxes[i] not in SYNSETS_DICT] synset_offsets = _get_synset_offsets(unstored_synset_idxes) synsets = _get_synsets(synset_offsets) for synset in synsets: for variant in synset.get_variants(): LEM_POS_2_SS_IDX[variant.literal][synset.pos].append(synset.id) LOADED_POS.add(pos) return stored_synsets + synsets
def make_defaults_and_annotations(make_function_instr, builders): """ Get the AST expressions corresponding to the defaults, kwonly defaults, and annotations for a function created by `make_function_instr`. """ # Integer counts. n_defaults, n_kwonlydefaults, n_annotations = unpack_make_function_arg( make_function_instr.arg ) if n_annotations: # TOS should be a tuple of annotation names. load_annotation_names = builders.pop() annotations = dict(zip( reversed(load_annotation_names.arg), (make_expr(builders) for _ in range(n_annotations - 1)) )) else: annotations = {} kwonlys = {} while n_kwonlydefaults: default_expr = make_expr(builders) key_instr = builders.pop() if not isinstance(key_instr, instrs.LOAD_CONST): raise DecompilationError( "kwonlydefault key is not a LOAD_CONST: %s" % key_instr ) if not isinstance(key_instr.arg, str): raise DecompilationError( "kwonlydefault key builder is not a " "'LOAD_CONST of a string: %s" % key_instr ) kwonlys[key_instr.arg] = default_expr n_kwonlydefaults -= 1 defaults = make_exprs(builders, n_defaults) return defaults, kwonlys, annotations
Get the AST expressions corresponding to the defaults, kwonly defaults, and annotations for a function created by `make_function_instr`.
Below is the the instruction that describes the task: ### Input: Get the AST expressions corresponding to the defaults, kwonly defaults, and annotations for a function created by `make_function_instr`. ### Response: def make_defaults_and_annotations(make_function_instr, builders): """ Get the AST expressions corresponding to the defaults, kwonly defaults, and annotations for a function created by `make_function_instr`. """ # Integer counts. n_defaults, n_kwonlydefaults, n_annotations = unpack_make_function_arg( make_function_instr.arg ) if n_annotations: # TOS should be a tuple of annotation names. load_annotation_names = builders.pop() annotations = dict(zip( reversed(load_annotation_names.arg), (make_expr(builders) for _ in range(n_annotations - 1)) )) else: annotations = {} kwonlys = {} while n_kwonlydefaults: default_expr = make_expr(builders) key_instr = builders.pop() if not isinstance(key_instr, instrs.LOAD_CONST): raise DecompilationError( "kwonlydefault key is not a LOAD_CONST: %s" % key_instr ) if not isinstance(key_instr.arg, str): raise DecompilationError( "kwonlydefault key builder is not a " "'LOAD_CONST of a string: %s" % key_instr ) kwonlys[key_instr.arg] = default_expr n_kwonlydefaults -= 1 defaults = make_exprs(builders, n_defaults) return defaults, kwonlys, annotations
def image_summary(tag, image): """Outputs a `Summary` protocol buffer with image(s). Parameters ---------- tag : str A name for the generated summary. Will also serve as a series name in TensorBoard. image : MXNet `NDArray` or `numpy.ndarray` Image data that is one of the following layout: (H, W), (C, H, W), (N, C, H, W). The pixel values of the image are assumed to be normalized in the range [0, 1]. The image will be rescaled to the range [0, 255] and cast to `np.uint8` before creating the image protobuf. Returns ------- A `Summary` protobuf of the image. """ tag = _clean_tag(tag) image = _prepare_image(image) image = _make_image(image) return Summary(value=[Summary.Value(tag=tag, image=image)])
Outputs a `Summary` protocol buffer with image(s). Parameters ---------- tag : str A name for the generated summary. Will also serve as a series name in TensorBoard. image : MXNet `NDArray` or `numpy.ndarray` Image data that is one of the following layout: (H, W), (C, H, W), (N, C, H, W). The pixel values of the image are assumed to be normalized in the range [0, 1]. The image will be rescaled to the range [0, 255] and cast to `np.uint8` before creating the image protobuf. Returns ------- A `Summary` protobuf of the image.
Below is the the instruction that describes the task: ### Input: Outputs a `Summary` protocol buffer with image(s). Parameters ---------- tag : str A name for the generated summary. Will also serve as a series name in TensorBoard. image : MXNet `NDArray` or `numpy.ndarray` Image data that is one of the following layout: (H, W), (C, H, W), (N, C, H, W). The pixel values of the image are assumed to be normalized in the range [0, 1]. The image will be rescaled to the range [0, 255] and cast to `np.uint8` before creating the image protobuf. Returns ------- A `Summary` protobuf of the image. ### Response: def image_summary(tag, image): """Outputs a `Summary` protocol buffer with image(s). Parameters ---------- tag : str A name for the generated summary. Will also serve as a series name in TensorBoard. image : MXNet `NDArray` or `numpy.ndarray` Image data that is one of the following layout: (H, W), (C, H, W), (N, C, H, W). The pixel values of the image are assumed to be normalized in the range [0, 1]. The image will be rescaled to the range [0, 255] and cast to `np.uint8` before creating the image protobuf. Returns ------- A `Summary` protobuf of the image. """ tag = _clean_tag(tag) image = _prepare_image(image) image = _make_image(image) return Summary(value=[Summary.Value(tag=tag, image=image)])
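A short usage sketch (hedged: it assumes the mxboard-style module shown above, whose `_prepare_image`/`_make_image` helpers accept the layouts listed in the docstring; the random image is illustrative):

import numpy as np

# a single 3-channel 64x64 image with values already normalized to [0, 1]
img = np.random.rand(3, 64, 64)

summary = image_summary('example/input_image', img)
# `summary` is a Summary protobuf that a SummaryWriter-style object can serialize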
def load_model_from_file(self, filename): """Load one parameter set from a file which contains one value per line No row is skipped. Parameters ---------- filename : string, file path Filename to loaded data from Returns ------- pid : int ID of parameter set """ assert os.path.isfile(filename) data = np.loadtxt(filename).squeeze() assert len(data.shape) == 1 pid = self.add_data(data) return pid
Load one parameter set from a file which contains one value per line.
No row is skipped.

Parameters
----------
filename : string, file path
    Filename to load data from

Returns
-------
pid : int
    ID of parameter set
Below is the the instruction that describes the task: ### Input: Load one parameter set from a file which contains one value per line No row is skipped. Parameters ---------- filename : string, file path Filename to loaded data from Returns ------- pid : int ID of parameter set ### Response: def load_model_from_file(self, filename): """Load one parameter set from a file which contains one value per line No row is skipped. Parameters ---------- filename : string, file path Filename to loaded data from Returns ------- pid : int ID of parameter set """ assert os.path.isfile(filename) data = np.loadtxt(filename).squeeze() assert len(data.shape) == 1 pid = self.add_data(data) return pid
def _sqlalchemy_on_connection_close(self): """ Rollsback and closes the active session, since the client disconnected before the request could be completed. """ if hasattr(self, "_db_conns"): try: for db_conn in self._db_conns.values(): db_conn.rollback() except: tornado.log.app_log.warning("Error occurred during database transaction cleanup: %s", str(sys.exc_info()[0])) raise finally: for db_conn in self._db_conns.values(): try: db_conn.close() except: tornado.log.app_log.warning("Error occurred when closing the database connection", exc_info=True)
Rolls back and closes the active session, since the client disconnected
before the request could be completed.
Below is the the instruction that describes the task: ### Input: Rollsback and closes the active session, since the client disconnected before the request could be completed. ### Response: def _sqlalchemy_on_connection_close(self): """ Rollsback and closes the active session, since the client disconnected before the request could be completed. """ if hasattr(self, "_db_conns"): try: for db_conn in self._db_conns.values(): db_conn.rollback() except: tornado.log.app_log.warning("Error occurred during database transaction cleanup: %s", str(sys.exc_info()[0])) raise finally: for db_conn in self._db_conns.values(): try: db_conn.close() except: tornado.log.app_log.warning("Error occurred when closing the database connection", exc_info=True)
def _readGQL(self, filePath, verbose=False): """Read a 'pretty' formatted GraphQL query file into a one-line string. Removes line breaks and comments. Condenses white space. Args: filePath (str): A relative or absolute path to a file containing a GraphQL query. File may use comments and multi-line formatting. .. _GitHub GraphQL Explorer: https://developer.github.com/v4/explorer/ verbose (Optional[bool]): If False, prints will be suppressed. Defaults to False. Returns: str: A single line GraphQL query. """ if not os.path.isfile(filePath): raise RuntimeError("Query file '%s' does not exist." % (filePath)) lastModified = os.path.getmtime(filePath) absPath = os.path.abspath(filePath) if absPath == self.__queryPath and lastModified == self.__queryTimestamp: _vPrint(verbose, "Using cached query '%s'" % (os.path.basename(self.__queryPath))) query_in = self.__query else: _vPrint(verbose, "Reading '%s' ... " % (filePath), end="", flush=True) with open(filePath, "r") as q: # Strip all comments and newlines. query_in = re.sub(r'#.*(\n|\Z)', '\n', q.read()) # Condense extra whitespace. query_in = re.sub(r'\s+', ' ', query_in) # Remove any leading or trailing whitespace. query_in = re.sub(r'(\A\s+)|(\s+\Z)', '', query_in) _vPrint(verbose, "File read!") self.__queryPath = absPath self.__queryTimestamp = lastModified self.__query = query_in return query_in
Read a 'pretty' formatted GraphQL query file into a one-line string. Removes line breaks and comments. Condenses white space. Args: filePath (str): A relative or absolute path to a file containing a GraphQL query. File may use comments and multi-line formatting. .. _GitHub GraphQL Explorer: https://developer.github.com/v4/explorer/ verbose (Optional[bool]): If False, prints will be suppressed. Defaults to False. Returns: str: A single line GraphQL query.
Below is the the instruction that describes the task: ### Input: Read a 'pretty' formatted GraphQL query file into a one-line string. Removes line breaks and comments. Condenses white space. Args: filePath (str): A relative or absolute path to a file containing a GraphQL query. File may use comments and multi-line formatting. .. _GitHub GraphQL Explorer: https://developer.github.com/v4/explorer/ verbose (Optional[bool]): If False, prints will be suppressed. Defaults to False. Returns: str: A single line GraphQL query. ### Response: def _readGQL(self, filePath, verbose=False): """Read a 'pretty' formatted GraphQL query file into a one-line string. Removes line breaks and comments. Condenses white space. Args: filePath (str): A relative or absolute path to a file containing a GraphQL query. File may use comments and multi-line formatting. .. _GitHub GraphQL Explorer: https://developer.github.com/v4/explorer/ verbose (Optional[bool]): If False, prints will be suppressed. Defaults to False. Returns: str: A single line GraphQL query. """ if not os.path.isfile(filePath): raise RuntimeError("Query file '%s' does not exist." % (filePath)) lastModified = os.path.getmtime(filePath) absPath = os.path.abspath(filePath) if absPath == self.__queryPath and lastModified == self.__queryTimestamp: _vPrint(verbose, "Using cached query '%s'" % (os.path.basename(self.__queryPath))) query_in = self.__query else: _vPrint(verbose, "Reading '%s' ... " % (filePath), end="", flush=True) with open(filePath, "r") as q: # Strip all comments and newlines. query_in = re.sub(r'#.*(\n|\Z)', '\n', q.read()) # Condense extra whitespace. query_in = re.sub(r'\s+', ' ', query_in) # Remove any leading or trailing whitespace. query_in = re.sub(r'(\A\s+)|(\s+\Z)', '', query_in) _vPrint(verbose, "File read!") self.__queryPath = absPath self.__queryTimestamp = lastModified self.__query = query_in return query_in
def QA_fetch_get_macroindex_list(ip=None, port=None):
    """Macro-economic indicator list

    Keyword Arguments:
        ip {[type]} -- [description] (default: {None})
        port {[type]} -- [description] (default: {None})

    38 10 macro-economic indicators HG
    """
    global extension_market_list
    extension_market_list = QA_fetch_get_extensionmarket_list(
    ) if extension_market_list is None else extension_market_list
    return extension_market_list.query('market==38')
Macro-economic indicator list

Keyword Arguments:
    ip {[type]} -- [description] (default: {None})
    port {[type]} -- [description] (default: {None})

38 10 macro-economic indicators HG
Below is the instruction that describes the task:
### Input:
Macro-economic indicator list

Keyword Arguments:
    ip {[type]} -- [description] (default: {None})
    port {[type]} -- [description] (default: {None})

38 10 macro-economic indicators HG
### Response:
def QA_fetch_get_macroindex_list(ip=None, port=None):
    """Macro-economic indicator list

    Keyword Arguments:
        ip {[type]} -- [description] (default: {None})
        port {[type]} -- [description] (default: {None})

    38 10 macro-economic indicators HG
    """
    global extension_market_list
    extension_market_list = QA_fetch_get_extensionmarket_list(
    ) if extension_market_list is None else extension_market_list
    return extension_market_list.query('market==38')
def get(self, mac): """Get data from API as instance of ResponseModel. Keyword arguments: mac -- MAC address or OUI for searching """ data = { self._FORMAT_F: 'json', self._SEARCH_F: mac } response = self.__decode_str(self.__call_api(self.__url, data), 'utf-8') if len(response) > 0: return self.__parse(response) raise EmptyResponseException()
Get data from API as instance of ResponseModel. Keyword arguments: mac -- MAC address or OUI for searching
Below is the the instruction that describes the task: ### Input: Get data from API as instance of ResponseModel. Keyword arguments: mac -- MAC address or OUI for searching ### Response: def get(self, mac): """Get data from API as instance of ResponseModel. Keyword arguments: mac -- MAC address or OUI for searching """ data = { self._FORMAT_F: 'json', self._SEARCH_F: mac } response = self.__decode_str(self.__call_api(self.__url, data), 'utf-8') if len(response) > 0: return self.__parse(response) raise EmptyResponseException()
def post(cond): """ Add a postcondition check to the annotated method. The condition is passed the return value of the annotated method. """ source = inspect.getsource(cond).strip() def inner(f): if enabled: # deal with the real function, not a wrapper f = getattr(f, 'wrapped_fn', f) def check_condition(result): if not cond(result): raise AssertionError('Postcondition failure, %s' % source) # append to the rest of the postconditions attached to this method if not hasattr(f, 'postconditions'): f.postconditions = [] f.postconditions.append(check_condition) return check(f) else: return f return inner
Add a postcondition check to the annotated method. The condition is passed the return value of the annotated method.
Below is the the instruction that describes the task: ### Input: Add a postcondition check to the annotated method. The condition is passed the return value of the annotated method. ### Response: def post(cond): """ Add a postcondition check to the annotated method. The condition is passed the return value of the annotated method. """ source = inspect.getsource(cond).strip() def inner(f): if enabled: # deal with the real function, not a wrapper f = getattr(f, 'wrapped_fn', f) def check_condition(result): if not cond(result): raise AssertionError('Postcondition failure, %s' % source) # append to the rest of the postconditions attached to this method if not hasattr(f, 'postconditions'): f.postconditions = [] f.postconditions.append(check_condition) return check(f) else: return f return inner
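A usage sketch for the decorator above (hedged: it assumes the module-level `enabled` flag and `check` wrapper that the implementation relies on, neither of which is shown in this excerpt):

@post(lambda result: result >= 0)
def clamped_difference(a, b):
    return max(a - b, 0)

clamped_difference(5, 3)   # passes: 2 >= 0

@post(lambda result: result > 0)
def broken(a, b):
    return a - b

broken(1, 5)               # raises AssertionError('Postcondition failure, ...')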
def _SignedVarintDecoder(mask): """Like _VarintDecoder() but decodes signed values.""" def DecodeVarint(buffer, pos): result = 0 shift = 0 while 1: if pos > len(buffer) -1: raise NotEnoughDataException( "Not enough data to decode varint" ) b = local_ord(buffer[pos]) result |= ((b & 0x7f) << shift) pos += 1 if not (b & 0x80): if result > 0x7fffffffffffffff: result -= (1 << 64) result |= ~mask else: result &= mask return (result, pos) shift += 7 if shift >= 64: raise _DecodeError('Too many bytes when decoding varint.') return DecodeVarint
Like _VarintDecoder() but decodes signed values.
Below is the the instruction that describes the task: ### Input: Like _VarintDecoder() but decodes signed values. ### Response: def _SignedVarintDecoder(mask): """Like _VarintDecoder() but decodes signed values.""" def DecodeVarint(buffer, pos): result = 0 shift = 0 while 1: if pos > len(buffer) -1: raise NotEnoughDataException( "Not enough data to decode varint" ) b = local_ord(buffer[pos]) result |= ((b & 0x7f) << shift) pos += 1 if not (b & 0x80): if result > 0x7fffffffffffffff: result -= (1 << 64) result |= ~mask else: result &= mask return (result, pos) shift += 7 if shift >= 64: raise _DecodeError('Too many bytes when decoding varint.') return DecodeVarint
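A small demonstration (hedged: the 64-bit mask mirrors how protobuf's pure-Python decoder typically instantiates this helper, and `local_ord` is assumed to behave like `ord` on the byte string passed in):

decode_signed_varint = _SignedVarintDecoder((1 << 64) - 1)

# 42 fits in a single varint byte (0x2a)
print(decode_signed_varint(b'*', 0))                    # -> (42, 1)

# -1 is encoded as nine 0xff continuation bytes followed by 0x01
print(decode_signed_varint(b'\xff' * 9 + b'\x01', 0))   # -> (-1, 10)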
def _make_short(self, data, endianness): """Returns a 2 byte integer.""" data = binwalk.core.compat.str2bytes(data) return struct.unpack('%sH' % endianness, data)[0]
Returns a 2 byte integer.
Below is the the instruction that describes the task: ### Input: Returns a 2 byte integer. ### Response: def _make_short(self, data, endianness): """Returns a 2 byte integer.""" data = binwalk.core.compat.str2bytes(data) return struct.unpack('%sH' % endianness, data)[0]
def drawSector(self, center, point, beta, fullSector=True): """Draw a circle sector. """ center = Point(center) point = Point(point) l3 = "%g %g m\n" l4 = "%g %g %g %g %g %g c\n" l5 = "%g %g l\n" betar = math.radians(-beta) w360 = math.radians(math.copysign(360, betar)) * (-1) w90 = math.radians(math.copysign(90, betar)) w45 = w90 / 2 while abs(betar) > 2 * math.pi: betar += w360 # bring angle below 360 degrees if not (self.lastPoint == point): self.draw_cont += l3 % JM_TUPLE(point * self.ipctm) self.lastPoint = point Q = Point(0, 0) # just make sure it exists C = center P = point S = P - C # vector 'center' -> 'point' rad = abs(S) # circle radius if not rad > 1e-5: raise ValueError("radius must be positive") alfa = self.horizontal_angle(center, point) while abs(betar) > abs(w90): # draw 90 degree arcs q1 = C.x + math.cos(alfa + w90) * rad q2 = C.y + math.sin(alfa + w90) * rad Q = Point(q1, q2) # the arc's end point r1 = C.x + math.cos(alfa + w45) * rad / math.cos(w45) r2 = C.y + math.sin(alfa + w45) * rad / math.cos(w45) R = Point(r1, r2) # crossing point of tangents kappah = (1 - math.cos(w45)) * 4 / 3 / abs(R - Q) kappa = kappah * abs(P - Q) cp1 = P + (R - P) * kappa # control point 1 cp2 = Q + (R - Q) * kappa # control point 2 self.draw_cont += l4 % JM_TUPLE(list(cp1 * self.ipctm) + \ list(cp2 * self.ipctm) + \ list(Q * self.ipctm)) betar -= w90 # reduce parm angle by 90 deg alfa += w90 # advance start angle by 90 deg P = Q # advance to arc end point # draw (remaining) arc if abs(betar) > 1e-3: # significant degrees left? beta2 = betar / 2 q1 = C.x + math.cos(alfa + betar) * rad q2 = C.y + math.sin(alfa + betar) * rad Q = Point(q1, q2) # the arc's end point r1 = C.x + math.cos(alfa + beta2) * rad / math.cos(beta2) r2 = C.y + math.sin(alfa + beta2) * rad / math.cos(beta2) R = Point(r1, r2) # crossing point of tangents # kappa height is 4/3 of segment height kappah = (1 - math.cos(beta2)) * 4 / 3 / abs(R - Q) # kappa height kappa = kappah * abs(P - Q) / (1 - math.cos(betar)) cp1 = P + (R - P) * kappa # control point 1 cp2 = Q + (R - Q) * kappa # control point 2 self.draw_cont += l4 % JM_TUPLE(list(cp1 * self.ipctm) + \ list(cp2 * self.ipctm) + \ list(Q * self.ipctm)) if fullSector: self.draw_cont += l3 % JM_TUPLE(point * self.ipctm) self.draw_cont += l5 % JM_TUPLE(center * self.ipctm) self.draw_cont += l5 % JM_TUPLE(Q * self.ipctm) self.lastPoint = Q return self.lastPoint
Draw a circle sector.
Below is the the instruction that describes the task: ### Input: Draw a circle sector. ### Response: def drawSector(self, center, point, beta, fullSector=True): """Draw a circle sector. """ center = Point(center) point = Point(point) l3 = "%g %g m\n" l4 = "%g %g %g %g %g %g c\n" l5 = "%g %g l\n" betar = math.radians(-beta) w360 = math.radians(math.copysign(360, betar)) * (-1) w90 = math.radians(math.copysign(90, betar)) w45 = w90 / 2 while abs(betar) > 2 * math.pi: betar += w360 # bring angle below 360 degrees if not (self.lastPoint == point): self.draw_cont += l3 % JM_TUPLE(point * self.ipctm) self.lastPoint = point Q = Point(0, 0) # just make sure it exists C = center P = point S = P - C # vector 'center' -> 'point' rad = abs(S) # circle radius if not rad > 1e-5: raise ValueError("radius must be positive") alfa = self.horizontal_angle(center, point) while abs(betar) > abs(w90): # draw 90 degree arcs q1 = C.x + math.cos(alfa + w90) * rad q2 = C.y + math.sin(alfa + w90) * rad Q = Point(q1, q2) # the arc's end point r1 = C.x + math.cos(alfa + w45) * rad / math.cos(w45) r2 = C.y + math.sin(alfa + w45) * rad / math.cos(w45) R = Point(r1, r2) # crossing point of tangents kappah = (1 - math.cos(w45)) * 4 / 3 / abs(R - Q) kappa = kappah * abs(P - Q) cp1 = P + (R - P) * kappa # control point 1 cp2 = Q + (R - Q) * kappa # control point 2 self.draw_cont += l4 % JM_TUPLE(list(cp1 * self.ipctm) + \ list(cp2 * self.ipctm) + \ list(Q * self.ipctm)) betar -= w90 # reduce parm angle by 90 deg alfa += w90 # advance start angle by 90 deg P = Q # advance to arc end point # draw (remaining) arc if abs(betar) > 1e-3: # significant degrees left? beta2 = betar / 2 q1 = C.x + math.cos(alfa + betar) * rad q2 = C.y + math.sin(alfa + betar) * rad Q = Point(q1, q2) # the arc's end point r1 = C.x + math.cos(alfa + beta2) * rad / math.cos(beta2) r2 = C.y + math.sin(alfa + beta2) * rad / math.cos(beta2) R = Point(r1, r2) # crossing point of tangents # kappa height is 4/3 of segment height kappah = (1 - math.cos(beta2)) * 4 / 3 / abs(R - Q) # kappa height kappa = kappah * abs(P - Q) / (1 - math.cos(betar)) cp1 = P + (R - P) * kappa # control point 1 cp2 = Q + (R - Q) * kappa # control point 2 self.draw_cont += l4 % JM_TUPLE(list(cp1 * self.ipctm) + \ list(cp2 * self.ipctm) + \ list(Q * self.ipctm)) if fullSector: self.draw_cont += l3 % JM_TUPLE(point * self.ipctm) self.draw_cont += l5 % JM_TUPLE(center * self.ipctm) self.draw_cont += l5 % JM_TUPLE(Q * self.ipctm) self.lastPoint = Q return self.lastPoint
def system_methodHelp(self, method_name): """system.methodHelp('add') => "Adds two integers together" Returns a string containing documentation for the specified method.""" method = None if method_name in self.funcs: method = self.funcs[method_name] elif self.instance is not None: # Instance can implement _methodHelp to return help for a method if hasattr(self.instance, '_methodHelp'): return self.instance._methodHelp(method_name) # if the instance has a _dispatch method then we # don't have enough information to provide help elif not hasattr(self.instance, '_dispatch'): try: method = resolve_dotted_attribute( self.instance, method_name, self.allow_dotted_names ) except AttributeError: pass # Note that we aren't checking that the method actually # be a callable object of some kind if method is None: return "" else: return pydoc.getdoc(method)
system.methodHelp('add') => "Adds two integers together" Returns a string containing documentation for the specified method.
Below is the the instruction that describes the task: ### Input: system.methodHelp('add') => "Adds two integers together" Returns a string containing documentation for the specified method. ### Response: def system_methodHelp(self, method_name): """system.methodHelp('add') => "Adds two integers together" Returns a string containing documentation for the specified method.""" method = None if method_name in self.funcs: method = self.funcs[method_name] elif self.instance is not None: # Instance can implement _methodHelp to return help for a method if hasattr(self.instance, '_methodHelp'): return self.instance._methodHelp(method_name) # if the instance has a _dispatch method then we # don't have enough information to provide help elif not hasattr(self.instance, '_dispatch'): try: method = resolve_dotted_attribute( self.instance, method_name, self.allow_dotted_names ) except AttributeError: pass # Note that we aren't checking that the method actually # be a callable object of some kind if method is None: return "" else: return pydoc.getdoc(method)
def send_iq_and_wait_for_reply(self, iq, *, timeout=None): """ Send an IQ stanza `iq` and wait for the response. If `timeout` is not :data:`None`, it must be the time in seconds for which to wait for a response. If the response is a ``"result"`` IQ, the value of the :attr:`~aioxmpp.IQ.payload` attribute is returned. Otherwise, the exception generated from the :attr:`~aioxmpp.IQ.error` attribute is raised. .. seealso:: :meth:`register_iq_response_future` and :meth:`send_and_wait_for_sent` for other cases raising exceptions. .. deprecated:: 0.8 This method will be removed in 1.0. Use :meth:`send` instead. .. versionchanged:: 0.8 On a timeout, :class:`TimeoutError` is now raised instead of :class:`asyncio.TimeoutError`. """ warnings.warn( r"send_iq_and_wait_for_reply is deprecated and will be removed in" r" 1.0", DeprecationWarning, stacklevel=1, ) return (yield from self.send(iq, timeout=timeout))
Send an IQ stanza `iq` and wait for the response. If `timeout` is not :data:`None`, it must be the time in seconds for which to wait for a response. If the response is a ``"result"`` IQ, the value of the :attr:`~aioxmpp.IQ.payload` attribute is returned. Otherwise, the exception generated from the :attr:`~aioxmpp.IQ.error` attribute is raised. .. seealso:: :meth:`register_iq_response_future` and :meth:`send_and_wait_for_sent` for other cases raising exceptions. .. deprecated:: 0.8 This method will be removed in 1.0. Use :meth:`send` instead. .. versionchanged:: 0.8 On a timeout, :class:`TimeoutError` is now raised instead of :class:`asyncio.TimeoutError`.
Below is the the instruction that describes the task: ### Input: Send an IQ stanza `iq` and wait for the response. If `timeout` is not :data:`None`, it must be the time in seconds for which to wait for a response. If the response is a ``"result"`` IQ, the value of the :attr:`~aioxmpp.IQ.payload` attribute is returned. Otherwise, the exception generated from the :attr:`~aioxmpp.IQ.error` attribute is raised. .. seealso:: :meth:`register_iq_response_future` and :meth:`send_and_wait_for_sent` for other cases raising exceptions. .. deprecated:: 0.8 This method will be removed in 1.0. Use :meth:`send` instead. .. versionchanged:: 0.8 On a timeout, :class:`TimeoutError` is now raised instead of :class:`asyncio.TimeoutError`. ### Response: def send_iq_and_wait_for_reply(self, iq, *, timeout=None): """ Send an IQ stanza `iq` and wait for the response. If `timeout` is not :data:`None`, it must be the time in seconds for which to wait for a response. If the response is a ``"result"`` IQ, the value of the :attr:`~aioxmpp.IQ.payload` attribute is returned. Otherwise, the exception generated from the :attr:`~aioxmpp.IQ.error` attribute is raised. .. seealso:: :meth:`register_iq_response_future` and :meth:`send_and_wait_for_sent` for other cases raising exceptions. .. deprecated:: 0.8 This method will be removed in 1.0. Use :meth:`send` instead. .. versionchanged:: 0.8 On a timeout, :class:`TimeoutError` is now raised instead of :class:`asyncio.TimeoutError`. """ warnings.warn( r"send_iq_and_wait_for_reply is deprecated and will be removed in" r" 1.0", DeprecationWarning, stacklevel=1, ) return (yield from self.send(iq, timeout=timeout))
def main(args=None): """Main function for cli.""" args = get_args(args) utils.init_log(args.log_level) if ".csv" not in args.input_file.lower(): logger.warning("Make sure the input file '%s' is in CSV format", args.input_file) try: records = csvtools.get_imported_data(args.input_file) except (EnvironmentError, Dump2PolarionException) as err: logger.fatal(err) return 1 # check if all columns required by `pytest_polarion_cfme` are there required_columns = {"id": "ID", "title": "Title"} missing_columns = [required_columns[k] for k in required_columns if k not in records.results[0]] if missing_columns: logger.fatal( "The input file '%s' is missing following columns: %s", args.input_file, ", ".join(missing_columns), ) return 1 try: dump2sqlite(records, args.output_file) # pylint: disable=broad-except except Exception as err: logger.exception(err) return 1 return 0
Main function for cli.
Below is the the instruction that describes the task: ### Input: Main function for cli. ### Response: def main(args=None): """Main function for cli.""" args = get_args(args) utils.init_log(args.log_level) if ".csv" not in args.input_file.lower(): logger.warning("Make sure the input file '%s' is in CSV format", args.input_file) try: records = csvtools.get_imported_data(args.input_file) except (EnvironmentError, Dump2PolarionException) as err: logger.fatal(err) return 1 # check if all columns required by `pytest_polarion_cfme` are there required_columns = {"id": "ID", "title": "Title"} missing_columns = [required_columns[k] for k in required_columns if k not in records.results[0]] if missing_columns: logger.fatal( "The input file '%s' is missing following columns: %s", args.input_file, ", ".join(missing_columns), ) return 1 try: dump2sqlite(records, args.output_file) # pylint: disable=broad-except except Exception as err: logger.exception(err) return 1 return 0
def _ReadAndCheckStorageMetadata(self, check_readable_only=False): """Reads storage metadata and checks that the values are valid. Args: check_readable_only (Optional[bool]): whether the store should only be checked to see if it can be read. If False, the store will be checked to see if it can be read and written to. """ query = 'SELECT key, value FROM metadata' self._cursor.execute(query) metadata_values = {row[0]: row[1] for row in self._cursor.fetchall()} self._CheckStorageMetadata( metadata_values, check_readable_only=check_readable_only) self.format_version = metadata_values['format_version'] self.compression_format = metadata_values['compression_format'] self.serialization_format = metadata_values['serialization_format'] self.storage_type = metadata_values['storage_type']
Reads storage metadata and checks that the values are valid. Args: check_readable_only (Optional[bool]): whether the store should only be checked to see if it can be read. If False, the store will be checked to see if it can be read and written to.
Below is the the instruction that describes the task: ### Input: Reads storage metadata and checks that the values are valid. Args: check_readable_only (Optional[bool]): whether the store should only be checked to see if it can be read. If False, the store will be checked to see if it can be read and written to. ### Response: def _ReadAndCheckStorageMetadata(self, check_readable_only=False): """Reads storage metadata and checks that the values are valid. Args: check_readable_only (Optional[bool]): whether the store should only be checked to see if it can be read. If False, the store will be checked to see if it can be read and written to. """ query = 'SELECT key, value FROM metadata' self._cursor.execute(query) metadata_values = {row[0]: row[1] for row in self._cursor.fetchall()} self._CheckStorageMetadata( metadata_values, check_readable_only=check_readable_only) self.format_version = metadata_values['format_version'] self.compression_format = metadata_values['compression_format'] self.serialization_format = metadata_values['serialization_format'] self.storage_type = metadata_values['storage_type']
def cli( paths, dbname, separator, quoting, skip_errors, replace_tables, table, extract_column, date, datetime, datetime_format, primary_key, fts, index, shape, filename_column, no_index_fks, no_fulltext_fks, ): """ PATHS: paths to individual .csv files or to directories containing .csvs DBNAME: name of the SQLite database file to create """ # make plural for more readable code: extract_columns = extract_column del extract_column if extract_columns: click.echo("extract_columns={}".format(extract_columns)) if dbname.endswith(".csv"): raise click.BadParameter("dbname must not end with .csv") if "." not in dbname: dbname += ".db" db_existed = os.path.exists(dbname) conn = sqlite3.connect(dbname) dataframes = [] csvs = csvs_from_paths(paths) sql_type_overrides = None for name, path in csvs.items(): try: df = load_csv(path, separator, skip_errors, quoting, shape) df.table_name = table or name if filename_column: df[filename_column] = name if shape: shape += ",{}".format(filename_column) sql_type_overrides = apply_shape(df, shape) apply_dates_and_datetimes(df, date, datetime, datetime_format) dataframes.append(df) except LoadCsvError as e: click.echo("Could not load {}: {}".format(path, e), err=True) click.echo("Loaded {} dataframes".format(len(dataframes))) # Use extract_columns to build a column:(table,label) dictionary foreign_keys = {} for col in extract_columns: bits = col.split(":") if len(bits) == 3: foreign_keys[bits[0]] = (bits[1], bits[2]) elif len(bits) == 2: foreign_keys[bits[0]] = (bits[1], "value") else: foreign_keys[bits[0]] = (bits[0], "value") # Now we have loaded the dataframes, we can refactor them created_tables = {} refactored = refactor_dataframes( conn, dataframes, foreign_keys, not no_fulltext_fks ) for df in refactored: # This is a bit trickier because we need to # create the table with extra SQL for foreign keys if replace_tables and table_exists(conn, df.table_name): drop_table(conn, df.table_name) if table_exists(conn, df.table_name): df.to_sql(df.table_name, conn, if_exists="append", index=False) else: to_sql_with_foreign_keys( conn, df, df.table_name, foreign_keys, sql_type_overrides, primary_keys=primary_key, index_fks=not no_index_fks, ) created_tables[df.table_name] = df if index: for index_defn in index: add_index(conn, df.table_name, index_defn) # Create FTS tables if fts: fts_version = best_fts_version() if not fts_version: conn.close() raise click.BadParameter( "Your SQLite version does not support any variant of FTS" ) # Check that columns make sense for table, df in created_tables.items(): for fts_column in fts: if fts_column not in df.columns: raise click.BadParameter( 'FTS column "{}" does not exist'.format(fts_column) ) generate_and_populate_fts(conn, created_tables.keys(), fts, foreign_keys) conn.close() if db_existed: click.echo( "Added {} CSV file{} to {}".format( len(csvs), "" if len(csvs) == 1 else "s", dbname ) ) else: click.echo( "Created {} from {} CSV file{}".format( dbname, len(csvs), "" if len(csvs) == 1 else "s" ) )
PATHS: paths to individual .csv files or to directories containing .csvs DBNAME: name of the SQLite database file to create
Below is the the instruction that describes the task: ### Input: PATHS: paths to individual .csv files or to directories containing .csvs DBNAME: name of the SQLite database file to create ### Response: def cli( paths, dbname, separator, quoting, skip_errors, replace_tables, table, extract_column, date, datetime, datetime_format, primary_key, fts, index, shape, filename_column, no_index_fks, no_fulltext_fks, ): """ PATHS: paths to individual .csv files or to directories containing .csvs DBNAME: name of the SQLite database file to create """ # make plural for more readable code: extract_columns = extract_column del extract_column if extract_columns: click.echo("extract_columns={}".format(extract_columns)) if dbname.endswith(".csv"): raise click.BadParameter("dbname must not end with .csv") if "." not in dbname: dbname += ".db" db_existed = os.path.exists(dbname) conn = sqlite3.connect(dbname) dataframes = [] csvs = csvs_from_paths(paths) sql_type_overrides = None for name, path in csvs.items(): try: df = load_csv(path, separator, skip_errors, quoting, shape) df.table_name = table or name if filename_column: df[filename_column] = name if shape: shape += ",{}".format(filename_column) sql_type_overrides = apply_shape(df, shape) apply_dates_and_datetimes(df, date, datetime, datetime_format) dataframes.append(df) except LoadCsvError as e: click.echo("Could not load {}: {}".format(path, e), err=True) click.echo("Loaded {} dataframes".format(len(dataframes))) # Use extract_columns to build a column:(table,label) dictionary foreign_keys = {} for col in extract_columns: bits = col.split(":") if len(bits) == 3: foreign_keys[bits[0]] = (bits[1], bits[2]) elif len(bits) == 2: foreign_keys[bits[0]] = (bits[1], "value") else: foreign_keys[bits[0]] = (bits[0], "value") # Now we have loaded the dataframes, we can refactor them created_tables = {} refactored = refactor_dataframes( conn, dataframes, foreign_keys, not no_fulltext_fks ) for df in refactored: # This is a bit trickier because we need to # create the table with extra SQL for foreign keys if replace_tables and table_exists(conn, df.table_name): drop_table(conn, df.table_name) if table_exists(conn, df.table_name): df.to_sql(df.table_name, conn, if_exists="append", index=False) else: to_sql_with_foreign_keys( conn, df, df.table_name, foreign_keys, sql_type_overrides, primary_keys=primary_key, index_fks=not no_index_fks, ) created_tables[df.table_name] = df if index: for index_defn in index: add_index(conn, df.table_name, index_defn) # Create FTS tables if fts: fts_version = best_fts_version() if not fts_version: conn.close() raise click.BadParameter( "Your SQLite version does not support any variant of FTS" ) # Check that columns make sense for table, df in created_tables.items(): for fts_column in fts: if fts_column not in df.columns: raise click.BadParameter( 'FTS column "{}" does not exist'.format(fts_column) ) generate_and_populate_fts(conn, created_tables.keys(), fts, foreign_keys) conn.close() if db_existed: click.echo( "Added {} CSV file{} to {}".format( len(csvs), "" if len(csvs) == 1 else "s", dbname ) ) else: click.echo( "Created {} from {} CSV file{}".format( dbname, len(csvs), "" if len(csvs) == 1 else "s" ) )
def _adjacency_to_edges(adjacency): """determine from an adjacency the list of edges if (u, v) in edges, then (v, u) should not be""" edges = set() for u in adjacency: for v in adjacency[u]: try: edge = (u, v) if u <= v else (v, u) except TypeError: # Py3 does not allow sorting of unlike types if (v, u) in edges: continue edge = (u, v) edges.add(edge) return edges
determine from an adjacency the list of edges if (u, v) in edges, then (v, u) should not be
Below is the the instruction that describes the task: ### Input: determine from an adjacency the list of edges if (u, v) in edges, then (v, u) should not be ### Response: def _adjacency_to_edges(adjacency): """determine from an adjacency the list of edges if (u, v) in edges, then (v, u) should not be""" edges = set() for u in adjacency: for v in adjacency[u]: try: edge = (u, v) if u <= v else (v, u) except TypeError: # Py3 does not allow sorting of unlike types if (v, u) in edges: continue edge = (u, v) edges.add(edge) return edges
def activate_eco(self):
        """Activates the eco temperature."""
        value = struct.pack('B', PROP_ECO)
        self._conn.make_request(PROP_WRITE_HANDLE, value)
Activates the eco temperature.
Below is the the instruction that describes the task:
### Input:
Activates the eco temperature.
### Response:
def activate_eco(self):
        """Activates the eco temperature."""
        value = struct.pack('B', PROP_ECO)
        self._conn.make_request(PROP_WRITE_HANDLE, value)
def post(self, uri, params={}, data={}): '''A generic method to make POST requests on the given URI.''' return requests.post( urlparse.urljoin(self.BASE_URL, uri), params=params, data=json.dumps(data), verify=False, auth=self.auth, headers = {'Content-type': 'application/json', 'Accept': 'text/plain'})
A generic method to make POST requests on the given URI.
Below is the the instruction that describes the task: ### Input: A generic method to make POST requests on the given URI. ### Response: def post(self, uri, params={}, data={}): '''A generic method to make POST requests on the given URI.''' return requests.post( urlparse.urljoin(self.BASE_URL, uri), params=params, data=json.dumps(data), verify=False, auth=self.auth, headers = {'Content-type': 'application/json', 'Accept': 'text/plain'})
def register_click_watcher(self, watcher_name, selectors, *condition_list):
        """
        The watcher clicks on the object which has the *selectors* when conditions match.
        """
        watcher = self.device.watcher(watcher_name)
        for condition in condition_list:
            watcher.when(**self.__unicode_to_dict(condition))
        watcher.click(**self.__unicode_to_dict(selectors))
        self.device.watchers.run()
The watcher clicks on the object which has the *selectors* when conditions match.
Below is the the instruction that describes the task:
### Input:
The watcher clicks on the object which has the *selectors* when conditions match.
### Response:
def register_click_watcher(self, watcher_name, selectors, *condition_list):
        """
        The watcher clicks on the object which has the *selectors* when conditions match.
        """
        watcher = self.device.watcher(watcher_name)
        for condition in condition_list:
            watcher.when(**self.__unicode_to_dict(condition))
        watcher.click(**self.__unicode_to_dict(selectors))
        self.device.watchers.run()
def create_task_log(self, task_id, case_task_log): """ :param task_id: Task identifier :param case_task_log: TheHive log :type case_task_log: CaseTaskLog defined in models.py :return: TheHive log :rtype: json """ req = self.url + "/api/case/task/{}/log".format(task_id) data = {'_json': json.dumps({"message":case_task_log.message})} if case_task_log.file: f = {'attachment': (os.path.basename(case_task_log.file), open(case_task_log.file, 'rb'), magic.Magic(mime=True).from_file(case_task_log.file))} try: return requests.post(req, data=data,files=f, proxies=self.proxies, auth=self.auth, verify=self.cert) except requests.exceptions.RequestException as e: raise CaseTaskException("Case task log create error: {}".format(e)) else: try: return requests.post(req, headers={'Content-Type': 'application/json'}, data=json.dumps({'message':case_task_log.message}), proxies=self.proxies, auth=self.auth, verify=self.cert) except requests.exceptions.RequestException as e: raise CaseTaskException("Case task log create error: {}".format(e))
:param task_id: Task identifier :param case_task_log: TheHive log :type case_task_log: CaseTaskLog defined in models.py :return: TheHive log :rtype: json
Below is the the instruction that describes the task: ### Input: :param task_id: Task identifier :param case_task_log: TheHive log :type case_task_log: CaseTaskLog defined in models.py :return: TheHive log :rtype: json ### Response: def create_task_log(self, task_id, case_task_log): """ :param task_id: Task identifier :param case_task_log: TheHive log :type case_task_log: CaseTaskLog defined in models.py :return: TheHive log :rtype: json """ req = self.url + "/api/case/task/{}/log".format(task_id) data = {'_json': json.dumps({"message":case_task_log.message})} if case_task_log.file: f = {'attachment': (os.path.basename(case_task_log.file), open(case_task_log.file, 'rb'), magic.Magic(mime=True).from_file(case_task_log.file))} try: return requests.post(req, data=data,files=f, proxies=self.proxies, auth=self.auth, verify=self.cert) except requests.exceptions.RequestException as e: raise CaseTaskException("Case task log create error: {}".format(e)) else: try: return requests.post(req, headers={'Content-Type': 'application/json'}, data=json.dumps({'message':case_task_log.message}), proxies=self.proxies, auth=self.auth, verify=self.cert) except requests.exceptions.RequestException as e: raise CaseTaskException("Case task log create error: {}".format(e))
def ip_address(self, container: Container ) -> Union[IPv4Address, IPv6Address]: """ The IP address used by a given container, or None if no IP address has been assigned to that container. """ r = self.__api.get('containers/{}/ip'.format(container.uid)) if r.status_code == 200: return r.json() self.__api.handle_erroneous_response(r)
The IP address used by a given container, or None if no IP address has been assigned to that container.
Below is the the instruction that describes the task: ### Input: The IP address used by a given container, or None if no IP address has been assigned to that container. ### Response: def ip_address(self, container: Container ) -> Union[IPv4Address, IPv6Address]: """ The IP address used by a given container, or None if no IP address has been assigned to that container. """ r = self.__api.get('containers/{}/ip'.format(container.uid)) if r.status_code == 200: return r.json() self.__api.handle_erroneous_response(r)
def write(filename, samples, write_params=None, static_args=None, injtype=None, **metadata): """Writes the injection samples to the given hdf file. Parameters ---------- filename : str The name of the file to write to. samples : io.FieldArray FieldArray of parameters. write_params : list, optional Only write the given parameter names. All given names must be keys in ``samples``. Default is to write all parameters in ``samples``. static_args : dict, optional Dictionary mapping static parameter names to values. These are written to the ``attrs``. injtype : str, optional Specify which `HDFInjectionSet` class to use for writing. If not provided, will try to determine it by looking for an approximant in the ``static_args``, followed by the ``samples``. \**metadata : All other keyword arguments will be written to the file's attrs. """ # DELETE the following "if" once xml is dropped ext = os.path.basename(filename) if ext.endswith(('.xml', '.xml.gz', '.xmlgz')): _XMLInjectionSet.write(filename, samples, write_params, static_args) else: # try determine the injtype if it isn't given if injtype is None: if static_args is not None and 'approximant' in static_args: injcls = hdf_injtype_from_approximant( static_args['approximant']) elif 'approximant' in samples.fieldnames: apprxs = np.unique(samples['approximant']) # make sure they all correspond to the same injection type injcls = [hdf_injtype_from_approximant(a) for a in apprxs] if not all(c == injcls[0] for c in injcls): raise ValueError("injections must all be of the same " "type") injcls = injcls[0] else: raise ValueError("Could not find an approximant in the " "static args or samples to determine the " "injection type. Please specify an " "injtype instead.") else: injcls = hdfinjtypes[injtype] injcls.write(filename, samples, write_params, static_args, **metadata)
Writes the injection samples to the given hdf file. Parameters ---------- filename : str The name of the file to write to. samples : io.FieldArray FieldArray of parameters. write_params : list, optional Only write the given parameter names. All given names must be keys in ``samples``. Default is to write all parameters in ``samples``. static_args : dict, optional Dictionary mapping static parameter names to values. These are written to the ``attrs``. injtype : str, optional Specify which `HDFInjectionSet` class to use for writing. If not provided, will try to determine it by looking for an approximant in the ``static_args``, followed by the ``samples``. \**metadata : All other keyword arguments will be written to the file's attrs.
Below is the the instruction that describes the task: ### Input: Writes the injection samples to the given hdf file. Parameters ---------- filename : str The name of the file to write to. samples : io.FieldArray FieldArray of parameters. write_params : list, optional Only write the given parameter names. All given names must be keys in ``samples``. Default is to write all parameters in ``samples``. static_args : dict, optional Dictionary mapping static parameter names to values. These are written to the ``attrs``. injtype : str, optional Specify which `HDFInjectionSet` class to use for writing. If not provided, will try to determine it by looking for an approximant in the ``static_args``, followed by the ``samples``. \**metadata : All other keyword arguments will be written to the file's attrs. ### Response: def write(filename, samples, write_params=None, static_args=None, injtype=None, **metadata): """Writes the injection samples to the given hdf file. Parameters ---------- filename : str The name of the file to write to. samples : io.FieldArray FieldArray of parameters. write_params : list, optional Only write the given parameter names. All given names must be keys in ``samples``. Default is to write all parameters in ``samples``. static_args : dict, optional Dictionary mapping static parameter names to values. These are written to the ``attrs``. injtype : str, optional Specify which `HDFInjectionSet` class to use for writing. If not provided, will try to determine it by looking for an approximant in the ``static_args``, followed by the ``samples``. \**metadata : All other keyword arguments will be written to the file's attrs. """ # DELETE the following "if" once xml is dropped ext = os.path.basename(filename) if ext.endswith(('.xml', '.xml.gz', '.xmlgz')): _XMLInjectionSet.write(filename, samples, write_params, static_args) else: # try determine the injtype if it isn't given if injtype is None: if static_args is not None and 'approximant' in static_args: injcls = hdf_injtype_from_approximant( static_args['approximant']) elif 'approximant' in samples.fieldnames: apprxs = np.unique(samples['approximant']) # make sure they all correspond to the same injection type injcls = [hdf_injtype_from_approximant(a) for a in apprxs] if not all(c == injcls[0] for c in injcls): raise ValueError("injections must all be of the same " "type") injcls = injcls[0] else: raise ValueError("Could not find an approximant in the " "static args or samples to determine the " "injection type. Please specify an " "injtype instead.") else: injcls = hdfinjtypes[injtype] injcls.write(filename, samples, write_params, static_args, **metadata)
def _compile_and_collapse(self): """Actually compile the requested regex""" self._real_regex = self._real_re_compile(*self._regex_args, **self._regex_kwargs) for attr in self._regex_attributes_to_copy: setattr(self, attr, getattr(self._real_regex, attr))
Actually compile the requested regex
Below is the the instruction that describes the task: ### Input: Actually compile the requested regex ### Response: def _compile_and_collapse(self): """Actually compile the requested regex""" self._real_regex = self._real_re_compile(*self._regex_args, **self._regex_kwargs) for attr in self._regex_attributes_to_copy: setattr(self, attr, getattr(self._real_regex, attr))
def get_next_state(self, state, ret, oper): """Returns the next state for a create or delete operation. """ if oper == fw_const.FW_CR_OP: return self.get_next_create_state(state, ret) else: return self.get_next_del_state(state, ret)
Returns the next state for a create or delete operation.
Below is the the instruction that describes the task: ### Input: Returns the next state for a create or delete operation. ### Response: def get_next_state(self, state, ret, oper): """Returns the next state for a create or delete operation. """ if oper == fw_const.FW_CR_OP: return self.get_next_create_state(state, ret) else: return self.get_next_del_state(state, ret)
def set_palette(self, background, foreground): """ Set text editor palette colors: background color and caret (text cursor) color """ palette = QPalette() palette.setColor(QPalette.Base, background) palette.setColor(QPalette.Text, foreground) self.setPalette(palette) # Set the right background color when changing color schemes # or creating new Editor windows. This seems to be a Qt bug. # Fixes Issue 2028 and 8069 if self.objectName(): style = "QPlainTextEdit#%s {background: %s; color: %s;}" % \ (self.objectName(), background.name(), foreground.name()) self.setStyleSheet(style)
Set text editor palette colors: background color and caret (text cursor) color
Below is the the instruction that describes the task: ### Input: Set text editor palette colors: background color and caret (text cursor) color ### Response: def set_palette(self, background, foreground): """ Set text editor palette colors: background color and caret (text cursor) color """ palette = QPalette() palette.setColor(QPalette.Base, background) palette.setColor(QPalette.Text, foreground) self.setPalette(palette) # Set the right background color when changing color schemes # or creating new Editor windows. This seems to be a Qt bug. # Fixes Issue 2028 and 8069 if self.objectName(): style = "QPlainTextEdit#%s {background: %s; color: %s;}" % \ (self.objectName(), background.name(), foreground.name()) self.setStyleSheet(style)
def _opt_to_args(cls, opt, val): """Convert a named option and optional value to command line argument notation, correctly handling options that take no value or that have special representations (e.g. verify and verbose). """ no_value = ( "alloptions", "all-logs", "batch", "build", "debug", "experimental", "list-plugins", "list-presets", "list-profiles", "noreport", "quiet", "verify" ) count = ("verbose",) if opt in no_value: return ["--%s" % opt] if opt in count: return ["--%s" % opt for d in range(0, int(val))] return ["--" + opt + "=" + val]
Convert a named option and optional value to command line argument notation, correctly handling options that take no value or that have special representations (e.g. verify and verbose).
Below is the the instruction that describes the task: ### Input: Convert a named option and optional value to command line argument notation, correctly handling options that take no value or that have special representations (e.g. verify and verbose). ### Response: def _opt_to_args(cls, opt, val): """Convert a named option and optional value to command line argument notation, correctly handling options that take no value or that have special representations (e.g. verify and verbose). """ no_value = ( "alloptions", "all-logs", "batch", "build", "debug", "experimental", "list-plugins", "list-presets", "list-profiles", "noreport", "quiet", "verify" ) count = ("verbose",) if opt in no_value: return ["--%s" % opt] if opt in count: return ["--%s" % opt for d in range(0, int(val))] return ["--" + opt + "=" + val]
def _report_problem(self, problem, level=logging.ERROR): '''Report a given problem''' problem = self.basename + ': ' + problem if self._logger.isEnabledFor(level): self._problematic = True if self._check_raises: raise DapInvalid(problem) self._logger.log(level, problem)
Report a given problem
Below is the the instruction that describes the task: ### Input: Report a given problem ### Response: def _report_problem(self, problem, level=logging.ERROR): '''Report a given problem''' problem = self.basename + ': ' + problem if self._logger.isEnabledFor(level): self._problematic = True if self._check_raises: raise DapInvalid(problem) self._logger.log(level, problem)
def if_match(self, values): """ Set the If-Match option of a request. :param values: the If-Match values :type values : list """ assert isinstance(values, list) for v in values: option = Option() option.number = defines.OptionRegistry.IF_MATCH.number option.value = v self.add_option(option)
Set the If-Match option of a request. :param values: the If-Match values :type values : list
Below is the the instruction that describes the task: ### Input: Set the If-Match option of a request. :param values: the If-Match values :type values : list ### Response: def if_match(self, values): """ Set the If-Match option of a request. :param values: the If-Match values :type values : list """ assert isinstance(values, list) for v in values: option = Option() option.number = defines.OptionRegistry.IF_MATCH.number option.value = v self.add_option(option)
def accept_arguments(method, number_of_arguments=1): """Returns True if the given method will accept the given number of arguments method: the method to perform introspection on number_of_arguments: the number_of_arguments """ if 'method' in method.__class__.__name__: number_of_arguments += 1 func = getattr(method, 'im_func', getattr(method, '__func__')) func_defaults = getattr(func, 'func_defaults', getattr(func, '__defaults__')) number_of_defaults = func_defaults and len(func_defaults) or 0 elif method.__class__.__name__ == 'function': func_defaults = getattr(method, 'func_defaults', getattr(method, '__defaults__')) number_of_defaults = func_defaults and len(func_defaults) or 0 coArgCount = getattr(method, 'func_code', getattr(method, '__code__')).co_argcount if(coArgCount >= number_of_arguments and coArgCount - number_of_defaults <= number_of_arguments): return True return False
Returns True if the given method will accept the given number of arguments method: the method to perform introspection on number_of_arguments: the number_of_arguments
Below is the the instruction that describes the task: ### Input: Returns True if the given method will accept the given number of arguments method: the method to perform introspection on number_of_arguments: the number_of_arguments ### Response: def accept_arguments(method, number_of_arguments=1): """Returns True if the given method will accept the given number of arguments method: the method to perform introspection on number_of_arguments: the number_of_arguments """ if 'method' in method.__class__.__name__: number_of_arguments += 1 func = getattr(method, 'im_func', getattr(method, '__func__')) func_defaults = getattr(func, 'func_defaults', getattr(func, '__defaults__')) number_of_defaults = func_defaults and len(func_defaults) or 0 elif method.__class__.__name__ == 'function': func_defaults = getattr(method, 'func_defaults', getattr(method, '__defaults__')) number_of_defaults = func_defaults and len(func_defaults) or 0 coArgCount = getattr(method, 'func_code', getattr(method, '__code__')).co_argcount if(coArgCount >= number_of_arguments and coArgCount - number_of_defaults <= number_of_arguments): return True return False
def clear(self, asset_manager_id): """ This method deletes all the data for an asset_manager_id. It should be used with extreme caution. In production it is almost always better to Inactivate rather than delete. """ self.logger.info('Clear Assets - Asset Manager: %s', asset_manager_id) url = '%s/clear/%s' % (self.endpoint, asset_manager_id) response = self.session.delete(url) if response.ok: count = response.json().get('count', 'Unknown') self.logger.info('Deleted %s Assets.', count) return count else: self.logger.error(response.text) response.raise_for_status()
This method deletes all the data for an asset_manager_id. It should be used with extreme caution. In production it is almost always better to Inactivate rather than delete.
Below is the the instruction that describes the task: ### Input: This method deletes all the data for an asset_manager_id. It should be used with extreme caution. In production it is almost always better to Inactivate rather than delete. ### Response: def clear(self, asset_manager_id): """ This method deletes all the data for an asset_manager_id. It should be used with extreme caution. In production it is almost always better to Inactivate rather than delete. """ self.logger.info('Clear Assets - Asset Manager: %s', asset_manager_id) url = '%s/clear/%s' % (self.endpoint, asset_manager_id) response = self.session.delete(url) if response.ok: count = response.json().get('count', 'Unknown') self.logger.info('Deleted %s Assets.', count) return count else: self.logger.error(response.text) response.raise_for_status()
def _communicate(self): """Callback for communicate.""" if (not self._communicate_first and self._process.state() == QProcess.NotRunning): self.communicate() elif self._fired: self._timer.stop()
Callback for communicate.
Below is the the instruction that describes the task: ### Input: Callback for communicate. ### Response: def _communicate(self): """Callback for communicate.""" if (not self._communicate_first and self._process.state() == QProcess.NotRunning): self.communicate() elif self._fired: self._timer.stop()
def answer_pre_checkout_query(token, pre_checkout_query_id, ok, error_message=None): """ Once the user has confirmed their payment and shipping details, the Bot API sends the final confirmation in the form of an Update with the field pre_checkout_query. Use this method to respond to such pre-checkout queries. On success, True is returned. Note: The Bot API must receive an answer within 10 seconds after the pre-checkout query was sent. :param token: Bot's token (you don't need to fill this) :param pre_checkout_query_id: Unique identifier for the query to be answered :param ok: Specify True if everything is alright (goods are available, etc.) and the bot is ready to proceed with the order. Use False if there are any problems. :param error_message: Required if ok is False. Error message in human readable form that explains the reason for failure to proceed with the checkout (e.g. "Sorry, somebody just bought the last of our amazing black T-shirts while you were busy filling out your payment details. Please choose a different color or garment!"). Telegram will display this message to the user. :return: """ method_url = 'answerPreCheckoutQuery' payload = {'pre_checkout_query_id': pre_checkout_query_id, 'ok': ok} if error_message: payload['error_message'] = error_message return _make_request(token, method_url, params=payload)
Once the user has confirmed their payment and shipping details, the Bot API sends the final confirmation in the form of an Update with the field pre_checkout_query. Use this method to respond to such pre-checkout queries. On success, True is returned. Note: The Bot API must receive an answer within 10 seconds after the pre-checkout query was sent. :param token: Bot's token (you don't need to fill this) :param pre_checkout_query_id: Unique identifier for the query to be answered :param ok: Specify True if everything is alright (goods are available, etc.) and the bot is ready to proceed with the order. Use False if there are any problems. :param error_message: Required if ok is False. Error message in human readable form that explains the reason for failure to proceed with the checkout (e.g. "Sorry, somebody just bought the last of our amazing black T-shirts while you were busy filling out your payment details. Please choose a different color or garment!"). Telegram will display this message to the user. :return:
Below is the the instruction that describes the task: ### Input: Once the user has confirmed their payment and shipping details, the Bot API sends the final confirmation in the form of an Update with the field pre_checkout_query. Use this method to respond to such pre-checkout queries. On success, True is returned. Note: The Bot API must receive an answer within 10 seconds after the pre-checkout query was sent. :param token: Bot's token (you don't need to fill this) :param pre_checkout_query_id: Unique identifier for the query to be answered :param ok: Specify True if everything is alright (goods are available, etc.) and the bot is ready to proceed with the order. Use False if there are any problems. :param error_message: Required if ok is False. Error message in human readable form that explains the reason for failure to proceed with the checkout (e.g. "Sorry, somebody just bought the last of our amazing black T-shirts while you were busy filling out your payment details. Please choose a different color or garment!"). Telegram will display this message to the user. :return: ### Response: def answer_pre_checkout_query(token, pre_checkout_query_id, ok, error_message=None): """ Once the user has confirmed their payment and shipping details, the Bot API sends the final confirmation in the form of an Update with the field pre_checkout_query. Use this method to respond to such pre-checkout queries. On success, True is returned. Note: The Bot API must receive an answer within 10 seconds after the pre-checkout query was sent. :param token: Bot's token (you don't need to fill this) :param pre_checkout_query_id: Unique identifier for the query to be answered :param ok: Specify True if everything is alright (goods are available, etc.) and the bot is ready to proceed with the order. Use False if there are any problems. :param error_message: Required if ok is False. Error message in human readable form that explains the reason for failure to proceed with the checkout (e.g. "Sorry, somebody just bought the last of our amazing black T-shirts while you were busy filling out your payment details. Please choose a different color or garment!"). Telegram will display this message to the user. :return: """ method_url = 'answerPreCheckoutQuery' payload = {'pre_checkout_query_id': pre_checkout_query_id, 'ok': ok} if error_message: payload['error_message'] = error_message return _make_request(token, method_url, params=payload)
def tag(self, *tags):
        """
        Tags the job with one or more unique identifiers.

        Tags must be hashable. Duplicate tags are discarded.

        :param tags: A unique list of ``Hashable`` tags.
        :return: The invoked job instance
        """
        if not all(isinstance(tag, collections.Hashable) for tag in tags):
            raise TypeError('Tags must be hashable')
        self.tags.update(tags)
        return self
Tags the job with one or more unique identifiers.

        Tags must be hashable. Duplicate tags are discarded.

        :param tags: A unique list of ``Hashable`` tags.
        :return: The invoked job instance
Below is the the instruction that describes the task:
### Input:
Tags the job with one or more unique identifiers.

        Tags must be hashable. Duplicate tags are discarded.

        :param tags: A unique list of ``Hashable`` tags.
        :return: The invoked job instance
### Response:
def tag(self, *tags):
        """
        Tags the job with one or more unique identifiers.

        Tags must be hashable. Duplicate tags are discarded.

        :param tags: A unique list of ``Hashable`` tags.
        :return: The invoked job instance
        """
        if not all(isinstance(tag, collections.Hashable) for tag in tags):
            raise TypeError('Tags must be hashable')
        self.tags.update(tags)
        return self
def auth_user_ldap(uname, pwd):
    """
    Attempts to bind using the uname/pwd combo passed in.
    If that works, returns true. Otherwise returns false.
    """
    if not uname or not pwd:
        logging.error("Username or password not supplied")
        return False
    ld = ldap.initialize(LDAP_URL)
    if LDAP_VERSION_3:
        ld.set_option(ldap.VERSION3, 1)
        ld.start_tls_s()
    udn = ld.search_s(LDAP_SEARCH_BASE, ldap.SCOPE_ONELEVEL, '(%s=%s)' % (LDAP_UNAME_ATTR,uname), [LDAP_BIND_ATTR])
    if udn:
        try:
            bindres = ld.simple_bind_s(udn[0][0], pwd)
        except (ldap.INVALID_CREDENTIALS, ldap.UNWILLING_TO_PERFORM):
            logging.error("Invalid or incomplete credentials for %s", uname)
            return False
        except Exception as out:
            logging.error("Auth attempt for %s had an unexpected error: %s", uname, out)
            return False
        else:
            return True
    else:
        logging.error("No user by that name")
        return False
Attempts to bind using the uname/pwd combo passed in. If that works, returns true. Otherwise returns false.
Below is the the instruction that describes the task:
### Input:
Attempts to bind using the uname/pwd combo passed in.
    If that works, returns true. Otherwise returns false.
### Response:
def auth_user_ldap(uname, pwd):
    """
    Attempts to bind using the uname/pwd combo passed in.
    If that works, returns true. Otherwise returns false.
    """
    if not uname or not pwd:
        logging.error("Username or password not supplied")
        return False
    ld = ldap.initialize(LDAP_URL)
    if LDAP_VERSION_3:
        ld.set_option(ldap.VERSION3, 1)
        ld.start_tls_s()
    udn = ld.search_s(LDAP_SEARCH_BASE, ldap.SCOPE_ONELEVEL, '(%s=%s)' % (LDAP_UNAME_ATTR,uname), [LDAP_BIND_ATTR])
    if udn:
        try:
            bindres = ld.simple_bind_s(udn[0][0], pwd)
        except (ldap.INVALID_CREDENTIALS, ldap.UNWILLING_TO_PERFORM):
            logging.error("Invalid or incomplete credentials for %s", uname)
            return False
        except Exception as out:
            logging.error("Auth attempt for %s had an unexpected error: %s", uname, out)
            return False
        else:
            return True
    else:
        logging.error("No user by that name")
        return False
def parse_requirements(self, filename): """ Recursively find all the requirements needed storing them in req_parents, req_paths, req_linenos """ cwd = os.path.dirname(filename) try: fd = open(filename, 'r') for i, line in enumerate(fd.readlines(), 0): req = self.extract_requirement(line) # if the line is not a requirement statement if not req: continue req_path = req if not os.path.isabs(req_path): req_path = os.path.normpath(os.path.join(cwd, req_path)) if not os.path.exists(req_path): logging.warning("Requirement '{0}' could not be resolved: '{1}' does not exist.".format(req, req_path)) if self.flags['cleanup']: self.skip_unresolved_requirement(filename, i) continue # if the requirement is already added to the database, skip it if req_path in self.req_paths: logging.warning("Skipping duplicate requirement '{0}' at '{2}:{3}' [file '{1}'].".format( req, req_path, filename, i+1 # human-recognizable line number )) if self.flags['cleanup']: self.skip_unresolved_requirement(filename, i) continue # store requirements to the global database self.req_parents.append(filename) self.req_paths.append(req_path) self.req_linenos.append(i) # recursion self.parse_requirements(req_path) fd.close() except IOError as err: logging.warning("I/O error: {0}".format(err))
Recursively find all the requirements needed storing them in req_parents, req_paths, req_linenos
Below is the the instruction that describes the task: ### Input: Recursively find all the requirements needed storing them in req_parents, req_paths, req_linenos ### Response: def parse_requirements(self, filename): """ Recursively find all the requirements needed storing them in req_parents, req_paths, req_linenos """ cwd = os.path.dirname(filename) try: fd = open(filename, 'r') for i, line in enumerate(fd.readlines(), 0): req = self.extract_requirement(line) # if the line is not a requirement statement if not req: continue req_path = req if not os.path.isabs(req_path): req_path = os.path.normpath(os.path.join(cwd, req_path)) if not os.path.exists(req_path): logging.warning("Requirement '{0}' could not be resolved: '{1}' does not exist.".format(req, req_path)) if self.flags['cleanup']: self.skip_unresolved_requirement(filename, i) continue # if the requirement is already added to the database, skip it if req_path in self.req_paths: logging.warning("Skipping duplicate requirement '{0}' at '{2}:{3}' [file '{1}'].".format( req, req_path, filename, i+1 # human-recognizable line number )) if self.flags['cleanup']: self.skip_unresolved_requirement(filename, i) continue # store requirements to the global database self.req_parents.append(filename) self.req_paths.append(req_path) self.req_linenos.append(i) # recursion self.parse_requirements(req_path) fd.close() except IOError as err: logging.warning("I/O error: {0}".format(err))
def set_value(self, value, timeout):
        """
        Sets a new value and extends its expiration.

        :param value: a new cached value.
        :param timeout: an expiration timeout in milliseconds.
        """
        self.value = value
        self.expiration = time.perf_counter() * 1000 + timeout
Sets a new value and extends its expiration.

        :param value: a new cached value.
        :param timeout: an expiration timeout in milliseconds.
Below is the the instruction that describes the task:
### Input:
Sets a new value and extends its expiration.

        :param value: a new cached value.
        :param timeout: an expiration timeout in milliseconds.
### Response:
def set_value(self, value, timeout):
        """
        Sets a new value and extends its expiration.

        :param value: a new cached value.
        :param timeout: an expiration timeout in milliseconds.
        """
        self.value = value
        self.expiration = time.perf_counter() * 1000 + timeout
def reboot(self, timeout=None, wait_polling_interval=None): """Reboot the device, waiting for the adb connection to become stable. :param timeout: Maximum time to wait for reboot. :param wait_polling_interval: Interval at which to poll for device readiness. """ if timeout is None: timeout = self._timeout if wait_polling_interval is None: wait_polling_interval = self._wait_polling_interval self._logger.info("Rebooting device") self.wait_for_device_ready(timeout, after_first=lambda:self.command_output(["reboot"]))
Reboot the device, waiting for the adb connection to become stable. :param timeout: Maximum time to wait for reboot. :param wait_polling_interval: Interval at which to poll for device readiness.
Below is the the instruction that describes the task: ### Input: Reboot the device, waiting for the adb connection to become stable. :param timeout: Maximum time to wait for reboot. :param wait_polling_interval: Interval at which to poll for device readiness. ### Response: def reboot(self, timeout=None, wait_polling_interval=None): """Reboot the device, waiting for the adb connection to become stable. :param timeout: Maximum time to wait for reboot. :param wait_polling_interval: Interval at which to poll for device readiness. """ if timeout is None: timeout = self._timeout if wait_polling_interval is None: wait_polling_interval = self._wait_polling_interval self._logger.info("Rebooting device") self.wait_for_device_ready(timeout, after_first=lambda:self.command_output(["reboot"]))
def encode_dict(values_dict): """Encode a dictionary into protobuf ``Value``-s. Args: values_dict (dict): The dictionary to encode as protobuf fields. Returns: Dict[str, ~google.cloud.firestore_v1beta1.types.Value]: A dictionary of string keys and ``Value`` protobufs as dictionary values. """ return {key: encode_value(value) for key, value in six.iteritems(values_dict)}
Encode a dictionary into protobuf ``Value``-s. Args: values_dict (dict): The dictionary to encode as protobuf fields. Returns: Dict[str, ~google.cloud.firestore_v1beta1.types.Value]: A dictionary of string keys and ``Value`` protobufs as dictionary values.
Below is the the instruction that describes the task: ### Input: Encode a dictionary into protobuf ``Value``-s. Args: values_dict (dict): The dictionary to encode as protobuf fields. Returns: Dict[str, ~google.cloud.firestore_v1beta1.types.Value]: A dictionary of string keys and ``Value`` protobufs as dictionary values. ### Response: def encode_dict(values_dict): """Encode a dictionary into protobuf ``Value``-s. Args: values_dict (dict): The dictionary to encode as protobuf fields. Returns: Dict[str, ~google.cloud.firestore_v1beta1.types.Value]: A dictionary of string keys and ``Value`` protobufs as dictionary values. """ return {key: encode_value(value) for key, value in six.iteritems(values_dict)}
def list_container_objects(container_name, profile, **libcloud_kwargs): ''' List container objects (e.g. files) for the given container_id on the given profile :param container_name: Container name :type container_name: ``str`` :param profile: The profile key :type profile: ``str`` :param libcloud_kwargs: Extra arguments for the driver's list_container_objects method :type libcloud_kwargs: ``dict`` CLI Example: .. code-block:: bash salt myminion libcloud_storage.list_container_objects MyFolder profile1 ''' conn = _get_driver(profile=profile) container = conn.get_container(container_name) libcloud_kwargs = salt.utils.args.clean_kwargs(**libcloud_kwargs) objects = conn.list_container_objects(container, **libcloud_kwargs) ret = [] for obj in objects: ret.append({ 'name': obj.name, 'size': obj.size, 'hash': obj.hash, 'container': obj.container.name, 'extra': obj.extra, 'meta_data': obj.meta_data }) return ret
List container objects (e.g. files) for the given container_id on the given profile :param container_name: Container name :type container_name: ``str`` :param profile: The profile key :type profile: ``str`` :param libcloud_kwargs: Extra arguments for the driver's list_container_objects method :type libcloud_kwargs: ``dict`` CLI Example: .. code-block:: bash salt myminion libcloud_storage.list_container_objects MyFolder profile1
Below is the the instruction that describes the task: ### Input: List container objects (e.g. files) for the given container_id on the given profile :param container_name: Container name :type container_name: ``str`` :param profile: The profile key :type profile: ``str`` :param libcloud_kwargs: Extra arguments for the driver's list_container_objects method :type libcloud_kwargs: ``dict`` CLI Example: .. code-block:: bash salt myminion libcloud_storage.list_container_objects MyFolder profile1 ### Response: def list_container_objects(container_name, profile, **libcloud_kwargs): ''' List container objects (e.g. files) for the given container_id on the given profile :param container_name: Container name :type container_name: ``str`` :param profile: The profile key :type profile: ``str`` :param libcloud_kwargs: Extra arguments for the driver's list_container_objects method :type libcloud_kwargs: ``dict`` CLI Example: .. code-block:: bash salt myminion libcloud_storage.list_container_objects MyFolder profile1 ''' conn = _get_driver(profile=profile) container = conn.get_container(container_name) libcloud_kwargs = salt.utils.args.clean_kwargs(**libcloud_kwargs) objects = conn.list_container_objects(container, **libcloud_kwargs) ret = [] for obj in objects: ret.append({ 'name': obj.name, 'size': obj.size, 'hash': obj.hash, 'container': obj.container.name, 'extra': obj.extra, 'meta_data': obj.meta_data }) return ret
def addSRNLayers(self, inc, hidc, outc): """ Wraps SRN.addThreeLayers() for compatibility. """ self.addThreeLayers(inc, hidc, outc)
Wraps SRN.addThreeLayers() for compatibility.
Below is the the instruction that describes the task: ### Input: Wraps SRN.addThreeLayers() for compatibility. ### Response: def addSRNLayers(self, inc, hidc, outc): """ Wraps SRN.addThreeLayers() for compatibility. """ self.addThreeLayers(inc, hidc, outc)
def get_with_rtdcbase(self, col1, col2, method, dataset, viscosity=None, add_px_err=False): """Convenience method that extracts the metadata from RTDCBase Parameters ---------- col1: str Name of the first feature of all isoelastics (e.g. isoel[0][:,0]) col2: str Name of the second feature of all isoelastics (e.g. isoel[0][:,1]) method: str The method used to compute the isoelastics (must be one of `VALID_METHODS`). dataset: dclab.rtdc_dataset.RTDCBase The dataset from which to obtain the metadata. viscosity: float or `None` Viscosity of the medium in mPa*s. If set to `None`, the flow rate of the imported data will be used (only do this if you do not need the correct values for elastic moduli). add_px_err: bool If True, add pixelation errors according to C. Herold (2017), https://arxiv.org/abs/1704.00572 """ cfg = dataset.config return self.get(col1=col1, col2=col2, method=method, channel_width=cfg["setup"]["channel width"], flow_rate=cfg["setup"]["flow rate"], viscosity=viscosity, add_px_err=add_px_err, px_um=cfg["imaging"]["pixel size"])
Convenience method that extracts the metadata from RTDCBase Parameters ---------- col1: str Name of the first feature of all isoelastics (e.g. isoel[0][:,0]) col2: str Name of the second feature of all isoelastics (e.g. isoel[0][:,1]) method: str The method used to compute the isoelastics (must be one of `VALID_METHODS`). dataset: dclab.rtdc_dataset.RTDCBase The dataset from which to obtain the metadata. viscosity: float or `None` Viscosity of the medium in mPa*s. If set to `None`, the flow rate of the imported data will be used (only do this if you do not need the correct values for elastic moduli). add_px_err: bool If True, add pixelation errors according to C. Herold (2017), https://arxiv.org/abs/1704.00572
Below is the the instruction that describes the task: ### Input: Convenience method that extracts the metadata from RTDCBase Parameters ---------- col1: str Name of the first feature of all isoelastics (e.g. isoel[0][:,0]) col2: str Name of the second feature of all isoelastics (e.g. isoel[0][:,1]) method: str The method used to compute the isoelastics (must be one of `VALID_METHODS`). dataset: dclab.rtdc_dataset.RTDCBase The dataset from which to obtain the metadata. viscosity: float or `None` Viscosity of the medium in mPa*s. If set to `None`, the flow rate of the imported data will be used (only do this if you do not need the correct values for elastic moduli). add_px_err: bool If True, add pixelation errors according to C. Herold (2017), https://arxiv.org/abs/1704.00572 ### Response: def get_with_rtdcbase(self, col1, col2, method, dataset, viscosity=None, add_px_err=False): """Convenience method that extracts the metadata from RTDCBase Parameters ---------- col1: str Name of the first feature of all isoelastics (e.g. isoel[0][:,0]) col2: str Name of the second feature of all isoelastics (e.g. isoel[0][:,1]) method: str The method used to compute the isoelastics (must be one of `VALID_METHODS`). dataset: dclab.rtdc_dataset.RTDCBase The dataset from which to obtain the metadata. viscosity: float or `None` Viscosity of the medium in mPa*s. If set to `None`, the flow rate of the imported data will be used (only do this if you do not need the correct values for elastic moduli). add_px_err: bool If True, add pixelation errors according to C. Herold (2017), https://arxiv.org/abs/1704.00572 """ cfg = dataset.config return self.get(col1=col1, col2=col2, method=method, channel_width=cfg["setup"]["channel width"], flow_rate=cfg["setup"]["flow rate"], viscosity=viscosity, add_px_err=add_px_err, px_um=cfg["imaging"]["pixel size"])
def read_metadata(self, file_path): # type: (str) ->str """ Get version out of a .ini file (or .cfg) :return: """ config = configparser.ConfigParser() config.read(file_path) try: return unicode(config["metadata"]["version"]) except KeyError: return ""
Get version out of a .ini file (or .cfg) :return:
Below is the the instruction that describes the task: ### Input: Get version out of a .ini file (or .cfg) :return: ### Response: def read_metadata(self, file_path): # type: (str) ->str """ Get version out of a .ini file (or .cfg) :return: """ config = configparser.ConfigParser() config.read(file_path) try: return unicode(config["metadata"]["version"]) except KeyError: return ""
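Since this is plain `configparser` usage, a runnable illustration needs nothing beyond the standard library; note that `unicode` in the snippet above only exists on Python 2, so the sketch below simply returns a `str`.

```python
import configparser
import tempfile

cfg_text = """\
[metadata]
name = example-package
version = 1.2.3
"""

with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as fh:
    fh.write(cfg_text)
    path = fh.name

config = configparser.ConfigParser()
config.read(path)
try:
    version = config["metadata"]["version"]
except KeyError:
    version = ""          # mirrors the empty-string fallback above
print(version)            # 1.2.3
```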
def load(path, name, cluster):
    """
    Load a node from the path on disk to the config files, the node name and the
    cluster the node is part of.
    """
    node_path = os.path.join(path, name)
    filename = os.path.join(node_path, 'node.conf')
    with open(filename, 'r') as f:
        data = yaml.safe_load(f)
    try:
        itf = data['interfaces']
        initial_token = None
        if 'initial_token' in data:
            initial_token = data['initial_token']
        cassandra_version = None
        if 'cassandra_version' in data:
            cassandra_version = LooseVersion(data['cassandra_version'])
        remote_debug_port = 2000
        if 'remote_debug_port' in data:
            remote_debug_port = data['remote_debug_port']
        binary_interface = None
        if 'binary' in itf and itf['binary'] is not None:
            binary_interface = tuple(itf['binary'])
        thrift_interface = None
        if 'thrift' in itf and itf['thrift'] is not None:
            thrift_interface = tuple(itf['thrift'])
        node = cluster.create_node(data['name'], data['auto_bootstrap'], thrift_interface,
                                   tuple(itf['storage']), data['jmx_port'], remote_debug_port,
                                   initial_token, save=False, binary_interface=binary_interface,
                                   byteman_port=data['byteman_port'],
                                   derived_cassandra_version=cassandra_version)
        node.status = data['status']
        if 'pid' in data:
            node.pid = int(data['pid'])
        if 'install_dir' in data:
            node.__install_dir = data['install_dir']
        if 'config_options' in data:
            node.__config_options = data['config_options']
        if 'dse_config_options' in data:
            node._dse_config_options = data['dse_config_options']
        if 'environment_variables' in data:
            node.__environment_variables = data['environment_variables']
        if 'data_center' in data:
            node.data_center = data['data_center']
        if 'workloads' in data:
            node.workloads = data['workloads']
        return node
    except KeyError as k:
        raise common.LoadError("Error Loading " + filename + ", missing property: " + str(k))
Load a node from the path on disk to the config files, the node name and the
cluster the node is part of.
Below is the the instruction that describes the task:
### Input:
Load a node from the path on disk to the config files, the node name and the
cluster the node is part of.
### Response:
def load(path, name, cluster):
    """
    Load a node from the path on disk to the config files, the node name and the
    cluster the node is part of.
    """
    node_path = os.path.join(path, name)
    filename = os.path.join(node_path, 'node.conf')
    with open(filename, 'r') as f:
        data = yaml.safe_load(f)
    try:
        itf = data['interfaces']
        initial_token = None
        if 'initial_token' in data:
            initial_token = data['initial_token']
        cassandra_version = None
        if 'cassandra_version' in data:
            cassandra_version = LooseVersion(data['cassandra_version'])
        remote_debug_port = 2000
        if 'remote_debug_port' in data:
            remote_debug_port = data['remote_debug_port']
        binary_interface = None
        if 'binary' in itf and itf['binary'] is not None:
            binary_interface = tuple(itf['binary'])
        thrift_interface = None
        if 'thrift' in itf and itf['thrift'] is not None:
            thrift_interface = tuple(itf['thrift'])
        node = cluster.create_node(data['name'], data['auto_bootstrap'], thrift_interface,
                                   tuple(itf['storage']), data['jmx_port'], remote_debug_port,
                                   initial_token, save=False, binary_interface=binary_interface,
                                   byteman_port=data['byteman_port'],
                                   derived_cassandra_version=cassandra_version)
        node.status = data['status']
        if 'pid' in data:
            node.pid = int(data['pid'])
        if 'install_dir' in data:
            node.__install_dir = data['install_dir']
        if 'config_options' in data:
            node.__config_options = data['config_options']
        if 'dse_config_options' in data:
            node._dse_config_options = data['dse_config_options']
        if 'environment_variables' in data:
            node.__environment_variables = data['environment_variables']
        if 'data_center' in data:
            node.data_center = data['data_center']
        if 'workloads' in data:
            node.workloads = data['workloads']
        return node
    except KeyError as k:
        raise common.LoadError("Error Loading " + filename + ", missing property: " + str(k))
def with_setup(self, colormode=None, colorpalette=None, extend_colors=False): """ Return a new Colorful object with the given color config. """ colorful = Colorful( colormode=self.colorful.colormode, colorpalette=copy.copy(self.colorful.colorpalette) ) colorful.setup( colormode=colormode, colorpalette=colorpalette, extend_colors=extend_colors ) yield colorful
Return a new Colorful object with the given color config.
Below is the the instruction that describes the task: ### Input: Return a new Colorful object with the given color config. ### Response: def with_setup(self, colormode=None, colorpalette=None, extend_colors=False): """ Return a new Colorful object with the given color config. """ colorful = Colorful( colormode=self.colorful.colormode, colorpalette=copy.copy(self.colorful.colorpalette) ) colorful.setup( colormode=colormode, colorpalette=colorpalette, extend_colors=extend_colors ) yield colorful
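The `yield` strongly suggests this method is meant to be wrapped as a context manager, so callers get a temporarily reconfigured object without touching the shared one. A self-contained sketch of that pattern, with a made-up `Painter` class standing in for the real colour object:

```python
import contextlib
import copy

class Painter:
    """Tiny stand-in for the real object: holds a mode and a palette."""
    def __init__(self, colormode=8, colorpalette=None):
        self.colormode = colormode
        self.colorpalette = dict(colorpalette or {})

    def setup(self, colormode=None, colorpalette=None, extend_colors=False):
        if colormode is not None:
            self.colormode = colormode
        if colorpalette is not None:
            if extend_colors:
                self.colorpalette.update(colorpalette)
            else:
                self.colorpalette = dict(colorpalette)

default = Painter(colormode=8, colorpalette={"warn": "#ffaa00"})

@contextlib.contextmanager
def with_setup(colormode=None, colorpalette=None, extend_colors=False):
    # Build an independent copy so the temporary config never leaks into `default`.
    temp = Painter(default.colormode, copy.copy(default.colorpalette))
    temp.setup(colormode=colormode, colorpalette=colorpalette, extend_colors=extend_colors)
    yield temp

with with_setup(colormode=256, colorpalette={"ok": "#00ff00"}, extend_colors=True) as c:
    print(c.colormode, sorted(c.colorpalette))       # 256 ['ok', 'warn']
print(default.colormode, sorted(default.colorpalette))  # 8 ['warn'], unchanged
```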
def templated_docstring(**docs): """ Decorator allowing the use of templated docstrings. Examples -------- >>> @templated_docstring(foo='bar') ... def my_func(self, foo): ... '''{foo}''' ... >>> my_func.__doc__ 'bar' """ def decorator(f): f.__doc__ = format_docstring(f.__name__, f.__doc__, docs) return f return decorator
Decorator allowing the use of templated docstrings. Examples -------- >>> @templated_docstring(foo='bar') ... def my_func(self, foo): ... '''{foo}''' ... >>> my_func.__doc__ 'bar'
Below is the the instruction that describes the task: ### Input: Decorator allowing the use of templated docstrings. Examples -------- >>> @templated_docstring(foo='bar') ... def my_func(self, foo): ... '''{foo}''' ... >>> my_func.__doc__ 'bar' ### Response: def templated_docstring(**docs): """ Decorator allowing the use of templated docstrings. Examples -------- >>> @templated_docstring(foo='bar') ... def my_func(self, foo): ... '''{foo}''' ... >>> my_func.__doc__ 'bar' """ def decorator(f): f.__doc__ = format_docstring(f.__name__, f.__doc__, docs) return f return decorator
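The helper `format_docstring` used above is not shown; the sketch below inlines a simplified `str.format` substitution to show what the decorator buys you, namely sharing one parameter description across several functions.

```python
def templated_docstring(**docs):
    # Simplified variant: substitute the keyword arguments into the docstring template.
    def decorator(f):
        if f.__doc__:
            f.__doc__ = f.__doc__.format(**docs)
        return f
    return decorator

COMMON = "timeout : float\n        Seconds to wait before giving up."

@templated_docstring(timeout_doc=COMMON)
def fetch(url, timeout=5.0):
    """Fetch a URL.

    Parameters
    ----------
    {timeout_doc}
    """

@templated_docstring(timeout_doc=COMMON)
def delete(url, timeout=5.0):
    """Delete a URL.

    Parameters
    ----------
    {timeout_doc}
    """

print(fetch.__doc__)   # both functions now share the same timeout description
```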
def findOptimalResults(expName, suite, outFile): """ Go through every experiment in the specified folder. For each experiment, find the iteration with the best validation score, and return the metrics associated with that iteration. """ writer = csv.writer(outFile) headers = ["testAccuracy", "bgAccuracy", "maxTotalAccuracy", "experiment path"] writer.writerow(headers) info = [] print("\n================",expName,"=====================") try: # Retrieve the last totalCorrect from each experiment # Print them sorted from best to worst values, params = suite.get_values_fix_params( expName, 0, "testerror", "last") for p in params: expPath = p["name"] if not "results" in expPath: expPath = os.path.join("results", expPath) maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy = bestScore(expPath, suite) row = [maxTestAccuracy, maxBGAccuracy, maxTotalAccuracy, expPath] info.append(row) writer.writerow(row) print(tabulate(info, headers=headers, tablefmt="grid")) except: print("Couldn't analyze experiment",expName)
Go through every experiment in the specified folder. For each experiment, find the iteration with the best validation score, and return the metrics associated with that iteration.
Below is the the instruction that describes the task: ### Input: Go through every experiment in the specified folder. For each experiment, find the iteration with the best validation score, and return the metrics associated with that iteration. ### Response: def findOptimalResults(expName, suite, outFile): """ Go through every experiment in the specified folder. For each experiment, find the iteration with the best validation score, and return the metrics associated with that iteration. """ writer = csv.writer(outFile) headers = ["testAccuracy", "bgAccuracy", "maxTotalAccuracy", "experiment path"] writer.writerow(headers) info = [] print("\n================",expName,"=====================") try: # Retrieve the last totalCorrect from each experiment # Print them sorted from best to worst values, params = suite.get_values_fix_params( expName, 0, "testerror", "last") for p in params: expPath = p["name"] if not "results" in expPath: expPath = os.path.join("results", expPath) maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy = bestScore(expPath, suite) row = [maxTestAccuracy, maxBGAccuracy, maxTotalAccuracy, expPath] info.append(row) writer.writerow(row) print(tabulate(info, headers=headers, tablefmt="grid")) except: print("Couldn't analyze experiment",expName)
def from_json(cls, json): """Creates an instance of the InputReader for the given input shard's state. Args: json: The InputReader state as a dict-like object. Returns: An instance of the InputReader configured using the given JSON parameters. """ # Strip out unrecognized parameters, as introduced by b/5960884. params = dict((str(k), v) for k, v in json.iteritems() if k in cls._PARAMS) # This is not symmetric with to_json() wrt. PROTOTYPE_REQUEST_PARAM because # the constructor parameters need to be JSON-encodable, so the decoding # needs to happen there anyways. if cls._OFFSET_PARAM in params: params[cls._OFFSET_PARAM] = base64.b64decode(params[cls._OFFSET_PARAM]) return cls(**params)
Creates an instance of the InputReader for the given input shard's state. Args: json: The InputReader state as a dict-like object. Returns: An instance of the InputReader configured using the given JSON parameters.
Below is the the instruction that describes the task: ### Input: Creates an instance of the InputReader for the given input shard's state. Args: json: The InputReader state as a dict-like object. Returns: An instance of the InputReader configured using the given JSON parameters. ### Response: def from_json(cls, json): """Creates an instance of the InputReader for the given input shard's state. Args: json: The InputReader state as a dict-like object. Returns: An instance of the InputReader configured using the given JSON parameters. """ # Strip out unrecognized parameters, as introduced by b/5960884. params = dict((str(k), v) for k, v in json.iteritems() if k in cls._PARAMS) # This is not symmetric with to_json() wrt. PROTOTYPE_REQUEST_PARAM because # the constructor parameters need to be JSON-encodable, so the decoding # needs to happen there anyways. if cls._OFFSET_PARAM in params: params[cls._OFFSET_PARAM] = base64.b64decode(params[cls._OFFSET_PARAM]) return cls(**params)
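The base64 decoding exists because the reader's offset is a raw byte string that cannot ride through JSON directly. A minimal round-trip sketch of that idea (the `BlobReader` class and its parameters are invented for illustration):

```python
import base64
import json

class BlobReader:
    """Minimal reader sketch: only the parameters relevant to (de)serialization."""
    _PARAMS = {"blob_key", "start_offset"}

    def __init__(self, blob_key, start_offset=b""):
        self.blob_key = blob_key
        self.start_offset = start_offset   # raw bytes, not JSON-serializable as-is

    def to_json(self):
        # bytes -> base64 text so the state survives a JSON round trip
        return {"blob_key": self.blob_key,
                "start_offset": base64.b64encode(self.start_offset).decode("ascii")}

    @classmethod
    def from_json(cls, state):
        params = {k: v for k, v in state.items() if k in cls._PARAMS}  # drop unknown keys
        params["start_offset"] = base64.b64decode(params["start_offset"])
        return cls(**params)

reader = BlobReader("blob-123", start_offset=b"\x00\x10")
restored = BlobReader.from_json(json.loads(json.dumps(reader.to_json())))
print(restored.blob_key, restored.start_offset)   # blob-123 b'\x00\x10'
```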
def _padding_slices_inner(lhs_arr, rhs_arr, axis, offset, pad_mode): """Return slices into the inner array part for a given ``pad_mode``. When performing padding, these slices yield the values from the inner part of a larger array that are to be assigned to the excess part of the same array. Slices for both sides ("left", "right") of the arrays in a given ``axis`` are returned. """ # Calculate the start and stop indices for the inner part istart_inner = offset[axis] n_large = max(lhs_arr.shape[axis], rhs_arr.shape[axis]) n_small = min(lhs_arr.shape[axis], rhs_arr.shape[axis]) istop_inner = istart_inner + n_small # Number of values padded to left and right n_pad_l = istart_inner n_pad_r = n_large - istop_inner if pad_mode == 'periodic': # left: n_pad_l forward, ending at istop_inner - 1 pad_slc_l = slice(istop_inner - n_pad_l, istop_inner) # right: n_pad_r forward, starting at istart_inner pad_slc_r = slice(istart_inner, istart_inner + n_pad_r) elif pad_mode == 'symmetric': # left: n_pad_l backward, ending at istart_inner + 1 pad_slc_l = slice(istart_inner + n_pad_l, istart_inner, -1) # right: n_pad_r backward, starting at istop_inner - 2 # For the corner case that the stopping index is -1, we need to # replace it with None, since -1 as stopping index is equivalent # to the last index, which is not what we want (0 as last index). istop_r = istop_inner - 2 - n_pad_r if istop_r == -1: istop_r = None pad_slc_r = slice(istop_inner - 2, istop_r, -1) elif pad_mode in ('order0', 'order1'): # left: only the first entry, using a slice to avoid squeezing pad_slc_l = slice(istart_inner, istart_inner + 1) # right: only last entry pad_slc_r = slice(istop_inner - 1, istop_inner) else: # Slices are not used, returning trivial ones. The function should not # be used for other modes anyway. pad_slc_l, pad_slc_r = slice(0), slice(0) return pad_slc_l, pad_slc_r
Return slices into the inner array part for a given ``pad_mode``. When performing padding, these slices yield the values from the inner part of a larger array that are to be assigned to the excess part of the same array. Slices for both sides ("left", "right") of the arrays in a given ``axis`` are returned.
Below is the the instruction that describes the task: ### Input: Return slices into the inner array part for a given ``pad_mode``. When performing padding, these slices yield the values from the inner part of a larger array that are to be assigned to the excess part of the same array. Slices for both sides ("left", "right") of the arrays in a given ``axis`` are returned. ### Response: def _padding_slices_inner(lhs_arr, rhs_arr, axis, offset, pad_mode): """Return slices into the inner array part for a given ``pad_mode``. When performing padding, these slices yield the values from the inner part of a larger array that are to be assigned to the excess part of the same array. Slices for both sides ("left", "right") of the arrays in a given ``axis`` are returned. """ # Calculate the start and stop indices for the inner part istart_inner = offset[axis] n_large = max(lhs_arr.shape[axis], rhs_arr.shape[axis]) n_small = min(lhs_arr.shape[axis], rhs_arr.shape[axis]) istop_inner = istart_inner + n_small # Number of values padded to left and right n_pad_l = istart_inner n_pad_r = n_large - istop_inner if pad_mode == 'periodic': # left: n_pad_l forward, ending at istop_inner - 1 pad_slc_l = slice(istop_inner - n_pad_l, istop_inner) # right: n_pad_r forward, starting at istart_inner pad_slc_r = slice(istart_inner, istart_inner + n_pad_r) elif pad_mode == 'symmetric': # left: n_pad_l backward, ending at istart_inner + 1 pad_slc_l = slice(istart_inner + n_pad_l, istart_inner, -1) # right: n_pad_r backward, starting at istop_inner - 2 # For the corner case that the stopping index is -1, we need to # replace it with None, since -1 as stopping index is equivalent # to the last index, which is not what we want (0 as last index). istop_r = istop_inner - 2 - n_pad_r if istop_r == -1: istop_r = None pad_slc_r = slice(istop_inner - 2, istop_r, -1) elif pad_mode in ('order0', 'order1'): # left: only the first entry, using a slice to avoid squeezing pad_slc_l = slice(istart_inner, istart_inner + 1) # right: only last entry pad_slc_r = slice(istop_inner - 1, istop_inner) else: # Slices are not used, returning trivial ones. The function should not # be used for other modes anyway. pad_slc_l, pad_slc_r = slice(0), slice(0) return pad_slc_l, pad_slc_r
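A quick way to see what those slices produce is to compare against `numpy.pad` on a small 1-D example. Note that the 'symmetric' branch above mirrors about the boundary without repeating the edge value, so its closest numpy analogue is `mode='reflect'`; 'order0' corresponds to `mode='edge'`.

```python
import numpy as np

inner = np.arange(1, 6)          # [1 2 3 4 5], the "small" array
n_pad_l, n_pad_r = 2, 3          # excess cells on the left / right of the large array

# 'periodic' copies from the opposite end of the inner part (numpy's 'wrap'):
print(np.pad(inner, (n_pad_l, n_pad_r), mode="wrap"))
# [4 5 1 2 3 4 5 1 2 3]

# 'symmetric' (as implemented above) mirrors about the edge without repeating it:
print(np.pad(inner, (n_pad_l, n_pad_r), mode="reflect"))
# [3 2 1 2 3 4 5 4 3 2]

# 'order0' repeats the edge value, matching the single-entry slices:
print(np.pad(inner, (n_pad_l, n_pad_r), mode="edge"))
# [1 1 1 2 3 4 5 5 5 5]
```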
def update_member(self, member_id, peer_urls): """ Update the configuration of an existing member in the cluster. :param member_id: ID of the member to update :param peer_urls: new list of peer urls the member will use to communicate with the cluster """ member_update_request = etcdrpc.MemberUpdateRequest(ID=member_id, peerURLs=peer_urls) self.clusterstub.MemberUpdate( member_update_request, self.timeout, credentials=self.call_credentials, metadata=self.metadata )
Update the configuration of an existing member in the cluster. :param member_id: ID of the member to update :param peer_urls: new list of peer urls the member will use to communicate with the cluster
Below is the the instruction that describes the task: ### Input: Update the configuration of an existing member in the cluster. :param member_id: ID of the member to update :param peer_urls: new list of peer urls the member will use to communicate with the cluster ### Response: def update_member(self, member_id, peer_urls): """ Update the configuration of an existing member in the cluster. :param member_id: ID of the member to update :param peer_urls: new list of peer urls the member will use to communicate with the cluster """ member_update_request = etcdrpc.MemberUpdateRequest(ID=member_id, peerURLs=peer_urls) self.clusterstub.MemberUpdate( member_update_request, self.timeout, credentials=self.call_credentials, metadata=self.metadata )
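Assuming the surrounding class is the `python-etcd3` client (the `etcdrpc` stub suggests so), repointing a member's peer URL might look like the following; the member attribute names (`name`, `id`) are taken from that library and worth double-checking against your installed version.

```python
import etcd3

# Connect to a local etcd endpoint (adjust host/port for your cluster).
client = etcd3.client(host="127.0.0.1", port=2379)

# Look up the member we want to repoint; `members` yields the current cluster view.
target = next(m for m in client.members if m.name == "etcd-node-2")

# Point that member at a new peer URL; its other settings stay untouched.
client.update_member(target.id, ["http://10.0.0.12:2380"])
```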
def index(self, item, minindex=0, maxindex=None):
    """Provide an index of first occurrence of item in the list. (or raise
    a ValueError if item not present)
    If item is not a string, will raise a TypeError.
    minindex and maxindex are also optional arguments
    s.index(x[, i[, j]]) return smallest k such that s[k] == x and i <= k < j
    """
    if maxindex is None:
        maxindex = len(self)
    minindex = max(0, minindex) - 1
    maxindex = min(len(self), maxindex)
    if not isinstance(item, str):
        raise TypeError(
            'Members of this object must be strings. '
            'You supplied \"%s\"' % type(item))
    index = minindex
    while index < maxindex:
        index += 1
        if item.lower() == self[index].lower():
            return index
    raise ValueError(': list.index(x): x not in list')
Provide an index of first occurrence of item in the list. (or raise
a ValueError if item not present)
If item is not a string, will raise a TypeError.
minindex and maxindex are also optional arguments
s.index(x[, i[, j]]) return smallest k such that s[k] == x and i <= k < j
Below is the the instruction that describes the task:
### Input:
Provide an index of first occurrence of item in the list. (or raise
a ValueError if item not present)
If item is not a string, will raise a TypeError.
minindex and maxindex are also optional arguments
s.index(x[, i[, j]]) return smallest k such that s[k] == x and i <= k < j
### Response:
def index(self, item, minindex=0, maxindex=None):
    """Provide an index of first occurrence of item in the list. (or raise
    a ValueError if item not present)
    If item is not a string, will raise a TypeError.
    minindex and maxindex are also optional arguments
    s.index(x[, i[, j]]) return smallest k such that s[k] == x and i <= k < j
    """
    if maxindex is None:
        maxindex = len(self)
    minindex = max(0, minindex) - 1
    maxindex = min(len(self), maxindex)
    if not isinstance(item, str):
        raise TypeError(
            'Members of this object must be strings. '
            'You supplied \"%s\"' % type(item))
    index = minindex
    while index < maxindex:
        index += 1
        if item.lower() == self[index].lower():
            return index
    raise ValueError(': list.index(x): x not in list')
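A compact, self-contained variant of the same idea, rewritten with `range` so the scan stops at the last valid index; for items that are present it behaves like the method above.

```python
class CaselessList(list):
    """List of strings whose index() ignores case (sketch of the method above)."""

    def index(self, item, minindex=0, maxindex=None):
        if not isinstance(item, str):
            raise TypeError('Members of this object must be strings. '
                            'You supplied "%s"' % type(item))
        if maxindex is None:
            maxindex = len(self)
        for k in range(max(0, minindex), min(len(self), maxindex)):
            if self[k].lower() == item.lower():
                return k
        raise ValueError('list.index(x): x not in list')

names = CaselessList(["Alice", "BOB", "Carol"])
print(names.index("bob"))        # 1, despite the different capitalisation
print(names.index("carol", 1))   # 2, searching from position 1 onwards
```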
def get_user_by_id(bridge_id, include_course_summary=True): """ :param bridge_id: integer Return a list of BridgeUsers objects with custom fields """ url = author_id_url(bridge_id) + "?%s" % CUSTOM_FIELD if include_course_summary: url = "%s&%s" % (url, COURSE_SUMMARY) resp = get_resource(url) return _process_json_resp_data(resp)
:param bridge_id: integer Return a list of BridgeUsers objects with custom fields
Below is the the instruction that describes the task: ### Input: :param bridge_id: integer Return a list of BridgeUsers objects with custom fields ### Response: def get_user_by_id(bridge_id, include_course_summary=True): """ :param bridge_id: integer Return a list of BridgeUsers objects with custom fields """ url = author_id_url(bridge_id) + "?%s" % CUSTOM_FIELD if include_course_summary: url = "%s&%s" % (url, COURSE_SUMMARY) resp = get_resource(url) return _process_json_resp_data(resp)
def recall(self, label=None): """ Returns recall or recall for a given label (category) if specified. """ if label is None: return self.call("recall") else: return self.call("recall", float(label))
Returns recall or recall for a given label (category) if specified.
Below is the the instruction that describes the task: ### Input: Returns recall or recall for a given label (category) if specified. ### Response: def recall(self, label=None): """ Returns recall or recall for a given label (category) if specified. """ if label is None: return self.call("recall") else: return self.call("recall", float(label))
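This wrapper has the shape of PySpark's `pyspark.mllib.evaluation.MulticlassMetrics.recall`. Assuming that is indeed the surrounding class, a toy evaluation looks like this (it needs a working Spark installation, and the no-argument form is deprecated in newer releases in favour of `accuracy`):

```python
from pyspark import SparkContext
from pyspark.mllib.evaluation import MulticlassMetrics

sc = SparkContext.getOrCreate()

# (prediction, label) pairs for a tiny two-class problem
prediction_and_labels = sc.parallelize([
    (0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 1.0), (1.0, 0.0),
])

metrics = MulticlassMetrics(prediction_and_labels)
print(metrics.recall(1.0))   # 2 of the 3 true 1.0 labels were recovered -> ~0.667
```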
def estimate_position_angle(self,param='position_angle',burn=None,clip=10.0,alpha=0.32): """ Estimate the position angle from the posterior dealing with periodicity. """ # Transform so peak in the middle of the distribution pa = self.samples.get(param,burn=burn,clip=clip) peak = ugali.utils.stats.kde_peak(pa) shift = 180.*((pa+90-peak)>180) pa -= shift # Get the kde interval ret = ugali.utils.stats.peak_interval(pa,alpha) if ret[0] < 0: ret[0] += 180.; ret[1][0] += 180.; ret[1][1] += 180.; return ret
Estimate the position angle from the posterior dealing with periodicity.
Below is the the instruction that describes the task: ### Input: Estimate the position angle from the posterior dealing with periodicity. ### Response: def estimate_position_angle(self,param='position_angle',burn=None,clip=10.0,alpha=0.32): """ Estimate the position angle from the posterior dealing with periodicity. """ # Transform so peak in the middle of the distribution pa = self.samples.get(param,burn=burn,clip=clip) peak = ugali.utils.stats.kde_peak(pa) shift = 180.*((pa+90-peak)>180) pa -= shift # Get the kde interval ret = ugali.utils.stats.peak_interval(pa,alpha) if ret[0] < 0: ret[0] += 180.; ret[1][0] += 180.; ret[1][1] += 180.; return ret
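The shift trick is easiest to see on synthetic data: position angles are only defined modulo 180 deg, so a posterior peaked near the wrap looks bimodal until the far-side samples are folded back. A numpy sketch of just that step (the KDE peak and the exact interval estimator from `ugali` are replaced by simple stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
# True posterior: position angle ~ N(2, 5) deg, then wrapped into [0, 180)
pa = rng.normal(loc=2.0, scale=5.0, size=10_000) % 180.0

peak = 2.0  # stand-in for the KDE peak estimate used in the real code

# Fold samples that landed on the far side of the wrap back by 180 deg,
# so the distribution is unimodal around the peak before taking an interval.
pa = pa - 180.0 * ((pa + 90.0 - peak) > 180.0)

lo, hi = np.percentile(pa, [16, 84])   # ~68% interval, stand-in for peak_interval
print(round(lo, 1), round(hi, 1))      # roughly -3.0 and 7.0
# Without the fold, the same percentiles on the wrapped samples would span
# almost the whole [0, 180) range and be meaningless.
```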
def basic(request, response, verify_user, realm='simple', context=None, **kwargs): """Basic HTTP Authentication""" http_auth = request.auth response.set_header('WWW-Authenticate', 'Basic') if http_auth is None: return if isinstance(http_auth, bytes): http_auth = http_auth.decode('utf8') try: auth_type, user_and_key = http_auth.split(' ', 1) except ValueError: raise HTTPUnauthorized('Authentication Error', 'Authentication header is improperly formed', challenges=('Basic realm="{}"'.format(realm), )) if auth_type.lower() == 'basic': try: user_id, key = base64.decodebytes(bytes(user_and_key.strip(), 'utf8')).decode('utf8').split(':', 1) try: user = verify_user(user_id, key) except TypeError: user = verify_user(user_id, key, context) if user: response.set_header('WWW-Authenticate', '') return user except (binascii.Error, ValueError): raise HTTPUnauthorized('Authentication Error', 'Unable to determine user and password with provided encoding', challenges=('Basic realm="{}"'.format(realm), )) return False
Basic HTTP Authentication
Below is the the instruction that describes the task: ### Input: Basic HTTP Authentication ### Response: def basic(request, response, verify_user, realm='simple', context=None, **kwargs): """Basic HTTP Authentication""" http_auth = request.auth response.set_header('WWW-Authenticate', 'Basic') if http_auth is None: return if isinstance(http_auth, bytes): http_auth = http_auth.decode('utf8') try: auth_type, user_and_key = http_auth.split(' ', 1) except ValueError: raise HTTPUnauthorized('Authentication Error', 'Authentication header is improperly formed', challenges=('Basic realm="{}"'.format(realm), )) if auth_type.lower() == 'basic': try: user_id, key = base64.decodebytes(bytes(user_and_key.strip(), 'utf8')).decode('utf8').split(':', 1) try: user = verify_user(user_id, key) except TypeError: user = verify_user(user_id, key, context) if user: response.set_header('WWW-Authenticate', '') return user except (binascii.Error, ValueError): raise HTTPUnauthorized('Authentication Error', 'Unable to determine user and password with provided encoding', challenges=('Basic realm="{}"'.format(realm), )) return False
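The header handling above is standard RFC 7617 Basic authentication; a standard-library sketch of both sides of that exchange (the function names are illustrative):

```python
import base64

def make_basic_header(user, password):
    # Client side: "Authorization: Basic base64(user:password)"
    token = base64.b64encode("{}:{}".format(user, password).encode("utf8")).decode("ascii")
    return "Basic " + token

def parse_basic_header(header):
    # Server side: split off the scheme, decode, then split on the *first* colon
    # so passwords containing ':' survive intact.
    auth_type, payload = header.split(" ", 1)
    if auth_type.lower() != "basic":
        raise ValueError("not a Basic auth header")
    user, _, password = base64.b64decode(payload).decode("utf8").partition(":")
    return user, password

header = make_basic_header("alice", "s3cret:with:colons")
print(parse_basic_header(header))   # ('alice', 's3cret:with:colons')
```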
def unit(self, parameter): "Get the unit for given parameter" parameter = self._get_parameter_name(parameter).lower() return self._parameters[parameter]['Unit']
Get the unit for given parameter
Below is the the instruction that describes the task: ### Input: Get the unit for given parameter ### Response: def unit(self, parameter): "Get the unit for given parameter" parameter = self._get_parameter_name(parameter).lower() return self._parameters[parameter]['Unit']