code (string, 75–104k chars) | docstring (string, 1–46.9k chars) | text (string, 164–112k chars)
---|---|---|
def tica(data=None, lag=10, dim=-1, var_cutoff=0.95, kinetic_map=True, commute_map=False, weights='empirical',
stride=1, remove_mean=True, skip=0, reversible=True, ncov_max=float('inf'), chunksize=None, **kwargs):
r""" Time-lagged independent component analysis (TICA).
TICA is a linear transformation method. In contrast to PCA, which finds
coordinates of maximal variance, TICA finds coordinates of maximal
autocorrelation at the given lag time. Therefore, TICA is useful in order
to find the *slow* components in a dataset and thus an excellent choice to
transform molecular dynamics data before clustering data for the
construction of a Markov model. When the input data is the result of a
Markov process (such as thermostatted molecular dynamics), TICA finds in
fact an approximation to the eigenfunctions and eigenvalues of the
underlying Markov operator [1]_.
It estimates a TICA transformation from *data*. When input data is given as
an argument, the estimation will be carried out straight away, and the
resulting object can be used to obtain eigenvalues, eigenvectors or project
input data onto the slowest TICA components. If no data is given, this
object is an empty estimator and can be put into a :func:`pipeline` in
order to use TICA in the streaming mode.
Parameters
----------
data : ndarray (T, d) or list of ndarray (T_i, d) or a reader created by
source function array with the data, if available. When given, the TICA
transformation is immediately computed and can be used to transform data.
lag : int, optional, default = 10
the lag time, in multiples of the input time step
dim : int, optional, default -1
the number of dimensions (independent components) to project onto. A
call to the :func:`map <pyemma.coordinates.transform.TICA.map>` function
reduces the d-dimensional input to only dim dimensions such that the
data preserves the maximum possible autocorrelation amongst
dim-dimensional linear projections. -1 means all numerically available
dimensions will be used unless reduced by var_cutoff.
Setting dim to a positive value is exclusive with var_cutoff.
var_cutoff : float in the range [0,1], optional, default 0.95
Determines the number of output dimensions by including dimensions
until their cumulative kinetic variance exceeds the fraction
subspace_variance. var_cutoff=1.0 means all numerically available
dimensions (see epsilon) will be used, unless set by dim. Setting
var_cutoff smaller than 1.0 is exclusive with dim
kinetic_map : bool, optional, default True
Eigenvectors will be scaled by eigenvalues. As a result, Euclidean
distances in the transformed data approximate kinetic distances [4]_.
This is a good choice when the data is further processed by clustering.
commute_map : bool, optional, default False
Eigenvector_i will be scaled by sqrt(timescale_i / 2). As a result, Euclidean distances in the transformed
data will approximate commute distances [5]_.
stride : int, optional, default = 1
If set to 1, all input data will be used for estimation. Note that this
could cause this calculation to be very slow for large data sets. Since
molecular dynamics data is usually correlated at short timescales, it is
often sufficient to estimate transformations at a longer stride. Note
that the stride option in the get_output() function of the returned
object is independent, so you can parametrize at a long stride, and
still map all frames through the transformer.
weights : optional, default="empirical"
Re-weighting strategy to be used in order to compute equilibrium covariances from non-equilibrium data.
* "empirical": no re-weighting
* "koopman": use re-weighting procedure from [6]_
* weights: An object that allows to compute re-weighting factors. It must possess a method
weights(X) that accepts a trajectory X (np.ndarray(T, n)) and returns a vector of
re-weighting factors (np.ndarray(T,)).
remove_mean: bool, optional, default True
remove mean during covariance estimation. Should not be turned off.
skip : int, default=0
skip the first initial n frames per trajectory.
reversible: bool, default=True
symmetrize correlation matrices C_0, C_{\tau}.
ncov_max : int, default=infinity
limit the memory usage of the algorithm from [7]_ to an amount that corresponds
to ncov_max additional copies of each correlation matrix
chunksize: int, default=None
Number of data frames to process at once. Choose a higher value here,
to optimize thread usage and gain processing speed. If None is passed,
use the default value of the underlying reader/data source. Choose zero to
disable chunking at all.
Returns
-------
tica : a :class:`TICA <pyemma.coordinates.transform.TICA>` transformation object
Object for time-lagged independent component (TICA) analysis.
it contains TICA eigenvalues and eigenvectors, and the projection of
input data to the dominant TICA
Notes
-----
Given a sequence of multivariate data :math:`X_t`, it computes the
mean-free covariance and time-lagged covariance matrix:
.. math::
C_0 &= (X_t - \mu)^T \mathrm{diag}(w) (X_t - \mu) \\
C_{\tau} &= (X_t - \mu)^T \mathrm{diag}(w) (X_{t+\tau} - \mu)
where w is a vector of weights for each time step. By default, these weights
are all equal to one, but different weights are possible, like the re-weighting
to equilibrium described in [6]_. Subsequently, the eigenvalue problem
.. math:: C_{\tau} r_i = C_0 \lambda_i r_i,
is solved, where :math:`r_i` are the independent components and :math:`\lambda_i` are
their respective normalized time-autocorrelations. The eigenvalues are
related to the relaxation timescale by
.. math::
t_i = -\frac{\tau}{\ln |\lambda_i|}.
When used as a dimension reduction method, the input data is projected
onto the dominant independent components.
TICA was originally introduced for signal processing in [2]_. It was
introduced to molecular dynamics and as a method for the construction
of Markov models in [1]_ and [3]_. It was shown in [1]_ that when applied
to molecular dynamics data, TICA is an approximation to the eigenvalues
and eigenvectors of the true underlying dynamics.
Examples
--------
Invoke TICA transformation with a given lag time and output dimension:
>>> import numpy as np
>>> from pyemma.coordinates import tica
>>> data = np.random.random((100,3))
>>> projected_data = tica(data, lag=2, dim=1).get_output()[0]
For a brief explanation of why TICA outperforms PCA to extract a good reaction
coordinate have a look `here
<http://docs.markovmodel.org/lecture_tica.html#Example:-TICA-versus-PCA-in-a-stretched-double-well-potential>`_.
See also
--------
:class:`TICA <pyemma.coordinates.transform.TICA>` : tica object
:func:`pca <pyemma.coordinates.pca>` : for principal component analysis
.. autoclass:: pyemma.coordinates.transform.tica.TICA
:members:
:undoc-members:
.. rubric:: Methods
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:methods:
.. rubric:: Attributes
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:attributes:
References
----------
.. [1] Perez-Hernandez G, F Paul, T Giorgino, G De Fabritiis and F Noe. 2013.
Identification of slow molecular order parameters for Markov model construction
J. Chem. Phys. 139, 015102. doi:10.1063/1.4811489
.. [2] L. Molgedey and H. G. Schuster. 1994.
Separation of a mixture of independent signals using time delayed correlations
Phys. Rev. Lett. 72, 3634.
.. [3] Schwantes C, V S Pande. 2013.
Improvements in Markov State Model Construction Reveal Many Non-Native Interactions in the Folding of NTL9
J. Chem. Theory. Comput. 9, 2000-2009. doi:10.1021/ct300878a
.. [4] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation.
J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553
.. [5] Noe, F., Banisch, R., Clementi, C. 2016. Commute maps: separating slowly-mixing molecular configurations
for kinetic modeling. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.6b00762
.. [6] Wu, H., Nueske, F., Paul, F., Klus, S., Koltai, P., and Noe, F. 2016. Bias reduced variational
approximation of molecular kinetics from short off-equilibrium simulations. J. Chem. Phys. (submitted),
https://arxiv.org/abs/1610.06773.
.. [7] Chan, T. F., Golub G. H., LeVeque R. J. 1979. Updating formulae and pairwise algorithms for
computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
"""
from pyemma.coordinates.transform.tica import TICA
from pyemma.coordinates.estimation.koopman import _KoopmanEstimator
import types
from pyemma.util.reflection import get_default_args
cs = _check_old_chunksize_arg(chunksize, get_default_args(tica)['chunksize'], **kwargs)
if isinstance(weights, _string_types):
if weights == "koopman":
if data is None:
raise ValueError("Data must be supplied for reweighting='koopman'")
if not reversible:
raise ValueError("Koopman re-weighting is designed for reversible processes, set reversible=True")
koop = _KoopmanEstimator(lag=lag, stride=stride, skip=skip, ncov_max=ncov_max)
koop.estimate(data, chunksize=cs)
weights = koop.weights
elif weights == "empirical":
weights = None
else:
raise ValueError("reweighting must be either 'empirical', 'koopman' "
"or an object with a weights(data) method.")
elif hasattr(weights, 'weights') and type(getattr(weights, 'weights')) == types.MethodType:
weights = weights
elif isinstance(weights, (list, tuple)) and all(isinstance(w, _np.ndarray) for w in weights):
if data is not None and len(data) != len(weights):
raise ValueError("len of weights({}) must match len of data({}).".format(len(weights), len(data)))
else:
raise ValueError("reweighting must be either 'empirical', 'koopman' or an object with a weights(data) method.")
if not remove_mean:
import warnings
user_msg = 'remove_mean option is deprecated. The mean is removed from the data by default, otherwise it ' \
'cannot be guaranteed that all eigenvalues will be smaller than one. Some functionalities might ' \
'become useless in this case (e.g. commute_maps). Also, not removing the mean will not result in ' \
'a significant speed up of calculations.'
warnings.warn(
user_msg,
category=_PyEMMA_DeprecationWarning)
res = TICA(lag, dim=dim, var_cutoff=var_cutoff, kinetic_map=kinetic_map, commute_map=commute_map, skip=skip, stride=stride,
weights=weights, reversible=reversible, ncov_max=ncov_max)
if data is not None:
res.estimate(data, chunksize=cs)
else:
res.chunksize = cs
return res | r""" Time-lagged independent component analysis (TICA).
TICA is a linear transformation method. In contrast to PCA, which finds
coordinates of maximal variance, TICA finds coordinates of maximal
autocorrelation at the given lag time. Therefore, TICA is useful in order
to find the *slow* components in a dataset and thus an excellent choice to
transform molecular dynamics data before clustering data for the
construction of a Markov model. When the input data is the result of a
Markov process (such as thermostatted molecular dynamics), TICA finds in
fact an approximation to the eigenfunctions and eigenvalues of the
underlying Markov operator [1]_.
It estimates a TICA transformation from *data*. When input data is given as
an argument, the estimation will be carried out straight away, and the
resulting object can be used to obtain eigenvalues, eigenvectors or project
input data onto the slowest TICA components. If no data is given, this
object is an empty estimator and can be put into a :func:`pipeline` in
order to use TICA in the streaming mode.
Parameters
----------
data : ndarray (T, d) or list of ndarray (T_i, d) or a reader created by
source function array with the data, if available. When given, the TICA
transformation is immediately computed and can be used to transform data.
lag : int, optional, default = 10
the lag time, in multiples of the input time step
dim : int, optional, default -1
the number of dimensions (independent components) to project onto. A
call to the :func:`map <pyemma.coordinates.transform.TICA.map>` function
reduces the d-dimensional input to only dim dimensions such that the
data preserves the maximum possible autocorrelation amongst
dim-dimensional linear projections. -1 means all numerically available
dimensions will be used unless reduced by var_cutoff.
Setting dim to a positive value is exclusive with var_cutoff.
var_cutoff : float in the range [0,1], optional, default 0.95
Determines the number of output dimensions by including dimensions
until their cumulative kinetic variance exceeds the fraction
subspace_variance. var_cutoff=1.0 means all numerically available
dimensions (see epsilon) will be used, unless set by dim. Setting
var_cutoff smaller than 1.0 is exclusive with dim
kinetic_map : bool, optional, default True
Eigenvectors will be scaled by eigenvalues. As a result, Euclidean
distances in the transformed data approximate kinetic distances [4]_.
This is a good choice when the data is further processed by clustering.
commute_map : bool, optional, default False
Eigenvector_i will be scaled by sqrt(timescale_i / 2). As a result, Euclidean distances in the transformed
data will approximate commute distances [5]_.
stride : int, optional, default = 1
If set to 1, all input data will be used for estimation. Note that this
could cause this calculation to be very slow for large data sets. Since
molecular dynamics data is usually correlated at short timescales, it is
often sufficient to estimate transformations at a longer stride. Note
that the stride option in the get_output() function of the returned
object is independent, so you can parametrize at a long stride, and
still map all frames through the transformer.
weights : optional, default="empirical"
Re-weighting strategy to be used in order to compute equilibrium covariances from non-equilibrium data.
* "empirical": no re-weighting
* "koopman": use re-weighting procedure from [6]_
* weights: An object that allows to compute re-weighting factors. It must possess a method
weights(X) that accepts a trajectory X (np.ndarray(T, n)) and returns a vector of
re-weighting factors (np.ndarray(T,)).
remove_mean: bool, optional, default True
remove mean during covariance estimation. Should not be turned off.
skip : int, default=0
skip the first initial n frames per trajectory.
reversible: bool, default=True
symmetrize correlation matrices C_0, C_{\tau}.
ncov_max : int, default=infinity
limit the memory usage of the algorithm from [7]_ to an amount that corresponds
to ncov_max additional copies of each correlation matrix
chunksize: int, default=None
Number of data frames to process at once. Choose a higher value here,
to optimize thread usage and gain processing speed. If None is passed,
use the default value of the underlying reader/data source. Choose zero to
disable chunking at all.
Returns
-------
tica : a :class:`TICA <pyemma.coordinates.transform.TICA>` transformation object
Object for time-lagged independent component (TICA) analysis.
it contains TICA eigenvalues and eigenvectors, and the projection of
input data to the dominant TICA
Notes
-----
Given a sequence of multivariate data :math:`X_t`, it computes the
mean-free covariance and time-lagged covariance matrix:
.. math::
C_0 &= (X_t - \mu)^T \mathrm{diag}(w) (X_t - \mu) \\
C_{\tau} &= (X_t - \mu)^T \mathrm{diag}(w) (X_{t+\tau} - \mu)
where w is a vector of weights for each time step. By default, these weights
are all equal to one, but different weights are possible, like the re-weighting
to equilibrium described in [6]_. Subsequently, the eigenvalue problem
.. math:: C_{\tau} r_i = C_0 \lambda_i r_i,
is solved, where :math:`r_i` are the independent components and :math:`\lambda_i` are
their respective normalized time-autocorrelations. The eigenvalues are
related to the relaxation timescale by
.. math::
t_i = -\frac{\tau}{\ln |\lambda_i|}.
When used as a dimension reduction method, the input data is projected
onto the dominant independent components.
TICA was originally introduced for signal processing in [2]_. It was
introduced to molecular dynamics and as a method for the construction
of Markov models in [1]_ and [3]_. It was shown in [1]_ that when applied
to molecular dynamics data, TICA is an approximation to the eigenvalues
and eigenvectors of the true underlying dynamics.
Examples
--------
Invoke TICA transformation with a given lag time and output dimension:
>>> import numpy as np
>>> from pyemma.coordinates import tica
>>> data = np.random.random((100,3))
>>> projected_data = tica(data, lag=2, dim=1).get_output()[0]
For a brief explanation of why TICA outperforms PCA to extract a good reaction
coordinate have a look `here
<http://docs.markovmodel.org/lecture_tica.html#Example:-TICA-versus-PCA-in-a-stretched-double-well-potential>`_.
See also
--------
:class:`TICA <pyemma.coordinates.transform.TICA>` : tica object
:func:`pca <pyemma.coordinates.pca>` : for principal component analysis
.. autoclass:: pyemma.coordinates.transform.tica.TICA
:members:
:undoc-members:
.. rubric:: Methods
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:methods:
.. rubric:: Attributes
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:attributes:
References
----------
.. [1] Perez-Hernandez G, F Paul, T Giorgino, G De Fabritiis and F Noe. 2013.
Identification of slow molecular order parameters for Markov model construction
J. Chem. Phys. 139, 015102. doi:10.1063/1.4811489
.. [2] L. Molgedey and H. G. Schuster. 1994.
Separation of a mixture of independent signals using time delayed correlations
Phys. Rev. Lett. 72, 3634.
.. [3] Schwantes C, V S Pande. 2013.
Improvements in Markov State Model Construction Reveal Many Non-Native Interactions in the Folding of NTL9
J. Chem. Theory. Comput. 9, 2000-2009. doi:10.1021/ct300878a
.. [4] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation.
J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553
.. [5] Noe, F., Banisch, R., Clementi, C. 2016. Commute maps: separating slowly-mixing molecular configurations
for kinetic modeling. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.6b00762
.. [6] Wu, H., Nueske, F., Paul, F., Klus, S., Koltai, P., and Noe, F. 2016. Bias reduced variational
approximation of molecular kinetics from short off-equilibrium simulations. J. Chem. Phys. (submitted),
https://arxiv.org/abs/1610.06773.
.. [7] Chan, T. F., Golub G. H., LeVeque R. J. 1979. Updating formulae and pairwise algorithms for
computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University. | Below is the instruction that describes the task:
### Input:
r""" Time-lagged independent component analysis (TICA).
TICA is a linear transformation method. In contrast to PCA, which finds
coordinates of maximal variance, TICA finds coordinates of maximal
autocorrelation at the given lag time. Therefore, TICA is useful in order
to find the *slow* components in a dataset and thus an excellent choice to
transform molecular dynamics data before clustering data for the
construction of a Markov model. When the input data is the result of a
Markov process (such as thermostatted molecular dynamics), TICA finds in
fact an approximation to the eigenfunctions and eigenvalues of the
underlying Markov operator [1]_.
It estimates a TICA transformation from *data*. When input data is given as
an argument, the estimation will be carried out straight away, and the
resulting object can be used to obtain eigenvalues, eigenvectors or project
input data onto the slowest TICA components. If no data is given, this
object is an empty estimator and can be put into a :func:`pipeline` in
order to use TICA in the streaming mode.
Parameters
----------
data : ndarray (T, d) or list of ndarray (T_i, d) or a reader created by
source function array with the data, if available. When given, the TICA
transformation is immediately computed and can be used to transform data.
lag : int, optional, default = 10
the lag time, in multiples of the input time step
dim : int, optional, default -1
the number of dimensions (independent components) to project onto. A
call to the :func:`map <pyemma.coordinates.transform.TICA.map>` function
reduces the d-dimensional input to only dim dimensions such that the
data preserves the maximum possible autocorrelation amongst
dim-dimensional linear projections. -1 means all numerically available
dimensions will be used unless reduced by var_cutoff.
Setting dim to a positive value is exclusive with var_cutoff.
var_cutoff : float in the range [0,1], optional, default 0.95
Determines the number of output dimensions by including dimensions
until their cumulative kinetic variance exceeds the fraction
subspace_variance. var_cutoff=1.0 means all numerically available
dimensions (see epsilon) will be used, unless set by dim. Setting
var_cutoff smaller than 1.0 is exclusive with dim
kinetic_map : bool, optional, default True
Eigenvectors will be scaled by eigenvalues. As a result, Euclidean
distances in the transformed data approximate kinetic distances [4]_.
This is a good choice when the data is further processed by clustering.
commute_map : bool, optional, default False
Eigenvector_i will be scaled by sqrt(timescale_i / 2). As a result, Euclidean distances in the transformed
data will approximate commute distances [5]_.
stride : int, optional, default = 1
If set to 1, all input data will be used for estimation. Note that this
could cause this calculation to be very slow for large data sets. Since
molecular dynamics data is usually correlated at short timescales, it is
often sufficient to estimate transformations at a longer stride. Note
that the stride option in the get_output() function of the returned
object is independent, so you can parametrize at a long stride, and
still map all frames through the transformer.
weights : optional, default="empirical"
Re-weighting strategy to be used in order to compute equilibrium covariances from non-equilibrium data.
* "empirical": no re-weighting
* "koopman": use re-weighting procedure from [6]_
* weights: An object that allows to compute re-weighting factors. It must possess a method
weights(X) that accepts a trajectory X (np.ndarray(T, n)) and returns a vector of
re-weighting factors (np.ndarray(T,)).
remove_mean: bool, optional, default True
remove mean during covariance estimation. Should not be turned off.
skip : int, default=0
skip the first initial n frames per trajectory.
reversible: bool, default=True
symmetrize correlation matrices C_0, C_{\tau}.
ncov_max : int, default=infinity
limit the memory usage of the algorithm from [7]_ to an amount that corresponds
to ncov_max additional copies of each correlation matrix
chunksize: int, default=None
Number of data frames to process at once. Choose a higher value here,
to optimize thread usage and gain processing speed. If None is passed,
use the default value of the underlying reader/data source. Choose zero to
disable chunking at all.
Returns
-------
tica : a :class:`TICA <pyemma.coordinates.transform.TICA>` transformation object
Object for time-lagged independent component (TICA) analysis.
it contains TICA eigenvalues and eigenvectors, and the projection of
input data to the dominant TICA
Notes
-----
Given a sequence of multivariate data :math:`X_t`, it computes the
mean-free covariance and time-lagged covariance matrix:
.. math::
C_0 &= (X_t - \mu)^T \mathrm{diag}(w) (X_t - \mu) \\
C_{\tau} &= (X_t - \mu)^T \mathrm{diag}(w) (X_{t+\tau} - \mu)
where w is a vector of weights for each time step. By default, these weights
are all equal to one, but different weights are possible, like the re-weighting
to equilibrium described in [6]_. Subsequently, the eigenvalue problem
.. math:: C_{\tau} r_i = C_0 \lambda_i r_i,
is solved, where :math:`r_i` are the independent components and :math:`\lambda_i` are
their respective normalized time-autocorrelations. The eigenvalues are
related to the relaxation timescale by
.. math::
t_i = -\frac{\tau}{\ln |\lambda_i|}.
When used as a dimension reduction method, the input data is projected
onto the dominant independent components.
TICA was originally introduced for signal processing in [2]_. It was
introduced to molecular dynamics and as a method for the construction
of Markov models in [1]_ and [3]_. It was shown in [1]_ that when applied
to molecular dynamics data, TICA is an approximation to the eigenvalues
and eigenvectors of the true underlying dynamics.
Examples
--------
Invoke TICA transformation with a given lag time and output dimension:
>>> import numpy as np
>>> from pyemma.coordinates import tica
>>> data = np.random.random((100,3))
>>> projected_data = tica(data, lag=2, dim=1).get_output()[0]
For a brief explanation of why TICA outperforms PCA to extract a good reaction
coordinate have a look `here
<http://docs.markovmodel.org/lecture_tica.html#Example:-TICA-versus-PCA-in-a-stretched-double-well-potential>`_.
See also
--------
:class:`TICA <pyemma.coordinates.transform.TICA>` : tica object
:func:`pca <pyemma.coordinates.pca>` : for principal component analysis
.. autoclass:: pyemma.coordinates.transform.tica.TICA
:members:
:undoc-members:
.. rubric:: Methods
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:methods:
.. rubric:: Attributes
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:attributes:
References
----------
.. [1] Perez-Hernandez G, F Paul, T Giorgino, G De Fabritiis and F Noe. 2013.
Identification of slow molecular order parameters for Markov model construction
J. Chem. Phys. 139, 015102. doi:10.1063/1.4811489
.. [2] L. Molgedey and H. G. Schuster. 1994.
Separation of a mixture of independent signals using time delayed correlations
Phys. Rev. Lett. 72, 3634.
.. [3] Schwantes C, V S Pande. 2013.
Improvements in Markov State Model Construction Reveal Many Non-Native Interactions in the Folding of NTL9
J. Chem. Theory. Comput. 9, 2000-2009. doi:10.1021/ct300878a
.. [4] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation.
J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553
.. [5] Noe, F., Banisch, R., Clementi, C. 2016. Commute maps: separating slowly-mixing molecular configurations
for kinetic modeling. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.6b00762
.. [6] Wu, H., Nueske, F., Paul, F., Klus, S., Koltai, P., and Noe, F. 2016. Bias reduced variational
approximation of molecular kinetics from short off-equilibrium simulations. J. Chem. Phys. (submitted),
https://arxiv.org/abs/1610.06773.
.. [7] Chan, T. F., Golub G. H., LeVeque R. J. 1979. Updating formulae and pairwise algorithms for
computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
### Response:
def tica(data=None, lag=10, dim=-1, var_cutoff=0.95, kinetic_map=True, commute_map=False, weights='empirical',
stride=1, remove_mean=True, skip=0, reversible=True, ncov_max=float('inf'), chunksize=None, **kwargs):
r""" Time-lagged independent component analysis (TICA).
TICA is a linear transformation method. In contrast to PCA, which finds
coordinates of maximal variance, TICA finds coordinates of maximal
autocorrelation at the given lag time. Therefore, TICA is useful in order
to find the *slow* components in a dataset and thus an excellent choice to
transform molecular dynamics data before clustering data for the
construction of a Markov model. When the input data is the result of a
Markov process (such as thermostatted molecular dynamics), TICA finds in
fact an approximation to the eigenfunctions and eigenvalues of the
underlying Markov operator [1]_.
It estimates a TICA transformation from *data*. When input data is given as
an argument, the estimation will be carried out straight away, and the
resulting object can be used to obtain eigenvalues, eigenvectors or project
input data onto the slowest TICA components. If no data is given, this
object is an empty estimator and can be put into a :func:`pipeline` in
order to use TICA in the streaming mode.
Parameters
----------
data : ndarray (T, d) or list of ndarray (T_i, d) or a reader created by
source function array with the data, if available. When given, the TICA
transformation is immediately computed and can be used to transform data.
lag : int, optional, default = 10
the lag time, in multiples of the input time step
dim : int, optional, default -1
the number of dimensions (independent components) to project onto. A
call to the :func:`map <pyemma.coordinates.transform.TICA.map>` function
reduces the d-dimensional input to only dim dimensions such that the
data preserves the maximum possible autocorrelation amongst
dim-dimensional linear projections. -1 means all numerically available
dimensions will be used unless reduced by var_cutoff.
Setting dim to a positive value is exclusive with var_cutoff.
var_cutoff : float in the range [0,1], optional, default 0.95
Determines the number of output dimensions by including dimensions
until their cumulative kinetic variance exceeds the fraction
subspace_variance. var_cutoff=1.0 means all numerically available
dimensions (see epsilon) will be used, unless set by dim. Setting
var_cutoff smaller than 1.0 is exclusive with dim
kinetic_map : bool, optional, default True
Eigenvectors will be scaled by eigenvalues. As a result, Euclidean
distances in the transformed data approximate kinetic distances [4]_.
This is a good choice when the data is further processed by clustering.
commute_map : bool, optional, default False
Eigenvector_i will be scaled by sqrt(timescale_i / 2). As a result, Euclidean distances in the transformed
data will approximate commute distances [5]_.
stride : int, optional, default = 1
If set to 1, all input data will be used for estimation. Note that this
could cause this calculation to be very slow for large data sets. Since
molecular dynamics data is usually correlated at short timescales, it is
often sufficient to estimate transformations at a longer stride. Note
that the stride option in the get_output() function of the returned
object is independent, so you can parametrize at a long stride, and
still map all frames through the transformer.
weights : optional, default="empirical"
Re-weighting strategy to be used in order to compute equilibrium covariances from non-equilibrium data.
* "empirical": no re-weighting
* "koopman": use re-weighting procedure from [6]_
* weights: An object that allows to compute re-weighting factors. It must possess a method
weights(X) that accepts a trajectory X (np.ndarray(T, n)) and returns a vector of
re-weighting factors (np.ndarray(T,)).
remove_mean: bool, optional, default True
remove mean during covariance estimation. Should not be turned off.
skip : int, default=0
skip the first initial n frames per trajectory.
reversible: bool, default=True
symmetrize correlation matrices C_0, C_{\tau}.
ncov_max : int, default=infinity
limit the memory usage of the algorithm from [7]_ to an amount that corresponds
to ncov_max additional copies of each correlation matrix
chunksize: int, default=None
Number of data frames to process at once. Choose a higher value here,
to optimize thread usage and gain processing speed. If None is passed,
use the default value of the underlying reader/data source. Choose zero to
disable chunking at all.
Returns
-------
tica : a :class:`TICA <pyemma.coordinates.transform.TICA>` transformation object
Object for time-lagged independent component (TICA) analysis.
it contains TICA eigenvalues and eigenvectors, and the projection of
input data to the dominant TICA
Notes
-----
Given a sequence of multivariate data :math:`X_t`, it computes the
mean-free covariance and time-lagged covariance matrix:
.. math::
C_0 &= (X_t - \mu)^T \mathrm{diag}(w) (X_t - \mu) \\
C_{\tau} &= (X_t - \mu)^T \mathrm{diag}(w) (X_{t+\tau} - \mu)
where w is a vector of weights for each time step. By default, these weights
are all equal to one, but different weights are possible, like the re-weighting
to equilibrium described in [6]_. Subsequently, the eigenvalue problem
.. math:: C_{\tau} r_i = C_0 \lambda_i r_i,
is solved, where :math:`r_i` are the independent components and :math:`\lambda_i` are
their respective normalized time-autocorrelations. The eigenvalues are
related to the relaxation timescale by
.. math::
t_i = -\frac{\tau}{\ln |\lambda_i|}.
When used as a dimension reduction method, the input data is projected
onto the dominant independent components.
TICA was originally introduced for signal processing in [2]_. It was
introduced to molecular dynamics and as a method for the construction
of Markov models in [1]_ and [3]_. It was shown in [1]_ that when applied
to molecular dynamics data, TICA is an approximation to the eigenvalues
and eigenvectors of the true underlying dynamics.
Examples
--------
Invoke TICA transformation with a given lag time and output dimension:
>>> import numpy as np
>>> from pyemma.coordinates import tica
>>> data = np.random.random((100,3))
>>> projected_data = tica(data, lag=2, dim=1).get_output()[0]
For a brief explanation of why TICA outperforms PCA to extract a good reaction
coordinate have a look `here
<http://docs.markovmodel.org/lecture_tica.html#Example:-TICA-versus-PCA-in-a-stretched-double-well-potential>`_.
See also
--------
:class:`TICA <pyemma.coordinates.transform.TICA>` : tica object
:func:`pca <pyemma.coordinates.pca>` : for principal component analysis
.. autoclass:: pyemma.coordinates.transform.tica.TICA
:members:
:undoc-members:
.. rubric:: Methods
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:methods:
.. rubric:: Attributes
.. autoautosummary:: pyemma.coordinates.transform.tica.TICA
:attributes:
References
----------
.. [1] Perez-Hernandez G, F Paul, T Giorgino, G De Fabritiis and F Noe. 2013.
Identification of slow molecular order parameters for Markov model construction
J. Chem. Phys. 139, 015102. doi:10.1063/1.4811489
.. [2] L. Molgedey and H. G. Schuster. 1994.
Separation of a mixture of independent signals using time delayed correlations
Phys. Rev. Lett. 72, 3634.
.. [3] Schwantes C, V S Pande. 2013.
Improvements in Markov State Model Construction Reveal Many Non-Native Interactions in the Folding of NTL9
J. Chem. Theory. Comput. 9, 2000-2009. doi:10.1021/ct300878a
.. [4] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation.
J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553
.. [5] Noe, F., Banisch, R., Clementi, C. 2016. Commute maps: separating slowly-mixing molecular configurations
for kinetic modeling. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.6b00762
.. [6] Wu, H., Nueske, F., Paul, F., Klus, S., Koltai, P., and Noe, F. 2016. Bias reduced variational
approximation of molecular kinetics from short off-equilibrium simulations. J. Chem. Phys. (submitted),
https://arxiv.org/abs/1610.06773.
.. [7] Chan, T. F., Golub G. H., LeVeque R. J. 1979. Updating formulae and pairwise algorithms for
computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
"""
from pyemma.coordinates.transform.tica import TICA
from pyemma.coordinates.estimation.koopman import _KoopmanEstimator
import types
from pyemma.util.reflection import get_default_args
cs = _check_old_chunksize_arg(chunksize, get_default_args(tica)['chunksize'], **kwargs)
if isinstance(weights, _string_types):
if weights == "koopman":
if data is None:
raise ValueError("Data must be supplied for reweighting='koopman'")
if not reversible:
raise ValueError("Koopman re-weighting is designed for reversible processes, set reversible=True")
koop = _KoopmanEstimator(lag=lag, stride=stride, skip=skip, ncov_max=ncov_max)
koop.estimate(data, chunksize=cs)
weights = koop.weights
elif weights == "empirical":
weights = None
else:
raise ValueError("reweighting must be either 'empirical', 'koopman' "
"or an object with a weights(data) method.")
elif hasattr(weights, 'weights') and type(getattr(weights, 'weights')) == types.MethodType:
weights = weights
elif isinstance(weights, (list, tuple)) and all(isinstance(w, _np.ndarray) for w in weights):
if data is not None and len(data) != len(weights):
raise ValueError("len of weights({}) must match len of data({}).".format(len(weights), len(data)))
else:
raise ValueError("reweighting must be either 'empirical', 'koopman' or an object with a weights(data) method.")
if not remove_mean:
import warnings
user_msg = 'remove_mean option is deprecated. The mean is removed from the data by default, otherwise it ' \
'cannot be guaranteed that all eigenvalues will be smaller than one. Some functionalities might ' \
'become useless in this case (e.g. commute_maps). Also, not removing the mean will not result in ' \
'a significant speed up of calculations.'
warnings.warn(
user_msg,
category=_PyEMMA_DeprecationWarning)
res = TICA(lag, dim=dim, var_cutoff=var_cutoff, kinetic_map=kinetic_map, commute_map=commute_map, skip=skip, stride=stride,
weights=weights, reversible=reversible, ncov_max=ncov_max)
if data is not None:
res.estimate(data, chunksize=cs)
else:
res.chunksize = cs
return res |
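The Notes block in this row reduces TICA to two weighted covariance matrices and the generalized eigenvalue problem :math:`C_{\tau} r_i = C_0 \lambda_i r_i`. A rough, self-contained sketch of that computation (uniform weights, reversible symmetrization, no regularization — illustrative NumPy/SciPy, not pyemma's actual implementation):

```python
import numpy as np
from scipy.linalg import eigh

def tica_sketch(X, lag=10, dim=2):
    """Toy TICA: symmetrized covariances plus a generalized eigenproblem."""
    X = X - X.mean(axis=0)                # remove the mean
    A, B = X[:-lag], X[lag:]              # pairs (x_t, x_{t+lag})
    n = len(A)
    C0 = (A.T @ A + B.T @ B) / (2 * n)    # instantaneous covariance, symmetrized
    Ctau = (A.T @ B + B.T @ A) / (2 * n)  # time-lagged covariance, symmetrized
    eigvals, eigvecs = eigh(Ctau, C0)     # solves C_tau r = lambda C_0 r
    order = np.argsort(eigvals)[::-1]     # slowest processes (largest lambda) first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return X @ eigvecs[:, :dim], eigvals[:dim]

Y, lambdas = tica_sketch(np.random.random((1000, 3)), lag=2, dim=1)
```

Here `Y` is the one-dimensional projection onto the slowest component and `lambdas` its normalized autocorrelation, mirroring the `tica(data, lag=2, dim=1)` example in the docstring.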
def _run_automation(self, conf):
"""
1. Fill form.
2. Run scenario.
3. Delay.
"""
self._fill_form(self._find_form(conf))
self._run_scenario(conf)
self._delay(conf) | 1. Fill form.
2. Run scenario.
3. Delay. | Below is the instruction that describes the task:
### Input:
1. Fill form.
2. Run scenario.
3. Delay.
### Response:
def _run_automation(self, conf):
"""
1. Fill form.
2. Run scenario.
3. Delay.
"""
self._fill_form(self._find_form(conf))
self._run_scenario(conf)
self._delay(conf) |
def get_traffic(self, subreddit):
"""Return the json dictionary containing traffic stats for a subreddit.
:param subreddit: The subreddit whose /about/traffic page we will
collect.
"""
url = self.config['subreddit_traffic'].format(
subreddit=six.text_type(subreddit))
return self.request_json(url) | Return the json dictionary containing traffic stats for a subreddit.
:param subreddit: The subreddit whose /about/traffic page we will
collect. | Below is the instruction that describes the task:
### Input:
Return the json dictionary containing traffic stats for a subreddit.
:param subreddit: The subreddit whose /about/traffic page we will
collect.
### Response:
def get_traffic(self, subreddit):
"""Return the json dictionary containing traffic stats for a subreddit.
:param subreddit: The subreddit whose /about/traffic page we will
collect.
"""
url = self.config['subreddit_traffic'].format(
subreddit=six.text_type(subreddit))
return self.request_json(url) |
def get_subclasses(c):
"""Gets the subclasses of a class."""
subclasses = c.__subclasses__()
for d in list(subclasses):
subclasses.extend(get_subclasses(d))
return subclasses | Gets the subclasses of a class. | Below is the instruction that describes the task:
### Input:
Gets the subclasses of a class.
### Response:
def get_subclasses(c):
"""Gets the subclasses of a class."""
subclasses = c.__subclasses__()
for d in list(subclasses):
subclasses.extend(get_subclasses(d))
return subclasses |
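Because `__subclasses__()` only returns *direct* children, the recursion in `get_subclasses` is what picks up grandchildren and deeper descendants. A quick runnable check (the class names are invented for illustration):

```python
def get_subclasses(c):
    """Recursively collect all direct and indirect subclasses of c."""
    subclasses = c.__subclasses__()
    for d in list(subclasses):
        subclasses.extend(get_subclasses(d))
    return subclasses

class Base: ...
class Child(Base): ...
class GrandChild(Child): ...

print(get_subclasses(Base))  # [<class '__main__.Child'>, <class '__main__.GrandChild'>]
```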
def meta_tags(cls, **kwargs):
"""
Meta allows you to add meta data to site
:params **kwargs:
meta keys we're expecting:
title (str)
description (str)
url (str) (Will pick it up by itself if not set)
image (str)
site_name (str) (but can pick it up from config file)
object_type (str)
keywords (list)
locale (str)
card (str)
**Boolean By default these keys are True
use_opengraph
use_twitter
use_googleplus
"""
page_meta = cls._global.get("__META__", {})
page_meta.update(**kwargs)
cls.g(__META__=page_meta) | Meta allows you to add meta data to site
:params **kwargs:
meta keys we're expecting:
title (str)
description (str)
url (str) (Will pick it up by itself if not set)
image (str)
site_name (str) (but can pick it up from config file)
object_type (str)
keywords (list)
locale (str)
card (str)
**Boolean By default these keys are True
use_opengraph
use_twitter
use_googleplus | Below is the instruction that describes the task:
### Input:
Meta allows you to add meta data to site
:params **kwargs:
meta keys we're expecting:
title (str)
description (str)
url (str) (Will pick it up by itself if not set)
image (str)
site_name (str) (but can pick it up from config file)
object_type (str)
keywords (list)
locale (str)
card (str)
**Boolean By default these keys are True
use_opengraph
use_twitter
use_googleplus
### Response:
def meta_tags(cls, **kwargs):
"""
Meta allows you to add meta data to site
:params **kwargs:
meta keys we're expecting:
title (str)
description (str)
url (str) (Will pick it up by itself if not set)
image (str)
site_name (str) (but can pick it up from config file)
object_type (str)
keywords (list)
locale (str)
card (str)
**Boolean By default these keys are True
use_opengraph
use_twitter
use_googleplus
"""
page_meta = cls._global.get("__META__", {})
page_meta.update(**kwargs)
cls.g(__META__=page_meta) |
def _parse_args(arg):
'''
yamlify `arg` and ensure its outermost datatype is a list
'''
yaml_args = salt.utils.args.yamlify_arg(arg)
if yaml_args is None:
return []
elif not isinstance(yaml_args, list):
return [yaml_args]
else:
return yaml_args | yamlify `arg` and ensure its outermost datatype is a list | Below is the instruction that describes the task:
### Input:
yamlify `arg` and ensure its outermost datatype is a list
### Response:
def _parse_args(arg):
'''
yamlify `arg` and ensure its outermost datatype is a list
'''
yaml_args = salt.utils.args.yamlify_arg(arg)
if yaml_args is None:
return []
elif not isinstance(yaml_args, list):
return [yaml_args]
else:
return yaml_args |
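`_parse_args` leans on Salt's `yamlify_arg` helper; the underlying pattern — YAML-parse a CLI argument and normalize the result to a list — can be sketched with plain PyYAML (this mirrors the behaviour only, it is not Salt's implementation):

```python
import yaml

def parse_args(arg):
    """YAML-parse `arg` and ensure its outermost datatype is a list."""
    parsed = yaml.safe_load(arg) if isinstance(arg, str) else arg
    if parsed is None:
        return []
    if not isinstance(parsed, list):
        return [parsed]
    return parsed

parse_args("")           # -> []
parse_args("foo")        # -> ['foo']
parse_args("[1, 2, 3]")  # -> [1, 2, 3]
```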
def parse_JSON(self, JSON_string):
"""
Parses a list of *UVIndex* instances out of raw JSON data. Only certain
properties of the data are used: if these properties are not found or
cannot be parsed, an error is issued.
:param JSON_string: a raw JSON string
:type JSON_string: str
:returns: a list of *UVIndex* instances or an empty list if no data is
available
:raises: *ParseResponseError* if it is impossible to find or parse the
data needed to build the result, *APIResponseError* if the JSON
string embeds an HTTP status error
"""
if JSON_string is None:
raise parse_response_error.ParseResponseError('JSON data is None')
d = json.loads(JSON_string)
uvindex_parser = UVIndexParser()
return [uvindex_parser.parse_JSON(json.dumps(item)) for item in d] | Parses a list of *UVIndex* instances out of raw JSON data. Only certain
properties of the data are used: if these properties are not found or
cannot be parsed, an error is issued.
:param JSON_string: a raw JSON string
:type JSON_string: str
:returns: a list of *UVIndex* instances or an empty list if no data is
available
:raises: *ParseResponseError* if it is impossible to find or parse the
data needed to build the result, *APIResponseError* if the JSON
string embeds an HTTP status error | Below is the instruction that describes the task:
### Input:
Parses a list of *UVIndex* instances out of raw JSON data. Only certain
properties of the data are used: if these properties are not found or
cannot be parsed, an error is issued.
:param JSON_string: a raw JSON string
:type JSON_string: str
:returns: a list of *UVIndex* instances or an empty list if no data is
available
:raises: *ParseResponseError* if it is impossible to find or parse the
data needed to build the result, *APIResponseError* if the JSON
string embeds an HTTP status error
### Response:
def parse_JSON(self, JSON_string):
"""
Parses a list of *UVIndex* instances out of raw JSON data. Only certain
properties of the data are used: if these properties are not found or
cannot be parsed, an error is issued.
:param JSON_string: a raw JSON string
:type JSON_string: str
:returns: a list of *UVIndex* instances or an empty list if no data is
available
:raises: *ParseResponseError* if it is impossible to find or parse the
data needed to build the result, *APIResponseError* if the JSON
string embeds an HTTP status error
"""
if JSON_string is None:
raise parse_response_error.ParseResponseError('JSON data is None')
d = json.loads(JSON_string)
uvindex_parser = UVIndexParser()
return [uvindex_parser.parse_JSON(json.dumps(item)) for item in d] |
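The `parse_JSON` method above is a common fan-out pattern: guard against missing data, call `json.loads` once, then delegate each item to a per-item parser. Stripped of the PyOWM-specific classes it reduces to the following sketch (the item parser is a placeholder, not the library's):

```python
import json

def parse_item(d):
    return d["value"]  # stands in for UVIndexParser.parse_JSON

def parse_list(json_string):
    if json_string is None:
        raise ValueError("JSON data is None")
    return [parse_item(item) for item in json.loads(json_string)]

parse_list('[{"value": 1.2}, {"value": 3.4}]')  # -> [1.2, 3.4]
```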
def decompose(self):
"""
decompose CGR to pair of Molecules, which represents reactants and products state of reaction
:return: tuple of two molecules
"""
mc = self._get_subclass('MoleculeContainer')
reactants = mc()
products = mc()
for n, atom in self.atoms():
reactants.add_atom(atom._reactant, n)
products.add_atom(atom._product, n)
for n, m, bond in self.bonds():
if bond._reactant is not None:
reactants.add_bond(n, m, bond._reactant)
if bond._product is not None:
products.add_bond(n, m, bond._product)
return reactants, products | decompose CGR to pair of Molecules, which represents reactants and products state of reaction
:return: tuple of two molecules | Below is the instruction that describes the task:
### Input:
decompose CGR to pair of Molecules, which represents reactants and products state of reaction
:return: tuple of two molecules
### Response:
def decompose(self):
"""
decompose CGR to pair of Molecules, which represents reactants and products state of reaction
:return: tuple of two molecules
"""
mc = self._get_subclass('MoleculeContainer')
reactants = mc()
products = mc()
for n, atom in self.atoms():
reactants.add_atom(atom._reactant, n)
products.add_atom(atom._product, n)
for n, m, bond in self.bonds():
if bond._reactant is not None:
reactants.add_bond(n, m, bond._reactant)
if bond._product is not None:
products.add_bond(n, m, bond._product)
return reactants, products |
def apply_mask(self, x=None):
'''
Returns the outlier mask, an array of indices corresponding to the
non-outliers.
:param numpy.ndarray x: If specified, returns the masked version of \
:py:obj:`x` instead. Default :py:obj:`None`
'''
if x is None:
return np.delete(np.arange(len(self.time)), self.mask)
else:
return np.delete(x, self.mask, axis=0) | Returns the outlier mask, an array of indices corresponding to the
non-outliers.
:param numpy.ndarray x: If specified, returns the masked version of \
:py:obj:`x` instead. Default :py:obj:`None` | Below is the instruction that describes the task:
### Input:
Returns the outlier mask, an array of indices corresponding to the
non-outliers.
:param numpy.ndarray x: If specified, returns the masked version of \
:py:obj:`x` instead. Default :py:obj:`None`
### Response:
def apply_mask(self, x=None):
'''
Returns the outlier mask, an array of indices corresponding to the
non-outliers.
:param numpy.ndarray x: If specified, returns the masked version of \
:py:obj:`x` instead. Default :py:obj:`None`
'''
if x is None:
return np.delete(np.arange(len(self.time)), self.mask)
else:
return np.delete(x, self.mask, axis=0) |
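In `apply_mask` the mask is a list of outlier *indices*, so `np.delete` is what converts it into 'keep everything else'. A small stand-alone illustration of the two branches (the `time`/`mask` names mirror the attributes used above):

```python
import numpy as np

time = np.arange(6)      # 6 cadences
mask = np.array([1, 4])  # indices flagged as outliers

# x is None -> indices of the non-outliers
np.delete(np.arange(len(time)), mask)  # array([0, 2, 3, 5])

# x given -> the masked version of x itself
flux = np.array([1.0, 9.0, 1.1, 1.2, 9.5, 1.3])
np.delete(flux, mask, axis=0)          # array([1. , 1.1, 1.2, 1.3])
```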
def plot_risk_exposures(exposures, ax=None,
title='Daily risk factor exposures'):
"""
Parameters
----------
exposures : pd.DataFrame
df indexed by datetime, with factors as columns
- Example:
momentum reversal
dt
2017-01-01 -0.238655 0.077123
2017-01-02 0.821872 1.520515
ax : matplotlib.axes.Axes
axes on which plots are made. if None, current axes will be used
Returns
-------
ax : matplotlib.axes.Axes
"""
if ax is None:
ax = plt.gca()
for col in exposures:
ax.plot(exposures[col])
configure_legend(ax, change_colors=True)
ax.set_ylabel('Factor exposures')
ax.set_title(title)
return ax | Parameters
----------
exposures : pd.DataFrame
df indexed by datetime, with factors as columns
- Example:
momentum reversal
dt
2017-01-01 -0.238655 0.077123
2017-01-02 0.821872 1.520515
ax : matplotlib.axes.Axes
axes on which plots are made. if None, current axes will be used
Returns
-------
ax : matplotlib.axes.Axes | Below is the instruction that describes the task:
### Input:
Parameters
----------
exposures : pd.DataFrame
df indexed by datetime, with factors as columns
- Example:
momentum reversal
dt
2017-01-01 -0.238655 0.077123
2017-01-02 0.821872 1.520515
ax : matplotlib.axes.Axes
axes on which plots are made. if None, current axes will be used
Returns
-------
ax : matplotlib.axes.Axes
### Response:
def plot_risk_exposures(exposures, ax=None,
title='Daily risk factor exposures'):
"""
Parameters
----------
exposures : pd.DataFrame
df indexed by datetime, with factors as columns
- Example:
momentum reversal
dt
2017-01-01 -0.238655 0.077123
2017-01-02 0.821872 1.520515
ax : matplotlib.axes.Axes
axes on which plots are made. if None, current axes will be used
Returns
-------
ax : matplotlib.axes.Axes
"""
if ax is None:
ax = plt.gca()
for col in exposures:
ax.plot(exposures[col])
configure_legend(ax, change_colors=True)
ax.set_ylabel('Factor exposures')
ax.set_title(title)
return ax |
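The docstring's example frame shows the expected layout: a datetime index with one column per factor. Building that frame explicitly and drawing the same plot with plain matplotlib (leaving out pyfolio's `configure_legend` helper) looks roughly like this:

```python
import pandas as pd
import matplotlib.pyplot as plt

exposures = pd.DataFrame(
    {"momentum": [-0.238655, 0.821872], "reversal": [0.077123, 1.520515]},
    index=pd.to_datetime(["2017-01-01", "2017-01-02"]),
)

ax = plt.gca()
for col in exposures:
    ax.plot(exposures[col], label=col)
ax.legend()
ax.set_ylabel("Factor exposures")
ax.set_title("Daily risk factor exposures")
plt.show()
```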
def load_certificate(source):
"""
Loads an x509 certificate into a Certificate object
:param source:
A byte string of file contents, a unicode string filename or an
asn1crypto.x509.Certificate object
:raises:
ValueError - when any of the parameters contain an invalid value
TypeError - when any of the parameters are of the wrong type
OSError - when an error is returned by the OS crypto library
:return:
A Certificate object
"""
if isinstance(source, asn1x509.Certificate):
certificate = source
elif isinstance(source, byte_cls):
certificate = parse_certificate(source)
elif isinstance(source, str_cls):
with open(source, 'rb') as f:
certificate = parse_certificate(f.read())
else:
raise TypeError(pretty_message(
'''
source must be a byte string, unicode string or
asn1crypto.x509.Certificate object, not %s
''',
type_name(source)
))
return _load_x509(certificate) | Loads an x509 certificate into a Certificate object
:param source:
A byte string of file contents, a unicode string filename or an
asn1crypto.x509.Certificate object
:raises:
ValueError - when any of the parameters contain an invalid value
TypeError - when any of the parameters are of the wrong type
OSError - when an error is returned by the OS crypto library
:return:
A Certificate object | Below is the instruction that describes the task:
### Input:
Loads an x509 certificate into a Certificate object
:param source:
A byte string of file contents, a unicode string filename or an
asn1crypto.x509.Certificate object
:raises:
ValueError - when any of the parameters contain an invalid value
TypeError - when any of the parameters are of the wrong type
OSError - when an error is returned by the OS crypto library
:return:
A Certificate object
### Response:
def load_certificate(source):
"""
Loads an x509 certificate into a Certificate object
:param source:
A byte string of file contents, a unicode string filename or an
asn1crypto.x509.Certificate object
:raises:
ValueError - when any of the parameters contain an invalid value
TypeError - when any of the parameters are of the wrong type
OSError - when an error is returned by the OS crypto library
:return:
A Certificate object
"""
if isinstance(source, asn1x509.Certificate):
certificate = source
elif isinstance(source, byte_cls):
certificate = parse_certificate(source)
elif isinstance(source, str_cls):
with open(source, 'rb') as f:
certificate = parse_certificate(f.read())
else:
raise TypeError(pretty_message(
'''
source must be a byte string, unicode string or
asn1crypto.x509.Certificate object, not %s
''',
type_name(source)
))
return _load_x509(certificate) |
def startMultiple(self, zones):
"""Start multiple zones."""
path = 'zone/start_multiple'
payload = {'zones': zones}
return self.rachio.put(path, payload) | Start multiple zones. | Below is the instruction that describes the task:
### Input:
Start multiple zones.
### Response:
def startMultiple(self, zones):
"""Start multiple zones."""
path = 'zone/start_multiple'
payload = {'zones': zones}
return self.rachio.put(path, payload) |
async def activate_module(channel, module_name, activate):
"""
Changes a modules activated/deactivated state for a server
Args:
channel: The channel to send the message to
module_name: The name of the module to change state for
activate: The activated/deactivated state of the module
"""
data = datatools.get_data()
server_id = channel.server.id
_dir = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
_dir_modules = "{}/../".format(_dir)
if not os.path.isfile("{}/{}/_data.py".format(_dir_modules, module_name)):
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error", "No module found named '{}'".format(module_name))
await embed.send()
return
try:
import_name = ".discord_modis.modules.{}.{}".format(module_name, "_data")
module_data = importlib.import_module(import_name, "modis")
# Don't try and deactivate this module (not that it would do anything)
if module_data.modulename == _data.modulename:
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error", "I'm sorry, Dave. I'm afraid I can't do that.")
await embed.send()
return
# This /should/ never happen if everything goes well
if module_data.modulename not in data["discord"]["servers"][server_id]:
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error",
"No data found for module '{}'".format(module_data.modulename))
await embed.send()
return
# Modify the module
if "activated" in data["discord"]["servers"][server_id][module_data.modulename]:
data["discord"]["servers"][server_id][module_data.modulename]["activated"] = activate
# Write the data
datatools.write_data(data)
await client.send_typing(channel)
embed = ui_embed.modify_module(channel, module_data.modulename, activate)
await embed.send()
return
else:
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error", "Can't deactivate module '{}'".format(module_data.modulename))
await embed.send()
return
except Exception as e:
logger.error("Could not modify module {}".format(module_name))
logger.exception(e) | Changes a modules activated/deactivated state for a server
Args:
channel: The channel to send the message to
module_name: The name of the module to change state for
activate: The activated/deactivated state of the module | Below is the instruction that describes the task:
### Input:
Changes a modules activated/deactivated state for a server
Args:
channel: The channel to send the message to
module_name: The name of the module to change state for
activate: The activated/deactivated state of the module
### Response:
async def activate_module(channel, module_name, activate):
"""
Changes a modules activated/deactivated state for a server
Args:
channel: The channel to send the message to
module_name: The name of the module to change state for
activate: The activated/deactivated state of the module
"""
data = datatools.get_data()
server_id = channel.server.id
_dir = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
_dir_modules = "{}/../".format(_dir)
if not os.path.isfile("{}/{}/_data.py".format(_dir_modules, module_name)):
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error", "No module found named '{}'".format(module_name))
await embed.send()
return
try:
import_name = ".discord_modis.modules.{}.{}".format(module_name, "_data")
module_data = importlib.import_module(import_name, "modis")
# Don't try and deactivate this module (not that it would do anything)
if module_data.modulename == _data.modulename:
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error", "I'm sorry, Dave. I'm afraid I can't do that.")
await embed.send()
return
# This /should/ never happen if everything goes well
if module_data.modulename not in data["discord"]["servers"][server_id]:
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error",
"No data found for module '{}'".format(module_data.modulename))
await embed.send()
return
# Modify the module
if "activated" in data["discord"]["servers"][server_id][module_data.modulename]:
data["discord"]["servers"][server_id][module_data.modulename]["activated"] = activate
# Write the data
datatools.write_data(data)
await client.send_typing(channel)
embed = ui_embed.modify_module(channel, module_data.modulename, activate)
await embed.send()
return
else:
await client.send_typing(channel)
embed = ui_embed.error(channel, "Error", "Can't deactivate module '{}'".format(module_data.modulename))
await embed.send()
return
except Exception as e:
logger.error("Could not modify module {}".format(module_name))
logger.exception(e) |
async def close_authenticator_async(self):
"""Close the CBS auth channel and session asynchronously."""
_logger.info("Shutting down CBS session on connection: %r.", self._connection.container_id)
try:
self._cbs_auth.destroy()
_logger.info("Auth closed, destroying session on connection: %r.", self._connection.container_id)
await self._session.destroy_async()
finally:
_logger.info("Finished shutting down CBS session on connection: %r.", self._connection.container_id) | Close the CBS auth channel and session asynchronously. | Below is the the instruction that describes the task:
### Input:
Close the CBS auth channel and session asynchronously.
### Response:
async def close_authenticator_async(self):
"""Close the CBS auth channel and session asynchronously."""
_logger.info("Shutting down CBS session on connection: %r.", self._connection.container_id)
try:
self._cbs_auth.destroy()
_logger.info("Auth closed, destroying session on connection: %r.", self._connection.container_id)
await self._session.destroy_async()
finally:
_logger.info("Finished shutting down CBS session on connection: %r.", self._connection.container_id) |
def expireat(key, timestamp, host=None, port=None, db=None, password=None):
'''
    Set a key to expire at a given UNIX time
CLI Example:
.. code-block:: bash
salt '*' redis.expireat foo 1400000000
'''
server = _connect(host, port, db, password)
    return server.expireat(key, timestamp) | Set a key to expire at a given UNIX time
CLI Example:
.. code-block:: bash
salt '*' redis.expireat foo 1400000000 | Below is the instruction that describes the task:
### Input:
Set a key to expire at a given UNIX time
CLI Example:
.. code-block:: bash
salt '*' redis.expireat foo 1400000000
### Response:
def expireat(key, timestamp, host=None, port=None, db=None, password=None):
'''
    Set a key to expire at a given UNIX time
CLI Example:
.. code-block:: bash
salt '*' redis.expireat foo 1400000000
'''
server = _connect(host, port, db, password)
return server.expireat(key, timestamp) |
def serialize(self):
"""
Render dependency back in string using:
~= if package is internal
== otherwise
"""
if self.is_vcs:
return self.without_editable(self.line).strip()
equal = '~=' if self.is_compatible else '=='
package_version = '{package}{equal}{version} '.format(
package=self.without_editable(self.package),
version=self.version,
equal=equal,
)
if self.hashes:
hashes = self.hashes.split()
lines = [package_version.strip()]
lines.extend(hashes)
if self.comment:
lines.append(self.comment)
return ' \\\n '.join(lines)
else:
return '{0}{1}'.format(
package_version.ljust(self.COMMENT_JUSTIFICATION),
self.comment,
).rstrip() | Render dependency back in string using:
~= if package is internal
== otherwise | Below is the instruction that describes the task:
### Input:
Render dependency back in string using:
~= if package is internal
== otherwise
### Response:
def serialize(self):
"""
Render dependency back in string using:
~= if package is internal
== otherwise
"""
if self.is_vcs:
return self.without_editable(self.line).strip()
equal = '~=' if self.is_compatible else '=='
package_version = '{package}{equal}{version} '.format(
package=self.without_editable(self.package),
version=self.version,
equal=equal,
)
if self.hashes:
hashes = self.hashes.split()
lines = [package_version.strip()]
lines.extend(hashes)
if self.comment:
lines.append(self.comment)
return ' \\\n '.join(lines)
else:
return '{0}{1}'.format(
package_version.ljust(self.COMMENT_JUSTIFICATION),
self.comment,
).rstrip() |
def add_child_family(self, family_id, child_id):
"""Adds a child to a family.
arg: family_id (osid.id.Id): the ``Id`` of a family
arg: child_id (osid.id.Id): the ``Id`` of the new child
raise: AlreadyExists - ``family_id`` is already a parent of
``child_id``
raise: NotFound - ``family_id`` or ``child_id`` not found
raise: NullArgument - ``family_id`` or ``child_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.BinHierarchyDesignSession.add_child_bin_template
if self._catalog_session is not None:
return self._catalog_session.add_child_catalog(catalog_id=family_id, child_id=child_id)
return self._hierarchy_session.add_child(id_=family_id, child_id=child_id) | Adds a child to a family.
arg: family_id (osid.id.Id): the ``Id`` of a family
arg: child_id (osid.id.Id): the ``Id`` of the new child
raise: AlreadyExists - ``family_id`` is already a parent of
``child_id``
raise: NotFound - ``family_id`` or ``child_id`` not found
raise: NullArgument - ``family_id`` or ``child_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.* | Below is the instruction that describes the task:
### Input:
Adds a child to a family.
arg: family_id (osid.id.Id): the ``Id`` of a family
arg: child_id (osid.id.Id): the ``Id`` of the new child
raise: AlreadyExists - ``family_id`` is already a parent of
``child_id``
raise: NotFound - ``family_id`` or ``child_id`` not found
raise: NullArgument - ``family_id`` or ``child_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
### Response:
def add_child_family(self, family_id, child_id):
"""Adds a child to a family.
arg: family_id (osid.id.Id): the ``Id`` of a family
arg: child_id (osid.id.Id): the ``Id`` of the new child
raise: AlreadyExists - ``family_id`` is already a parent of
``child_id``
raise: NotFound - ``family_id`` or ``child_id`` not found
raise: NullArgument - ``family_id`` or ``child_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.BinHierarchyDesignSession.add_child_bin_template
if self._catalog_session is not None:
return self._catalog_session.add_child_catalog(catalog_id=family_id, child_id=child_id)
return self._hierarchy_session.add_child(id_=family_id, child_id=child_id) |
def finalize_configs(is_training):
"""
Run some sanity checks, and populate some configs from others
"""
_C.freeze(False) # populate new keys now
_C.DATA.NUM_CLASS = _C.DATA.NUM_CATEGORY + 1 # +1 background
_C.DATA.BASEDIR = os.path.expanduser(_C.DATA.BASEDIR)
if isinstance(_C.DATA.VAL, six.string_types): # support single string (the typical case) as well
_C.DATA.VAL = (_C.DATA.VAL, )
assert _C.BACKBONE.NORM in ['FreezeBN', 'SyncBN', 'GN', 'None'], _C.BACKBONE.NORM
if _C.BACKBONE.NORM != 'FreezeBN':
assert not _C.BACKBONE.FREEZE_AFFINE
assert _C.BACKBONE.FREEZE_AT in [0, 1, 2]
_C.RPN.NUM_ANCHOR = len(_C.RPN.ANCHOR_SIZES) * len(_C.RPN.ANCHOR_RATIOS)
assert len(_C.FPN.ANCHOR_STRIDES) == len(_C.RPN.ANCHOR_SIZES)
# image size into the backbone has to be multiple of this number
_C.FPN.RESOLUTION_REQUIREMENT = _C.FPN.ANCHOR_STRIDES[3] # [3] because we build FPN with features r2,r3,r4,r5
if _C.MODE_FPN:
size_mult = _C.FPN.RESOLUTION_REQUIREMENT * 1.
_C.PREPROC.MAX_SIZE = np.ceil(_C.PREPROC.MAX_SIZE / size_mult) * size_mult
assert _C.FPN.PROPOSAL_MODE in ['Level', 'Joint']
assert _C.FPN.FRCNN_HEAD_FUNC.endswith('_head')
assert _C.FPN.MRCNN_HEAD_FUNC.endswith('_head')
assert _C.FPN.NORM in ['None', 'GN']
if _C.FPN.CASCADE:
# the first threshold is the proposal sampling threshold
assert _C.CASCADE.IOUS[0] == _C.FRCNN.FG_THRESH
assert len(_C.CASCADE.BBOX_REG_WEIGHTS) == len(_C.CASCADE.IOUS)
if is_training:
train_scales = _C.PREPROC.TRAIN_SHORT_EDGE_SIZE
if isinstance(train_scales, (list, tuple)) and train_scales[1] - train_scales[0] > 100:
# don't autotune if augmentation is on
os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
os.environ['TF_AUTOTUNE_THRESHOLD'] = '1'
assert _C.TRAINER in ['horovod', 'replicated'], _C.TRAINER
# setup NUM_GPUS
if _C.TRAINER == 'horovod':
import horovod.tensorflow as hvd
ngpu = hvd.size()
if ngpu == hvd.local_size():
logger.warn("It's not recommended to use horovod for single-machine training. "
"Replicated trainer is more stable and has the same efficiency.")
else:
assert 'OMPI_COMM_WORLD_SIZE' not in os.environ
ngpu = get_num_gpu()
assert ngpu > 0, "Has to train with GPU!"
assert ngpu % 8 == 0 or 8 % ngpu == 0, "Can only train with 1,2,4 or >=8 GPUs, but found {} GPUs".format(ngpu)
else:
# autotune is too slow for inference
os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
ngpu = get_num_gpu()
if _C.TRAIN.NUM_GPUS is None:
_C.TRAIN.NUM_GPUS = ngpu
else:
if _C.TRAINER == 'horovod':
assert _C.TRAIN.NUM_GPUS == ngpu
else:
assert _C.TRAIN.NUM_GPUS <= ngpu
_C.freeze()
    logger.info("Config: ------------------------------------------\n" + str(_C)) | Run some sanity checks, and populate some configs from others | Below is the instruction that describes the task:
### Input:
Run some sanity checks, and populate some configs from others
### Response:
def finalize_configs(is_training):
"""
Run some sanity checks, and populate some configs from others
"""
_C.freeze(False) # populate new keys now
_C.DATA.NUM_CLASS = _C.DATA.NUM_CATEGORY + 1 # +1 background
_C.DATA.BASEDIR = os.path.expanduser(_C.DATA.BASEDIR)
if isinstance(_C.DATA.VAL, six.string_types): # support single string (the typical case) as well
_C.DATA.VAL = (_C.DATA.VAL, )
assert _C.BACKBONE.NORM in ['FreezeBN', 'SyncBN', 'GN', 'None'], _C.BACKBONE.NORM
if _C.BACKBONE.NORM != 'FreezeBN':
assert not _C.BACKBONE.FREEZE_AFFINE
assert _C.BACKBONE.FREEZE_AT in [0, 1, 2]
_C.RPN.NUM_ANCHOR = len(_C.RPN.ANCHOR_SIZES) * len(_C.RPN.ANCHOR_RATIOS)
assert len(_C.FPN.ANCHOR_STRIDES) == len(_C.RPN.ANCHOR_SIZES)
# image size into the backbone has to be multiple of this number
_C.FPN.RESOLUTION_REQUIREMENT = _C.FPN.ANCHOR_STRIDES[3] # [3] because we build FPN with features r2,r3,r4,r5
if _C.MODE_FPN:
size_mult = _C.FPN.RESOLUTION_REQUIREMENT * 1.
_C.PREPROC.MAX_SIZE = np.ceil(_C.PREPROC.MAX_SIZE / size_mult) * size_mult
assert _C.FPN.PROPOSAL_MODE in ['Level', 'Joint']
assert _C.FPN.FRCNN_HEAD_FUNC.endswith('_head')
assert _C.FPN.MRCNN_HEAD_FUNC.endswith('_head')
assert _C.FPN.NORM in ['None', 'GN']
if _C.FPN.CASCADE:
# the first threshold is the proposal sampling threshold
assert _C.CASCADE.IOUS[0] == _C.FRCNN.FG_THRESH
assert len(_C.CASCADE.BBOX_REG_WEIGHTS) == len(_C.CASCADE.IOUS)
if is_training:
train_scales = _C.PREPROC.TRAIN_SHORT_EDGE_SIZE
if isinstance(train_scales, (list, tuple)) and train_scales[1] - train_scales[0] > 100:
# don't autotune if augmentation is on
os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
os.environ['TF_AUTOTUNE_THRESHOLD'] = '1'
assert _C.TRAINER in ['horovod', 'replicated'], _C.TRAINER
# setup NUM_GPUS
if _C.TRAINER == 'horovod':
import horovod.tensorflow as hvd
ngpu = hvd.size()
if ngpu == hvd.local_size():
logger.warn("It's not recommended to use horovod for single-machine training. "
"Replicated trainer is more stable and has the same efficiency.")
else:
assert 'OMPI_COMM_WORLD_SIZE' not in os.environ
ngpu = get_num_gpu()
assert ngpu > 0, "Has to train with GPU!"
assert ngpu % 8 == 0 or 8 % ngpu == 0, "Can only train with 1,2,4 or >=8 GPUs, but found {} GPUs".format(ngpu)
else:
# autotune is too slow for inference
os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
ngpu = get_num_gpu()
if _C.TRAIN.NUM_GPUS is None:
_C.TRAIN.NUM_GPUS = ngpu
else:
if _C.TRAINER == 'horovod':
assert _C.TRAIN.NUM_GPUS == ngpu
else:
assert _C.TRAIN.NUM_GPUS <= ngpu
_C.freeze()
logger.info("Config: ------------------------------------------\n" + str(_C)) |
def event_return(events):
'''
Write event data (return data and non-return data) to file on the master.
'''
if not events:
# events is an empty list.
# Don't open the logfile in vain.
return
opts = _get_options({}) # Pass in empty ret, since this is a list of events
try:
with salt.utils.files.flopen(opts['filename'], 'a') as logfile:
for event in events:
salt.utils.json.dump(event, logfile)
logfile.write(str('\n')) # future lint: disable=blacklisted-function
except Exception:
log.error('Could not write to rawdata_json file %s', opts['filename'])
        raise | Write event data (return data and non-return data) to file on the master. | Below is the instruction that describes the task:
### Input:
Write event data (return data and non-return data) to file on the master.
### Response:
def event_return(events):
'''
Write event data (return data and non-return data) to file on the master.
'''
if not events:
# events is an empty list.
# Don't open the logfile in vain.
return
opts = _get_options({}) # Pass in empty ret, since this is a list of events
try:
with salt.utils.files.flopen(opts['filename'], 'a') as logfile:
for event in events:
salt.utils.json.dump(event, logfile)
logfile.write(str('\n')) # future lint: disable=blacklisted-function
except Exception:
log.error('Could not write to rawdata_json file %s', opts['filename'])
raise |
def new_transaction(
vm: VM,
from_: Address,
to: Address,
amount: int=0,
private_key: PrivateKey=None,
gas_price: int=10,
gas: int=100000,
data: bytes=b'') -> BaseTransaction:
"""
Create and return a transaction sending amount from <from_> to <to>.
The transaction will be signed with the given private key.
"""
nonce = vm.state.get_nonce(from_)
tx = vm.create_unsigned_transaction(
nonce=nonce,
gas_price=gas_price,
gas=gas,
to=to,
value=amount,
data=data,
)
return tx.as_signed_transaction(private_key) | Create and return a transaction sending amount from <from_> to <to>.
The transaction will be signed with the given private key. | Below is the instruction that describes the task:
### Input:
Create and return a transaction sending amount from <from_> to <to>.
The transaction will be signed with the given private key.
### Response:
def new_transaction(
vm: VM,
from_: Address,
to: Address,
amount: int=0,
private_key: PrivateKey=None,
gas_price: int=10,
gas: int=100000,
data: bytes=b'') -> BaseTransaction:
"""
Create and return a transaction sending amount from <from_> to <to>.
The transaction will be signed with the given private key.
"""
nonce = vm.state.get_nonce(from_)
tx = vm.create_unsigned_transaction(
nonce=nonce,
gas_price=gas_price,
gas=gas,
to=to,
value=amount,
data=data,
)
return tx.as_signed_transaction(private_key) |
def action(arguments):
"""
Trim the alignment as specified
"""
# Determine file format for input and output
source_format = (arguments.source_format or
fileformat.from_handle(arguments.source_file))
output_format = (arguments.output_format or
fileformat.from_handle(arguments.output_file))
# Load the alignment
with arguments.source_file:
sequences = SeqIO.parse(
arguments.source_file,
source_format,
alphabet=Alphabet.Gapped(Alphabet.single_letter_alphabet))
# Locate primers
(forward_start, forward_end), (reverse_start, reverse_end) = locate_primers(
sequences, arguments.forward_primer,
arguments.reverse_primer, arguments.reverse_complement,
arguments.max_hamming_distance)
# Generate slice indexes
if arguments.include_primers:
start = forward_start
end = reverse_end + 1
else:
start = forward_end + 1
end = reverse_start
# Rewind the input file
arguments.source_file.seek(0)
sequences = SeqIO.parse(
arguments.source_file,
source_format,
alphabet=Alphabet.Gapped(Alphabet.single_letter_alphabet))
# Apply the transformation
prune_action = _ACTIONS[arguments.prune_action]
transformed_sequences = prune_action(sequences, start, end)
with arguments.output_file:
SeqIO.write(transformed_sequences, arguments.output_file,
                    output_format) | Trim the alignment as specified | Below is the instruction that describes the task:
### Input:
Trim the alignment as specified
### Response:
def action(arguments):
"""
Trim the alignment as specified
"""
# Determine file format for input and output
source_format = (arguments.source_format or
fileformat.from_handle(arguments.source_file))
output_format = (arguments.output_format or
fileformat.from_handle(arguments.output_file))
# Load the alignment
with arguments.source_file:
sequences = SeqIO.parse(
arguments.source_file,
source_format,
alphabet=Alphabet.Gapped(Alphabet.single_letter_alphabet))
# Locate primers
(forward_start, forward_end), (reverse_start, reverse_end) = locate_primers(
sequences, arguments.forward_primer,
arguments.reverse_primer, arguments.reverse_complement,
arguments.max_hamming_distance)
# Generate slice indexes
if arguments.include_primers:
start = forward_start
end = reverse_end + 1
else:
start = forward_end + 1
end = reverse_start
# Rewind the input file
arguments.source_file.seek(0)
sequences = SeqIO.parse(
arguments.source_file,
source_format,
alphabet=Alphabet.Gapped(Alphabet.single_letter_alphabet))
# Apply the transformation
prune_action = _ACTIONS[arguments.prune_action]
transformed_sequences = prune_action(sequences, start, end)
with arguments.output_file:
SeqIO.write(transformed_sequences, arguments.output_file,
output_format) |
def setReplicationStatusResponse(
self, pid, nodeRef, status, dataoneError=None, vendorSpecific=None
):
"""CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) →
        boolean https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus.
Args:
pid:
nodeRef:
status:
dataoneError:
vendorSpecific:
Returns:
"""
mmp_dict = {'nodeRef': nodeRef, 'status': status} # .toxml('utf-8'),
if dataoneError is not None:
mmp_dict['failure'] = ('failure.xml', dataoneError.serialize_to_transport())
return self.PUT(
['replicaNotifications', pid], fields=mmp_dict, headers=vendorSpecific
) | CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) →
boolean https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus.
Args:
pid:
nodeRef:
status:
dataoneError:
vendorSpecific:
Returns: | Below is the instruction that describes the task:
### Input:
CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) →
boolean https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus.
Args:
pid:
nodeRef:
status:
dataoneError:
vendorSpecific:
Returns:
### Response:
def setReplicationStatusResponse(
self, pid, nodeRef, status, dataoneError=None, vendorSpecific=None
):
"""CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) →
        boolean https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus.
Args:
pid:
nodeRef:
status:
dataoneError:
vendorSpecific:
Returns:
"""
mmp_dict = {'nodeRef': nodeRef, 'status': status} # .toxml('utf-8'),
if dataoneError is not None:
mmp_dict['failure'] = ('failure.xml', dataoneError.serialize_to_transport())
return self.PUT(
['replicaNotifications', pid], fields=mmp_dict, headers=vendorSpecific
) |
def rollback(self):
"""Rollback the changes previously made by remove()."""
if self.save_dir is None:
logger.error(
"Can't roll back %s; was not uninstalled",
self.dist.project_name,
)
return False
logger.info('Rolling back uninstall of %s', self.dist.project_name)
for path in self._moved_paths:
tmp_path = self._stash(path)
logger.debug('Replacing %s', path)
renames(tmp_path, path)
for pth in self.pth.values():
            pth.rollback() | Rollback the changes previously made by remove(). | Below is the instruction that describes the task:
### Input:
Rollback the changes previously made by remove().
### Response:
def rollback(self):
"""Rollback the changes previously made by remove()."""
if self.save_dir is None:
logger.error(
"Can't roll back %s; was not uninstalled",
self.dist.project_name,
)
return False
logger.info('Rolling back uninstall of %s', self.dist.project_name)
for path in self._moved_paths:
tmp_path = self._stash(path)
logger.debug('Replacing %s', path)
renames(tmp_path, path)
for pth in self.pth.values():
pth.rollback() |
def hicpro_capture_chart (self):
""" Generate Capture Hi-C plot"""
keys = OrderedDict()
keys['valid_pairs_on_target_cap_cap'] = { 'color': '#0039e6', 'name': 'Capture-Capture interactions' }
keys['valid_pairs_on_target_cap_rep'] = { 'color': '#809fff', 'name': 'Capture-Reporter interactions' }
keys['valid_pairs_off_target'] = { 'color': '#cccccc', 'name': 'Off-target valid pairs' }
# Check capture info are available
num_samples = 0
for s_name in self.hicpro_data:
for k in keys:
num_samples += sum([1 if k in self.hicpro_data[s_name] else 0])
if num_samples == 0:
return False
# Config for the plot
config = {
'id': 'hicpro_cap_plot',
'title': 'HiC-Pro: Capture Statistics',
'ylab': '# Pairs',
'cpswitch_counts_label': 'Number of Pairs'
}
        return bargraph.plot(self.hicpro_data, keys, config) | Generate Capture Hi-C plot | Below is the instruction that describes the task:
### Input:
Generate Capture Hi-C plot
### Response:
def hicpro_capture_chart (self):
""" Generate Capture Hi-C plot"""
keys = OrderedDict()
keys['valid_pairs_on_target_cap_cap'] = { 'color': '#0039e6', 'name': 'Capture-Capture interactions' }
keys['valid_pairs_on_target_cap_rep'] = { 'color': '#809fff', 'name': 'Capture-Reporter interactions' }
keys['valid_pairs_off_target'] = { 'color': '#cccccc', 'name': 'Off-target valid pairs' }
# Check capture info are available
num_samples = 0
for s_name in self.hicpro_data:
for k in keys:
num_samples += sum([1 if k in self.hicpro_data[s_name] else 0])
if num_samples == 0:
return False
# Config for the plot
config = {
'id': 'hicpro_cap_plot',
'title': 'HiC-Pro: Capture Statistics',
'ylab': '# Pairs',
'cpswitch_counts_label': 'Number of Pairs'
}
return bargraph.plot(self.hicpro_data, keys, config) |
def wait(self, wait_time=0):
"""
Blocking call to check if the worker returns the result. One can use
job.result after this call returns ``True``.
:arg wait_time: Time in seconds to wait, default is infinite.
:return: `True` or `False`.
.. note::
            This is a blocking call, you can specify the wait_time argument for a timeout.
"""
if self.__result:
return True
data = self.rdb.brpop(self.urn, wait_time)
if data:
self.rdb.delete(self.urn)
data = json.loads(data[1])
self.__result = data
return True
else:
return False | Blocking call to check if the worker returns the result. One can use
job.result after this call returns ``True``.
:arg wait_time: Time in seconds to wait, default is infinite.
:return: `True` or `False`.
.. note::
This is a blocking call, you can specify the wait_time argument for a timeout. | Below is the instruction that describes the task:
### Input:
Blocking call to check if the worker returns the result. One can use
job.result after this call returns ``True``.
:arg wait_time: Time in seconds to wait, default is infinite.
:return: `True` or `False`.
.. note::
This is a blocking call, you can specify the wait_time argument for a timeout.
### Response:
def wait(self, wait_time=0):
"""
Blocking call to check if the worker returns the result. One can use
job.result after this call returns ``True``.
:arg wait_time: Time in seconds to wait, default is infinite.
:return: `True` or `False`.
.. note::
            This is a blocking call, you can specify the wait_time argument for a timeout.
"""
if self.__result:
return True
data = self.rdb.brpop(self.urn, wait_time)
if data:
self.rdb.delete(self.urn)
data = json.loads(data[1])
self.__result = data
return True
else:
return False |
def is_module_or_package(path):
"""Return True if path is a Python module/package"""
is_module = osp.isfile(path) and osp.splitext(path)[1] in ('.py', '.pyw')
is_package = osp.isdir(path) and osp.isfile(osp.join(path, '__init__.py'))
    return is_module or is_package | Return True if path is a Python module/package | Below is the instruction that describes the task:
### Input:
Return True if path is a Python module/package
### Response:
def is_module_or_package(path):
"""Return True if path is a Python module/package"""
is_module = osp.isfile(path) and osp.splitext(path)[1] in ('.py', '.pyw')
is_package = osp.isdir(path) and osp.isfile(osp.join(path, '__init__.py'))
return is_module or is_package |
def _skip_newlines(self):
"""Increment over newlines."""
while self._cur_token['type'] is TT.lbreak and not self._finished:
            self._increment() | Increment over newlines. | Below is the instruction that describes the task:
### Input:
Increment over newlines.
### Response:
def _skip_newlines(self):
"""Increment over newlines."""
while self._cur_token['type'] is TT.lbreak and not self._finished:
self._increment() |
def find_synonym(self, word):
"""
Given a string and a dict of synonyms, returns the 'preferred'
word. Case insensitive.
Args:
word (str): A word.
Returns:
str: The preferred word, or the input word if not found.
Example:
>>> syn = {'snake': ['python', 'adder']}
>>> find_synonym('adder', syn)
'snake'
>>> find_synonym('rattler', syn)
'rattler'
TODO:
Make it handle case, returning the same case it received.
"""
if word and self.synonyms:
# Make the reverse look-up table.
reverse_lookup = {}
for k, v in self.synonyms.items():
for i in v:
reverse_lookup[i.lower()] = k.lower()
# Now check words against this table.
if word.lower() in reverse_lookup:
return reverse_lookup[word.lower()]
return word | Given a string and a dict of synonyms, returns the 'preferred'
word. Case insensitive.
Args:
word (str): A word.
Returns:
str: The preferred word, or the input word if not found.
Example:
>>> syn = {'snake': ['python', 'adder']}
>>> find_synonym('adder', syn)
'snake'
>>> find_synonym('rattler', syn)
'rattler'
TODO:
Make it handle case, returning the same case it received. | Below is the instruction that describes the task:
### Input:
Given a string and a dict of synonyms, returns the 'preferred'
word. Case insensitive.
Args:
word (str): A word.
Returns:
str: The preferred word, or the input word if not found.
Example:
>>> syn = {'snake': ['python', 'adder']}
>>> find_synonym('adder', syn)
'snake'
>>> find_synonym('rattler', syn)
'rattler'
TODO:
Make it handle case, returning the same case it received.
### Response:
def find_synonym(self, word):
"""
Given a string and a dict of synonyms, returns the 'preferred'
word. Case insensitive.
Args:
word (str): A word.
Returns:
str: The preferred word, or the input word if not found.
Example:
>>> syn = {'snake': ['python', 'adder']}
>>> find_synonym('adder', syn)
'snake'
>>> find_synonym('rattler', syn)
'rattler'
TODO:
Make it handle case, returning the same case it received.
"""
if word and self.synonyms:
# Make the reverse look-up table.
reverse_lookup = {}
for k, v in self.synonyms.items():
for i in v:
reverse_lookup[i.lower()] = k.lower()
# Now check words against this table.
if word.lower() in reverse_lookup:
return reverse_lookup[word.lower()]
return word |
def _available(name, ret):
'''
Check if the service is available
'''
avail = False
if 'service.available' in __salt__:
avail = __salt__['service.available'](name)
elif 'service.get_all' in __salt__:
avail = name in __salt__['service.get_all']()
if not avail:
ret['result'] = False
ret['comment'] = 'The named service {0} is not available'.format(name)
    return avail | Check if the service is available | Below is the instruction that describes the task:
### Input:
Check if the service is available
### Response:
def _available(name, ret):
'''
Check if the service is available
'''
avail = False
if 'service.available' in __salt__:
avail = __salt__['service.available'](name)
elif 'service.get_all' in __salt__:
avail = name in __salt__['service.get_all']()
if not avail:
ret['result'] = False
ret['comment'] = 'The named service {0} is not available'.format(name)
return avail |
def get_cache_key(page):
"""
Create the cache key for the current page and language
"""
try:
site_id = page.node.site_id
except AttributeError:
site_id = page.site_id
    return _get_cache_key('page_sitemap', page, 'default', site_id) | Create the cache key for the current page and language | Below is the instruction that describes the task:
### Input:
Create the cache key for the current page and language
### Response:
def get_cache_key(page):
"""
Create the cache key for the current page and language
"""
try:
site_id = page.node.site_id
except AttributeError:
site_id = page.site_id
return _get_cache_key('page_sitemap', page, 'default', site_id) |
def make_path(config, *endings):
"""
Create a path based on component configuration.
All paths are relative to the component's configuration directory; usually
    this will be the same for an entire session, but this function supports
component-specific configuration directories.
Arguments:
config - the configuration object for a component
endings - a list of file paths to append to the component's configuration
directory
"""
config_dir = config.get("dp.config_dir")
return os.path.join(config_dir, *endings) | Create a path based on component configuration.
All paths are relative to the component's configuration directory; usually
this will be the same for an entire session, but this function supports
component-specific configuration directories.
Arguments:
config - the configuration object for a component
endings - a list of file paths to append to the component's configuration
directory | Below is the instruction that describes the task:
### Input:
Create a path based on component configuration.
All paths are relative to the component's configuration directory; usually
this will be the same for an entire session, but this function supports
component-specific configuration directories.
Arguments:
config - the configuration object for a component
endings - a list of file paths to append to the component's configuration
directory
### Response:
def make_path(config, *endings):
"""
Create a path based on component configuration.
All paths are relative to the component's configuration directory; usually
    this will be the same for an entire session, but this function supports
component-specific configuration directories.
Arguments:
config - the configuration object for a component
endings - a list of file paths to append to the component's configuration
directory
"""
config_dir = config.get("dp.config_dir")
return os.path.join(config_dir, *endings) |
async def async_delete_all_keys(session, host, port, api_key, api_keys=[]):
"""Delete all API keys except for the ones provided to the method."""
url = 'http://{}:{}/api/{}/config'.format(host, str(port), api_key)
response = await async_request(session.get, url)
api_keys.append(api_key)
for key in response['whitelist'].keys():
if key not in api_keys:
            await async_delete_api_key(session, host, port, key) | Delete all API keys except for the ones provided to the method. | Below is the instruction that describes the task:
### Input:
Delete all API keys except for the ones provided to the method.
### Response:
async def async_delete_all_keys(session, host, port, api_key, api_keys=[]):
"""Delete all API keys except for the ones provided to the method."""
url = 'http://{}:{}/api/{}/config'.format(host, str(port), api_key)
response = await async_request(session.get, url)
api_keys.append(api_key)
for key in response['whitelist'].keys():
if key not in api_keys:
await async_delete_api_key(session, host, port, key) |
def watch(self, keys, on_watch, filters=None, start_revision=None, return_previous=None):
"""
Watch one or more keys or key sets and invoke a callback.
Watch watches for events happening or that have happened. The entire event history
can be watched starting from the last compaction revision.
:param keys: Watch these keys / key sets.
:type keys: list of bytes or list of instance of :class:`txaioetcd.KeySet`
:param on_watch: The callback to invoke upon receiving
a watch event.
:type on_watch: callable
:param filters: Any filters to apply.
:param start_revision: start_revision is an optional
revision to watch from (inclusive). No start_revision is "now".
:type start_revision: int
:param return_previous: Flag to request returning previous values.
:returns: A deferred that just fires when watching has started successfully,
or which fires with an error in case the watching could not be started.
:rtype: twisted.internet.Deferred
"""
d = self._start_watching(keys, on_watch, filters, start_revision, return_previous)
#
# ODD: Trying to use a parameter instead of *args errors out as soon as the
# parameter is accessed.
#
def on_err(*args):
if args[0].type not in [CancelledError, ResponseFailed]:
self.log.warn('etcd watch terminated with "{error}"', error=args[0].type)
return args[0]
d.addErrback(on_err)
return d | Watch one or more keys or key sets and invoke a callback.
Watch watches for events happening or that have happened. The entire event history
can be watched starting from the last compaction revision.
:param keys: Watch these keys / key sets.
:type keys: list of bytes or list of instance of :class:`txaioetcd.KeySet`
:param on_watch: The callback to invoke upon receiving
a watch event.
:type on_watch: callable
:param filters: Any filters to apply.
:param start_revision: start_revision is an optional
revision to watch from (inclusive). No start_revision is "now".
:type start_revision: int
:param return_previous: Flag to request returning previous values.
:returns: A deferred that just fires when watching has started successfully,
or which fires with an error in case the watching could not be started.
:rtype: twisted.internet.Deferred | Below is the instruction that describes the task:
### Input:
Watch one or more keys or key sets and invoke a callback.
Watch watches for events happening or that have happened. The entire event history
can be watched starting from the last compaction revision.
:param keys: Watch these keys / key sets.
:type keys: list of bytes or list of instance of :class:`txaioetcd.KeySet`
:param on_watch: The callback to invoke upon receiving
a watch event.
:type on_watch: callable
:param filters: Any filters to apply.
:param start_revision: start_revision is an optional
revision to watch from (inclusive). No start_revision is "now".
:type start_revision: int
:param return_previous: Flag to request returning previous values.
:returns: A deferred that just fires when watching has started successfully,
or which fires with an error in case the watching could not be started.
:rtype: twisted.internet.Deferred
### Response:
def watch(self, keys, on_watch, filters=None, start_revision=None, return_previous=None):
"""
Watch one or more keys or key sets and invoke a callback.
Watch watches for events happening or that have happened. The entire event history
can be watched starting from the last compaction revision.
:param keys: Watch these keys / key sets.
:type keys: list of bytes or list of instance of :class:`txaioetcd.KeySet`
:param on_watch: The callback to invoke upon receiving
a watch event.
:type on_watch: callable
:param filters: Any filters to apply.
:param start_revision: start_revision is an optional
revision to watch from (inclusive). No start_revision is "now".
:type start_revision: int
:param return_previous: Flag to request returning previous values.
:returns: A deferred that just fires when watching has started successfully,
or which fires with an error in case the watching could not be started.
:rtype: twisted.internet.Deferred
"""
d = self._start_watching(keys, on_watch, filters, start_revision, return_previous)
#
# ODD: Trying to use a parameter instead of *args errors out as soon as the
# parameter is accessed.
#
def on_err(*args):
if args[0].type not in [CancelledError, ResponseFailed]:
self.log.warn('etcd watch terminated with "{error}"', error=args[0].type)
return args[0]
d.addErrback(on_err)
return d |
def massage_type_name(cls, type_name):
""" Returns a readable type according to a java type
"""
if type_name.lower() in ("enum", "enumeration"):
return "enum"
if type_name.lower() in ("str", "string"):
return "string"
if type_name.lower() in ("boolean", "bool"):
return "boolean"
if type_name.lower() in ("int", "integer"):
return "integer"
if type_name.lower() in ("date", "datetime", "time"):
return "time"
if type_name.lower() in ("double", "float", "long"):
return "float"
if type_name.lower() in ("list", "array"):
return "list"
if type_name.lower() in ("object", "dict"):
return "object"
if "array" in type_name.lower():
return "list"
        return "string" | Returns a readable type according to a java type | Below is the instruction that describes the task:
### Input:
Returns a readable type according to a java type
### Response:
def massage_type_name(cls, type_name):
""" Returns a readable type according to a java type
"""
if type_name.lower() in ("enum", "enumeration"):
return "enum"
if type_name.lower() in ("str", "string"):
return "string"
if type_name.lower() in ("boolean", "bool"):
return "boolean"
if type_name.lower() in ("int", "integer"):
return "integer"
if type_name.lower() in ("date", "datetime", "time"):
return "time"
if type_name.lower() in ("double", "float", "long"):
return "float"
if type_name.lower() in ("list", "array"):
return "list"
if type_name.lower() in ("object", "dict"):
return "object"
if "array" in type_name.lower():
return "list"
return "string" |
def set_data(self, value):
        """Sets a new string as response. The value set must be either a
unicode or bytestring. If a unicode string is set it's encoded
automatically to the charset of the response (utf-8 by default).
.. versionadded:: 0.9
"""
# if an unicode string is set, it's encoded directly so that we
# can set the content length
if isinstance(value, text_type):
value = value.encode(self.charset)
else:
value = bytes(value)
self.response = [value]
if self.automatically_set_content_length:
            self.headers['Content-Length'] = str(len(value)) | Sets a new string as response. The value set must be either a
unicode or bytestring. If a unicode string is set it's encoded
automatically to the charset of the response (utf-8 by default).
.. versionadded:: 0.9 | Below is the instruction that describes the task:
### Input:
Sets a new string as response. The value set must be either a
unicode or bytestring. If a unicode string is set it's encoded
automatically to the charset of the response (utf-8 by default).
.. versionadded:: 0.9
### Response:
def set_data(self, value):
        """Sets a new string as response. The value set must be either a
unicode or bytestring. If a unicode string is set it's encoded
automatically to the charset of the response (utf-8 by default).
.. versionadded:: 0.9
"""
# if an unicode string is set, it's encoded directly so that we
# can set the content length
if isinstance(value, text_type):
value = value.encode(self.charset)
else:
value = bytes(value)
self.response = [value]
if self.automatically_set_content_length:
self.headers['Content-Length'] = str(len(value)) |
def duplicate(self):
'''
Returns a copy of the current group, including its lines.
@returns: Group
'''
return self.__class__(amount=self.amount, date=self.date,
method=self.method, ref=self.ref) | Returns a copy of the current group, including its lines.
@returns: Group | Below is the instruction that describes the task:
### Input:
Returns a copy of the current group, including its lines.
@returns: Group
### Response:
def duplicate(self):
'''
Returns a copy of the current group, including its lines.
@returns: Group
'''
return self.__class__(amount=self.amount, date=self.date,
method=self.method, ref=self.ref) |
def __add_tier(self, tier, token_tier_name):
"""
adds a tier to the document graph (either as additional attributes
to the token nodes or as a span node with outgoing edges to the token
nodes it represents)
"""
if tier.attrib['category'] == token_tier_name:
self.__add_tokens(tier)
else:
if self.is_token_annotation_tier(tier):
self.__add_token_annotation_tier(tier)
else:
self.__add_span_tier(tier) | adds a tier to the document graph (either as additional attributes
to the token nodes or as a span node with outgoing edges to the token
nodes it represents) | Below is the instruction that describes the task:
### Input:
adds a tier to the document graph (either as additional attributes
to the token nodes or as a span node with outgoing edges to the token
nodes it represents)
### Response:
def __add_tier(self, tier, token_tier_name):
"""
adds a tier to the document graph (either as additional attributes
to the token nodes or as a span node with outgoing edges to the token
nodes it represents)
"""
if tier.attrib['category'] == token_tier_name:
self.__add_tokens(tier)
else:
if self.is_token_annotation_tier(tier):
self.__add_token_annotation_tier(tier)
else:
self.__add_span_tier(tier) |
def re_general(Vel, Area, PerimWetted, Nu):
"""Return the Reynolds Number for a general cross section."""
#Checking input validity - inputs not checked here are checked by
#functions this function calls.
ut.check_range([Vel, ">=0", "Velocity"], [Nu, ">0", "Nu"])
    return 4 * radius_hydraulic_general(Area, PerimWetted).magnitude * Vel / Nu | Return the Reynolds Number for a general cross section. | Below is the instruction that describes the task:
### Input:
Return the Reynolds Number for a general cross section.
### Response:
def re_general(Vel, Area, PerimWetted, Nu):
"""Return the Reynolds Number for a general cross section."""
#Checking input validity - inputs not checked here are checked by
#functions this function calls.
ut.check_range([Vel, ">=0", "Velocity"], [Nu, ">0", "Nu"])
return 4 * radius_hydraulic_general(Area, PerimWetted).magnitude * Vel / Nu |
def service_info(self, name):
"""Pull descriptive info of a service by name.
Information returned includes the service's user friendly
name and whether it was preregistered or added dynamically.
Returns:
dict: A dictionary of service information with the following keys
set:
long_name (string): The user friendly name of the service
preregistered (bool): Whether the service was explicitly
called out as a preregistered service.
"""
return self._loop.run_coroutine(self._client.service_info(name)) | Pull descriptive info of a service by name.
Information returned includes the service's user friendly
name and whether it was preregistered or added dynamically.
Returns:
dict: A dictionary of service information with the following keys
set:
long_name (string): The user friendly name of the service
preregistered (bool): Whether the service was explicitly
called out as a preregistered service. | Below is the instruction that describes the task:
### Input:
Pull descriptive info of a service by name.
Information returned includes the service's user friendly
name and whether it was preregistered or added dynamically.
Returns:
dict: A dictionary of service information with the following keys
set:
long_name (string): The user friendly name of the service
preregistered (bool): Whether the service was explicitly
called out as a preregistered service.
### Response:
def service_info(self, name):
"""Pull descriptive info of a service by name.
Information returned includes the service's user friendly
name and whether it was preregistered or added dynamically.
Returns:
dict: A dictionary of service information with the following keys
set:
long_name (string): The user friendly name of the service
preregistered (bool): Whether the service was explicitly
called out as a preregistered service.
"""
return self._loop.run_coroutine(self._client.service_info(name)) |
def assert_on_branch(branch_name):
# type: (str) -> None
""" Print error and exit if *branch_name* is not the current branch.
Args:
branch_name (str):
The supposed name of the current branch.
"""
branch = git.current_branch(refresh=True)
if branch.name != branch_name:
if context.get('pretend', False):
log.info("Would assert that you're on a <33>{}<32> branch",
branch_name)
else:
log.err("You're not on a <33>{}<31> branch!", branch_name)
sys.exit(1) | Print error and exit if *branch_name* is not the current branch.
Args:
branch_name (str):
The supposed name of the current branch. | Below is the instruction that describes the task:
### Input:
Print error and exit if *branch_name* is not the current branch.
Args:
branch_name (str):
The supposed name of the current branch.
### Response:
def assert_on_branch(branch_name):
# type: (str) -> None
""" Print error and exit if *branch_name* is not the current branch.
Args:
branch_name (str):
The supposed name of the current branch.
"""
branch = git.current_branch(refresh=True)
if branch.name != branch_name:
if context.get('pretend', False):
log.info("Would assert that you're on a <33>{}<32> branch",
branch_name)
else:
log.err("You're not on a <33>{}<31> branch!", branch_name)
sys.exit(1) |
def compatible_shape(self, shape):
"""Determine whether `shape` is compatible with the (possibly
variable-sized) shape for this descriptor"""
if len(shape) != len(self.shape):
return False
for x, y in zip(self.shape, shape):
if x is not None and x != y:
return False
return True | Determine whether `shape` is compatible with the (possibly
variable-sized) shape for this descriptor | Below is the instruction that describes the task:
### Input:
Determine whether `shape` is compatible with the (possibly
variable-sized) shape for this descriptor
### Response:
def compatible_shape(self, shape):
"""Determine whether `shape` is compatible with the (possibly
variable-sized) shape for this descriptor"""
if len(shape) != len(self.shape):
return False
for x, y in zip(self.shape, shape):
if x is not None and x != y:
return False
return True |
def factorize(self, show_progress=False, compute_w=True, compute_h=True,
compute_err=True, niter=1):
""" Factorize s.t. WH = data
Parameters
----------
show_progress : bool
print some extra information to stdout.
compute_h : bool
iteratively update values for H.
compute_w : bool
iteratively update values for W.
compute_err : bool
compute Frobenius norm |data-WH| after each update and store
it to .ferr[k].
Updated Values
--------------
.W : updated values for W.
.H : updated values for H.
.ferr : Frobenius norm |data-WH|.
"""
AA.factorize(self, niter=1, show_progress=show_progress,
compute_w=compute_w, compute_h=compute_h,
compute_err=compute_err) | Factorize s.t. WH = data
Parameters
----------
show_progress : bool
print some extra information to stdout.
compute_h : bool
iteratively update values for H.
compute_w : bool
iteratively update values for W.
compute_err : bool
compute Frobenius norm |data-WH| after each update and store
it to .ferr[k].
Updated Values
--------------
.W : updated values for W.
.H : updated values for H.
.ferr : Frobenius norm |data-WH|. | Below is the instruction that describes the task:
### Input:
Factorize s.t. WH = data
Parameters
----------
show_progress : bool
print some extra information to stdout.
compute_h : bool
iteratively update values for H.
compute_w : bool
iteratively update values for W.
compute_err : bool
compute Frobenius norm |data-WH| after each update and store
it to .ferr[k].
Updated Values
--------------
.W : updated values for W.
.H : updated values for H.
.ferr : Frobenius norm |data-WH|.
### Response:
def factorize(self, show_progress=False, compute_w=True, compute_h=True,
compute_err=True, niter=1):
""" Factorize s.t. WH = data
Parameters
----------
show_progress : bool
print some extra information to stdout.
compute_h : bool
iteratively update values for H.
compute_w : bool
iteratively update values for W.
compute_err : bool
compute Frobenius norm |data-WH| after each update and store
it to .ferr[k].
Updated Values
--------------
.W : updated values for W.
.H : updated values for H.
.ferr : Frobenius norm |data-WH|.
"""
AA.factorize(self, niter=1, show_progress=show_progress,
compute_w=compute_w, compute_h=compute_h,
compute_err=compute_err) |
def clean(self):
""" Validates the forum instance. """
super().clean()
if self.parent and self.parent.is_link:
raise ValidationError(_('A forum can not have a link forum as parent'))
if self.is_category and self.parent and self.parent.is_category:
raise ValidationError(_('A category can not have another category as parent'))
if self.is_link and not self.link:
            raise ValidationError(_('A link forum must have a link associated with it')) | Validates the forum instance. | Below is the instruction that describes the task:
### Input:
Validates the forum instance.
### Response:
def clean(self):
""" Validates the forum instance. """
super().clean()
if self.parent and self.parent.is_link:
raise ValidationError(_('A forum can not have a link forum as parent'))
if self.is_category and self.parent and self.parent.is_category:
raise ValidationError(_('A category can not have another category as parent'))
if self.is_link and not self.link:
raise ValidationError(_('A link forum must have a link associated with it')) |
def run(self):
"""Start the main loop as a background process. *nix only"""
if win_based:
raise NotImplementedError("Please run main_loop, "
"backgrounding not supported on Windows")
self.background_process = mp.Process(target=self.main_loop)
        self.background_process.start() | Start the main loop as a background process. *nix only | Below is the instruction that describes the task:
### Input:
Start the main loop as a background process. *nix only
### Response:
def run(self):
"""Start the main loop as a background process. *nix only"""
if win_based:
raise NotImplementedError("Please run main_loop, "
"backgrounding not supported on Windows")
self.background_process = mp.Process(target=self.main_loop)
self.background_process.start() |
def _separate_multiple_def(self, defstring, parent, refstring, refline):
"""Separates the text after '::' in a variable definition to extract all the variables,
their dimensions and default values.
"""
import pyparsing
nester = pyparsing.nestedExpr('(', ')')
try:
parsed = nester.parseString("(" + re.sub("=(>?)", " =\\1 ", defstring) + ")").asList()[0]
except pyparsing.ParseException as err:
from fortpy import msg
repl = (parent.name, refline[0], refstring,
defstring, ''.join(['-']*(err.loc-1))+'^', err.msg)
msg.err("parsing variable from '{}:{} >> {}': \n'{}'\n{} {}.".format(*repl))
raise
i = 0
clean = []
while i < len(parsed):
if (isinstance(parsed[i], str) and not re.match("=>?", parsed[i]) and
i+1 < len(parsed) and isinstance(parsed[i+1], list)):
clean.append((parsed[i], parsed[i+1]))
i += 2
elif isinstance(parsed[i], str) and parsed[i] == ",":
i += 1
else:
clean.append(parsed[i])
i += 1
#Now pass through again to handle the default values.
i = 0
ready = []
while i < len(clean):
if isinstance(clean[i], str) and re.match("=>?", clean[i]):
ready.pop()
if ">" in clean[i]:
ready.append([clean[i-1], ("> " + clean[i+1][0], clean[i+1][1])])
else:
ready.append([clean[i-1], clean[i+1]])
i += 2
else:
ready.append(clean[i])
i += 1
return ready | Separates the text after '::' in a variable definition to extract all the variables,
their dimensions and default values. | Below is the instruction that describes the task:
### Input:
Separates the text after '::' in a variable definition to extract all the variables,
their dimensions and default values.
### Response:
def _separate_multiple_def(self, defstring, parent, refstring, refline):
"""Separates the text after '::' in a variable definition to extract all the variables,
their dimensions and default values.
"""
import pyparsing
nester = pyparsing.nestedExpr('(', ')')
try:
parsed = nester.parseString("(" + re.sub("=(>?)", " =\\1 ", defstring) + ")").asList()[0]
except pyparsing.ParseException as err:
from fortpy import msg
repl = (parent.name, refline[0], refstring,
defstring, ''.join(['-']*(err.loc-1))+'^', err.msg)
msg.err("parsing variable from '{}:{} >> {}': \n'{}'\n{} {}.".format(*repl))
raise
i = 0
clean = []
while i < len(parsed):
if (isinstance(parsed[i], str) and not re.match("=>?", parsed[i]) and
i+1 < len(parsed) and isinstance(parsed[i+1], list)):
clean.append((parsed[i], parsed[i+1]))
i += 2
elif isinstance(parsed[i], str) and parsed[i] == ",":
i += 1
else:
clean.append(parsed[i])
i += 1
#Now pass through again to handle the default values.
i = 0
ready = []
while i < len(clean):
if isinstance(clean[i], str) and re.match("=>?", clean[i]):
ready.pop()
if ">" in clean[i]:
ready.append([clean[i-1], ("> " + clean[i+1][0], clean[i+1][1])])
else:
ready.append([clean[i-1], clean[i+1]])
i += 2
else:
ready.append(clean[i])
i += 1
return ready |
def set_centers_mean_cov(self, estimation, centers_mean_cov):
"""Set estimation on centers
Parameters
----------
        estimation : 1D array
            Either prior or posterior estimation
centers : 2D array, in shape [K, n_dim]
Estimation on centers
"""
estimation[self.map_offset[2]:self.map_offset[3]] =\
centers_mean_cov.ravel() | Set estimation on centers
Parameters
----------
estimation : 1D array
    Either prior or posterior estimation
centers : 2D array, in shape [K, n_dim]
Estimation on centers | Below is the instruction that describes the task:
### Input:
Set estimation on centers
Parameters
----------
estimation : 1D array
    Either prior or posterior estimation
centers : 2D array, in shape [K, n_dim]
Estimation on centers
### Response:
def set_centers_mean_cov(self, estimation, centers_mean_cov):
"""Set estimation on centers
Parameters
----------
        estimation : 1D array
            Either prior or posterior estimation
centers : 2D array, in shape [K, n_dim]
Estimation on centers
"""
estimation[self.map_offset[2]:self.map_offset[3]] =\
centers_mean_cov.ravel() |
def get_key(self, key_type, key_id):
"""
get_key constructs a key given a key type and a key id.
Keyword arguments:
key_type -- the type of key (e.g.: 'friend_request')
key_id -- the key id (e.g.: '12345')
returns a string representing the key
(e.g.: 'friend_request:{12345}')
"""
return "{0}:{1}{2}{3}".format(key_type, self._hash_start, key_id,
self._hash_stop) | get_key constructs a key given a key type and a key id.
Keyword arguments:
key_type -- the type of key (e.g.: 'friend_request')
key_id -- the key id (e.g.: '12345')
returns a string representing the key
(e.g.: 'friend_request:{12345}') | Below is the instruction that describes the task:
### Input:
get_key constructs a key given a key type and a key id.
Keyword arguments:
key_type -- the type of key (e.g.: 'friend_request')
key_id -- the key id (e.g.: '12345')
returns a string representing the key
(e.g.: 'friend_request:{12345}')
### Response:
def get_key(self, key_type, key_id):
"""
get_key constructs a key given a key type and a key id.
Keyword arguments:
key_type -- the type of key (e.g.: 'friend_request')
key_id -- the key id (e.g.: '12345')
returns a string representing the key
(e.g.: 'friend_request:{12345}')
"""
return "{0}:{1}{2}{3}".format(key_type, self._hash_start, key_id,
self._hash_stop) |
def get_agents_by_genus_type(self, agent_genus_type):
"""Gets an ``AgentList`` corresponding to the given agent genus ``Type`` which does not include agents of genus types derived from the specified ``Type``.
In plenary mode, the returned list contains all known agents or
an error results. Otherwise, the returned list may contain only
those agents that are accessible through this session.
arg: agent_genus_type (osid.type.Type): an agent genus type
return: (osid.authentication.AgentList) - the returned ``Agent``
list
raise: NullArgument - ``agent_genus_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceLookupSession.get_resources_by_genus_type
# NOTE: This implementation currently ignores plenary view
collection = JSONClientValidated('authentication',
collection='Agent',
runtime=self._runtime)
result = collection.find(
dict({'genusTypeId': str(agent_genus_type)},
**self._view_filter())).sort('_id', DESCENDING)
return objects.AgentList(result, runtime=self._runtime, proxy=self._proxy) | Gets an ``AgentList`` corresponding to the given agent genus ``Type`` which does not include agents of genus types derived from the specified ``Type``.
In plenary mode, the returned list contains all known agents or
an error results. Otherwise, the returned list may contain only
those agents that are accessible through this session.
arg: agent_genus_type (osid.type.Type): an agent genus type
return: (osid.authentication.AgentList) - the returned ``Agent``
list
raise: NullArgument - ``agent_genus_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.* | Below is the instruction that describes the task:
### Input:
Gets an ``AgentList`` corresponding to the given agent genus ``Type`` which does not include agents of genus types derived from the specified ``Type``.
In plenary mode, the returned list contains all known agents or
an error results. Otherwise, the returned list may contain only
those agents that are accessible through this session.
arg: agent_genus_type (osid.type.Type): an agent genus type
return: (osid.authentication.AgentList) - the returned ``Agent``
list
raise: NullArgument - ``agent_genus_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
### Response:
def get_agents_by_genus_type(self, agent_genus_type):
"""Gets an ``AgentList`` corresponding to the given agent genus ``Type`` which does not include agents of genus types derived from the specified ``Type``.
In plenary mode, the returned list contains all known agents or
an error results. Otherwise, the returned list may contain only
those agents that are accessible through this session.
arg: agent_genus_type (osid.type.Type): an agent genus type
return: (osid.authentication.AgentList) - the returned ``Agent``
list
raise: NullArgument - ``agent_genus_type`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceLookupSession.get_resources_by_genus_type
# NOTE: This implementation currently ignores plenary view
collection = JSONClientValidated('authentication',
collection='Agent',
runtime=self._runtime)
result = collection.find(
dict({'genusTypeId': str(agent_genus_type)},
**self._view_filter())).sort('_id', DESCENDING)
return objects.AgentList(result, runtime=self._runtime, proxy=self._proxy) |
def validate(ticket):
"""
Will attempt to validate the ticket. If validation fails, then False
is returned. If validation is successful, then True is returned
and the validated username is saved in the session under the
key `CAS_USERNAME_SESSION_KEY` while the validated attributes dictionary
is saved under the key 'CAS_ATTRIBUTES_SESSION_KEY'.
"""
cas_username_session_key = current_app.config['CAS_USERNAME_SESSION_KEY']
cas_attributes_session_key = current_app.config['CAS_ATTRIBUTES_SESSION_KEY']
current_app.logger.debug("validating token {0}".format(ticket))
cas_validate_url = create_cas_validate_url(
current_app.config['CAS_SERVER'],
current_app.config['CAS_VALIDATE_ROUTE'],
flask.url_for('.login', origin=flask.session.get('CAS_AFTER_LOGIN_SESSION_URL'), _external=True),
ticket)
current_app.logger.debug("Making GET request to {0}".format(
cas_validate_url))
xml_from_dict = {}
isValid = False
try:
xmldump = urlopen(cas_validate_url).read().strip().decode('utf8', 'ignore')
xml_from_dict = parse(xmldump)
isValid = True if "cas:authenticationSuccess" in xml_from_dict["cas:serviceResponse"] else False
except ValueError:
current_app.logger.error("CAS returned unexpected result")
if isValid:
current_app.logger.debug("valid")
xml_from_dict = xml_from_dict["cas:serviceResponse"]["cas:authenticationSuccess"]
username = xml_from_dict["cas:user"]
flask.session[cas_username_session_key] = username
if "cas:attributes" in xml_from_dict:
attributes = xml_from_dict["cas:attributes"]
if "cas:memberOf" in attributes:
attributes["cas:memberOf"] = attributes["cas:memberOf"].lstrip('[').rstrip(']').split(',')
for group_number in range(0, len(attributes['cas:memberOf'])):
attributes['cas:memberOf'][group_number] = attributes['cas:memberOf'][group_number].lstrip(' ').rstrip(' ')
flask.session[cas_attributes_session_key] = attributes
else:
current_app.logger.debug("invalid")
return isValid | Will attempt to validate the ticket. If validation fails, then False
is returned. If validation is successful, then True is returned
and the validated username is saved in the session under the
key `CAS_USERNAME_SESSION_KEY` while the validated attributes dictionary
is saved under the key 'CAS_ATTRIBUTES_SESSION_KEY'. | Below is the instruction that describes the task:
### Input:
Will attempt to validate the ticket. If validation fails, then False
is returned. If validation is successful, then True is returned
and the validated username is saved in the session under the
key `CAS_USERNAME_SESSION_KEY` while the validated attributes dictionary
is saved under the key 'CAS_ATTRIBUTES_SESSION_KEY'.
### Response:
def validate(ticket):
"""
Will attempt to validate the ticket. If validation fails, then False
is returned. If validation is successful, then True is returned
and the validated username is saved in the session under the
key `CAS_USERNAME_SESSION_KEY` while the validated attributes dictionary
is saved under the key 'CAS_ATTRIBUTES_SESSION_KEY'.
"""
cas_username_session_key = current_app.config['CAS_USERNAME_SESSION_KEY']
cas_attributes_session_key = current_app.config['CAS_ATTRIBUTES_SESSION_KEY']
current_app.logger.debug("validating token {0}".format(ticket))
cas_validate_url = create_cas_validate_url(
current_app.config['CAS_SERVER'],
current_app.config['CAS_VALIDATE_ROUTE'],
flask.url_for('.login', origin=flask.session.get('CAS_AFTER_LOGIN_SESSION_URL'), _external=True),
ticket)
current_app.logger.debug("Making GET request to {0}".format(
cas_validate_url))
xml_from_dict = {}
isValid = False
try:
xmldump = urlopen(cas_validate_url).read().strip().decode('utf8', 'ignore')
xml_from_dict = parse(xmldump)
isValid = True if "cas:authenticationSuccess" in xml_from_dict["cas:serviceResponse"] else False
except ValueError:
current_app.logger.error("CAS returned unexpected result")
if isValid:
current_app.logger.debug("valid")
xml_from_dict = xml_from_dict["cas:serviceResponse"]["cas:authenticationSuccess"]
username = xml_from_dict["cas:user"]
flask.session[cas_username_session_key] = username
if "cas:attributes" in xml_from_dict:
attributes = xml_from_dict["cas:attributes"]
if "cas:memberOf" in attributes:
attributes["cas:memberOf"] = attributes["cas:memberOf"].lstrip('[').rstrip(']').split(',')
for group_number in range(0, len(attributes['cas:memberOf'])):
attributes['cas:memberOf'][group_number] = attributes['cas:memberOf'][group_number].lstrip(' ').rstrip(' ')
flask.session[cas_attributes_session_key] = attributes
else:
current_app.logger.debug("invalid")
return isValid |
def url_builder(self, endpoint, *, root=None, params=None, url_params=None):
"""Create a URL for the specified endpoint.
Arguments:
endpoint (:py:class:`str`): The API endpoint to access.
root: (:py:class:`str`, optional): The root URL for the
service API.
params: (:py:class:`dict`, optional): The values to format
into the created URL (defaults to ``None``).
url_params: (:py:class:`dict`, optional): Parameters to add
to the end of the URL (defaults to ``None``).
Returns:
:py:class:`str`: The resulting URL.
"""
if root is None:
root = self.ROOT
scheme, netloc, path, _, _ = urlsplit(root)
return urlunsplit((
scheme,
netloc,
urljoin(path, endpoint),
urlencode(url_params or {}),
'',
)).format(**params or {}) | Create a URL for the specified endpoint.
Arguments:
endpoint (:py:class:`str`): The API endpoint to access.
root: (:py:class:`str`, optional): The root URL for the
service API.
params: (:py:class:`dict`, optional): The values to format
into the created URL (defaults to ``None``).
url_params: (:py:class:`dict`, optional): Parameters to add
to the end of the URL (defaults to ``None``).
Returns:
:py:class:`str`: The resulting URL. | Below is the instruction that describes the task:
### Input:
Create a URL for the specified endpoint.
Arguments:
endpoint (:py:class:`str`): The API endpoint to access.
root: (:py:class:`str`, optional): The root URL for the
service API.
params: (:py:class:`dict`, optional): The values to format
into the created URL (defaults to ``None``).
url_params: (:py:class:`dict`, optional): Parameters to add
to the end of the URL (defaults to ``None``).
Returns:
:py:class:`str`: The resulting URL.
### Response:
def url_builder(self, endpoint, *, root=None, params=None, url_params=None):
"""Create a URL for the specified endpoint.
Arguments:
endpoint (:py:class:`str`): The API endpoint to access.
root: (:py:class:`str`, optional): The root URL for the
service API.
params: (:py:class:`dict`, optional): The values to format
into the created URL (defaults to ``None``).
url_params: (:py:class:`dict`, optional): Parameters to add
to the end of the URL (defaults to ``None``).
Returns:
:py:class:`str`: The resulting URL.
"""
if root is None:
root = self.ROOT
scheme, netloc, path, _, _ = urlsplit(root)
return urlunsplit((
scheme,
netloc,
urljoin(path, endpoint),
urlencode(url_params or {}),
'',
)).format(**params or {}) |
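A self-contained sketch of the same URL composition: split the root, join the endpoint onto its path, append the encoded query string, then fill in any format placeholders. The Python 3 urllib.parse import and the example root, endpoint, and parameter names are assumptions made for illustration.
# Standalone sketch; not the original class method.
from urllib.parse import urlsplit, urlunsplit, urljoin, urlencode

def build_url(endpoint, root, params=None, url_params=None):
    scheme, netloc, path, _, _ = urlsplit(root)
    return urlunsplit(
        (scheme, netloc, urljoin(path, endpoint), urlencode(url_params or {}), '')
    ).format(**params or {})

print(build_url('users/{user_id}', 'https://api.example.com/v1/',
                params={'user_id': 42}, url_params={'page': 2}))
# https://api.example.com/v1/users/42?page=2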
def packages(self):
"""
Property for accessing :class:`PackageManager` instance, which is used to manage packages.
:rtype: yagocd.resources.package.PackageManager
"""
if self._package_manager is None:
self._package_manager = PackageManager(session=self._session)
return self._package_manager | Property for accessing :class:`PackageManager` instance, which is used to manage packages.
:rtype: yagocd.resources.package.PackageManager | Below is the instruction that describes the task:
### Input:
Property for accessing :class:`PackageManager` instance, which is used to manage packages.
:rtype: yagocd.resources.package.PackageManager
### Response:
def packages(self):
"""
Property for accessing :class:`PackageManager` instance, which is used to manage packages.
:rtype: yagocd.resources.package.PackageManager
"""
if self._package_manager is None:
self._package_manager = PackageManager(session=self._session)
return self._package_manager |
def complete(
self, config, prompt, session, context, current_arguments, current
):
# type: (CompletionInfo, str, ShellSession, BundleContext, List[str], str) -> List[str]
"""
Returns the list of services IDs matching the current state
:param config: Configuration of the current completion
:param prompt: Shell prompt (for re-display)
:param session: Shell session (to display in shell)
:param context: Bundle context of the Shell bundle
:param current_arguments: Current arguments (without the command itself)
:param current: Current word
:return: A list of matches
"""
# Register a method to display helpful completion
self.set_display_hook(self.display_hook, prompt, session, context)
# Return a list of component factories
with use_ipopo(context) as ipopo:
return [
"{} ".format(factory)
for factory in ipopo.get_factories()
if factory.startswith(current)
] | Returns the list of services IDs matching the current state
:param config: Configuration of the current completion
:param prompt: Shell prompt (for re-display)
:param session: Shell session (to display in shell)
:param context: Bundle context of the Shell bundle
:param current_arguments: Current arguments (without the command itself)
:param current: Current word
:return: A list of matches | Below is the instruction that describes the task:
### Input:
Returns the list of services IDs matching the current state
:param config: Configuration of the current completion
:param prompt: Shell prompt (for re-display)
:param session: Shell session (to display in shell)
:param context: Bundle context of the Shell bundle
:param current_arguments: Current arguments (without the command itself)
:param current: Current word
:return: A list of matches
### Response:
def complete(
self, config, prompt, session, context, current_arguments, current
):
# type: (CompletionInfo, str, ShellSession, BundleContext, List[str], str) -> List[str]
"""
Returns the list of services IDs matching the current state
:param config: Configuration of the current completion
:param prompt: Shell prompt (for re-display)
:param session: Shell session (to display in shell)
:param context: Bundle context of the Shell bundle
:param current_arguments: Current arguments (without the command itself)
:param current: Current word
:return: A list of matches
"""
# Register a method to display helpful completion
self.set_display_hook(self.display_hook, prompt, session, context)
# Return a list of component factories
with use_ipopo(context) as ipopo:
return [
"{} ".format(factory)
for factory in ipopo.get_factories()
if factory.startswith(current)
] |
def read(self):
# type: () -> Optional[str]
""" Read the project version from .py file.
This will regex search in the file for a
``__version__ = VERSION_STRING`` and read the version string.
"""
with open(self.version_file) as fp:
version = fp.read().strip()
if is_valid(version):
return version
return None | Read the project version from .py file.
This will regex search in the file for a
``__version__ = VERSION_STRING`` and read the version string. | Below is the instruction that describes the task:
### Input:
Read the project version from .py file.
This will regex search in the file for a
``__version__ = VERSION_STRING`` and read the version string.
### Response:
def read(self):
# type: () -> Optional[str]
""" Read the project version from .py file.
This will regex search in the file for a
``__version__ = VERSION_STRING`` and read the version string.
"""
with open(self.version_file) as fp:
version = fp.read().strip()
if is_valid(version):
return version
return None |
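The docstring above describes a regex search for ``__version__ = VERSION_STRING``, while the body shown simply reads and strips the whole file. The following is a hedged sketch of what a regex-based variant could look like; it is an assumption about the intended behaviour, not the project's actual code.
# Hypothetical regex-based reader for a __version__ assignment in a .py file.
import re

_VERSION_RE = re.compile(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]", re.MULTILINE)

def read_version_from_py(path):
    with open(path) as fp:
        match = _VERSION_RE.search(fp.read())
    return match.group(1) if match else None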
def existing_config_files():
"""
Method that calculates all the configuration files that are valid, according to the
'set_paths' and other methods for this module.
"""
global _ETC_PATHS
global _MAIN_CONFIG_FILE
global _CONFIG_VAR_INCLUDE
global _CONFIG_FILTER
config_files = []
for possible in _ETC_PATHS:
config_files = config_files + glob.glob("%s%s" % (possible, _MAIN_CONFIG_FILE))
if _CONFIG_VAR_INCLUDE != "":
main_config = Configuration("general", {
_CONFIG_VAR_INCLUDE:""
}, _MAIN_CONFIG_FILE)
if main_config.CONFIG_DIR != "":
for possible in _ETC_PATHS:
config_files = config_files + glob.glob("%s%s/%s" % (possible, main_config.CONFIG_DIR, _CONFIG_FILTER))
return config_files | Method that calculates all the configuration files that are valid, according to the
'set_paths' and other methods for this module. | Below is the instruction that describes the task:
### Input:
Method that calculates all the configuration files that are valid, according to the
'set_paths' and other methods for this module.
### Response:
def existing_config_files():
"""
Method that calculates all the configuration files that are valid, according to the
'set_paths' and other methods for this module.
"""
global _ETC_PATHS
global _MAIN_CONFIG_FILE
global _CONFIG_VAR_INCLUDE
global _CONFIG_FILTER
config_files = []
for possible in _ETC_PATHS:
config_files = config_files + glob.glob("%s%s" % (possible, _MAIN_CONFIG_FILE))
if _CONFIG_VAR_INCLUDE != "":
main_config = Configuration("general", {
_CONFIG_VAR_INCLUDE:""
}, _MAIN_CONFIG_FILE)
if main_config.CONFIG_DIR != "":
for possible in _ETC_PATHS:
config_files = config_files + glob.glob("%s%s/%s" % (possible, main_config.CONFIG_DIR, _CONFIG_FILTER))
return config_files |
def get_name(self, type_, id_):
"""
Read a cached name if available.
:param type_: str, "owner" or "tag"
:param id_: int, eg. 123456
:returns: str, or None
"""
cachefile = self.filename(type_, id_)
try:
with open(cachefile, 'r') as f:
return f.read()
except (OSError, IOError) as e:
if e.errno != errno.ENOENT:
raise | Read a cached name if available.
:param type_: str, "owner" or "tag"
:param id_: int, eg. 123456
:returns: str, or None | Below is the instruction that describes the task:
### Input:
Read a cached name if available.
:param type_: str, "owner" or "tag"
:param id_: int, eg. 123456
:returns: str, or None
### Response:
def get_name(self, type_, id_):
"""
Read a cached name if available.
:param type_: str, "owner" or "tag"
:param id_: int, eg. 123456
:returns: str, or None
"""
cachefile = self.filename(type_, id_)
try:
with open(cachefile, 'r') as f:
return f.read()
except (OSError, IOError) as e:
if e.errno != errno.ENOENT:
raise |
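A standalone version of the same pattern, where a missing cache file yields None and any other I/O error propagates; the on-disk layout used to build the path is a guess, since the real filename() helper is not shown.
# Toy cache reader; the directory layout is invented for illustration.
import errno
import os

def read_cached_name(cache_dir, type_, id_):
    cachefile = os.path.join(cache_dir, type_, str(id_))
    try:
        with open(cachefile, 'r') as f:
            return f.read()
    except (OSError, IOError) as e:
        if e.errno != errno.ENOENT:
            raise
        return None  # missing cache entry, same as falling off the end above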
def camel_to_snake(self, s):
"""Constructs nice dir name from class name, e.g. FooBar => foo_bar.
:param s: The string which should be converted to snake_case.
"""
return self._underscore_re2.sub(r'\1_\2', self._underscore_re1.sub(r'\1_\2', s)).lower() | Constructs nice dir name from class name, e.g. FooBar => foo_bar.
:param s: The string which should be converted to snake_case. | Below is the instruction that describes the task:
### Input:
Constructs nice dir name from class name, e.g. FooBar => foo_bar.
:param s: The string which should be converted to snake_case.
### Response:
def camel_to_snake(self, s):
"""Constructs nice dir name from class name, e.g. FooBar => foo_bar.
:param s: The string which should be converted to snake_case.
"""
return self._underscore_re2.sub(r'\1_\2', self._underscore_re1.sub(r'\1_\2', s)).lower() |
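The two precompiled patterns (_underscore_re1 and _underscore_re2) are not shown above; this sketch uses the common two-pass camel-to-snake regexes, which reproduce the FooBar => foo_bar example but are only an assumption about the originals.
# Assumed regexes: first split WordWord boundaries, then letter/digit-to-upper ones.
import re

_underscore_re1 = re.compile(r'(.)([A-Z][a-z]+)')
_underscore_re2 = re.compile(r'([a-z0-9])([A-Z])')

def camel_to_snake(s):
    return _underscore_re2.sub(r'\1_\2', _underscore_re1.sub(r'\1_\2', s)).lower()

print(camel_to_snake('FooBar'))        # foo_bar
print(camel_to_snake('HTTPResponse'))  # http_response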
def online_time_to_string(value, timeFormat, utcOffset=0):
"""Converts AGOL timestamp to formatted string.
Args:
value (float): A UTC timestamp as reported by AGOL (time in ms since Unix epoch * 1000)
timeFormat (str): Date/Time format string as parsed by :py:func:`datetime.strftime`.
utcOffset (int): Hours difference from UTC and desired output. Default is 0 (remain in UTC).
Returns:
str: A string representation of the timestamp.
Examples:
>>> rcrest.general.online_time_to_string(1457167261000.0, "%Y-%m-%d %H:%M:%S")
'2016-03-05 00:41:01'
>>> rcrest.general.online_time_to_string(731392515000.0, '%m/%d/%Y %H:%M:%S', -8) # PST is UTC-8:00
'03/05/1993 12:35:15'
See Also:
:py:func:`local_time_to_online` for converting a :py:class:`datetime.datetime` object to AGOL timestamp
"""
try:
return datetime.datetime.fromtimestamp(value/1000 + utcOffset*3600).strftime(timeFormat)
except:
return ""
finally:
pass | Converts AGOL timestamp to formatted string.
Args:
value (float): A UTC timestamp as reported by AGOL (time in ms since Unix epoch * 1000)
timeFormat (str): Date/Time format string as parsed by :py:func:`datetime.strftime`.
utcOffset (int): Hours difference from UTC and desired output. Default is 0 (remain in UTC).
Returns:
str: A string representation of the timestamp.
Examples:
>>> rcrest.general.online_time_to_string(1457167261000.0, "%Y-%m-%d %H:%M:%S")
'2016-03-05 00:41:01'
>>> rcrest.general.online_time_to_string(731392515000.0, '%m/%d/%Y %H:%M:%S', -8) # PST is UTC-8:00
'03/05/1993 12:35:15'
See Also:
:py:func:`local_time_to_online` for converting a :py:class:`datetime.datetime` object to AGOL timestamp | Below is the instruction that describes the task:
### Input:
Converts AGOL timestamp to formatted string.
Args:
value (float): A UTC timestamp as reported by AGOL (time in ms since Unix epoch * 1000)
timeFormat (str): Date/Time format string as parsed by :py:func:`datetime.strftime`.
utcOffset (int): Hours difference from UTC and desired output. Default is 0 (remain in UTC).
Returns:
str: A string representation of the timestamp.
Examples:
>>> rcrest.general.online_time_to_string(1457167261000.0, "%Y-%m-%d %H:%M:%S")
'2016-03-05 00:41:01'
>>> rcrest.general.online_time_to_string(731392515000.0, '%m/%d/%Y %H:%M:%S', -8) # PST is UTC-8:00
'03/05/1993 12:35:15'
See Also:
:py:func:`local_time_to_online` for converting a :py:class:`datetime.datetime` object to AGOL timestamp
### Response:
def online_time_to_string(value, timeFormat, utcOffset=0):
"""Converts AGOL timestamp to formatted string.
Args:
value (float): A UTC timestamp as reported by AGOL (time in ms since Unix epoch * 1000)
timeFormat (str): Date/Time format string as parsed by :py:func:`datetime.strftime`.
utcOffset (int): Hours difference from UTC and desired output. Default is 0 (remain in UTC).
Returns:
str: A string representation of the timestamp.
Examples:
>>> rcrest.general.online_time_to_string(1457167261000.0, "%Y-%m-%d %H:%M:%S")
'2016-03-05 00:41:01'
>>> rcrest.general.online_time_to_string(731392515000.0, '%m/%d/%Y %H:%M:%S', -8) # PST is UTC-8:00
'03/05/1993 12:35:15'
See Also:
:py:func:`local_time_to_online` for converting a :py:class:`datetime.datetime` object to AGOL timestamp
"""
try:
return datetime.datetime.fromtimestamp(value/1000 + utcOffset*3600).strftime(timeFormat)
except:
return ""
finally:
pass |
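A worked example of the arithmetic: AGOL reports milliseconds since the Unix epoch, so the value is divided by 1000 and shifted by utcOffset hours before formatting. Note that datetime.fromtimestamp applies the machine's local timezone, so the docstring's sample outputs implicitly assume a UTC-localized machine.
# Reproducing the conversion outside the function.
import datetime

value, utc_offset = 1457167261000.0, 0
dt = datetime.datetime.fromtimestamp(value / 1000 + utc_offset * 3600)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # '2016-03-05 00:41:01' on a UTC-localized machine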
def df(objs, labels=None):
"""
Create pandas DataFrame from the sequence of same-type objects.
When a list of labels is given then only retain those labels and
drop the rest.
"""
import pandas as pd
from .objects import Object, DynamicObject
if objs:
objs = list(objs)
obj = objs[0]
if isinstance(obj, Object):
df = pd.DataFrame.from_records(o.tuple() for o in objs)
df.columns = obj.__class__.defaults
elif isinstance(obj, DynamicObject):
df = pd.DataFrame.from_records(o.__dict__ for o in objs)
else:
df = pd.DataFrame.from_records(objs)
if isinstance(obj, tuple) and hasattr(obj, '_fields'):
# assume it's a namedtuple
df.columns = obj.__class__._fields
else:
df = None
if labels:
exclude = [label for label in df if label not in labels]
df = df.drop(exclude, axis=1)
return df | Create pandas DataFrame from the sequence of same-type objects.
When a list of labels is given then only retain those labels and
drop the rest. | Below is the instruction that describes the task:
### Input:
Create pandas DataFrame from the sequence of same-type objects.
When a list of labels is given then only retain those labels and
drop the rest.
### Response:
def df(objs, labels=None):
"""
Create pandas DataFrame from the sequence of same-type objects.
When a list of labels is given then only retain those labels and
drop the rest.
"""
import pandas as pd
from .objects import Object, DynamicObject
if objs:
objs = list(objs)
obj = objs[0]
if isinstance(obj, Object):
df = pd.DataFrame.from_records(o.tuple() for o in objs)
df.columns = obj.__class__.defaults
elif isinstance(obj, DynamicObject):
df = pd.DataFrame.from_records(o.__dict__ for o in objs)
else:
df = pd.DataFrame.from_records(objs)
if isinstance(obj, tuple) and hasattr(obj, '_fields'):
# assume it's a namedtuple
df.columns = obj.__class__._fields
else:
df = None
if labels:
exclude = [label for label in df if label not in labels]
df = df.drop(exclude, axis=1)
return df |
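A usage sketch that mirrors the namedtuple branch only; the Trade tuple and its fields are invented, and the Object/DynamicObject branches are skipped because they need the library's own classes (the function's relative import of .objects also means it only runs inside its package).
# Standalone illustration of the namedtuple handling and label filtering.
import pandas as pd
from collections import namedtuple

Trade = namedtuple('Trade', ['symbol', 'price', 'size'])
trades = [Trade('AAPL', 100.0, 10), Trade('MSFT', 50.0, 5)]

frame = pd.DataFrame.from_records(trades)
frame.columns = Trade._fields  # same column-naming trick as above
frame = frame.drop([c for c in frame if c not in ('symbol', 'price')], axis=1)
print(frame)  # only the symbol and price columns remain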
def registrant_monitor(self, query, exclude=[], days_back=0, limit=None, **kwargs):
"""One or more terms as a Python list or separated by the pipe character ( | )."""
return self._results('registrant-alert', '/v1/registrant-alert', query=delimited(query),
exclude=delimited(exclude), days_back=days_back, limit=limit, items_path=('alerts', ),
**kwargs) | One or more terms as a Python list or separated by the pipe character ( | ). | Below is the the instruction that describes the task:
### Input:
One or more terms as a Python list or separated by the pipe character ( | ).
### Response:
def registrant_monitor(self, query, exclude=[], days_back=0, limit=None, **kwargs):
"""One or more terms as a Python list or separated by the pipe character ( | )."""
return self._results('registrant-alert', '/v1/registrant-alert', query=delimited(query),
exclude=delimited(exclude), days_back=days_back, limit=limit, items_path=('alerts', ),
**kwargs) |
def discrete_ksD(data, xmin, alpha):
"""
given a sorted data set, a minimum, and an alpha, returns the power law ks-test
D value w/data
The returned value is the "D" parameter in the ks test
(this is implemented differently from the continuous version because there
are potentially multiple identical points that need comparison to the power
law)
"""
zz = np.sort(data[data>=xmin])
nn = float(len(zz))
if nn < 2:
return np.inf
#cx = np.arange(nn,dtype='float')/float(nn)
#cf = 1.0-(zz/xmin)**(1.0-alpha)
model_cdf = 1.0-(zz.astype('float')/float(xmin))**(1.0-alpha)
data_cdf = np.searchsorted(zz,zz,side='left')/(float(nn))
ks = max(abs(model_cdf-data_cdf))
return ks | given a sorted data set, a minimum, and an alpha, returns the power law ks-test
D value w/data
The returned value is the "D" parameter in the ks test
(this is implemented differently from the continuous version because there
are potentially multiple identical points that need comparison to the power
law) | Below is the instruction that describes the task:
### Input:
given a sorted data set, a minimum, and an alpha, returns the power law ks-test
D value w/data
The returned value is the "D" parameter in the ks test
(this is implemented differently from the continuous version because there
are potentially multiple identical points that need comparison to the power
law)
### Response:
def discrete_ksD(data, xmin, alpha):
"""
given a sorted data set, a minimum, and an alpha, returns the power law ks-test
D value w/data
The returned value is the "D" parameter in the ks test
(this is implemented differently from the continuous version because there
are potentially multiple identical points that need comparison to the power
law)
"""
zz = np.sort(data[data>=xmin])
nn = float(len(zz))
if nn < 2:
return np.inf
#cx = np.arange(nn,dtype='float')/float(nn)
#cf = 1.0-(zz/xmin)**(1.0-alpha)
model_cdf = 1.0-(zz.astype('float')/float(xmin))**(1.0-alpha)
data_cdf = np.searchsorted(zz,zz,side='left')/(float(nn))
ks = max(abs(model_cdf-data_cdf))
return ks |
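A small usage sketch on arbitrary synthetic counts, assuming the surrounding module imports numpy as np as the function body implies; it simply exercises the model-versus-empirical CDF comparison and prints the resulting D value.
# The data values are arbitrary; xmin and alpha are chosen for illustration.
import numpy as np

data = np.array([1, 1, 2, 2, 2, 3, 4, 5, 8, 13])
print(discrete_ksD(data, xmin=1, alpha=2.5))  # maximum |model_cdf - data_cdf|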
def readByte(self):
"""
Reads a byte value from the L{ReadData} stream object.
@rtype: int
@return: The byte value read from the L{ReadData} stream.
"""
byte = unpack('B' if not self.signed else 'b', self.readAt(self.offset, 1))[0]
self.offset += 1
return byte | Reads a byte value from the L{ReadData} stream object.
@rtype: int
@return: The byte value read from the L{ReadData} stream. | Below is the instruction that describes the task:
### Input:
Reads a byte value from the L{ReadData} stream object.
@rtype: int
@return: The byte value read from the L{ReadData} stream.
### Response:
def readByte(self):
"""
Reads a byte value from the L{ReadData} stream object.
@rtype: int
@return: The byte value read from the L{ReadData} stream.
"""
byte = unpack('B' if not self.signed else 'b', self.readAt(self.offset, 1))[0]
self.offset += 1
return byte |
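A minimal stand-in for the stream object this method belongs to, sketching only the pieces readByte touches (offset, the signed flag, and readAt); the real ReadData class certainly carries more state than this.
# Hypothetical MiniReadData; mimics just enough of ReadData for readByte.
from struct import unpack

class MiniReadData(object):
    def __init__(self, data, signed=False):
        self.data = data
        self.offset = 0
        self.signed = signed

    def readAt(self, offset, size):
        return self.data[offset:offset + size]

    def readByte(self):
        byte = unpack('B' if not self.signed else 'b', self.readAt(self.offset, 1))[0]
        self.offset += 1
        return byte

rd = MiniReadData(b'\xff\x01')
print(rd.readByte(), rd.readByte())  # 255 1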
def build_frontend(self, frontend_node):
"""parse `frontend` sections, and return a config.Frontend
Args:
frontend_node (TreeNode): Description
Raises:
Exception: Description
Returns:
config.Frontend: an object
"""
proxy_name = frontend_node.frontend_header.proxy_name.text
service_address_node = frontend_node.frontend_header.service_address
# parse the config block
config_block_lines = self.__build_config_block(
frontend_node.config_block)
# parse host and port
host, port = '', ''
if isinstance(service_address_node, pegnode.ServiceAddress):
host = service_address_node.host.text
port = service_address_node.port.text
else:
# use `bind` in config lines to fill in host and port
# just use the first
for line in config_block_lines:
if isinstance(line, config.Bind):
host, port = line.host, line.port
break
else:
raise Exception(
'Not specify host and port in `frontend` definition')
return config.Frontend(
name=proxy_name, host=host, port=port,
config_block=config_block_lines) | parse `frontend` sections, and return a config.Frontend
Args:
frontend_node (TreeNode): Description
Raises:
Exception: Description
Returns:
config.Frontend: an object | Below is the instruction that describes the task:
### Input:
parse `frontend` sections, and return a config.Frontend
Args:
frontend_node (TreeNode): Description
Raises:
Exception: Description
Returns:
config.Frontend: an object
### Response:
def build_frontend(self, frontend_node):
"""parse `frontend` sections, and return a config.Frontend
Args:
frontend_node (TreeNode): Description
Raises:
Exception: Description
Returns:
config.Frontend: an object
"""
proxy_name = frontend_node.frontend_header.proxy_name.text
service_address_node = frontend_node.frontend_header.service_address
# parse the config block
config_block_lines = self.__build_config_block(
frontend_node.config_block)
# parse host and port
host, port = '', ''
if isinstance(service_address_node, pegnode.ServiceAddress):
host = service_address_node.host.text
port = service_address_node.port.text
else:
# use `bind` in config lines to fill in host and port
# just use the first
for line in config_block_lines:
if isinstance(line, config.Bind):
host, port = line.host, line.port
break
else:
raise Exception(
'Not specify host and port in `frontend` definition')
return config.Frontend(
name=proxy_name, host=host, port=port,
config_block=config_block_lines) |
def createReport(self,
out_file_path,
studyAreas,
report=None,
format="PDF",
reportFields=None,
studyAreasOptions=None,
useData=None,
inSR=4326,
):
"""
The GeoEnrichment Create Report method uses the concept of a study
area to define the location of the point or area that you want to
enrich with generated reports. This method allows you to create
many types of high-quality reports for a variety of use cases
describing the input area. If a point is used as a study area, the
service will create a 1-mile ring buffer around the point to
collect and append enrichment data. Optionally, you can create a
buffer ring or drive-time service area around the points to prepare
PDF or Excel reports for the study areas.
Note:
For full examples for each input, please review the following:
http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_report/02r30000022q000000/
Inputs:
out_file_path - save location of the report
studyAreas - Required parameter to specify a list of input
features to be enriched. The input can be a Point, Polygon,
Address, or named administrative boundary. The locations can be
passed in as a single object or as a list of objects.
report - Default report to generate.
format - specify the generated report. Options are: XLSX or PDF
reportFields - Optional parameter specifies additional choices to
customize reports. See the URL above to see all the options.
studyAreasOptions - Optional parameter to specify enrichment
behavior. For points described as map coordinates, a 1-mile
ring area centered on each site will be used by default. You
can use this parameter to change these default settings.
With this parameter, the caller can override the default
behavior describing how the enrichment attributes are appended
to the input features described in studyAreas. For example,
you can change the output ring buffer to 5 miles, change the
number of output buffers created around each point, and also
change the output buffer type to a drive-time service area
rather than a simple ring buffer.
useData - By default, the service will automatically determine
the country or dataset that is associated with each location or
area submitted in the studyAreas parameter; however, there is
an associated computational cost which may lengthen the time it
takes to return a response. To skip this intermediate step and
potentially improve the speed and performance of the service,
the caller can specify the country or dataset information up
front through this parameter.
inSR - parameter to define the input geometries in the studyAreas
parameter in a specified spatial reference system.
"""
url = self._base_url + self._url_create_report
if isinstance(studyAreas, list) == False:
studyAreas = [studyAreas]
studyAreas = self.__geometryToDict(studyAreas)
params = {
"f" : "bin",
"studyAreas" : studyAreas,
"inSR" : inSR,
}
if not report is None:
params['report'] = report
if format is None:
format = "pdf"
elif format.lower() in ['pdf', 'xlsx']:
params['format'] = format.lower()
else:
raise AttributeError("Invalid format value.")
if not reportFields is None:
params['reportFields'] = reportFields
if not studyAreasOptions is None:
params['studyAreasOptions'] = studyAreasOptions
if not useData is None:
params['useData'] = useData
result = self._get(url=url,
param_dict=params,
securityHandler=self._securityHandler,
proxy_url=self._proxy_url,
proxy_port=self._proxy_port,
out_folder=os.path.dirname(out_file_path))
return result | The GeoEnrichment Create Report method uses the concept of a study
area to define the location of the point or area that you want to
enrich with generated reports. This method allows you to create
many types of high-quality reports for a variety of use cases
describing the input area. If a point is used as a study area, the
service will create a 1-mile ring buffer around the point to
collect and append enrichment data. Optionally, you can create a
buffer ring or drive-time service area around the points to prepare
PDF or Excel reports for the study areas.
Note:
For full examples for each input, please review the following:
http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_report/02r30000022q000000/
Inputs:
out_file_path - save location of the report
studyAreas - Required parameter to specify a list of input
features to be enriched. The input can be a Point, Polygon,
Address, or named administrative boundary. The locations can be
passed in as a single object or as a list of objects.
report - Default report to generate.
format - specify the generated report. Options are: XLSX or PDF
reportFields - Optional parameter specifies additional choices to
customize reports. See the URL above to see all the options.
studyAreasOptions - Optional parameter to specify enrichment
behavior. For points described as map coordinates, a 1-mile
ring area centered on each site will be used by default. You
can use this parameter to change these default settings.
With this parameter, the caller can override the default
behavior describing how the enrichment attributes are appended
to the input features described in studyAreas. For example,
you can change the output ring buffer to 5 miles, change the
number of output buffers created around each point, and also
change the output buffer type to a drive-time service area
rather than a simple ring buffer.
useData - By default, the service will automatically determine
the country or dataset that is associated with each location or
area submitted in the studyAreas parameter; however, there is
an associated computational cost which may lengthen the time it
takes to return a response. To skip this intermediate step and
potentially improve the speed and performance of the service,
the caller can specify the country or dataset information up
front through this parameter.
inSR - parameter to define the input geometries in the studyAreas
parameter in a specified spatial reference system. | Below is the instruction that describes the task:
### Input:
The GeoEnrichment Create Report method uses the concept of a study
area to define the location of the point or area that you want to
enrich with generated reports. This method allows you to create
many types of high-quality reports for a variety of use cases
describing the input area. If a point is used as a study area, the
service will create a 1-mile ring buffer around the point to
collect and append enrichment data. Optionally, you can create a
buffer ring or drive-time service area around the points to prepare
PDF or Excel reports for the study areas.
Note:
For full examples for each input, please review the following:
http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_report/02r30000022q000000/
Inputs:
out_file_path - save location of the report
studyAreas - Required parameter to specify a list of input
features to be enriched. The input can be a Point, Polygon,
Address, or named administrative boundary. The locations can be
passed in as a single object or as a list of objects.
report - Default report to generate.
format - specify the generated report. Options are: XLSX or PDF
reportFields - Optional parameter specifies additional choices to
customize reports. See the URL above to see all the options.
studyAreasOptions - Optional parameter to specify enrichment
behavior. For points described as map coordinates, a 1-mile
ring area centered on each site will be used by default. You
can use this parameter to change these default settings.
With this parameter, the caller can override the default
behavior describing how the enrichment attributes are appended
to the input features described in studyAreas. For example,
you can change the output ring buffer to 5 miles, change the
number of output buffers created around each point, and also
change the output buffer type to a drive-time service area
rather than a simple ring buffer.
useData - By default, the service will automatically determine
the country or dataset that is associated with each location or
area submitted in the studyAreas parameter; however, there is
an associated computational cost which may lengthen the time it
takes to return a response. To skip this intermediate step and
potentially improve the speed and performance of the service,
the caller can specify the country or dataset information up
front through this parameter.
inSR - parameter to define the input geometries in the studyAreas
parameter in a specified spatial reference system.
### Response:
def createReport(self,
out_file_path,
studyAreas,
report=None,
format="PDF",
reportFields=None,
studyAreasOptions=None,
useData=None,
inSR=4326,
):
"""
The GeoEnrichment Create Report method uses the concept of a study
area to define the location of the point or area that you want to
enrich with generated reports. This method allows you to create
many types of high-quality reports for a variety of use cases
describing the input area. If a point is used as a study area, the
service will create a 1-mile ring buffer around the point to
collect and append enrichment data. Optionally, you can create a
buffer ring or drive-time service area around the points to prepare
PDF or Excel reports for the study areas.
Note:
For full examples for each input, please review the following:
http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_report/02r30000022q000000/
Inputs:
out_file_path - save location of the report
studyAreas - Required parameter to specify a list of input
features to be enriched. The input can be a Point, Polygon,
Address, or named administrative boundary. The locations can be
passed in as a single object or as a list of objects.
report - Default report to generate.
format - specify the generated report. Options are: XLSX or PDF
reportFields - Optional parameter specifies additional choices to
customize reports. See the URL above to see all the options.
studyAreasOptions - Optional parameter to specify enrichment
behavior. For points described as map coordinates, a 1-mile
ring area centered on each site will be used by default. You
can use this parameter to change these default settings.
With this parameter, the caller can override the default
behavior describing how the enrichment attributes are appended
to the input features described in studyAreas. For example,
you can change the output ring buffer to 5 miles, change the
number of output buffers created around each point, and also
change the output buffer type to a drive-time service area
rather than a simple ring buffer.
useData - By default, the service will automatically determine
the country or dataset that is associated with each location or
area submitted in the studyAreas parameter; however, there is
an associated computational cost which may lengthen the time it
takes to return a response. To skip this intermediate step and
potentially improve the speed and performance of the service,
the caller can specify the country or dataset information up
front through this parameter.
inSR - parameter to define the input geometries in the studyAreas
parameter in a specified spatial reference system.
"""
url = self._base_url + self._url_create_report
if isinstance(studyAreas, list) == False:
studyAreas = [studyAreas]
studyAreas = self.__geometryToDict(studyAreas)
params = {
"f" : "bin",
"studyAreas" : studyAreas,
"inSR" : inSR,
}
if not report is None:
params['report'] = report
if format is None:
format = "pdf"
elif format.lower() in ['pdf', 'xlsx']:
params['format'] = format.lower()
else:
raise AttributeError("Invalid format value.")
if not reportFields is None:
params['reportFields'] = reportFields
if not studyAreasOptions is None:
params['studyAreasOptions'] = studyAreasOptions
if not useData is None:
params['useData'] = useData
result = self._get(url=url,
param_dict=params,
securityHandler=self._securityHandler,
proxy_url=self._proxy_url,
proxy_port=self._proxy_port,
out_folder=os.path.dirname(out_file_path))
return result |
def dumps(value,encoding=None):
"""dumps(object,encoding=None) -> string
This function dumps a python object as a tnetstring.
"""
# This uses a deque to collect output fragments in reverse order,
# then joins them together at the end. It's measurably faster
# than creating all the intermediate strings.
# If you're reading this to get a handle on the tnetstring format,
# consider the _gdumps() function instead; it's a standard top-down
# generator that's simpler to understand but much less efficient.
q = deque()
_rdumpq(q,0,value,encoding)
return "".join(q) | dumps(object,encoding=None) -> string
This function dumps a python object as a tnetstring. | Below is the instruction that describes the task:
### Input:
dumps(object,encoding=None) -> string
This function dumps a python object as a tnetstring.
### Response:
def dumps(value,encoding=None):
"""dumps(object,encoding=None) -> string
This function dumps a python object as a tnetstring.
"""
# This uses a deque to collect output fragments in reverse order,
# then joins them together at the end. It's measurably faster
# than creating all the intermediate strings.
# If you're reading this to get a handle on the tnetstring format,
# consider the _gdumps() function instead; it's a standard top-down
# generator that's simpler to understand but much less efficient.
q = deque()
_rdumpq(q,0,value,encoding)
return "".join(q) |
def apps_notify_create(self, data, **kwargs):
"https://developer.zendesk.com/rest_api/docs/core/apps#send-notification-to-app"
api_path = "/api/v2/apps/notify.json"
return self.call(api_path, method="POST", data=data, **kwargs) | https://developer.zendesk.com/rest_api/docs/core/apps#send-notification-to-app | Below is the instruction that describes the task:
### Input:
https://developer.zendesk.com/rest_api/docs/core/apps#send-notification-to-app
### Response:
def apps_notify_create(self, data, **kwargs):
"https://developer.zendesk.com/rest_api/docs/core/apps#send-notification-to-app"
api_path = "/api/v2/apps/notify.json"
return self.call(api_path, method="POST", data=data, **kwargs) |
def run_with_output(self, *args, **kwargs):
"""Runs command on every first job in the run, returns stdout."""
for job in self.jobs:
job.run_with_output(*args, **kwargs) | Runs command on every first job in the run, returns stdout. | Below is the the instruction that describes the task:
### Input:
Runs command on every first job in the run, returns stdout.
### Response:
def run_with_output(self, *args, **kwargs):
"""Runs command on every first job in the run, returns stdout."""
for job in self.jobs:
job.run_with_output(*args, **kwargs) |
def register_user(self, user, allow_login=None, send_email=None,
_force_login_without_confirmation=False):
"""
Service method to register a user.
Sends signal `user_registered`.
Returns True if the user has been logged in, False otherwise.
"""
should_login_user = (not self.security.confirmable
or self.security.login_without_confirmation
or _force_login_without_confirmation)
should_login_user = (should_login_user if allow_login is None
else allow_login and should_login_user)
if should_login_user:
user.active = True
# confirmation token depends on having user.id set, which requires
# the user be committed to the database
self.user_manager.save(user, commit=True)
confirmation_link, token = None, None
if self.security.confirmable and not _force_login_without_confirmation:
token = self.security_utils_service.generate_confirmation_token(user)
confirmation_link = url_for('security_controller.confirm_email',
token=token, _external=True)
user_registered.send(app._get_current_object(),
user=user, confirm_token=token)
if (send_email or (
send_email is None
and app.config.SECURITY_SEND_REGISTER_EMAIL)):
self.send_mail(_('flask_unchained.bundles.security:email_subject.register'),
to=user.email,
template='security/email/welcome.html',
user=user,
confirmation_link=confirmation_link)
if should_login_user:
return self.login_user(user, force=_force_login_without_confirmation)
return False | Service method to register a user.
Sends signal `user_registered`.
Returns True if the user has been logged in, False otherwise. | Below is the instruction that describes the task:
### Input:
Service method to register a user.
Sends signal `user_registered`.
Returns True if the user has been logged in, False otherwise.
### Response:
def register_user(self, user, allow_login=None, send_email=None,
_force_login_without_confirmation=False):
"""
Service method to register a user.
Sends signal `user_registered`.
Returns True if the user has been logged in, False otherwise.
"""
should_login_user = (not self.security.confirmable
or self.security.login_without_confirmation
or _force_login_without_confirmation)
should_login_user = (should_login_user if allow_login is None
else allow_login and should_login_user)
if should_login_user:
user.active = True
# confirmation token depends on having user.id set, which requires
# the user be committed to the database
self.user_manager.save(user, commit=True)
confirmation_link, token = None, None
if self.security.confirmable and not _force_login_without_confirmation:
token = self.security_utils_service.generate_confirmation_token(user)
confirmation_link = url_for('security_controller.confirm_email',
token=token, _external=True)
user_registered.send(app._get_current_object(),
user=user, confirm_token=token)
if (send_email or (
send_email is None
and app.config.SECURITY_SEND_REGISTER_EMAIL)):
self.send_mail(_('flask_unchained.bundles.security:email_subject.register'),
to=user.email,
template='security/email/welcome.html',
user=user,
confirmation_link=confirmation_link)
if should_login_user:
return self.login_user(user, force=_force_login_without_confirmation)
return False |
def available(self):
"""Check whether the ADB connection is intact."""
if not self.adb_server_ip:
# python-adb
return bool(self._adb)
# pure-python-adb
try:
# make sure the server is available
adb_devices = self._adb_client.devices()
# make sure the device is available
try:
# case 1: the device is currently available
if any([self.host in dev.get_serial_no() for dev in adb_devices]):
if not self._available:
self._available = True
return True
# case 2: the device is not currently available
if self._available:
logging.error('ADB server is not connected to the device.')
self._available = False
return False
except RuntimeError:
if self._available:
logging.error('ADB device is unavailable; encountered an error when searching for device.')
self._available = False
return False
except RuntimeError:
if self._available:
logging.error('ADB server is unavailable.')
self._available = False
return False | Check whether the ADB connection is intact. | Below is the the instruction that describes the task:
### Input:
Check whether the ADB connection is intact.
### Response:
def available(self):
"""Check whether the ADB connection is intact."""
if not self.adb_server_ip:
# python-adb
return bool(self._adb)
# pure-python-adb
try:
# make sure the server is available
adb_devices = self._adb_client.devices()
# make sure the device is available
try:
# case 1: the device is currently available
if any([self.host in dev.get_serial_no() for dev in adb_devices]):
if not self._available:
self._available = True
return True
# case 2: the device is not currently available
if self._available:
logging.error('ADB server is not connected to the device.')
self._available = False
return False
except RuntimeError:
if self._available:
logging.error('ADB device is unavailable; encountered an error when searching for device.')
self._available = False
return False
except RuntimeError:
if self._available:
logging.error('ADB server is unavailable.')
self._available = False
return False |
def update(self, resource, id_or_uri=None, timeout=-1):
"""
Updates the specified alert resource.
Args:
resource (dict): Object to update.
timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation
in OneView; it just stops waiting for its completion.
Returns:
dict: Updated alert.
"""
uri = resource.pop('uri', None)
if not uri:
if not id_or_uri:
raise ValueError("URI was not provided")
uri = self._client.build_uri(id_or_uri)
return self._client.update(resource=resource, uri=uri, timeout=timeout) | Updates the specified alert resource.
Args:
resource (dict): Object to update.
timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation
in OneView; it just stops waiting for its completion.
Returns:
dict: Updated alert. | Below is the instruction that describes the task:
### Input:
Updates the specified alert resource.
Args:
resource (dict): Object to update.
timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation
in OneView; it just stops waiting for its completion.
Returns:
dict: Updated alert.
### Response:
def update(self, resource, id_or_uri=None, timeout=-1):
"""
Updates the specified alert resource.
Args:
resource (dict): Object to update.
timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation
in OneView; it just stops waiting for its completion.
Returns:
dict: Updated alert.
"""
uri = resource.pop('uri', None)
if not uri:
if not id_or_uri:
raise ValueError("URI was not provided")
uri = self._client.build_uri(id_or_uri)
return self._client.update(resource=resource, uri=uri, timeout=timeout) |
def abspath(path):
"""Return the absolute path to a file and canonicalize it
Path is returned without a trailing slash and without redundant slashes.
Caches the user's home directory.
:param path: A string for the path. This should not have any wildcards.
:returns: Absolute path to the file
:raises IOError: If unsuccessful
"""
global _USER_HOME_DIR
# FIXME(brandyn): User's home directory must exist
# FIXME(brandyn): Requires something to be in home dir
if path[0] == '/':
return os.path.abspath(path)
if _USER_HOME_DIR is None:
try:
_USER_HOME_DIR = _get_home_dir()
except IOError as e:
if not exists('.'):
raise IOError("Home directory doesn't exist")
raise e
return os.path.abspath(os.path.join(_USER_HOME_DIR, path)) | Return the absolute path to a file and canonicalize it
Path is returned without a trailing slash and without redundant slashes.
Caches the user's home directory.
:param path: A string for the path. This should not have any wildcards.
:returns: Absolute path to the file
:raises IOError: If unsuccessful | Below is the instruction that describes the task:
### Input:
Return the absolute path to a file and canonicalize it
Path is returned without a trailing slash and without redundant slashes.
Caches the user's home directory.
:param path: A string for the path. This should not have any wildcards.
:returns: Absolute path to the file
:raises IOError: If unsuccessful
### Response:
def abspath(path):
"""Return the absolute path to a file and canonicalize it
Path is returned without a trailing slash and without redundant slashes.
Caches the user's home directory.
:param path: A string for the path. This should not have any wildcards.
:returns: Absolute path to the file
:raises IOError: If unsuccessful
"""
global _USER_HOME_DIR
# FIXME(brandyn): User's home directory must exist
# FIXME(brandyn): Requires something to be in home dir
if path[0] == '/':
return os.path.abspath(path)
if _USER_HOME_DIR is None:
try:
_USER_HOME_DIR = _get_home_dir()
except IOError as e:
if not exists('.'):
raise IOError("Home directory doesn't exist")
raise e
return os.path.abspath(os.path.join(_USER_HOME_DIR, path)) |
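The behaviour reduces to: absolute paths pass straight through os.path.abspath, while relative paths are joined onto the cached home directory. A toy illustration with a made-up home directory follows (the real one comes from _get_home_dir() and the paths are POSIX-style).
# Toy resolver; "/user/someone" is an invented stand-in for the cached home dir.
import os

def resolve(path, home_dir="/user/someone"):
    if path[0] == '/':
        return os.path.abspath(path)
    return os.path.abspath(os.path.join(home_dir, path))

print(resolve("/tmp//data/"))   # /tmp/data
print(resolve("data/part-0"))   # /user/someone/data/part-0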
def _get_site_scaling(self, C, pga_rock, sites, period, rjb):
"""
Returns the site-scaling term (equation 5), broken down into a
linear scaling, a nonlinear scaling and a basin scaling term
"""
flin = self._get_linear_site_term(C, sites.vs30)
fnl = self._get_nonlinear_site_term(C, sites.vs30, pga_rock)
fbd = self._get_basin_depth_term(C, sites, period)
return flin + fnl + fbd | Returns the site-scaling term (equation 5), broken down into a
linear scaling, a nonlinear scaling and a basin scaling term | Below is the instruction that describes the task:
### Input:
Returns the site-scaling term (equation 5), broken down into a
linear scaling, a nonlinear scaling and a basin scaling term
### Response:
def _get_site_scaling(self, C, pga_rock, sites, period, rjb):
"""
Returns the site-scaling term (equation 5), broken down into a
linear scaling, a nonlinear scaling and a basin scaling term
"""
flin = self._get_linear_site_term(C, sites.vs30)
fnl = self._get_nonlinear_site_term(C, sites.vs30, pga_rock)
fbd = self._get_basin_depth_term(C, sites, period)
return flin + fnl + fbd |
async def async_input(prompt):
"""
Python's ``input()`` is blocking, which means the event loop we set
above can't be running while we're blocking there. This method will
let the loop run while we wait for input.
"""
print(prompt, end='', flush=True)
return (await loop.run_in_executor(None, sys.stdin.readline)).rstrip() | Python's ``input()`` is blocking, which means the event loop we set
above can't be running while we're blocking there. This method will
let the loop run while we wait for input. | Below is the instruction that describes the task:
### Input:
Python's ``input()`` is blocking, which means the event loop we set
above can't be running while we're blocking there. This method will
let the loop run while we wait for input.
### Response:
async def async_input(prompt):
"""
Python's ``input()`` is blocking, which means the event loop we set
above can't be running while we're blocking there. This method will
let the loop run while we wait for input.
"""
print(prompt, end='', flush=True)
return (await loop.run_in_executor(None, sys.stdin.readline)).rstrip() |
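A minimal driver for the coroutine above; the module-level loop it references is created here with asyncio.new_event_loop(), which is an assumption about how the original module sets it up.
# Hypothetical driver; async_input reads sys.stdin via the loop's executor.
import asyncio
import sys

loop = asyncio.new_event_loop()

async def main():
    name = await async_input("name> ")
    print("hello,", name)

loop.run_until_complete(main())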
def get_target(self, row, col, row_count, col_count):
"""
Moves cursor to the specified position and returns it.
"""
# This method is called for almost any operation so it should
# be maximally optimized.
#
# Any comparison here is negligible compared to UNO call. So we do all
# possible checks which can prevent an unnecessary cursor movement.
#
# Generally we need to expand or collapse selection to the desired
# size and move it to the desired position. But both of these actions
# can fail if there is not enough space. For this reason we must
# determine which of the actions has to be done first. In some cases
# we must even move the cursor twice (cursor movement is faster than
# selection change).
#
target = self._target
# If we cannot resize selection now then we must move cursor first.
if self.row + row_count > self.max_row_count or self.col + col_count > self.max_col_count:
# Move cursor to the desired position if possible.
row_delta = row - self.row if row + self.row_count <= self.max_row_count else 0
col_delta = col - self.col if col + self.col_count <= self.max_col_count else 0
target.gotoOffset(col_delta, row_delta)
self.row += row_delta
self.col += col_delta
# Resize selection
if (row_count, col_count) != (self.row_count, self.col_count):
target.collapseToSize(col_count, row_count)
self.row_count = row_count
self.col_count = col_count
# Move cursor to the desired position
if (row, col) != (self.row, self.col):
target.gotoOffset(col - self.col, row - self.row)
self.row = row
self.col = col
return target | Moves cursor to the specified position and returns in. | Below is the the instruction that describes the task:
### Input:
Moves cursor to the specified position and returns in.
### Response:
def get_target(self, row, col, row_count, col_count):
"""
Moves cursor to the specified position and returns it.
"""
# This method is called for almost any operation so it should
# be maximally optimized.
#
# Any comparison here is negligible compared to UNO call. So we do all
# possible checks which can prevent an unnecessary cursor movement.
#
# Generally we need to expand or collapse selection to the desired
# size and move it to the desired position. But both of these actions
# can fail if there is not enough space. For this reason we must
# determine which of the actions has to be done first. In some cases
# we must even move the cursor twice (cursor movement is faster than
# selection change).
#
target = self._target
# If we cannot resize selection now then we must move cursor first.
if self.row + row_count > self.max_row_count or self.col + col_count > self.max_col_count:
# Move cursor to the desired position if possible.
row_delta = row - self.row if row + self.row_count <= self.max_row_count else 0
col_delta = col - self.col if col + self.col_count <= self.max_col_count else 0
target.gotoOffset(col_delta, row_delta)
self.row += row_delta
self.col += col_delta
# Resize selection
if (row_count, col_count) != (self.row_count, self.col_count):
target.collapseToSize(col_count, row_count)
self.row_count = row_count
self.col_count = col_count
# Move cursor to the desired position
if (row, col) != (self.row, self.col):
target.gotoOffset(col - self.col, row - self.row)
self.row = row
self.col = col
return target |
def groups(self) -> typing.Iterator['Group']:
"""
Returns: generator of all groups in this country
"""
for group_category in Mission.valid_group_categories:
if group_category in self._section_this_country.keys():
for group_index in self._section_this_country[group_category]['group']:
if group_index not in self.__groups[group_category]:
self.__groups[group_category][group_index] = Group(self.d, self.l10n, self.coa_color,
self.country_index, group_category,
group_index)
yield self.__groups[group_category][group_index] | Returns: generator of all groups in this country | Below is the the instruction that describes the task:
### Input:
Returns: generator of all groups in this country
### Response:
def groups(self) -> typing.Iterator['Group']:
"""
Returns: generator of all groups in this country
"""
for group_category in Mission.valid_group_categories:
if group_category in self._section_this_country.keys():
for group_index in self._section_this_country[group_category]['group']:
if group_index not in self.__groups[group_category]:
self.__groups[group_category][group_index] = Group(self.d, self.l10n, self.coa_color,
self.country_index, group_category,
group_index)
yield self.__groups[group_category][group_index] |
def save(self):
""" Creates a new user and account. Returns the newly created user. """
username, email, password, first_name, last_name = (self.cleaned_data['username'],
self.cleaned_data['email'],
self.cleaned_data['password1'],
self.cleaned_data['first_name'],
self.cleaned_data['last_name'],)
new_user = get_user_model()(username=username,
email=email,
first_name=first_name,
last_name=last_name)
new_user.set_password(password)
new_user.save()
return new_user | Creates a new user and account. Returns the newly created user. | Below is the the instruction that describes the task:
### Input:
Creates a new user and account. Returns the newly created user.
### Response:
def save(self):
""" Creates a new user and account. Returns the newly created user. """
username, email, password, first_name, last_name = (self.cleaned_data['username'],
self.cleaned_data['email'],
self.cleaned_data['password1'],
self.cleaned_data['first_name'],
self.cleaned_data['last_name'],)
new_user = get_user_model()(username=username,
email=email,
first_name=first_name,
last_name=last_name)
new_user.set_password(password)
new_user.save()
return new_user |
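A usage sketch for the save() method above, assuming it lives on a hypothetical Django form class named SignupForm that defines the username, email, password1, first_name and last_name fields referenced in cleaned_data; the view name, template and redirect target are illustrative only.
from django.http import HttpResponseRedirect
from django.shortcuts import render

def signup(request):
    # Bind the submitted data, validate it, then let save() create the user.
    form = SignupForm(request.POST or None)  # SignupForm is assumed to define save() as above
    if request.method == 'POST' and form.is_valid():
        new_user = form.save()  # returns the newly created user instance
        return HttpResponseRedirect('/accounts/welcome/')
    return render(request, 'signup.html', {'form': form})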
def read_last_checkpoint(self):
"""Read the last checkpoint from the oplog progress dictionary.
"""
# In versions of mongo-connector 2.3 and before,
# we used the repr of the
# oplog collection as keys in the oplog_progress dictionary.
# In versions thereafter, we use the replica set name. For backwards
# compatibility, we check for both.
oplog_str = str(self.oplog)
ret_val = None
with self.oplog_progress as oplog_prog:
oplog_dict = oplog_prog.get_dict()
try:
# New format.
ret_val = oplog_dict[self.replset_name]
except KeyError:
try:
# Old format.
ret_val = oplog_dict[oplog_str]
except KeyError:
pass
LOG.debug("OplogThread: reading last checkpoint as %s " % str(ret_val))
self.checkpoint = ret_val
return ret_val | Read the last checkpoint from the oplog progress dictionary. | Below is the the instruction that describes the task:
### Input:
Read the last checkpoint from the oplog progress dictionary.
### Response:
def read_last_checkpoint(self):
"""Read the last checkpoint from the oplog progress dictionary.
"""
# In versions of mongo-connector 2.3 and before,
# we used the repr of the
# oplog collection as keys in the oplog_progress dictionary.
# In versions thereafter, we use the replica set name. For backwards
# compatibility, we check for both.
oplog_str = str(self.oplog)
ret_val = None
with self.oplog_progress as oplog_prog:
oplog_dict = oplog_prog.get_dict()
try:
# New format.
ret_val = oplog_dict[self.replset_name]
except KeyError:
try:
# Old format.
ret_val = oplog_dict[oplog_str]
except KeyError:
pass
LOG.debug("OplogThread: reading last checkpoint as %s " % str(ret_val))
self.checkpoint = ret_val
return ret_val |
def paracrawl_v3_pairs(paracrawl_file):
"""Generates raw (English, other) pairs from a ParaCrawl V3.0 data file.
Args:
paracrawl_file: A ParaCrawl V3.0 en-.. data file.
Yields:
Pairs of (sentence_en, sentence_xx), as Unicode strings.
Raises:
StopIteration: If the file ends while this method is in the middle of
creating a translation pair.
"""
raw_sentences = _raw_sentences(paracrawl_file)
for s_en in raw_sentences:
try:
s_xx = next(raw_sentences)
if s_en and s_xx: # Prevent empty string examples.
yield s_en, s_xx
except StopIteration:
tf.logging.error(
'Unmatched final sentence while reading in sentence pairs: [%s]',
s_en) | Generates raw (English, other) pairs from a ParaCrawl V3.0 data file.
Args:
paracrawl_file: A ParaCrawl V3.0 en-.. data file.
Yields:
Pairs of (sentence_en, sentence_xx), as Unicode strings.
Raises:
StopIteration: If the file ends while this method is in the middle of
creating a translation pair. | Below is the the instruction that describes the task:
### Input:
Generates raw (English, other) pairs from a ParaCrawl V3.0 data file.
Args:
paracrawl_file: A ParaCrawl V3.0 en-.. data file.
Yields:
Pairs of (sentence_en, sentence_xx), as Unicode strings.
Raises:
StopIteration: If the file ends while this method is in the middle of
creating a translation pair.
### Response:
def paracrawl_v3_pairs(paracrawl_file):
"""Generates raw (English, other) pairs from a ParaCrawl V3.0 data file.
Args:
paracrawl_file: A ParaCrawl V3.0 en-.. data file.
Yields:
Pairs of (sentence_en, sentence_xx), as Unicode strings.
Raises:
StopIteration: If the file ends while this method is in the middle of
creating a translation pair.
"""
raw_sentences = _raw_sentences(paracrawl_file)
for s_en in raw_sentences:
try:
s_xx = next(raw_sentences)
if s_en and s_xx: # Prevent empty string examples.
yield s_en, s_xx
except StopIteration:
tf.logging.error(
'Unmatched final sentence while reading in sentence pairs: [%s]',
s_en) |
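A usage sketch for paracrawl_v3_pairs(), assuming the function above is importable and a ParaCrawl V3.0 en-xx text file is available locally; the file names are placeholders.
import csv

with open('paracrawl-release3.en-de.txt', encoding='utf-8') as src, \
        open('pairs.tsv', 'w', encoding='utf-8', newline='') as dst:
    writer = csv.writer(dst, delimiter='\t')
    for sentence_en, sentence_xx in paracrawl_v3_pairs(src):
        # Each yielded pair is (English sentence, other-language sentence).
        writer.writerow([sentence_en, sentence_xx])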
def always_executed_hook(self):
"""Run for each command."""
_logT = self._devProxy.get_logging_target()
if 'device::sip_sdp_logger' not in _logT:
try:
self._devProxy.add_logging_target('device::sip_sdp/elt/logger')
self.info_stream("Test of Tango logging from "
"'tc_tango_master'")
except Exception as e:
LOG.debug('Failed to setup Tango logging %s', e ) | Run for each command. | Below is the the instruction that describes the task:
### Input:
Run for each command.
### Response:
def always_executed_hook(self):
"""Run for each command."""
_logT = self._devProxy.get_logging_target()
if 'device::sip_sdp_logger' not in _logT:
try:
self._devProxy.add_logging_target('device::sip_sdp/elt/logger')
self.info_stream("Test of Tango logging from "
"'tc_tango_master'")
except Exception as e:
LOG.debug('Failed to setup Tango logging %s', e ) |
def get(self, label_sn):
"""
Get tags by a label's sn key
:param label_sn: A corresponding label's ``sn`` key.
:type label_sn: str or int
:return: A list of matching tags. An empty list is returned if there are
not any matches
:rtype: list of dict
:raises: This will raise a
:class:`ServerException<logentries_api.exceptions.ServerException>`
if there is an error from Logentries
"""
tags = self.list()
return [
tag
for tag
in tags
if str(label_sn) in tag.get('args', {}).values()
] | Get tags by a label's sn key
:param label_sn: A corresponding label's ``sn`` key.
:type label_sn: str or int
:return: A list of matching tags. An empty list is returned if there are
not any matches
:rtype: list of dict
:raises: This will raise a
:class:`ServerException<logentries_api.exceptions.ServerException>`
if there is an error from Logentries | Below is the the instruction that describes the task:
### Input:
Get tags by a label's sn key
:param label_sn: A corresponding label's ``sn`` key.
:type label_sn: str or int
:return: A list of matching tags. An empty list is returned if there are
not any matches
:rtype: list of dict
:raises: This will raise a
:class:`ServerException<logentries_api.exceptions.ServerException>`
if there is an error from Logentries
### Response:
def get(self, label_sn):
"""
Get tags by a label's sn key
:param label_sn: A corresponding label's ``sn`` key.
:type label_sn: str or int
:return: A list of matching tags. An empty list is returned if there are
not any matches
:rtype: list of dict
:raises: This will raise a
:class:`ServerException<logentries_api.exceptions.ServerException>`
if there is an error from Logentries
"""
tags = self.list()
return [
tag
for tag
in tags
if str(label_sn) in tag.get('args', {}).values()
] |
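To make the label-sn filter concrete, here is a self-contained sketch of the same predicate applied to made-up tag dictionaries; the sample data is invented and does not come from the Logentries API.
sample_tags = [
    {'id': 'tag-1', 'args': {'sn': '1001'}},
    {'id': 'tag-2', 'args': {'sn': '2002'}},
]
label_sn = 1001

# Same test as in get(): keep tags whose 'args' values contain the label's sn as a string.
matching = [tag for tag in sample_tags if str(label_sn) in tag.get('args', {}).values()]
print(matching)  # -> [{'id': 'tag-1', 'args': {'sn': '1001'}}]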
def configure(self, subscription_id, tenant_id, client_id="", client_secret="", environment='AzurePublicCloud',
mount_point=DEFAULT_MOUNT_POINT):
"""Configure the credentials required for the plugin to perform API calls to Azure.
These credentials will be used to query roles and create/delete service principals. Environment variables will
override any parameters set in the config.
Supported methods:
POST: /{mount_point}/config. Produces: 204 (empty body)
:param subscription_id: The subscription id for the Azure Active Directory
:type subscription_id: str | unicode
:param tenant_id: The tenant id for the Azure Active Directory.
:type tenant_id: str | unicode
:param client_id: The OAuth2 client id to connect to Azure.
:type client_id: str | unicode
:param client_secret: The OAuth2 client secret to connect to Azure.
:type client_secret: str | unicode
:param environment: The Azure environment. If not specified, Vault will use Azure Public Cloud.
:type environment: str | unicode
    :param mount_point: The "path" the Azure secrets engine was mounted on.
:type mount_point: str | unicode
:return: The response of the request.
:rtype: requests.Response
"""
if environment not in VALID_ENVIRONMENTS:
error_msg = 'invalid environment argument provided "{arg}", supported environments: "{environments}"'
raise exceptions.ParamValidationError(error_msg.format(
arg=environment,
environments=','.join(VALID_ENVIRONMENTS),
))
params = {
'subscription_id': subscription_id,
'tenant_id': tenant_id,
'client_id': client_id,
'client_secret': client_secret,
'environment': environment,
}
api_path = '/v1/{mount_point}/config'.format(mount_point=mount_point)
return self._adapter.post(
url=api_path,
json=params,
) | Configure the credentials required for the plugin to perform API calls to Azure.
These credentials will be used to query roles and create/delete service principals. Environment variables will
override any parameters set in the config.
Supported methods:
POST: /{mount_point}/config. Produces: 204 (empty body)
:param subscription_id: The subscription id for the Azure Active Directory
:type subscription_id: str | unicode
:param tenant_id: The tenant id for the Azure Active Directory.
:type tenant_id: str | unicode
:param client_id: The OAuth2 client id to connect to Azure.
:type client_id: str | unicode
:param client_secret: The OAuth2 client secret to connect to Azure.
:type client_secret: str | unicode
:param environment: The Azure environment. If not specified, Vault will use Azure Public Cloud.
:type environment: str | unicode
    :param mount_point: The "path" the Azure secrets engine was mounted on.
:type mount_point: str | unicode
:return: The response of the request.
:rtype: requests.Response | Below is the the instruction that describes the task:
### Input:
Configure the credentials required for the plugin to perform API calls to Azure.
These credentials will be used to query roles and create/delete service principals. Environment variables will
override any parameters set in the config.
Supported methods:
POST: /{mount_point}/config. Produces: 204 (empty body)
:param subscription_id: The subscription id for the Azure Active Directory
:type subscription_id: str | unicode
:param tenant_id: The tenant id for the Azure Active Directory.
:type tenant_id: str | unicode
:param client_id: The OAuth2 client id to connect to Azure.
:type client_id: str | unicode
:param client_secret: The OAuth2 client secret to connect to Azure.
:type client_secret: str | unicode
:param environment: The Azure environment. If not specified, Vault will use Azure Public Cloud.
:type environment: str | unicode
    :param mount_point: The "path" the Azure secrets engine was mounted on.
:type mount_point: str | unicode
:return: The response of the request.
:rtype: requests.Response
### Response:
def configure(self, subscription_id, tenant_id, client_id="", client_secret="", environment='AzurePublicCloud',
mount_point=DEFAULT_MOUNT_POINT):
"""Configure the credentials required for the plugin to perform API calls to Azure.
These credentials will be used to query roles and create/delete service principals. Environment variables will
override any parameters set in the config.
Supported methods:
POST: /{mount_point}/config. Produces: 204 (empty body)
:param subscription_id: The subscription id for the Azure Active Directory
:type subscription_id: str | unicode
:param tenant_id: The tenant id for the Azure Active Directory.
:type tenant_id: str | unicode
:param client_id: The OAuth2 client id to connect to Azure.
:type client_id: str | unicode
:param client_secret: The OAuth2 client secret to connect to Azure.
:type client_secret: str | unicode
:param environment: The Azure environment. If not specified, Vault will use Azure Public Cloud.
:type environment: str | unicode
    :param mount_point: The "path" the Azure secrets engine was mounted on.
:type mount_point: str | unicode
:return: The response of the request.
:rtype: requests.Response
"""
if environment not in VALID_ENVIRONMENTS:
error_msg = 'invalid environment argument provided "{arg}", supported environments: "{environments}"'
raise exceptions.ParamValidationError(error_msg.format(
arg=environment,
environments=','.join(VALID_ENVIRONMENTS),
))
params = {
'subscription_id': subscription_id,
'tenant_id': tenant_id,
'client_id': client_id,
'client_secret': client_secret,
'environment': environment,
}
api_path = '/v1/{mount_point}/config'.format(mount_point=mount_point)
return self._adapter.post(
url=api_path,
json=params,
) |
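A usage sketch assuming this configure() method is exposed through an hvac client as client.secrets.azure (as in recent hvac releases); the Vault address, token and Azure identifiers below are placeholders.
import hvac

client = hvac.Client(url='https://vault.example.com:8200', token='s.placeholder-token')

# Writes the Azure credentials to the secrets engine's config endpoint.
client.secrets.azure.configure(
    subscription_id='11111111-2222-3333-4444-555555555555',
    tenant_id='66666666-7777-8888-9999-000000000000',
    client_id='my-client-id',
    client_secret='my-client-secret',
    environment='AzurePublicCloud',
)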
def plot(self, workflow=None, view=True, depth=-1, name=NONE, comment=NONE,
format=NONE, engine=NONE, encoding=NONE, graph_attr=NONE,
node_attr=NONE, edge_attr=NONE, body=NONE, node_styles=NONE,
node_data=NONE, node_function=NONE, edge_data=NONE, max_lines=NONE,
max_width=NONE, directory=None, sites=None, index=False):
"""
Plots the Dispatcher with a graph in the DOT language with Graphviz.
:param workflow:
If True the latest solution will be plotted, otherwise the dmap.
:type workflow: bool, optional
:param view:
Open the rendered directed graph in the DOT language with the sys
default opener.
:type view: bool, optional
:param edge_data:
Edge attributes to view.
:type edge_data: tuple[str], optional
:param node_data:
Data node attributes to view.
:type node_data: tuple[str], optional
:param node_function:
Function node attributes to view.
:type node_function: tuple[str], optional
:param node_styles:
Default node styles according to graphviz node attributes.
:type node_styles: dict[str|Token, dict[str, str]]
:param depth:
Depth of sub-dispatch plots. If negative all levels are plotted.
:type depth: int, optional
:param name:
Graph name used in the source code.
:type name: str
:param comment:
Comment added to the first line of the source.
:type comment: str
:param directory:
(Sub)directory for source saving and rendering.
:type directory: str, optional
:param format:
Rendering output format ('pdf', 'png', ...).
:type format: str, optional
:param engine:
Layout command used ('dot', 'neato', ...).
:type engine: str, optional
:param encoding:
Encoding for saving the source.
:type encoding: str, optional
:param graph_attr:
Dict of (attribute, value) pairs for the graph.
:type graph_attr: dict, optional
:param node_attr:
Dict of (attribute, value) pairs set for all nodes.
:type node_attr: dict, optional
:param edge_attr:
Dict of (attribute, value) pairs set for all edges.
:type edge_attr: dict, optional
:param body:
Dict of (attribute, value) pairs to add to the graph body.
:type body: dict, optional
:param directory:
Where is the generated Flask app root located?
:type directory: str, optional
:param sites:
A set of :class:`~schedula.utils.drw.Site` to maintain alive the
backend server.
:type sites: set[~schedula.utils.drw.Site], optional
:param index:
Add the site index as first page?
:type index: bool, optional
:param max_lines:
Maximum number of lines for rendering node attributes.
:type max_lines: int, optional
:param max_width:
Maximum number of characters in a line to render node attributes.
:type max_width: int, optional
:param view:
Open the main page of the site?
:type view: bool, optional
:return:
A SiteMap.
:rtype: schedula.utils.drw.SiteMap
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
:code:
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> def fun(a):
... return a + 1, a - 1
>>> dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dsp.plot(view=False, graph_attr={'ratio': '1'})
SiteMap([(Dispatcher, SiteMap())])
"""
d = {
'name': name, 'comment': comment, 'format': format,
'engine': engine, 'encoding': encoding, 'graph_attr': graph_attr,
'node_attr': node_attr, 'edge_attr': edge_attr, 'body': body,
}
options = {
'digraph': {k: v for k, v in d.items() if v is not NONE} or NONE,
'node_styles': node_styles,
'node_data': node_data,
'node_function': node_function,
'edge_data': edge_data,
'max_lines': max_lines, # 5
'max_width': max_width, # 200
}
options = {k: v for k, v in options.items() if v is not NONE}
from .drw import SiteMap
from .sol import Solution
if workflow is None and isinstance(self, Solution):
workflow = True
else:
workflow = workflow or False
sitemap = SiteMap()
sitemap.add_items(self, workflow=workflow, depth=depth, **options)
if view:
import tempfile
directory = directory or tempfile.mkdtemp()
if sites is None:
sitemap.render(directory=directory, view=True, index=index)
else:
sites.add(sitemap.site(directory, view=True, index=index))
return sitemap | Plots the Dispatcher with a graph in the DOT language with Graphviz.
:param workflow:
If True the latest solution will be plotted, otherwise the dmap.
:type workflow: bool, optional
:param view:
Open the rendered directed graph in the DOT language with the sys
default opener.
:type view: bool, optional
:param edge_data:
Edge attributes to view.
:type edge_data: tuple[str], optional
:param node_data:
Data node attributes to view.
:type node_data: tuple[str], optional
:param node_function:
Function node attributes to view.
:type node_function: tuple[str], optional
:param node_styles:
Default node styles according to graphviz node attributes.
:type node_styles: dict[str|Token, dict[str, str]]
:param depth:
Depth of sub-dispatch plots. If negative all levels are plotted.
:type depth: int, optional
:param name:
Graph name used in the source code.
:type name: str
:param comment:
Comment added to the first line of the source.
:type comment: str
:param directory:
(Sub)directory for source saving and rendering.
:type directory: str, optional
:param format:
Rendering output format ('pdf', 'png', ...).
:type format: str, optional
:param engine:
Layout command used ('dot', 'neato', ...).
:type engine: str, optional
:param encoding:
Encoding for saving the source.
:type encoding: str, optional
:param graph_attr:
Dict of (attribute, value) pairs for the graph.
:type graph_attr: dict, optional
:param node_attr:
Dict of (attribute, value) pairs set for all nodes.
:type node_attr: dict, optional
:param edge_attr:
Dict of (attribute, value) pairs set for all edges.
:type edge_attr: dict, optional
:param body:
Dict of (attribute, value) pairs to add to the graph body.
:type body: dict, optional
:param directory:
Where is the generated Flask app root located?
:type directory: str, optional
:param sites:
A set of :class:`~schedula.utils.drw.Site` to maintain alive the
backend server.
:type sites: set[~schedula.utils.drw.Site], optional
:param index:
Add the site index as first page?
:type index: bool, optional
:param max_lines:
Maximum number of lines for rendering node attributes.
:type max_lines: int, optional
:param max_width:
Maximum number of characters in a line to render node attributes.
:type max_width: int, optional
:param view:
Open the main page of the site?
:type view: bool, optional
:return:
A SiteMap.
:rtype: schedula.utils.drw.SiteMap
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
:code:
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> def fun(a):
... return a + 1, a - 1
>>> dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dsp.plot(view=False, graph_attr={'ratio': '1'})
SiteMap([(Dispatcher, SiteMap())]) | Below is the the instruction that describes the task:
### Input:
Plots the Dispatcher with a graph in the DOT language with Graphviz.
:param workflow:
If True the latest solution will be plotted, otherwise the dmap.
:type workflow: bool, optional
:param view:
Open the rendered directed graph in the DOT language with the sys
default opener.
:type view: bool, optional
:param edge_data:
Edge attributes to view.
:type edge_data: tuple[str], optional
:param node_data:
Data node attributes to view.
:type node_data: tuple[str], optional
:param node_function:
Function node attributes to view.
:type node_function: tuple[str], optional
:param node_styles:
Default node styles according to graphviz node attributes.
:type node_styles: dict[str|Token, dict[str, str]]
:param depth:
Depth of sub-dispatch plots. If negative all levels are plotted.
:type depth: int, optional
:param name:
Graph name used in the source code.
:type name: str
:param comment:
Comment added to the first line of the source.
:type comment: str
:param directory:
(Sub)directory for source saving and rendering.
:type directory: str, optional
:param format:
Rendering output format ('pdf', 'png', ...).
:type format: str, optional
:param engine:
Layout command used ('dot', 'neato', ...).
:type engine: str, optional
:param encoding:
Encoding for saving the source.
:type encoding: str, optional
:param graph_attr:
Dict of (attribute, value) pairs for the graph.
:type graph_attr: dict, optional
:param node_attr:
Dict of (attribute, value) pairs set for all nodes.
:type node_attr: dict, optional
:param edge_attr:
Dict of (attribute, value) pairs set for all edges.
:type edge_attr: dict, optional
:param body:
Dict of (attribute, value) pairs to add to the graph body.
:type body: dict, optional
:param directory:
Where is the generated Flask app root located?
:type directory: str, optional
:param sites:
A set of :class:`~schedula.utils.drw.Site` to maintain alive the
backend server.
:type sites: set[~schedula.utils.drw.Site], optional
:param index:
Add the site index as first page?
:type index: bool, optional
:param max_lines:
Maximum number of lines for rendering node attributes.
:type max_lines: int, optional
:param max_width:
Maximum number of characters in a line to render node attributes.
:type max_width: int, optional
:param view:
Open the main page of the site?
:type view: bool, optional
:return:
A SiteMap.
:rtype: schedula.utils.drw.SiteMap
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
:code:
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> def fun(a):
... return a + 1, a - 1
>>> dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dsp.plot(view=False, graph_attr={'ratio': '1'})
SiteMap([(Dispatcher, SiteMap())])
### Response:
def plot(self, workflow=None, view=True, depth=-1, name=NONE, comment=NONE,
format=NONE, engine=NONE, encoding=NONE, graph_attr=NONE,
node_attr=NONE, edge_attr=NONE, body=NONE, node_styles=NONE,
node_data=NONE, node_function=NONE, edge_data=NONE, max_lines=NONE,
max_width=NONE, directory=None, sites=None, index=False):
"""
Plots the Dispatcher with a graph in the DOT language with Graphviz.
:param workflow:
If True the latest solution will be plotted, otherwise the dmap.
:type workflow: bool, optional
:param view:
Open the rendered directed graph in the DOT language with the sys
default opener.
:type view: bool, optional
:param edge_data:
Edge attributes to view.
:type edge_data: tuple[str], optional
:param node_data:
Data node attributes to view.
:type node_data: tuple[str], optional
:param node_function:
Function node attributes to view.
:type node_function: tuple[str], optional
:param node_styles:
Default node styles according to graphviz node attributes.
:type node_styles: dict[str|Token, dict[str, str]]
:param depth:
Depth of sub-dispatch plots. If negative all levels are plotted.
:type depth: int, optional
:param name:
Graph name used in the source code.
:type name: str
:param comment:
Comment added to the first line of the source.
:type comment: str
:param directory:
(Sub)directory for source saving and rendering.
:type directory: str, optional
:param format:
Rendering output format ('pdf', 'png', ...).
:type format: str, optional
:param engine:
Layout command used ('dot', 'neato', ...).
:type engine: str, optional
:param encoding:
Encoding for saving the source.
:type encoding: str, optional
:param graph_attr:
Dict of (attribute, value) pairs for the graph.
:type graph_attr: dict, optional
:param node_attr:
Dict of (attribute, value) pairs set for all nodes.
:type node_attr: dict, optional
:param edge_attr:
Dict of (attribute, value) pairs set for all edges.
:type edge_attr: dict, optional
:param body:
Dict of (attribute, value) pairs to add to the graph body.
:type body: dict, optional
:param directory:
Where is the generated Flask app root located?
:type directory: str, optional
:param sites:
A set of :class:`~schedula.utils.drw.Site` to maintain alive the
backend server.
:type sites: set[~schedula.utils.drw.Site], optional
:param index:
Add the site index as first page?
:type index: bool, optional
:param max_lines:
Maximum number of lines for rendering node attributes.
:type max_lines: int, optional
:param max_width:
Maximum number of characters in a line to render node attributes.
:type max_width: int, optional
:param view:
Open the main page of the site?
:type view: bool, optional
:return:
A SiteMap.
:rtype: schedula.utils.drw.SiteMap
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
:code:
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> def fun(a):
... return a + 1, a - 1
>>> dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dsp.plot(view=False, graph_attr={'ratio': '1'})
SiteMap([(Dispatcher, SiteMap())])
"""
d = {
'name': name, 'comment': comment, 'format': format,
'engine': engine, 'encoding': encoding, 'graph_attr': graph_attr,
'node_attr': node_attr, 'edge_attr': edge_attr, 'body': body,
}
options = {
'digraph': {k: v for k, v in d.items() if v is not NONE} or NONE,
'node_styles': node_styles,
'node_data': node_data,
'node_function': node_function,
'edge_data': edge_data,
'max_lines': max_lines, # 5
'max_width': max_width, # 200
}
options = {k: v for k, v in options.items() if v is not NONE}
from .drw import SiteMap
from .sol import Solution
if workflow is None and isinstance(self, Solution):
workflow = True
else:
workflow = workflow or False
sitemap = SiteMap()
sitemap.add_items(self, workflow=workflow, depth=depth, **options)
if view:
import tempfile
directory = directory or tempfile.mkdtemp()
if sites is None:
sitemap.render(directory=directory, view=True, index=index)
else:
sites.add(sitemap.site(directory, view=True, index=index))
return sitemap |
def _doIdRes(self, message, endpoint, return_to):
"""Handle id_res responses that are not cancellations of
immediate mode requests.
    @param message: the response parameters.
@param endpoint: the discovered endpoint object. May be None.
@raises ProtocolError: If the message contents are not
well-formed according to the OpenID specification. This
includes missing fields or not signing fields that should
be signed.
@raises DiscoveryFailure: If the subject of the id_res message
does not match the supplied endpoint, and discovery on the
identifier in the message fails (this should only happen
when using OpenID 2)
@returntype: L{Response}
"""
# Checks for presence of appropriate fields (and checks
# signed list fields)
self._idResCheckForFields(message)
if not self._checkReturnTo(message, return_to):
raise ProtocolError(
"return_to does not match return URL. Expected %r, got %r"
% (return_to, message.getArg(OPENID_NS, 'return_to')))
# Verify discovery information:
endpoint = self._verifyDiscoveryResults(message, endpoint)
logging.info("Received id_res response from %s using association %s" %
(endpoint.server_url,
message.getArg(OPENID_NS, 'assoc_handle')))
self._idResCheckSignature(message, endpoint.server_url)
# Will raise a ProtocolError if the nonce is bad
self._idResCheckNonce(message, endpoint)
signed_list_str = message.getArg(OPENID_NS, 'signed', no_default)
signed_list = signed_list_str.split(',')
signed_fields = ["openid." + s for s in signed_list]
return SuccessResponse(endpoint, message, signed_fields) | Handle id_res responses that are not cancellations of
immediate mode requests.
    @param message: the response parameters.
@param endpoint: the discovered endpoint object. May be None.
@raises ProtocolError: If the message contents are not
well-formed according to the OpenID specification. This
includes missing fields or not signing fields that should
be signed.
@raises DiscoveryFailure: If the subject of the id_res message
does not match the supplied endpoint, and discovery on the
identifier in the message fails (this should only happen
when using OpenID 2)
@returntype: L{Response} | Below is the the instruction that describes the task:
### Input:
Handle id_res responses that are not cancellations of
immediate mode requests.
    @param message: the response parameters.
@param endpoint: the discovered endpoint object. May be None.
@raises ProtocolError: If the message contents are not
well-formed according to the OpenID specification. This
includes missing fields or not signing fields that should
be signed.
@raises DiscoveryFailure: If the subject of the id_res message
does not match the supplied endpoint, and discovery on the
identifier in the message fails (this should only happen
when using OpenID 2)
@returntype: L{Response}
### Response:
def _doIdRes(self, message, endpoint, return_to):
"""Handle id_res responses that are not cancellations of
immediate mode requests.
    @param message: the response parameters.
@param endpoint: the discovered endpoint object. May be None.
@raises ProtocolError: If the message contents are not
well-formed according to the OpenID specification. This
includes missing fields or not signing fields that should
be signed.
@raises DiscoveryFailure: If the subject of the id_res message
does not match the supplied endpoint, and discovery on the
identifier in the message fails (this should only happen
when using OpenID 2)
@returntype: L{Response}
"""
# Checks for presence of appropriate fields (and checks
# signed list fields)
self._idResCheckForFields(message)
if not self._checkReturnTo(message, return_to):
raise ProtocolError(
"return_to does not match return URL. Expected %r, got %r"
% (return_to, message.getArg(OPENID_NS, 'return_to')))
# Verify discovery information:
endpoint = self._verifyDiscoveryResults(message, endpoint)
logging.info("Received id_res response from %s using association %s" %
(endpoint.server_url,
message.getArg(OPENID_NS, 'assoc_handle')))
self._idResCheckSignature(message, endpoint.server_url)
# Will raise a ProtocolError if the nonce is bad
self._idResCheckNonce(message, endpoint)
signed_list_str = message.getArg(OPENID_NS, 'signed', no_default)
signed_list = signed_list_str.split(',')
signed_fields = ["openid." + s for s in signed_list]
return SuccessResponse(endpoint, message, signed_fields) |
def set_target_temperature_by_id(self, zone_id, target_temperature):
"""
Set the target temperature for a zone by id
"""
if not self._do_auth():
raise RuntimeError("Unable to login")
data = {
"ZoneId": zone_id,
"TargetTemperature": target_temperature
}
headers = {
"Accept": "application/json",
"Content-Type": "application/json",
'Authorization':
'Bearer ' + self.login_data['token']['accessToken']
}
url = self.api_base_url + "Home/ZoneTargetTemperature"
response = requests.post(url, data=json.dumps(
data), headers=headers, timeout=10)
if response.status_code != 200:
return False
zone_change_data = response.json()
return zone_change_data.get("isSuccess", False) | Set the target temperature for a zone by id | Below is the the instruction that describes the task:
### Input:
Set the target temperature for a zone by id
### Response:
def set_target_temperature_by_id(self, zone_id, target_temperature):
"""
Set the target temperature for a zone by id
"""
if not self._do_auth():
raise RuntimeError("Unable to login")
data = {
"ZoneId": zone_id,
"TargetTemperature": target_temperature
}
headers = {
"Accept": "application/json",
"Content-Type": "application/json",
'Authorization':
'Bearer ' + self.login_data['token']['accessToken']
}
url = self.api_base_url + "Home/ZoneTargetTemperature"
response = requests.post(url, data=json.dumps(
data), headers=headers, timeout=10)
if response.status_code != 200:
return False
zone_change_data = response.json()
return zone_change_data.get("isSuccess", False) |
def area_pipe_min(self):
"""The minimum cross-sectional area of the LFOM pipe that assures
a safety factor."""
return (self.safety_factor * self.q / self.vel_critical).to(u.cm**2) | The minimum cross-sectional area of the LFOM pipe that assures
a safety factor. | Below is the the instruction that describes the task:
### Input:
The minimum cross-sectional area of the LFOM pipe that assures
a safety factor.
### Response:
def area_pipe_min(self):
"""The minimum cross-sectional area of the LFOM pipe that assures
a safety factor."""
return (self.safety_factor * self.q / self.vel_critical).to(u.cm**2) |
def getDate(self):
"returns the GMT response datetime or None"
date = self.headers.get('date')
if date:
date = self.convertTimeString(date)
return date | returns the GMT response datetime or None | Below is the the instruction that describes the task:
### Input:
returns the GMT response datetime or None
### Response:
def getDate(self):
"returns the GMT response datetime or None"
date = self.headers.get('date')
if date:
date = self.convertTimeString(date)
return date |
def process_js_mod(self, data, name, idx, sub_idx):
"""
Processes one moduli from JSON
:param data:
:param name:
:param idx:
:param sub_idx:
:return:
"""
if isinstance(data, (int, long)):
js = collections.OrderedDict()
js['type'] = 'js-mod-num'
js['fname'] = name
js['idx'] = idx
js['sub_idx'] = sub_idx
js['n'] = '0x%x' % data
if self.has_fingerprint(data):
logger.warning('Fingerprint found in json int modulus %s idx %s %s' % (name, idx, sub_idx))
self.mark_and_add_effort(data, js)
if self.do_print:
print(json.dumps(js))
return TestResult(js)
self.process_mod_line(data, name, idx, aux={'stype': 'json', 'sub_idx': sub_idx}) | Processes one moduli from JSON
:param data:
:param name:
:param idx:
:param sub_idx:
:return: | Below is the the instruction that describes the task:
### Input:
Processes one moduli from JSON
:param data:
:param name:
:param idx:
:param sub_idx:
:return:
### Response:
def process_js_mod(self, data, name, idx, sub_idx):
"""
Processes one moduli from JSON
:param data:
:param name:
:param idx:
:param sub_idx:
:return:
"""
if isinstance(data, (int, long)):
js = collections.OrderedDict()
js['type'] = 'js-mod-num'
js['fname'] = name
js['idx'] = idx
js['sub_idx'] = sub_idx
js['n'] = '0x%x' % data
if self.has_fingerprint(data):
logger.warning('Fingerprint found in json int modulus %s idx %s %s' % (name, idx, sub_idx))
self.mark_and_add_effort(data, js)
if self.do_print:
print(json.dumps(js))
return TestResult(js)
self.process_mod_line(data, name, idx, aux={'stype': 'json', 'sub_idx': sub_idx}) |
def add_from_file(self, filename, handler_decorator=None):
"""
Wrapper around add() that reads the handlers from the
file with the given name. The file is a Python script containing
a list named 'commands' of tuples that map command names to
handlers.
:type filename: str
:param filename: The name of the file containing the tuples.
:type handler_decorator: function
:param handler_decorator: A function that is used to decorate
each of the handlers in the file.
"""
args = {}
execfile(filename, args)
commands = args.get('commands')
if commands is None:
raise Exception(filename + ' has no variable named "commands"')
elif not hasattr(commands, '__iter__'):
raise Exception(filename + ': "commands" is not iterable')
for key, handler in commands:
if handler_decorator:
handler = handler_decorator(handler)
self.add(key, handler) | Wrapper around add() that reads the handlers from the
file with the given name. The file is a Python script containing
a list named 'commands' of tuples that map command names to
handlers.
:type filename: str
:param filename: The name of the file containing the tuples.
:type handler_decorator: function
:param handler_decorator: A function that is used to decorate
each of the handlers in the file. | Below is the the instruction that describes the task:
### Input:
Wrapper around add() that reads the handlers from the
file with the given name. The file is a Python script containing
a list named 'commands' of tuples that map command names to
handlers.
:type filename: str
:param filename: The name of the file containing the tuples.
:type handler_decorator: function
:param handler_decorator: A function that is used to decorate
each of the handlers in the file.
### Response:
def add_from_file(self, filename, handler_decorator=None):
"""
Wrapper around add() that reads the handlers from the
file with the given name. The file is a Python script containing
a list named 'commands' of tuples that map command names to
handlers.
:type filename: str
:param filename: The name of the file containing the tuples.
:type handler_decorator: function
:param handler_decorator: A function that is used to decorate
each of the handlers in the file.
"""
args = {}
execfile(filename, args)
commands = args.get('commands')
if commands is None:
raise Exception(filename + ' has no variable named "commands"')
elif not hasattr(commands, '__iter__'):
raise Exception(filename + ': "commands" is not iterable')
for key, handler in commands:
if handler_decorator:
handler = handler_decorator(handler)
self.add(key, handler) |
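An illustrative commands file for add_from_file(): the loaded script must define a module-level list named 'commands' of (name, handler) tuples. The handler names, signatures and file name are hypothetical.
# commands.py -- loaded via add_from_file('commands.py')

def handle_status(*args):
    print('status requested with', args)

def handle_quit(*args):
    print('shutting down')

# add_from_file() looks for this exact variable and iterates its (key, handler) tuples.
commands = [
    ('status', handle_status),
    ('quit', handle_quit),
]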
def _get_usage(self, account_number, number):
"""Get Fido usage.
Get the following data
- talk
- text
- data
Roaming data is not supported yet
"""
# Prepare data
data = {"ctn": number,
"language": "en-US",
"accountNumber": account_number}
# Http request
try:
raw_res = yield from self._session.post(USAGE_URL,
data=data,
headers=self._headers,
timeout=self._timeout)
except OSError:
raise PyFidoError("Can not get usage")
# Load answer as json
try:
output = yield from raw_res.json()
except (OSError, ValueError):
raise PyFidoError("Can not get usage as json")
# Format data
ret_data = {}
for data_name, keys in DATA_MAP.items():
key, subkey = keys
for data in output.get(key)[0].get('wirelessUsageSummaryInfoList'):
if data.get('usageSummaryType') == subkey:
# Prepare keys:
used_key = "{}_used".format(data_name)
remaining_key = "{}_remaining".format(data_name)
limit_key = "{}_limit".format(data_name)
# Get values
ret_data[used_key] = data.get('used', 0.0)
if data.get('remaining') >= 0:
ret_data[remaining_key] = data.get('remaining')
else:
ret_data[remaining_key] = None
if data.get('total') >= 0:
ret_data[limit_key] = data.get('total')
else:
ret_data[limit_key] = None
return ret_data | Get Fido usage.
Get the following data
- talk
- text
- data
Roaming data is not supported yet | Below is the the instruction that describes the task:
### Input:
Get Fido usage.
Get the following data
- talk
- text
- data
Roaming data is not supported yet
### Response:
def _get_usage(self, account_number, number):
"""Get Fido usage.
Get the following data
- talk
- text
- data
Roaming data is not supported yet
"""
# Prepare data
data = {"ctn": number,
"language": "en-US",
"accountNumber": account_number}
# Http request
try:
raw_res = yield from self._session.post(USAGE_URL,
data=data,
headers=self._headers,
timeout=self._timeout)
except OSError:
raise PyFidoError("Can not get usage")
# Load answer as json
try:
output = yield from raw_res.json()
except (OSError, ValueError):
raise PyFidoError("Can not get usage as json")
# Format data
ret_data = {}
for data_name, keys in DATA_MAP.items():
key, subkey = keys
for data in output.get(key)[0].get('wirelessUsageSummaryInfoList'):
if data.get('usageSummaryType') == subkey:
# Prepare keys:
used_key = "{}_used".format(data_name)
remaining_key = "{}_remaining".format(data_name)
limit_key = "{}_limit".format(data_name)
# Get values
ret_data[used_key] = data.get('used', 0.0)
if data.get('remaining') >= 0:
ret_data[remaining_key] = data.get('remaining')
else:
ret_data[remaining_key] = None
if data.get('total') >= 0:
ret_data[limit_key] = data.get('total')
else:
ret_data[limit_key] = None
return ret_data |
def close_state_machine(self, widget, page_number, event=None):
"""Triggered when the close button in the tab is clicked
"""
page = widget.get_nth_page(page_number)
for tab_info in self.tabs.values():
if tab_info['page'] is page:
state_machine_m = tab_info['state_machine_m']
self.on_close_clicked(event, state_machine_m, None, force=False)
return | Triggered when the close button in the tab is clicked | Below is the the instruction that describes the task:
### Input:
Triggered when the close button in the tab is clicked
### Response:
def close_state_machine(self, widget, page_number, event=None):
"""Triggered when the close button in the tab is clicked
"""
page = widget.get_nth_page(page_number)
for tab_info in self.tabs.values():
if tab_info['page'] is page:
state_machine_m = tab_info['state_machine_m']
self.on_close_clicked(event, state_machine_m, None, force=False)
return |
def _is_already_configured(configuration_details):
"""Returns `True` when alias already in shell config."""
path = Path(configuration_details.path).expanduser()
with path.open('r') as shell_config:
return configuration_details.content in shell_config.read() | Returns `True` when alias already in shell config. | Below is the the instruction that describes the task:
### Input:
Returns `True` when alias already in shell config.
### Response:
def _is_already_configured(configuration_details):
"""Returns `True` when alias already in shell config."""
path = Path(configuration_details.path).expanduser()
with path.open('r') as shell_config:
return configuration_details.content in shell_config.read() |
def tran3(self, a, b, c, n):
"""Get accumulator for a transition n between chars a, b, c."""
return (((TRAN[(a+n)&255]^TRAN[b]*(n+n+1))+TRAN[(c)^TRAN[n]])&255) | Get accumulator for a transition n between chars a, b, c. | Below is the the instruction that describes the task:
### Input:
Get accumulator for a transition n between chars a, b, c.
### Response:
def tran3(self, a, b, c, n):
"""Get accumulator for a transition n between chars a, b, c."""
return (((TRAN[(a+n)&255]^TRAN[b]*(n+n+1))+TRAN[(c)^TRAN[n]])&255) |
def generate_start_command(server, options_override=None, standalone=False):
"""
Check if we need to use numactl if we are running on a NUMA box.
10gen recommends using numactl on NUMA. For more info, see
http://www.mongodb.org/display/DOCS/NUMA
"""
command = []
if mongod_needs_numactl():
log_info("Running on a NUMA machine...")
command = apply_numactl(command)
# append the mongod executable
command.append(get_server_executable(server))
# create the command args
cmd_options = server.export_cmd_options(options_override=options_override,
standalone=standalone)
command.extend(options_to_command_args(cmd_options))
return command | Check if we need to use numactl if we are running on a NUMA box.
10gen recommends using numactl on NUMA. For more info, see
http://www.mongodb.org/display/DOCS/NUMA | Below is the the instruction that describes the task:
### Input:
Check if we need to use numactl if we are running on a NUMA box.
10gen recommends using numactl on NUMA. For more info, see
http://www.mongodb.org/display/DOCS/NUMA
### Response:
def generate_start_command(server, options_override=None, standalone=False):
"""
Check if we need to use numactl if we are running on a NUMA box.
10gen recommends using numactl on NUMA. For more info, see
http://www.mongodb.org/display/DOCS/NUMA
"""
command = []
if mongod_needs_numactl():
log_info("Running on a NUMA machine...")
command = apply_numactl(command)
# append the mongod executable
command.append(get_server_executable(server))
# create the command args
cmd_options = server.export_cmd_options(options_override=options_override,
standalone=standalone)
command.extend(options_to_command_args(cmd_options))
return command |
def ffill(arr, dim=None, limit=None):
'''forward fill missing values'''
import bottleneck as bn
axis = arr.get_axis_num(dim)
# work around for bottleneck 178
_limit = limit if limit is not None else arr.shape[axis]
return apply_ufunc(bn.push, arr,
dask='parallelized',
keep_attrs=True,
output_dtypes=[arr.dtype],
kwargs=dict(n=_limit, axis=axis)).transpose(*arr.dims) | forward fill missing values | Below is the the instruction that describes the task:
### Input:
forward fill missing values
### Response:
def ffill(arr, dim=None, limit=None):
'''forward fill missing values'''
import bottleneck as bn
axis = arr.get_axis_num(dim)
# work around for bottleneck 178
_limit = limit if limit is not None else arr.shape[axis]
return apply_ufunc(bn.push, arr,
dask='parallelized',
keep_attrs=True,
output_dtypes=[arr.dtype],
kwargs=dict(n=_limit, axis=axis)).transpose(*arr.dims) |
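A usage sketch for the module-level ffill() helper above, assuming xarray, numpy and bottleneck are installed; the sample array is invented.
import numpy as np
import xarray as xr

arr = xr.DataArray([1.0, np.nan, np.nan, 4.0, np.nan], dims='x')

# Fill gaps forward along 'x', propagating each valid value at most one step.
filled = ffill(arr, dim='x', limit=1)
print(filled.values)  # expected: [ 1.  1. nan  4.  4.]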
def _check_versionlock():
'''
Ensure that the appropriate versionlock plugin is present
'''
if _yum() == 'dnf':
if int(__grains__.get('osmajorrelease')) >= 26:
if six.PY3:
vl_plugin = 'python3-dnf-plugin-versionlock'
else:
vl_plugin = 'python2-dnf-plugin-versionlock'
else:
if six.PY3:
vl_plugin = 'python3-dnf-plugins-extras-versionlock'
else:
vl_plugin = 'python-dnf-plugins-extras-versionlock'
else:
vl_plugin = 'yum-versionlock' \
if __grains__.get('osmajorrelease') == '5' \
else 'yum-plugin-versionlock'
if vl_plugin not in list_pkgs():
raise SaltInvocationError(
'Cannot proceed, {0} is not installed.'.format(vl_plugin)
) | Ensure that the appropriate versionlock plugin is present | Below is the the instruction that describes the task:
### Input:
Ensure that the appropriate versionlock plugin is present
### Response:
def _check_versionlock():
'''
Ensure that the appropriate versionlock plugin is present
'''
if _yum() == 'dnf':
if int(__grains__.get('osmajorrelease')) >= 26:
if six.PY3:
vl_plugin = 'python3-dnf-plugin-versionlock'
else:
vl_plugin = 'python2-dnf-plugin-versionlock'
else:
if six.PY3:
vl_plugin = 'python3-dnf-plugins-extras-versionlock'
else:
vl_plugin = 'python-dnf-plugins-extras-versionlock'
else:
vl_plugin = 'yum-versionlock' \
if __grains__.get('osmajorrelease') == '5' \
else 'yum-plugin-versionlock'
if vl_plugin not in list_pkgs():
raise SaltInvocationError(
'Cannot proceed, {0} is not installed.'.format(vl_plugin)
) |
def create_ip_arp_reply(srchw, dsthw, srcip, targetip):
'''
Create an ARP reply (just change what needs to be changed
from a request)
'''
pkt = create_ip_arp_request(srchw, srcip, targetip)
pkt[0].dst = dsthw
pkt[1].operation = ArpOperation.Reply
pkt[1].targethwaddr = dsthw
return pkt | Create an ARP reply (just change what needs to be changed
from a request) | Below is the the instruction that describes the task:
### Input:
Create an ARP reply (just change what needs to be changed
from a request)
### Response:
def create_ip_arp_reply(srchw, dsthw, srcip, targetip):
'''
Create an ARP reply (just change what needs to be changed
from a request)
'''
pkt = create_ip_arp_request(srchw, srcip, targetip)
pkt[0].dst = dsthw
pkt[1].operation = ArpOperation.Reply
pkt[1].targethwaddr = dsthw
return pkt |
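A minimal call sketch for create_ip_arp_reply(), assuming the packet library that provides create_ip_arp_request() is importable; the hardware and IP addresses are placeholders, and whether plain strings are accepted (rather than EthAddr/IP address objects) depends on that library.
reply = create_ip_arp_reply(srchw='00:11:22:33:44:55',
                            dsthw='66:77:88:99:aa:bb',
                            srcip='192.168.1.10',
                            targetip='192.168.1.20')
# Relative to the request packet, the Ethernet destination, ARP operation and
# ARP target hardware address have been rewritten.
print(reply)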
def get_connection(self, from_obj, to_obj):
"""
Returns a ``Connection`` instance for the given objects or ``None`` if
there's no connection.
"""
self._validate_ctypes(from_obj, to_obj)
try:
return self.connections.get(from_pk=from_obj.pk, to_pk=to_obj.pk)
except Connection.DoesNotExist:
return None | Returns a ``Connection`` instance for the given objects or ``None`` if
there's no connection. | Below is the the instruction that describes the task:
### Input:
Returns a ``Connection`` instance for the given objects or ``None`` if
there's no connection.
### Response:
def get_connection(self, from_obj, to_obj):
"""
Returns a ``Connection`` instance for the given objects or ``None`` if
there's no connection.
"""
self._validate_ctypes(from_obj, to_obj)
try:
return self.connections.get(from_pk=from_obj.pk, to_pk=to_obj.pk)
except Connection.DoesNotExist:
return None |
def date_proc(func):
""" An decorator checking whether date parameter is passing in or not. If not, default date value is all PTT data.
Else, return PTT data with right date.
Args:
func: function you want to decorate.
        request: WSGI request parameter obtained from Django.
Returns:
date:
            a datetime variable, you can only give year, year + month or year + month + day, three types.
The missing part would be assigned default value 1 (for month is Jan, for day is 1).
"""
@wraps(func)
def wrapped(request, *args, **kwargs):
if 'date' in request.GET and request.GET['date'] == '':
raise Http404("api does not exist")
elif 'date' not in request.GET:
date = datetime.today()
return func(request, date)
else:
date = tuple(int(intValue) for intValue in request.GET['date'].split('-'))
if len(date) == 3:
date = datetime(*date)
elif len(date) == 2:
date = datetime(*date, day = 1)
else:
date = datetime(*date, month = 1, day = 1)
return func(request, date)
    return wrapped | A decorator checking whether a date parameter is passed in or not. If not, the default date value is all PTT data.
Else, return PTT data with right date.
Args:
func: function you want to decorate.
        request: WSGI request parameter obtained from Django.
Returns:
date:
            a datetime variable, you can only give year, year + month or year + month + day, three types.
The missing part would be assigned default value 1 (for month is Jan, for day is 1). | Below is the the instruction that describes the task:
### Input:
A decorator checking whether a date parameter is passed in or not. If not, the default date value is all PTT data.
Else, return PTT data with right date.
Args:
func: function you want to decorate.
    request: WSGI request parameter obtained from Django.
Returns:
date:
        a datetime variable, you can only give year, year + month or year + month + day, three types.
The missing part would be assigned default value 1 (for month is Jan, for day is 1).
### Response:
def date_proc(func):
""" An decorator checking whether date parameter is passing in or not. If not, default date value is all PTT data.
Else, return PTT data with right date.
Args:
func: function you want to decorate.
        request: WSGI request parameter obtained from Django.
Returns:
date:
            a datetime variable, you can only give year, year + month or year + month + day, three types.
The missing part would be assigned default value 1 (for month is Jan, for day is 1).
"""
@wraps(func)
def wrapped(request, *args, **kwargs):
if 'date' in request.GET and request.GET['date'] == '':
raise Http404("api does not exist")
elif 'date' not in request.GET:
date = datetime.today()
return func(request, date)
else:
date = tuple(int(intValue) for intValue in request.GET['date'].split('-'))
if len(date) == 3:
date = datetime(*date)
elif len(date) == 2:
date = datetime(*date, day = 1)
else:
date = datetime(*date, month = 1, day = 1)
return func(request, date)
return wrapped |
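A usage sketch for the date_proc decorator above in a hypothetical Django view; the view name and response body are invented.
from django.http import JsonResponse

@date_proc
def posts_by_date(request, date):
    # 'date' arrives as a datetime built from the optional ?date=YYYY[-MM[-DD]] query
    # parameter, or defaults to today when the parameter is absent.
    return JsonResponse({'date': date.isoformat()})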
def find_xml_generator(name="castxml"):
"""
Try to find a c++ parser (xml generator)
Args:
name (str): name of the c++ parser (e.g. castxml)
Returns:
        path (str), name (str): path to the xml generator and its name
If no c++ parser is found the function raises an exception.
pygccxml does currently only support castxml as c++ parser.
"""
if sys.version_info[:2] >= (3, 3):
path = _find_xml_generator_for_python_greater_equals_33(name)
else:
path = _find_xml_generator_for_legacy_python(name)
if path == "" or path is None:
raise Exception("No c++ parser found. Please install castxml.")
return path.rstrip(), name | Try to find a c++ parser (xml generator)
Args:
name (str): name of the c++ parser (e.g. castxml)
Returns:
        path (str), name (str): path to the xml generator and its name
If no c++ parser is found the function raises an exception.
pygccxml does currently only support castxml as c++ parser. | Below is the the instruction that describes the task:
### Input:
Try to find a c++ parser (xml generator)
Args:
name (str): name of the c++ parser (e.g. castxml)
Returns:
    path (str), name (str): path to the xml generator and its name
If no c++ parser is found the function raises an exception.
pygccxml does currently only support castxml as c++ parser.
### Response:
def find_xml_generator(name="castxml"):
"""
Try to find a c++ parser (xml generator)
Args:
name (str): name of the c++ parser (e.g. castxml)
Returns:
path (str), name (str): path to the xml generator and it's name
If no c++ parser is found the function raises an exception.
pygccxml does currently only support castxml as c++ parser.
"""
if sys.version_info[:2] >= (3, 3):
path = _find_xml_generator_for_python_greater_equals_33(name)
else:
path = _find_xml_generator_for_legacy_python(name)
if path == "" or path is None:
raise Exception("No c++ parser found. Please install castxml.")
return path.rstrip(), name |
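The typical way this helper is combined with pygccxml is to feed its result into the parser configuration; the header file name below is a placeholder.
from pygccxml import parser, declarations

generator_path, generator_name = find_xml_generator()

config = parser.xml_generator_configuration_t(
    xml_generator_path=generator_path,
    xml_generator=generator_name)

# Parse a C++ header with the discovered generator (castxml).
decls = parser.parse(['my_header.hpp'], config)
global_ns = declarations.get_global_namespace(decls)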
def wait_for_exists(self, timeout=0, *args, **selectors):
"""
Wait for the object which has *selectors* within the given timeout.
    Return true if the object *appears* within the given timeout. Else return false.
"""
return self.device(**selectors).wait.exists(timeout=timeout) | Wait for the object which has *selectors* within the given timeout.
Return true if the object *appears* within the given timeout. Else return false. | Below is the the instruction that describes the task:
### Input:
Wait for the object which has *selectors* within the given timeout.
Return true if the object *appears* within the given timeout. Else return false.
### Response:
def wait_for_exists(self, timeout=0, *args, **selectors):
"""
Wait for the object which has *selectors* within the given timeout.
    Return true if the object *appears* within the given timeout. Else return false.
"""
return self.device(**selectors).wait.exists(timeout=timeout) |
def join_paths(path1: Optional[str], path2: Optional[str]) -> Optional[str]:
""" Joins two paths if neither of them is None """
if path1 is not None and path2 is not None:
return os.path.join(path1, path2) | Joins two paths if neither of them is None | Below is the the instruction that describes the task:
### Input:
Joins two paths if neither of them is None
### Response:
def join_paths(path1: Optional[str], path2: Optional[str]) -> Optional[str]:
""" Joins two paths if neither of them is None """
if path1 is not None and path2 is not None:
return os.path.join(path1, path2) |
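A short self-contained illustration of the None-propagating behaviour; note the function implicitly returns None whenever either argument is None.
print(join_paths('/srv/data', 'models/latest.bin'))  # -> /srv/data/models/latest.bin
print(join_paths('/srv/data', None))                 # -> None
print(join_paths(None, 'models/latest.bin'))         # -> None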