"""
Gage adjustment
^^^^^^^^^^^^^^^
Concept
-------
The objective of this module is the adjustment of radar-based rainfall
estimates by rain gage observations. However, this module could also be
applied to adjust satellite rainfall by rain gage observations, remotely
sensed soil moisture patterns by ground truthing moisture sensors, or any
dense spatial point pattern which could be adjusted by sparse point
measurements (ground truth).
Basically, we only need two data sources:
- point observations (e.g. rain gage observations)
- set of (potentially irregular) unadjusted point values
(e.g. remotely sensed rainfall)
:cite:`Goudenhoofdt2009` provide an excellent overview of adjustment
procedures. The general idea is that we quantify the error of the
remotely sensed rainfall at the rain gage locations, assuming the rain
gage observation to be accurate.
The error can be assumed to be purely additive
(:class:`~wradlib.adjust.AdjustAdd`), purely multiplicative
(:class:`~wradlib.adjust.AdjustMultiply`, :class:`~wradlib.adjust.AdjustMFB`)
or a mixture of both (:class:`~wradlib.adjust.AdjustMixed`).
If the error is assumed to be heterogeneous in space
(:class:`~wradlib.adjust.AdjustAdd`, :class:`~wradlib.adjust.AdjustMultiply`,
:class:`~wradlib.adjust.AdjustMixed`), the error at the rain gage locations is
interpolated to the radar bin locations and then used to adjust (correct)
the raw radar rainfall estimates. In case of the AdjustMFB approach, though,
the multiplicative error is assumed to be homogeneous in space.
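As a minimal sketch (using the variable names introduced in the quick start
below), the error at the gage locations is quantified as follows for the two
basic models, where ``rawatobs`` denotes the raw values extracted at the gage
locations::

    add_error = obs - rawatobs     # additive error model
    mult_error = obs / rawatobs    # multiplicative error model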
Quick start
-----------
The basic procedure consists of creating an adjustment object from the class
you want to use for adjustment. After that, you can call the object with the
actual data that is to be adjusted. The following example is using the
additive error model with default settings. ``obs_coords`` and
``raw_coords`` represent arrays with coordinate pairs for the gage
observations and the radar bins, respectively. ``obs`` and ``raw`` are
arrays containing the actual data::
adjuster = AdjustAdd(obs_coords, raw_coords)
adjusted = adjuster(obs, raw)
Both ``obs`` and ``raw`` need to be flat (1-dimensional) arrays of shape (n,)
that have the same length as the ``obs_coords`` and ``raw_coords`` arrays,
respectively.
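A self-contained sketch with synthetic (purely hypothetical) data might look
like this::

    import numpy as np
    from wradlib.adjust import AdjustAdd

    # five rain gages with coordinates and observed rainfall
    obs_coords = np.array([[3.0, 4.0], [8.0, 2.0], [2.0, 8.0], [7.0, 7.0], [5.0, 5.0]])
    obs = np.array([4.0, 3.0, 6.0, 5.0, 4.5])
    # a coarse 10 x 10 "radar" grid, flattened to (100, 2) coordinate pairs
    xgrid, ygrid = np.meshgrid(np.arange(10.0), np.arange(10.0))
    raw_coords = np.column_stack([xgrid.ravel(), ygrid.ravel()])
    raw = np.full(len(raw_coords), 3.0)

    adjuster = AdjustAdd(obs_coords, raw_coords)
    adjusted = adjuster(obs, raw)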
The user can specify the approach that should be used to interpolate the error
in space, as well as the keyword arguments which control the behaviour of the
interpolation approach. For this purpose, all interpolation classes from the
:mod:`wradlib.ipol` module are available and can be passed by using the
``ipclass`` argument. The default interpolation class is
Inverse Distance Weighting (:class:`~wradlib.ipol.Idw`). If you want to use
e.g. linear barycentric interpolation::
import wradlib.ipol as ipol
adjuster = AdjustAdd(obs_coords, raw_coords, ipclass=ipol.Linear)
adjusted = adjuster(obs, raw)
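Keyword arguments for the chosen interpolation class can be passed in the same
call. A sketch for Inverse Distance Weighting with, e.g., six neighbours and an
exponent of 2 (``nnear`` and ``p`` are the example keywords mentioned for
:class:`~wradlib.ipol.Idw`)::

    adjuster = AdjustAdd(obs_coords, raw_coords, ipclass=ipol.Idw, nnear=6, p=2.0)
    adjusted = adjuster(obs, raw)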
Warning
-------
Be aware that there are a lot of control parameters that can dramatically
influence the behaviour of the adjustment (which gauges are considered,
how is an error interpolation carried out, ...). Read the docs carefully
and try to experiment with the effects of the different control parameters.
There might be situations in which the algorithm decides - based on the
control parameters - not to do an adjustment and just return the unadjusted
values.
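For instance, if fewer valid gages than ``mingages`` (default: 5) are
available, the unadjusted ``raw`` values are returned. A sketch of disabling
that check::

    adjuster = AdjustAdd(obs_coords, raw_coords, mingages=0)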
Cross validation
----------------
Another helpful feature is an easy-to-use method for leave-one-out
cross-validation :cite:`Cross-validation`. Cross validation is a standard
procedure for verifying rain gage adjustment or interpolation procedures. You
can start the cross validation in the same way as you start the actual
adjustment, however, you call the :meth:`~wradlib.adjust.AdjustBase.xvalidate`
method instead. The result of the cross validation is a set of pairs of
observations and the corresponding estimates at the observation locations. Using the
:mod:`wradlib.verify` module, you can compute error metrics for the cross
validation results::
adjuster = AdjustAdd(obs_coords, raw_coords)
observed, estimated = adjuster.xvalidate(obs, raw)
from wradlib.verify import ErrorMetrics
metrics = ErrorMetrics(observed, estimated)
metrics.report()
.. autosummary::
:nosignatures:
:toctree: generated/
{}
"""
__all__ = [
"AdjustBase",
"AdjustMFB",
"AdjustMultiply",
"AdjustAdd",
"AdjustMixed",
"RawAtObs",
"GageOnly",
"AdjustNone",
]
__doc__ = __doc__.format("\n ".join(__all__))
import numpy as np
from scipy import spatial, stats
from wradlib import ipol, util
class AdjustBase(ipol.IpolBase):
"""The basic adjustment class that inherits to all other classes.
All methods except the :meth:`~wradlib.adjust.AdjustBase.__call__` method
are inherited to the following adjustment classes.
Parameters
----------
obs_coords : :py:class:`numpy:numpy.ndarray`
array of floats of shape (number of points, 2)
x and y coordinate pairs of observation locations (e.g. rain gauges).
raw_coords : :py:class:`numpy:numpy.ndarray`
array of floats of shape (number of points, 2)
x and y coordinate pairs of raw (unadjusted) radar field
nnear_raws : int
Defaults to 9. This parameter controls the number of radar bins or
grid cells (in the neighbourhood of a rain gauge) which is used to
compute the value of the radar observation AT a rain gauge.
stat : str
Defaults to 'median'. Must be either 'mean', 'median', or 'best'.
This parameter controls the statistic that is used to compute the value
of the radar observation AT a rain gauge based on the neighbourhood
specified by parameter ``nnear_raws``.
mingages : int
Defaults to 5. Minimum number of valid gages required for an
adjustment. If fewer valid gauges are available, the adjustment
procedure will return unadjusted raw values. If you do not want to use
this feature, you need to set ``mingages=0``.
minval : float
If the gage or radar observation is below this threshold, the location
will not be used for adjustment. For additive adjustment, this value
should be set to zero (default value). For multiplicative adjustment,
values larger than zero might be chosen in order to minimize
artifacts.
mfb_args : dict
**Only used for AdjustMFB** - This set of parameters controls how the
mean field bias is computed. Items of the dictionary are:
- *method*: string
defaults to 'linregr' which fits a regression line through observed
and estimated values and then gets the bias from the inverse of
the slope.
Other values: 'mean' or 'median' compute the mean or the median of
the ratios between gauge and radar observations.
- *minslope*, *minr*, *maxp*:
When using method='linregr', these parameters control whether a
linear regression turned out to be robust (minimum allowable slope,
minimum allowable correlation, maximum allowable p-value). If the
regression result is not considered robust, no adjustment will
take place.
ipclass : :class:`wradlib.ipol.IpolBase`
an interpolation class from :mod:`wradlib.ipol`
**Not used for AdjustMFB** - default value is
:class:`~wradlib.ipol.Idw` (Inverse Distance Weighting).
ipargs : dict
keyword arguments to create an instance of ipclass
**Not used for AdjustMFB** - for :class:`~wradlib.ipol.Idw`, these
keyword arguments would e.g. be ``nnear`` or ``p``.
Examples
--------
See :ref:`/notebooks/multisensor/wradlib_adjust_example.ipynb`.
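A minimal sketch (with hypothetical ``obs_coords``, ``raw_coords``, ``obs``
and ``raw`` arrays as described above) using non-default control parameters::

    adjuster = AdjustMultiply(obs_coords, raw_coords, nnear_raws=6,
                              stat="mean", mingages=3, minval=0.1)
    adjusted = adjuster(obs, raw)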
"""
def __init__(
self,
obs_coords,
raw_coords,
nnear_raws=9,
stat="median",
mingages=5,
minval=0.0,
mfb_args=None,
ipclass=ipol.Idw,
**ipargs,
):
# Check arguments
if mfb_args is None:
mfb_args = dict(method="linregr", minslope=0.1, minr=0.5, maxp=0.01)
assert mfb_args["method"] in ["mean", "median", "linregr"], (
"Argument mfb_args['method'] has to be one "
"out of 'mean', 'median' or 'linregr'."
)
# These are the coordinates of the rain gage locations and
# the radar bin locations
self.obs_coords = self._make_coord_arrays(obs_coords)
self.raw_coords = self._make_coord_arrays(raw_coords)
# These are the general control parameters
# for all adjustment procedures
self.nnear_raws = nnear_raws
self.stat = stat
self.mingages = mingages
self.minval = minval
# Control parameters for specific adjustment procedures
# for AdjustMFB
self.mfb_args = mfb_args
# interpolation class and its keyword arguments
# (needed for AdjustAdd, AdjustMultiply, AdjustMixed)
self.ipclass = ipclass
self.ipargs = ipargs
# create a default instance of interpolator
self.ip = ipclass(src=self.obs_coords, trg=self.raw_coords, **ipargs)
# This method will quickly retrieve the actual radar values
# at the gage locations
self.get_raw_at_obs = RawAtObs(
self.obs_coords, self.raw_coords, nnear=nnear_raws, stat=stat
)
def _checkip(self, ix, targets):
"""INTERNAL: Return a revised instance of the Interpolator class.
When an instance of an Adjust... class is created, an instance of the
desired Interpolation class (argument ipclass) is created as attribute
*self.ip*. However, this instance is only valid in case all
observation points (attribute *self.obs_coords*) have valid
observation-radar pairs. In case points are missing (or in case the
instance is called in the course of cross validation), a new instance
has to be created which considers the new constellation of
observation-radar pairs.
This method computes and returns this new instance.
Parameters
----------
ix : :py:class:`numpy:numpy.ndarray`
array of integers
These are the indices of observation points with valid
observation-radar pairs
targets : :py:class:`numpy:numpy.ndarray`
array of floats of shape (number of target points, 2)
Target coordinates for the interpolation
Returns
-------
output : :class:`wradlib.ipol.IpolBase`
an instance of a class that inherited from :class:`wradlib.ipol.IpolBase`
"""
# first, set interpolation targets (default: the radar coordinates)
targets_default = False
if targets is None:
targets = self.raw_coords
targets_default = True
# second, compute inverse distance neighbours
if (not len(ix) == len(self.obs_coords)) or (not targets_default):
return self.ipclass(self.obs_coords[ix], targets, **self.ipargs)
else:
return self.ip
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Parameters
----------
obs : :py:class:`numpy:numpy.ndarray`
flat (1-D) array of floats with shape (num gauges,)
These are the gage observations used for adjustment. This array
needs to be of the same length as the array "obs_coords" used to
initialize the adjustment object.
raw : :py:class:`numpy:numpy.ndarray`
flat (1-D) array of floats with shape (num radar cells,)
These are the raw (unadjusted) radar rainfall values. This array
needs to be of the same length as the array "raw_coords" used to
initialize the adjustment object.
targets : :py:class:`numpy:numpy.ndarray`
(INTERNAL - DO NOT USE)
Array of floats. Coordinate pairs for locations on which the final
adjustment product is interpolated
Defaults to None. In this case, the output locations will be
identical to the radar coordinates
rawatobs : :py:class:`numpy:numpy.ndarray`
(INTERNAL - DO NOT USE)
Array of floats. For internal use from AdjustBase.xvalidate only
(defaults to None)
ix : :py:class:`numpy:numpy.ndarray`
(INTERNAL - DO NOT USE)
Array of integers. For internal use from AdjustBase.xvalidate only
(defaults to None)
"""
def _check_shape(self, obs, raw):
"""INTERNAL: Check consistency of the input data obs and raw with
the shapes of the coordinates
"""
# TODO
def _get_valid_pairs(self, obs, raw):
"""INTERNAL: Helper method to identify valid obs-raw pairs"""
# checking input shape consistency
self._check_shape(obs, raw)
# radar values at gage locations
rawatobs = self.get_raw_at_obs(raw, obs)
# check where both gage and radar observations are valid
ix = np.intersect1d(
util._idvalid(obs, minval=self.minval),
util._idvalid(rawatobs, minval=self.minval),
)
return rawatobs, ix
def xvalidate(self, obs, raw):
"""Leave-One-Out Cross Validation, applicable to all gage adjustment
classes.
This method will be inherited by other Adjust classes. It should thus
be applicable to all adjustment procedures without any modification.
This way, the actual adjustment procedure has only to be defined *once*
in the :meth:`~wradlib.adjust.AdjustBase.__call__` method.
The output of this method can be evaluated by using the
`verify.ErrorMetrics` class.
Parameters
----------
obs : :py:class:`numpy:numpy.ndarray`
array of floats
raw : :py:class:`numpy:numpy.ndarray`
array of floats
Returns
-------
obs : :py:class:`numpy:numpy.ndarray`
array of floats
valid observations at those locations which have a valid radar
observation
estatobs : :py:class:`numpy:numpy.ndarray`
array of floats
estimated values at the valid observation locations
"""
rawatobs, ix = self._get_valid_pairs(obs, raw)
self.get_raws_directly_at_obs = RawAtObs(
self.obs_coords, self.raw_coords, nnear=1
)
raws_directly_at_obs = self.get_raws_directly_at_obs(raw)
ix = np.intersect1d(ix, util._idvalid(raws_directly_at_obs, minval=self.minval))
# Container for estimation results at the observation location
estatobs = np.zeros(obs.shape) * np.nan
# check whether enough gages remain for adjustment
if len(ix) <= (self.mingages - 1):
# not enough gages for cross validation: return empty arrays
return obs, estatobs
# Now iterate over valid pairs
for i in ix:
# Pass all valid pairs except ONE which you pass as target
ix_adjust = np.setdiff1d(ix, [i])
estatobs[i] = self.__call__(
obs,
raws_directly_at_obs[i],
self.obs_coords[i].reshape((1, -1)),
rawatobs,
ix_adjust,
)
return obs, estatobs
class AdjustAdd(AdjustBase):
"""Gage adjustment using an additive error model.
First, an instance of AdjustAdd has to be created. Calling this instance
then does the actual adjustment. The motivation behind this is performance.
In case the observation points are always the same for different time
steps, the computation of neighbours and inverse distance weights only
needs to be performed once.
AdjustAdd automatically takes care of invalid gage or radar observations
(e.g. NaN, Inf or other typical missing data flags such as -9999).
However, in case e.g. the observation data contains missing values, the
computation of the inverse distance weights needs to be repeated in
:meth:`~wradlib.adjust.AdjustAdd.__call__` which is at the expense of
performance.
Note
----
Inherits from :class:`wradlib.adjust.AdjustBase`
For a complete overview of parameters for the initialisation of adjustment
objects, as well as an extensive example, please see
:class:`wradlib.adjust.AdjustBase`.
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
array of adjusted radar values
"""
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Calling an adjustment object works the same for all adjustment classes.
Detailed instructions on the parameters ``obs`` and ``raw`` are
provided in :meth:`wradlib.adjust.AdjustBase.__call__`.
"""
# ----------------GENERIC PART FOR MOST __call__ methods---------------
if (ix is None) or (rawatobs is None):
# Check for valid observation-radar pairs in case this method has
# not been called from self.xvalidate
rawatobs, ix = self._get_valid_pairs(obs, raw)
if len(ix) < self.mingages:
# Not enough valid gages for adjustment? - return unadjusted data
return raw
# Get new Interpolator instance if necessary
ip = self._checkip(ix, targets)
# -----------------THIS IS THE ACTUAL ADJUSTMENT APPROACH--------------
# The error is a difference
error = obs[ix] - rawatobs[ix]
# interpolate the error field
iperror = ip(error)
# add error field to raw and make sure no negatives occur
return np.where((raw + iperror) < 0.0, 0.0, raw + iperror)
class AdjustMultiply(AdjustBase):
"""Gage adjustment using a multiplicative error model
First, an instance of AdjustMultiply has to be created. Calling this
instance then does the actual adjustment. The motivation behind this is
performance. In case the observation points are always the same for
different time steps, the computation of neighbours and inverse distance
weights only needs to be performed once during initialisation.
AdjustMultiply automatically takes care of invalid gage or radar
observations (e.g. NaN, Inf or other typical missing data flags such as
-9999). However, in case e.g. the observation data contain missing values,
the computation of the inverse distance weights needs to be repeated in
:meth:`~wradlib.adjust.AdjustMultiply.__call__` which is at the expense of
performance.
Note
----
Inherits from :class:`wradlib.adjust.AdjustBase`
For a complete overview of parameters for the initialisation of adjustment
objects, as well as an extensive example, please see
:class:`wradlib.adjust.AdjustBase`.
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
array of adjusted radar values
"""
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Calling an adjustment object works the same for all adjustment classes.
Detailed instructions on the parameters ``obs`` and ``raw`` are
provided in :meth:`wradlib.adjust.AdjustBase.__call__`.
"""
# ----------------GENERIC PART FOR MOST __call__ methods---------------
if (ix is None) or (rawatobs is None):
# Check for valid observation-radar pairs in case this method has
# not been called from self.xvalidate
rawatobs, ix = self._get_valid_pairs(obs, raw)
if len(ix) < self.mingages:
# Not enough valid gages for adjustment? - return unadjusted data
return raw
# Get new Interpolator instance if necessary
ip = self._checkip(ix, targets)
# -----------------THIS IS THE ACTUAL ADJUSTMENT APPROACH--------------
# computing the error
error = obs[ix] / rawatobs[ix]
# interpolate error field
iperror = ip(error)
# multiply error field with raw
return iperror * raw
class AdjustMixed(AdjustBase):
"""Gage adjustment using a mixed error model (additive and multiplicative).
The mixed error model assumes that you have both a multiplicative and an
additive error term. The intention is to overcome the drawbacks of the
purely additive and multiplicative approaches (see
:class:`~wradlib.adjust.AdjustAdd` and
:class:`~wradlib.adjust.AdjustMultiply`). The formal representation of the
error model according to :cite:`Pfaff2010` is:
.. math::
R_{gage} = R_{radar} \\cdot (1 + \\delta) + \\epsilon
:math:`\\delta` and :math:`\\epsilon` have to be assumed to be independent
and normally distributed. The present implementation is based on a Least
Squares estimation of :math:`\\delta` and :math:`\\epsilon` for each rain
gage location. :math:`\\delta` and :math:`\\epsilon` are then interpolated
and used to correct the radar rainfall field.
The least squares implementation uses the equation for the error model plus
the condition to minimize (:math:`\\delta^2 + \\epsilon^2`) for each gage
location. The idea behind this is that :math:`\\epsilon` dominates the
adjustment for small deviations between radar and gage while
:math:`\\delta` dominates in case of large deviations.
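Solving this constrained minimisation for each gage location yields the
closed-form estimates implemented in
:meth:`~wradlib.adjust.AdjustMixed.__call__`:

.. math::

    \\epsilon = \\frac{R_{gage} - R_{radar}}{R_{radar}^2 + 1}, \\qquad
    \\delta = \\frac{R_{gage} - \\epsilon}{R_{radar}} - 1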
**Usage**:
First, an instance of AdjustMixed has to be created. Calling this instance
then does the actual adjustment. The motivation behind this is performance.
In case the observation points are always the same for different time
steps, the computation of neighbours and inverse distance weights only
needs to be performed once during initialisation.
AdjustMixed automatically takes care of invalid gage or radar observations
(e.g. NaN, Inf or other typical missing data flags such as -9999).
However, in case e.g. the observation data contain missing values, the
computation of the inverse distance weights needs to be repeated in
:func:`~wradlib.adjust.AdjustMixed.__call__` which is at the expense of
performance.
Note
----
Inherits from :class:`wradlib.adjust.AdjustBase`
For a complete overview of parameters for the initialisation of adjustment
objects, as well as an extensive example, please see
:class:`wradlib.adjust.AdjustBase`.
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
array of adjusted radar values
"""
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Calling an adjustment object works the same for all adjustment classes.
Detailed instructions on the parameters ``obs`` and ``raw`` are
provided in :meth:`wradlib.adjust.AdjustBase.__call__`.
"""
# ----------------GENERIC PART FOR MOST __call__ methods---------------
if (ix is None) or (rawatobs is None):
# Check for valid observation-radar pairs in case this method has
# not been called from self.xvalidate
rawatobs, ix = self._get_valid_pairs(obs, raw)
if len(ix) < self.mingages:
# Not enough valid gages for adjustment? - return unadjusted data
return raw
# Get new Interpolator instance if necessary
ip = self._checkip(ix, targets)
# -----------------THIS IS THE ACTUAL ADJUSTMENT APPROACH--------------
# computing epsilon and delta from least squares
epsilon = (obs[ix] - rawatobs[ix]) / (rawatobs[ix] ** 2 + 1.0)
delta = ((obs[ix] - epsilon) / rawatobs[ix]) - 1.0
# interpolate error fields
ipepsilon = ip(epsilon)
ipdelta = ip(delta)
# compute adjusted radar rainfall field
return (1.0 + ipdelta) * raw + ipepsilon
class AdjustMFB(AdjustBase):
"""Multiplicative gage adjustment using *one* correction factor for the \
entire domain.
This method is also known as the Mean Field Bias correction.
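A single correction factor is estimated from the ratios of gage and radar
values at the gage locations and then applied to the entire raw field. For the
``'mean'`` method of ``mfb_args``, this amounts to:

.. math::

    R_{adjusted} = \\overline{\\left(\\frac{R_{gage}}{R_{radar}}\\right)} \\cdot R_{radar}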
Note
----
Inherits from :class:`wradlib.adjust.AdjustBase`
For a complete overview of parameters for the initialisation of adjustment
objects, as well as an extensive example, please see
:class:`wradlib.adjust.AdjustBase`.
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
array of adjusted radar values
"""
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Calling an adjustment object works the same for all adjustment classes.
Detailed instructions on the parameters ``obs`` and ``raw`` are
provided in :meth:`wradlib.adjust.AdjustBase.__call__`.
"""
# ----------------GENERIC PART FOR MOST __call__ methods---------------
if (ix is None) or (rawatobs is None):
# Check for valid observation-radar pairs in case this method has
# not been called from self.xvalidate
rawatobs, ix = self._get_valid_pairs(obs, raw)
if len(ix) < self.mingages:
# Not enough valid gages for adjustment? - return unadjusted data
return raw
# # Get new Interpolator instance if necessary
# ip = self._checkip(ix, targets)
# -----------------THIS IS THE ACTUAL ADJUSTMENT APPROACH--------------
# compute ratios for each valid observation point
ratios = np.ma.masked_invalid(obs[ix] / rawatobs.ravel()[ix])
if len(np.where(np.logical_not(ratios.mask))[0]) < self.mingages:
# Not enough valid pairs of raw and obs
return raw
if self.mfb_args["method"] == "mean":
corrfact = np.mean(ratios)
elif self.mfb_args["method"] == "median":
corrfact = np.median(ratios)
elif self.mfb_args["method"] == "linregr":
corrfact = 1.0
ix_ = np.where(np.logical_not(ratios.mask))[0]
x = obs[ix][ix_]
y = rawatobs[ix][ix_]
# check whether we should adjust or not
try:
slope, intercept, r, p, stderr = stats.linregress(x, y)
except Exception:
slope, r, p = 0, 0, np.inf
if (
(slope > self.mfb_args["minslope"])
and (r > self.mfb_args["minr"])
and (p < self.mfb_args["maxp"])
):
x = x[:, np.newaxis]
try:
slope, _, _, _ = np.linalg.lstsq(x, y)
if not slope[0] == 0:
corrfact = 1.0 / slope[0]
except Exception:
# no correction if linear regression fails
pass
if type(corrfact) == np.ma.core.MaskedConstant:
corrfact = 1.0
return corrfact * raw
class AdjustNone(AdjustBase):
"""Same behaviour as the other adjustment classes, but simply returns the \
unadjusted data.
This class can be used for benchmark verification experiments as a control
for unadjusted data.
Note
----
Inherits from :class:`wradlib.adjust.AdjustBase`
For a complete overview of parameters for the initialisation of adjustment
objects, as well as an extensive example, please see
:class:`wradlib.adjust.AdjustBase`.
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
array of unadjusted radar values
"""
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Calling an adjustment object works the same for all adjustment classes.
Detailed instructions on the parameters ``obs`` and ``raw`` are
provided in :meth:`wradlib.adjust.AdjustBase.__call__`.
"""
# ----------------GENERIC PART FOR MOST __call__ methods---------------
if (ix is None) or (rawatobs is None):
# Check for valid observation-radar pairs in case this method has
# not been called from self.xvalidate
rawatobs, ix = self._get_valid_pairs(obs, raw)
if len(ix) < self.mingages:
# Not enough valid gages for adjustment? - return unadjusted data
return raw
return raw
class GageOnly(AdjustBase):
"""Same behaviour as the other adjustment classes, but returns an \
interpolation of rain gage observations
First, an instance of GageOnly has to be created. Calling this instance
then does the actual adjustment. The motivation behind this is performance.
In case the observation points are always the same for different time
steps, the computation of neighbours and inverse distance weights only
needs to be performed once during initialisation.
GageOnly automatically takes care of invalid gage or radar observations
(e.g. NaN, Inf or other typical missing data flags such as -9999).
However, in case e.g. the observation data contain missing values, the
computation of the inverse distance weights needs to be repeated in
:meth:`~wradlib.adjust.GageOnly.__call__` which is at the expense of
performance.
Note
----
Inherits from :class:`wradlib.adjust.AdjustBase`
For a complete overview of parameters for the initialisation of adjustment
objects, as well as an extensive example, please see
:class:`wradlib.adjust.AdjustBase`.
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
array of adjusted radar values
"""
def __call__(self, obs, raw, targets=None, rawatobs=None, ix=None):
"""Returns an array of ``raw`` values that are adjusted by ``obs``.
Calling an adjustment object works the same for all adjustment classes.
Detailed instructions on the parameters ``obs`` and ``raw`` are
provided in :meth:`wradlib.adjust.AdjustBase.__call__`.
"""
# ----------------GENERIC PART FOR MOST __call__ methods---------------
if (ix is None) or (rawatobs is None):
# Check for valid observation-radar pairs in case this method has
# not been called from self.xvalidate
rawatobs, ix = self._get_valid_pairs(obs, raw)
if len(ix) < self.mingages:
# Not enough valid gages for adjustment? - return unadjusted data
return raw
# Get new Interpolator instance if necessary
ip = self._checkip(ix, targets)
# -----------------THIS IS THE ACTUAL ADJUSTMENT APPROACH--------------
# interpolate gage observations
return ip(obs[ix])
class RawAtObs:
"""Get the raw values in the neighbourhood of the observation points
Parameters
----------
obs_coords : :py:class:`numpy:numpy.ndarray`
array of float
coordinate pairs of observation points
raw_coords : :py:class:`numpy:numpy.ndarray`
array of float
coordinate pairs of raw (unadjusted) field
nnear: int
number of neighbours which should be considered in the vicinity of each
point in obs
stat: str
function name
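Examples
--------
A sketch, assuming hypothetical coordinate arrays and flat ``raw`` and
``obs`` data arrays::

    raw_at_gages = RawAtObs(obs_coords, raw_coords, nnear=9, stat="median")
    rawatobs = raw_at_gages(raw, obs)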
"""
def __init__(self, obs_coords, raw_coords, nnear=9, stat="median"):
self.statfunc = _get_statfunc(stat)
self.raw_ix = _get_neighbours_ix(obs_coords, raw_coords, nnear)
def __call__(self, raw, obs=None):
"""
Returns the values of raw at the observation locations
Parameters
----------
raw : :py:class:`numpy:numpy.ndarray`
array of float
raw values
obs : :py:class:`numpy:numpy.ndarray`, optional
array of float
gage observations (only used if ``stat='best'``)
"""
# get the values of the raw neighbours of obs
raw_neighbs = raw[self.raw_ix]
# and summarize the values of these neighbours
# by using a statistics option
# (only needed in case nnear > 1, i.e. multiple neighbours
# per observation location)
if raw_neighbs.ndim > 1:
return self.statfunc(obs, raw_neighbs)
else:
return raw_neighbs
def _get_neighbours_ix(obs_coords, raw_coords, nnear):
"""Returns ``nnear`` neighbour indices per ``obs_coords`` coordinate pair
Parameters
----------
obs_coords : :py:class:`numpy:numpy.ndarray`
array of float of shape (num_points,ndim)
in the neighbourhood of these coordinate pairs we look for neighbours
raw_coords : :py:class:`numpy:numpy.ndarray`
array of float of shape (num_points,ndim)
from these coordinate pairs the neighbours are selected
nnear : int
number of neighbours to be selected per coordinate pair of
``obs_coords``
"""
# plant a tree
tree = spatial.cKDTree(raw_coords)
# return nearest neighbour indices
return tree.query(obs_coords, k=nnear)[1]
def _get_statfunc(funcname):
"""Returns a function that corresponds to parameter ``funcname``
Parameters
----------
funcname : str
a name of a numpy function OR another option known by _get_statfunc
Potential options: 'mean', 'median', 'best'
"""
try:
# first try to find a numpy function which corresponds to <funcname>
func = getattr(np, funcname)
def newfunc(x, y):
return func(y, axis=1)
except Exception:
# then try to find a function in this module with name funcname
if funcname == "best":
newfunc = best
else:
# if no function can be found, raise an Exception
raise NameError("Unknown function name option: " + funcname)
return newfunc
def best(x, y):
"""Find the values of y which corresponds best to x
If x is an array, the comparison is carried out for each element of x
Parameters
----------
x : float | :py:class:`numpy:numpy.ndarray`
float or 1-d array of float
y : :py:class:`numpy:numpy.ndarray`
array of float
Returns
-------
output : :py:class:`numpy:numpy.ndarray`
1-d array of float with length len(y)
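Example (hypothetical values)::

    best(np.array([2.2, 0.9]), np.array([[1., 2., 3.], [1., 2., 3.]]))
    # -> array([2., 1.]), i.e. per row of y the value closest to the
    #    corresponding element of x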
"""
if type(x) == np.ndarray:
assert x.ndim == 1, "x must be a 1-d array of floats or a float."
assert len(x) == len(y), "Length of x and y must be equal."
if type(y) == np.ndarray:
assert y.ndim <= 2, "y must be 1-d or 2-d array of floats."
else:
raise ValueError("y must be 1-d or 2-d array of floats.")
x = np.array(x).reshape((-1, 1))
if y.ndim == 1:
y = np.array(y).reshape((1, -1))
axis = None
else:
axis = 1
return y[np.arange(len(y)), np.argmin(np.abs(x - y), axis=axis)]
if __name__ == "__main__":
print("wradlib: Calling module <adjust> as main...")
{
"content_hash": "eaec60bdd6649f24ddbad166e83c9828",
"timestamp": "",
"source": "github",
"line_count": 892,
"max_line_length": 88,
"avg_line_length": 39.539237668161434,
"alnum_prop": 0.6403640590887182,
"repo_name": "kmuehlbauer/wradlib",
"id": "06f3311ef480e83d2efd61cca013184239148fe5",
"size": "35407",
"binary": false,
"copies": "3",
"ref": "refs/heads/main",
"path": "wradlib/adjust.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Fortran",
"bytes": "2191"
},
{
"name": "Python",
"bytes": "1277715"
},
{
"name": "Shell",
"bytes": "1209"
}
],
"symlink_target": ""
}
"""
FCKeditor - The text editor for internet
Copyright (C) 2003-2006 Frederico Caldeira Knabben
Licensed under the terms of the GNU Lesser General Public License:
http://www.opensource.org/licenses/lgpl-license.php
For further information visit:
http://www.fckeditor.net/
"Support Open Source software. What about a donation today?"
File Name: connector.py
Connector for Python.
Tested With:
Standard:
Python 2.3.3
Zope:
Zope Version: (Zope 2.8.1-final, python 2.3.5, linux2)
Python Version: 2.3.5 (#4, Mar 10 2005, 01:40:25)
[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)]
System Platform: linux2
File Authors:
Andrew Liu ([email protected])
"""
"""
Author Notes (04 December 2005):
This module has gone through quite a few phases of change. Obviously,
I am only supporting that part of the code that I use. Initially
I had the upload directory as a part of zope (ie. uploading files
directly into Zope), before realising that there were too many
complex intricacies within Zope to deal with. Zope is one ugly piece
of code. So I decided to complement Zope by an Apache server (which
I had running anyway, and doing nothing). So I mapped all uploads
from an arbitrary server directory to an arbitrary web directory.
All the FCKeditor uploading occurred this way, and I didn't have to
stuff around with fiddling with Zope objects and the like (which are
terribly complex and something you don't want to do - trust me).
Maybe a Zope expert can touch up the Zope components. In the end,
I had FCKeditor loaded in Zope (probably a bad idea as well), and
I replaced the connector.py with an alias to a server module.
Right now, all Zope components will simply remain as is because
I've had enough of Zope.
See notes right at the end of this file for how I aliased out of Zope.
Anyway, most of you probably won't use Zope, so things are pretty
simple in that regard.
Typically, SERVER_DIR is the root of WEB_DIR (not necessarily).
Most definitely, SERVER_USERFILES_DIR points to WEB_USERFILES_DIR.
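As an illustration only (the host name is hypothetical; compare the class
attributes of FCKeditorConnector below):

    SERVER_DIR              = "/var/www/html/"
    WEB_HOST                = "http://www.example.com/"
    SERVER_USERFILES_FOLDER = SERVER_DIR + "upload/"  -> "/var/www/html/upload/"
    WEB_USERFILES_FOLDER    = WEB_HOST + "upload/"    -> "http://www.example.com/upload/"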
"""
import cgi
import re
import os
import string
"""
escape
Converts the special characters '<', '>', and '&'.
RFC 1866 specifies that these characters be represented
in HTML as < > and & respectively. In Python
1.5 we use the new string.replace() function for speed.
"""
def escape(text, replace=string.replace):
text = replace(text, '&', '&amp;') # must be done 1st
text = replace(text, '<', '&lt;')
text = replace(text, '>', '&gt;')
text = replace(text, '"', '&quot;')
return text
"""
getFCKeditorConnector
Creates a new instance of an FCKeditorConnector, and runs it
"""
def getFCKeditorConnector(context=None):
# Called from Zope. Passes the context through
connector = FCKeditorConnector(context=context)
return connector.run()
"""
FCKeditorRequest
A wrapper around the request object
Can handle normal CGI request, or a Zope request
Extend as required
"""
class FCKeditorRequest(object):
def __init__(self, context=None):
if (context is not None):
r = context.REQUEST
else:
r = cgi.FieldStorage()
self.context = context
self.request = r
def isZope(self):
if (self.context is not None):
return True
return False
def has_key(self, key):
return self.request.has_key(key)
def get(self, key, default=None):
value = None
if (self.isZope()):
value = self.request.get(key, default)
else:
if key in self.request.keys():
value = self.request[key].value
else:
value = default
return value
"""
FCKeditorConnector
The connector class
"""
class FCKeditorConnector(object):
# Configuration for FCKEditor
# can point to another server here, if linked correctly
#WEB_HOST = "http://127.0.0.1/"
WEB_HOST = ""
SERVER_DIR = "/var/www/html/"
WEB_USERFILES_FOLDER = WEB_HOST + "upload/"
SERVER_USERFILES_FOLDER = SERVER_DIR + "upload/"
# Allow access (Zope)
__allow_access_to_unprotected_subobjects__ = 1
# Class Attributes
parentFolderRe = re.compile("[\/][^\/]+[\/]?$")
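# Applied with .sub("", path), this strips the last path component,
# e.g. "/upload/Image/" -> "/upload" (hypothetical path; see getParentFolder)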
"""
Constructor
"""
def __init__(self, context=None):
# The given root path will NOT be shown to the user
# Only the userFilesPath will be shown
# Instance Attributes
self.context = context
self.request = FCKeditorRequest(context=context)
self.rootPath = self.SERVER_DIR
self.userFilesFolder = self.SERVER_USERFILES_FOLDER
self.webUserFilesFolder = self.WEB_USERFILES_FOLDER
# Enables / Disables the connector
self.enabled = False # Set to True to enable this connector
# These are instance variables
self.zopeRootContext = None
self.zopeUploadContext = None
# Copied from php module =)
self.allowedExtensions = {
"File": None,
"Image": None,
"Flash": None,
"Media": None
}
self.deniedExtensions = {
"File": [ "php","php2","php3","php4","php5","phtml","pwml","inc","asp","aspx","ascx","jsp","cfm","cfc","pl","bat","exe","com","dll","vbs","js","reg","cgi","htaccess" ],
"Image": [ "php","php2","php3","php4","php5","phtml","pwml","inc","asp","aspx","ascx","jsp","cfm","cfc","pl","bat","exe","com","dll","vbs","js","reg","cgi","htaccess" ],
"Flash": [ "php","php2","php3","php4","php5","phtml","pwml","inc","asp","aspx","ascx","jsp","cfm","cfc","pl","bat","exe","com","dll","vbs","js","reg","cgi","htaccess" ],
"Media": [ "php","php2","php3","php4","php5","phtml","pwml","inc","asp","aspx","ascx","jsp","cfm","cfc","pl","bat","exe","com","dll","vbs","js","reg","cgi","htaccess" ]
}
"""
Zope specific functions
"""
def isZope(self):
# The context object is the zope object
if (self.context is not None):
return True
return False
def getZopeRootContext(self):
if self.zopeRootContext is None:
self.zopeRootContext = self.context.getPhysicalRoot()
return self.zopeRootContext
def getZopeUploadContext(self):
if self.zopeUploadContext is None:
folderNames = self.userFilesFolder.split("/")
c = self.getZopeRootContext()
for folderName in folderNames:
if (folderName <> ""):
c = c[folderName]
self.zopeUploadContext = c
return self.zopeUploadContext
"""
Generic manipulation functions
"""
def getUserFilesFolder(self):
return self.userFilesFolder
def getWebUserFilesFolder(self):
return self.webUserFilesFolder
def getAllowedExtensions(self, resourceType):
return self.allowedExtensions[resourceType]
def getDeniedExtensions(self, resourceType):
return self.deniedExtensions[resourceType]
def removeFromStart(self, string, char):
return string.lstrip(char)
def removeFromEnd(self, string, char):
return string.rstrip(char)
def convertToXmlAttribute(self, value):
if (value is None):
value = ""
return escape(value)
def convertToPath(self, path):
if (path[-1] <> "/"):
return path + "/"
else:
return path
def getUrlFromPath(self, resourceType, path):
if (resourceType is None) or (resourceType == ''):
url = "%s%s" % (
self.removeFromEnd(self.getUserFilesFolder(), '/'),
path
)
else:
url = "%s%s%s" % (
self.getUserFilesFolder(),
resourceType,
path
)
return url
def getWebUrlFromPath(self, resourceType, path):
if (resourceType is None) or (resourceType == ''):
url = "%s%s" % (
self.removeFromEnd(self.getWebUserFilesFolder(), '/'),
path
)
else:
url = "%s%s%s" % (
self.getWebUserFilesFolder(),
resourceType,
path
)
return url
def removeExtension(self, fileName):
index = fileName.rindex(".")
newFileName = fileName[0:index]
return newFileName
def getExtension(self, fileName):
index = fileName.rindex(".") + 1
fileExtension = fileName[index:]
return fileExtension
def getParentFolder(self, folderPath):
parentFolderPath = self.parentFolderRe.sub('', folderPath)
return parentFolderPath
"""
serverMapFolder
Purpose: works out the folder map on the server
"""
def serverMapFolder(self, resourceType, folderPath):
# Get the resource type directory
resourceTypeFolder = "%s%s/" % (
self.getUserFilesFolder(),
resourceType
)
# Ensure that the directory exists
self.createServerFolder(resourceTypeFolder)
# Return the resource type directory combined with the
# required path
return "%s%s" % (
resourceTypeFolder,
self.removeFromStart(folderPath, '/')
)
"""
createServerFolder
Purpose: physically creates a folder on the server
"""
def createServerFolder(self, folderPath):
# Check if the parent exists
parentFolderPath = self.getParentFolder(folderPath)
if not(os.path.exists(parentFolderPath)):
errorMsg = self.createServerFolder(parentFolderPath)
if errorMsg is not None:
return errorMsg
# Check if this exists
if not(os.path.exists(folderPath)):
os.mkdir(folderPath)
os.chmod(folderPath, 0755)
errorMsg = None
else:
if os.path.isdir(folderPath):
errorMsg = None
else:
raise "createServerFolder: Non-folder of same name already exists"
return errorMsg
"""
getRootPath
Purpose: returns the root path on the server
"""
def getRootPath(self):
return self.rootPath
"""
setXmlHeaders
Purpose: to prepare the headers for the xml to return
"""
def setXmlHeaders(self):
#now = self.context.BS_get_now()
#yesterday = now - 1
self.setHeader("Content-Type", "text/xml")
#self.setHeader("Expires", yesterday)
#self.setHeader("Last-Modified", now)
#self.setHeader("Cache-Control", "no-store, no-cache, must-revalidate")
self.printHeaders()
return
def setHeader(self, key, value):
if (self.isZope()):
self.context.REQUEST.RESPONSE.setHeader(key, value)
else:
print "%s: %s" % (key, value)
return
def printHeaders(self):
# For non-Zope requests, we need to print an empty line
# to denote the end of headers
if (not(self.isZope())):
print ""
"""
createXmlHeader
Purpose: returns the xml header
"""
def createXmlHeader(self, command, resourceType, currentFolder):
self.setXmlHeaders()
s = ""
# Create the XML document header
s += """<?xml version="1.0" encoding="utf-8" ?>"""
# Create the main connector node
s += """<Connector command="%s" resourceType="%s">""" % (
command,
resourceType
)
# Add the current folder node
s += """<CurrentFolder path="%s" url="%s" />""" % (
self.convertToXmlAttribute(currentFolder),
self.convertToXmlAttribute(
self.getWebUrlFromPath(
resourceType,
currentFolder
)
),
)
return s
"""
createXmlFooter
Purpose: returns the xml footer
"""
def createXmlFooter(self):
s = """</Connector>"""
return s
"""
sendError
Purpose: in the event of an error, return an xml based error
"""
def sendError(self, number, text):
self.setXmlHeaders()
s = ""
# Create the XML document header
s += """<?xml version="1.0" encoding="utf-8" ?>"""
s += """<Connector>"""
s += """<Error number="%s" text="%s" />""" % (number, text)
s += """</Connector>"""
return s
"""
getFolders
Purpose: command to receive a list of folders
"""
def getFolders(self, resourceType, currentFolder):
if (self.isZope()):
return self.getZopeFolders(resourceType, currentFolder)
else:
return self.getNonZopeFolders(resourceType, currentFolder)
def getZopeFolders(self, resourceType, currentFolder):
# Open the folders node
s = ""
s += """<Folders>"""
zopeFolder = self.findZopeFolder(resourceType, currentFolder)
for (name, o) in zopeFolder.objectItems(["Folder"]):
s += """<Folder name="%s" />""" % (
self.convertToXmlAttribute(name)
)
# Close the folders node
s += """</Folders>"""
return s
def getNonZopeFolders(self, resourceType, currentFolder):
# Map the virtual path to our local server
serverPath = self.serverMapFolder(resourceType, currentFolder)
# Open the folders node
s = ""
s += """<Folders>"""
for someObject in os.listdir(serverPath):
someObjectPath = os.path.join(serverPath, someObject)
if os.path.isdir(someObjectPath):
s += """<Folder name="%s" />""" % (
self.convertToXmlAttribute(someObject)
)
# Close the folders node
s += """</Folders>"""
return s
"""
getFoldersAndFiles
Purpose: command to receive a list of folders and files
"""
def getFoldersAndFiles(self, resourceType, currentFolder):
if (self.isZope()):
return self.getZopeFoldersAndFiles(resourceType, currentFolder)
else:
return self.getNonZopeFoldersAndFiles(resourceType, currentFolder)
def getNonZopeFoldersAndFiles(self, resourceType, currentFolder):
# Map the virtual path to our local server
serverPath = self.serverMapFolder(resourceType, currentFolder)
# Open the folders / files node
folders = """<Folders>"""
files = """<Files>"""
for someObject in os.listdir(serverPath):
someObjectPath = os.path.join(serverPath, someObject)
if os.path.isdir(someObjectPath):
folders += """<Folder name="%s" />""" % (
self.convertToXmlAttribute(someObject)
)
elif os.path.isfile(someObjectPath):
size = os.path.getsize(someObjectPath)
files += """<File name="%s" size="%s" />""" % (
self.convertToXmlAttribute(someObject),
os.path.getsize(someObjectPath)
)
# Close the folders / files node
folders += """</Folders>"""
files += """</Files>"""
# Return it
s = folders + files
return s
def getZopeFoldersAndFiles(self, resourceType, currentFolder):
folders = self.getZopeFolders(resourceType, currentFolder)
files = self.getZopeFiles(resourceType, currentFolder)
s = folders + files
return s
def getZopeFiles(self, resourceType, currentFolder):
# Open the files node
s = ""
s += """<Files>"""
zopeFolder = self.findZopeFolder(resourceType, currentFolder)
for (name, o) in zopeFolder.objectItems(["File","Image"]):
s += """<File name="%s" size="%s" />""" % (
self.convertToXmlAttribute(name),
((o.get_size() / 1024) + 1)
)
# Close the files node
s += """</Files>"""
return s
def findZopeFolder(self, resourceType, folderName):
# returns the context of the resource / folder
zopeFolder = self.getZopeUploadContext()
folderName = self.removeFromStart(folderName, "/")
folderName = self.removeFromEnd(folderName, "/")
if (resourceType <> ""):
try:
zopeFolder = zopeFolder[resourceType]
except:
zopeFolder.manage_addProduct["OFSP"].manage_addFolder(id=resourceType, title=resourceType)
zopeFolder = zopeFolder[resourceType]
if (folderName <> ""):
folderNames = folderName.split("/")
for folderName in folderNames:
zopeFolder = zopeFolder[folderName]
return zopeFolder
"""
createFolder
Purpose: command to create a new folder
"""
def createFolder(self, resourceType, currentFolder):
if (self.isZope()):
return self.createZopeFolder(resourceType, currentFolder)
else:
return self.createNonZopeFolder(resourceType, currentFolder)
def createZopeFolder(self, resourceType, currentFolder):
# Find out where we are
zopeFolder = self.findZopeFolder(resourceType, currentFolder)
errorNo = 0
errorMsg = ""
if self.request.has_key("NewFolderName"):
newFolder = self.request.get("NewFolderName", None)
zopeFolder.manage_addProduct["OFSP"].manage_addFolder(id=newFolder, title=newFolder)
else:
errorNo = 102
error = """<Error number="%s" originalDescription="%s" />""" % (
errorNo,
self.convertToXmlAttribute(errorMsg)
)
return error
def createNonZopeFolder(self, resourceType, currentFolder):
errorNo = 0
errorMsg = ""
if self.request.has_key("NewFolderName"):
newFolder = self.request.get("NewFolderName", None)
currentFolderPath = self.serverMapFolder(
resourceType,
currentFolder
)
try:
newFolderPath = currentFolderPath + newFolder
errorMsg = self.createServerFolder(newFolderPath)
if (errorMsg is not None):
errorNo = 110
except:
errorNo = 103
else:
errorNo = 102
error = """<Error number="%s" originalDescription="%s" />""" % (
errorNo,
self.convertToXmlAttribute(errorMsg)
)
return error
"""
getFileName
Purpose: helper function to extrapolate the filename
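For example, both "C:\\some\\folder\\photo.jpg" and "/some/folder/photo.jpg"
(hypothetical names) yield "photo.jpg".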
"""
def getFileName(self, filename):
for splitChar in ["/", "\\"]:
array = filename.split(splitChar)
if (len(array) > 1):
filename = array[-1]
return filename
"""
fileUpload
Purpose: command to upload files to server
"""
def fileUpload(self, resourceType, currentFolder):
if (self.isZope()):
return self.zopeFileUpload(resourceType, currentFolder)
else:
return self.nonZopeFileUpload(resourceType, currentFolder)
def zopeFileUpload(self, resourceType, currentFolder, count=None):
zopeFolder = self.findZopeFolder(resourceType, currentFolder)
file = self.request.get("NewFile", None)
fileName = self.getFileName(file.filename)
fileNameOnly = self.removeExtension(fileName)
fileExtension = self.getExtension(fileName).lower()
if (count):
nid = "%s.%s.%s" % (fileNameOnly, count, fileExtension)
else:
nid = fileName
title = nid
try:
zopeFolder.manage_addProduct['OFSP'].manage_addFile(
id=nid,
title=title,
file=file.read()
)
except:
if (count):
count += 1
else:
count = 1
self.zopeFileUpload(resourceType, currentFolder, count)
return
def nonZopeFileUpload(self, resourceType, currentFolder):
errorNo = 0
errorMsg = ""
if self.request.has_key("NewFile"):
# newFile has all the contents we need
newFile = self.request.get("NewFile", "")
# Get the file name
newFileName = newFile.filename
newFileNameOnly = self.removeExtension(newFileName)
newFileExtension = self.getExtension(newFileName).lower()
allowedExtensions = self.getAllowedExtensions(resourceType)
deniedExtensions = self.getDeniedExtensions(resourceType)
if (allowedExtensions is not None):
# Check for allowed
isAllowed = False
if (newFileExtension in allowedExtensions):
isAllowed = True
elif (deniedExtensions is not None):
# Check for denied
isAllowed = True
if (newFileExtension in deniedExtensions):
isAllowed = False
else:
# No extension limitations
isAllowed = True
if (isAllowed):
if (self.isZope()):
# Upload into zope
self.zopeFileUpload(resourceType, currentFolder)
else:
# Upload to operating system
# Map the virtual path to the local server path
currentFolderPath = self.serverMapFolder(
resourceType,
currentFolder
)
i = 0
while (True):
newFilePath = "%s%s" % (
currentFolderPath,
newFileName
)
if os.path.exists(newFilePath):
i += 1
newFilePath = "%s%s(%s).%s" % (
currentFolderPath,
newFileNameOnly,
i,
newFileExtension
)
errorNo = 201
break
else:
fileHandle = open(newFilePath,'w')
linecount = 0
while (1):
#line = newFile.file.readline()
line = newFile.readline()
if not line: break
fileHandle.write("%s" % line)
linecount += 1
os.chmod(newFilePath, 0777)
break
else:
newFileName = "Extension not allowed"
errorNo = 203
else:
newFileName = "No File"
errorNo = 202
string = """
<script type="text/javascript">
window.parent.frames["frmUpload"].OnUploadCompleted(%s,"%s");
</script>
""" % (
errorNo,
newFileName.replace('"',"'")
)
return string
def run(self):
s = ""
try:
# Check if this is disabled
if not(self.enabled):
return self.sendError(1, "This connector is disabled. Please check the connector configurations and try again")
# Make sure we have valid inputs
if not(
(self.request.has_key("Command")) and
(self.request.has_key("Type")) and
(self.request.has_key("CurrentFolder"))
):
return
# Get command
command = self.request.get("Command", None)
# Get resource type
resourceType = self.request.get("Type", None)
# folder syntax must start and end with "/"
currentFolder = self.request.get("CurrentFolder", None)
if (currentFolder[-1] <> "/"):
currentFolder += "/"
if (currentFolder[0] <> "/"):
currentFolder = "/" + currentFolder
# Check for invalid paths
if (".." in currentFolder):
return self.sendError(102, "")
# File upload doesn't have to return XML, so intercept
# here
if (command == "FileUpload"):
return self.fileUpload(resourceType, currentFolder)
# Begin XML
s += self.createXmlHeader(command, resourceType, currentFolder)
# Execute the command
if (command == "GetFolders"):
f = self.getFolders
elif (command == "GetFoldersAndFiles"):
f = self.getFoldersAndFiles
elif (command == "CreateFolder"):
f = self.createFolder
else:
f = None
if (f is not None):
s += f(resourceType, currentFolder)
s += self.createXmlFooter()
except Exception, e:
s = "ERROR: %s" % e
return s
# Running from command line
if __name__ == '__main__':
# To test the output, uncomment the standard headers
#print "Content-Type: text/html"
#print ""
print getFCKeditorConnector()
"""
Running from zope, you will need to modify this connector.
If you have uploaded the FCKeditor into Zope (like me), you need to
move this connector out of Zope, and replace the "connector" with an
alias as below. The key to it is to pass the Zope context in, as
we then have a link to the Zope context.
## Script (Python) "connector.py"
##bind container=container
##bind context=context
##bind namespace=
##bind script=script
##bind subpath=traverse_subpath
##parameters=*args, **kws
##title=ALIAS
##
import Products.connector as connector
return connector.getFCKeditorConnector(context=context).run()
"""
{
"content_hash": "0fa971b6e1f0c84d1bcc2b604b3801cf",
"timestamp": "",
"source": "github",
"line_count": 780,
"max_line_length": 173,
"avg_line_length": 28.82948717948718,
"alnum_prop": 0.6552674878818873,
"repo_name": "sknlim/classified-2",
"id": "67a107c59ad26de8605f4cce7663f0b506193e58",
"size": "22510",
"binary": false,
"copies": "6",
"ref": "refs/heads/master",
"path": "admin/FCKeditor/editor/filemanager/browser/default/connectors/py/connector.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "ASP",
"bytes": "57659"
},
{
"name": "CSS",
"bytes": "62164"
},
{
"name": "ColdFusion",
"bytes": "47523"
},
{
"name": "HTML",
"bytes": "536599"
},
{
"name": "JavaScript",
"bytes": "1525992"
},
{
"name": "Lasso",
"bytes": "35556"
},
{
"name": "PHP",
"bytes": "847275"
},
{
"name": "Perl",
"bytes": "58984"
},
{
"name": "Python",
"bytes": "30250"
}
],
"symlink_target": ""
}
from django import template
from django.core.cache import cache
from django.http import HttpRequest
from django.test import TestCase
from django.urls.exceptions import NoReverseMatch
from django.utils.safestring import SafeText
from wagtail.core.models import Page, Site
from wagtail.core.templatetags.wagtailcore_tags import richtext, slugurl
from wagtail.core.utils import resolve_model_string
from wagtail.tests.testapp.models import SimplePage
class TestPageUrlTags(TestCase):
fixtures = ['test.json']
def test_pageurl_tag(self):
response = self.client.get('/events/')
self.assertEqual(response.status_code, 200)
self.assertContains(response,
'<a href="/events/christmas/">Christmas</a>')
def test_pageurl_fallback(self):
tpl = template.Template('''{% load wagtailcore_tags %}<a href="{% pageurl page fallback='fallback' %}">Fallback</a>''')
result = tpl.render(template.Context({'page': None}))
self.assertIn('<a href="/fallback/">Fallback</a>', result)
def test_pageurl_fallback_without_valid_fallback(self):
tpl = template.Template('''{% load wagtailcore_tags %}<a href="{% pageurl page fallback='not-existing-endpoint' %}">Fallback</a>''')
with self.assertRaises(NoReverseMatch):
tpl.render(template.Context({'page': None}))
def test_slugurl_tag(self):
response = self.client.get('/events/christmas/')
self.assertEqual(response.status_code, 200)
self.assertContains(response,
'<a href="/events/">Back to events index</a>')
def test_pageurl_without_request_in_context(self):
page = Page.objects.get(url_path='/home/events/')
tpl = template.Template('''{% load wagtailcore_tags %}<a href="{% pageurl page %}">{{ page.title }}</a>''')
# no 'request' object in context
result = tpl.render(template.Context({'page': page}))
self.assertIn('<a href="/events/">Events</a>', result)
# 'request' object in context, but no 'site' attribute
result = tpl.render(template.Context({'page': page, 'request': HttpRequest()}))
self.assertIn('<a href="/events/">Events</a>', result)
def test_bad_pageurl(self):
tpl = template.Template('''{% load wagtailcore_tags %}<a href="{% pageurl page %}">{{ page.title }}</a>''')
with self.assertRaisesRegex(ValueError, "pageurl tag expected a Page object, got None"):
tpl.render(template.Context({'page': None}))
def test_bad_slugurl(self):
# no 'request' object in context
result = slugurl(template.Context({}), 'bad-slug-doesnt-exist')
self.assertEqual(result, None)
# 'request' object in context, but no 'site' attribute
result = slugurl(context=template.Context({'request': HttpRequest()}), slug='bad-slug-doesnt-exist')
self.assertEqual(result, None)
def test_slugurl_tag_returns_url_for_current_site(self):
home_page = Page.objects.get(url_path='/home/')
new_home_page = home_page.copy(update_attrs={'title': "New home page", 'slug': 'new-home'})
second_site = Site.objects.create(hostname='site2.example.com', root_page=new_home_page)
# Add a page to the new site that has a slug that is the same as one on
# the first site, but is in a different position in the tree.
new_christmas_page = Page(title='Christmas', slug='christmas')
new_home_page.add_child(instance=new_christmas_page)
request = HttpRequest()
request.site = second_site
url = slugurl(context=template.Context({'request': request}), slug='christmas')
self.assertEqual(url, '/christmas/')
def test_slugurl_tag_returns_url_for_other_site(self):
home_page = Page.objects.get(url_path='/home/')
new_home_page = home_page.copy(update_attrs={'title': "New home page", 'slug': 'new-home'})
second_site = Site.objects.create(hostname='site2.example.com', root_page=new_home_page)
request = HttpRequest()
request.site = second_site
# There is no page with this slug on the current site, so this
# should return an absolute URL for the page on the first site.
url = slugurl(slug='christmas', context=template.Context({'request': request}))
self.assertEqual(url, 'http://localhost/events/christmas/')
def test_slugurl_without_request_in_context(self):
# no 'request' object in context
result = slugurl(template.Context({}), 'events')
self.assertEqual(result, '/events/')
# 'request' object in context, but no 'site' attribute
result = slugurl(template.Context({'request': HttpRequest()}), 'events')
self.assertEqual(result, '/events/')
class TestSiteRootPathsCache(TestCase):
fixtures = ['test.json']
def test_cache(self):
"""
This tests that the cache is populated when building URLs
"""
# Get homepage
homepage = Page.objects.get(url_path='/home/')
# Warm up the cache by getting the url
_ = homepage.url # noqa
# Check that the cache has been set correctly
self.assertEqual(cache.get('wagtail_site_root_paths'), [(1, '/home/', 'http://localhost')])
def test_cache_clears_when_site_saved(self):
"""
This tests that the cache is cleared whenever a site is saved
"""
# Get homepage
homepage = Page.objects.get(url_path='/home/')
# Warm up the cache by getting the url
_ = homepage.url # noqa
# Check that the cache has been set
self.assertTrue(cache.get('wagtail_site_root_paths'))
# Save the site
Site.objects.get(is_default_site=True).save()
# Check that the cache has been cleared
self.assertFalse(cache.get('wagtail_site_root_paths'))
def test_cache_clears_when_site_deleted(self):
"""
This tests that the cache is cleared whenever a site is deleted
"""
# Get homepage
homepage = Page.objects.get(url_path='/home/')
# Warm up the cache by getting the url
_ = homepage.url # noqa
# Check that the cache has been set
self.assertTrue(cache.get('wagtail_site_root_paths'))
# Delete the site
Site.objects.get(is_default_site=True).delete()
# Check that the cache has been cleared
self.assertFalse(cache.get('wagtail_site_root_paths'))
def test_cache_clears_when_site_root_moves(self):
"""
This tests for an issue where if a site root page was moved, all
the page urls in that site would change to None.
        The issue was caused by the 'wagtail_site_root_paths' cache
        variable not being cleared when a site root page was moved, which
        left all the child pages thinking that they were no longer in the
        site and returning None as their url.
Fix: d6cce69a397d08d5ee81a8cbc1977ab2c9db2682
Discussion: https://github.com/wagtail/wagtail/issues/7
"""
# Get homepage, root page and site
root_page = Page.objects.get(id=1)
homepage = Page.objects.get(url_path='/home/')
default_site = Site.objects.get(is_default_site=True)
# Create a new homepage under current homepage
new_homepage = SimplePage(title="New Homepage", slug="new-homepage", content="hello")
homepage.add_child(instance=new_homepage)
# Set new homepage as the site root page
default_site.root_page = new_homepage
default_site.save()
# Warm up the cache by getting the url
_ = homepage.url # noqa
# Move new homepage to root
new_homepage.move(root_page, pos='last-child')
# Get fresh instance of new_homepage
new_homepage = Page.objects.get(id=new_homepage.id)
# Check url
self.assertEqual(new_homepage.url, '/')
def test_cache_clears_when_site_root_slug_changes(self):
"""
        This tests for an issue where, if a site root page's slug was
        changed, all the page urls in that site would change to None.
        The issue was caused by the 'wagtail_site_root_paths' cache
        variable not being cleared when a site root page was changed,
        which left all the child pages thinking that they were no longer in
        the site and returning None as their url.
Fix: d6cce69a397d08d5ee81a8cbc1977ab2c9db2682
Discussion: https://github.com/wagtail/wagtail/issues/157
"""
# Get homepage
homepage = Page.objects.get(url_path='/home/')
# Warm up the cache by getting the url
_ = homepage.url # noqa
# Change homepage title and slug
homepage.title = "New home"
homepage.slug = "new-home"
homepage.save()
# Get fresh instance of homepage
homepage = Page.objects.get(id=homepage.id)
# Check url
self.assertEqual(homepage.url, '/')
class TestResolveModelString(TestCase):
def test_resolve_from_string(self):
model = resolve_model_string('wagtailcore.Page')
self.assertEqual(model, Page)
def test_resolve_from_string_with_default_app(self):
model = resolve_model_string('Page', default_app='wagtailcore')
self.assertEqual(model, Page)
def test_resolve_from_string_with_different_default_app(self):
model = resolve_model_string('wagtailcore.Page', default_app='wagtailadmin')
self.assertEqual(model, Page)
def test_resolve_from_class(self):
model = resolve_model_string(Page)
self.assertEqual(model, Page)
def test_resolve_from_string_invalid(self):
self.assertRaises(ValueError, resolve_model_string, 'wagtail.core.Page')
def test_resolve_from_string_with_incorrect_default_app(self):
self.assertRaises(LookupError, resolve_model_string, 'Page', default_app='wagtailadmin')
def test_resolve_from_string_with_unknown_model_string(self):
self.assertRaises(LookupError, resolve_model_string, 'wagtailadmin.Page')
def test_resolve_from_string_with_no_default_app(self):
self.assertRaises(ValueError, resolve_model_string, 'Page')
def test_resolve_from_class_that_isnt_a_model(self):
self.assertRaises(ValueError, resolve_model_string, object)
def test_resolve_from_bad_type(self):
self.assertRaises(ValueError, resolve_model_string, resolve_model_string)
def test_resolve_from_none(self):
self.assertRaises(ValueError, resolve_model_string, None)
class TestRichtextTag(TestCase):
def test_call_with_text(self):
result = richtext("Hello world!")
self.assertEqual(result, '<div class="rich-text">Hello world!</div>')
self.assertIsInstance(result, SafeText)
def test_call_with_none(self):
result = richtext(None)
self.assertEqual(result, '<div class="rich-text"></div>')
def test_call_with_invalid_value(self):
with self.assertRaisesRegex(TypeError, "'richtext' template filter received an invalid value"):
richtext(42)
def test_call_with_bytes(self):
with self.assertRaisesRegex(TypeError, "'richtext' template filter received an invalid value"):
richtext(b"Hello world!")
| {
"content_hash": "084346e9e557cfcc8f08036586f593f3",
"timestamp": "",
"source": "github",
"line_count": 282,
"max_line_length": 140,
"avg_line_length": 40.234042553191486,
"alnum_prop": 0.6502732240437158,
"repo_name": "nealtodd/wagtail",
"id": "20001ca12c1185ed4496994aed449b0a27198986",
"size": "11346",
"binary": false,
"copies": "3",
"ref": "refs/heads/master",
"path": "wagtail/core/tests/tests.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "190511"
},
{
"name": "Dockerfile",
"bytes": "703"
},
{
"name": "HTML",
"bytes": "371011"
},
{
"name": "JavaScript",
"bytes": "262163"
},
{
"name": "Makefile",
"bytes": "992"
},
{
"name": "Python",
"bytes": "3564287"
},
{
"name": "Shell",
"bytes": "8289"
}
],
"symlink_target": ""
} |
from django.conf.urls.defaults import *
from django.conf import settings
from django.contrib.auth.decorators import login_required
from bootstrapit.views import *
urlpatterns=patterns('',
url(r'^$', DesignView.as_view(), name='bootstrapit-index'),
url(r'^(?P<theme>[\d\w-]+)$', DesignEditView.as_view(), name='bootstrapit-edit'),
url(r'^(?P<theme>[\d\w-]+)/edit$',
login_required(EditorView.as_view()), name='bootstrapit-editor'),
url(r'^api/editor/$', login_required(EditorBackend.as_view()),
name='bootstrapit-editor-backend'),
url(r'^api/file/(?P<theme>[\d\w-]+)/(?P<filepath>.*)$',
login_required(FileBackend.as_view()),
name='bootstrapit-file-backend'),
#url(r'all$', HomeView.as_view(), name='bootstrapit-home'),
#url(r'^editor/recive$', EditorReciveAjax.as_view(), name='bootstrapit-editor-recive'),
#url(r'^v/(?P<slug>[\d\w-]+)$', Version.as_view(), name="bootstrapversion-home"),
#url(r'^v/(?P<slug>[\d\w-]+)/json$$', JsonVersion.as_view(), name="bootstrapversion-json"),
)
| {
"content_hash": "e48cf35e22b1a999f7a8739ee20bc073",
"timestamp": "",
"source": "github",
"line_count": 26,
"max_line_length": 100,
"avg_line_length": 41.65384615384615,
"alnum_prop": 0.6278855032317636,
"repo_name": "h3/bootstrapit",
"id": "faefa2cd0bdfcbcba958d5cc30737e38ce3cd541",
"size": "1083",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "bootstrapit/urls.py",
"mode": "33188",
"license": "bsd-2-clause",
"language": [
{
"name": "JavaScript",
"bytes": "600694"
},
{
"name": "Python",
"bytes": "26500"
}
],
"symlink_target": ""
} |
import re
import sys
import time
import tarfile
from six import BytesIO
from bs4 import BeautifulSoup
from dlkit.abstract_osid.repository import record_templates as abc_repository_records
from dlkit.abstract_osid.osid.errors import IllegalState, NotFound
from dlkit.primordium.id.primitives import Id
from dlkit.primordium.type.primitives import Type
from ...osid.base_records import TextsRecord, TextsFormRecord
from ..registry import COMPOSITION_GENUS_TYPES, COMPOSITION_RECORD_TYPES
from ..edx.utilities import EdXUtilitiesMixin, clean_str
EDX_COMPOSITION_RECORD_TYPE = Type(**COMPOSITION_RECORD_TYPES['edx-composition'])
class LoreRepositoryRecord(TextsRecord, abc_repository_records.RepositoryRecord):
"""basic LORE repository content"""
_implemented_record_type_identifiers = [
'lore-repo',
'texts'
]
@property
def id(self):
return self.my_osid_object.ident
@property
def name(self):
return self.my_osid_object.display_name
@property
def slug(self):
if sys.version_info >= (3, 6):
raised_exception = ModuleNotFoundError
else:
raised_exception = ImportError
try:
from django.utils.text import slugify
return slugify(self.my_osid_object.display_name.text)
except raised_exception:
            name = re.sub(r'[^\w\s-]', '', self.my_osid_object.display_name.text.lower())
            return re.sub(r'[-\s]+', '-', name)
def _update_object_map(self, obj_map):
obj_map.update({
'slug': self.my_osid_object.slug
})
try:
super(LoreRepositoryRecord, self)._update_object_map(obj_map)
except AttributeError:
pass
class LoreRepositoryFormRecord(TextsFormRecord, abc_repository_records.RepositoryFormRecord):
"""basic LORE repository content"""
_implemented_record_type_identifiers = [
'lore-repo',
'texts'
]
def __init__(self, osid_object_form):
if osid_object_form is not None:
self.my_osid_object_form = osid_object_form
self._init_metadata()
if not osid_object_form.is_for_update():
self._init_map()
def _init_map(self):
TextsFormRecord._init_map(self)
def _init_metadata(self):
TextsFormRecord._init_metadata(self)
class LoreCourseRepositoryRecord(TextsRecord, abc_repository_records.RepositoryRecord):
"""basic LORE repository content"""
_implemented_record_type_identifiers = [
'course-repo',
'texts-item'
]
@property
def org(self):
return self.get_text('org')
class LoreCourseRepositoryFormRecord(TextsFormRecord, abc_repository_records.RepositoryFormRecord):
"""basic LORE repository content"""
_implemented_record_type_identifiers = [
'course-repo',
'texts-item'
]
def __init__(self, osid_object_form):
if osid_object_form is not None:
self.my_osid_object_form = osid_object_form
self._init_metadata()
if not osid_object_form.is_for_update():
self._init_map()
super(LoreCourseRepositoryFormRecord, self).__init__()
def _init_map(self):
TextsFormRecord._init_map(self)
super(LoreCourseRepositoryFormRecord, self)._init_map()
def _init_metadata(self):
TextsFormRecord._init_metadata(self)
super(LoreCourseRepositoryFormRecord, self)._init_metadata()
def set_org(self, org):
self.add_text(str(org), 'org')
def clear_org(self):
self.clear_text('org')
class LoreCourseRunRepositoryRecord(TextsRecord, EdXUtilitiesMixin, abc_repository_records.RepositoryRecord):
"""basic LORE repository content"""
_implemented_record_type_identifiers = [
'run-repo',
'texts-item'
]
@property
def platform(self):
return self.get_text('platform')
@property
def grading_policy(self):
return self.get_text('gradingPolicy')
@property
def policy(self):
return self.get_text('policy')
@property
def course_node(self):
rm = self.my_osid_object._get_provider_manager('REPOSITORY')
if self.my_osid_object._proxy is None:
cls = rm.get_composition_lookup_session_for_repository(self.my_osid_object.ident)
else:
cls = rm.get_composition_lookup_session_for_repository(
self.my_osid_object.ident,
proxy=self.my_osid_object._proxy
)
cls.use_unsequestered_composition_view()
try:
return next(cls.get_compositions_by_genus_type(
Type(**COMPOSITION_GENUS_TYPES['course'])))
except StopIteration:
raise AttributeError
def export_olx(self):
run_repo = self.my_osid_object
rm = self.my_osid_object._get_provider_manager('REPOSITORY')
if self.my_osid_object._proxy is None:
rhs = rm.get_repository_hierarchy_session()
else:
rhs = rm.get_repository_hierarchy_session(proxy=self.my_osid_object._proxy)
course_repo = next(rhs.get_parent_repositories(run_repo.ident))
filename = '{0}_{1}_{2}'.format(course_repo.display_name.text,
run_repo.display_name.text,
str(int(time.time())))
filename = clean_str(filename) + '.tar.gz'
root_path = '{0}/'.format(run_repo.display_name.text)
olx = BytesIO()
tarball = tarfile.open(filename, mode='w', fileobj=olx)
# write the course.xml files first
soup = BeautifulSoup('<course/>', 'xml')
soup.course['course'] = course_repo.display_name.text
soup.course['url_name'] = run_repo.display_name.text
soup.course['org'] = course_repo.org.text
course_path = '{0}course.xml'.format(root_path)
self.write_to_tarfile(tarball, course_path, soup)
root_file_path = '{0}roots/{1}.xml'.format(root_path,
run_repo.display_name.text)
self.write_to_tarfile(tarball, root_file_path, soup)
# write policy files
try:
policy = run_repo.policy
except IllegalState:
pass
else:
policy_path = '{0}policies/{1}/policy.json'.format(root_path,
run_repo.display_name.text)
self.write_to_tarfile(tarball,
policy_path,
soup=policy.text,
prettify=False)
try:
grading_policy = run_repo.grading_policy
except IllegalState:
pass
else:
policy_path = '{0}policies/{1}/grading_policy.json'.format(root_path,
run_repo.display_name.text)
self.write_to_tarfile(tarball,
policy_path,
soup=grading_policy.text,
prettify=False)
exceptions = BeautifulSoup('<errors/>', 'xml')
course_xml = BeautifulSoup('<course/>', 'xml')
course_xml.course['number'] = course_repo.display_name.text
course_xml_path = '{0}course/{1}.xml'.format(root_path,
run_repo.display_name.text)
course_node = self.course_node
for child_id in course_node.get_child_ids():
if self.my_osid_object._proxy is None:
cls = rm.get_composition_lookup_session_for_repository(run_repo.ident)
else:
cls = rm.get_composition_lookup_session_for_repository(
run_repo.ident,
proxy=self.my_osid_object._proxy
)
try:
child = cls.get_composition(child_id)
except NotFound:
if self.my_osid_object._proxy is None:
cls = rm.get_composition_lookup_session()
else:
cls = rm.get_composition_lookup_session(proxy=self.my_osid_object._proxy)
cls.use_unsequestered_composition_view()
cls.use_federated_repository_view()
child = cls.get_composition(child_id)
try:
if child.is_sequestered():
pass
else:
# add to the course/run.xml file
child_type = child.genus_type.identifier
child_tag = course_xml.new_tag(child_type)
# child_tag['display_name'] = child.display_name.text
child_tag['url_name'] = child.url
course_xml.course.append(child_tag)
child.export_olx(tarball, root_path)
except AttributeError:
bad_child = exceptions.new_tag('composition')
bad_child['id'] = str(child_id)
bad_child.string = child.display_name.text
exceptions.errors.append(bad_child)
self.write_to_tarfile(tarball, course_xml_path, course_xml)
errors_path = '{0}errors.xml'.format(root_path)
self.write_to_tarfile(tarball, errors_path, exceptions)
tarball.close()
olx.seek(0)
return filename, olx
def clean_up(self, user_repo):
# For now, remove the old run, but need to check
# any assets / compositions referred to. If they belong to another
# composition, keep them. If they will be orphaned,
# remove them.
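        # Roughly, the decision made in the loops below is (sketch, not code):
        #
        #   for composition in this run repository:
        #       for asset in the composition:
        #           if no other composition uses the asset:
        #               delete the asset (and, for enclosed assessments,
        #               any items not referenced by other assessments)
        #           else:
        #               keep the asset
        #       if the composition lives only in this repository:
        #           detach it from parents in other repositories, then delete it
        #       else:
        #           just unassign it from this repository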
rm = self.my_osid_object._get_provider_manager('REPOSITORY')
am = self.my_osid_object._get_provider_manager('ASSESSMENT')
run_repo = self.my_osid_object
if self.my_osid_object._proxy is None:
bls = am.get_bank_lookup_session()
else:
bls = am.get_bank_lookup_session(proxy=self.my_osid_object._proxy)
user_bank = bls.get_bank(user_repo.ident)
if self.my_osid_object._proxy is None:
cls = rm.get_composition_lookup_session_for_repository(run_repo.ident)
cqs = rm.get_composition_query_session_for_repository(run_repo.ident)
cas = rm.get_composition_admin_session_for_repository(run_repo.ident)
cras = rm.get_composition_repository_assignment_session()
crs = rm.get_composition_repository_session()
acs = rm.get_asset_composition_session_for_repository(user_repo.ident)
aas = rm.get_asset_admin_session_for_repository(user_repo.ident)
assessment_as = am.get_assessment_admin_session_for_bank(user_bank.ident)
abas = am.get_assessment_basic_authoring_session_for_bank(user_bank.ident)
ias = am.get_item_admin_session_for_bank(user_bank.ident)
aqs = am.get_assessment_query_session_for_bank(user_bank.ident)
else:
cls = rm.get_composition_lookup_session_for_repository(
run_repo.ident,
proxy=self.my_osid_object._proxy
)
cqs = rm.get_composition_query_session_for_repository(
run_repo.ident,
proxy=self.my_osid_object._proxy
)
cas = rm.get_composition_admin_session_for_repository(
run_repo.ident,
proxy=self.my_osid_object._proxy
)
cras = rm.get_composition_repository_assignment_session(proxy=self.my_osid_object._proxy)
crs = rm.get_composition_repository_session(proxy=self.my_osid_object._proxy)
acs = rm.get_asset_composition_session_for_repository(user_repo.ident,
proxy=self.my_osid_object._proxy)
aas = rm.get_asset_admin_session_for_repository(user_repo.ident,
proxy=self.my_osid_object._proxy)
assessment_as = am.get_assessment_admin_session_for_bank(user_bank.ident,
proxy=self.my_osid_object._proxy)
abas = am.get_assessment_basic_authoring_session_for_bank(user_bank.ident,
proxy=self.my_osid_object._proxy)
ias = am.get_item_admin_session_for_bank(user_bank.ident,
proxy=self.my_osid_object._proxy)
aqs = am.get_assessment_query_session_for_bank(user_bank.ident,
proxy=self.my_osid_object._proxy)
cls.use_unsequestered_composition_view()
cls.use_federated_repository_view()
cqs.use_unsequestered_composition_view()
cqs.use_federated_repository_view()
acs.use_federated_repository_view()
aqs.use_federated_bank_view()
for composition in cls.get_compositions():
# first check the composition assets
try:
for asset in acs.get_composition_assets(composition.ident):
if acs.get_compositions_by_asset(asset.ident).available() == 1:
# if asset is an enclosed asset, delete the enclosed object
try:
obj = aas.get_enclosed_object()
items = abas.get_items(obj.ident)
assessment_as.delete_assessment(obj.ident)
for item in items:
# only delete the item if it is not in any other assessments
querier = aqs.get_assessment_query()
querier.match_item_id(item.ident, True)
if aqs.get_assessments_by_query(querier).available() == 0:
ias.delete_item(item.ident)
except AttributeError:
pass
# delete the asset / wrapper
aas.delete_asset(asset.ident)
else:
# keep the asset if used in multiple places
pass
except NotFound:
pass
if crs.get_repositories_by_composition(composition.ident).available() == 1:
# remove composition from all parents
querier = cqs.get_composition_query()
querier.match_contained_composition_id(composition.ident, True)
for parent in cqs.get_compositions_by_query(querier):
# if parent belongs to another repository, update its child list
# otherwise, pass, because parent will be deleted eventually
if parent.object_map['assignedRepositoryIds'][0] != str(run_repo.ident):
current_child_idstrs = [str(i) for i in parent.get_child_ids()]
current_child_idstrs.remove(str(composition.ident))
updated_child_ids = [Id(i) for i in current_child_idstrs]
form = cas.get_composition_form_for_update(parent.ident)
form.set_children(updated_child_ids)
cas.update_composition(form)
# delete the composition
cas.delete_composition(composition.ident)
else:
# unassign the composition from this repository
cras.unassign_composition_from_repository(composition.ident, run_repo.ident)
if self.my_osid_object._proxy is None:
ras = rm.get_repository_admin_session()
else:
ras = rm.get_repository_admin_session(proxy=self.my_osid_object._proxy)
ras.delete_repository(run_repo.ident)
class LoreCourseRunRepositoryFormRecord(TextsFormRecord, abc_repository_records.RepositoryFormRecord):
"""basic LORE repository content"""
_implemented_record_type_identifiers = [
'run-repo',
'texts-item'
]
def __init__(self, osid_object_form):
if osid_object_form is not None:
self.my_osid_object_form = osid_object_form
self._init_metadata()
if not osid_object_form.is_for_update():
self._init_map()
super(LoreCourseRunRepositoryFormRecord, self).__init__()
def _init_map(self):
TextsFormRecord._init_map(self)
super(LoreCourseRunRepositoryFormRecord, self)._init_map()
def _init_metadata(self):
TextsFormRecord._init_metadata(self)
super(LoreCourseRunRepositoryFormRecord, self)._init_metadata()
def set_platform(self, platform):
self.add_text(str(platform), 'platform')
def clear_platform(self):
self.clear_text('platform')
def set_policy(self, policy):
self.add_text(str(policy), 'policy')
def clear_policy(self):
self.clear_text('policy')
def set_grading_policy(self, grading_policy):
self.add_text(str(grading_policy), 'gradingPolicy')
def clear_grading_policy(self):
self.clear_text('gradingPolicy')
| {
"content_hash": "d625fd9389eb6c1d1c6c5ced92ef18cc",
"timestamp": "",
"source": "github",
"line_count": 425,
"max_line_length": 109,
"avg_line_length": 40.76705882352941,
"alnum_prop": 0.5717996075262611,
"repo_name": "mitsei/dlkit",
"id": "0335ebc4dc7cc7d55a2b1813cbfc0a315c57049a",
"size": "17326",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "dlkit/records/repository/lore/repository_extensions.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "25170465"
},
{
"name": "TeX",
"bytes": "1088"
}
],
"symlink_target": ""
} |
import random
from enum import Enum
from bomb_defusal.modules.module import NeedyModule
class KnobOrientation(Enum):
Right = 0
Up = 1
Left = 2
Down = 3
class LedSet(Enum):
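    # Each value is a 12-character LED bit pattern; the member name encodes
    # the knob orientation that Knobs._check_orientation() accepts while that
    # pattern is active (e.g. UpOne/UpTwo require KnobOrientation.Up).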
UpOne = '001011111101'
UpTwo = '101010011011'
DownOne = '011001111101'
DownTwo = '101010010001'
LeftOne = '000010100111'
LeftTwo = '000010000110'
RightOne = '101111111010'
RightTwo = '101100111010'
class Knobs(NeedyModule):
def __init__(self, position, expiration):
super().__init__(position, expiration)
self._orientation = KnobOrientation.Up
self._led_set = random.choice(list(LedSet))
self._on_time_up = self._check_orientation
@property
def leds(self):
return self._led_set.value
@property
def orientation(self):
return self._orientation
@orientation.setter
def orientation(self, value):
self._orientation = value
def _reset(self):
super()._reset()
self._led_set = random.choice(list(LedSet))
def _check_orientation(self):
if self.orientation == KnobOrientation.Right and self._led_set in {LedSet.RightOne, LedSet.RightTwo}:
return
if self.orientation == KnobOrientation.Up and self._led_set in {LedSet.UpOne, LedSet.UpTwo}:
return
if self.orientation == KnobOrientation.Left and self._led_set in {LedSet.LeftOne, LedSet.LeftTwo}:
return
if self.orientation == KnobOrientation.Down and self._led_set in {LedSet.DownOne, LedSet.DownTwo}:
return
self.on_wrong_try()
| {
"content_hash": "1558073e0e3f875fc896b925a6dff7d5",
"timestamp": "",
"source": "github",
"line_count": 62,
"max_line_length": 109,
"avg_line_length": 25.822580645161292,
"alnum_prop": 0.6389756402248594,
"repo_name": "leupibr/BombDefusal",
"id": "9a7804b2f43c789aadfe6563bf55e62030d60975",
"size": "1601",
"binary": false,
"copies": "1",
"ref": "refs/heads/develop",
"path": "bomb_defusal/modules/knobs.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "200225"
}
],
"symlink_target": ""
} |
"""
This file was generated with the customdashboard management command and
contains the class for the main dashboard.
To activate your index dashboard add the following to your settings.py::
GRAPPELLI_INDEX_DASHBOARD = 'protocolle.dashboard.CustomIndexDashboard'
"""
from django.utils.translation import ugettext_lazy as _
# from django.core.urlresolvers import reverse
from grappelli.dashboard import modules, Dashboard
from grappelli.dashboard.utils import get_admin_site_name
class CustomIndexDashboard(Dashboard):
"""
Custom index dashboard for www.
"""
def init_with_context(self, context):
site_name = get_admin_site_name(context)
# append a group for "Administration" & "Applications"
self.children.append(modules.Group(
_(u'Acesso Rápido'),
column=1,
collapsible=True,
children=[
# modules.AppList(
# _('Administration'),
# column=1,
# collapsible=False,
# models=('django.contrib.*',),
# ),
modules.AppList(
_('Aplicativos'),
column=1,
css_classes=('collapse closed',),
exclude=('django.contrib.*',),
)
]
))
# # append an app list module for "Applications"
# self.children.append(modules.AppList(
# _('AppList: Applications'),
# collapsible=True,
# column=1,
# css_classes=('collapse closed',),
# exclude=('django.contrib.*',),
# ))
# # append an app list module for "Administration"
# self.children.append(modules.ModelList(
# _('ModelList: Administration'),
# column=1,
# collapsible=False,
# models=('django.contrib.*',),
# ))
# # append another link list module for "support".
# self.children.append(modules.LinkList(
# _('Media Management'),
# column=2,
# children=[
# {
# 'title': _('FileBrowser'),
# 'url': '/admin/filebrowser/browse/',
# 'external': False,
# },
# ]
# ))
# # append another link list module for "support".
# self.children.append(modules.LinkList(
# _('Support'),
# column=2,
# children=[
# {
# 'title': _('Django Documentation'),
# 'url': 'http://docs.djangoproject.com/',
# 'external': True,
# },
# {
# 'title': _('Grappelli Documentation'),
# 'url': 'http://packages.python.org/django-grappelli/',
# 'external': True,
# },
# {
# 'title': _('Grappelli Google-Code'),
# 'url': 'http://code.google.com/p/django-grappelli/',
# 'external': True,
# },
# ]
# ))
# # append a feed module
# self.children.append(modules.Feed(
# _('Latest Django News'),
# column=2,
# feed_url='http://www.djangoproject.com/rss/weblog/',
# limit=5
# ))
# append a recent actions module
self.children.append(modules.RecentActions(
_('Recent Actions'),
limit=5,
collapsible=False,
column=3,
))
| {
"content_hash": "4407a74339c78045197d52321fed9431",
"timestamp": "",
"source": "github",
"line_count": 111,
"max_line_length": 76,
"avg_line_length": 32.98198198198198,
"alnum_prop": 0.4766457252116908,
"repo_name": "klebercode/protocolle",
"id": "20b8848582512749014094c4c2414f9709026557",
"size": "3679",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "protocolle/dashboard.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "511"
},
{
"name": "HTML",
"bytes": "25408"
},
{
"name": "JavaScript",
"bytes": "3379"
},
{
"name": "Python",
"bytes": "70100"
}
],
"symlink_target": ""
} |
"""Tests for the process resources CLI arguments helper."""
from __future__ import unicode_literals
import argparse
import unittest
from plaso.cli import tools
from plaso.cli.helpers import process_resources
from plaso.lib import errors
from tests.cli import test_lib as cli_test_lib
class ProcessResourcesArgumentsHelperTest(cli_test_lib.CLIToolTestCase):
"""Tests for the process resources CLI arguments helper."""
# pylint: disable=no-member,protected-access
_EXPECTED_OUTPUT = """\
usage: cli_helper.py [--process_memory_limit SIZE]
Test argument parser.
optional arguments:
--process_memory_limit SIZE, --process-memory-limit SIZE
Maximum amount of memory (data segment) a process is
allowed to allocate in bytes, where 0 represents no
limit. The default limit is 4294967296 (4 GiB). This
applies to both the main (foreman) process and the
worker processes. This limit is enforced by the
operating system and will supersede the worker memory
limit (--worker_memory_limit).
"""
def testAddArguments(self):
"""Tests the AddArguments function."""
argument_parser = argparse.ArgumentParser(
prog='cli_helper.py', description='Test argument parser.',
add_help=False,
formatter_class=cli_test_lib.SortedArgumentsHelpFormatter)
process_resources.ProcessResourcesArgumentsHelper.AddArguments(
argument_parser)
output = self._RunArgparseFormatHelp(argument_parser)
self.assertEqual(output, self._EXPECTED_OUTPUT)
def testParseOptions(self):
"""Tests the ParseOptions function."""
options = cli_test_lib.TestOptions()
test_tool = tools.CLITool()
process_resources.ProcessResourcesArgumentsHelper.ParseOptions(
options, test_tool)
with self.assertRaises(errors.BadConfigObject):
process_resources.ProcessResourcesArgumentsHelper.ParseOptions(
options, None)
with self.assertRaises(errors.BadConfigOption):
options.process_memory_limit = 'bogus'
process_resources.ProcessResourcesArgumentsHelper.ParseOptions(
options, test_tool)
with self.assertRaises(errors.BadConfigOption):
options.process_memory_limit = -1
process_resources.ProcessResourcesArgumentsHelper.ParseOptions(
options, test_tool)
if __name__ == '__main__':
unittest.main()
| {
"content_hash": "b99087121412fc179a53edcceecb6c98",
"timestamp": "",
"source": "github",
"line_count": 73,
"max_line_length": 77,
"avg_line_length": 34,
"alnum_prop": 0.693392425463336,
"repo_name": "rgayon/plaso",
"id": "1ecaaed84db58334a1c811fd6a6191acf85601cf",
"size": "2529",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "tests/cli/helpers/process_resources.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Batchfile",
"bytes": "415"
},
{
"name": "Dockerfile",
"bytes": "1047"
},
{
"name": "Makefile",
"bytes": "712"
},
{
"name": "PowerShell",
"bytes": "17771"
},
{
"name": "Python",
"bytes": "4803191"
},
{
"name": "Ruby",
"bytes": "926"
},
{
"name": "Shell",
"bytes": "46225"
}
],
"symlink_target": ""
} |
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'PurchaseOrder.deal'
db.add_column(u'app_purchaseorder', 'deal',
self.gf('django.db.models.fields.CharField')(default='', max_length=255),
keep_default=False)
def backwards(self, orm):
# Deleting field 'PurchaseOrder.deal'
db.delete_column(u'app_purchaseorder', 'deal')
models = {
u'app.attribute': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'Attribute'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'app.brand': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'Brand'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'app.category': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'Category'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'app.contact': {
'Meta': {'ordering': "(u'name', u'represents')", 'unique_together': "((u'name', u'represents'),)", 'object_name': 'Contact'},
'address1': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'address2': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'address3': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'cell_phone': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'city': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'country': ('django.db.models.fields.CharField', [], {'default': "u'United States'", 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '255'}),
'fax': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'label': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['app.ContactLabel']", 'symmetrical': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'represents': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Supplier']"}),
'state': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'work_phone': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'zipcode': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'})
},
u'app.contactlabel': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'ContactLabel'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'app.costadjustment': {
'Meta': {'ordering': "(u'-created',)", 'object_name': 'CostAdjustment'},
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'detail': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'new': ('django.db.models.fields.FloatField', [], {}),
'old': ('django.db.models.fields.FloatField', [], {}),
'reason': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.CostAdjustmentReason']"}),
'sku': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Sku']"}),
'who': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
},
u'app.costadjustmentreason': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'CostAdjustmentReason'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'app.purchaseorder': {
'Meta': {'ordering': "(u'-created',)", 'object_name': 'PurchaseOrder'},
'contact': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Contact']"}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'creator': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}),
'deal': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'note': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'receiver': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Receiver']"}),
'sales_tax': ('django.db.models.fields.FloatField', [], {'default': '0.0'}),
'shipping_cost': ('django.db.models.fields.FloatField', [], {'default': '0.0'}),
'supplier': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Supplier']"}),
'terms': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'tracking_url': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '512', 'null': 'True', 'blank': 'True'})
},
u'app.purchaseorderlineitem': {
'Meta': {'ordering': "(u'-purchase_order__id', u'sku__id')", 'object_name': 'PurchaseOrderLineItem'},
'discount_dollar': ('django.db.models.fields.FloatField', [], {'null': 'True', 'blank': 'True'}),
'discount_percent': ('django.db.models.fields.FloatField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'purchase_order': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.PurchaseOrder']"}),
'quantity_ordered': ('django.db.models.fields.IntegerField', [], {}),
'sku': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Sku']"}),
'unit_cost': ('django.db.models.fields.FloatField', [], {})
},
u'app.quantityadjustment': {
'Meta': {'ordering': "(u'-created',)", 'object_name': 'QuantityAdjustment'},
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'detail': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'new': ('django.db.models.fields.IntegerField', [], {}),
'old': ('django.db.models.fields.IntegerField', [], {}),
'reason': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.QuantityAdjustmentReason']"}),
'sku': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Sku']"}),
'who': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"})
},
u'app.quantityadjustmentreason': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'QuantityAdjustmentReason'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'app.receiver': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'Receiver'},
'address1': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'address2': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'address3': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'city': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'country': ('django.db.models.fields.CharField', [], {'default': "u'United States'", 'max_length': '255', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'state': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'zipcode': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'})
},
u'app.shipment': {
'Meta': {'ordering': "(u'-created',)", 'object_name': 'Shipment'},
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'creator': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'note': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'purchase_order': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.PurchaseOrder']"})
},
u'app.shipmentlineitem': {
'Meta': {'ordering': "(u'-shipment__id', u'sku__id')", 'object_name': 'ShipmentLineItem'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'quantity_received': ('django.db.models.fields.IntegerField', [], {}),
'shipment': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Shipment']"}),
'sku': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Sku']"})
},
u'app.sku': {
'Meta': {'ordering': "(u'id',)", 'object_name': 'Sku'},
'action': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'action_date': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'brand': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Brand']"}),
'case_quantity': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'categories': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['app.Category']", 'symmetrical': 'False'}),
'cost': ('django.db.models.fields.FloatField', [], {'default': '0', 'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.IntegerField', [], {'primary_key': 'True'}),
'in_live_deal': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'is_subscription': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'last_location': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'lead_time': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'minimum_quantity': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'modified': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'notes': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'notify_at_threshold': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}),
'quantity_on_hand': ('django.db.models.fields.IntegerField', [], {'default': '0', 'db_index': 'True'}),
'supplier': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Supplier']"}),
'supplier_sku': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'}),
'upc': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'db_index': 'True', 'blank': 'True'})
},
u'app.skuattribute': {
'Meta': {'ordering': "(u'sku__id', u'attribute__name')", 'unique_together': "((u'sku', u'attribute'),)", 'object_name': 'SkuAttribute'},
'attribute': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Attribute']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'sku': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['app.Sku']"}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'})
},
u'app.supplier': {
'Meta': {'ordering': "(u'name',)", 'object_name': 'Supplier'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'db_index': 'True'})
},
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "u'user_set'", 'blank': 'True', 'to': u"orm['auth.Group']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "u'user_set'", 'blank': 'True', 'to': u"orm['auth.Permission']"}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
}
}
    complete_apps = ['app']
| {
"content_hash": "e4b85e9e8802a790fefefe6f5c4d1433",
"timestamp": "",
"source": "github",
"line_count": 220,
"max_line_length": 195,
"avg_line_length": 82.94090909090909,
"alnum_prop": 0.5410204417164466,
"repo_name": "vforgione/dodger",
"id": "bc34f08711218c22b4d6b2451eef378d7df9e9ad",
"size": "18271",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "app/migrations/0003_auto__add_field_purchaseorder_deal.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "1513"
},
{
"name": "JavaScript",
"bytes": "21722"
},
{
"name": "Python",
"bytes": "267523"
}
],
"symlink_target": ""
} |
"""
Read a MAF from standard input and print the fraction of gap columns in
each block.
usage: %prog < maf > out
"""
from __future__ import division
import sys
import bx.align.maf
def main():
for m in bx.align.maf.Reader( sys.stdin ):
gaps = 0
for col in m.column_iter():
if ( '-' in col ): gaps += 1
print gaps / m.text_size
if __name__ == "__main__": main()
| {
"content_hash": "513cc34cc97d5a0497c61f4664d305f4",
"timestamp": "",
"source": "github",
"line_count": 21,
"max_line_length": 72,
"avg_line_length": 20.523809523809526,
"alnum_prop": 0.5475638051044084,
"repo_name": "bxlab/HiFive_Paper",
"id": "2a01cb00fc7dcd11e44d801ade78d91ce4ce13aa",
"size": "454",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "Scripts/HiCLib/bx-python-0.7.1/scripts/maf_gap_frequency.py",
"mode": "33261",
"license": "bsd-3-clause",
"language": [
{
"name": "Batchfile",
"bytes": "5096"
},
{
"name": "C",
"bytes": "107381"
},
{
"name": "C++",
"bytes": "182835"
},
{
"name": "CMake",
"bytes": "3353"
},
{
"name": "Forth",
"bytes": "152"
},
{
"name": "Makefile",
"bytes": "22978"
},
{
"name": "Perl",
"bytes": "25453"
},
{
"name": "Python",
"bytes": "4229513"
},
{
"name": "R",
"bytes": "43022"
},
{
"name": "Shell",
"bytes": "10798"
}
],
"symlink_target": ""
} |
"""
Host a WSGI app in mitmproxy.
This example shows how to graft a WSGI app onto mitmproxy. In this
instance, we're using the Flask framework (http://flask.pocoo.org/) to expose
a single simplest-possible page.
"""
from flask import Flask
from mitmproxy.addons import asgiapp
app = Flask("proxapp")
@app.route("/")
def hello_world() -> str:
return "Hello World!"
addons = [
# Host app at the magic domain "example.com" on port 80. Requests to this
# domain and port combination will now be routed to the WSGI app instance.
asgiapp.WSGIApp(app, "example.com", 80),
# TLS works too, but the magic domain needs to be resolvable from the mitmproxy machine due to mitmproxy's design.
# mitmproxy will connect to said domain and use its certificate but won't send any data.
# By using `--set upstream_cert=false` and `--set connection_strategy_lazy` the local certificate is used instead.
# asgiapp.WSGIApp(app, "example.com", 443),
]
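# A quick way to try this addon (illustrative; 8080 is mitmproxy's default
# listen port and may differ in your setup):
#
#   mitmproxy -s wsgi-flask-app.py
#   curl --proxy http://127.0.0.1:8080 http://example.com/
#
# The curl request should be answered by the Flask app above ("Hello World!")
# instead of the real example.com.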
| {
"content_hash": "ea4ae50523447db77134001dee451959",
"timestamp": "",
"source": "github",
"line_count": 28,
"max_line_length": 118,
"avg_line_length": 34.607142857142854,
"alnum_prop": 0.718266253869969,
"repo_name": "mhils/mitmproxy",
"id": "4f117f05abe1a129ca8cb74e39093ea677c6fad4",
"size": "969",
"binary": false,
"copies": "2",
"ref": "refs/heads/main",
"path": "examples/addons/wsgi-flask-app.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "3618"
},
{
"name": "Dockerfile",
"bytes": "618"
},
{
"name": "HTML",
"bytes": "10672"
},
{
"name": "JavaScript",
"bytes": "134086"
},
{
"name": "Kaitai Struct",
"bytes": "3670"
},
{
"name": "Less",
"bytes": "21203"
},
{
"name": "PowerShell",
"bytes": "258"
},
{
"name": "Python",
"bytes": "2367991"
},
{
"name": "Shell",
"bytes": "3055"
},
{
"name": "TypeScript",
"bytes": "279053"
}
],
"symlink_target": ""
} |
"""
Support for exposing Concord232 elements as sensors.
For more details about this platform, please refer to the documentation at
https://home-assistant.io/components/binary_sensor.concord232/
"""
import datetime
import logging
import requests
import voluptuous as vol
from homeassistant.components.binary_sensor import (
BinarySensorDevice, PLATFORM_SCHEMA, DEVICE_CLASSES)
from homeassistant.const import (CONF_HOST, CONF_PORT)
import homeassistant.helpers.config_validation as cv
REQUIREMENTS = ['concord232==0.14']
_LOGGER = logging.getLogger(__name__)
CONF_EXCLUDE_ZONES = 'exclude_zones'
CONF_ZONE_TYPES = 'zone_types'
DEFAULT_HOST = 'localhost'
DEFAULT_NAME = 'Alarm'
DEFAULT_PORT = '5007'
DEFAULT_SSL = False
SCAN_INTERVAL = datetime.timedelta(seconds=1)
ZONE_TYPES_SCHEMA = vol.Schema({
cv.positive_int: vol.In(DEVICE_CLASSES),
})
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Optional(CONF_EXCLUDE_ZONES, default=[]):
vol.All(cv.ensure_list, [cv.positive_int]),
vol.Optional(CONF_HOST, default=DEFAULT_HOST): cv.string,
vol.Optional(CONF_PORT, default=DEFAULT_PORT): cv.port,
vol.Optional(CONF_ZONE_TYPES, default={}): ZONE_TYPES_SCHEMA,
})
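# An illustrative configuration.yaml entry matching the schema above; the
# host, port and zone numbers are placeholders:
#
#   binary_sensor:
#     - platform: concord232
#       host: 192.168.1.2
#       port: 5007
#       exclude_zones:
#         - 5
#       zone_types:
#         3: motion
#         4: smoke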
def setup_platform(hass, config, add_devices, discovery_info=None):
"""Set up the Concord232 binary sensor platform."""
from concord232 import client as concord232_client
host = config.get(CONF_HOST)
port = config.get(CONF_PORT)
exclude = config.get(CONF_EXCLUDE_ZONES)
zone_types = config.get(CONF_ZONE_TYPES)
sensors = []
try:
_LOGGER.debug("Initializing Client")
client = concord232_client.Client('http://{}:{}'.format(host, port))
client.zones = client.list_zones()
client.last_zone_update = datetime.datetime.now()
except requests.exceptions.ConnectionError as ex:
_LOGGER.error("Unable to connect to Concord232: %s", str(ex))
return False
# The order of zones returned by client.list_zones() can vary.
# When the zones are not named, this can result in the same entity
# name mapping to different sensors in an unpredictable way. Sort
# the zones by zone number to prevent this.
client.zones.sort(key=lambda zone: zone['number'])
for zone in client.zones:
_LOGGER.info("Loading Zone found: %s", zone['name'])
if zone['number'] not in exclude:
sensors.append(
Concord232ZoneSensor(
hass, client, zone, zone_types.get(
zone['number'], get_opening_type(zone))
)
)
add_devices(sensors, True)
def get_opening_type(zone):
"""Return the result of the type guessing from name."""
if 'MOTION' in zone['name']:
return 'motion'
if 'KEY' in zone['name']:
return 'safety'
if 'SMOKE' in zone['name']:
return 'smoke'
if 'WATER' in zone['name']:
return 'water'
return 'opening'
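# Illustrative results of the guessing above (zone names are made up):
#
#   get_opening_type({'name': 'HALLWAY MOTION'})   # -> 'motion'
#   get_opening_type({'name': 'BASEMENT WATER'})   # -> 'water'
#   get_opening_type({'name': 'FRONT DOOR'})       # -> 'opening'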
class Concord232ZoneSensor(BinarySensorDevice):
"""Representation of a Concord232 zone as a sensor."""
def __init__(self, hass, client, zone, zone_type):
"""Initialize the Concord232 binary sensor."""
self._hass = hass
self._client = client
self._zone = zone
self._number = zone['number']
self._zone_type = zone_type
@property
def device_class(self):
"""Return the class of this sensor, from DEVICE_CLASSES."""
return self._zone_type
@property
    def should_poll(self):
        """Polling is needed to refresh the zone state."""
        return True
@property
def name(self):
"""Return the name of the binary sensor."""
return self._zone['name']
@property
def is_on(self):
"""Return true if the binary sensor is on."""
# True means "faulted" or "open" or "abnormal state"
return bool(self._zone['state'] != 'Normal')
def update(self):
"""Get updated stats from API."""
last_update = datetime.datetime.now() - self._client.last_zone_update
_LOGGER.debug("Zone: %s ", self._zone)
if last_update > datetime.timedelta(seconds=1):
self._client.zones = self._client.list_zones()
self._client.last_zone_update = datetime.datetime.now()
_LOGGER.debug("Updated from zone: %s", self._zone['name'])
if hasattr(self._client, 'zones'):
self._zone = next((x for x in self._client.zones
if x['number'] == self._number), None)
| {
"content_hash": "a3fe38e7f24fb941736dbe4c49d2c743",
"timestamp": "",
"source": "github",
"line_count": 141,
"max_line_length": 77,
"avg_line_length": 32.06382978723404,
"alnum_prop": 0.6357000663570007,
"repo_name": "ewandor/home-assistant",
"id": "73cf77f2b936017de89b36162239f70632ea14aa",
"size": "4521",
"binary": false,
"copies": "1",
"ref": "refs/heads/dev",
"path": "homeassistant/components/binary_sensor/concord232.py",
"mode": "33261",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "8860790"
},
{
"name": "Ruby",
"bytes": "517"
},
{
"name": "Shell",
"bytes": "12639"
}
],
"symlink_target": ""
} |
"""Test MNRead.getChecksum()"""
import io
import pytest
import responses
import d1_common.checksum
import d1_common.types
import d1_common.types.exceptions
import d1_gmn.tests.gmn_mock
import d1_gmn.tests.gmn_test_case
class TestGetChecksum(d1_gmn.tests.gmn_test_case.GMNTestCase):
@responses.activate
def test_1000(self, gmn_client_v1_v2):
"""MNRead.getChecksum(): Matching checksums."""
with d1_gmn.tests.gmn_mock.disable_auth():
pid, sid, sciobj_bytes, sysmeta_pyxb = self.create_obj(self.client_v2)
recv_checksum_pyxb = gmn_client_v1_v2.getChecksum(pid)
assert isinstance(
recv_checksum_pyxb, gmn_client_v1_v2.pyxb_binding.Checksum
)
send_checksum_pyxb = d1_common.checksum.create_checksum_object_from_bytes(
sciobj_bytes, recv_checksum_pyxb.algorithm
)
self.assert_checksums_equal(send_checksum_pyxb, recv_checksum_pyxb)
@responses.activate
def test_1010(self, gmn_client_v1_v2):
"""getChecksum(): Supported algorithms return matching checksum."""
def test(client, algorithm_str):
pid, sid, sciobj_bytes, send_sysmeta_pyxb = self.generate_sciobj_with_defaults(
client
)
send_checksum = d1_common.checksum.create_checksum_object_from_bytes(
sciobj_bytes, algorithm_str
)
send_sysmeta_pyxb.checksum = send_checksum
with d1_gmn.tests.gmn_mock.disable_sysmeta_sanity_checks():
client.create(pid, io.BytesIO(sciobj_bytes), send_sysmeta_pyxb)
recv_checksum = client.getChecksum(pid, algorithm_str)
d1_common.checksum.are_checksums_equal(
send_sysmeta_pyxb.checksum, recv_checksum
)
with d1_gmn.tests.gmn_mock.disable_auth():
test(gmn_client_v1_v2, "MD5")
test(gmn_client_v1_v2, "SHA-1")
@responses.activate
def test_1020(self, gmn_client_v1_v2):
"""getChecksum(): Unsupported algorithm returns InvalidRequest exception."""
with d1_gmn.tests.gmn_mock.disable_auth():
pid, sid, sciobj_bytes, sysmeta_pyxb = self.create_obj(gmn_client_v1_v2)
with pytest.raises(d1_common.types.exceptions.InvalidRequest):
gmn_client_v1_v2.getChecksum(pid, "INVALID_ALGORITHM")
@responses.activate
def test_1030(self, gmn_client_v1_v2):
"""getChecksum(): Non-existing object raises NotFound exception."""
with d1_gmn.tests.gmn_mock.disable_auth():
with pytest.raises(d1_common.types.exceptions.NotFound):
gmn_client_v1_v2.getChecksum("INVALID_PID")
| {
"content_hash": "602abd07e00e8b7ce0812da30635025a",
"timestamp": "",
"source": "github",
"line_count": 69,
"max_line_length": 91,
"avg_line_length": 39.47826086956522,
"alnum_prop": 0.6372980910425844,
"repo_name": "DataONEorg/d1_python",
"id": "c79078b26aef977c30b0ed0d7ecfcae54acfac80",
"size": "3512",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "gmn/src/d1_gmn/tests/test_get_checksum.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "4798"
},
{
"name": "HTML",
"bytes": "13358"
},
{
"name": "Inno Setup",
"bytes": "3430"
},
{
"name": "JavaScript",
"bytes": "2068"
},
{
"name": "Python",
"bytes": "3547939"
},
{
"name": "Shell",
"bytes": "5670"
},
{
"name": "XSLT",
"bytes": "89205"
}
],
"symlink_target": ""
} |
"""SCons.Tool.dmd
Tool-specific initialization for the Digital Mars D compiler.
(http://digitalmars.com/d)
Coded by Andy Friesen ([email protected])
15 November 2003
Amended by Russel Winder ([email protected])
2010-02-07
There are a number of problems with this script at this point in time.
The one that irritates me the most is the Windows linker setup. The D
linker doesn't have a way to add lib paths on the commandline, as far
as I can see. You have to specify paths relative to the SConscript or
use absolute paths. To hack around it, add '#/blah'. This will link
blah.lib from the directory where SConstruct resides.
Compiler variables:
DC - The name of the D compiler to use. Defaults to dmd or gdmd,
whichever is found.
DPATH - List of paths to search for import modules.
DVERSIONS - List of version tags to enable when compiling.
DDEBUG - List of debug tags to enable when compiling.
Linker related variables:
LIBS - List of library files to link in.
DLINK - Name of the linker to use. Defaults to dmd or gdmd.
DLINKFLAGS - List of linker flags.
Lib tool variables:
DLIB - Name of the lib tool to use. Defaults to lib.
DLIBFLAGS - List of flags to pass to the lib tool.
LIBS - Same as for the linker. (libraries to pull into the .lib)
"""
#
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "src/engine/SCons/Tool/dmd.py 2014/03/02 14:18:15 garyo"
import os
import SCons.Action
import SCons.Builder
import SCons.Defaults
import SCons.Scanner.D
import SCons.Tool
# Adapted from c++.py
def isD(source):
if not source:
return 0
for s in source:
if s.sources:
ext = os.path.splitext(str(s.sources[0]))[1]
if ext == '.d':
return 1
return 0
smart_link = {}
smart_lib = {}
def generate(env):
global smart_link
global smart_lib
static_obj, shared_obj = SCons.Tool.createObjBuilders(env)
DAction = SCons.Action.Action('$DCOM', '$DCOMSTR')
static_obj.add_action('.d', DAction)
shared_obj.add_action('.d', DAction)
static_obj.add_emitter('.d', SCons.Defaults.StaticObjectEmitter)
shared_obj.add_emitter('.d', SCons.Defaults.SharedObjectEmitter)
dc = env.Detect(['dmd', 'gdmd'])
env['DC'] = dc
env['DCOM'] = '$DC $_DINCFLAGS $_DVERFLAGS $_DDEBUGFLAGS $_DFLAGS -c -of$TARGET $SOURCES'
env['_DINCFLAGS'] = '$( ${_concat(DINCPREFIX, DPATH, DINCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
env['_DVERFLAGS'] = '$( ${_concat(DVERPREFIX, DVERSIONS, DVERSUFFIX, __env__)} $)'
env['_DDEBUGFLAGS'] = '$( ${_concat(DDEBUGPREFIX, DDEBUG, DDEBUGSUFFIX, __env__)} $)'
env['_DFLAGS'] = '$( ${_concat(DFLAGPREFIX, DFLAGS, DFLAGSUFFIX, __env__)} $)'
env['DPATH'] = ['#/']
env['DFLAGS'] = []
env['DVERSIONS'] = []
env['DDEBUG'] = []
if dc:
# Add the path to the standard library.
# This is merely for the convenience of the dependency scanner.
dmd_path = env.WhereIs(dc)
if dmd_path:
x = dmd_path.rindex(dc)
phobosDir = dmd_path[:x] + '/../src/phobos'
if os.path.isdir(phobosDir):
env.Append(DPATH = [phobosDir])
env['DINCPREFIX'] = '-I'
env['DINCSUFFIX'] = ''
env['DVERPREFIX'] = '-version='
env['DVERSUFFIX'] = ''
env['DDEBUGPREFIX'] = '-debug='
env['DDEBUGSUFFIX'] = ''
env['DFLAGPREFIX'] = '-'
env['DFLAGSUFFIX'] = ''
env['DFILESUFFIX'] = '.d'
# Need to use the Digital Mars linker/lib on windows.
# *nix can just use GNU link.
if env['PLATFORM'] == 'win32':
env['DLINK'] = '$DC'
env['DLINKCOM'] = '$DLINK -of$TARGET $SOURCES $DFLAGS $DLINKFLAGS $_DLINKLIBFLAGS'
env['DLIB'] = 'lib'
env['DLIBCOM'] = '$DLIB $_DLIBFLAGS -c $TARGET $SOURCES $_DLINKLIBFLAGS'
env['_DLINKLIBFLAGS'] = '$( ${_concat(DLIBLINKPREFIX, LIBS, DLIBLINKSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
env['_DLIBFLAGS'] = '$( ${_concat(DLIBFLAGPREFIX, DLIBFLAGS, DLIBFLAGSUFFIX, __env__)} $)'
env['DLINKFLAGS'] = []
env['DLIBLINKPREFIX'] = ''
env['DLIBLINKSUFFIX'] = '.lib'
env['DLIBFLAGPREFIX'] = '-'
env['DLIBFLAGSUFFIX'] = ''
env['DLINKFLAGPREFIX'] = '-'
env['DLINKFLAGSUFFIX'] = ''
SCons.Tool.createStaticLibBuilder(env)
# Basically, we hijack the link and ar builders with our own.
# these builders check for the presence of D source, and swap out
# the system's defaults for the Digital Mars tools. If there's no D
# source, then we silently return the previous settings.
linkcom = env.get('LINKCOM')
try:
env['SMART_LINKCOM'] = smart_link[linkcom]
except KeyError:
def _smartLink(source, target, env, for_signature,
defaultLinker=linkcom):
if isD(source):
# XXX I'm not sure how to add a $DLINKCOMSTR variable
# so that it works with this _smartLink() logic,
# and I don't have a D compiler/linker to try it out,
# so we'll leave it alone for now.
return '$DLINKCOM'
else:
return defaultLinker
env['SMART_LINKCOM'] = smart_link[linkcom] = _smartLink
arcom = env.get('ARCOM')
try:
env['SMART_ARCOM'] = smart_lib[arcom]
except KeyError:
def _smartLib(source, target, env, for_signature,
defaultLib=arcom):
if isD(source):
# XXX I'm not sure how to add a $DLIBCOMSTR variable
# so that it works with this _smartLib() logic, and
# I don't have a D compiler/archiver to try it out,
# so we'll leave it alone for now.
return '$DLIBCOM'
else:
return defaultLib
env['SMART_ARCOM'] = smart_lib[arcom] = _smartLib
# It is worth noting that the final space in these strings is
# absolutely pivotal. SCons sees these as actions and not generators
# if it is not there. (very bad)
env['ARCOM'] = '$SMART_ARCOM '
env['LINKCOM'] = '$SMART_LINKCOM '
else: # assuming linux
linkcom = env.get('LINKCOM')
try:
env['SMART_LINKCOM'] = smart_link[linkcom]
except KeyError:
def _smartLink(source, target, env, for_signature,
defaultLinker=linkcom, dc=dc):
if isD(source):
try:
libs = env['LIBS']
except KeyError:
libs = []
if dc == 'dmd':
# TODO: This assumes that the dmd executable is in the
# bin directory and that the libraries are in a peer
# directory lib. This true of the Digital Mars
# distribution but . . .
import glob
dHome = env.WhereIs(dc).replace('/dmd', '/..')
if glob.glob(dHome + '/lib/*phobos2*'):
if 'phobos2' not in libs:
env.Append(LIBPATH = [dHome + '/lib'])
env.Append(LIBS = ['phobos2'])
# TODO: Find out when there will be a
# 64-bit version of D.
env.Append(LINKFLAGS = ['-m32'])
else:
if 'phobos' not in libs:
env.Append(LIBS = ['phobos'])
elif dc == 'gdmd':
env.Append(LIBS = ['gphobos'])
if 'pthread' not in libs:
env.Append(LIBS = ['pthread'])
if 'm' not in libs:
env.Append(LIBS = ['m'])
return defaultLinker
env['SMART_LINKCOM'] = smart_link[linkcom] = _smartLink
env['LINKCOM'] = '$SMART_LINKCOM '
def exists(env):
return env.Detect(['dmd', 'gdmd'])
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
| {
"content_hash": "fab687b0b1e06d715dc1ad00716e35a4",
"timestamp": "",
"source": "github",
"line_count": 240,
"max_line_length": 120,
"avg_line_length": 39.68333333333333,
"alnum_prop": 0.5758084838303233,
"repo_name": "sftd/scons",
"id": "67355b1ad52a14d64ba2bd9723e4957d842a2b30",
"size": "9524",
"binary": false,
"copies": "8",
"ref": "refs/heads/master",
"path": "scons-local/SCons/Tool/dmd.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "1913081"
}
],
"symlink_target": ""
} |
"""Generates script-specific samples (collections of chars) using cldr
exemplar data for languages written in a script."""
import argparse
import codecs
import collections
import locale
import os
from os import path
import re
import shutil
import xml.etree.cElementTree as ElementTree
from nototools import cldr_data
from nototools import create_image
from nototools import extra_locale_data
from nototools import notoconfig
from nototools import tool_utils
from nototools import unicode_data
try:
from icu import Locale, Collator
print 'will use icu locale-specific order'
_HAVE_ICU = True
except ImportError as e:
print 'will use default locale sort order'
_HAVE_ICU = False
NOTO_TOOLS = path.abspath(path.join(path.dirname(__file__), os.pardir))
CLDR_DIR = path.join(NOTO_TOOLS, 'third_party', 'cldr')
_VERBOSE = False
def get_script_to_exemplar_data_map():
"""Return a map from script to 3-tuples of:
- locale tuple (lang, script, region, variant)
- cldr_relative path to src of exemplar data
- tuple of the exemplar chars"""
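# Illustrative sketch of the returned structure (the tag, path and characters
# below are hypothetical, not actual CLDR data):
#   script_map['Deva']['hi-Deva'] = (
#       ('hi', 'Deva', None, None),   # locale tuple (lang, script, region, variant)
#       'common/hi.xml',              # cldr-relative path of the exemplar source
#       (u'\u0905', u'\u0906'))       # tuple of exemplar chars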
script_map = collections.defaultdict(dict)
for directory in ['common', 'seed', 'exemplars']:
data_dir = path.join(directory, 'main')
for filename in os.listdir(path.join(CLDR_DIR, data_dir)):
if not filename.endswith('.xml'):
continue
exemplar_list = cldr_data.get_exemplar_from_file(path.join(data_dir, filename))
if not exemplar_list:
continue
lsrv = cldr_data.loc_tag_to_lsrv(filename[:-4])
if not lsrv:
continue
src = path.join(directory, filename)
script = lsrv[1]
if not script:
continue
loc_tag = cldr_data.lsrv_to_loc_tag(lsrv)
loc_to_exemplar_info = script_map[script]
if loc_tag in loc_to_exemplar_info:
if _VERBOSE:
print 'skipping %s, already have exemplars for %s from %s' % (
src, loc_tag, loc_to_exemplar_info[loc_tag][1])
continue
# fix exemplars that look incorrect
if script == 'Arab' and 'd' in exemplar_list:
if _VERBOSE:
print 'found \'d\' in %s for %s' % (src, lsrv)
no_latin = True
else:
no_latin = False
# exclude exemplar strings, and restrict to letters and digits
def accept_cp(cp):
if len(cp) != 1:
return False
cat = unicode_data.category(cp)
if cat[0] != 'L' and cat != 'Nd':
return False
if no_latin and cp in 'df':
return False
return True
filtered_exemplar_list = filter(accept_cp, exemplar_list)
# some exemplar lists don't surround strings with curly braces, and end up
# with duplicate characters. Flag these
exemplar_chars = set()
dup_chars = set()
fixed_exemplar_list = []
for cp in filtered_exemplar_list:
if cp in exemplar_chars:
dup_chars.add(cp)
else:
exemplar_chars.add(cp)
fixed_exemplar_list.append(cp)
if len(dup_chars) > 0 and _VERBOSE:
print 'duplicate exemplars in %s: %s' % (
src, ', '.join([u'\u200e%s\u200e (%x)' % (cp, ord(cp)) for cp in dup_chars]))
loc_to_exemplar_info[loc_tag] = (lsrv, src, tuple(fixed_exemplar_list))
# supplement with extra locale data
for loc_tag in extra_locale_data.EXEMPLARS:
exemplar_list = cldr_data.get_exemplar_from_extra_data(loc_tag)
lang, script = loc_tag.split('-')
lsrv = (lang, script, None, None)
loc_to_exemplar_info = script_map[script]
src = '[extra locale data]/%s' % loc_tag
if loc_tag in loc_to_exemplar_info:
if _VERBOSE:
print 'skipping %s, already have exemplars for %s from %s' % (
src, loc_tag, loc_to_exemplar_info[loc_tag][1])
continue
# restrict to letters, except for zsym
def accept_cp(cp):
cat = unicode_data.category(cp)
return cat[0] == 'L' or cat == 'Nd'
if 'Zsym' not in loc_tag:
filtered_exemplar_list = filter(accept_cp, exemplar_list)
if len(filtered_exemplar_list) != len(exemplar_list) and _VERBOSE:
print 'filtered some characters from %s' % src
else:
filtered_exemplar_list = exemplar_list
loc_to_exemplar_info[loc_tag] = (lsrv, src, tuple(filtered_exemplar_list))
return script_map
def show_rarely_used_char_info(script, loc_map, char_to_lang_map):
# let's list chars unique to each language
for loc_tag in sorted(loc_map):
unique_chars = []
dual_chars = []
dual_shared_with = set()
triple_chars = []
triple_shared_with = set()
info = loc_map[loc_tag]
exemplars = info[2]
for cp in exemplars:
num_common_langs = len(char_to_lang_map[cp])
if num_common_langs == 1:
unique_chars.append(cp)
elif num_common_langs == 2:
dual_chars.append(cp)
for shared_loc_tag in char_to_lang_map[cp]:
if shared_loc_tag != loc_tag:
dual_shared_with.add(shared_loc_tag)
elif num_common_langs == 3:
triple_chars.append(cp)
for shared_loc_tag in char_to_lang_map[cp]:
if shared_loc_tag != loc_tag:
triple_shared_with.add(shared_loc_tag)
script_tag = '-' + script
if unique_chars:
print '%s has %d unique chars: %s%s' % (
loc_tag, len(unique_chars), ' '.join(unique_chars[:100]),
'...' if len(unique_chars) > 100 else '')
if dual_chars:
print '%s shares %d chars (%s%s) with 1 other lang: %s' % (
loc_tag, len(dual_chars), ' '.join(dual_chars[:20]),
'...' if len(dual_chars) > 20 else '',
', '.join(sorted([loc.replace(script_tag, '') for loc in dual_shared_with])))
if triple_chars:
print '%s shares %d chars (%s%s) with 2 other langs: %s' % (
loc_tag, len(triple_chars), ' '.join(triple_chars[:20]),
'...' if len(triple_chars) > 20 else '',
', '.join(sorted([loc.replace(script_tag, '') for loc in triple_shared_with])))
if not (unique_chars or dual_chars or triple_chars):
print '%s shares all chars with 3+ other langs' % loc_tag
def get_char_to_lang_map(loc_map):
char_to_lang_map = collections.defaultdict(list)
for loc_tag in sorted(loc_map):
info = loc_map[loc_tag]
exemplars = info[2]
for cp in exemplars:
if loc_tag in char_to_lang_map[cp]:
print 'loc %s (from %s) already in char_to_lang_map for %s (%x)' % (
loc_tag, info[1], cp, ord(cp))
else:
char_to_lang_map[cp].append(loc_tag)
return char_to_lang_map
def char_lang_info(num_locales, loc_map, char_to_lang_map):
"""Returns a tuple containing
- characters ordered by the number of langs that use them
- a list mapping number of shared langs to number of chars shared by those langs"""
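# For instance (hypothetical values): a return of ([u'q', ..., u'a'], [0, 5, 9, 2])
# would mean u'q' is used by the fewest languages and u'a' by the most, and that
# 5 chars are unique to one language, 9 are shared by two and 2 by three.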
freq_list = []
hist = [0] * (num_locales + 1)
for cp in char_to_lang_map:
num_shared_langs = len(char_to_lang_map[cp])
if num_shared_langs >= len(hist):
for shared_lang in char_to_lang_map[cp]:
if shared_lang not in loc_map:
print 'loc map does not have \'%s\'!' % shared_lang
freq_list.append((num_shared_langs, cp))
if num_shared_langs >= len(hist):
print 'num shared langs is %d but size of hist is %d' % (num_shared_langs, len(hist))
hist[num_shared_langs] += 1
freq_list.sort()
return [cp for nl, cp in freq_list], hist
def show_char_use_info(script, chars_by_num_langs, char_to_lang_map):
script_tag = '-' + script
for cp in chars_by_num_langs:
langs = char_to_lang_map[cp]
count = len(langs)
limit = 12
without_script = [loc.replace(script_tag, '') for loc in langs[:limit]]
without_script_str = ', '.join(sorted(without_script))
if count > limit:
without_script_str += '...'
print u'char %s\u200e (%x): %d %s' % (cp, ord(cp), count, without_script_str)
print 'total chars listed: %d' % len(char_to_lang_map)
def show_shared_langs_hist(hist):
# histogram - number of chars per number of shared languages
for i in range(1, len(hist)):
print '[%3d] %3d %s' % (i, hist[i], 'x' * hist[i])
def get_upper_case_list(char_list):
"""Return the upper case versions where they differ.
If no char in the list is a lower case variant, the result is empty."""
# keep in same order as input list.
upper_case_chars = []
for cp in char_list:
upcp = unicode_data.to_upper(cp)
if upcp != cp:
upper_case_chars.append(upcp)
return upper_case_chars
def show_tiers(char_list, num_tiers, tier_size):
for tier in range(1, num_tiers + 1):
if tier == 1:
subset = char_list[-tier_size:]
else:
subset = char_list[tier * -tier_size:(tier-1) * -tier_size]
if not subset:
break
tier_chars = sorted(subset)
print 'tier %d: %s' % (tier, ' '.join(tier_chars))
upper_case_chars = get_upper_case_list(tier_chars)
if upper_case_chars:
print ' upper: ' + ' '.join(upper_case_chars)
def get_rare_char_info(char_to_lang_map, shared_lang_threshold):
"""Returns a tuple of:
- a set of 'rare_chars' (those used threshold langs or fewer),
- a mapping from each locale with rare chars to a set of its rare chars"""
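# For instance (hypothetical): with a threshold of 2, a possible return is
#   ({u'\u0649', u'\u06cc'},
#    {'ur-Arab': {u'\u06cc'}, 'ar-Arab': {u'\u0649', u'\u06cc'}})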
rare_chars = set()
locs_with_rare_chars = collections.defaultdict(set)
for cp in char_to_lang_map:
num_shared_langs = len(char_to_lang_map[cp])
if num_shared_langs <= shared_lang_threshold:
rare_chars.add(cp)
for lang_tag in char_to_lang_map[cp]:
locs_with_rare_chars[lang_tag].add(cp)
return rare_chars, locs_with_rare_chars
_lang_for_script_map = {}
def _init_lang_for_script_map():
locs_by_lit_pop = [loc for _, loc in cldr_data.get_lang_scrs_by_decreasing_global_lit_pop()]
for t in locs_by_lit_pop:
lsrv = cldr_data.loc_tag_to_lsrv(t)
script = lsrv[1]
if script not in _lang_for_script_map:
lang = lsrv[0]
# print '%s lang => %s' % (script, lang)
_lang_for_script_map[script] = lang
def lang_for_script(script):
"""Return the most common language for a script based on literate population."""
# should use likely subtag data for this.
# the current code assumes all we want is lang -> script, I'd have to change
# it to map locale->locale. Right now I don't get Hant -> zh_Hant, only
# Hant -> zh, which isn't good enough I think.
if not _lang_for_script_map:
_init_lang_for_script_map()
return _lang_for_script_map.get(script)
def select_rare_chars_for_loc(script, locs_with_rare_chars, shared_lang_threshold,
char_to_lang_map):
"""Return a list of 2-tuples of loc and selected rare chars,
ordered by decreasing literate population of the locale."""
rarity_threshold_map = {}
for lang_tag in locs_with_rare_chars:
rarity_threshold_map[lang_tag] = shared_lang_threshold
selected = []
locs_by_lit_pop = [loc for _, loc in cldr_data.get_lang_scrs_by_decreasing_global_lit_pop()]
# examine locales in decreasing order of literate population
for loc_tag in locs_by_lit_pop:
if script not in loc_tag:
continue
loc_tag = loc_tag.replace('_', '-')
if loc_tag not in locs_with_rare_chars:
continue
most_specific_chars = set()
most_specific_chars_count = rarity_threshold_map[loc_tag]
# From the rare chars for this locale, select those that
# are most specific to this language. In most cases they
# are unique to this language.
for cp in locs_with_rare_chars[loc_tag]:
num_chars = len(char_to_lang_map[cp])
if num_chars <= most_specific_chars_count:
if num_chars < most_specific_chars_count:
most_specific_chars = set()
most_specific_chars.add(cp)
most_specific_chars_count = num_chars
if most_specific_chars:
selected.append((loc_tag, most_specific_chars))
for cp in most_specific_chars:
for tag in char_to_lang_map[cp]:
if rarity_threshold_map[tag] > most_specific_chars_count:
rarity_threshold_map[tag] = most_specific_chars_count
return selected
def show_selected_rare_chars(selected):
print 'langs with rare chars by lang pop:'
for lang_tag, chars in selected:
print '%10s: %s' % (lang_tag, ', '.join(sorted(chars)))
def sort_for_script(cp_list, script):
lang = lang_for_script(script)
if not lang:
print 'cannot sort for script, no lang for %s' % script
return cp_list
if _HAVE_ICU:
from icu import Locale, Collator
loc = Locale(lang + '_' + script)
col = Collator.createInstance(loc)
return sorted(cp_list, cmp=col.compare)
else:
import locale
return sorted(cp_list, cmp=locale.strcoll)
def addcase(sample, script):
cased_sample = []
for cp in sample:
ucp = unicode_data.to_upper(cp)
if ucp != cp and ucp not in sample: # Copt has cased chars paired in the block
cased_sample.append(ucp)
if cased_sample:
cased_sample = ' '.join(cased_sample)
if _VERBOSE:
print 'add case for %s' % script
return sample + '\n' + cased_sample
return sample
def _generate_excluded_characters():
# Some of these exclusions are desired, and some are reluctantly applied because
# Noto currently does not support some characters. We use the generated
# data as fallback samples on a per-script and not per-font basis, which is also
# a problem.
# Religious characters
# deva OM, Arabic pbuh, bismillah
codepoints = [0x950, 0xfdfa, 0xfdfd]
# Cyrillic characters not in sans or serif
codepoints.append(0x2e2f)
for cp in range(0xa640, 0xa680):
codepoints.append(cp)
# Arabic character not in kufi
codepoints.append(0x08a0)
chars = set()
for cp in codepoints:
chars.add(unichr(cp))
return frozenset(chars)
_EXCLUDE_CHARS = _generate_excluded_characters()
def generate_sample_for_script(script, loc_map):
num_locales = len(loc_map)
if num_locales == 1:
tag, info = loc_map.iteritems().next()
exemplars = info[2]
ex_len = len(exemplars)
info = '%s (1 locale)\nfrom exemplars for %s (%s%d chars)' % (
script, tag, 'first 60 of ' if ex_len > 60 else '', ex_len)
# don't sort, rely on exemplar order
sample = ' '.join(exemplars[:60])
sample = addcase(sample, script)
return sample, info
script_tag = '-' + script
char_to_lang_map = get_char_to_lang_map(loc_map)
if len(char_to_lang_map) <= 60:
info = '%s (%d locales)\nfrom merged exemplars (%d chars) from %s' % (
script, num_locales, len(char_to_lang_map),
', '.join([loc.replace(script_tag, '') for loc in loc_map]))
sample = ' '.join(sort_for_script(list(char_to_lang_map), script))
sample = addcase(sample, script)
return sample, info
# show_rarely_used_char_info(script, loc_map, char_to_lang_map)
chars_by_num_langs, num_langs_to_num_chars = char_lang_info(
num_locales, loc_map, char_to_lang_map)
# show_char_use_info(script, chars_by_num_langs, char_to_lang_map)
# show_shared_langs_hist(num_langs_to_num_chars)
# show_tiers(chars_by_num_langs, 3, 40)
shared_lang_threshold = min(7, num_locales)
rare_chars, locs_with_rare_chars = get_rare_char_info(
char_to_lang_map, shared_lang_threshold)
selected = select_rare_chars_for_loc(script,
locs_with_rare_chars, shared_lang_threshold, char_to_lang_map)
# show_selected_rare_chars(selected)
chars_by_num_langs = [cp for cp in chars_by_num_langs if cp not in _EXCLUDE_CHARS]
chosen_chars = list(chars_by_num_langs)[-60:]
rare_extension = []
for _, chars in selected:
avail_chars = [cp for cp in chars if cp not in chosen_chars and
cp not in rare_extension and cp not in _EXCLUDE_CHARS]
rare_extension.extend(sorted(avail_chars)[:4]) # vietnamese dominates latin otherwise
if len(rare_extension) > 20:
break
chosen_chars = chosen_chars[:60 - len(rare_extension)]
chosen_chars.extend(rare_extension)
info = ('%s (%d locales)\n'
'from most common exemplars plus chars specific to most-read languages' % (
script, num_locales))
sample = ' '.join(sort_for_script(chosen_chars, script))
sample = addcase(sample, script)
return sample, info
def generate_samples(dstdir, imgdir, summary):
if imgdir:
imgdir = tool_utils.ensure_dir_exists(imgdir)
print 'writing images to %s' % imgdir
if dstdir:
dstdir = tool_utils.ensure_dir_exists(dstdir)
print 'writing files to %s' % dstdir
verbose = summary
script_map = get_script_to_exemplar_data_map()
for script in sorted(script_map):
sample, info = generate_sample_for_script(script, script_map[script])
if summary:
print
print info
print sample
if imgdir:
path = os.path.join(imgdir, '%s.png' % script)
print 'writing image %s.png' % script
rtl = script in ['Arab', 'Hebr', 'Nkoo', 'Syrc', 'Tfng', 'Thaa']
create_image.create_png(sample, path, font_size=34, line_spacing=40, width=800, rtl=rtl)
if dstdir:
filename = 'und-%s.txt' % script
print 'writing data %s' % filename
filepath = os.path.join(dstdir, filename)
with codecs.open(filepath, 'w', 'utf-8') as f:
f.write(sample + '\n')
def main():
default_dstdir = os.path.join(NOTO_TOOLS, 'sample_texts')
parser = argparse.ArgumentParser()
parser.add_argument('--dstdir', help='where to write samples (default %s)' %
default_dstdir, default=default_dstdir, metavar='dir')
parser.add_argument('--imgdir', help='if defined, generate images in this dir',
metavar='dir')
parser.add_argument('--save', help='write sample files in dstdir', action='store_true')
parser.add_argument('--summary', help='output list of samples and how they were generated',
action='store_true')
parser.add_argument('--verbose', help='print warnings and extra info', action='store_true')
args = parser.parse_args()
if not args.save and not args.imgdir and not args.summary:
print 'nothing to do.'
return
if args.verbose:
global _VERBOSE
_VERBOSE = True
generate_samples(args.dstdir if args.save else None, args.imgdir, args.summary)
if __name__ == '__main__':
locale.setlocale(locale.LC_COLLATE, 'en_US.UTF-8')
main()
| {
"content_hash": "9d4b5f2c1af5140f46a4ca30a21ad173",
"timestamp": "",
"source": "github",
"line_count": 524,
"max_line_length": 94,
"avg_line_length": 34.68129770992366,
"alnum_prop": 0.6420513949265394,
"repo_name": "pathumego/nototools",
"id": "53a9fbccd77fd17a7d8df1b5272d6c696b5d8789",
"size": "18786",
"binary": false,
"copies": "3",
"ref": "refs/heads/master",
"path": "nototools/generate_sample_from_exemplar.py",
"mode": "33261",
"license": "apache-2.0",
"language": [
{
"name": "HTML",
"bytes": "620"
},
{
"name": "Makefile",
"bytes": "3424"
},
{
"name": "Python",
"bytes": "546238"
},
{
"name": "Shell",
"bytes": "3802"
}
],
"symlink_target": ""
} |
from django.shortcuts import render, get_object_or_404, redirect
from django.http import Http404, HttpResponse
from django.contrib import messages
from django.core import serializers
# from django.utils import simplejson
from workflows.urls import *
from workflows.helpers import *
import workflows.interaction_views
import workflows.visualization_views
import sys
import traceback
# models
from workflows.models import *
from django.contrib.auth.models import User
from workflows.utils import *
# auth fore
from django.contrib.auth.decorators import login_required
#settings
from mothra.settings import DEBUG, FILES_FOLDER
from streams.models import Stream
import workflows.views
# other
import os
def stream_widget_visualization(request, stream_id, widget_id):
stream = get_object_or_404(Stream, pk=stream_id)
widget = get_object_or_404(Widget, pk=widget_id)
if widget.abstract_widget.streaming_visualization_view == '':
raise Http404
else:
view_to_call = getattr(workflows.views, widget.abstract_widget.streaming_visualization_view)
return view_to_call(request, widget, stream)
| {
"content_hash": "9bfc592a1e7532d465fc47b1b4538f92",
"timestamp": "",
"source": "github",
"line_count": 40,
"max_line_length": 99,
"avg_line_length": 28.1,
"alnum_prop": 0.7855871886120996,
"repo_name": "xflows/textflows",
"id": "9e5b7bdb28bae52ac203d354010799a8e8234908",
"size": "1151",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "streams/views.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "C++",
"bytes": "192012"
},
{
"name": "CSS",
"bytes": "76342"
},
{
"name": "HTML",
"bytes": "363446"
},
{
"name": "JavaScript",
"bytes": "794623"
},
{
"name": "Makefile",
"bytes": "385"
},
{
"name": "Prolog",
"bytes": "146760"
},
{
"name": "Python",
"bytes": "30267970"
},
{
"name": "Roff",
"bytes": "58306446"
},
{
"name": "Shell",
"bytes": "97"
}
],
"symlink_target": ""
} |
import discord
from discord.ext import commands
class Info:
"""Info is a class within Pixie that is only for accessing data from discords built in things (Although we add Pixie's status command here)"""
def __init__(self, bot):
self.bot = bot
@commands.command(name="userinfo", pass_context=True)
async def user_info(self, ctx, user: discord.Member = None):
"""Gets information about the desired user (defaults to the message sender)"""
if not user:
user = ctx.message.author
msg = "```\n"
msg += "User: %s\n" % user.name
msg += "Nickname %s\n" % user.nick
msg += "ID: %s\n" % user.id
msg += "Created at: %s\n" % user.created_at
msg += "Joined on: %s\n" % user.joined_at
msg += "Game: %s\n" % user.game
msg += "Roles: %s\n" % ", ".join([role.name for role in user.roles if role.name != "@everyone"])
msg += "```\n"
msg += "Avatar: %s" % user.avatar_url
await self.bot.send_message(ctx.message.channel, msg)
@commands.command(name="guildinfo", pass_context=True)
async def guild_info(self, ctx):
"""Gets information about the current server"""
await self.bot.say("```xl\n"
"Guild: {0}\n"
"ID: {0.id}\n"
"Region: {0.region}\n"
"Member Count: {1}\n"
"Owner: {0.owner}\n"
"Icon: {0.icon_url}\n"
"Roles: {2}"
"```".format(ctx.message.server, sum(1 for x in ctx.message.server.members),
", ".join([x.name for x in ctx.message.server.roles])))
@commands.command(name="status")
async def status(self):
"""Gives some general information about Pixie's current situation"""
await self.bot.say("```xl\n"
"I'm in {0} guilds\n"
"I can currently see {1} people, {2} of which are unique\n"
"I'm also in {3} voice channels"
"```".format(len(self.bot.servers),
sum(1 for x in self.bot.get_all_members()),
len(set(self.bot.get_all_members())),
len(self.bot.voice_clients)))
@commands.command(name="info")
async def info(self):
await self.bot.say("```xl\n"
"Hiya, I'm Pixie; I'm a bot built for weebs by Recchan.\n"
"Check me out on Github, where you can see my codebase: https://github.com/GetRektByMe/Pixie\n"
"Here's my invite link: https://discordapp.com/oauth2/authorize?client_id=175319652073734144&scope=bot&permissions=536083519```")
def setup(bot):
bot.add_cog(Info(bot))
| {
"content_hash": "25506129cb4589224f81965a84b3267e",
"timestamp": "",
"source": "github",
"line_count": 64,
"max_line_length": 156,
"avg_line_length": 46,
"alnum_prop": 0.5047554347826086,
"repo_name": "GetRektByMe/Pixie",
"id": "805a00b6ce491bb9b2338ba701e383afa1bb2084",
"size": "2944",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "pixie/plugins/info.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "31582"
}
],
"symlink_target": ""
} |
from django.apps import apps
from django.core import exceptions
from django.utils.translation import gettext_lazy as _
from hashids import Hashids
from rest_framework import fields
from hashid_field.conf import settings
from hashid_field.hashid import Hashid
from hashid_field.lookups import _is_int_representation
class UnconfiguredHashidSerialField(fields.Field):
def bind(self, field_name, parent):
super().bind(field_name, parent)
raise exceptions.ImproperlyConfigured(
"The field '{field_name}' on {parent} must be explicitly declared when used with a ModelSerializer".format(
field_name=field_name, parent=parent.__class__.__name__))
class HashidSerializerMixin(object):
usage_text = "Must pass a HashidField, HashidAutoField or 'app_label.model.field'"
default_error_messages = {
'invalid': _("value must be a positive integer or a valid Hashids string."),
'invalid_hashid': _("'{value}' value must be a valid Hashids string."),
}
def __init__(self, **kwargs):
self.hashid_salt = kwargs.pop('salt', settings.HASHID_FIELD_SALT)
self.hashid_min_length = kwargs.pop('min_length', settings.HASHID_FIELD_MIN_LENGTH)
self.hashid_alphabet = kwargs.pop('alphabet', settings.HASHID_FIELD_ALPHABET)
self.allow_int_lookup = kwargs.pop('allow_int_lookup', settings.HASHID_FIELD_ALLOW_INT_LOOKUP)
self.prefix = kwargs.pop('prefix', "")
self._hashids = kwargs.pop('hashids', None)
source_field = kwargs.pop('source_field', None)
if source_field:
from hashid_field import HashidField, BigHashidField, HashidAutoField, BigHashidAutoField
if isinstance(source_field, str):
try:
app_label, model_name, field_name = source_field.split(".")
except ValueError:
raise ValueError(self.usage_text)
model = apps.get_model(app_label, model_name)
source_field = model._meta.get_field(field_name)
elif not isinstance(source_field, (HashidField, BigHashidField, HashidAutoField, BigHashidAutoField)):
raise TypeError(self.usage_text)
self.hashid_salt = source_field.salt
self.hashid_min_length = source_field.min_length
self.hashid_alphabet = source_field.alphabet
self.allow_int_lookup = source_field.allow_int_lookup
self.prefix = source_field.prefix
self._hashids = source_field._hashids
if not self._hashids:
self._hashids = Hashids(salt=self.hashid_salt, min_length=self.hashid_min_length,
alphabet=self.hashid_alphabet)
super().__init__(**kwargs)
def to_internal_value(self, data):
value = super().to_internal_value(data)
try:
return Hashid(value, salt=self.hashid_salt, min_length=self.hashid_min_length,
alphabet=self.hashid_alphabet, prefix=self.prefix, hashids=self._hashids)
except ValueError:
self.fail('invalid_hashid', value=data)
class HashidSerializerCharField(HashidSerializerMixin, fields.CharField):
def to_representation(self, value):
return str(value)
def to_internal_value(self, data):
hashid = super().to_internal_value(data)
if isinstance(data, int) and not self.allow_int_lookup:
self.fail('invalid_hashid', value=data)
if isinstance(data, str) and not self.allow_int_lookup:
# Make sure int lookups are not allowed, even if prefixed, unless the
# given value is actually a hashid made up entirely of numbers.
without_prefix = data[len(self.prefix):]
if _is_int_representation(without_prefix) and without_prefix != hashid.hashid:
self.fail('invalid_hashid', value=data)
return hashid
class HashidSerializerIntegerField(HashidSerializerMixin, fields.IntegerField):
def to_representation(self, value):
return int(value)
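# Illustrative sketch (not part of this module): declaring the field explicitly
# on a DRF ModelSerializer, as required by UnconfiguredHashidSerialField above.
# "library.Book.reference_id" is a hypothetical app_label.model.field path.
#
#   from rest_framework import serializers
#   from hashid_field.rest import HashidSerializerCharField
#
#   class BookSerializer(serializers.ModelSerializer):
#       reference_id = HashidSerializerCharField(source_field='library.Book.reference_id')
#
#       class Meta:
#           model = Book  # the hypothetical model owning reference_id
#           fields = ('reference_id', 'title')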
| {
"content_hash": "0dce4d3f68b0981ead934fdd64c448b1",
"timestamp": "",
"source": "github",
"line_count": 88,
"max_line_length": 119,
"avg_line_length": 46.42045454545455,
"alnum_prop": 0.653610771113831,
"repo_name": "nshafer/django-hashid-field",
"id": "1c7a5778ff2087024a18109177abe257df71ffab",
"size": "4085",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "hashid_field/rest.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "HTML",
"bytes": "2358"
},
{
"name": "Python",
"bytes": "115952"
}
],
"symlink_target": ""
} |
from som.primitives.integer_primitives import IntegerPrimitivesBase as _Base
IntegerPrimitives = _Base
| {
"content_hash": "aa99db5f9b2fa721eb0d995fb49b6cc8",
"timestamp": "",
"source": "github",
"line_count": 4,
"max_line_length": 76,
"avg_line_length": 26.25,
"alnum_prop": 0.8380952380952381,
"repo_name": "SOM-st/PySOM",
"id": "4f285a309e0d458c5a21f888bdfe83521245d91d",
"size": "105",
"binary": false,
"copies": "3",
"ref": "refs/heads/master",
"path": "src/som/primitives/ast/integer_primitives.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Makefile",
"bytes": "1342"
},
{
"name": "Python",
"bytes": "538515"
},
{
"name": "Shell",
"bytes": "407"
}
],
"symlink_target": ""
} |
import gc
import pprint
import sys
import traceback
import eventlet
import eventlet.backdoor
import greenlet
from nova.openstack.common import cfg
eventlet_backdoor_opts = [
cfg.IntOpt('backdoor_port',
default=None,
help='port for eventlet backdoor to listen')
]
CONF = cfg.CONF
CONF.register_opts(eventlet_backdoor_opts)
def _dont_use_this():
print "Don't use this, just disconnect instead"
def _find_objects(t):
return filter(lambda o: isinstance(o, t), gc.get_objects())
def _print_greenthreads():
for i, gt in enumerate(_find_objects(greenlet.greenlet)):
print i, gt
traceback.print_stack(gt.gr_frame)
print
def initialize_if_enabled():
backdoor_locals = {
'exit': _dont_use_this, # So we don't exit the entire process
'quit': _dont_use_this, # So we don't exit the entire process
'fo': _find_objects,
'pgt': _print_greenthreads,
}
if CONF.backdoor_port is None:
return None
# NOTE(johannes): The standard sys.displayhook will print the value of
# the last expression and set it to __builtin__._, which overwrites
# the __builtin__._ that gettext sets. Let's switch to using pprint
# since it won't interact poorly with gettext, and it's easier to
# read the output too.
def displayhook(val):
if val is not None:
pprint.pprint(val)
sys.displayhook = displayhook
sock = eventlet.listen(('localhost', CONF.backdoor_port))
port = sock.getsockname()[1]
eventlet.spawn_n(eventlet.backdoor.backdoor_server, sock,
locals=backdoor_locals)
return port
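# Illustrative usage sketch (assumes backdoor_port is set in the service
# configuration; the port number shown is hypothetical):
#
#   port = initialize_if_enabled()
#   # then, from a shell on the same host:
#   #   telnet localhost 3000
#   # and use the extra locals registered above, e.g. fo(SomeClass) to list
#   # matching objects or pgt() to dump green thread stacks.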
| {
"content_hash": "987bf4928bf76ddb414189736861268d",
"timestamp": "",
"source": "github",
"line_count": 62,
"max_line_length": 74,
"avg_line_length": 27.080645161290324,
"alnum_prop": 0.6515783204288267,
"repo_name": "maoy/zknova",
"id": "118385427c65db9509a69f01674091ee34622f10",
"size": "2429",
"binary": false,
"copies": "1",
"ref": "refs/heads/zk-servicegroup",
"path": "nova/openstack/common/eventlet_backdoor.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "JavaScript",
"bytes": "7403"
},
{
"name": "Python",
"bytes": "7960822"
},
{
"name": "Shell",
"bytes": "16987"
}
],
"symlink_target": ""
} |
"""Tests for pooling layers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python import keras
from tensorflow.python.eager import context
from tensorflow.python.framework import test_util as tf_test_util
from tensorflow.python.keras import testing_utils
from tensorflow.python.platform import test
class GlobalPoolingTest(test.TestCase):
@tf_test_util.run_in_graph_and_eager_modes
def test_globalpooling_1d(self):
testing_utils.layer_test(keras.layers.pooling.GlobalMaxPooling1D,
input_shape=(3, 4, 5))
testing_utils.layer_test(keras.layers.pooling.GlobalMaxPooling1D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 4, 5))
testing_utils.layer_test(
keras.layers.pooling.GlobalAveragePooling1D, input_shape=(3, 4, 5))
testing_utils.layer_test(keras.layers.pooling.GlobalAveragePooling1D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 4, 5))
@tf_test_util.run_in_graph_and_eager_modes
def test_globalpooling_1d_masking_support(self):
model = keras.Sequential()
model.add(keras.layers.Masking(mask_value=0., input_shape=(3, 4)))
model.add(keras.layers.GlobalAveragePooling1D())
model.compile(loss='mae', optimizer='rmsprop')
model_input = np.random.random((2, 3, 4))
model_input[0, 1:, :] = 0
output = model.predict(model_input)
self.assertAllClose(output[0], model_input[0, 0, :])
@tf_test_util.run_in_graph_and_eager_modes
def test_globalpooling_2d(self):
testing_utils.layer_test(
keras.layers.pooling.GlobalMaxPooling2D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 4, 5, 6))
testing_utils.layer_test(
keras.layers.pooling.GlobalMaxPooling2D,
kwargs={'data_format': 'channels_last'},
input_shape=(3, 5, 6, 4))
testing_utils.layer_test(
keras.layers.pooling.GlobalAveragePooling2D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 4, 5, 6))
testing_utils.layer_test(
keras.layers.pooling.GlobalAveragePooling2D,
kwargs={'data_format': 'channels_last'},
input_shape=(3, 5, 6, 4))
@tf_test_util.run_in_graph_and_eager_modes
def test_globalpooling_3d(self):
testing_utils.layer_test(
keras.layers.pooling.GlobalMaxPooling3D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 4, 3, 4, 3))
testing_utils.layer_test(
keras.layers.pooling.GlobalMaxPooling3D,
kwargs={'data_format': 'channels_last'},
input_shape=(3, 4, 3, 4, 3))
testing_utils.layer_test(
keras.layers.pooling.GlobalAveragePooling3D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 4, 3, 4, 3))
testing_utils.layer_test(
keras.layers.pooling.GlobalAveragePooling3D,
kwargs={'data_format': 'channels_last'},
input_shape=(3, 4, 3, 4, 3))
class Pooling2DTest(test.TestCase):
@tf_test_util.run_in_graph_and_eager_modes
def test_maxpooling_2d(self):
pool_size = (3, 3)
for strides in [(1, 1), (2, 2)]:
testing_utils.layer_test(
keras.layers.MaxPooling2D,
kwargs={
'strides': strides,
'padding': 'valid',
'pool_size': pool_size
},
input_shape=(3, 5, 6, 4))
@tf_test_util.run_in_graph_and_eager_modes
def test_averagepooling_2d(self):
testing_utils.layer_test(
keras.layers.AveragePooling2D,
kwargs={'strides': (2, 2),
'padding': 'same',
'pool_size': (2, 2)},
input_shape=(3, 5, 6, 4))
testing_utils.layer_test(
keras.layers.AveragePooling2D,
kwargs={'strides': (2, 2),
'padding': 'valid',
'pool_size': (3, 3)},
input_shape=(3, 5, 6, 4))
# This part of the test can only run on GPU but doesn't appear
# to be properly assigned to a GPU when running in eager mode.
if not context.executing_eagerly():
# Only runs on GPU with CUDA, channels_first is not supported on CPU.
# TODO(b/62340061): Support channels_first on CPU.
if test.is_gpu_available(cuda_only=True):
testing_utils.layer_test(
keras.layers.AveragePooling2D,
kwargs={
'strides': (1, 1),
'padding': 'valid',
'pool_size': (2, 2),
'data_format': 'channels_first'
},
input_shape=(3, 4, 5, 6))
class Pooling3DTest(test.TestCase):
@tf_test_util.run_in_graph_and_eager_modes
def test_maxpooling_3d(self):
if test.is_built_with_rocm():
self.skipTest('Pooling with 3D tensors is not supported in ROCm')
pool_size = (3, 3, 3)
testing_utils.layer_test(
keras.layers.MaxPooling3D,
kwargs={'strides': 2,
'padding': 'valid',
'pool_size': pool_size},
input_shape=(3, 11, 12, 10, 4))
testing_utils.layer_test(
keras.layers.MaxPooling3D,
kwargs={
'strides': 3,
'padding': 'valid',
'data_format': 'channels_first',
'pool_size': pool_size
},
input_shape=(3, 4, 11, 12, 10))
@tf_test_util.run_in_graph_and_eager_modes
def test_averagepooling_3d(self):
if test.is_built_with_rocm():
self.skipTest('Pooling with 3D tensors is not supported in ROCm')
pool_size = (3, 3, 3)
testing_utils.layer_test(
keras.layers.AveragePooling3D,
kwargs={'strides': 2,
'padding': 'valid',
'pool_size': pool_size},
input_shape=(3, 11, 12, 10, 4))
testing_utils.layer_test(
keras.layers.AveragePooling3D,
kwargs={
'strides': 3,
'padding': 'valid',
'data_format': 'channels_first',
'pool_size': pool_size
},
input_shape=(3, 4, 11, 12, 10))
class Pooling1DTest(test.TestCase):
@tf_test_util.run_in_graph_and_eager_modes
def test_maxpooling_1d(self):
for padding in ['valid', 'same']:
for stride in [1, 2]:
testing_utils.layer_test(
keras.layers.MaxPooling1D,
kwargs={'strides': stride,
'padding': padding},
input_shape=(3, 5, 4))
testing_utils.layer_test(
keras.layers.MaxPooling1D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 2, 6))
@tf_test_util.run_in_graph_and_eager_modes
def test_averagepooling_1d(self):
for padding in ['valid', 'same']:
for stride in [1, 2]:
testing_utils.layer_test(
keras.layers.AveragePooling1D,
kwargs={'strides': stride,
'padding': padding},
input_shape=(3, 5, 4))
testing_utils.layer_test(
keras.layers.AveragePooling1D,
kwargs={'data_format': 'channels_first'},
input_shape=(3, 2, 6))
if __name__ == '__main__':
test.main()
| {
"content_hash": "61e4ed55354e74f0029c14b19c0a84cc",
"timestamp": "",
"source": "github",
"line_count": 207,
"max_line_length": 75,
"avg_line_length": 34.70048309178744,
"alnum_prop": 0.5900041765279131,
"repo_name": "DavidNorman/tensorflow",
"id": "b3bf875737158bc5785d062c709780a07e2a6204",
"size": "7872",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "tensorflow/python/keras/layers/pooling_test.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Assembly",
"bytes": "4913"
},
{
"name": "Batchfile",
"bytes": "15272"
},
{
"name": "C",
"bytes": "774469"
},
{
"name": "C#",
"bytes": "8562"
},
{
"name": "C++",
"bytes": "74659044"
},
{
"name": "CMake",
"bytes": "6545"
},
{
"name": "Dockerfile",
"bytes": "79827"
},
{
"name": "Go",
"bytes": "1670422"
},
{
"name": "HTML",
"bytes": "4680032"
},
{
"name": "Java",
"bytes": "827737"
},
{
"name": "Jupyter Notebook",
"bytes": "540800"
},
{
"name": "LLVM",
"bytes": "6536"
},
{
"name": "MLIR",
"bytes": "1004638"
},
{
"name": "Makefile",
"bytes": "66660"
},
{
"name": "Objective-C",
"bytes": "105247"
},
{
"name": "Objective-C++",
"bytes": "297569"
},
{
"name": "PHP",
"bytes": "23553"
},
{
"name": "Pascal",
"bytes": "3752"
},
{
"name": "Pawn",
"bytes": "14529"
},
{
"name": "Perl",
"bytes": "7536"
},
{
"name": "Python",
"bytes": "37406546"
},
{
"name": "RobotFramework",
"bytes": "891"
},
{
"name": "Ruby",
"bytes": "4706"
},
{
"name": "Shell",
"bytes": "452517"
},
{
"name": "Smarty",
"bytes": "31460"
},
{
"name": "Swift",
"bytes": "62814"
}
],
"symlink_target": ""
} |
import unittest
from unittest import mock
from google.api_core.gapic_v1.method import DEFAULT
from google.cloud.tasks_v2.types import Queue, Task
from airflow.providers.google.cloud.operators.tasks import (
CloudTasksQueueCreateOperator,
CloudTasksQueueDeleteOperator,
CloudTasksQueueGetOperator,
CloudTasksQueuePauseOperator,
CloudTasksQueuePurgeOperator,
CloudTasksQueueResumeOperator,
CloudTasksQueuesListOperator,
CloudTasksQueueUpdateOperator,
CloudTasksTaskCreateOperator,
CloudTasksTaskDeleteOperator,
CloudTasksTaskGetOperator,
CloudTasksTaskRunOperator,
CloudTasksTasksListOperator,
)
GCP_CONN_ID = "google_cloud_default"
PROJECT_ID = "test-project"
LOCATION = "asia-east2"
FULL_LOCATION_PATH = "projects/test-project/locations/asia-east2"
QUEUE_ID = "test-queue"
FULL_QUEUE_PATH = "projects/test-project/locations/asia-east2/queues/test-queue"
TASK_NAME = "test-task"
FULL_TASK_PATH = "projects/test-project/locations/asia-east2/queues/test-queue/tasks/test-task"
TEST_QUEUE = Queue(name=FULL_QUEUE_PATH)
TEST_TASK = Task(app_engine_http_request={})
class TestCloudTasksQueueCreate(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_create_queue(self, mock_hook):
mock_hook.return_value.create_queue.return_value = TEST_QUEUE
operator = CloudTasksQueueCreateOperator(location=LOCATION, task_queue=TEST_QUEUE, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert {'name': FULL_QUEUE_PATH, 'state': 0} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.create_queue.assert_called_once_with(
location=LOCATION,
task_queue=TEST_QUEUE,
project_id=None,
queue_name=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueueUpdate(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_update_queue(self, mock_hook):
mock_hook.return_value.update_queue.return_value = TEST_QUEUE
operator = CloudTasksQueueUpdateOperator(task_queue=Queue(name=FULL_QUEUE_PATH), task_id="id")
result = operator.execute(context=mock.MagicMock())
assert {'name': FULL_QUEUE_PATH, 'state': 0} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.update_queue.assert_called_once_with(
task_queue=Queue(name=FULL_QUEUE_PATH),
project_id=None,
location=None,
queue_name=None,
update_mask=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueueGet(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_get_queue(self, mock_hook):
mock_hook.return_value.get_queue.return_value = TEST_QUEUE
operator = CloudTasksQueueGetOperator(location=LOCATION, queue_name=QUEUE_ID, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert {'name': FULL_QUEUE_PATH, 'state': 0} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.get_queue.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
project_id=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueuesList(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_list_queues(self, mock_hook):
mock_hook.return_value.list_queues.return_value = [TEST_QUEUE]
operator = CloudTasksQueuesListOperator(location=LOCATION, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert [{'name': FULL_QUEUE_PATH, 'state': 0}] == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.list_queues.assert_called_once_with(
location=LOCATION,
project_id=None,
results_filter=None,
page_size=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueueDelete(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_delete_queue(self, mock_hook):
mock_hook.return_value.delete_queue.return_value = None
operator = CloudTasksQueueDeleteOperator(location=LOCATION, queue_name=QUEUE_ID, task_id="id")
operator.execute(context=mock.MagicMock())
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.delete_queue.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
project_id=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueuePurge(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_delete_queue(self, mock_hook):
mock_hook.return_value.purge_queue.return_value = TEST_QUEUE
operator = CloudTasksQueuePurgeOperator(location=LOCATION, queue_name=QUEUE_ID, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert {'name': FULL_QUEUE_PATH, 'state': 0} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.purge_queue.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
project_id=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueuePause(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_pause_queue(self, mock_hook):
mock_hook.return_value.pause_queue.return_value = TEST_QUEUE
operator = CloudTasksQueuePauseOperator(location=LOCATION, queue_name=QUEUE_ID, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert {'name': FULL_QUEUE_PATH, 'state': 0} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.pause_queue.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
project_id=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksQueueResume(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_resume_queue(self, mock_hook):
mock_hook.return_value.resume_queue.return_value = TEST_QUEUE
operator = CloudTasksQueueResumeOperator(location=LOCATION, queue_name=QUEUE_ID, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert {'name': FULL_QUEUE_PATH, 'state': 0} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.resume_queue.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
project_id=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksTaskCreate(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_create_task(self, mock_hook):
mock_hook.return_value.create_task.return_value = TEST_TASK
operator = CloudTasksTaskCreateOperator(
location=LOCATION, queue_name=QUEUE_ID, task=Task(), task_id="id"
)
result = operator.execute(context=mock.MagicMock())
assert {
'app_engine_http_request': {'body': '', 'headers': {}, 'http_method': 0, 'relative_uri': ''},
'dispatch_count': 0,
'name': '',
'response_count': 0,
'view': 0,
} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.create_task.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
task=Task(),
project_id=None,
task_name=None,
response_view=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksTaskGet(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_get_task(self, mock_hook):
mock_hook.return_value.get_task.return_value = TEST_TASK
operator = CloudTasksTaskGetOperator(
location=LOCATION, queue_name=QUEUE_ID, task_name=TASK_NAME, task_id="id"
)
result = operator.execute(context=mock.MagicMock())
assert {
'app_engine_http_request': {'body': '', 'headers': {}, 'http_method': 0, 'relative_uri': ''},
'dispatch_count': 0,
'name': '',
'response_count': 0,
'view': 0,
} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.get_task.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
task_name=TASK_NAME,
project_id=None,
response_view=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksTasksList(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_list_tasks(self, mock_hook):
mock_hook.return_value.list_tasks.return_value = [TEST_TASK]
operator = CloudTasksTasksListOperator(location=LOCATION, queue_name=QUEUE_ID, task_id="id")
result = operator.execute(context=mock.MagicMock())
assert [
{
'app_engine_http_request': {
'body': '',
'headers': {},
'http_method': 0,
'relative_uri': '',
},
'dispatch_count': 0,
'name': '',
'response_count': 0,
'view': 0,
}
] == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.list_tasks.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
project_id=None,
response_view=None,
page_size=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksTaskDelete(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_delete_task(self, mock_hook):
mock_hook.return_value.delete_task.return_value = None
operator = CloudTasksTaskDeleteOperator(
location=LOCATION, queue_name=QUEUE_ID, task_name=TASK_NAME, task_id="id"
)
operator.execute(context=mock.MagicMock())
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.delete_task.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
task_name=TASK_NAME,
project_id=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
class TestCloudTasksTaskRun(unittest.TestCase):
@mock.patch("airflow.providers.google.cloud.operators.tasks.CloudTasksHook")
def test_run_task(self, mock_hook):
mock_hook.return_value.run_task.return_value = TEST_TASK
operator = CloudTasksTaskRunOperator(
location=LOCATION, queue_name=QUEUE_ID, task_name=TASK_NAME, task_id="id"
)
result = operator.execute(context=mock.MagicMock())
assert {
'app_engine_http_request': {'body': '', 'headers': {}, 'http_method': 0, 'relative_uri': ''},
'dispatch_count': 0,
'name': '',
'response_count': 0,
'view': 0,
} == result
mock_hook.assert_called_once_with(
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=None,
)
mock_hook.return_value.run_task.assert_called_once_with(
location=LOCATION,
queue_name=QUEUE_ID,
task_name=TASK_NAME,
project_id=None,
response_view=None,
retry=DEFAULT,
timeout=None,
metadata=(),
)
| {
"content_hash": "6395cf805571b53f0a941c7b7969acc7",
"timestamp": "",
"source": "github",
"line_count": 382,
"max_line_length": 105,
"avg_line_length": 35.31937172774869,
"alnum_prop": 0.6077675659650164,
"repo_name": "danielvdende/incubator-airflow",
"id": "b13be1ca8dfd6d3e0be1641df6883c695729da20",
"size": "14280",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "tests/providers/google/cloud/operators/test_tasks.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "25785"
},
{
"name": "Dockerfile",
"bytes": "76693"
},
{
"name": "HCL",
"bytes": "3786"
},
{
"name": "HTML",
"bytes": "164512"
},
{
"name": "JavaScript",
"bytes": "236992"
},
{
"name": "Jinja",
"bytes": "37155"
},
{
"name": "Jupyter Notebook",
"bytes": "2929"
},
{
"name": "Mako",
"bytes": "1339"
},
{
"name": "Python",
"bytes": "21824455"
},
{
"name": "R",
"bytes": "313"
},
{
"name": "Shell",
"bytes": "495567"
},
{
"name": "TypeScript",
"bytes": "326556"
}
],
"symlink_target": ""
} |
from future import standard_library
standard_library.install_aliases()
from builtins import str
from builtins import range
from future.utils import iteritems
import codecs
import csv
import json
import re
import urllib.request, urllib.parse, urllib.error
from PDFUtilityFunctions import *
PACKAGE_MAP = {
'AR (ACCOUNTS RECEIVABLE)' : 'Accounts Receivable',
'AUTOMATED INFO COLLECTION SYS': 'Automated Information Collection System',
'AUTOMATED MED INFO EXCHANGE': 'Automated Medical Information Exchange',
'BAR CODE MED ADMIN': 'Barcode Medication Administration',
'CLINICAL INFO RESOURCE NETWORK': 'Clinical Information Resource Network',
'CLINICAL LEXICON UTILITY' : 'Lexicon Utility',
# u'DEVICE HANDLER',
# u'DISCHARGE SUMMARY',
'E CLAIMS MGMT ENGINE': 'E Claims Management Engine',
# u'EDUCATION TRACKING',
'EMERGENCY DEPARTMENT': 'Emergency Department Integration Software',
# u'EXTENSIBLE EDITOR',
# u'EXTERNAL PEER REVIEW',
'ENTERPRISE HEALTH MGMT PLATFORM' : 'Enterprise Health Management Platform',
'ENTERPRISE TERMINOLOGY SERVICE' : 'Enterprise Terminology Services',
'FEE BASIS CLAIMS SYSTEM' : 'Fee Basis',
'GEN. MED. REC. - GENERATOR': 'General Medical Record - Generator',
'GEN. MED. REC. - I/O' : 'General Medical Record - IO',
'GEN. MED. REC. - VITALS' : 'General Medical Record - Vitals',
# u'GRECC',
'HEALTH MANAGEMENT PLATFORM' : 'Enterprise Health Management Platform',
'INTEGRATED PATIENT FUNDS' : 'Integrated Patient Fund',
# u'INDIAN HEALTH SERVICE',
# u'INSURANCE CAPTURE BUFFER',
# u'IV PHARMACY',
'KERNEL (parent)' : 'Kernel',
'MASTER PATIENT INDEX': 'Master Patient Index VistA',
'MCCR BACKBILLING' : 'MCCR National Database - Field',
# u'MINIMAL PATIENT DATASET',
# u'MOBILE SCHEDULING APPLICATIONS SUITE',
# u'Missing Patient Register',
'MRSA INITIATIVE REPORTS' : 'Methicillin Resistant Staph Aurerus Initiative',
'MYHEALTHEVET': 'My HealtheVet',
'NATIONAL HEALTH INFO NETWORK' : 'National Health Information Network',
'OCCUPAT HEALTH RECORD-KEEPING': 'Occupational Health Record-Keeping System',
# u'NEW PERSON',
'PATIENT ASSESSMENT DOCUM' : 'Patient Assessment Documentation',
# u'PATIENT FILE',
# u'PROGRESS NOTES',
# u'QUALITY ASSURANCE',
# u'QUALITY IMPROVEMENT CHECKLIST',
# u'REAL TIME LOCATION SYSTEM',
'SHIFT CHANGE HANDOFF TOOL' : 'Shift Handoff Tool',
'TEXT INTEGRATION UTILITIES' : 'Text Integration Utility',
# u'UNIT DOSE PHARMACY',
'VA POINT OF SERVICE (KIOSKS)' : 'VA Point of Service',
# u'VDEM',
'VISTA INTEGRATION ADAPTOR' : 'VistA Integration Adapter',
'VENDOR - DOCUMENT STORAGE SYS' : 'Vendor - Document Storage Systems'
# u'VETERANS ADMINISTRATION',
# u'VOLUNTARY SERVICE SYSTEM',
# u'VPFS',
# u'cds',
# u'person.demographics',
# u'person.lookup',
# u'term',
# u'term.access'])
} # this is the mapping between CUSTODIAL PACKAGE and packages in Dox
PACKAGE_COMPONENT_MAP = {
"Option": "19",
"Function": ".5",
"List_Manager_Templates": "409.61",
"Dialog": ".84",
"Key": "19.1",
"Remote_Procedure": "8994",
"Protocol": "101",
"Help_Frame": "9.2",
"Form": ".403",
"Sort_Template": ".401",
"HL7_APPLICATION_PARAMETER": "771",
"Input_Template": ".402",
"Print_Template": ".4"
}
# Do not generate the graph if have more than 30 nodes
MAX_DEPENDENCY_LIST_SIZE = 30
COLOR_MAP = {
"Routine": "black",
"Option": "orangered",
"Function": "royalblue",
"List_Manager_Templates": "saddlebrown",
"Dialog": "turquoise",
"Key": "limegreen",
"Remote_Procedure": "firebrick",
"Protocol": "indigo",
"Help_Frame": "moccasin",
"Form": "cadetblue",
"Sort_Template": "salmon",
"HL7_APPLICATION_PARAMETER": "mediumvioletred",
"Input_Template": "skyblue",
"Print_Template": "yellowgreen",
"Global": "magenta"
}
###############################################################################
def cOpen(fileName, openParams):
return codecs.open(fileName, openParams, encoding="ISO-8859-1", errors="ignore")
def getDOXURL(local):
return "../dox"
def getViViaNURL(local):
return "../vivian"
def getFilesURL(local):
return "../vivian-data"
###############################################################################
def findDotColor(object):
return COLOR_MAP.get(object.getObjectType(), "black")
###############################################################################
def readIntoDictionary(infileName):
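    # Returns a dict keyed by the second CSV column (line[1]); each value is the
    # list of full rows sharing that key, with the header row skipped.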
values = {}
with open(infileName, "r") as templateData:
sniffer = csv.Sniffer()
dialect = sniffer.sniff(templateData.read(1024))
templateData.seek(0)
hasHeader = sniffer.has_header(templateData.read(1024))
templateData.seek(0)
for index, line in enumerate(csv.reader(templateData, dialect)):
if index == 0:
continue
if line[1] not in values:
values[line[1]] = []
values[line[1]].append(line)
return values
def parseICRJson(icrJson):
    # Reads in the ICR JSON file and generates a dictionary keyed by
    # CUSTODIAL PACKAGE.
    #
    # Each package maps to "ROUTINE", "GLOBAL" and "OTHER" sub-dictionaries:
    # "ROUTINE" is keyed by routine name and holds every entry that marks that
    # routine in its "ROUTINE" field; "GLOBAL" is keyed by global root; all
    # remaining entries go into "OTHER"["ENTRIES"].
    #
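    # Illustrative sketch of the returned shape (package/entry names hypothetical):
    #   { 'KERNEL': { 'ROUTINE': { 'XUS': [entry, ...] },
    #                 'GLOBAL':  { '^XTV(8989.3': [entry, ...] },
    #                 'OTHER':   { 'ENTRIES': [entry, ...] } } }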
parsedICRJSON = {}
with open(icrJson, 'r') as icrFile:
icrEntries = json.load(icrFile)
for entry in icrEntries:
if 'CUSTODIAL PACKAGE' in entry:
# Finding a Custodial Package means the entry should belong somewhere, for now
# we ignore those that don't have one
if not (entry['CUSTODIAL PACKAGE'] in parsedICRJSON):
# First time we come across a package, add dictionaries for the used types
parsedICRJSON[entry['CUSTODIAL PACKAGE']] = {}
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["ROUTINE"] = {}
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["GLOBAL"] = {}
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["OTHER"] = {}
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["OTHER"]["ENTRIES"] = []
if "ROUTINE" in entry:
if not (entry["ROUTINE"] in parsedICRJSON[entry['CUSTODIAL PACKAGE']]["ROUTINE"]):
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["ROUTINE"][entry["ROUTINE"]] = []
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["ROUTINE"][entry["ROUTINE"]].append(entry)
elif "GLOBAL ROOT" in entry:
globalRoot = entry['GLOBAL ROOT'].replace(',', '')
if not (globalRoot in parsedICRJSON[entry['CUSTODIAL PACKAGE']]["GLOBAL"]):
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["GLOBAL"][globalRoot] = []
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["GLOBAL"][globalRoot].append(entry)
else:
# Take all other entries into "OTHER", so that they can be shown on the package page
parsedICRJSON[entry['CUSTODIAL PACKAGE']]["OTHER"]["ENTRIES"].append(entry)
return parsedICRJSON
###############################################################################
def getGlobalHtmlFileName(globalVar):
if globalVar.isSubFile():
return getFileManSubFileHtmlFileNameByName(globalVar.getFileNo())
return getGlobalHtmlFileNameByName(globalVar.getName())
def getGlobalHtmlFileNameByName(globalName):
return ("Global_%s.html" %
normalizeGlobalName(globalName))
def normalizeGlobalName(globalName):
import base64
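    # urlsafe Base64 of the UTF-8 name keeps characters such as '^', '(' and '%'
    # (common in global names) out of the generated file names.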
return base64.urlsafe_b64encode(globalName.encode('utf-8')).decode("utf-8")
def getGlobalPDFFileNameByName(globalName):
return ("Global_%s.pdf" %
normalizeGlobalName(globalName))
def getFileManSubFileHtmlFileName(subFile):
return getFileManSubFileHtmlFileNameByName(subFile.getFileNo())
def getFileManSubFileHtmlFileNameByName(subFileNo):
return urllib.parse.quote("SubFile_%s.html" % subFileNo)
def getFileManSubFilePDFFileNameByName(subFileNo):
return urllib.parse.quote("SubFile_%s.pdf" % subFileNo)
def getPackageHtmlFileName(packageName):
return urllib.parse.quote("Package_%s.html" %
normalizePackageName(packageName))
def getPackagePdfFileName(packageName):
return urllib.parse.quote("Package_%s.pdf" %
normalizePackageName(packageName))
def getPackageDependencyHtmlFileName(packageName, depPackageName):
firstName = normalizePackageName(packageName)
secondName = normalizePackageName(depPackageName)
if firstName < secondName:
temp = firstName
firstName = secondName
secondName = temp
return "Package_%s-%s_detail.html" % (firstName, secondName)
def normalizePackageName(packageName):
if packageName in PACKAGE_MAP:
packageName = PACKAGE_MAP[packageName]
return packageName.replace(' ', '_').replace('-', "_").replace('.', '_') \
.replace('/', '_').replace('(', '_').replace(')', '_')
# Note: 'option' is the object NOT the name string
def getPackageObjHtmlFileName(option):
if "Global" in str(type(option)):
filename = getGlobalHtmlFileNameByName(option.getName())
else:
title = option.getObjectType()
optionName = option.getName()
filename = "%s_%s.html" % (title, normalizeName(optionName))
return filename
def getPackageComponentLink(option):
return urllib.parse.quote(getPackageObjHtmlFileName(option))
def getRoutineLink(routineName):
filename = "Routine_%s.html" % normalizeName(routineName)
return urllib.parse.quote(filename)
def getRoutineHRefLink(rtnName, dox_url, **kargs):
crossRef = None
if 'crossRef' in kargs:
crossRef = kargs['crossRef']
if crossRef:
routine = crossRef.getRoutineByName(rtnName)
if routine:
return '<a href=\"%s/%s\">%s</a>' % (dox_url,
getPackageComponentLink(routine),
rtnName)
return None
def getRoutinePdfFileName(routineName):
filename = "Routine_%s.pdf" % normalizeName(routineName)
return urllib.parse.quote(filename)
def getRoutineSourceHtmlFileName(routineName):
filename = "Routine_%s_source.html" % normalizeName(routineName)
return urllib.parse.quote(filename)
def normalizeName(name):
return re.sub("[ /.*?&<>:\\\"|]", '_', name)
def getDataEntryHtmlFileName(ien, fileNo):
return "%s-%s.html" % (fileNo, ien)
###############################################################################
def getKeys(data, func=int):
outKey = []
for key in data:
try:
func(key)
outKey.append(key)
except ValueError:
pass
outKey.sort()
return outKey
def sortDataEntryFloatFirst(data1, data2):
isData1Float = convertToType(data1, float)
isData2Float = convertToType(data2, float)
if isData1Float and isData2Float:
return (float(data1) > float(data2)) - (float(data1) < float(data2))
if isData1Float:
return -1 # float first
if isData2Float:
return 1
return (data1 > data2) - (data1 < data2)
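# Note: sortDataEntryFloatFirst is a cmp-style comparator (numeric-looking keys
# sort before plain strings). Under Python 3 it would typically be consumed via
# functools.cmp_to_key, e.g. (illustrative only):
#   sorted(keys, key=functools.cmp_to_key(sortDataEntryFloatFirst))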
def convertToType(data, convertFunc):
try:
convertFunc(data)
return True
except ValueError:
return False
###############################################################################
#==============================================================================
# return a tuple of Edge Label, Edge ToolTip, Edge Style
#==============================================================================
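# depMetricsList is expected to carry two counts per dependency type, in the
# order used by labelText below:
#   (R)->(R), (R)->(G), (F)->(F), (R)->(F), (PC)->(R), (G)->(R), (G)->(G)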
def getPackageGraphEdgePropsByMetrics(depMetricsList,
toolTipStartPackage,
toolTipEndPackage,
isEdgeLabel=True):
assert(len(depMetricsList) >= 8)
# default for routine only
toolTip =("Total %d routine(s) in %s called total %d routine(s) in %s" % (depMetricsList[0],
toolTipStartPackage,
depMetricsList[1],
toolTipEndPackage),
"Total %d routine(s) in %s accessed total %d global(s) in %s" % (depMetricsList[2],
toolTipStartPackage,
depMetricsList[3],
toolTipEndPackage),
"Total %d fileman file(s) in %s pointed to total %d fileman file(s) in %s" % (depMetricsList[4],
toolTipStartPackage,
depMetricsList[5],
toolTipEndPackage),
"Total %d routines(s) in %s accessed via fileman db calls to total %d fileman file(s) in %s" % (depMetricsList[6],
toolTipStartPackage,
depMetricsList[7],
toolTipEndPackage),
"Total %d Package Component(s) in %s accessed total %d Routine(s) in %s" % (depMetricsList[8],
toolTipStartPackage,
depMetricsList[9],
toolTipEndPackage),
"Total %d Global(s) in %s accessed total %d Routines(s) in %s" % (depMetricsList[10],
toolTipStartPackage,
depMetricsList[11],
toolTipEndPackage),
"Total %d Global(s) in %s accessed total %d Routines(s) in %s" % (depMetricsList[12],
toolTipStartPackage,
depMetricsList[13],
toolTipEndPackage)
)
labelText =("%s(R)->(R)%s" % (depMetricsList[0], depMetricsList[1]),
"%s(R)->(G)%s" % (depMetricsList[2], depMetricsList[3]),
"%s(F)->(F)%s" % (depMetricsList[4], depMetricsList[5]),
"%s(R)->(F)%s" % (depMetricsList[6], depMetricsList[7]),
"%s(PC)->(R)%s" % (depMetricsList[8], depMetricsList[9]),
"%s(G)->(R)%s" % (depMetricsList[10], depMetricsList[11]),
"%s(G)->(G)%s" % (depMetricsList[12], depMetricsList[13])
)
    metricValue = 0
    (edgeLabel, edgeToolTip, edgeStyle) = ("", "", "")
for i in range(0, 7):
if depMetricsList[i*2]:
if not edgeLabel:
edgeLabel = labelText[i]
elif isEdgeLabel:
edgeLabel = "%s\\n%s" % (edgeLabel, labelText[i])
else:
edgeLabel = "%s:%s" % (edgeLabel, labelText[i])
if edgeToolTip:
edgeToolTip = "%s. %s" % (edgeToolTip, toolTip[i])
else:
edgeToolTip = toolTip[i]
metricValue += 1 * 2**i
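    # metricValue is a bitmask: bit i is set when dependency type i is present;
    # the edge style below is a heuristic keyed off that mask.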
if metricValue >= 7:
edgeStyle = "bold"
elif metricValue == 2:
edgeStyle = "dashed"
elif metricValue == 4:
edgeStyle = "dotted"
else:
edgeStyle = "solid"
return (edgeLabel, edgeToolTip, edgeStyle)
#==============================================================================
## Method to generate a merged and sorted dependency list by package
#==============================================================================
def mergeAndSortDependencyListByPackage(package, isDependencyList):
depPackageMerged = mergePackageDependenciesList(package, isDependencyList)
    # sort packages by the sum of the caller-side counts (indices 0, 2, 4, 6)
depPackages = sorted(list(depPackageMerged.keys()),
key=lambda item: sum(depPackageMerged[item][0:7:2]),
reverse=True)
return (depPackages, depPackageMerged)
#==============================================================================
# Return a dict with package as key, and a list of 14 dependency counts as value
#==============================================================================
def mergePackageDependenciesList(package, isDependencies=True):
packageDepDict = dict()
if isDependencies:
routineDeps = package.getPackageRoutineDependencies()
globalDeps = package.getPackageGlobalDependencies()
globalRtnDeps = package.getPackageGlobalRoutineDependencies()
globalGblDeps = package.getPackageGlobalGlobalDependencies()
fileManDeps = package.getPackageFileManFileDependencies()
dbCallDeps = package.getPackageFileManDbCallDependencies()
optionDeps = package.getPackageComponentDependencies()
else:
routineDeps = package.getPackageRoutineDependents()
globalDeps = package.getPackageGlobalDependents()
globalRtnDeps = package.getPackageGlobalRoutineDependendents()
globalGblDeps = package.getPackageGlobalGlobalDependents()
fileManDeps = package.getPackageFileManFileDependents()
dbCallDeps = package.getPackageFileManDbCallDependents()
optionDeps = {}
for (package, depTuple) in iteritems(routineDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][0] = len(depTuple[0])
packageDepDict[package][1] = len(depTuple[1])
for (package, depTuple) in iteritems(globalDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][2] = len(depTuple[0])
packageDepDict[package][3] = len(depTuple[1])
for (package, depTuple) in iteritems(fileManDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][4] = len(depTuple[0])
packageDepDict[package][5] = len(depTuple[1])
for (package, depTuple) in iteritems(dbCallDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][6] = len(depTuple[0])
packageDepDict[package][7] = len(depTuple[1])
for (package, depTuple) in iteritems(optionDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][8] = len(depTuple[0])
packageDepDict[package][9] = len(depTuple[1])
for (package, depTuple) in iteritems(globalRtnDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][10] = len(depTuple[0])
packageDepDict[package][11] = len(depTuple[1])
for (package, depTuple) in iteritems(globalGblDeps):
if package not in packageDepDict:
packageDepDict[package] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
packageDepDict[package][12] = len(depTuple[0])
packageDepDict[package][13] = len(depTuple[1])
return packageDepDict
| {
"content_hash": "f4ba2f740758a0247adeced2965f72fc",
"timestamp": "",
"source": "github",
"line_count": 450,
"max_line_length": 128,
"avg_line_length": 44.15111111111111,
"alnum_prop": 0.5658848399436279,
"repo_name": "OSEHRA/VistA",
"id": "4e448343e9e30e0e6ea8f2f0675ebe64cbbbb21b",
"size": "20633",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "Utilities/Dox/PythonScripts/UtilityFunctions.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Batchfile",
"bytes": "6315"
},
{
"name": "Brightscript",
"bytes": "297"
},
{
"name": "CMake",
"bytes": "120463"
},
{
"name": "CSS",
"bytes": "132661"
},
{
"name": "Genshi",
"bytes": "72951258"
},
{
"name": "HTML",
"bytes": "2296661"
},
{
"name": "JavaScript",
"bytes": "2341060"
},
{
"name": "M",
"bytes": "483901"
},
{
"name": "PHP",
"bytes": "6750"
},
{
"name": "Pascal",
"bytes": "17825658"
},
{
"name": "Python",
"bytes": "1475872"
},
{
"name": "Ruby",
"bytes": "12147"
},
{
"name": "Shell",
"bytes": "98820"
}
],
"symlink_target": ""
} |
"""Boolean algebra module for SymPy"""
from sympy.core.basic import Basic
from sympy.core.operations import LatticeOp
from sympy.core.function import Application, sympify
class Boolean(Basic):
"""A boolean object is an object for which logic operations make sense."""
__slots__ = []
def __and__(self, other):
"""Overloading for & operator"""
return And(self, other)
def __or__(self, other):
"""Overloading for |"""
return Or(self, other)
def __invert__(self):
"""Overloading for ~"""
return Not(self)
def __rshift__(self, other):
"""Overloading for >>"""
return Implies(self, other)
def __lshift__(self, other):
"""Overloading for <<"""
return Implies(other, self)
class BooleanFunction(Application, Boolean):
"""Boolean function is a function that lives in a boolean space
It is used as base class for And, Or, Not, etc.
"""
pass
class And(LatticeOp, BooleanFunction):
"""
Logical AND function.
It evaluates its arguments in order, giving False immediately if any of them
are False, and True if they are all True.
Examples:
>>> from sympy.core import symbols
>>> from sympy.abc import x, y
>>> x & y
And(x, y)
"""
zero = False
identity = True
class Or(LatticeOp, BooleanFunction):
"""
Logical OR function
It evaluates its arguments in order, giving True immediately if any of them are
True, and False if they are all False.
"""
zero = True
identity = False
class Xor(BooleanFunction):
"""Logical XOR (exclusive OR) function.
returns True if an odd number of the arguments are True, and the rest are False.
returns False if an even number of the arguments are True, and the rest are False.
"""
@classmethod
def eval(cls, *args):
if not args: return False
args = list(args)
A = args.pop()
while args:
B = args.pop()
A = Or(And(A, Not(B)), And(Not(A), B))
return A
class Not(BooleanFunction):
"""Logical Not function (negation)
Note: De Morgan rules applied automatically"""
@classmethod
def eval(cls, *args):
if len(args) > 1:
return map(cls, args)
arg = args[0]
if type(arg) is bool:
return not arg
# apply De Morgan Rules
if arg.func is And:
return Or(*[Not(a) for a in arg.args])
if arg.func is Or:
return And(*[Not(a) for a in arg.args])
if arg.func is Not:
return arg.args[0]
class Nand(BooleanFunction):
"""Logical NAND function.
It evaluates its arguments in order, giving True immediately if any
of them are False, and False if they are all True.
"""
@classmethod
def eval(cls, *args):
return Not(And(*args))
class Nor(BooleanFunction):
"""Logical NOR function.
It evaluates its arguments in order, giving False immediately if any
of them are True, and True if they are all False.
"""
@classmethod
def eval(cls, *args):
return Not(Or(*args))
class Implies(BooleanFunction):
"""Logical implication.
A implies B is equivalent to !A v B
"""
@classmethod
def eval(cls, *args):
if len(args) != 2:
            raise ValueError("%d operand(s) used for an Implies (pairs are required): %s" % (len(args), str(args)))
else:
return Or(Not(args[0]), args[1])
class Equivalent(BooleanFunction):
"""Equivalence relation.
Equivalent(A, B) is True if and only if A and B are both True or both False
"""
@classmethod
def eval(cls, *args):
argset = set(args)
if len(argset) <= 1:
return True
if True in argset:
argset.discard(True)
return And(*argset)
if False in argset:
argset.discard(False)
return Nor(*argset)
return Basic.__new__(cls, *set(args))
### end class definitions. Some useful methods
def fuzzy_not(arg):
"""
    Not in fuzzy logic.
    Returns the negation of arg if arg is a boolean value, and None if the
    argument is None.
>>> from sympy.logic.boolalg import fuzzy_not
>>> fuzzy_not(True)
False
>>> fuzzy_not(None)
>>> fuzzy_not(False)
True
"""
if arg is None:
return
return not arg
def conjuncts(expr):
"""Return a list of the conjuncts in the expr s.
>>> from sympy.logic.boolalg import conjuncts
>>> from sympy.abc import A, B
>>> conjuncts(A & B)
[A, B]
>>> conjuncts(A | B)
[Or(A, B)]
"""
from sympy.utilities import make_list
return make_list(expr, And)
def disjuncts(expr):
"""Return a list of the disjuncts in the sentence s.
>>> from sympy.logic.boolalg import disjuncts
>>> from sympy.abc import A, B
>>> disjuncts(A | B)
[A, B]
>>> disjuncts(A & B)
[And(A, B)]
"""
from sympy.utilities import make_list
return make_list(expr, Or)
def distribute_and_over_or(expr):
"""
Given a sentence s consisting of conjunctions and disjunctions
of literals, return an equivalent sentence in CNF.
"""
if expr.func is Or:
for arg in expr.args:
if arg.func is And:
conj = arg
break
else:
return expr
rest = Or(*[a for a in expr.args if a is not conj])
return And(*map(distribute_and_over_or,
[Or(c, rest) for c in conj.args]))
elif expr.func is And:
return And(*map(distribute_and_over_or, expr.args))
else:
return expr
def to_cnf(expr):
"""Convert a propositional logical sentence s to conjunctive normal form.
That is, of the form ((A | ~B | ...) & (B | C | ...) & ...)
Examples:
>>> from sympy.logic.boolalg import to_cnf
>>> from sympy.abc import A, B, D
>>> to_cnf(~(A | B) | D)
And(Or(D, Not(A)), Or(D, Not(B)))
"""
expr = sympify(expr)
expr = eliminate_implications(expr)
return distribute_and_over_or(expr)
def eliminate_implications(expr):
"""Change >>, <<, and Equivalent into &, |, and ~. That is, return an
expression that is equivalent to s, but has only &, |, and ~ as logical
operators.
"""
expr = sympify(expr)
if expr.is_Atom:
return expr ## (Atoms are unchanged.)
args = map(eliminate_implications, expr.args)
if expr.func is Implies:
a, b = args[0], args[-1]
return (~a) | b
elif expr.func is Equivalent:
a, b = args[0], args[-1]
return (a | Not(b)) & (b | Not(a))
else:
return expr.func(*args)
def compile_rule(s):
"""Transforms a rule into a sympy expression
A rule is a string of the form "symbol1 & symbol2 | ..."
See sympy.assumptions.known_facts for examples of rules
TODO: can this be replaced by sympify ?
"""
import re
from sympy.core import Symbol
return eval(re.sub(r'([a-zA-Z0-9_.]+)', r'Symbol("\1")', s), {'Symbol' : Symbol})
def to_int_repr(clauses, symbols):
"""
    Takes clauses in CNF and puts them into an integer representation.
Examples:
>>> from sympy.logic.boolalg import to_int_repr
>>> from sympy.abc import x, y
>>> to_int_repr([x | y, y], [x, y]) == [set([1, 2]), set([2])]
True
"""
def append_symbol(arg, symbols):
if arg.func is Not:
return -(symbols.index(arg.args[0])+1)
else:
return symbols.index(arg)+1
from sympy.utilities import make_list
return [set(append_symbol(arg, symbols) for arg in make_list(c, Or)) \
for c in clauses]
| {
"content_hash": "9e277ffe0b24851750336769597b8d79",
"timestamp": "",
"source": "github",
"line_count": 281,
"max_line_length": 115,
"avg_line_length": 27.743772241992882,
"alnum_prop": 0.5815802975885069,
"repo_name": "mattpap/sympy-polys",
"id": "6cfcb115709de7a24b2185feefe0e7d43576cf5a",
"size": "7796",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "sympy/logic/boolalg.py",
"mode": "33261",
"license": "bsd-3-clause",
"language": [
{
"name": "Python",
"bytes": "8476904"
},
{
"name": "Scheme",
"bytes": "125"
}
],
"symlink_target": ""
} |
from django.contrib import admin
from django.db.models import get_model
class ProductRecordAdmin(admin.ModelAdmin):
list_display = ('product', 'num_views', 'num_basket_additions', 'num_purchases')
class UserProductViewAdmin(admin.ModelAdmin):
list_display = ('user', 'product', 'date_created')
class UserRecordAdmin(admin.ModelAdmin):
list_display = ('user', 'num_product_views', 'num_basket_additions', 'num_orders', 'total_spent', 'date_last_order')
admin.site.register(get_model('analytics', 'productrecord'), ProductRecordAdmin)
admin.site.register(get_model('analytics', 'userrecord'), UserRecordAdmin)
admin.site.register(get_model('analytics', 'usersearch'))
admin.site.register(get_model('analytics', 'userproductview'), UserProductViewAdmin)
| {
"content_hash": "f9c3eb833c185dca08d89b5b7a69ef78",
"timestamp": "",
"source": "github",
"line_count": 16,
"max_line_length": 120,
"avg_line_length": 48.1875,
"alnum_prop": 0.7509727626459144,
"repo_name": "aykut/django-oscar",
"id": "79b54b40000b052f691e8e06267b4e7a18a90759",
"size": "771",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "oscar/apps/analytics/admin.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [],
"symlink_target": ""
} |
"""
Unit testing of FontConverter class
"""
import logging
import os
import shutil
import time
import unittest
from font_converter import FontConverter
class TestFontConverter(unittest.TestCase):
def setUp(self):
super(TestFontConverter, self).setUp()
# disable log during tests
logging.disable(logging.CRITICAL)
map_file = os.path.join('resources', 'ships-map.json')
ttf_file = os.path.join('resources', 'xwing-miniatures-ships.ttf')
self._output_folder = os.path.join('output', 'unittest')
self.fc = FontConverter(map_file_path=map_file,
ttf_file_path=ttf_file,
output_folder=self._output_folder)
def test_init(self):
self.assertTrue(self.fc.init_font_converter(), 'Unable to init font converter')
self.assertTrue(os.path.exists(self._output_folder), 'Unable to create output folder')
def test_get_element_on_map(self):
self.fc.init_font_converter()
self.fc.get_elements_from_map()
self.assertNotEqual(len(self.fc.element_map), 0, 'Element map should not be empty')
def test_convert_2_images(self):
self.fc.init_font_converter()
self.fc.get_elements_from_map()
self.fc.convert_2_images(color='black', point_size=50, file_format='gif')
self.assertNotEqual(len(os.listdir(self._output_folder)), 0, 'Output folder should not be empty')
def test_convert_2_images_wrong_color(self):
self.fc.init_font_converter()
self.fc.get_elements_from_map()
with self.assertRaises(AttributeError) as context:
self.fc.convert_2_images(color='WRONG_COLOR')
self.assertIn("WRONG_COLOR", context.exception.message, "Should display error message")
def test_convert_2_images_wrong_format(self):
self.fc.init_font_converter()
self.fc.get_elements_from_map()
with self.assertRaises(AttributeError) as context:
self.fc.convert_2_images(file_format='jpg')
self.assertIn("jpg", context.exception.message, "Should display error message")
def test_trim_images(self):
self.fc.init_font_converter()
self.fc.get_elements_from_map()
self.fc.convert_2_images(color='black', point_size=50, file_format='gif')
files_list = os.listdir(self._output_folder)
old_date = {}
for f_name in files_list:
f_stat = os.stat(os.path.join(self._output_folder, f_name))
old_date[f_name] = f_stat.st_atime
self.fc.trim_images()
time.sleep(1)
new_date = {}
for f_name in files_list:
f_stat = os.stat(os.path.join(self._output_folder, f_name))
new_date[f_name] = f_stat.st_atime
diff = set(old_date.values()).intersection(new_date.values())
        self.assertEqual(len(diff), 0, "No updated date found, meaning image was not updated")
    def test_resize_images(self):
self.fc.init_font_converter()
self.fc.get_elements_from_map()
self.fc.convert_2_images(color='black', point_size=50, file_format='gif')
files_list = os.listdir(self._output_folder)
old_date = {}
for f_name in files_list:
f_stat = os.stat(os.path.join(self._output_folder, f_name))
old_date[f_name] = f_stat.st_atime
self.fc.resize_images(20)
time.sleep(1)
new_date = {}
for f_name in files_list:
f_stat = os.stat(os.path.join(self._output_folder, f_name))
new_date[f_name] = f_stat.st_atime
diff = set(old_date.values()).intersection(new_date.values())
        self.assertEqual(len(diff), 0, "No updated date found, meaning image was not updated")
def tearDown(self):
shutil.rmtree(self._output_folder)
if __name__ == '__main__':
unittest.main()
| {
"content_hash": "2724d0c4d28b3514f4e52171f0140b57",
"timestamp": "",
"source": "github",
"line_count": 98,
"max_line_length": 105,
"avg_line_length": 39.33673469387755,
"alnum_prop": 0.6249027237354086,
"repo_name": "kalhamaar/xwing-font-converter",
"id": "bd06fe596274a99ded2a324920cd10d951a0a7f6",
"size": "3855",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "xwing_font_converter/test_font_converter.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "31195"
}
],
"symlink_target": ""
} |
import sys
import os
import shlex
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.todo',
'sphinx.ext.coverage',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'seamus'
copyright = '2016, Ankit Chandawala'
author = 'Ankit Chandawala'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.0.1'
# The full version, including alpha/beta/rc tags.
release = '0.0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "sphinx_rtd_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'seamusdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'seamus.tex', 'seamus Documentation',
'Ankit Chandawala', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'seamus', 'seamus Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'seamus', 'seamus Documentation',
author, 'seamus', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
| {
"content_hash": "2f0e1afe723861aca57c78668d94d808",
"timestamp": "",
"source": "github",
"line_count": 277,
"max_line_length": 79,
"avg_line_length": 32.234657039711195,
"alnum_prop": 0.7037742188374958,
"repo_name": "nerandell/seamus",
"id": "56beb17eb78f1e921be7b8e9f20cb1c35d58a0ab",
"size": "9371",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "docs/conf.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "8535"
}
],
"symlink_target": ""
} |
from __future__ import absolute_import
import gc
import os
import sys
import time
import errno
import fcntl
import signal
import socket
import logging
import marshal
import threading
import subprocess
import multiprocessing
import six
from six.moves import socketserver, cPickle, SimpleHTTPServer, urllib
import resource
import zmq
from addict import Dict
from pymesos import Executor, MesosExecutorDriver, encode_data, decode_data
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
from dpark.utils import (
compress, decompress, spawn, mkdir_p, DparkUserFatalError
)
from dpark.utils.log import get_logger, init_dpark_logger, formatter_message
from dpark.utils.memory import ERROR_TASK_OOM, set_oom_score
from dpark.serialize import marshalable
from dpark.accumulator import Accumulator
from dpark.env import env
from dpark.mutable_dict import MutableDict
from dpark.serialize import loads
from dpark.task import TTID, TaskState, TaskEndReason, FetchFailed
from dpark.utils.debug import spawn_rconsole
from dpark.shuffle import ShuffleWorkDir
logger = get_logger('dpark.executor')
TASK_RESULT_LIMIT = 1024 * 256
DEFAULT_WEB_PORT = 5055
MAX_EXECUTOR_IDLE_TIME = 60 * 60 * 24 # 1 day
KILL_TIMEOUT = 0.1 # 0.1 sec, to reply to mesos fast
TASK_LOST_JOIN_TIMEOUT = 3
TASK_LOST_DISCARD_TIMEOUT = 60
Script = ''
def setproctitle(x):
try:
from setproctitle import setproctitle as _setproctitle
_setproctitle(x)
except ImportError:
pass
def reply_status(driver, task_id, state, reason=None, msg=None, data=None):
status = Dict()
status.task_id = task_id
status.state = state
if reason is not None:
status.message = '{}:{}'.format(reason, msg)
status.timestamp = time.time()
if data is not None:
status.data = encode_data(data)
driver.sendStatusUpdate(status)
def run_task(task_data):
try:
gc.disable()
task, task_try_id = loads(decompress(task_data))
ttid = TTID(task_try_id)
Accumulator.clear()
result = task.run(ttid.ttid)
env.task_stats.bytes_max_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
accUpdate = Accumulator.values()
MutableDict.flush()
if marshalable(result):
try:
flag, data = 0, marshal.dumps(result)
except Exception:
flag, data = 1, cPickle.dumps(result, -1)
else:
flag, data = 1, cPickle.dumps(result, -1)
data = compress(data)
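        # flag encodes how the result is packed: 0 = marshal, 1 = cPickle; the
        # "+2" below means the compressed payload was spilled to a shuffle file
        # and `data` carries a URI to it instead of the bytes themselves.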
if len(data) > TASK_RESULT_LIMIT:
# shuffle_id start from 1
swd = ShuffleWorkDir(0, task.id, ttid.task_try)
tmppath = swd.alloc_tmp(len(data))
            with open(tmppath, 'wb') as f:
                f.write(data)
path = swd.export(tmppath)
data = '/'.join(
[env.server_uri] + path.split('/')[-3:]
)
flag += 2
return TaskState.finished, cPickle.dumps(((flag, data), accUpdate, env.task_stats), -1)
except FetchFailed as e:
return TaskState.failed, TaskEndReason.fetch_failed, str(e), cPickle.dumps(e)
except Exception as e:
import traceback
msg = traceback.format_exc()
ename = e.__class__.__name__
fatal_exceptions = (DparkUserFatalError, ArithmeticError,
ValueError, LookupError, SyntaxError,
TypeError, AssertionError)
prefix = "FATAL" if isinstance(e, fatal_exceptions) else "FAILED"
return TaskState.failed, '{}_EXCEPTION_{}'.format(prefix, ename), msg, cPickle.dumps(e)
finally:
gc.collect()
gc.enable()
class LocalizedHTTP(SimpleHTTPServer.SimpleHTTPRequestHandler):
basedir = None
def translate_path(self, path):
out = SimpleHTTPServer.SimpleHTTPRequestHandler.translate_path(
self, path)
return self.basedir + '/' + os.path.relpath(out)
def log_message(self, format, *args):
pass
def startWebServer(path):
# check the default web server
if not os.path.exists(path):
os.makedirs(path)
testpath = os.path.join(path, 'test')
with open(testpath, 'w') as f:
f.write(path)
default_uri = 'http://%s:%d/%s' % (socket.gethostname(), DEFAULT_WEB_PORT,
os.path.basename(path))
try:
data = urllib.request.urlopen(default_uri + '/' + 'test').read()
if data == path.encode('utf-8'):
return default_uri
except IOError:
pass
logger.warning('default webserver at %s not available', DEFAULT_WEB_PORT)
LocalizedHTTP.basedir = os.path.dirname(path)
ss = socketserver.TCPServer(('0.0.0.0', 0), LocalizedHTTP)
spawn(ss.serve_forever)
uri = 'http://%s:%d/%s' % (socket.gethostname(), ss.server_address[1],
os.path.basename(path))
return uri
def terminate(tid, proc):
name = 'worker(tid: %s, pid: %s)' % (tid, proc.pid)
try:
os.kill(proc.pid, signal.SIGTERM)
except Exception as e:
if proc.join(timeout=KILL_TIMEOUT / 2) is None:
try:
os.kill(proc.pid, signal.SIGKILL)
except Exception as e:
if proc.join(KILL_TIMEOUT / 2) is not None:
logger.exception('%s terminate fail', name)
def get_task_memory(task):
for r in task.resources:
if r.name == 'mem':
return r.scalar.value
logger.error('no memory in resource: %s', task.resources)
return 100 # 100M
def safe(f):
def _(self, *a, **kw):
with self.lock:
r = f(self, *a, **kw)
return r
return _
class Redirect(object):
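    # Redirects an OS-level file descriptor (e.g. stdout/stderr) into a pipe and
    # forwards complete lines over a zmq PUSH socket to `addr`, prefixing each
    # batch with `prefix` so the receiving logger can tell hosts apart.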
def __init__(self, fd, addr, prefix):
self.fd = fd
self.addr = addr
self.prefix = prefix
self.fd_dup = os.dup(self.fd)
self.origin_wfile = None
self.pipe_rfd, self.pipe_wfd = os.pipe()
self.pipe_rfile = os.fdopen(self.pipe_rfd, 'rb')
self.pipe_wfile = os.fdopen(self.pipe_wfd, 'wb', 0)
os.close(self.fd)
os.dup2(self.pipe_wfd, self.fd)
# assert os.dup(self.pipe_wfd) == self.fd, 'redirect io failed'
self.ctx = zmq.Context()
self._shutdown = False
self.thread = None
self.sock = None
self.thread = spawn(self._forward)
def reset(self):
err = None
try:
self._shutdown = True
self.pipe_wfile.close()
os.close(self.fd)
self.thread.join(1)
if self.sock:
self.sock.close()
self.ctx.destroy()
except Exception as e:
err = e
os.dup2(self.fd_dup, self.fd) # will close fd first
self.origin_wfile = os.fdopen(self.fd, 'wb', 0)
logger.debug('should see me in sandbox')
if err:
            logger.error('redirect reset err: %s', err)
if self.thread.isAlive():
logger.error('redirect thread not exit')
return self.origin_wfile
def _send(self, buf):
if not self.sock:
self.sock = self.ctx.socket(zmq.PUSH)
self.sock.setsockopt(zmq.LINGER, 0)
self.sock.connect(self.addr)
data = self.prefix + ''.join(buf)
while not self._shutdown:
try:
self.sock.send(data, zmq.NOBLOCK)
return
except zmq.Again:
time.sleep(0.1)
continue
def _forward(self):
buf = []
try:
while not self._shutdown:
try:
line = self.pipe_rfile.readline()
if not line:
break
buf.append(line)
if line.endswith('\n'):
self._send(buf)
buf = []
except IOError:
break
if buf:
self._send(buf)
except Exception as e:
logger.error('_forward err: %s', e)
class MyExecutor(Executor):
def __init__(self):
# task_id.value -> (task, process)
self.tasks = {}
# task_id.value -> (task, timestamp)
self.finished_tasks = {}
# (task_id.value, (status, data))
self.result_queue = multiprocessing.Queue()
self.lock = threading.RLock()
# Keep the file descriptor of current workdir,
# so we can check whether a workdir is in use externally.
self._fd_for_locks = []
self.stdout_redirect = None
self.stderr_redirect = None
def check_alive(self, driver):
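        # Polls worker processes in a loop: tasks whose process died without
        # replying are reported as failed (common exit codes are mapped to a
        # TaskEndReason), and the executor exits after being idle for
        # MAX_EXECUTOR_IDLE_TIME.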
try:
import psutil
except ImportError:
logger.error('no psutil module')
return
idle_since = time.time()
kill_ecs = [-signal.SIGTERM]
while True:
with self.lock:
tasks = self.tasks.items()
tids_to_pop = []
for tid, (task, proc) in tasks:
task_id = task.task_id
name = "task %s (pid = %d)" % (tid, proc.pid)
proc_end = False
reason = None
msg = None
try:
p = psutil.Process(proc.pid)
except Exception:
proc_end = True
if proc_end or p.status() == psutil.STATUS_ZOMBIE or (not p.is_running()):
proc.join(TASK_LOST_JOIN_TIMEOUT) # join in py2 not return exitcode
ec = proc.exitcode
if ec == 0: # p.status() == psutil.STATUS_ZOMBIE
continue # handled in replier
if ec is not None:
proc_end = True
msg = 'exitcode: {}'.format(ec)
if ec in kill_ecs:
reason = TaskEndReason.recv_sig
elif ec == -signal.SIGKILL:
reason = TaskEndReason.recv_sig_kill
elif ec == ERROR_TASK_OOM:
reason = TaskEndReason.task_oom
else:
reason = TaskEndReason.other_ecs
logger.warning('%s lost with exit code: %s', tid, ec)
else:
try:
os.waitpid(proc.pid, os.WNOHANG)
except OSError as e:
proc_end = True
if e.errno != errno.ECHILD:
logger.exception('%s lost, raise exception when waitpid', tid)
else:
t = self.finished_tasks.get(tid)
if t is not None and time.time() - t > TASK_LOST_DISCARD_TIMEOUT:
logger.warning('%s is zombie for %d secs, discard it!', name, TASK_LOST_DISCARD_TIMEOUT)
if proc_end:
tids_to_pop.append(tid)
reply_status(driver, task_id, TaskState.failed, reason, msg)
with self.lock:
for tid_ in tids_to_pop:
try:
self.tasks.pop(tid_)
except:
pass
now = time.time()
if self.tasks:
idle_since = now
elif idle_since + MAX_EXECUTOR_IDLE_TIME < now:
os._exit(0)
time.sleep(1)
def _try_flock(self, path):
fd = os.open(path, os.O_RDONLY)
try:
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError as e:
try:
pids = subprocess.check_output(['fuser', path]).split()
curr_pid = os.getpid()
logger.warning(
'current process: %s, processes that are using %s: %s',
curr_pid, path, pids)
except Exception:
pass
raise e
self._fd_for_locks.append(fd)
@safe
def registered(self, driver, executorInfo, frameworkInfo, agent_info):
try:
global Script
(
Script, cwd, python_path, osenv, self.parallel,
out_logger, err_logger, logLevel, use_color, dpark_env
) = marshal.loads(decode_data(executorInfo.data))
sys.path = python_path
os.environ.update(osenv)
setproctitle('[Executor]' + Script)
prefix = formatter_message(
'{MAGENTA}[%s]{RESET} ' % socket.gethostname().ljust(10),
use_color
)
init_dpark_logger(logLevel, use_color=use_color)
logging.root.setLevel(logLevel)
r1 = self.stdout_redirect = Redirect(1, out_logger, prefix)
sys.stdout = r1.pipe_wfile
r2 = self.stderr_redirect = Redirect(2, err_logger, prefix)
sys.stderr = r2.pipe_wfile
spawn_rconsole(locals())
if os.path.exists(cwd):
try:
os.chdir(cwd)
except Exception as e:
logger.warning('change cwd to %s failed: %s', cwd, e)
else:
logger.warning('cwd (%s) not exists', cwd)
env.workdir.init(dpark_env.get(env.DPARK_ID))
self._try_flock(env.workdir.main)
dpark_env['SERVER_URI'] = startWebServer(env.workdir.main)
if 'MESOS_SLAVE_PID' in os.environ: # make unit test happy
env.workdir.setup_cleaner_process()
spawn(self.check_alive, driver)
spawn(self.replier, driver)
env.environ.update(dpark_env)
from dpark.broadcast import start_download_manager
start_download_manager()
logger.debug('executor started at %s', agent_info.hostname)
except Exception as e:
import traceback
msg = traceback.format_exc()
logger.error('init executor failed: %s', msg)
raise
def replier(self, driver):
while True:
try:
result = self.result_queue.get()
if result is None:
return
reason = None
message = None
task_id_value, result = result
if result[0] == TaskState.failed:
state, reason, message, data = result
else:
state, data = result
with self.lock:
task, _ = self.tasks.pop(task_id_value)
self.finished_tasks[task_id_value] = time.time()
reply_status(driver, task.task_id, state, reason, message, data)
except Exception as e:
logger.warning('reply fail %s', e)
@safe
def launchTask(self, driver, task):
task_id = task.task_id
reply_status(driver, task_id, TaskState.running)
logger.debug('launch task %s', task.task_id.value)
def worker(procname, q, task_id_value, task_data):
task_id_str = "task %s" % (task_id_value,)
threading.current_thread().name = task_id_str
setproctitle(procname)
set_oom_score(100)
env.start_slave()
q.put((task_id_value, run_task(task_data)))
try:
name = '[Task-%s]%s' % (task.task_id.value, Script)
proc = multiprocessing.Process(target=worker,
args=(name,
self.result_queue,
task.task_id.value,
decode_data(task.data),))
proc.name = name
proc.daemon = True
proc.start()
self.tasks[task.task_id.value] = (task, proc)
except Exception as e:
import traceback
msg = traceback.format_exc()
reply_status(driver, task_id, TaskState.failed, TaskEndReason.launch_failed, msg, cPickle.dumps(e))
@safe
def killTask(self, driver, taskId):
reply_status(driver, taskId, TaskState.killed)
if taskId.value in self.tasks:
_, proc = self.tasks.pop(taskId.value)
terminate(taskId.value, proc)
@safe
def shutdown(self, driver=None):
for tid, (_, proc) in six.iteritems(self.tasks):
terminate(tid, proc)
self.tasks = {}
self.result_queue.put(None)
for fd in self._fd_for_locks:
os.close(fd)
if self.stdout_redirect:
sys.stdout = self.stdout_redirect.reset()
if self.stderr_redirect:
sys.stderr = self.stderr_redirect.reset()
def run():
setproctitle('Executor')
if os.getuid() == 0:
gid = os.environ['GID']
uid = os.environ['UID']
os.setgid(int(gid))
os.setuid(int(uid))
executor = MyExecutor()
driver = MesosExecutorDriver(executor, use_addict=True)
driver.run()
if __name__ == '__main__':
fmt = '%(asctime)-15s [%(levelname)s] [%(threadName)s] [%(name)-9s] %(message)s'
logging.basicConfig(format=fmt, level=logging.INFO)
run()
| {
"content_hash": "69d2205e3ff564e8d56c3f3f6f4e0ec2",
"timestamp": "",
"source": "github",
"line_count": 538,
"max_line_length": 120,
"avg_line_length": 32.45910780669145,
"alnum_prop": 0.5302639867147684,
"repo_name": "douban/dpark",
"id": "28bbd2316a725990ad75ed1ffe0c5c5b6ed86a04",
"size": "17463",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "dpark/executor.py",
"mode": "33261",
"license": "bsd-3-clause",
"language": [
{
"name": "C",
"bytes": "12283"
},
{
"name": "CSS",
"bytes": "2638"
},
{
"name": "Dockerfile",
"bytes": "1378"
},
{
"name": "HTML",
"bytes": "9696"
},
{
"name": "JavaScript",
"bytes": "25347"
},
{
"name": "Python",
"bytes": "672082"
},
{
"name": "Shell",
"bytes": "1865"
}
],
"symlink_target": ""
} |
"""Auto-generated file, do not edit by hand. FJ metadata"""
from ..phonemetadata import NumberFormat, PhoneNumberDesc, PhoneMetadata
PHONE_METADATA_FJ = PhoneMetadata(id='FJ', country_code=None, international_prefix=None,
general_desc=PhoneNumberDesc(national_number_pattern='[0-579]\\d(?:\\d(?:\\d{2})?)?', possible_length=(2, 3, 5)),
toll_free=PhoneNumberDesc(national_number_pattern='91[17]', example_number='911', possible_length=(3,)),
emergency=PhoneNumberDesc(national_number_pattern='91[17]', example_number='911', possible_length=(3,)),
short_code=PhoneNumberDesc(national_number_pattern='0(?:1[34]|8[1-4])|1(?:0[1-3]|[25]9)|2[289]|30|40404|91[137]|[45]4|75', example_number='22', possible_length=(2, 3, 5)),
sms_services=PhoneNumberDesc(national_number_pattern='404\\d\\d', example_number='40400', possible_length=(5,)),
short_data=True)
| {
"content_hash": "2743427a265e7c8bb758f72ced7d1c1d",
"timestamp": "",
"source": "github",
"line_count": 10,
"max_line_length": 175,
"avg_line_length": 87.3,
"alnum_prop": 0.7056128293241696,
"repo_name": "daviddrysdale/python-phonenumbers",
"id": "89655ddc890d76c09c263012bb4cdd602b4bfc9f",
"size": "873",
"binary": false,
"copies": "1",
"ref": "refs/heads/dev",
"path": "python/phonenumbers/shortdata/region_FJ.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Java",
"bytes": "3898"
},
{
"name": "Makefile",
"bytes": "9034"
},
{
"name": "Python",
"bytes": "22052087"
},
{
"name": "Ruby",
"bytes": "237"
}
],
"symlink_target": ""
} |
from . import census
class Stats(census.Stats):
namespace = "dcuo"
def __str__(self):
return "EVERQUST 2 STATS API"
| {
"content_hash": "8159b71067046215c59d7f3726719cb5",
"timestamp": "",
"source": "github",
"line_count": 7,
"max_line_length": 31,
"avg_line_length": 18.428571428571427,
"alnum_prop": 0.6434108527131783,
"repo_name": "abathur/dbg_census",
"id": "668ee95a7721f6eeb3b4d5d7d288918119a2e26c",
"size": "129",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "dbg_census/dcuo.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "3046"
}
],
"symlink_target": ""
} |
"""Tests for tensorflow.python.training.saver.py."""
import glob
import math
import os
import random
import time
import numpy as np
import six
from google.protobuf.any_pb2 import Any
from tensorflow.core.framework import summary_pb2
from tensorflow.core.protobuf import config_pb2
from tensorflow.core.protobuf import meta_graph_pb2
from tensorflow.core.protobuf import queue_runner_pb2
from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.core.protobuf import saver_pb2
from tensorflow.python.client import session
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.ops import iterator_ops
from tensorflow.python.eager import context
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import function
from tensorflow.python.framework import graph_io
from tensorflow.python.framework import meta_graph
from tensorflow.python.framework import ops as ops_lib
from tensorflow.python.framework import test_util
from tensorflow.python.lib.io import file_io
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import gradients_impl
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn_ops
from tensorflow.python.ops import partitioned_variables
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.ops import variables
import tensorflow.python.ops.nn_grad # pylint: disable=unused-import
from tensorflow.python.platform import gfile
from tensorflow.python.platform import test
from tensorflow.python.saved_model.pywrap_saved_model import metrics
from tensorflow.python.summary import summary
from tensorflow.python.training import adam
from tensorflow.python.training import checkpoint_management
from tensorflow.python.training import gradient_descent
from tensorflow.python.training import py_checkpoint_reader
from tensorflow.python.training import queue_runner_impl
from tensorflow.python.training import saver as saver_module
from tensorflow.python.training import saver_test_utils
from tensorflow.python.training.tracking import base as trackable_base
from tensorflow.python.util import compat
class SaverTest(test.TestCase):
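  # The cases below exercise the canonical v1 Saver flow, roughly:
  #   saver = saver_module.Saver({"v0": v0})
  #   path = saver.save(sess, ckpt_prefix)   # write a checkpoint
  #   saver.restore(sess, path)              # read it back into the graph
  # (sketch only; each test spells out its own variation)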
def basicSaveRestore(self, variable_op):
save_path = os.path.join(self.get_temp_dir(), "basic_save_restore")
with self.session(graph=ops_lib.Graph()) as sess:
# Build a graph with 2 parameter nodes, and Save and
# Restore nodes for them.
v0 = variable_op(10.0, name="v0")
v1 = variable_op(20.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
v2_init = v2.insert("k1", 30.0)
# Initialize all variables
if not context.executing_eagerly():
self.evaluate([variables.global_variables_initializer(), v2_init])
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(30.0, self.evaluate(v2.values()))
# Save the initialized values in the file at "save_path"
save = saver_module.Saver(
{
"v0": v0,
"v1": v1,
"v2": v2.saveable
}, restore_sequentially=True)
val = save.save(sess, save_path)
self.assertTrue(isinstance(val, six.string_types))
self.assertEqual(save_path, val)
# Start a second session. In that session the parameter nodes
# have not been initialized either.
with self.session(graph=ops_lib.Graph()) as sess:
v0 = variable_op(-1.0, name="v0")
v1 = variable_op(-1.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
# Assert that the variables are not initialized.
if not context.executing_eagerly():
self.assertEqual(
len(variables.report_uninitialized_variables().eval()), 2)
self.assertEqual(0, len(self.evaluate(v2.keys())))
self.assertEqual(0, len(self.evaluate(v2.values())))
# Restore the saved values in the parameter nodes.
save = saver_module.Saver({"v0": v0, "v1": v1, "v2": v2.saveable})
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(30.0, self.evaluate(v2.values()))
# Build another graph with 2 nodes, initialized
# differently, and a Restore node for them.
with self.session(graph=ops_lib.Graph()) as sess:
v0_2 = variable_op(1000.0, name="v0")
v1_2 = variable_op(2000.0, name="v1")
v2_2 = saver_test_utils.CheckpointedOp(name="v2")
v2_init = v2_2.insert("k1000", 3000.0)
# Check that the parameter nodes have been initialized.
if not context.executing_eagerly():
init_all_op = [variables.global_variables_initializer(), v2_init]
self.evaluate(init_all_op)
      # TODO(xpan): Why doesn't _mutable_hash_table_v2 create an empty
      # table in eager mode, as it claims it should?
self.assertEqual(b"k1000", self.evaluate(v2_2.keys()))
self.assertEqual(3000.0, self.evaluate(v2_2.values()))
self.assertEqual(1000.0, self.evaluate(v0_2))
self.assertEqual(2000.0, self.evaluate(v1_2))
# Restore the values saved earlier in the parameter nodes.
save2 = saver_module.Saver({"v0": v0_2, "v1": v1_2, "v2": v2_2.saveable})
save2.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0_2))
self.assertEqual(20.0, self.evaluate(v1_2))
self.assertEqual(b"k1", self.evaluate(v2_2.keys()))
self.assertEqual(30.0, self.evaluate(v2_2.values()))
def testBasic(self):
self.basicSaveRestore(variables.Variable)
@test_util.run_in_graph_and_eager_modes
def testResourceBasic(self):
self.basicSaveRestore(resource_variable_ops.ResourceVariable)
def testResourceColocation(self):
# train.Saver is V1 only API.
with ops_lib.Graph().as_default():
partitioner = partitioned_variables.fixed_size_partitioner(num_shards=2)
with ops_lib.device("/job:ps/device:GPU:0"):
v = variable_scope.get_variable(
"v0", shape=[10, 2], partitioner=partitioner, use_resource=True)
saver_module.Saver({"v0": v}).build()
save_op = None
for op in ops_lib.get_default_graph().get_operations():
if op.type == "SaveV2":
save_op = op
break
assert save_op is not None
for save_inp in save_op.inputs[3:]:
        # Inputs to the SaveV2 op are placed on the CPU of the same device as
        # the Variable.
self.assertEqual("/job:ps/device:CPU:0", save_inp.device)
def testResourceVariableReadOpsAddedDeterministically(self):
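    # Build the same graph (20 resource variables plus a Saver) several times
    # and check that the serialized GraphDefs are identical, i.e. the read ops
    # the Saver adds for resource variables are created deterministically.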
graph_defs = []
num_graphs = 10
for _ in range(num_graphs):
with ops_lib.Graph().as_default() as g:
for i in range(20):
resource_variable_ops.ResourceVariable(i, name="var%s" % i)
saver_module.Saver()
graph_defs.append(g.as_graph_def())
for i in range(num_graphs - 1):
self.assertEqual(graph_defs[i], graph_defs[i + 1])
def testEagerBasic(self):
with context.eager_mode():
ckpt_prefix = os.path.join(self.get_temp_dir(), "ckpt")
v1 = resource_variable_ops.ResourceVariable(3.14, name="v1")
v2 = resource_variable_ops.ResourceVariable([1, 2], name="v2")
save = saver_module.Saver([v1, v2])
save.save(None, ckpt_prefix)
v1.assign(0.0)
v2.assign([0, 0])
self.assertNear(0.0, self.evaluate(v1), 1e-5)
self.assertAllEqual([0, 0], self.evaluate(v2))
save.restore(None, ckpt_prefix)
self.assertNear(3.14, self.evaluate(v1), 1e-5)
self.assertAllEqual([1, 2], self.evaluate(v2))
def testEagerGraphCompatibility(self):
# Save from graph mode and restore from eager mode.
graph_ckpt_prefix = os.path.join(self.get_temp_dir(), "graph_ckpt")
with context.graph_mode():
with self.session(graph=ops_lib.Graph()) as sess:
# Create a graph model and save the checkpoint.
w1 = resource_variable_ops.ResourceVariable(1.0, name="w1")
w2 = resource_variable_ops.ResourceVariable(2.0, name="w2")
graph_saver = saver_module.Saver([w1, w2])
self.evaluate(variables.global_variables_initializer())
graph_saver.save(sess, graph_ckpt_prefix)
with context.eager_mode():
ops_lib._default_graph_stack.reset() # pylint: disable=protected-access
ops_lib.reset_default_graph()
w1 = resource_variable_ops.ResourceVariable(0.0, name="w1")
w2 = resource_variable_ops.ResourceVariable(0.0, name="w2")
graph_saver = saver_module.Saver([w1, w2])
graph_saver.restore(None, graph_ckpt_prefix)
self.assertAllEqual(self.evaluate(w1), 1.0)
self.assertAllEqual(self.evaluate(w2), 2.0)
# Save from eager mode and restore from graph mode.
eager_ckpt_prefix = os.path.join(self.get_temp_dir(), "eager_ckpt")
with context.eager_mode():
ops_lib._default_graph_stack.reset() # pylint: disable=protected-access
ops_lib.reset_default_graph()
w3 = resource_variable_ops.ResourceVariable(3.0, name="w3")
w4 = resource_variable_ops.ResourceVariable(4.0, name="w4")
graph_saver = saver_module.Saver([w3, w4])
graph_saver.save(None, eager_ckpt_prefix)
with context.graph_mode():
with self.session(graph=ops_lib.Graph()) as sess:
w3 = resource_variable_ops.ResourceVariable(0.0, name="w3")
w4 = resource_variable_ops.ResourceVariable(0.0, name="w4")
graph_saver = saver_module.Saver([w3, w4])
self.evaluate(variables.global_variables_initializer())
graph_saver.restore(sess, eager_ckpt_prefix)
self.assertAllEqual(w3, 3.0)
self.assertAllEqual(w4, 4.0)
@test_util.run_in_graph_and_eager_modes
def testResourceSaveRestoreCachingDevice(self):
save_path = os.path.join(self.get_temp_dir(), "resource_cache")
with self.session(graph=ops_lib.Graph()) as sess:
v = resource_variable_ops.ResourceVariable([1], caching_device="/cpu:0",
name="v")
if context.executing_eagerly():
sess = None
else:
self.evaluate(variables.global_variables_initializer())
save = saver_module.Saver([v])
save.save(sess, save_path)
save2 = saver_module.Saver([v])
save2.restore(sess, save_path)
self.assertEqual(self.evaluate(v), [1])
def testNoAdditionalOpsAddedBySaverForResourceVariablesOutsideSaveScope(self):
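    # Every op a Saver creates under a name scope should live in that scope's
    # "save/" sub-scope; nothing should be added directly under the scope.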
with ops_lib.Graph().as_default() as g:
v = resource_variable_ops.ResourceVariable(1.0, name="v")
with ops_lib.name_scope("saver1"):
saver_module.Saver()
with ops_lib.name_scope("saver2"):
saver_module.Saver({"name": v})
ops_in_saver1_scope_but_not_save_scope = [
op for op in g.get_operations()
if (op.name.startswith("saver1/") and
not op.name.startswith("saver1/save/"))]
self.assertEqual(ops_in_saver1_scope_but_not_save_scope, [])
ops_in_saver2_scope_but_not_save_scope = [
op for op in g.get_operations()
if (op.name.startswith("saver2/") and
not op.name.startswith("saver2/save/"))]
self.assertEqual(ops_in_saver2_scope_but_not_save_scope, [])
def testSaveCopyRestoreWithSaveRelativePaths(self):
"""Save, copy checkpoint dir and restore from copied dir.
This only works for save_relative_paths=True.
"""
save_dir1 = os.path.join(self.get_temp_dir(), "save_dir1")
os.mkdir(save_dir1)
save_path1 = os.path.join(save_dir1, "save_copy_restore")
# train.Saver is V1 only API.
with ops_lib.Graph().as_default():
# Build a graph with 2 parameter nodes, and Save and
# Restore nodes for them.
v0 = variables.VariableV1(10.0, name="v0")
v1 = variables.VariableV1(20.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
v2_init = v2.insert("k1", 30.0)
save = saver_module.Saver(
var_list={
"v0": v0,
"v1": v1,
"v2": v2.saveable
},
restore_sequentially=True,
save_relative_paths=True)
init_all_op = [variables.global_variables_initializer(), v2_init]
with self.cached_session() as sess:
# Initialize all variables
self.evaluate(init_all_op)
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(30.0, self.evaluate(v2.values()))
# Save the initialized values in the file at "save_path"
val = save.save(sess, save_path1)
self.assertTrue(isinstance(val, six.string_types))
self.assertEqual(save_path1, val)
self.assertEqual(
checkpoint_management.latest_checkpoint(save_dir1), save_path1)
save_dir2 = os.path.join(self.get_temp_dir(), "save_dir2")
os.renames(save_dir1, save_dir2)
save_path2 = os.path.join(save_dir2, "save_copy_restore")
self.assertEqual(
checkpoint_management.latest_checkpoint(save_dir2), save_path2)
# Start a second session. In that session the parameter nodes
# have not been initialized either.
with self.cached_session() as sess:
v0 = variables.VariableV1(-1.0, name="v0")
v1 = variables.VariableV1(-1.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
save = saver_module.Saver({"v0": v0, "v1": v1, "v2": v2.saveable})
# Assert that the variables are not initialized.
self.assertEqual(
len(variables.report_uninitialized_variables().eval()), 2)
self.assertEqual(0, len(self.evaluate(v2.keys())))
self.assertEqual(0, len(self.evaluate(v2.values())))
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path2)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(30.0, self.evaluate(v2.values()))
def testFilenameTensor(self):
# train.Saver is V1 only API.
with ops_lib.Graph().as_default():
v0 = variables.VariableV1(0, name="v0")
filename = b"somerandomfilename"
save = saver_module.Saver({"v0": v0}, filename=filename)
with self.cached_session() as sess:
tensor = sess.graph.get_tensor_by_name(
save.saver_def.filename_tensor_name)
self.assertEqual(self.evaluate(tensor), filename)
def testInvalidPath(self):
v0 = variables.VariableV1(0, name="v0")
for ver in (saver_pb2.SaverDef.V1, saver_pb2.SaverDef.V2):
with self.cached_session() as sess:
save = saver_module.Saver({"v0": v0}, write_version=ver)
with self.assertRaisesRegex(
ValueError, "The passed save_path is not a valid checkpoint:"):
save.restore(sess, "invalid path")
@test_util.run_v1_only("train.Saver is V1 only API.")
def testInt64(self):
save_path = os.path.join(self.get_temp_dir(), "int64")
with self.cached_session() as sess:
# Build a graph with 1 node, and save and restore for them.
v = variables.VariableV1(np.int64(15), name="v")
save = saver_module.Saver({"v": v}, restore_sequentially=True)
self.evaluate(variables.global_variables_initializer())
# Save the initialized values in the file at "save_path"
val = save.save(sess, save_path)
self.assertTrue(isinstance(val, six.string_types))
self.assertEqual(save_path, val)
with self.cached_session() as sess:
v = variables.VariableV1(np.int64(-1), name="v")
save = saver_module.Saver({"v": v})
with self.assertRaisesWithPredicateMatch(
errors_impl.OpError, lambda e: "uninitialized value v" in e.message):
self.evaluate(v)
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(np.int64(15), self.evaluate(v))
def testSomeErrors(self):
with ops_lib.Graph().as_default():
v0 = variables.VariableV1([10.0], name="v0")
v1 = variables.VariableV1([20.0], name="v1")
v2 = variables.VariableV1([20.0], name="v2")
v2._set_save_slice_info(
variables.Variable.SaveSliceInfo("v1", [1], [0], [1]))
      # By default the name used for "v2" will be "v1", which raises an
      # error.
with self.assertRaisesRegex(ValueError, "same name: v1"):
saver_module.Saver([v0, v1, v2])
# The names are different and will work.
saver_module.Saver({"vee1": v1, "other": [v2]})
# Partitioned variables also cause name conflicts.
p_v1 = variable_scope.get_variable(
"p_v1",
shape=[4, 5],
partitioner=partitioned_variables.fixed_size_partitioner(
num_shards=2))
p_v2 = variable_scope.get_variable(
"p_v2",
shape=[4, 5],
partitioner=partitioned_variables.fixed_size_partitioner(
num_shards=2))
p_v2._name = "p_v1"
with self.assertRaisesRegex(ValueError, "same name: p_v1"):
saver_module.Saver([p_v1, p_v2])
def testSameName(self):
with ops_lib.Graph().as_default():
v0 = variables.VariableV1([10.0], name="v0")
v2 = saver_test_utils.CheckpointedOp(name="v2")
# Saving one variable under two names raises an error.
with self.assertRaisesRegex(
ValueError, "The same saveable will be restored with two names: v0"):
saver_module.Saver({"v0": v0, "v0too": v0})
# Ditto for custom saveables.
with self.assertRaisesRegex(
ValueError, "The same saveable will be restored with two names: v2"):
saver_module.Saver({"v2": v2.saveable, "v2too": v2.saveable})
# Verify non-duplicate names work.
saver_module.Saver({"v0": v0, "v2": v2.saveable})
@test_util.run_v1_only("train.Saver and VariableV1 are V1 only APIs.")
def testBasicsWithListOfVariables(self):
save_path = os.path.join(self.get_temp_dir(), "basics_with_list")
with self.session(graph=ops_lib.Graph()) as sess:
# Build a graph with 2 parameter nodes, and Save and
# Restore nodes for them.
v0 = variables.VariableV1(10.0, name="v0")
v1 = variables.VariableV1(20.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
v2_init = v2.insert("k1", 30.0)
save = saver_module.Saver([v0, v1, v2.saveable])
self.evaluate(variables.global_variables_initializer())
v2_init.run()
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(30.0, self.evaluate(v2.values()))
# Save the initialized values in the file at "save_path"
val = save.save(sess, save_path)
self.assertTrue(isinstance(val, six.string_types))
self.assertEqual(save_path, val)
# Start a second session. In that session the variables
# have not been initialized either.
with self.session(graph=ops_lib.Graph()) as sess:
v0 = variables.VariableV1(-1.0, name="v0")
v1 = variables.VariableV1(-1.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
save = saver_module.Saver([v0, v1, v2.saveable])
with self.assertRaisesWithPredicateMatch(
errors_impl.OpError, lambda e: "uninitialized value v0" in e.message):
self.evaluate(v0)
with self.assertRaisesWithPredicateMatch(
errors_impl.OpError, lambda e: "uninitialized value v1" in e.message):
self.evaluate(v1)
self.assertEqual(0, len(self.evaluate(v2.keys())))
self.assertEqual(0, len(self.evaluate(v2.values())))
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(30.0, self.evaluate(v2.values()))
# Build another graph with 2 nodes, initialized
# differently, and a Restore node for them.
with self.session(graph=ops_lib.Graph()) as sess:
v0_2 = variables.VariableV1(1000.0, name="v0")
v1_2 = variables.VariableV1(2000.0, name="v1")
v2_2 = saver_test_utils.CheckpointedOp(name="v2")
save2 = saver_module.Saver([v0_2, v1_2, v2_2.saveable])
v2_2.insert("k1000", 3000.0).run()
self.evaluate(variables.global_variables_initializer())
# Check that the parameter nodes have been initialized.
self.assertEqual(1000.0, self.evaluate(v0_2))
self.assertEqual(2000.0, self.evaluate(v1_2))
self.assertEqual(b"k1000", self.evaluate(v2_2.keys()))
self.assertEqual(3000.0, self.evaluate(v2_2.values()))
# Restore the values saved earlier in the parameter nodes.
save2.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0_2))
self.assertEqual(20.0, self.evaluate(v1_2))
self.assertEqual(b"k1", self.evaluate(v2_2.keys()))
self.assertEqual(30.0, self.evaluate(v2_2.values()))
def _SaveAndLoad(self, var_name, var_value, other_value, save_path):
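    # Saves `var_value` under `var_name`, then restores it into a fresh
    # variable created with `other_value` and checks the restored value.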
with self.session(graph=ops_lib.Graph()) as sess:
var = resource_variable_ops.ResourceVariable(var_value, name=var_name)
save = saver_module.Saver({var_name: var})
if not context.executing_eagerly():
self.evaluate(var.initializer)
val = save.save(sess, save_path)
self.assertEqual(save_path, val)
with self.session(graph=ops_lib.Graph()) as sess:
var = resource_variable_ops.ResourceVariable(other_value, name=var_name)
save = saver_module.Saver({var_name: var})
save.restore(sess, save_path)
self.assertAllClose(var_value, self.evaluate(var))
def testCacheRereadsFile(self):
save_path = os.path.join(self.get_temp_dir(), "cache_rereads")
# Save and reload one Variable named "var0".
self._SaveAndLoad("var0", 0.0, 1.0, save_path)
# Save and reload one Variable named "var1" in the same file.
# The cached readers should know to re-read the file.
self._SaveAndLoad("var1", 1.1, 2.2, save_path)
def testAllowEmpty(self):
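    # With allow_empty=True a Saver can be built even when there is nothing to
    # save; save() then returns None and restore() succeeds without error.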
save_path = os.path.join(self.get_temp_dir(), "allow_empty")
# train.Saver is V1 only API.
with ops_lib.Graph().as_default(), self.cached_session() as sess:
_ = constant_op.constant(1)
save = saver_module.Saver(allow_empty=True)
val = save.save(sess, save_path)
self.assertIsNone(val)
with ops_lib.Graph().as_default(), self.cached_session() as sess:
save = saver_module.Saver(allow_empty=True)
save.restore(sess, save_path)
def testGPU(self):
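    # Exercises saving a variable placed on the GPU; the second session only
    # builds a Saver for a GPU variable and initializes it, without restoring.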
if not test.is_gpu_available():
return
save_path = os.path.join(self.get_temp_dir(), "gpu")
with session.Session("", graph=ops_lib.Graph()) as sess:
with sess.graph.device(test.gpu_device_name()):
v0_1 = variables.VariableV1(123.45)
save = saver_module.Saver({"v0": v0_1})
self.evaluate(variables.global_variables_initializer())
save.save(sess, save_path)
with session.Session("", graph=ops_lib.Graph()) as sess:
with sess.graph.device(test.gpu_device_name()):
v0_2 = variables.VariableV1(543.21)
save = saver_module.Saver({"v0": v0_2})
self.evaluate(variables.global_variables_initializer())
def testSharedServerOnGPU(self):
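    # Like testGPU, but with a sharded Saver and allow_empty=True.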
if not test.is_gpu_available():
return
save_path = os.path.join(self.get_temp_dir(), "gpu")
with session.Session("", graph=ops_lib.Graph()) as sess:
with sess.graph.device(test.gpu_device_name()):
v0_1 = variables.VariableV1(123.45)
save = saver_module.Saver({"v0": v0_1}, sharded=True, allow_empty=True)
self.evaluate(variables.global_variables_initializer())
save.save(sess, save_path)
with session.Session("", graph=ops_lib.Graph()) as sess:
with sess.graph.device(test.gpu_device_name()):
v0_2 = variables.VariableV1(543.21)
save = saver_module.Saver({"v0": v0_2}, sharded=True, allow_empty=True)
self.evaluate(variables.global_variables_initializer())
def testVariables(self):
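    # A Saver built with no arguments saves all variables plus any registered
    # saveable objects, such as the CheckpointedOp.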
save_path = os.path.join(self.get_temp_dir(), "variables")
with session.Session("", graph=ops_lib.Graph()) as sess:
one = variables.VariableV1(1.0)
twos = variables.VariableV1([2.0, 2.0, 2.0])
v2 = saver_test_utils.CheckpointedOp(name="v2")
init = variables.global_variables_initializer()
save = saver_module.Saver()
init.run()
v2.insert("k1", 3.0).run()
save.save(sess, save_path)
with session.Session("", graph=ops_lib.Graph()) as sess:
one = variables.VariableV1(0.0)
twos = variables.VariableV1([0.0, 0.0, 0.0])
v2 = saver_test_utils.CheckpointedOp(name="v2")
# Saver with no arg, defaults to 'all variables'.
save = saver_module.Saver()
save.restore(sess, save_path)
self.assertAllClose(1.0, self.evaluate(one))
self.assertAllClose([2.0, 2.0, 2.0], self.evaluate(twos))
self.assertEqual(b"k1", self.evaluate(v2.keys()))
self.assertEqual(3.0, self.evaluate(v2.values()))
def testVarListShouldBeEmptyInDeferredBuild(self):
with ops_lib.Graph().as_default():
v = variables.VariableV1(1.0)
with self.assertRaisesRegex(ValueError, "defer_build"):
saver_module.Saver([v], defer_build=True)
def testBuildShouldBeCalledBeforeSaveInCaseOfDeferBuild(self):
save_path = os.path.join(self.get_temp_dir(), "error_deferred_build")
with ops_lib.Graph().as_default(), session.Session() as sess:
variables.VariableV1(1.0)
saver = saver_module.Saver(defer_build=True)
with self.assertRaisesRegex(RuntimeError, "build"):
saver.save(sess, save_path)
def testDeferredBuild(self):
save_path = os.path.join(self.get_temp_dir(), "deferred_build")
with session.Session("", graph=ops_lib.Graph()) as sess:
one = variables.VariableV1(1.0)
save = saver_module.Saver(defer_build=True)
      # If build were not deferred, the saver could not save `twos`, which is
      # created after the Saver.
twos = variables.VariableV1([2.0, 2.0, 2.0])
init = variables.global_variables_initializer()
save.build()
init.run()
save.save(sess, save_path)
with session.Session("", graph=ops_lib.Graph()) as sess:
one = variables.VariableV1(0.0)
twos = variables.VariableV1([0.0, 0.0, 0.0])
# Saver with no arg, defaults to 'all variables'.
save = saver_module.Saver()
save.restore(sess, save_path)
self.assertAllClose(1.0, self.evaluate(one))
self.assertAllClose([2.0, 2.0, 2.0], self.evaluate(twos))
@test_util.run_v1_only("train.Saver is V1 only API.")
def testReshape(self):
save_path = os.path.join(self.get_temp_dir(), "variables_reshape")
with session.Session("", graph=ops_lib.Graph()) as sess:
var = variables.VariableV1([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
init = variables.global_variables_initializer()
save = saver_module.Saver()
init.run()
save.save(sess, save_path)
# Error when restoring with default reshape=False
with session.Session("", graph=ops_lib.Graph()) as sess:
var = variables.VariableV1([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
save = saver_module.Saver()
with self.assertRaisesRegex(
errors_impl.InvalidArgumentError,
"Assign requires shapes of both tensors to match."):
save.restore(sess, save_path)
# Restored to new shape with reshape=True
with session.Session("", graph=ops_lib.Graph()) as sess:
var = variables.VariableV1([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
save = saver_module.Saver(reshape=True)
save.restore(sess, save_path)
self.assertAllClose([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
self.evaluate(var))
@test_util.run_in_graph_and_eager_modes
def testSaveWithGlobalStep(self, pad_step_number=False):
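    # Passing global_step (as an int or a scalar tensor) appends "-<step>" to
    # the checkpoint path; pad_step_number=True zero-pads the step to 8 digits.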
save_path = os.path.join(self.get_temp_dir(), "ckpt_with_global_step")
global_step_int = 5
# Save and reload one Variable named "var0".
self._SaveAndLoad("var0", 0.0, 1.0, save_path)
for use_tensor in [True, False]:
with self.session(graph=ops_lib.Graph()):
var = resource_variable_ops.ResourceVariable(1.0, name="var0")
save = saver_module.Saver(
{
var._shared_name: var
}, pad_step_number=pad_step_number)
if context.executing_eagerly():
sess = None
else:
self.evaluate(var.initializer)
sess = ops_lib.get_default_session()
if use_tensor:
global_step = constant_op.constant(global_step_int)
val = save.save(sess, save_path, global_step=global_step)
else:
val = save.save(sess, save_path, global_step=global_step_int)
if pad_step_number:
expected_save_path = "%s-%s" % (save_path,
"{:08d}".format(global_step_int))
else:
expected_save_path = "%s-%d" % (save_path, global_step_int)
self.assertEqual(expected_save_path, val)
def testSaveWithGlobalStepWithPadding(self):
self.testSaveWithGlobalStep(pad_step_number=True)
def testSaveToNonexistingPath(self):
file_io.write_string_to_file(
os.path.join(self.get_temp_dir(), "actually_a_file"), "")
paths = [
os.path.join(self.get_temp_dir(), "nonexisting_dir/path"),
os.path.join(self.get_temp_dir(), "other_nonexisting_dir/path1/path2"),
os.path.join(self.get_temp_dir(), "actually_a_file/path"),
]
for save_path in paths:
# Build a graph with 2 parameter nodes, and Save and
# Restore nodes for them.
v0 = variables.VariableV1(10.0, name="v0")
v1 = variables.VariableV1(20.0, name="v1")
save = saver_module.Saver({"v0": v0, "v1": v1}, restore_sequentially=True)
init_all_op = variables.global_variables_initializer()
      # When the parent directory doesn't exist, whether the save succeeds or
      # fails is implementation dependent, so we allow both outcomes.
try:
with self.cached_session() as sess:
# Initialize all variables
self.evaluate(init_all_op)
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
# Save the graph.
save.save(sess, save_path)
with self.cached_session() as sess:
# Restore the saved values in the parameter nodes.
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
except ValueError as exc:
error_msg_template = "Parent directory of {} doesn't exist, can't save."
self.assertEqual(error_msg_template.format(save_path), str(exc))
def testSaveToURI(self):
# ParseURI functions don't work on Windows yet.
# TODO(jhseu): Remove this check when it works.
if os.name == "nt":
self.skipTest("Local URI support doesn't work on Windows")
save_path = "file://" + os.path.join(self.get_temp_dir(), "uri")
# Build a graph with 2 parameter nodes, and Save and
# Restore nodes for them.
v0 = variables.VariableV1(10.0, name="v0")
v1 = variables.VariableV1(20.0, name="v1")
save = saver_module.Saver({"v0": v0, "v1": v1}, restore_sequentially=True)
init_all_op = variables.global_variables_initializer()
with self.cached_session() as sess:
# Initialize all variables
self.evaluate(init_all_op)
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
save.save(sess, save_path)
def testSaveRestoreAndValidateVariableDtype(self):
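    # Restoring into a variable with a mismatched dtype should fail with an
    # InvalidArgumentError that mentions the original dtype.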
for variable_op in [
variables.Variable, resource_variable_ops.ResourceVariable
]:
save_path = os.path.join(self.get_temp_dir(), "basic_save_restore")
# Build the first session.
with self.session(graph=ops_lib.Graph()) as sess:
v0 = variable_op(10.0, name="v0", dtype=dtypes.float32)
if not context.executing_eagerly():
self.evaluate([variables.global_variables_initializer()])
save = saver_module.Saver({"v0": v0})
save.save(sess, save_path)
# Start a second session.
with self.session(graph=ops_lib.Graph()) as sess:
v0_wrong_dtype = variable_op(1, name="v0", dtype=dtypes.int32)
# Restore the saved value with different dtype
# in the parameter nodes.
save = saver_module.Saver({"v0": v0_wrong_dtype})
with self.assertRaisesRegex(errors.InvalidArgumentError,
"original dtype"):
save.restore(sess, save_path)
# Test restoring large tensors (triggers a thread pool)
def testRestoreLargeTensors(self):
save_dir = self.get_temp_dir()
def _model():
small_v = [variable_scope.get_variable(
"small%d" % i, shape=[10, 2], use_resource=True) for i in range(5)]
large_v = [variable_scope.get_variable(
"large%d" % i, shape=[32000, 1000], use_resource=True)
for i in range(3)]
return small_v + large_v
save_graph = ops_lib.Graph()
with save_graph.as_default(), self.session(graph=save_graph) as sess:
orig_vars = _model()
self.evaluate(variables.global_variables_initializer())
save = saver_module.Saver(max_to_keep=1)
self.evaluate(variables.global_variables_initializer())
save.save(sess, save_dir)
orig_vals = self.evaluate(orig_vars)
restore_graph = ops_lib.Graph()
with restore_graph.as_default(), self.session(
graph=restore_graph) as sess:
restored_vars = _model()
save = saver_module.Saver(max_to_keep=1)
save.restore(sess, save_dir)
restored_vals = self.evaluate(restored_vars)
for orig, restored in zip(orig_vals, restored_vals):
self.assertAllEqual(orig, restored)
def test_metrics_save_restore(self):
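    # Each save should bump the checkpoint-write duration histogram and the
    # recorded training time saved; each restore should bump the
    # checkpoint-read duration histogram.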
api_label = saver_module._SAVER_LABEL
def _get_write_histogram_proto():
proto_bytes = metrics.GetCheckpointWriteDurations(api_label=api_label)
histogram_proto = summary_pb2.HistogramProto()
histogram_proto.ParseFromString(proto_bytes)
return histogram_proto
def _get_read_histogram_proto():
proto_bytes = metrics.GetCheckpointReadDurations(api_label=api_label)
histogram_proto = summary_pb2.HistogramProto()
histogram_proto.ParseFromString(proto_bytes)
return histogram_proto
save_path = os.path.join(self.get_temp_dir(), "metrics_save_restore")
# Values at beginning of unit test.
time_start = metrics.GetTrainingTimeSaved(api_label=api_label)
num_writes_start = _get_write_histogram_proto().num
num_reads_start = _get_read_histogram_proto().num
with self.session(graph=ops_lib.Graph()) as sess:
v0 = resource_variable_ops.ResourceVariable(10.0, name="v0")
v1 = resource_variable_ops.ResourceVariable(20.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
# Initialize all variables
if not context.executing_eagerly():
self.evaluate([variables.global_variables_initializer()])
save = saver_module.Saver({
"v0": v0,
"v1": v1,
"v2": v2.saveable
},
restore_sequentially=True)
ckpt_prefix = save.save(sess, save_path)
filesize = saver_module._get_checkpoint_size(ckpt_prefix)
count_after_one_save = metrics.GetCheckpointSize(
api_label=api_label, filesize=filesize)
self.assertEqual(_get_write_histogram_proto().num, num_writes_start + 1)
time_after_one_save = metrics.GetTrainingTimeSaved(api_label=api_label)
self.assertGreater(time_after_one_save, time_start)
with self.session(graph=ops_lib.Graph()) as sess:
v0 = resource_variable_ops.ResourceVariable(-1.0, name="v0")
v1 = resource_variable_ops.ResourceVariable(-1.0, name="v1")
v2 = saver_test_utils.CheckpointedOp(name="v2")
save = saver_module.Saver({"v0": v0, "v1": v1, "v2": v2.saveable})
save.restore(sess, save_path)
self.assertEqual(_get_write_histogram_proto().num, num_writes_start + 1)
self.assertEqual(_get_read_histogram_proto().num, num_reads_start + 1)
# Check that training time saved has not increased.
self.assertEqual(
metrics.GetTrainingTimeSaved(api_label=api_label),
time_after_one_save)
save.save(sess, save_path)
self.assertEqual(_get_write_histogram_proto().num, num_writes_start + 2)
self.assertEqual(_get_read_histogram_proto().num, num_reads_start + 1)
# Check that training time saved has increased.
self.assertGreater(
metrics.GetTrainingTimeSaved(api_label=api_label),
time_after_one_save)
self.assertEqual(
metrics.GetCheckpointSize(api_label=api_label, filesize=filesize),
count_after_one_save + 1)

class SaveRestoreShardedTest(test.TestCase):
_WRITE_VERSION = saver_pb2.SaverDef.V1
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
def testBasics(self):
save_path = os.path.join(self.get_temp_dir(), "sharded_basics")
# Build a graph with 2 parameter nodes on different devices.
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
v0 = variables.VariableV1(10, name="v0")
t0 = saver_test_utils.CheckpointedOp(name="t0")
with sess.graph.device("/cpu:1"):
v1 = variables.VariableV1(20, name="v1")
t1 = saver_test_utils.CheckpointedOp(name="t1")
save = saver_module.Saver(
{
"v0": v0,
"v1": v1,
"t0": t0.saveable,
"t1": t1.saveable
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(variables.global_variables_initializer())
t0.insert("k1", 30.0).run()
t1.insert("k2", 40.0).run()
val = save.save(sess, save_path)
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(save_path + "-?????-of-00002", val)
else:
self.assertEqual(save_path, val)
meta_graph_filename = checkpoint_management.meta_graph_filename(val)
self.assertEqual(save_path + ".meta", meta_graph_filename)
if save._write_version is saver_pb2.SaverDef.V1:
# Restore different ops from shard 0 of the saved files.
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
v0 = variables.VariableV1(111, name="v0")
t0 = saver_test_utils.CheckpointedOp(name="t0")
save = saver_module.Saver(
{
"v0": v0,
"t0": t0.saveable
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(variables.global_variables_initializer())
t0.insert("k11", 33.0).run()
self.assertEqual(111, self.evaluate(v0))
self.assertEqual(b"k11", self.evaluate(t0.keys()))
self.assertEqual(33.0, self.evaluate(t0.values()))
save.restore(sess, save_path + "-00000-of-00002")
self.assertEqual(10, self.evaluate(v0))
self.assertEqual(b"k1", self.evaluate(t0.keys()))
self.assertEqual(30.0, self.evaluate(t0.values()))
# Restore different ops from shard 1 of the saved files.
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
v1 = variables.VariableV1(222)
t1 = saver_test_utils.CheckpointedOp(name="t1")
save = saver_module.Saver(
{
"v1": v1,
"t1": t1.saveable
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(variables.global_variables_initializer())
t1.insert("k22", 44.0).run()
self.assertEqual(222, self.evaluate(v1))
self.assertEqual(b"k22", self.evaluate(t1.keys()))
self.assertEqual(44.0, self.evaluate(t1.values()))
save.restore(sess, save_path + "-00001-of-00002")
self.assertEqual(20, self.evaluate(v1))
self.assertEqual(b"k2", self.evaluate(t1.keys()))
self.assertEqual(40.0, self.evaluate(t1.values()))
# Now try a restore with the sharded filename.
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
v0 = variables.VariableV1(111, name="v0")
t0 = saver_test_utils.CheckpointedOp(name="t0")
with sess.graph.device("/cpu:1"):
v1 = variables.VariableV1(222, name="v1")
t1 = saver_test_utils.CheckpointedOp(name="t1")
save = saver_module.Saver(
{
"v0": v0,
"v1": v1,
"t0": t0.saveable,
"t1": t1.saveable
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(variables.global_variables_initializer())
t0.insert("k11", 33.0).run()
t1.insert("k22", 44.0).run()
self.assertEqual(111, self.evaluate(v0))
self.assertEqual(222, self.evaluate(v1))
self.assertEqual(b"k11", self.evaluate(t0.keys()))
self.assertEqual(33.0, self.evaluate(t0.values()))
self.assertEqual(b"k22", self.evaluate(t1.keys()))
self.assertEqual(44.0, self.evaluate(t1.values()))
save_path = os.path.join(self.get_temp_dir(), "sharded_basics")
if save._write_version is saver_pb2.SaverDef.V1:
save.restore(sess, save_path + "-?????-of-?????")
else:
save.restore(sess, save_path)
self.assertEqual(10, self.evaluate(v0))
self.assertEqual(20, self.evaluate(v1))
self.assertEqual(b"k1", self.evaluate(t0.keys()))
self.assertEqual(30.0, self.evaluate(t0.values()))
self.assertEqual(b"k2", self.evaluate(t1.keys()))
self.assertEqual(40.0, self.evaluate(t1.values()))
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(
checkpoint_management.latest_checkpoint(self.get_temp_dir()),
os.path.join(self.get_temp_dir(), "sharded_basics-?????-of-00002"))
else:
self.assertEqual(
checkpoint_management.latest_checkpoint(self.get_temp_dir()),
os.path.join(self.get_temp_dir(), "sharded_basics"))
def testSaverDef(self):
# train.Saver is V1 only API.
with ops_lib.Graph().as_default(), self.cached_session():
v0 = variables.VariableV1(123, name="v0")
save = saver_module.Saver({"v0": v0}, sharded=True)
sd = save.as_saver_def()
self.assertTrue(sd.sharded)
def _testPartitionedVariables(self, use_resource):
var_full_shape = [10, 3]
    # Allows the save/restore mechanism to work with different slicings.
var_name = "my_var"
saved_dir = self._get_test_dir("partitioned_variables")
saved_path = os.path.join(saved_dir, "ckpt")
call_saver_with_dict = False # updated by test loop below
def _save(partitioner=None):
# train.Saver is V1 only API.
with ops_lib.Graph().as_default(), self.session() as sess:
# Calls .eval() to return the ndarray that makes up the full variable.
rnd = random_ops.random_uniform(var_full_shape).eval()
if partitioner:
vs = [
variable_scope.get_variable(
var_name,
shape=var_full_shape,
initializer=rnd,
partitioner=partitioner,
use_resource=use_resource)
]
else:
if use_resource:
vs = [resource_variable_ops.ResourceVariable(rnd, name=var_name)]
else:
vs = [variables.VariableV1(rnd, name=var_name)]
self.evaluate(variables.global_variables_initializer())
if call_saver_with_dict:
saver = saver_module.Saver({var_name: vs[0]})
else:
saver = saver_module.Saver(vs)
actual_path = saver.save(sess, saved_path)
self.assertEqual(saved_path, actual_path)
return rnd
def _restore(partitioner=None):
# train.Saver is V1 only API.
with ops_lib.Graph().as_default(), self.session() as sess:
if partitioner:
new_vs = [
variable_scope.get_variable(
var_name,
shape=var_full_shape,
initializer=array_ops.zeros(var_full_shape),
partitioner=partitioner)
]
else:
new_vs = [
variables.VariableV1(
array_ops.zeros(
shape=var_full_shape), # != original contents.
name=var_name)
]
self.evaluate(variables.global_variables_initializer())
if call_saver_with_dict:
saver = saver_module.Saver({
var_name: new_vs[0]
})
else:
saver = saver_module.Saver(new_vs)
saver.restore(sess, saved_path)
if partitioner:
return new_vs[0].as_tensor().eval()
else:
return new_vs[0].eval()
for call_saver_with_dict in {False, True}:
# Save PartitionedVariable and restore into full variable.
saved_full = _save(
partitioner=partitioned_variables.fixed_size_partitioner(
num_shards=2))
restored_full = _restore()
self.assertAllEqual(saved_full, restored_full)
# Restores into the same number of partitions.
restored_full = _restore(
partitioner=partitioned_variables.fixed_size_partitioner(
num_shards=2))
self.assertAllEqual(saved_full, restored_full)
# Restores into a different number of partitions.
restored_full = _restore(
partitioner=partitioned_variables.fixed_size_partitioner(
num_shards=3))
self.assertAllEqual(saved_full, restored_full)
# Now, saves a full variable and restores PartitionedVariable.
saved_full = _save()
restored_full = _restore(
partitioner=partitioned_variables.fixed_size_partitioner(
num_shards=3))
self.assertAllEqual(saved_full, restored_full)
def testPartitionedVariable(self):
self._testPartitionedVariables(use_resource=False)
def testPartitionedResourceVariable(self):
self._testPartitionedVariables(use_resource=True)

class SaveRestoreShardedTestV2(SaveRestoreShardedTest):
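  # Re-runs the SaveRestoreShardedTest cases with the V2 checkpoint format and
  # adds iterator checkpointing tests.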
_WRITE_VERSION = saver_pb2.SaverDef.V2
def testIterators(self):
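    # Iterator state is checkpointed via _IteratorSaveable; the sharded save
    # writes one data file per device (two here), and restoring resumes each
    # iterator where it left off.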
save_path = os.path.join(self.get_temp_dir(), "sharded_iterators")
# Build a graph with 2 parameter nodes on different devices and save.
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
ds0 = dataset_ops.Dataset.range(10)
it0 = dataset_ops.make_initializable_iterator(ds0)
get_next0 = it0.get_next()
saveable0 = iterator_ops._IteratorSaveable(
it0._iterator_resource, name="saveable_it0")
with sess.graph.device("/cpu:1"):
ds1 = dataset_ops.Dataset.range(20)
it1 = dataset_ops.make_initializable_iterator(ds1)
get_next1 = it1.get_next()
saveable1 = iterator_ops._IteratorSaveable(
it1._iterator_resource, name="saveable_it1")
saver = saver_module.Saver({
"it0": saveable0,
"it1": saveable1
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(it0.initializer)
self.evaluate(it1.initializer)
self.assertEqual(0, self.evaluate(get_next0))
self.assertEqual(1, self.evaluate(get_next0))
self.assertEqual(0, self.evaluate(get_next1))
val = saver.save(sess, save_path)
self.assertEqual(save_path, val)
data_files = glob.glob(save_path + ".data*")
self.assertEqual(2, len(data_files))
# Restore
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
ds0 = dataset_ops.Dataset.range(10)
it0 = dataset_ops.make_initializable_iterator(ds0)
get_next0 = it0.get_next()
saveable0 = iterator_ops._IteratorSaveable(
it0._iterator_resource, name="saveable_it0")
with sess.graph.device("/cpu:1"):
ds1 = dataset_ops.Dataset.range(20)
it1 = dataset_ops.make_initializable_iterator(ds1)
get_next1 = it1.get_next()
saveable1 = iterator_ops._IteratorSaveable(
it1._iterator_resource, name="saveable_it1")
saver = saver_module.Saver({
"it0": saveable0,
"it1": saveable1
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(it0.initializer)
self.evaluate(it1.initializer)
saver.restore(sess, save_path)
self.assertEqual(2, self.evaluate(get_next0))
self.assertEqual(1, self.evaluate(get_next1))
def testIteratorsUnshardedRestore(self):
save_path = os.path.join(self.get_temp_dir(), "restore_unsharded_iterators")
# Build a graph with 2 parameter nodes on different devices and save.
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
ds0 = dataset_ops.Dataset.range(10)
it0 = dataset_ops.make_initializable_iterator(ds0)
get_next0 = it0.get_next()
saveable0 = iterator_ops._IteratorSaveable(
it0._iterator_resource, name="saveable_it0")
with sess.graph.device("/cpu:1"):
ds1 = dataset_ops.Dataset.range(20)
it1 = dataset_ops.make_initializable_iterator(ds1)
get_next1 = it1.get_next()
saveable1 = iterator_ops._IteratorSaveable(
it1._iterator_resource, name="saveable_it1")
saver = saver_module.Saver({
"it0": saveable0,
"it1": saveable1
},
write_version=self._WRITE_VERSION,
sharded=True)
self.evaluate(it0.initializer)
self.evaluate(it1.initializer)
self.assertEqual(0, self.evaluate(get_next0))
self.assertEqual(1, self.evaluate(get_next0))
self.assertEqual(0, self.evaluate(get_next1))
val = saver.save(sess, save_path)
self.assertEqual(save_path, val)
data_files = glob.glob(save_path + ".data*")
self.assertEqual(2, len(data_files))
# Restore
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
ds0 = dataset_ops.Dataset.range(10)
it0 = dataset_ops.make_initializable_iterator(ds0)
get_next0 = it0.get_next()
saveable0 = iterator_ops._IteratorSaveable(
it0._iterator_resource, name="saveable_it0")
with sess.graph.device("/cpu:1"):
ds1 = dataset_ops.Dataset.range(20)
it1 = dataset_ops.make_initializable_iterator(ds1)
get_next1 = it1.get_next()
saveable1 = iterator_ops._IteratorSaveable(
it1._iterator_resource, name="saveable_it1")
saver = saver_module.Saver({
"it0": saveable0,
"it1": saveable1
},
write_version=self._WRITE_VERSION,
sharded=False)
self.evaluate(it0.initializer)
self.evaluate(it1.initializer)
saver.restore(sess, save_path)
self.assertEqual(2, self.evaluate(get_next0))
self.assertEqual(1, self.evaluate(get_next1))

class MaxToKeepTest(test.TestCase):
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
def assertCheckpointState(self, model_checkpoint_path,
all_model_checkpoint_paths, save_dir):
checkpoint_state = checkpoint_management.get_checkpoint_state(save_dir)
self.assertEqual(checkpoint_state.model_checkpoint_path,
model_checkpoint_path)
self.assertEqual(checkpoint_state.all_model_checkpoint_paths,
all_model_checkpoint_paths)
def testMaxToKeepEager(self):
with context.eager_mode():
save_dir = self._get_test_dir("max_to_keep_eager")
v = variable_scope.variable(10.0, name="v")
save = saver_module.Saver({"v": v}, max_to_keep=2)
self.evaluate(variables.global_variables_initializer())
if not context.executing_eagerly():
self.assertEqual([], save.last_checkpoints)
s1 = save.save(None, os.path.join(save_dir, "s1"))
self.assertEqual([s1], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertCheckpointState(
model_checkpoint_path=s1,
all_model_checkpoint_paths=[s1],
save_dir=save_dir)
s2 = save.save(None, os.path.join(save_dir, "s2"))
self.assertEqual([s1, s2], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertCheckpointState(
model_checkpoint_path=s2,
all_model_checkpoint_paths=[s1, s2],
save_dir=save_dir)
s3 = save.save(None, os.path.join(save_dir, "s3"))
self.assertEqual([s2, s3], save.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(checkpoint_management.checkpoint_exists(s3))
self.assertCheckpointState(
model_checkpoint_path=s3,
all_model_checkpoint_paths=[s2, s3],
save_dir=save_dir)
# Create a second helper, identical to the first.
save2 = saver_module.Saver({"v": v}, max_to_keep=2)
save2.set_last_checkpoints(save.last_checkpoints)
# Exercise the first helper.
# Adding s2 again (old s2 is removed first, then new s2 appended)
s2 = save.save(None, os.path.join(save_dir, "s2"))
self.assertEqual([s3, s2], save.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(checkpoint_management.checkpoint_exists(s3))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertCheckpointState(
model_checkpoint_path=s2,
all_model_checkpoint_paths=[s3, s2],
save_dir=save_dir)
# Adding s1 (s3 should now be deleted as oldest in list)
s1 = save.save(None, os.path.join(save_dir, "s1"))
self.assertEqual([s2, s1], save.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertCheckpointState(
model_checkpoint_path=s1,
all_model_checkpoint_paths=[s2, s1],
save_dir=save_dir)
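      # Exercise the second helper.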
s2 = save2.save(None, os.path.join(save_dir, "s2"))
self.assertEqual([s3, s2], save2.last_checkpoints)
# Created by the first helper.
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
# Deleted by the first helper.
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
def testNonSharded(self):
save_dir = self._get_test_dir("max_to_keep_non_sharded")
# train.Saver is V1 only API.
with ops_lib.Graph().as_default(), self.cached_session() as sess:
v = variables.VariableV1(10.0, name="v")
save = saver_module.Saver({"v": v}, max_to_keep=2)
self.evaluate(variables.global_variables_initializer())
self.assertEqual([], save.last_checkpoints)
s1 = save.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([s1], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertCheckpointState(
model_checkpoint_path=s1,
all_model_checkpoint_paths=[s1],
save_dir=save_dir)
s2 = save.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([s1, s2], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertCheckpointState(
model_checkpoint_path=s2,
all_model_checkpoint_paths=[s1, s2],
save_dir=save_dir)
s3 = save.save(sess, os.path.join(save_dir, "s3"))
self.assertEqual([s2, s3], save.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(checkpoint_management.checkpoint_exists(s3))
self.assertCheckpointState(
model_checkpoint_path=s3,
all_model_checkpoint_paths=[s2, s3],
save_dir=save_dir)
# Create a second helper, identical to the first.
save2 = saver_module.Saver(saver_def=save.as_saver_def())
save2.set_last_checkpoints(save.last_checkpoints)
# Create a third helper, with the same configuration but no knowledge of
# previous checkpoints.
save3 = saver_module.Saver(saver_def=save.as_saver_def())
# Exercise the first helper.
# Adding s2 again (old s2 is removed first, then new s2 appended)
s2 = save.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([s3, s2], save.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s1))
self.assertFalse(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s1)))
self.assertTrue(checkpoint_management.checkpoint_exists(s3))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s3)))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s2)))
self.assertCheckpointState(
model_checkpoint_path=s2,
all_model_checkpoint_paths=[s3, s2],
save_dir=save_dir)
# Adding s1 (s3 should now be deleted as oldest in list)
s1 = save.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([s2, s1], save.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
self.assertFalse(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s3)))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s2)))
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s1)))
self.assertCheckpointState(
model_checkpoint_path=s1,
all_model_checkpoint_paths=[s2, s1],
save_dir=save_dir)
# Exercise the second helper.
# Adding s2 again (old s2 is removed first, then new s2 appended)
s2 = save2.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([s3, s2], save2.last_checkpoints)
# Created by the first helper.
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s1)))
# Deleted by the first helper.
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
self.assertFalse(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s3)))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s2)))
self.assertCheckpointState(
model_checkpoint_path=s2,
all_model_checkpoint_paths=[s3, s2],
save_dir=save_dir)
# Adding s1 (s3 should now be deleted as oldest in list)
s1 = save2.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([s2, s1], save2.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
self.assertFalse(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s3)))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s2)))
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s1)))
self.assertCheckpointState(
model_checkpoint_path=s1,
all_model_checkpoint_paths=[s2, s1],
save_dir=save_dir)
# Exercise the third helper.
# Adding s2 again (but helper is unaware of previous s2)
s2 = save3.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([s2], save3.last_checkpoints)
# Created by the first helper.
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s1)))
# Deleted by the first helper.
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
self.assertFalse(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s3)))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s2)))
# Even though the file for s1 exists, this saver isn't aware of it, which
# is why it doesn't end up in the checkpoint state.
self.assertCheckpointState(
model_checkpoint_path=s2,
all_model_checkpoint_paths=[s2],
save_dir=save_dir)
# Adding s1 (s3 should not be deleted because helper is unaware of it)
s1 = save3.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([s2, s1], save3.last_checkpoints)
self.assertFalse(checkpoint_management.checkpoint_exists(s3))
self.assertFalse(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s3)))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s2)))
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(
checkpoint_management.checkpoint_exists(
checkpoint_management.meta_graph_filename(s1)))
self.assertCheckpointState(
model_checkpoint_path=s1,
all_model_checkpoint_paths=[s2, s1],
save_dir=save_dir)
def testSharded(self):
save_dir = self._get_test_dir("max_to_keep_sharded")
with session.Session(
target="",
config=config_pb2.ConfigProto(device_count={"CPU": 2})) as sess:
with sess.graph.device("/cpu:0"):
v0 = variables.VariableV1(111, name="v0")
with sess.graph.device("/cpu:1"):
v1 = variables.VariableV1(222, name="v1")
save = saver_module.Saver(
{
"v0": v0,
"v1": v1
}, sharded=True, max_to_keep=2)
self.evaluate(variables.global_variables_initializer())
self.assertEqual([], save.last_checkpoints)
s1 = save.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([s1], save.last_checkpoints)
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(2, len(gfile.Glob(s1)))
else:
self.assertEqual(4, len(gfile.Glob(s1 + "*")))
self.assertTrue(
gfile.Exists(checkpoint_management.meta_graph_filename(s1)))
s2 = save.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([s1, s2], save.last_checkpoints)
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(2, len(gfile.Glob(s1)))
else:
self.assertEqual(4, len(gfile.Glob(s1 + "*")))
self.assertTrue(
gfile.Exists(checkpoint_management.meta_graph_filename(s1)))
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(2, len(gfile.Glob(s2)))
else:
self.assertEqual(4, len(gfile.Glob(s2 + "*")))
self.assertTrue(
gfile.Exists(checkpoint_management.meta_graph_filename(s2)))
s3 = save.save(sess, os.path.join(save_dir, "s3"))
self.assertEqual([s2, s3], save.last_checkpoints)
self.assertEqual(0, len(gfile.Glob(s1 + "*")))
self.assertFalse(
gfile.Exists(checkpoint_management.meta_graph_filename(s1)))
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(2, len(gfile.Glob(s2)))
else:
self.assertEqual(4, len(gfile.Glob(s2 + "*")))
self.assertTrue(
gfile.Exists(checkpoint_management.meta_graph_filename(s2)))
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(2, len(gfile.Glob(s3)))
else:
self.assertEqual(4, len(gfile.Glob(s3 + "*")))
self.assertTrue(
gfile.Exists(checkpoint_management.meta_graph_filename(s3)))
def testNoMaxToKeep(self):
save_dir = self._get_test_dir("no_max_to_keep")
save_dir2 = self._get_test_dir("max_to_keep_0")
with self.cached_session() as sess:
v = variables.VariableV1(10.0, name="v")
self.evaluate(variables.global_variables_initializer())
# Test max_to_keep being None.
save = saver_module.Saver({"v": v}, max_to_keep=None)
self.assertEqual([], save.last_checkpoints)
s1 = save.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
s2 = save.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
# Test max_to_keep being 0.
save2 = saver_module.Saver({"v": v}, max_to_keep=0)
self.assertEqual([], save2.last_checkpoints)
s1 = save2.save(sess, os.path.join(save_dir2, "s1"))
self.assertEqual([], save2.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
s2 = save2.save(sess, os.path.join(save_dir2, "s2"))
self.assertEqual([], save2.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
def testNoMetaGraph(self):
save_dir = self._get_test_dir("no_meta_graph")
with self.cached_session() as sess:
v = variables.VariableV1(10.0, name="v")
save = saver_module.Saver({"v": v})
self.evaluate(variables.global_variables_initializer())
s1 = save.save(sess, os.path.join(save_dir, "s1"), write_meta_graph=False)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertFalse(
gfile.Exists(checkpoint_management.meta_graph_filename(s1)))

class RecoverLastCheckpointsTest(test.TestCase):
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
def assertCheckpointState(self, model_checkpoint_path,
all_model_checkpoint_paths, save_dir):
checkpoint_state = checkpoint_management.get_checkpoint_state(save_dir)
self.assertEqual(checkpoint_state.model_checkpoint_path,
model_checkpoint_path)
self.assertEqual(checkpoint_state.all_model_checkpoint_paths,
all_model_checkpoint_paths)
def test_recover_last_checkpoints(self):
with context.eager_mode():
save_dir = self._get_test_dir("recover_last_checkpoints")
v = variable_scope.variable(10.0, name="v")
save = saver_module.Saver({"v": v}, max_to_keep=10)
self.evaluate(variables.global_variables_initializer())
self.assertEqual([], save.last_checkpoints)
s1 = save.save(None, os.path.join(save_dir, "ckpt-1"))
s2 = save.save(None, os.path.join(save_dir, "ckpt-2"))
s3 = save.save(None, os.path.join(save_dir, "ckpt-3"))
self.assertEqual([s1, s2, s3], save.last_checkpoints)
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertTrue(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(checkpoint_management.checkpoint_exists(s3))
self.assertCheckpointState(
model_checkpoint_path=s3,
all_model_checkpoint_paths=[s1, s2, s3],
save_dir=save_dir)
# Create another saver and recover last checkpoints.
save2 = saver_module.Saver({"v": v}, max_to_keep=10)
self.assertEqual([], save2.last_checkpoints)
save2.recover_last_checkpoints([s1, s2, s3])
self.assertEqual([s1, s2, s3], save2.last_checkpoints)
# Remove a checkpoint and check that last checkpoints are
# restored correctly.
for fname in gfile.Glob("{}*".format(s1)):
gfile.Remove(fname)
self.assertFalse(checkpoint_management.checkpoint_exists(s1))
# Create another saver and recover last checkpoints. The removed
# checkpoint would be correctly omitted.
save3 = saver_module.Saver({"v": v}, max_to_keep=10)
self.assertEqual([], save3.last_checkpoints)
save3.recover_last_checkpoints([s1, s2, s3])
self.assertEqual([s2, s3], save3.last_checkpoints)
s4 = save3.save(None, os.path.join(save_dir, "ckpt-4"))
self.assertCheckpointState(
model_checkpoint_path=s4,
all_model_checkpoint_paths=[s2, s3, s4],
save_dir=save_dir)

class KeepCheckpointEveryNHoursTest(test.TestCase):
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
@test_util.run_in_graph_and_eager_modes
@test.mock.patch.object(saver_module, "time")
def testNonSharded(self, mock_time):
save_dir = self._get_test_dir("keep_checkpoint_every_n_hours")
with self.cached_session() as sess:
v = variable_scope.variable([10.0], name="v")
# Run the initializer NOW to avoid the 0.5s overhead of the first Run()
# call, which throws the test timing off in fastbuild mode.
self.evaluate(variables.global_variables_initializer())
# Create a saver that will keep the last 2 checkpoints plus one every 0.7
# seconds.
start_time = time.time()
mock_time.time.return_value = start_time
save = saver_module.Saver(
{
"v": v
}, max_to_keep=2, keep_checkpoint_every_n_hours=0.7 / 3600)
self.assertEqual([], save.last_checkpoints)
      # Wait until 1 second has elapsed so s1 will be old enough to keep.
      # sleep may return early; don't trust it.
mock_time.time.return_value = start_time + 1.0
s1 = save.save(sess, os.path.join(save_dir, "s1"))
self.assertEqual([s1], save.last_checkpoints)
s2 = save.save(sess, os.path.join(save_dir, "s2"))
self.assertEqual([s1, s2], save.last_checkpoints)
      # We now have 2 'last_checkpoints': [s1, s2]. The next call to Save()
      # would normally delete s1, because max_to_keep is 2. However, s1 is
      # older than 0.7s, so we must keep it.
s3 = save.save(sess, os.path.join(save_dir, "s3"))
self.assertEqual([s2, s3], save.last_checkpoints)
      # s1 should still be here; we are not checking now, to reduce timing
      # variance in the test.
      # We now have 2 'last_checkpoints': [s2, s3], and s1 on disk. The next
      # call to Save() will delete s2, because max_to_keep is 2 and because
      # we already kept the old s1. s2 is very close in time to s1, so it
      # gets deleted.
s4 = save.save(sess, os.path.join(save_dir, "s4"))
self.assertEqual([s3, s4], save.last_checkpoints)
# Check that s1 is still here, but s2 is gone.
self.assertTrue(checkpoint_management.checkpoint_exists(s1))
self.assertFalse(checkpoint_management.checkpoint_exists(s2))
self.assertTrue(checkpoint_management.checkpoint_exists(s3))
self.assertTrue(checkpoint_management.checkpoint_exists(s4))
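# Note (added for clarity, not part of the original test): the test above shows
# the interaction of the two retention knobs. With max_to_keep=2 the Saver
# rotates the two most recent checkpoints, while keep_checkpoint_every_n_hours
# additionally preserves roughly one checkpoint per period (here 0.7 seconds,
# expressed as 0.7 / 3600 hours) instead of deleting it during rotation.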
class SaveRestoreWithVariableNameMap(test.TestCase):
def _testNonReshape(self, variable_op):
save_path = os.path.join(self.get_temp_dir(), "non_reshape")
with self.session(graph=ops_lib.Graph()) as sess:
# Build a graph with 2 parameter nodes, and Save and
# Restore nodes for them.
v0 = variable_op(10.0, name="v0")
v1 = variable_op(20.0, name="v1")
save = saver_module.Saver({"save_prefix/v0": v0, "save_prefix/v1": v1})
self.evaluate(variables.global_variables_initializer())
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
# Save the initialized values in the file at "save_path"
# Use a variable name map to set the saved tensor names
val = save.save(sess, save_path)
self.assertTrue(isinstance(val, six.string_types))
self.assertEqual(save_path, val)
# Verify that the original names are not in the Saved file
save = saver_module.Saver({"v0": v0, "v1": v1})
with self.assertRaisesOpError("not found in checkpoint"):
save.restore(sess, save_path)
# Verify that the mapped names are present in the Saved file and can be
# Restored using remapped names.
with self.session(graph=ops_lib.Graph()) as sess:
v0 = variable_op(-1.0, name="v0")
v1 = variable_op(-1.0, name="v1")
if not context.executing_eagerly():
with self.assertRaisesOpError("uninitialized"):
self.evaluate(v0)
with self.assertRaisesOpError("uninitialized"):
self.evaluate(v1)
save = saver_module.Saver({"save_prefix/v0": v0, "save_prefix/v1": v1})
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
if not context.executing_eagerly():
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
# Add a prefix to the node names in the current graph and Restore using
# remapped names.
with self.session(graph=ops_lib.Graph()) as sess:
v0 = variable_op(-1.0, name="restore_prefix/v0")
v1 = variable_op(-1.0, name="restore_prefix/v1")
if not context.executing_eagerly():
with self.assertRaisesOpError("uninitialized"):
self.evaluate(v0)
with self.assertRaisesOpError("uninitialized"):
self.evaluate(v1)
# Restore the saved values in the parameter nodes.
save = saver_module.Saver({"save_prefix/v0": v0, "save_prefix/v1": v1})
save.restore(sess, save_path)
# Check that the parameter nodes have been restored.
self.assertEqual(10.0, self.evaluate(v0))
self.assertEqual(20.0, self.evaluate(v1))
@test_util.run_in_graph_and_eager_modes
def testNonReshapeResourceVariable(self):
self._testNonReshape(resource_variable_ops.ResourceVariable)
def testNonReshapeVariable(self):
self._testNonReshape(variables.Variable)
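# Note (added for clarity, not part of the original test): the keys of the
# dictionary passed to Saver are the names under which tensors are stored in
# the checkpoint. Saving with {"save_prefix/v0": v0} and later restoring with a
# Saver built from the same keys (bound to different variables) is what lets
# the tests above remap variable names between graphs; restoring with the plain
# "v0" key fails with "not found in checkpoint".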
class MetaGraphTest(test.TestCase):
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
@test_util.run_v1_only(
"Queue-based input pipelines have been replaced by `tf.data` "
"and not supported in V2.")
def testAddCollectionDef(self):
test_dir = self._get_test_dir("good_collection")
filename = os.path.join(test_dir, "metafile")
with self.cached_session():
# Creates a graph.
v0 = variables.VariableV1(1.0, name="v0")
control_flow_ops.cond(
math_ops.less(v0, 10), lambda: math_ops.add(v0, 1),
lambda: math_ops.subtract(v0, 1))
control_flow_ops.while_loop(lambda i: math_ops.less(i, 10),
lambda i: math_ops.add(i, 1), [v0])
var = variables.VariableV1(constant_op.constant(0, dtype=dtypes.int64))
count_up_to = var.count_up_to(3)
input_queue = data_flow_ops.FIFOQueue(
30, dtypes.float32, shared_name="collection_queue")
qr = queue_runner_impl.QueueRunner(input_queue, [count_up_to])
variables.global_variables_initializer()
# Creates a saver.
save = saver_module.Saver({"v0": v0})
# Adds a set of collections.
ops_lib.add_to_collection("int_collection", 3)
ops_lib.add_to_collection("float_collection", 3.5)
ops_lib.add_to_collection("string_collection", "hello")
ops_lib.add_to_collection("variable_collection", v0)
# Add QueueRunners.
queue_runner_impl.add_queue_runner(qr)
# Adds user_defined proto in three formats: string, bytes and Any.
queue_runner = queue_runner_pb2.QueueRunnerDef(queue_name="test_queue")
ops_lib.add_to_collection("user_defined_string_collection",
str(queue_runner))
ops_lib.add_to_collection("user_defined_bytes_collection",
queue_runner.SerializeToString())
any_buf = Any()
any_buf.Pack(queue_runner)
ops_lib.add_to_collection("user_defined_any_collection", any_buf)
# Generates MetaGraphDef.
meta_graph_def = save.export_meta_graph(filename)
self.assertTrue(meta_graph_def.HasField("saver_def"))
self.assertTrue(meta_graph_def.HasField("graph_def"))
self.assertTrue(meta_graph_def.HasField("meta_info_def"))
self.assertNotEqual(meta_graph_def.meta_info_def.tensorflow_version, "")
self.assertNotEqual(meta_graph_def.meta_info_def.tensorflow_git_version,
"")
collection_def = meta_graph_def.collection_def
self.assertEqual(len(collection_def), 12)
with ops_lib.Graph().as_default():
# Restores from MetaGraphDef.
new_saver = saver_module.import_meta_graph(filename)
# Generates a new MetaGraphDef.
new_meta_graph_def = new_saver.export_meta_graph()
# It should be the same as the original.
test_util.assert_meta_graph_protos_equal(
self, meta_graph_def, new_meta_graph_def)
def testAddCollectionDefFails(self):
with self.cached_session():
# Creates a graph.
v0 = variables.VariableV1(10.0, name="v0")
# Creates a saver.
save = saver_module.Saver({"v0": v0})
# Generates MetaGraphDef.
meta_graph_def = meta_graph_pb2.MetaGraphDef()
# Verifies that a collection with an unsupported key will not be added.
ops_lib.add_to_collection(save, 3)
save._add_collection_def(meta_graph_def, save)
self.assertEqual(len(meta_graph_def.collection_def), 0)
# Verifies that a collection whose item type does not match the expected
# type will not be added.
ops_lib.add_to_collection("int_collection", 3)
ops_lib.add_to_collection("int_collection", 3.5)
save._add_collection_def(meta_graph_def, "int_collection")
self.assertEqual(len(meta_graph_def.collection_def), 0)
def _testMultiSaverCollectionSave(self, test_dir):
filename = os.path.join(test_dir, "metafile")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
saver1_ckpt = os.path.join(test_dir, "saver1.ckpt")
with self.session(graph=ops_lib.Graph()) as sess:
# Creates a graph.
v0 = variables.VariableV1([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], name="v0")
v1 = variables.VariableV1(11.0, name="v1")
# Creates 2 savers.
saver0 = saver_module.Saver({"v0": v0}, name="saver0")
saver1 = saver_module.Saver({"v1": v1}, name="saver1")
ops_lib.add_to_collection("savers", saver0)
ops_lib.add_to_collection("savers", saver1)
self.evaluate(variables.global_variables_initializer())
# Saves to different checkpoints.
saver0.save(sess, saver0_ckpt)
saver1.save(sess, saver1_ckpt)
# Generates MetaGraphDef.
meta_graph_def = saver_module.export_meta_graph(filename)
meta_graph_def0 = saver0.export_meta_graph()
meta_graph_def1 = saver1.export_meta_graph()
# Verifies that there is no saver_def in meta_graph_def.
self.assertFalse(meta_graph_def.HasField("saver_def"))
# Verifies that there is saver_def in meta_graph_def0 and 1.
self.assertTrue(meta_graph_def0.HasField("saver_def"))
self.assertTrue(meta_graph_def1.HasField("saver_def"))
# Verifies SAVERS is saved as bytes_list for meta_graph_def.
collection_def = meta_graph_def.collection_def["savers"]
kind = collection_def.WhichOneof("kind")
self.assertEqual(kind, "bytes_list")
# Verifies that there are 2 entries in SAVERS collection.
savers = getattr(collection_def, kind)
self.assertEqual(2, len(savers.value))
# Verifies SAVERS collection is saved as bytes_list for meta_graph_def0.
collection_def = meta_graph_def0.collection_def["savers"]
kind = collection_def.WhichOneof("kind")
self.assertEqual(kind, "bytes_list")
# Verifies that there are 2 entries in SAVERS collection.
savers = getattr(collection_def, kind)
self.assertEqual(2, len(savers.value))
def _testMultiSaverCollectionRestore(self, test_dir):
filename = os.path.join(test_dir, "metafile")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
saver1_ckpt = os.path.join(test_dir, "saver1.ckpt")
with self.session(graph=ops_lib.Graph()) as sess:
# Imports from meta_graph.
saver_module.import_meta_graph(filename)
# Retrieves SAVERS collection. Verifies there are 2 entries.
savers = ops_lib.get_collection("savers")
self.assertEqual(2, len(savers))
# Retrieves saver0. Verifies that new_saver0 can restore v0, but not v1.
new_saver0 = savers[0]
new_saver0.restore(sess, saver0_ckpt)
v0 = sess.graph.get_tensor_by_name("v0:0")
v1 = sess.graph.get_tensor_by_name("v1:0")
self.assertAllEqual([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
self.evaluate(v0))
self.assertEqual([3, 2], v0.get_shape())
self.assertEqual([], v1.get_shape())
with self.assertRaisesWithPredicateMatch(
errors_impl.OpError, lambda e: "uninitialized value v1" in e.message):
self.evaluate(v1)
# Retrieves saver1. Verifies that new_saver1 can restore v1.
new_saver1 = savers[1]
new_saver1.restore(sess, saver1_ckpt)
v1 = sess.graph.get_tensor_by_name("v1:0")
self.assertEqual(11.0, self.evaluate(v1))
@test_util.run_v1_only(
"Exporting/importing meta graphs is only supported in V1.")
def testMultiSaverCollection(self):
test_dir = self._get_test_dir("saver_collection")
self._testMultiSaverCollectionSave(test_dir)
self._testMultiSaverCollectionRestore(test_dir)
@test_util.run_v1_only(
"Exporting/importing meta graphs is only supported in V1.")
def testClearExtraneousSavers(self):
test_dir = self._get_test_dir("clear_extraneous_savers")
filename = os.path.join(test_dir, "metafile")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
saver1_ckpt = os.path.join(test_dir, "saver1.ckpt")
with self.session(graph=ops_lib.Graph()) as sess:
# Creates a graph.
v0 = variables.VariableV1([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], name="v0")
v1 = variables.VariableV1(11.0, name="v1")
# Creates 2 savers.
saver0 = saver_module.Saver({"v0": v0}, name="saver0")
saver1 = saver_module.Saver({"v1": v1}, name="saver1")
ops_lib.add_to_collection("savers", saver0)
ops_lib.add_to_collection("savers", saver1)
self.evaluate(variables.global_variables_initializer())
# Saves to different checkpoints.
saver0.save(sess, saver0_ckpt)
saver1.save(sess, saver1_ckpt)
# Generates MetaGraphDef.
meta_graph_def = saver_module.export_meta_graph(filename)
meta_graph_def0 = saver0.export_meta_graph()
meta_graph_def1 = saver1.export_meta_graph(clear_extraneous_savers=True)
# Verifies that there is no saver_def in meta_graph_def.
self.assertFalse(meta_graph_def.HasField("saver_def"))
# Verifies that there is saver_def in meta_graph_def0 and 1.
self.assertTrue(meta_graph_def0.HasField("saver_def"))
self.assertTrue(meta_graph_def1.HasField("saver_def"))
# Verifies SAVERS is saved as bytes_list for meta_graph_def.
collection_def = meta_graph_def.collection_def["savers"]
kind = collection_def.WhichOneof("kind")
self.assertEqual(kind, "bytes_list")
# Verifies that there are 2 entries in SAVERS collection.
savers = getattr(collection_def, kind)
self.assertEqual(2, len(savers.value))
# Verifies SAVERS collection is saved as bytes_list for meta_graph_def1.
collection_def = meta_graph_def1.collection_def["savers"]
kind = collection_def.WhichOneof("kind")
self.assertEqual(kind, "bytes_list")
# Verifies that there is 1 entry in SAVERS collection.
savers = getattr(collection_def, kind)
self.assertEqual(1, len(savers.value))
# Verifies that saver0 graph nodes are omitted from the saver1 export
self.assertEqual(33, len(meta_graph_def0.graph_def.node))
self.assertEqual(21, len(meta_graph_def1.graph_def.node))
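# Note (added for clarity, not part of the original test): with
# clear_extraneous_savers=True the exported MetaGraphDef keeps only the
# exporting saver's entry in the SAVERS collection (1 entry vs. 2 above) and
# drops the other saver's nodes from the graph_def (21 nodes vs. 33).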
def testBinaryAndTextFormat(self):
test_dir = self._get_test_dir("binary_and_text")
filename = os.path.join(test_dir, "metafile")
# train.Saver is a V1-only API.
with ops_lib.Graph().as_default(), self.session():
# Creates a graph.
variables.VariableV1(10.0, name="v0")
# Exports the graph as binary format.
saver_module.export_meta_graph(filename, as_text=False)
with ops_lib.Graph().as_default(), self.session():
# Imports the binary format graph.
saver = saver_module.import_meta_graph(filename)
self.assertIsNotNone(saver)
# Exports the graph as text format.
saver.export_meta_graph(filename, as_text=True)
with ops_lib.Graph().as_default(), self.session():
# Imports the text format graph.
saver_module.import_meta_graph(filename)
# Writes wrong contents to the file.
graph_io.write_graph(saver.as_saver_def(),
os.path.dirname(filename),
os.path.basename(filename))
with ops_lib.Graph().as_default(), self.session():
# Import should fail.
with self.assertRaisesWithPredicateMatch(IOError,
lambda e: "Cannot parse file"):
saver_module.import_meta_graph(filename)
# Deletes the file
gfile.Remove(filename)
with self.assertRaisesWithPredicateMatch(IOError,
lambda e: "does not exist"):
saver_module.import_meta_graph(filename)
@test_util.run_v1_only(
"Exporting/importing meta graphs is only supported in V1.")
def testSliceVariable(self):
test_dir = self._get_test_dir("slice_saver")
filename = os.path.join(test_dir, "metafile")
with self.cached_session():
v1 = variables.VariableV1([20.0], name="v1")
v2 = variables.VariableV1([20.0], name="v2")
v2._set_save_slice_info(
variables.Variable.SaveSliceInfo("v1", [1], [0], [1]))
# The names are different, so this will work.
slice_saver = saver_module.Saver({"first": v1, "second": v2})
self.evaluate(variables.global_variables_initializer())
# Exports to meta_graph
meta_graph_def = slice_saver.export_meta_graph(filename)
with ops_lib.Graph().as_default():
# Restores from MetaGraphDef.
new_saver = saver_module.import_meta_graph(filename)
self.assertIsNotNone(new_saver)
# Generates a new MetaGraphDef.
new_meta_graph_def = new_saver.export_meta_graph()
# It should be the same as the original.
test_util.assert_meta_graph_protos_equal(self, meta_graph_def,
new_meta_graph_def)
def _testGraphExtensionSave(self, test_dir):
filename = os.path.join(test_dir, "metafile")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
# Creates an inference graph.
# Hidden 1
images = constant_op.constant(1.2, dtypes.float32, shape=[100, 28])
with ops_lib.name_scope("hidden1"):
weights = variables.VariableV1(
random_ops.truncated_normal(
[28, 128], stddev=1.0 / math.sqrt(float(28))),
name="weights")
# The use of control_flow_ops.cond here is purely for adding test coverage
# of the save and restore of control flow contexts (it makes no sense here
# from a machine learning perspective). Typically, the biases would be a
# simple Variable without the condition.
biases = variables.VariableV1(
control_flow_ops.cond(
math_ops.less(random.random(), 0.5),
lambda: array_ops.ones([128]), lambda: array_ops.zeros([128])),
name="biases")
hidden1 = nn_ops.relu(math_ops.matmul(images, weights) + biases)
# Hidden 2
with ops_lib.name_scope("hidden2"):
weights = variables.VariableV1(
random_ops.truncated_normal(
[128, 32], stddev=1.0 / math.sqrt(float(128))),
name="weights")
# The use of control_flow_ops.while_loop here is purely for adding test
# coverage of the save and restore of control flow contexts (it makes no
# sense here from a machine learning perspective). Typically, the biases
# would be a simple Variable without the loop.
def loop_cond(it, _):
return it < 2
def loop_body(it, biases):
biases += constant_op.constant(0.1, shape=[32])
return it + 1, biases
_, biases = control_flow_ops.while_loop(
loop_cond, loop_body,
[constant_op.constant(0),
variables.VariableV1(array_ops.zeros([32]))])
hidden2 = nn_ops.relu(math_ops.matmul(hidden1, weights) + biases)
# Linear
with ops_lib.name_scope("softmax_linear"):
weights = variables.VariableV1(
random_ops.truncated_normal(
[32, 10], stddev=1.0 / math.sqrt(float(32))),
name="weights")
biases = variables.VariableV1(array_ops.zeros([10]), name="biases")
logits = math_ops.matmul(hidden2, weights) + biases
ops_lib.add_to_collection("logits", logits)
init_all_op = variables.global_variables_initializer()
with self.cached_session() as sess:
# Initializes all the variables.
self.evaluate(init_all_op)
# Runs to logit.
self.evaluate(logits)
# Creates a saver.
saver0 = saver_module.Saver()
saver0.save(sess, saver0_ckpt)
# Generates MetaGraphDef.
saver0.export_meta_graph(filename)
def _testGraphExtensionRestore(self, test_dir):
filename = os.path.join(test_dir, "metafile")
train_filename = os.path.join(test_dir, "train_metafile")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
with self.session(graph=ops_lib.Graph()) as sess:
# Restores from MetaGraphDef.
new_saver = saver_module.import_meta_graph(filename)
# Generates a new MetaGraphDef.
new_saver.export_meta_graph()
# Restores from checkpoint.
new_saver.restore(sess, saver0_ckpt)
# Adds loss and train.
labels = constant_op.constant(0, dtypes.int32, shape=[100], name="labels")
batch_size = array_ops.size(labels)
labels = array_ops.expand_dims(labels, 1)
indices = array_ops.expand_dims(math_ops.range(0, batch_size), 1)
concated = array_ops.concat([indices, labels], 1)
onehot_labels = sparse_ops.sparse_to_dense(
concated, array_ops.stack([batch_size, 10]), 1.0, 0.0)
logits = ops_lib.get_collection("logits")[0]
cross_entropy = nn_ops.softmax_cross_entropy_with_logits(
labels=onehot_labels, logits=logits, name="xentropy")
loss = math_ops.reduce_mean(cross_entropy, name="xentropy_mean")
summary.scalar("loss", loss)
# Creates the gradient descent optimizer with the given learning rate.
optimizer = gradient_descent.GradientDescentOptimizer(0.01)
# Runs train_op.
train_op = optimizer.minimize(loss)
ops_lib.add_to_collection("train_op", train_op)
# Runs train_op.
self.evaluate(train_op)
# Generates MetaGraphDef.
saver_module.export_meta_graph(train_filename)
def _testRestoreFromTrainGraphWithControlContext(self, test_dir):
train_filename = os.path.join(test_dir, "train_metafile")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
with self.session(graph=ops_lib.Graph()) as sess:
# Restores from MetaGraphDef.
new_saver = saver_module.import_meta_graph(train_filename)
# Restores from checkpoint.
new_saver.restore(sess, saver0_ckpt)
train_op = ops_lib.get_collection("train_op")[0]
self.evaluate(train_op)
def testGraphExtension(self):
test_dir = self._get_test_dir("graph_extension")
# train.Saver and train.import_meta_graph are V1-only APIs.
with ops_lib.Graph().as_default():
self._testGraphExtensionSave(test_dir)
self._testGraphExtensionRestore(test_dir)
self._testRestoreFromTrainGraphWithControlContext(test_dir)
def _testGradientSerDes(self, graph_fn):
"""Tests that gradients can be computed after exporting and importing.
Builds a graph, exports it, and verifies that it can be imported and the
gradient can be built and run correctly.
Args:
graph_fn: a function that takes a single float Tensor argument as input and
outputs a single Tensor.
"""
test_dir = self._get_test_dir("nested_control_flow")
filename = os.path.join(test_dir, "metafile")
saver_ckpt = os.path.join(test_dir, "saver.ckpt")
# Build the graph using `graph_fn`.
with ops_lib.Graph().as_default():
var = variables.VariableV1(0.0)
var_name = var.name
output = graph_fn(var)
output_name = output.name
init_op = variables.global_variables_initializer()
# Generate a MetaGraphDef containing the while loop.
with session.Session() as sess:
self.evaluate(init_op)
self.evaluate(output)
saver = saver_module.Saver()
saver.save(sess, saver_ckpt)
saver.export_meta_graph(filename)
# Build and run the gradients of the while loop. We use this below to
# verify that the gradients are correct with an imported MetaGraphDef.
grad = gradients_impl.gradients([output], [var])
# Turn off constant folding to avoid breaking testNestedControlFlowSerDes.
# It appears that a missing control dependency in the gradient graph
# causes the fetch node to not be triggered.
no_constfold_config = config_pb2.ConfigProto()
no_constfold_config.graph_options.rewrite_options.constant_folding = (
rewriter_config_pb2.RewriterConfig.OFF)
with session.Session(config=no_constfold_config) as sess:
self.evaluate(init_op)
expected_grad_value = self.evaluate(grad)
# Restore the MetaGraphDef into a new Graph.
with ops_lib.Graph().as_default():
with session.Session() as sess:
saver = saver_module.import_meta_graph(filename)
saver.restore(sess, saver_ckpt)
# Make sure we can still build gradients and get the same result.
var = ops_lib.get_default_graph().get_tensor_by_name(var_name)
output = ops_lib.get_default_graph().get_tensor_by_name(output_name)
grad = gradients_impl.gradients([output], [var])
init_op = variables.global_variables_initializer()
with session.Session(config=no_constfold_config) as sess:
self.evaluate(init_op)
actual_grad_value = self.evaluate(grad)
self.assertEqual(expected_grad_value, actual_grad_value)
def _testWhileLoopAndGradientSerDes(self, outer_body_fn):
# Build a while loop with `outer_body_fn`, export it, and verify that it can
# be imported and the gradient can be built and run correctly.
# pylint: disable=g-long-lambda
return self._testGradientSerDes(
lambda x: control_flow_ops.while_loop(
lambda i, y: i < 5, outer_body_fn, [0, x])[1])
# pylint: enable=g-long-lambda
def testNestedWhileLoopsSerDes(self):
# Test two simple nested while loops.
def body(i, x):
_, r = control_flow_ops.while_loop(lambda j, y: j < 3,
lambda j, y: (j + 1, y + x),
[0, 0.0])
return i + 1, x + r
self._testWhileLoopAndGradientSerDes(body)
def testNestedControlFlowSerDes(self):
# Test while loop in a cond in a while loop.
# pylint: disable=g-long-lambda
def body(i, x):
cond_result = control_flow_ops.cond(
i > 0,
lambda: control_flow_ops.while_loop(
lambda j, y: j < 3,
lambda j, y: (j + 1, y + x),
[0, 0.0])[1],
lambda: x)
return i + 1, cond_result
# pylint: enable=g-long-lambda
self._testWhileLoopAndGradientSerDes(body)
def testNestedCondsSerDes(self):
# Test conds in a cond.
# pylint: disable=g-long-lambda
self._testGradientSerDes(lambda x: control_flow_ops.cond(
x > 0,
lambda: control_flow_ops.cond(x > 3,
lambda: array_ops.identity(x),
lambda: math_ops.multiply(x, 2.0)),
lambda: control_flow_ops.cond(x < -3,
lambda: constant_op.constant(1.0),
lambda: math_ops.multiply(x, -1.0))))
# pylint: enable=g-long-lambda
@test_util.run_v1_only("This exercises Tensor.op which is meaningless in V2.")
def testStrippedOpListDef(self):
with self.cached_session():
# Creates a graph.
v0 = variables.VariableV1(0.0)
var = variables.VariableV1(10.0)
math_ops.add(v0, var)
@function.Defun(dtypes.float32)
def minus_one(x):
return x - 1
minus_one(array_ops.identity(v0))
save = saver_module.Saver({"v0": v0})
variables.global_variables_initializer()
# Generates MetaGraphDef.
meta_graph_def = save.export_meta_graph()
ops = [o.name for o in meta_graph_def.meta_info_def.stripped_op_list.op]
if save._write_version is saver_pb2.SaverDef.V1:
self.assertEqual(ops, [
"AddV2", "Assign", "Const", "Identity", "NoOp",
"PlaceholderWithDefault", "RestoreV2", "SaveSlices", "Sub",
"VariableV2"
])
else:
self.assertEqual(ops, [
"AddV2", "Assign", "Const", "Identity", "NoOp",
"PlaceholderWithDefault", "RestoreV2", "SaveV2", "Sub", "VariableV2"
])
# Test calling stripped_op_list_for_graph directly
op_list = meta_graph.stripped_op_list_for_graph(meta_graph_def.graph_def)
self.assertEqual(ops, [o.name for o in op_list.op])
for o in op_list.op:
self.assertEqual(o.summary, "")
self.assertEqual(o.description, "")
def testStripDefaultValuedAttrs(self):
"""Verifies that default valued attrs are stripped, unless disabled."""
# With strip_default_attrs enabled, attributes "T" (float32) and "Tout"
# (complex64) in the "Complex" op must be removed.
# train.Saver and train.export_meta_graph are V1-only APIs.
with ops_lib.Graph().as_default(), self.cached_session():
real_num = variables.VariableV1(1.0, dtype=dtypes.float32, name="real")
imag_num = variables.VariableV1(2.0, dtype=dtypes.float32, name="imag")
math_ops.complex(real_num, imag_num, name="complex")
save = saver_module.Saver({"real_num": real_num, "imag_num": imag_num})
variables.global_variables_initializer()
meta_graph_def = save.export_meta_graph(strip_default_attrs=True)
node_def = test_util.get_node_def_from_graph("complex",
meta_graph_def.graph_def)
self.assertNotIn("T", node_def.attr)
self.assertNotIn("Tout", node_def.attr)
# With strip_default_attrs disabled, attributes "T" (float32) and "Tout"
# (complex64) in the "Complex" op must *not* be removed, even if they map
# to their defaults.
with ops_lib.Graph().as_default(), self.session():
real_num = variables.VariableV1(1.0, dtype=dtypes.float32, name="real")
imag_num = variables.VariableV1(2.0, dtype=dtypes.float32, name="imag")
math_ops.complex(real_num, imag_num, name="complex")
save = saver_module.Saver({"real_num": real_num, "imag_num": imag_num})
variables.global_variables_initializer()
meta_graph_def = save.export_meta_graph(strip_default_attrs=False)
node_def = test_util.get_node_def_from_graph("complex",
meta_graph_def.graph_def)
self.assertIn("T", node_def.attr)
self.assertIn("Tout", node_def.attr)
def testImportIntoNamescope(self):
# Test that we can import a meta graph into a namescope.
test_dir = self._get_test_dir("import_into_namescope")
filename = os.path.join(test_dir, "ckpt")
# train.Saver is a V1-only API.
with ops_lib.Graph().as_default():
image = array_ops.placeholder(dtypes.float32, [None, 784], name="image")
label = array_ops.placeholder(dtypes.float32, [None, 10], name="label")
with session.Session() as sess:
weights = variables.VariableV1(
random_ops.random_uniform([784, 10]), name="weights")
bias = variables.VariableV1(array_ops.zeros([10]), name="bias")
logit = nn_ops.relu(
math_ops.matmul(image, weights) + bias, name="logits")
nn_ops.softmax(logit, name="prediction")
cost = nn_ops.softmax_cross_entropy_with_logits(
labels=label, logits=logit, name="cost")
adam.AdamOptimizer().minimize(cost, name="optimize")
saver = saver_module.Saver()
self.evaluate(variables.global_variables_initializer())
saver.save(sess, filename)
graph = ops_lib.Graph()
with session.Session(graph=graph) as sess:
new_saver = saver_module.import_meta_graph(
filename + ".meta", graph=graph, import_scope="new_model")
new_saver.restore(sess, filename)
sess.run(["new_model/optimize"], {
"new_model/image:0": np.random.random([1, 784]),
"new_model/label:0": np.random.randint(
10, size=[1, 10])
})
def testImportIntoNamescopeWithoutVariables(self):
# Save a simple graph that contains no variables into a checkpoint.
test_dir = self._get_test_dir("no_vars_graph")
filename = os.path.join(test_dir, "ckpt")
graph_1 = ops_lib.Graph()
with session.Session(graph=graph_1) as sess:
constant_op.constant([1, 2, 3], name="x")
constant_op.constant([1, 2, 3], name="y")
saver = saver_module.Saver(allow_empty=True)
saver.save(sess, filename)
# Create a fresh graph.
graph_2 = ops_lib.Graph()
with session.Session(graph=graph_2) as sess:
# Restore the above checkpoint under scope "subgraph_1".
new_saver_1 = saver_module.import_meta_graph(
filename + ".meta", graph=graph_2, import_scope="subgraph_1")
# There are no variables to restore, so import_meta_graph should not
# return a Saver.
self.assertIsNone(new_saver_1)
# Create a variable in graph_2 under scope "my_scope".
variables.VariableV1(array_ops.zeros([10]), name="my_scope/my_var")
self.evaluate(variables.global_variables_initializer())
# Restore the checkpoint into a different scope "subgraph_2".
new_saver_2 = saver_module.import_meta_graph(
filename + ".meta", graph=graph_2, import_scope="subgraph_2")
# Because the variable does not live in scope "subgraph_2",
# import_meta_graph should not attempt to restore the variable. So,
# import_meta_graph still won't return a Saver instance.
self.assertIsNone(new_saver_2)
# However, if we restore the checkpoint under scope "my_scope",
# import_meta_graph will detect the variable and return a Saver for
# restoring it. This should happen even when the variable does not
# originate from graph_1.
new_saver_3 = saver_module.import_meta_graph(
filename + ".meta", graph=graph_2, import_scope="my_scope")
self.assertIsInstance(new_saver_3, saver_module.Saver)
def testImportIntoImplicitNamescope(self):
# Test that we can import a meta graph into an implicit namescope.
test_dir = self._get_test_dir("import_into_namescope")
filename = os.path.join(test_dir, "ckpt")
# train.Saver is a V1-only API.
with ops_lib.Graph().as_default():
image = array_ops.placeholder(dtypes.float32, [None, 784], name="image")
label = array_ops.placeholder(dtypes.float32, [None, 10], name="label")
with session.Session() as sess:
weights = variables.VariableV1(
random_ops.random_uniform([784, 10]), name="weights")
bias = variables.VariableV1(array_ops.zeros([10]), name="bias")
logit = nn_ops.relu(
math_ops.matmul(image, weights) + bias, name="logits")
nn_ops.softmax(logit, name="prediction")
cost = nn_ops.softmax_cross_entropy_with_logits(
labels=label, logits=logit, name="cost")
adam.AdamOptimizer().minimize(cost, name="optimize")
saver = saver_module.Saver()
self.evaluate(variables.global_variables_initializer())
saver.save(sess, filename)
graph = ops_lib.Graph()
with session.Session(graph=graph) as sess:
with ops_lib.name_scope("new_model"):
new_saver = saver_module.import_meta_graph(
filename + ".meta", graph=graph)
new_saver.restore(sess, filename)
sess.run(["new_model/optimize"], {
"new_model/image:0": np.random.random([1, 784]),
"new_model/label:0": np.random.randint(
10, size=[1, 10])
})
def testClearDevicesOnImport(self):
# Test that we can import a graph without its devices and run it successfully.
with ops_lib.Graph().as_default():
with ops_lib.device("/job:ps/replica:0/task:0/device:GPU:0"):
image = array_ops.placeholder(dtypes.float32, [None, 784], name="image")
label = array_ops.placeholder(dtypes.float32, [None, 10], name="label")
weights = variables.VariableV1(
random_ops.random_uniform([784, 10]), name="weights")
bias = variables.VariableV1(array_ops.zeros([10]), name="bias")
logit = nn_ops.relu(math_ops.matmul(image, weights) + bias)
nn_ops.softmax(logit, name="prediction")
cost = nn_ops.softmax_cross_entropy_with_logits(labels=label,
logits=logit)
adam.AdamOptimizer().minimize(cost, name="optimize")
meta_graph_def = saver_module.export_meta_graph()
with session.Session(graph=ops_lib.Graph()) as sess:
saver_module.import_meta_graph(
meta_graph_def, clear_devices=False, import_scope="new_model")
# Device refers to GPU, which is not available here.
with self.assertRaises(errors_impl.InvalidArgumentError):
self.evaluate(variables.global_variables_initializer())
with session.Session(graph=ops_lib.Graph()) as sess:
saver_module.import_meta_graph(
meta_graph_def, clear_devices=True, import_scope="new_model")
self.evaluate(variables.global_variables_initializer())
sess.run(["new_model/optimize"], {
"new_model/image:0": np.random.random([1, 784]),
"new_model/label:0": np.random.randint(
10, size=[1, 10])
})
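# Note (added for clarity, not part of the original test): clear_devices=True
# strips the "/job:ps/.../device:GPU:0" placements on import, which is why the
# second session above can initialize and run the graph on a machine without
# that device, while the clear_devices=False import fails.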
def testClearDevicesOnExport(self):
# Test that we can export a graph without its devices and run it successfully.
with ops_lib.Graph().as_default():
with ops_lib.device("/job:ps/replica:0/task:0/device:GPU:0"):
image = array_ops.placeholder(dtypes.float32, [None, 784], name="image")
label = array_ops.placeholder(dtypes.float32, [None, 10], name="label")
weights = variables.VariableV1(
random_ops.random_uniform([784, 10]), name="weights")
bias = variables.VariableV1(array_ops.zeros([10]), name="bias")
logit = nn_ops.relu(math_ops.matmul(image, weights) + bias)
nn_ops.softmax(logit, name="prediction")
cost = nn_ops.softmax_cross_entropy_with_logits(labels=label,
logits=logit)
adam.AdamOptimizer().minimize(cost, name="optimize")
meta_graph_def = saver_module.export_meta_graph(clear_devices=True)
graph_io.write_graph(meta_graph_def, self.get_temp_dir(),
"meta_graph.pbtxt")
with session.Session(graph=ops_lib.Graph()) as sess:
saver_module.import_meta_graph(meta_graph_def, import_scope="new_model")
self.evaluate(variables.global_variables_initializer())
sess.run(["new_model/optimize"], {
"new_model/image:0": np.random.random([1, 784]),
"new_model/label:0": np.random.randint(
10, size=[1, 10])
})
def testPreserveDatasetAndFunctions(self):
with ops_lib.Graph().as_default() as g:
dataset = dataset_ops.Dataset.range(10).map(lambda x: x * x)
iterator = dataset_ops.make_one_shot_iterator(dataset)
next_element = iterator.get_next()
_ = array_ops.identity(next_element, name="output")
# Generate three MetaGraphDef protos using different code paths.
meta_graph_def_simple = saver_module.export_meta_graph()
meta_graph_def_devices_cleared = saver_module.export_meta_graph(
clear_devices=True)
meta_graph_def_from_graph_def = saver_module.export_meta_graph(
clear_devices=True, graph_def=g.as_graph_def())
for meta_graph_def in [meta_graph_def_simple,
meta_graph_def_devices_cleared,
meta_graph_def_from_graph_def]:
with session.Session(graph=ops_lib.Graph()) as sess:
saver_module.import_meta_graph(meta_graph_def, import_scope="new_model")
self.evaluate(variables.global_variables_initializer())
for i in range(10):
self.assertEqual(i * i, sess.run("new_model/output:0"))
with self.assertRaises(errors.OutOfRangeError):
sess.run("new_model/output:0")
class CheckpointReaderTest(test.TestCase):
_WRITE_VERSION = saver_pb2.SaverDef.V1
def testDebugString(self):
# Builds a graph.
v0 = variables.VariableV1(
[[1, 2, 3], [4, 5, 6]], dtype=dtypes.float32, name="v0")
v1 = variables.VariableV1(
[[[1], [2]], [[3], [4]], [[5], [6]]], dtype=dtypes.float32, name="v1")
init_all_op = variables.global_variables_initializer()
save = saver_module.Saver(
{
"v0": v0,
"v1": v1
}, write_version=self._WRITE_VERSION)
save_path = os.path.join(self.get_temp_dir(),
"ckpt_for_debug_string" + str(self._WRITE_VERSION))
with self.cached_session() as sess:
self.evaluate(init_all_op)
# Saves a checkpoint.
save.save(sess, save_path)
# Creates a reader.
reader = py_checkpoint_reader.NewCheckpointReader(save_path)
# Verifies that the tensors exist.
self.assertTrue(reader.has_tensor("v0"))
self.assertTrue(reader.has_tensor("v1"))
debug_string = reader.debug_string()
# Verifies that debug string contains the right strings.
self.assertTrue(compat.as_bytes("v0 (DT_FLOAT) [2,3]") in debug_string)
self.assertTrue(compat.as_bytes("v1 (DT_FLOAT) [3,2,1]") in debug_string)
# Verifies get_variable_to_shape_map() returns the correct information.
var_map = reader.get_variable_to_shape_map()
self.assertEqual([2, 3], var_map["v0"])
self.assertEqual([3, 2, 1], var_map["v1"])
# Verifies get_tensor() returns the tensor value.
v0_tensor = reader.get_tensor("v0")
v1_tensor = reader.get_tensor("v1")
self.assertAllEqual(v0, v0_tensor)
self.assertAllEqual(v1, v1_tensor)
# Verifies get_tensor() fails for non-existent tensors.
with self.assertRaisesRegex(errors.NotFoundError,
"v3 not found in checkpoint"):
reader.get_tensor("v3")
def testNonexistentPath(self):
with self.assertRaisesRegex(errors.NotFoundError,
"Unsuccessful TensorSliceReader"):
py_checkpoint_reader.NewCheckpointReader("non-existent")
class CheckpointReaderForV2Test(CheckpointReaderTest):
_WRITE_VERSION = saver_pb2.SaverDef.V2
class WriteGraphTest(test.TestCase):
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
def testWriteGraph(self):
test_dir = self._get_test_dir("write_graph_dir")
variables.VariableV1(
[[1, 2, 3], [4, 5, 6]], dtype=dtypes.float32, name="v0")
path = graph_io.write_graph(ops_lib.get_default_graph(),
os.path.join(test_dir, "l1"), "graph.pbtxt")
truth = os.path.join(test_dir, "l1", "graph.pbtxt")
self.assertEqual(path, truth)
self.assertTrue(os.path.exists(path))
def testRecursiveCreate(self):
test_dir = self._get_test_dir("deep_dir")
variables.VariableV1(
[[1, 2, 3], [4, 5, 6]], dtype=dtypes.float32, name="v0")
path = graph_io.write_graph(ops_lib.get_default_graph().as_graph_def(),
os.path.join(test_dir, "l1", "l2", "l3"),
"graph.pbtxt")
truth = os.path.join(test_dir, "l1", "l2", "l3", "graph.pbtxt")
self.assertEqual(path, truth)
self.assertTrue(os.path.exists(path))
class ScopedGraphTest(test.TestCase):
def _get_test_dir(self, dirname):
test_dir = os.path.join(self.get_temp_dir(), dirname)
gfile.MakeDirs(test_dir)
return test_dir
def _testScopedSave(self, test_dir, exported_filename, ckpt_filename):
graph = ops_lib.Graph()
with graph.as_default():
# Creates an inference graph.
# Hidden 1
images = constant_op.constant(
1.2, dtypes.float32, shape=[100, 28], name="images")
with ops_lib.name_scope("hidden1"):
weights1 = variables.VariableV1(
random_ops.truncated_normal(
[28, 128], stddev=1.0 / math.sqrt(float(28))),
name="weights")
# The use of control_flow_ops.cond here is purely for adding test
# coverage of the save and restore of control flow contexts (it makes no
# sense here from a machine learning perspective). Typically, the biases
# would be a simple Variable without the condition.
biases1 = variables.VariableV1(
control_flow_ops.cond(
math_ops.less(random.random(), 0.5),
lambda: array_ops.ones([128]), lambda: array_ops.zeros([128])),
name="biases")
hidden1 = nn_ops.relu(math_ops.matmul(images, weights1) + biases1)
# Hidden 2
with ops_lib.name_scope("hidden2"):
weights2 = variables.VariableV1(
random_ops.truncated_normal(
[128, 32], stddev=1.0 / math.sqrt(float(128))),
name="weights")
# The use of control_flow_ops.while_loop here is purely for adding test
# coverage of the save and restore of control flow contexts (it makes no
# sense here from a machine learning perspective). Typically, the biases
# would be a simple Variable without the loop.
def loop_cond(it, _):
return it < 2
def loop_body(it, biases2):
biases2 += constant_op.constant(0.1, shape=[32])
return it + 1, biases2
_, biases2 = control_flow_ops.while_loop(loop_cond, loop_body, [
constant_op.constant(0), variables.VariableV1(array_ops.zeros([32]))
])
hidden2 = nn_ops.relu(math_ops.matmul(hidden1, weights2) + biases2)
# Linear
with ops_lib.name_scope("softmax_linear"):
weights3 = variables.VariableV1(
random_ops.truncated_normal(
[32, 10], stddev=1.0 / math.sqrt(float(32))),
name="weights")
biases3 = variables.VariableV1(array_ops.zeros([10]), name="biases")
logits = math_ops.matmul(hidden2, weights3) + biases3
ops_lib.add_to_collection("logits", logits)
# Adds user_defined proto in three formats: string, bytes and Any.
# Any proto should just pass through.
queue_runner = queue_runner_pb2.QueueRunnerDef(queue_name="test_queue")
ops_lib.add_to_collection("user_defined_string_collection",
str(queue_runner))
ops_lib.add_to_collection("user_defined_bytes_collection",
queue_runner.SerializeToString())
any_buf = Any()
any_buf.Pack(queue_runner)
ops_lib.add_to_collection("user_defined_any_collection", any_buf)
_, var_list = meta_graph.export_scoped_meta_graph(
filename=os.path.join(test_dir, exported_filename),
graph=ops_lib.get_default_graph(),
export_scope="hidden1")
self.assertEqual(["biases:0", "weights:0"], sorted(var_list.keys()))
with graph.as_default(), self.session() as sess:
self.evaluate(variables.global_variables_initializer())
saver = saver_module.Saver(var_list=var_list, max_to_keep=1)
saver.save(sess, os.path.join(test_dir, ckpt_filename), write_state=False)
def _testScopedRestore(self, test_dir, exported_filename,
new_exported_filename, ckpt_filename):
graph = ops_lib.Graph()
# Create all the missing inputs.
with graph.as_default():
new_image = constant_op.constant(
1.2, dtypes.float32, shape=[100, 28], name="images")
var_list = meta_graph.import_scoped_meta_graph(
os.path.join(test_dir, exported_filename),
graph=graph,
input_map={"$unbound_inputs_images": new_image},
import_scope="new_hidden1")
self.assertEqual(["biases:0", "weights:0"], sorted(var_list.keys()))
hidden1 = graph.as_graph_element("new_hidden1/Relu:0")
weights1 = graph.as_graph_element("new_hidden1/weights:0")
biases1 = graph.as_graph_element("new_hidden1/biases:0")
with graph.as_default():
# Hidden 2
with ops_lib.name_scope("hidden2"):
weights = variables.VariableV1(
random_ops.truncated_normal(
[128, 32], stddev=1.0 / math.sqrt(float(128))),
name="weights")
# The use of control_flow_ops.while_loop here is purely for adding test
# coverage of the save and restore of control flow contexts (it makes no
# sense here from a machine learning perspective). Typically, the biases
# would be a simple Variable without the loop.
def loop_cond(it, _):
return it < 2
def loop_body(it, biases):
biases += constant_op.constant(0.1, shape=[32])
return it + 1, biases
_, biases = control_flow_ops.while_loop(loop_cond, loop_body, [
constant_op.constant(0), variables.VariableV1(array_ops.zeros([32]))
])
hidden2 = nn_ops.relu(math_ops.matmul(hidden1, weights) + biases)
# Linear
with ops_lib.name_scope("softmax_linear"):
weights = variables.VariableV1(
random_ops.truncated_normal(
[32, 10], stddev=1.0 / math.sqrt(float(32))),
name="weights")
biases = variables.VariableV1(array_ops.zeros([10]), name="biases")
logits = math_ops.matmul(hidden2, weights) + biases
ops_lib.add_to_collection("logits", logits)
# The rest of the variables.
rest_variables = list(
set(variables.global_variables()) - set(var_list.keys()))
init_rest_op = variables.variables_initializer(rest_variables)
with graph.as_default(), self.session() as sess:
saver = saver_module.Saver(var_list=var_list, max_to_keep=1)
saver.restore(sess, os.path.join(test_dir, ckpt_filename))
# Verify that we have restored weights1 and biases1.
self.evaluate([weights1, biases1])
# Initialize the rest of the variables and run logits.
self.evaluate(init_rest_op)
self.evaluate(logits)
# Verifies that we can save the subgraph under "hidden1" and restore it
# into "new_hidden1" in the new graph.
def testScopedSaveAndRestore(self):
test_dir = self._get_test_dir("scoped_export_import")
ckpt_filename = "ckpt"
self._testScopedSave(test_dir, "exported_hidden1.pbtxt", ckpt_filename)
self._testScopedRestore(test_dir, "exported_hidden1.pbtxt",
"exported_new_hidden1.pbtxt", ckpt_filename)
# Verifies that we can copy the subgraph under "hidden1" to a different
# name scope in the same graph or to a different graph.
def testCopyScopedGraph(self):
test_dir = self._get_test_dir("scoped_copy")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
graph1 = ops_lib.Graph()
with graph1.as_default():
with ops_lib.name_scope("hidden1"):
images = constant_op.constant(
1.0, dtypes.float32, shape=[3, 2], name="images")
weights1 = variables.VariableV1(
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], name="weights")
biases1 = variables.VariableV1([0.1] * 3, name="biases")
nn_ops.relu(math_ops.matmul(images, weights1) + biases1, name="relu")
# Run the graph and save scoped checkpoint.
with graph1.as_default(), self.session(graph=graph1) as sess:
self.evaluate(variables.global_variables_initializer())
_, var_list_1 = meta_graph.export_scoped_meta_graph(
export_scope="hidden1")
saver = saver_module.Saver(var_list=var_list_1, max_to_keep=1)
saver.save(sess, saver0_ckpt, write_state=False)
expected = np.reshape([[5.0999999, 7.0999999, 9.10000038] * 3], (3, 3))
# Verifies copy to the same graph with the same name fails.
with graph1.as_default():
with self.assertRaisesWithPredicateMatch(
ValueError, lambda e: "need to be different" in str(e)):
meta_graph.copy_scoped_meta_graph(
from_scope="hidden1", to_scope="hidden1")
# Verifies copy to the same graph.
with graph1.as_default():
var_list_2 = meta_graph.copy_scoped_meta_graph(
from_scope="hidden1", to_scope="hidden2")
with graph1.as_default(), self.session(graph=graph1) as sess:
saver1 = saver_module.Saver(var_list=var_list_1, max_to_keep=1)
saver1.restore(sess, saver0_ckpt)
saver2 = saver_module.Saver(var_list=var_list_2, max_to_keep=1)
saver2.restore(sess, saver0_ckpt)
self.assertAllClose(expected, sess.run("hidden1/relu:0"))
self.assertAllClose(expected, sess.run("hidden2/relu:0"))
# Verifies copy to different graph.
graph2 = ops_lib.Graph()
with graph2.as_default():
new_var_list_1 = meta_graph.copy_scoped_meta_graph(
from_scope="hidden1",
to_scope="new_hidden1",
from_graph=graph1,
to_graph=graph2)
with self.session() as sess:
saver3 = saver_module.Saver(var_list=new_var_list_1, max_to_keep=1)
saver3.restore(sess, saver0_ckpt)
self.assertAllClose(expected, sess.run("new_hidden1/relu:0"))
def testExportGraphDefWithScope(self):
test_dir = self._get_test_dir("export_graph_def")
saver0_ckpt = os.path.join(test_dir, "saver0.ckpt")
graph1 = ops_lib.Graph()
with graph1.as_default():
with ops_lib.name_scope("hidden1"):
images = constant_op.constant(
1.0, dtypes.float32, shape=[3, 2], name="images")
weights1 = variables.VariableV1(
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], name="weights")
biases1 = variables.VariableV1([0.1] * 3, name="biases")
nn_ops.relu(math_ops.matmul(images, weights1) + biases1, name="relu")
# Run the graph and save scoped checkpoint.
with self.session(graph=graph1) as sess:
self.evaluate(variables.global_variables_initializer())
_, var_list_1 = meta_graph.export_scoped_meta_graph(
graph_def=graph1.as_graph_def(), export_scope="hidden1")
saver = saver_module.Saver(var_list=var_list_1, max_to_keep=1)
saver.save(sess, saver0_ckpt, write_state=False)
expected = np.reshape([[5.0999999, 7.0999999, 9.10000038] * 3], (3, 3))
# Verifies that we can run successfully after restoring.
graph2 = ops_lib.Graph()
with graph2.as_default():
new_var_list_1 = meta_graph.copy_scoped_meta_graph(
from_scope="hidden1",
to_scope="new_hidden1",
from_graph=graph1,
to_graph=graph2)
with self.session(graph=graph2) as sess:
saver3 = saver_module.Saver(var_list=new_var_list_1, max_to_keep=1)
saver3.restore(sess, saver0_ckpt)
self.assertAllClose(expected, sess.run("new_hidden1/relu:0"))
def testSerializeSaverWithScope(self):
test_dir = self._get_test_dir("export_graph_def")
saver1_ckpt = os.path.join(test_dir, "saver1.ckpt")
saver2_ckpt = os.path.join(test_dir, "saver2.ckpt")
graph = ops_lib.Graph()
with graph.as_default():
with ops_lib.name_scope("hidden1"):
variable1 = variables.VariableV1([1.0], name="variable1")
saver1 = saver_module.Saver(var_list=[variable1])
graph.add_to_collection(ops_lib.GraphKeys.SAVERS, saver1)
with ops_lib.name_scope("hidden2"):
variable2 = variables.VariableV1([2.0], name="variable2")
saver2 = saver_module.Saver(var_list=[variable2], name="hidden2/")
graph.add_to_collection(ops_lib.GraphKeys.SAVERS, saver2)
with self.session(graph=graph) as sess:
self.evaluate(variables.global_variables_initializer())
saver1.save(sess, saver1_ckpt, write_state=False)
saver2.save(sess, saver2_ckpt, write_state=False)
graph1 = ops_lib.Graph()
with graph1.as_default():
var_dict1 = meta_graph.copy_scoped_meta_graph(
from_scope="hidden1",
to_scope="new_hidden1",
from_graph=graph,
to_graph=graph1)
self.assertEqual(1, len(var_dict1))
saver_list1 = graph1.get_collection(ops_lib.GraphKeys.SAVERS)
self.assertEqual(1, len(saver_list1))
with self.session(graph=graph1) as sess:
saver_list1[0].restore(sess, saver1_ckpt)
self.assertEqual(1.0, self.evaluate(var_dict1["variable1:0"]))
graph2 = ops_lib.Graph()
with graph2.as_default():
var_dict2 = meta_graph.copy_scoped_meta_graph(
from_scope="hidden2",
to_scope="new_hidden2",
from_graph=graph,
to_graph=graph2)
self.assertEqual(1, len(var_dict2))
saver_list2 = graph2.get_collection(ops_lib.GraphKeys.SAVERS)
self.assertEqual(1, len(saver_list2))
with self.session(graph=graph2) as sess:
saver_list2[0].restore(sess, saver2_ckpt)
self.assertEqual(2.0, self.evaluate(var_dict2["variable2:0"]))
class _OwnsAVariableSimple(trackable_base.Trackable):
"""A Trackable object which can be saved using a tf.train.Saver."""
def __init__(self):
self.non_dep_variable = variable_scope.get_variable(
name="non_dep_variable", initializer=6., use_resource=True)
def _gather_saveables_for_checkpoint(self):
return {trackable_base.VARIABLE_VALUE_KEY: self.non_dep_variable}
# The Saver sorts by name before parsing, so we need a name property.
@property
def name(self):
return self.non_dep_variable.name
class _MirroringSaveable(
saver_module.BaseSaverBuilder.ResourceVariableSaveable):
def __init__(self, primary_variable, mirrored_variable, name):
self._primary_variable = primary_variable
self._mirrored_variable = mirrored_variable
super(_MirroringSaveable, self).__init__(
self._primary_variable, "", name)
def restore(self, restored_tensors, restored_shapes):
"""Restore the same value into both variables."""
tensor, = restored_tensors
return control_flow_ops.group(
self._primary_variable.assign(tensor),
self._mirrored_variable.assign(tensor))
class _OwnsMirroredVariables(trackable_base.Trackable):
"""A Trackable object which returns a more complex SaveableObject."""
def __init__(self):
self.non_dep_variable = variable_scope.get_variable(
name="non_dep_variable", initializer=6., use_resource=True)
self.mirrored = variable_scope.get_variable(
name="mirrored", initializer=15., use_resource=True)
def _gather_saveables_for_checkpoint(self):
def _saveable_factory(name=self.non_dep_variable.name):
return _MirroringSaveable(
primary_variable=self.non_dep_variable,
mirrored_variable=self.mirrored,
name=name)
return {trackable_base.VARIABLE_VALUE_KEY: _saveable_factory}
# The Saver sorts by name before parsing, so we need a name property.
@property
def name(self):
return self.non_dep_variable.name
class TrackableCompatibilityTests(test.TestCase):
# TODO(allenl): Track down python3 reference cycles in these tests.
@test_util.run_in_graph_and_eager_modes
def testNotSaveableButIsTrackable(self):
v = _OwnsAVariableSimple()
test_dir = self.get_temp_dir()
prefix = os.path.join(test_dir, "ckpt")
for saver in (saver_module.Saver(var_list=[v]),
saver_module.Saver(var_list={"v": v})):
with self.cached_session() as sess:
self.evaluate(v.non_dep_variable.assign(42.))
save_path = saver.save(sess, prefix)
self.evaluate(v.non_dep_variable.assign(43.))
saver.restore(sess, save_path)
self.assertEqual(42., self.evaluate(v.non_dep_variable))
@test_util.run_in_graph_and_eager_modes
def testMoreComplexSaveableReturned(self):
v = _OwnsMirroredVariables()
test_dir = self.get_temp_dir()
prefix = os.path.join(test_dir, "ckpt")
self.evaluate(v.non_dep_variable.assign(42.))
for saver in (saver_module.Saver(var_list=[v]),
saver_module.Saver(var_list={"v": v})):
with self.cached_session() as sess:
save_path = saver.save(sess, prefix)
self.evaluate(v.non_dep_variable.assign(43.))
self.evaluate(v.mirrored.assign(44.))
saver.restore(sess, save_path)
self.assertEqual(42., self.evaluate(v.non_dep_variable))
self.assertEqual(42., self.evaluate(v.mirrored))
def testSingleTensorEvaluation(self):
class _CountingSaveable(saver_module.BaseSaverBuilder.SaveableObject):
def __init__(self, name):
self.eval_count = 0
def _tensor():
self.eval_count += 1
return constant_op.constant([1.])
dummy_op = constant_op.constant([2.])
super(_CountingSaveable, self).__init__(
dummy_op,
[saver_module.BaseSaverBuilder.SaveSpec(
_tensor, "", name, dtype=dummy_op.dtype,
device=dummy_op.device)],
name)
def restore(self, restored_tensors, restored_shapes):
"""Restore the same value into both variables."""
pass
with context.eager_mode():
v = _CountingSaveable("foo")
saver = saver_module.Saver(var_list=[v])
test_dir = self.get_temp_dir()
prefix = os.path.join(test_dir, "ckpt")
with self.cached_session() as sess:
save_path = saver.save(sess, prefix)
self.assertEqual(1, v.eval_count)
saver.restore(sess, save_path)
self.assertEqual(1, v.eval_count)
def testVariableNotFoundErrorRaised(self):
# Restore does some tricky exception handling to figure out if it should
# load an object-based checkpoint. Tests that the exception handling isn't
# too broad.
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
a = resource_variable_ops.ResourceVariable(1., name="a")
b = resource_variable_ops.ResourceVariable(1., name="b")
a_saver = saver_module.Saver([a])
b_saver = saver_module.Saver([b])
with self.cached_session() as sess:
self.evaluate(a.initializer)
save_path = a_saver.save(sess=sess, save_path=checkpoint_prefix)
with self.assertRaisesRegex(errors.NotFoundError,
"Key b not found in checkpoint"):
b_saver.restore(sess=sess, save_path=save_path)
with self.assertRaises(errors.NotFoundError) as cs:
b_saver.restore(sess=sess, save_path=save_path)
# Make sure we don't have a confusing "During handling of the above
# exception" block in Python 3.
self.assertNotIn("NewCheckpointReader", cs.exception.message)
@test_util.run_v1_only("train.Saver is V1 only API.")
def testGraphChangedForRestoreErrorRaised(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
with ops_lib.Graph().as_default() as g:
a = variables.VariableV1(1., name="a")
a_saver = saver_module.Saver([a])
with self.session(graph=g) as sess:
self.evaluate(a.initializer)
save_path = a_saver.save(sess=sess, save_path=checkpoint_prefix)
with ops_lib.Graph().as_default() as g:
a = variables.VariableV1([1.], name="a")
a_saver = saver_module.Saver([a])
with self.session(graph=g) as sess:
with self.assertRaisesRegex(
errors.InvalidArgumentError,
"a mismatch between the current graph and the graph"):
a_saver.restore(sess=sess, save_path=save_path)
if __name__ == "__main__":
test.main()
from __future__ import print_function, division
import matplotlib
import logging
from sys import stdout
matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
from neuralnilm import (Net, RealApplianceSource)
from neuralnilm.source import (standardise, discretize, fdiff, power_and_fdiff,
RandomSegments, RandomSegmentsInMemory,
SameLocation, MultiSource)
from neuralnilm.experiment import run_experiment, init_experiment
from neuralnilm.net import TrainingError
from neuralnilm.layers import (MixtureDensityLayer, DeConv1DLayer,
SharedWeightsDenseLayer, BLSTMLayer)
from neuralnilm.objectives import (scaled_cost, mdn_nll,
scaled_cost_ignore_inactive, ignore_inactive,
scaled_cost3)
from neuralnilm.plot import MDNPlotter, CentralOutputPlotter, Plotter, RectangularOutputPlotter, StartEndMeanPlotter
from neuralnilm.updates import clipped_nesterov_momentum
from neuralnilm.disaggregate import disaggregate
from neuralnilm.rectangulariser import rectangularise
from lasagne.nonlinearities import sigmoid, rectify, tanh, identity, softmax
from lasagne.objectives import squared_error, binary_crossentropy
from lasagne.init import Uniform, Normal
from lasagne.layers import (DenseLayer, Conv1DLayer,
ReshapeLayer, FeaturePoolLayer,
DimshuffleLayer, DropoutLayer, ConcatLayer, PadLayer)
from lasagne.updates import nesterov_momentum, momentum
from functools import partial
import os
import __main__
from copy import deepcopy
from math import sqrt
import numpy as np
import theano.tensor as T
import gc
"""
Max powers:
microwave = 3000W
"""
NAME = os.path.splitext(os.path.split(__main__.__file__)[1])[0]
#PATH = "/homes/dk3810/workspace/python/neuralnilm/figures"
PATH = "/data/dk3810/figures"
# PATH = "/home/jack/experiments/neuralnilm/figures"
SAVE_PLOT_INTERVAL = 1000
UKDALE_FILENAME = '/data/dk3810/ukdale.h5'
MAX_TARGET_POWER = 3000
ON_POWER_THRESHOLD = 200
MIN_ON_DURATION = 18
MIN_OFF_DURATION = 30
TARGET_APPLIANCE = 'microwave'
SEQ_LENGTH = 256
N_SEQ_PER_BATCH = 64
TRAIN_BUILDINGS = [1] #, 2]
VALIDATION_BUILDINGS = [1] # 5
SKIP_PROBABILITY_FOR_TARGET = 0.5
INDEPENDENTLY_CENTER_INPUTS = False
WINDOW_PER_BUILDING = {
1: ("2013-03-17", "2014-12-01"),
2: ("2013-05-22", "2013-10-01"),
3: ("2013-02-27", "2013-04-01"),
4: ("2013-03-09", "2013-09-20"),
5: ("2014-06-29", "2014-08-27")
}
INPUT_STATS = {
'mean': np.array([297.87216187], dtype=np.float32),
'std': np.array([374.43884277], dtype=np.float32)
}
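# Illustrative note (not part of the original script): with
# standardise_input=True and the INPUT_STATS above, the mains signal is
# presumably z-scored before being fed to the network. A hypothetical helper
# showing the arithmetic:
def _standardise_mains_example(power_watts):
    """Return (x - mean) / std using the hard-coded UK-DALE input statistics."""
    return (power_watts - INPUT_STATS['mean']) / INPUT_STATS['std']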
def only_train_on_real_data(net, iteration):
net.logger.info(
"Iteration {}: Now only training on real data.".format(iteration))
net.source.sources[0]['train_probability'] = 0.0
net.source.sources[1]['train_probability'] = 1.0
net_dict = dict(
save_plot_interval=SAVE_PLOT_INTERVAL,
loss_function=lambda x, t: squared_error(x, t).mean(),
updates_func=nesterov_momentum,
learning_rate=1e-1,
learning_rate_changes_by_iteration={
1000: 1e-2,
10000: 1e-3
},
epoch_callbacks={
350000: only_train_on_real_data
},
do_save_activations=True,
auto_reshape=False,
layers_config=[
{
'type': DimshuffleLayer,
'pattern': (0, 2, 1) # (batch, features, time)
},
{
'type': Conv1DLayer, # convolve over the time axis
'num_filters': 16,
'filter_size': 4,
'stride': 1,
'nonlinearity': None,
'border_mode': 'same'
},
{
'type': DimshuffleLayer,
'pattern': (0, 2, 1), # back to (batch, time, features)
'label': 'dimshuffle3'
},
{
'type': BLSTMLayer,
'num_units': 128,
'merge_mode': 'concatenate'
},
{
'type': BLSTMLayer,
'num_units': 128,
'merge_mode': 'concatenate'
},
{
'type': ReshapeLayer,
'shape': (16384, 256)
},
{
'type': DenseLayer,
'num_units': 128,
'nonlinearity': tanh
},
{
'type': DenseLayer,
'num_units': 1,
'nonlinearity': None
}
]
)
def exp_a(name):
logger = logging.getLogger(name)
global multi_source
real_appliance_source1 = RealApplianceSource(
logger=logger,
filename=UKDALE_FILENAME,
appliances=[
TARGET_APPLIANCE,
['fridge freezer', 'fridge', 'freezer']
],
max_appliance_powers=[MAX_TARGET_POWER, 300, 2500, 2600, 2400],
on_power_thresholds=[ON_POWER_THRESHOLD] + [10] * 4,
min_on_durations=[MIN_ON_DURATION, 60, 1800, 12, 1800],
min_off_durations=[MIN_OFF_DURATION, 12, 1800, 12, 600],
divide_input_by_max_input_power=False,
window_per_building=WINDOW_PER_BUILDING,
seq_length=SEQ_LENGTH,
output_one_appliance=True,
train_buildings=TRAIN_BUILDINGS,
validation_buildings=VALIDATION_BUILDINGS,
n_seq_per_batch=N_SEQ_PER_BATCH,
skip_probability=0.75,
skip_probability_for_first_appliance=SKIP_PROBABILITY_FOR_TARGET,
# target_is_start_and_end_and_mean=True,
standardise_input=True,
input_stats=INPUT_STATS,
independently_center_inputs=INDEPENDENTLY_CENTER_INPUTS
)
real_appliance_source2 = RealApplianceSource(
logger=logger,
filename=UKDALE_FILENAME,
appliances=[
TARGET_APPLIANCE,
['fridge freezer', 'fridge', 'freezer'],
'dish washer',
'kettle',
['washer dryer', 'washing machine']
],
max_appliance_powers=[MAX_TARGET_POWER, 300, 2500, 2600, 2400],
on_power_thresholds=[ON_POWER_THRESHOLD] + [10] * 4,
min_on_durations=[MIN_ON_DURATION, 60, 1800, 12, 1800],
min_off_durations=[MIN_OFF_DURATION, 12, 1800, 12, 600],
divide_input_by_max_input_power=False,
window_per_building=WINDOW_PER_BUILDING,
seq_length=SEQ_LENGTH,
output_one_appliance=True,
train_buildings=TRAIN_BUILDINGS,
validation_buildings=VALIDATION_BUILDINGS,
n_seq_per_batch=N_SEQ_PER_BATCH,
skip_probability=0.75,
skip_probability_for_first_appliance=SKIP_PROBABILITY_FOR_TARGET,
# target_is_start_and_end_and_mean=True,
standardise_input=True,
input_stats=INPUT_STATS,
independently_center_inputs=INDEPENDENTLY_CENTER_INPUTS
)
multi_source = MultiSource(
sources=[
{
'source': real_appliance_source1,
'train_probability': 1.0,
'validation_probability': 0
},
{
'source': real_appliance_source2,
'train_probability': 0.0,
'validation_probability': 1
}
],
standardisation_source=real_appliance_source2
)
net_dict_copy = deepcopy(net_dict)
net_dict_copy.update(dict(
experiment_name=name,
source=multi_source,
plotter=Plotter(
n_seq_to_plot=32,
n_training_examples_to_plot=16)
))
net = Net(**net_dict_copy)
return net
def main():
EXPERIMENTS = list('a')
for experiment in EXPERIMENTS:
full_exp_name = NAME + experiment
func_call = init_experiment(PATH, experiment, full_exp_name)
logger = logging.getLogger(full_exp_name)
try:
net = eval(func_call)
run_experiment(net, epochs=None)
except KeyboardInterrupt:
logger.info("KeyboardInterrupt")
break
except Exception:
logger.exception("Exception")
# raise
finally:
logging.shutdown()
if __name__ == "__main__":
main()
"""
Emacs variables
Local Variables:
compile-command: "cp /home/jack/workspace/python/neuralnilm/scripts/e560.py /mnt/sshfs/imperial/workspace/python/neuralnilm/scripts/"
End:
"""
| {
"content_hash": "8f2df40d35b33dfbc09a90a377a1f5d1",
"timestamp": "",
"source": "github",
"line_count": 261,
"max_line_length": 133,
"avg_line_length": 31.697318007662837,
"alnum_prop": 0.6061888069624078,
"repo_name": "mmottahedi/neuralnilm_prototype",
"id": "54c1598483dcb361e1aaa27add727d63c181a1f5",
"size": "8273",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "scripts/e560.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "4536723"
}
],
"symlink_target": ""
} |
from django.http import HttpResponseRedirect
from django.views.generic import ListView
from ...views import BackendModelViewMixin
class ActionFormMixin(BackendModelViewMixin):
def pre_dispatch(self, request, *args, **kwargs):
self.action_form = self.get_action_form()
return super(ActionFormMixin, self).pre_dispatch(request, *args,
**kwargs)
def post(self, request, *args, **kwargs):
if self.action_form is None:
return self.http_method_not_allowed(request, *args, **kwargs)
if self.action_form.is_valid():
self.action_form.perform_action(self.backend, request)
return HttpResponseRedirect(request.path)
return self.get(request, *args, **kwargs)
def get_action_form_class(self):
return self.backend.get_action_form_class()
def get_action_form(self):
actions = self.backend.get_available_list_actions(self.request.user)
if actions:
kwargs = {
'actions': actions,
'queryset': self.get_queryset(),
}
action_form_class = self.get_action_form_class()
if self.request.method == 'POST':
return action_form_class(self.request.POST, **kwargs)
else:
return action_form_class(**kwargs)
def get_context_data(self, **kwargs):
kwargs['action_form'] = self.action_form
return super(ActionFormMixin, self).get_context_data(**kwargs)
class BackendListView(ActionFormMixin, BackendModelViewMixin, ListView):
template_type = 'list'
def get_required_permissions(self):
return super(BackendListView, self).get_required_permissions() + ['list']
def init_dispatch(self, request, *args, **kwargs):
self.filter_form = self.get_filter_form()
self.sort_form = self.get_sort_form()
super(BackendListView, self).init_dispatch(request, *args, **kwargs)
def get_paginate_by(self, queryset):
return self.backend.paginate_by
def get_filter_form_class(self):
return self.backend.get_filter_form_class()
def get_filter_form(self):
filter_form_class = self.get_filter_form_class()
if filter_form_class is None:
return
return filter_form_class(self.request.GET)
def get_sort_form_class(self):
return self.backend.get_sort_form_class()
def get_sort_form(self, list_columns=None):
sort_form_class = self.get_sort_form_class()
if sort_form_class is None:
return
# TODO: Support self.backend.order_by tuple?
sort_form_data = {
'order_by': self.backend.order_by,
}
if 'order_by' in self.request.GET:
sort_form_data['order_by'] = self.request.GET['order_by']
if list_columns is None:
list_columns = self.backend.list_columns
return sort_form_class(
sort_form_data,
list_columns=list_columns)
def get_context_data(self, **kwargs):
kwargs['filter_form'] = self.filter_form
kwargs['sort_form'] = self.sort_form
return super(BackendListView, self).get_context_data(**kwargs)
def get_queryset(self):
queryset = super(BackendListView, self).get_queryset()
# Filter the queryset.
if self.filter_form and self.filter_form.is_valid():
queryset = self.filter_form.filter_queryset(queryset)
# Apply backend order_by
if self.backend.order_by:
order_by = self.backend.order_by
if not isinstance(order_by, (list, tuple)):
order_by = (order_by,)
queryset = queryset.order_by(*order_by)
# Sort the queryset.
if self.sort_form and self.sort_form.is_valid():
queryset = self.sort_form.sort_queryset(queryset)
return queryset
| {
"content_hash": "0e916510ac0a35122f8cba2f14ad2c8f",
"timestamp": "",
"source": "github",
"line_count": 107,
"max_line_length": 81,
"avg_line_length": 36.78504672897196,
"alnum_prop": 0.6133130081300813,
"repo_name": "team23/django_backend",
"id": "0a3c4e0bbff551f869e80a7440004fb6b7bf2fdf",
"size": "3936",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "django_backend/backend/base/views/list.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "70141"
},
{
"name": "Groff",
"bytes": "160"
},
{
"name": "HTML",
"bytes": "220610"
},
{
"name": "JavaScript",
"bytes": "707306"
},
{
"name": "Python",
"bytes": "168182"
},
{
"name": "Ruby",
"bytes": "1515"
},
{
"name": "Shell",
"bytes": "53"
}
],
"symlink_target": ""
} |
"""
BRMFlask instance instantiation.
Load and configure brmflask flask application.
"""
from os import environ
from importlib import import_module
from flask import Flask
from brmflask.utils.routing import base_path
from brmflask.utils.dotenv import dotenv
def create_app(
app_name=__name__,
config_override=None,
env_file='.brm_env',
load_environ=True,
template_folder='app/templates'
):
"""
Create Flask app using factory.
:param app_name: Name of current application
:return: Flask Class
"""
this_app = Flask(
app_name,
instance_relative_config=True,
template_folder=base_path(template_folder)
)
configure_app(
this_app,
config_override=config_override,
load_environ=load_environ,
env_file=env_file
)
register_blueprints(this_app, this_app.config['BRMFLASK_BLUEPRINTS'])
register_extensions(this_app, this_app.config['BRMFLASK_EXTENSIONS'])
return this_app
def register_blueprints(app, blueprints):
"""
Apply extensions to Flask app.
Each extension specified in the extensions list will be called.
In addition, custom extensions may be put in app/__init__.py.
Current list options are:
1. static
2. sitemap
3. dynamic
Psuedo Code:
1. For each "blueprint" in the list of blueprints
2. Import the module found at the path brmflask.blueprints."blueprint"
3. Get the package of the above imported module whose name is "blueprint"
4. Register the package as a Blueprint object for the given app.
:param: app Flask app
:param: extensions list of extensions to add
:return: None (or is it better to return the app??)
"""
for blueprint in blueprints:
app.register_blueprint(
getattr(
import_module("brmflask.blueprints.{0}".format(blueprint)),
blueprint
)
)
return None
def register_extensions(app, extensions):
"""
Apply extensions to Flask app.
Each extension specified in the extensions list will be called.
In addition, custom extensions may be put in app/__init__.py.
Current options are:
1. markdown
2. compress
3. cache
4. Debug Toolbar
:param: app Flask app
:param: extensions list of extensions to add
:return: None (or is it better to return the app??)
"""
for extension in extensions:
register = "register_{0}".format(extension)
locals()[register] = getattr(
import_module("brmflask.exts.{}".format(extension)),
"register_{}".format(extension)
)(app)
return None
def configure_app(
app,
config_override=None,
load_environ=True,
env_file='.brm_env'
):
"""
Configure application settings.
:param app: Flask application instance
:return: Flask app.config object.
"""
# Add the environ dict to the config
if load_environ:
app.config.update(environ)
# load environment from file if it exists
app.config.update(load_dotenv(env_file))
# Override any values at runtime if necessary
if config_override:
app.config.update(config_override)
# Try to load the class settings specified in the
# Environ key BRMFLASK_CONFIG
    # These values should be sensitive info, not settings
# And the values will be applied circularly to the settings.
app.config.from_object(
app.config.get(
'BRMFLASK_CONFIG',
'brmflask.settings.BaseConfig'
)
)
return app.config
def load_dotenv(env_file):
"""
Load the app environment.
First try to load the environment from env file
If no env file is loaded, then set the env to the
system environment. This env file will be used to
load the correct settings class. We are looking for
the BRMFLASK_CONFIG key which should have the value
of the class object to load into the config.
:param env_file: <string> path/to/file
:return: <dict>
"""
return dotenv(env_file)
| {
"content_hash": "8d7d08b44d2e5a615dbfa134b1bb8be6",
"timestamp": "",
"source": "github",
"line_count": 150,
"max_line_length": 77,
"avg_line_length": 27.173333333333332,
"alnum_prop": 0.6570166830225711,
"repo_name": "BRMWebDev/BRMFlask",
"id": "bb1671808af8a698e98ee86da2ec384b3ee7ee45",
"size": "4076",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "brmflask/app.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "32909"
}
],
"symlink_target": ""
} |
from rest_framework import routers
from .api import (
AttemptLogViewSet, ContentSessionLogViewSet, ContentSummaryLogViewSet, ExamAttemptLogViewSet, ExamLogViewSet, MasteryLogViewSet,
TotalContentProgressViewSet, UserSessionLogViewSet
)
from .csv import ContentSessionLogCSVExportViewSet, ContentSummaryLogCSVExportViewSet
router = routers.SimpleRouter()
router.register(r'contentsessionlog', ContentSessionLogViewSet, base_name='contentsessionlog')
router.register(r'contentsummarylog', ContentSummaryLogViewSet, base_name='contentsummarylog')
router.register(r'usersessionlog', UserSessionLogViewSet, base_name='usersessionlog')
router.register(r'masterylog', MasteryLogViewSet, base_name='masterylog')
router.register(r'attemptlog', AttemptLogViewSet, base_name='attemptlog')
router.register(r'examlog', ExamLogViewSet, base_name='examlog')
router.register(r'examattemptlog', ExamAttemptLogViewSet, base_name='examattemptlog')
router.register(r'userprogress', TotalContentProgressViewSet, base_name='userprogress')
router.register(r'contentsummarylogcsv', ContentSummaryLogCSVExportViewSet, base_name='contentsummarylogcsv')
router.register(r'contentsessionlogcsv', ContentSessionLogCSVExportViewSet, base_name='contentsessionlogcsv')
urlpatterns = router.urls
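# Illustrative sketch only (not part of the original module): these router
# URLs are typically mounted from a project-level urls.py, for example:
#
#     from django.conf.urls import include, url
#     urlpatterns = [
#         url(r'^api/logger/', include('kolibri.logger.api_urls')),
#     ]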
| {
"content_hash": "72dfd892297c073ee2f7cd50e86b976d",
"timestamp": "",
"source": "github",
"line_count": 23,
"max_line_length": 132,
"avg_line_length": 55.47826086956522,
"alnum_prop": 0.8432601880877743,
"repo_name": "rtibbles/kolibri",
"id": "a9b409c03d1e0043ec7c7ffc88420371ca505f23",
"size": "1276",
"binary": false,
"copies": "5",
"ref": "refs/heads/develop",
"path": "kolibri/logger/api_urls.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "27623"
},
{
"name": "HTML",
"bytes": "4406"
},
{
"name": "JavaScript",
"bytes": "510659"
},
{
"name": "Makefile",
"bytes": "3914"
},
{
"name": "Python",
"bytes": "664765"
},
{
"name": "Shell",
"bytes": "10337"
},
{
"name": "Vue",
"bytes": "481473"
}
],
"symlink_target": ""
} |
from __future__ import absolute_import, unicode_literals
from bs4 import BeautifulSoup
from django.test import TestCase
from wagtail.wagtaildocs.rich_text import DocumentLinkHandler
class TestDocumentRichTextLinkHandler(TestCase):
fixtures = ['test.json']
def test_get_db_attributes(self):
soup = BeautifulSoup('<a data-id="test-id">foo</a>', 'html5lib')
tag = soup.a
result = DocumentLinkHandler.get_db_attributes(tag)
self.assertEqual(result,
{'id': 'test-id'})
def test_expand_db_attributes_document_does_not_exist(self):
result = DocumentLinkHandler.expand_db_attributes(
{'id': 0},
False
)
self.assertEqual(result, '<a>')
def test_expand_db_attributes_for_editor(self):
result = DocumentLinkHandler.expand_db_attributes(
{'id': 1},
True
)
self.assertEqual(result,
'<a data-linktype="document" data-id="1" href="/documents/1/test.pdf">')
def test_expand_db_attributes_not_for_editor(self):
result = DocumentLinkHandler.expand_db_attributes(
{'id': 1},
False
)
self.assertEqual(result,
'<a href="/documents/1/test.pdf">')
| {
"content_hash": "08b106f62395834ae4799b0c5afd8f64",
"timestamp": "",
"source": "github",
"line_count": 41,
"max_line_length": 97,
"avg_line_length": 31.829268292682926,
"alnum_prop": 0.5954022988505747,
"repo_name": "iansprice/wagtail",
"id": "520dc8ba0fffa120d21fb68a962d7990eef01153",
"size": "1305",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "wagtail/wagtaildocs/tests/test_rich_text.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "166081"
},
{
"name": "HTML",
"bytes": "325248"
},
{
"name": "JavaScript",
"bytes": "177341"
},
{
"name": "Makefile",
"bytes": "720"
},
{
"name": "Python",
"bytes": "3102671"
},
{
"name": "Shell",
"bytes": "7871"
}
],
"symlink_target": ""
} |
from selenium.webdriver.firefox.webdriver import WebDriver
from selenium.webdriver.common.action_chains import ActionChains
import time, unittest
def is_alert_present(wd):
try:
wd.switch_to_alert().text
return True
except:
return False
class test_add_fgroup(unittest.TestCase):
def setUp(self):
self.wd = WebDriver()
self.wd.implicitly_wait(60)
def test_test_add_fgroup(self):
success = True
wd = self.wd
wd.get("http://localhost/addressbook/")
wd.find_element_by_id("LoginForm").click()
wd.find_element_by_name("user").click()
wd.find_element_by_name("user").clear()
wd.find_element_by_name("user").send_keys("admin")
wd.find_element_by_name("pass").click()
wd.find_element_by_name("pass").clear()
wd.find_element_by_name("pass").send_keys("secret")
wd.find_element_by_css_selector("input[type=\"submit\"]").click()
wd.find_element_by_link_text("groups").click()
wd.find_element_by_name("new").click()
wd.find_element_by_name("group_name").click()
wd.find_element_by_name("group_name").clear()
wd.find_element_by_name("group_name").send_keys("students_group")
wd.find_element_by_name("group_header").click()
wd.find_element_by_name("group_header").click()
wd.find_element_by_name("group_header").clear()
wd.find_element_by_name("group_header").send_keys("logo")
wd.find_element_by_name("group_footer").click()
wd.find_element_by_name("group_footer").clear()
wd.find_element_by_name("group_footer").send_keys("comment")
wd.find_element_by_name("submit").click()
wd.find_element_by_link_text("group page").click()
wd.find_element_by_link_text("Logout").click()
self.assertTrue(success)
def tearDown(self):
self.wd.quit()
if __name__ == '__main__':
unittest.main()
| {
"content_hash": "f9a3d934e25baedf2652e33dae93f112",
"timestamp": "",
"source": "github",
"line_count": 50,
"max_line_length": 73,
"avg_line_length": 39.28,
"alnum_prop": 0.6206720977596741,
"repo_name": "Manolaru/Python_train",
"id": "e04744643e2214e4991999905716675f0b7de3dd",
"size": "1988",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "Les_1/Task_1/test_add_fgroup.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "290326"
}
],
"symlink_target": ""
} |
import time
import logging
import sys
import threading
import argparse
from ConfigParser import SafeConfigParser
from morfeu.tsuru.app import TsuruApp
from morfeu.tsuru.client import TsuruClient
from morfeu.settings import TSURU_APP_PROXY, SLEEP_TIME, DOMAIN, SKIP_APPS
logging.basicConfig(format='%(asctime)s %(levelname)s %(module)s %(message)s',
level=logging.DEBUG,
handlers=[logging.StreamHandler(sys.stdout)])
logging.getLogger("requests").setLevel(logging.WARNING)
LOG = logging.getLogger(__name__)
def check_sleep(app_name, dry, apps_to_sleep):
tsuru_app = TsuruApp(name=app_name, dry=dry)
if tsuru_app.should_go_to_bed():
apps_to_sleep.append(tsuru_app)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Morfeu.. put apps to sleep')
parser.add_argument('--dry', action='store_true',
help='Just pretending...')
parser.add_argument('--daemon', action='store_true',
help='Should be daemonized?')
args = parser.parse_args()
parser = SafeConfigParser()
dry = args.dry
daemon = args.daemon
tsuru_client = TsuruClient()
while True:
try:
LOG.info("Running morfeu...")
apps_to_sleep = []
apps = tsuru_client.list_apps(process_name="web", domain=DOMAIN)
never_sleep = [TSURU_APP_PROXY] + SKIP_APPS
apps = [app for app in apps if set(app.keys()).intersection(never_sleep) == set([])]
threads = []
LOG.info("Checking {} apps".format(len(apps)))
for app in apps:
app_name = app.keys()[0]
thread = threading.Thread(target=check_sleep, args=(app_name, dry, apps_to_sleep))
threads.append(thread)
thread.start()
[x.join() for x in threads]
LOG.info("{0} apps to sleep: {1}".format(len(apps_to_sleep), [app.name for app in apps_to_sleep]))
for app in apps_to_sleep:
app.sleep()
if not daemon:
break
LOG.info("sleeping for {0} seconds ...".format(SLEEP_TIME))
time.sleep(SLEEP_TIME)
except KeyboardInterrupt:
sys.exit(0)
except Exception, e:
LOG.exception("ops... {0}".format(e))
| {
"content_hash": "a81b1440acbd5d5df505abe91a6c00cd",
"timestamp": "",
"source": "github",
"line_count": 80,
"max_line_length": 110,
"avg_line_length": 29.6375,
"alnum_prop": 0.5854070012652889,
"repo_name": "tsuru/morfeu",
"id": "826966d8f34285242e93a81a98df9ecee8626170",
"size": "2371",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "main.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Makefile",
"bytes": "227"
},
{
"name": "Python",
"bytes": "23329"
}
],
"symlink_target": ""
} |
"""
Storage service catalog utility functions and classes for NetApp systems.
"""
import copy
import threading
from oslo_log import log as logging
from oslo_utils import timeutils
import six
from cinder import exception
from cinder.i18n import _, _LI, _LW
from cinder import utils
from cinder.volume.drivers.netapp.dataontap.client import api as netapp_api
from cinder.volume.drivers.netapp import utils as na_utils
LOG = logging.getLogger(__name__)
class NetAppVolume(object):
"""Represents a NetApp volume.
Present attributes
id - name, vserver, junction_path, type
aggr - name, raid_type, ha_policy, disk_type
sis - dedup, compression
state - status, vserver_root, cluster_volume,
inconsistent, invalid, junction_active
qos - qos_policy_group
space - space-guarantee-enabled, space-guarantee,
thin_provisioned, size_avl_bytes, size_total_bytes
mirror - mirrored i.e. dp mirror
export - path
"""
def __init__(self, name, vserver=None):
self.id = {}
self.aggr = {}
self.sis = {}
self.state = {}
self.qos = {}
self.space = {}
self.mirror = {}
self.export = {}
self.id['name'] = name
self.id['vserver'] = vserver
def __eq__(self, other):
"""Checks for equality."""
if (self.id['name'] == other.id['name'] and
self.id['vserver'] == other.id['vserver']):
return True
def __hash__(self):
"""Computes hash for the object."""
return hash(self.id['name'])
def __cmp__(self, other):
"""Implements comparison logic for volumes."""
self_size_avl = self.space.get('size_avl_bytes')
other_size_avl = other.space.get('size_avl_bytes')
if self_size_avl is None and other_size_avl is not None:
return -1
elif self_size_avl is not None and other_size_avl is None:
return 1
elif self_size_avl is None and other_size_avl is None:
return 0
elif int(self_size_avl) < int(other_size_avl):
return -1
elif int(self_size_avl) > int(other_size_avl):
return 1
else:
return 0
def __str__(self):
"""Returns human readable form for object."""
vol_str = "NetApp Volume id: %s, aggr: %s,"\
" space: %s, sis: %s, state: %s, qos: %s"\
% (self.id, self.aggr, self.space, self.sis, self.state, self.qos)
return vol_str
@utils.trace_method
def get_cluster_vols_with_ssc(na_server, vserver, volume=None):
"""Gets ssc vols for cluster vserver."""
volumes = query_cluster_vols_for_ssc(na_server, vserver, volume)
sis_vols = get_sis_vol_dict(na_server, vserver, volume)
mirrored_vols = get_snapmirror_vol_dict(na_server, vserver, volume)
aggrs = {}
for vol in volumes:
aggr_name = vol.aggr['name']
if aggr_name:
if aggr_name in aggrs:
aggr_attrs = aggrs[aggr_name]
else:
aggr_attrs = query_aggr_options(na_server, aggr_name)
if aggr_attrs:
eff_disk_type = query_aggr_storage_disk(na_server,
aggr_name)
aggr_attrs['disk_type'] = eff_disk_type
aggrs[aggr_name] = aggr_attrs
vol.aggr['raid_type'] = aggr_attrs.get('raid_type')
vol.aggr['ha_policy'] = aggr_attrs.get('ha_policy')
vol.aggr['disk_type'] = aggr_attrs.get('disk_type')
if sis_vols:
if vol.id['name'] in sis_vols:
vol.sis['dedup'] = sis_vols[vol.id['name']]['dedup']
vol.sis['compression'] =\
sis_vols[vol.id['name']]['compression']
else:
vol.sis['dedup'] = False
vol.sis['compression'] = False
if (vol.space['space-guarantee-enabled'] and
(vol.space['space-guarantee'] == 'file' or
vol.space['space-guarantee'] == 'volume')):
vol.space['thin_provisioned'] = False
else:
vol.space['thin_provisioned'] = True
if mirrored_vols:
vol.mirror['mirrored'] = False
if vol.id['name'] in mirrored_vols:
for mirr_attrs in mirrored_vols[vol.id['name']]:
if (mirr_attrs['rel_type'] == 'data_protection' and
mirr_attrs['mirr_state'] == 'snapmirrored'):
vol.mirror['mirrored'] = True
break
return volumes
@utils.trace_method
def query_cluster_vols_for_ssc(na_server, vserver, volume=None):
"""Queries cluster volumes for ssc."""
query = {'volume-attributes': None}
volume_id = {'volume-id-attributes': {'owning-vserver-name': vserver}}
if volume:
volume_id['volume-id-attributes']['name'] = volume
query['volume-attributes'] = volume_id
des_attr = {'volume-attributes':
['volume-id-attributes',
'volume-space-attributes',
'volume-state-attributes',
'volume-qos-attributes']}
result = netapp_api.invoke_api(na_server, api_name='volume-get-iter',
api_family='cm', query=query,
des_result=des_attr,
additional_elems=None,
is_iter=True)
vols = set()
for res in result:
records = res.get_child_content('num-records')
if records > 0:
attr_list = res.get_child_by_name('attributes-list')
if attr_list:
vol_attrs = attr_list.get_children()
vols_found = create_vol_list(vol_attrs)
vols.update(vols_found)
return vols
@utils.trace_method
def create_vol_list(vol_attrs):
"""Creates vol list with features from attr list."""
vols = set()
for v in vol_attrs:
try:
# name and vserver are mandatory
            # Absence of either raises KeyError, which skips this volume.
name = v['volume-id-attributes']['name']
vserver = v['volume-id-attributes']['owning-vserver-name']
vol = NetAppVolume(name, vserver)
vol.id['type'] =\
v['volume-id-attributes'].get_child_content('type')
if vol.id['type'] == "tmp":
continue
vol.id['junction_path'] =\
v['volume-id-attributes'].get_child_content('junction-path')
# state attributes mandatory.
vol.state['vserver_root'] =\
na_utils.to_bool(
v['volume-state-attributes'].get_child_content(
'is-vserver-root'))
if vol.state['vserver_root']:
continue
vol.state['status'] =\
v['volume-state-attributes'].get_child_content('state')
vol.state['inconsistent'] =\
na_utils.to_bool(
v['volume-state-attributes'].get_child_content(
'is-inconsistent'))
vol.state['invalid'] =\
na_utils.to_bool(
v['volume-state-attributes'].get_child_content(
'is-invalid'))
vol.state['junction_active'] =\
na_utils.to_bool(
v['volume-state-attributes'].get_child_content(
'is-junction-active'))
vol.state['cluster_volume'] =\
na_utils.to_bool(
v['volume-state-attributes'].get_child_content(
'is-cluster-volume'))
if (vol.state['status'] != 'online' or
vol.state['inconsistent'] or vol.state['invalid']):
# offline, invalid and inconsistent volumes are not usable
continue
# aggr attributes mandatory.
vol.aggr['name'] =\
v['volume-id-attributes']['containing-aggregate-name']
# space attributes mandatory.
vol.space['size_avl_bytes'] =\
v['volume-space-attributes']['size-available']
vol.space['size_total_bytes'] =\
v['volume-space-attributes']['size-total']
vol.space['space-guarantee-enabled'] =\
na_utils.to_bool(
v['volume-space-attributes'].get_child_content(
'is-space-guarantee-enabled'))
vol.space['space-guarantee'] =\
v['volume-space-attributes'].get_child_content(
'space-guarantee')
# qos attributes optional.
if v.get_child_by_name('volume-qos-attributes'):
vol.qos['qos_policy_group'] =\
v['volume-qos-attributes'].get_child_content(
'policy-group-name')
else:
vol.qos['qos_policy_group'] = None
vols.add(vol)
except KeyError as e:
LOG.debug('Unexpected error while creating'
' ssc vol list. Message - %s', e)
continue
return vols
@utils.trace_method
def query_aggr_options(na_server, aggr_name):
"""Queries cluster aggr for attributes.
Currently queries for raid and ha-policy.
"""
add_elems = {'aggregate': aggr_name}
attrs = {}
try:
result = netapp_api.invoke_api(na_server,
api_name='aggr-options-list-info',
api_family='cm', query=None,
des_result=None,
additional_elems=add_elems,
is_iter=False)
for res in result:
options = res.get_child_by_name('options')
if options:
op_list = options.get_children()
for op in op_list:
if op.get_child_content('name') == 'ha_policy':
attrs['ha_policy'] = op.get_child_content('value')
if op.get_child_content('name') == 'raidtype':
attrs['raid_type'] = op.get_child_content('value')
except Exception as e:
LOG.debug("Exception querying aggr options. %s", e)
return attrs
@utils.trace_method
def get_sis_vol_dict(na_server, vserver, volume=None):
"""Queries sis for volumes.
If volume is present sis is queried for it.
Records dedup and compression enabled.
"""
sis_vols = {}
query_attr = {'vserver': vserver}
if volume:
vol_path = '/vol/%s' % (volume)
query_attr['path'] = vol_path
query = {'sis-status-info': query_attr}
try:
result = netapp_api.invoke_api(na_server,
api_name='sis-get-iter',
api_family='cm',
query=query,
is_iter=True)
for res in result:
attr_list = res.get_child_by_name('attributes-list')
if attr_list:
sis_status = attr_list.get_children()
for sis in sis_status:
path = sis.get_child_content('path')
if not path:
continue
(___, __, vol) = path.rpartition('/')
if not vol:
continue
v_sis = {}
v_sis['compression'] = na_utils.to_bool(
sis.get_child_content('is-compression-enabled'))
v_sis['dedup'] = na_utils.to_bool(
sis.get_child_content('state'))
sis_vols[vol] = v_sis
except Exception as e:
LOG.debug("Exception querying sis information. %s", e)
return sis_vols
@utils.trace_method
def get_snapmirror_vol_dict(na_server, vserver, volume=None):
"""Queries snapmirror volumes."""
mirrored_vols = {}
query_attr = {'source-vserver': vserver}
if volume:
query_attr['source-volume'] = volume
query = {'snapmirror-info': query_attr}
try:
result = netapp_api.invoke_api(na_server,
api_name='snapmirror-get-iter',
api_family='cm', query=query,
is_iter=True)
for res in result:
attr_list = res.get_child_by_name('attributes-list')
if attr_list:
snap_info = attr_list.get_children()
for snap in snap_info:
src_volume = snap.get_child_content('source-volume')
v_snap = {}
v_snap['dest_loc'] =\
snap.get_child_content('destination-location')
v_snap['rel_type'] =\
snap.get_child_content('relationship-type')
v_snap['mirr_state'] =\
snap.get_child_content('mirror-state')
if mirrored_vols.get(src_volume):
mirrored_vols.get(src_volume).append(v_snap)
else:
mirrored_vols[src_volume] = [v_snap]
except Exception as e:
LOG.debug("Exception querying mirror information. %s", e)
return mirrored_vols
@utils.trace_method
def query_aggr_storage_disk(na_server, aggr):
"""Queries for storage disks associated to an aggregate."""
query = {'storage-disk-info': {'disk-raid-info':
{'disk-aggregate-info':
{'aggregate-name': aggr}}}}
des_attr = {'storage-disk-info':
{'disk-raid-info': ['effective-disk-type']}}
try:
result = netapp_api.invoke_api(na_server,
api_name='storage-disk-get-iter',
api_family='cm', query=query,
des_result=des_attr,
additional_elems=None,
is_iter=True)
for res in result:
attr_list = res.get_child_by_name('attributes-list')
if attr_list:
storage_disks = attr_list.get_children()
for disk in storage_disks:
raid_info = disk.get_child_by_name('disk-raid-info')
if raid_info:
eff_disk_type =\
raid_info.get_child_content('effective-disk-type')
if eff_disk_type:
return eff_disk_type
else:
continue
except Exception as e:
LOG.debug("Exception querying storage disk. %s", e)
return 'unknown'
@utils.trace_method
def get_cluster_ssc(na_server, vserver):
"""Provides cluster volumes with ssc."""
netapp_volumes = get_cluster_vols_with_ssc(na_server, vserver)
mirror_vols = set()
dedup_vols = set()
compress_vols = set()
thin_prov_vols = set()
ssc_map = {'mirrored': mirror_vols, 'dedup': dedup_vols,
'compression': compress_vols,
'thin': thin_prov_vols, 'all': netapp_volumes}
for vol in netapp_volumes:
if vol.sis.get('dedup'):
dedup_vols.add(vol)
if vol.sis.get('compression'):
compress_vols.add(vol)
if vol.mirror.get('mirrored'):
mirror_vols.add(vol)
if vol.space.get('thin_provisioned'):
thin_prov_vols.add(vol)
return ssc_map
@utils.trace_method
def refresh_cluster_stale_ssc(*args, **kwargs):
"""Refreshes stale ssc volumes with latest."""
backend = args[0]
na_server = args[1]
vserver = args[2]
identity = six.text_type(id(backend))
lock_pr = '%s_%s' % ('refresh_ssc', identity)
try:
job_set = na_utils.set_safe_attr(
backend, 'refresh_stale_running', True)
if not job_set:
return
@utils.synchronized(lock_pr)
def refresh_stale_ssc():
stale_vols = backend._update_stale_vols(reset=True)
LOG.info(_LI('Running stale ssc refresh job for %(server)s'
' and vserver %(vs)s'),
{'server': na_server, 'vs': vserver})
# refreshing single volumes can create inconsistency
# hence doing manipulations on copy
ssc_vols_copy = copy.deepcopy(backend.ssc_vols)
refresh_vols = set()
expired_vols = set()
for vol in stale_vols:
name = vol.id['name']
res = get_cluster_vols_with_ssc(na_server, vserver, name)
if res:
refresh_vols.add(res.pop())
else:
expired_vols.add(vol)
for vol in refresh_vols:
for k in ssc_vols_copy:
vol_set = ssc_vols_copy[k]
vol_set.discard(vol)
if k == "mirrored" and vol.mirror.get('mirrored'):
vol_set.add(vol)
if k == "dedup" and vol.sis.get('dedup'):
vol_set.add(vol)
if k == "compression" and vol.sis.get('compression'):
vol_set.add(vol)
if k == "thin" and vol.space.get('thin_provisioned'):
vol_set.add(vol)
if k == "all":
vol_set.add(vol)
for vol in expired_vols:
for k in ssc_vols_copy:
vol_set = ssc_vols_copy[k]
vol_set.discard(vol)
backend.refresh_ssc_vols(ssc_vols_copy)
LOG.info(_LI('Successfully completed stale refresh job for'
' %(server)s and vserver %(vs)s'),
{'server': na_server, 'vs': vserver})
refresh_stale_ssc()
finally:
na_utils.set_safe_attr(backend, 'refresh_stale_running', False)
@utils.trace_method
def get_cluster_latest_ssc(*args, **kwargs):
"""Updates volumes including ssc."""
backend = args[0]
na_server = args[1]
vserver = args[2]
identity = six.text_type(id(backend))
lock_pr = '%s_%s' % ('refresh_ssc', identity)
# As this depends on stale job running state
# set flag as soon as job starts to avoid
# job accumulation.
try:
job_set = na_utils.set_safe_attr(backend, 'ssc_job_running', True)
if not job_set:
return
@utils.synchronized(lock_pr)
def get_latest_ssc():
LOG.info(_LI('Running cluster latest ssc job for %(server)s'
' and vserver %(vs)s'),
{'server': na_server, 'vs': vserver})
ssc_vols = get_cluster_ssc(na_server, vserver)
backend.refresh_ssc_vols(ssc_vols)
backend.ssc_run_time = timeutils.utcnow()
LOG.info(_LI('Successfully completed ssc job for %(server)s'
' and vserver %(vs)s'),
{'server': na_server, 'vs': vserver})
get_latest_ssc()
finally:
na_utils.set_safe_attr(backend, 'ssc_job_running', False)
@utils.trace_method
def refresh_cluster_ssc(backend, na_server, vserver, synchronous=False):
"""Refresh cluster ssc for backend."""
if not isinstance(na_server, netapp_api.NaServer):
raise exception.InvalidInput(reason=_("Backend server not NaServer."))
delta_secs = getattr(backend, 'ssc_run_delta_secs', 1800)
if getattr(backend, 'ssc_job_running', None):
LOG.warning(_LW('ssc job in progress. Returning... '))
return
elif (getattr(backend, 'ssc_run_time', None) is None or
(backend.ssc_run_time and
timeutils.is_newer_than(backend.ssc_run_time, delta_secs))):
if synchronous:
get_cluster_latest_ssc(backend, na_server, vserver)
else:
t = threading.Timer(0, get_cluster_latest_ssc,
args=[backend, na_server, vserver])
t.start()
elif getattr(backend, 'refresh_stale_running', None):
LOG.warning(_LW('refresh stale ssc job in progress. Returning... '))
return
else:
if backend.stale_vols:
if synchronous:
refresh_cluster_stale_ssc(backend, na_server, vserver)
else:
t = threading.Timer(0, refresh_cluster_stale_ssc,
args=[backend, na_server, vserver])
t.start()
@utils.trace_method
def get_volumes_for_specs(ssc_vols, specs):
"""Shortlists volumes for extra specs provided."""
if specs is None or specs == {} or not isinstance(specs, dict):
return ssc_vols['all']
result = copy.deepcopy(ssc_vols['all'])
raid_type = specs.get('netapp:raid_type')
disk_type = specs.get('netapp:disk_type')
bool_specs_list = ['netapp_mirrored', 'netapp_unmirrored',
'netapp_dedup', 'netapp_nodedup',
'netapp_compression', 'netapp_nocompression',
'netapp_thin_provisioned', 'netapp_thick_provisioned']
b_specs = {}
for spec in bool_specs_list:
b_specs[spec] = na_utils.to_bool(specs.get(spec))\
if specs.get(spec) else None
def _spec_ineffect(b_specs, spec, opp_spec):
"""If the spec with opposite spec is ineffective."""
if ((b_specs[spec] is None and b_specs[opp_spec] is None)
or (b_specs[spec] == b_specs[opp_spec])):
return True
else:
return False
if _spec_ineffect(b_specs, 'netapp_mirrored', 'netapp_unmirrored'):
pass
else:
if b_specs['netapp_mirrored'] or b_specs['netapp_unmirrored'] is False:
result = result & ssc_vols['mirrored']
else:
result = result - ssc_vols['mirrored']
if _spec_ineffect(b_specs, 'netapp_dedup', 'netapp_nodedup'):
pass
else:
if b_specs['netapp_dedup'] or b_specs['netapp_nodedup'] is False:
result = result & ssc_vols['dedup']
else:
result = result - ssc_vols['dedup']
if _spec_ineffect(b_specs, 'netapp_compression', 'netapp_nocompression'):
pass
else:
if (b_specs['netapp_compression'] or
b_specs['netapp_nocompression'] is False):
result = result & ssc_vols['compression']
else:
result = result - ssc_vols['compression']
if _spec_ineffect(b_specs, 'netapp_thin_provisioned',
'netapp_thick_provisioned'):
pass
else:
if (b_specs['netapp_thin_provisioned'] or
b_specs['netapp_thick_provisioned'] is False):
result = result & ssc_vols['thin']
else:
result = result - ssc_vols['thin']
if raid_type or disk_type:
tmp = copy.deepcopy(result)
for vol in tmp:
if raid_type:
vol_raid = vol.aggr['raid_type']
vol_raid = vol_raid.lower() if vol_raid else None
if raid_type.lower() != vol_raid:
result.discard(vol)
if disk_type:
vol_dtype = vol.aggr['disk_type']
vol_dtype = vol_dtype.lower() if vol_dtype else None
if disk_type.lower() != vol_dtype:
result.discard(vol)
return result
@utils.trace_method
def check_ssc_api_permissions(client_cmode):
"""Checks backend SSC API permissions for the user."""
api_map = {'storage-disk-get-iter': ['netapp:disk_type'],
'snapmirror-get-iter': ['netapp_mirrored',
'netapp_unmirrored'],
'sis-get-iter': ['netapp_dedup', 'netapp_nodedup',
'netapp_compression',
'netapp_nocompression'],
'aggr-options-list-info': ['netapp:raid_type'],
'volume-get-iter': []}
failed_apis = client_cmode.check_apis_on_cluster(api_map.keys())
if failed_apis:
if 'volume-get-iter' in failed_apis:
msg = _("Fatal error: User not permitted"
" to query NetApp volumes.")
raise exception.VolumeBackendAPIException(data=msg)
else:
unsupp_ssc_features = []
for fail in failed_apis:
unsupp_ssc_features.extend(api_map[fail])
LOG.warning(_LW("The user does not have access or sufficient "
"privileges to use all netapp APIs. The "
"following extra_specs will fail or be ignored: "
"%s"), unsupp_ssc_features)
| {
"content_hash": "01ca93369d4446c015248e89be3af49f",
"timestamp": "",
"source": "github",
"line_count": 621,
"max_line_length": 79,
"avg_line_length": 40.388083735909824,
"alnum_prop": 0.5182010286671186,
"repo_name": "JioCloud/cinder",
"id": "8c2c781338133b083f147a1a59746e1533485873",
"size": "25999",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "cinder/volume/drivers/netapp/dataontap/ssc_cmode.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "11977630"
},
{
"name": "Shell",
"bytes": "8111"
}
],
"symlink_target": ""
} |
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'sphinx.ext.githubpages',
'sphinx_autodoc_typehints',
]
source_suffix = '.rst'
master_doc = 'index'
project = 'PyGenSound'
copyright = '2017, MacRat'
author = 'MacRat'
version = '0.0.1'
release = '0.0.1 dev'
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
html_theme = 'alabaster'
html_theme_options = {
'description': 'The python library for generating sound and compute.',
'github_user': 'macrat',
'github_repo': 'PyGenSound',
'github_banner': True,
'github_button': False,
}
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'searchbox.html',
]
}
intersphinx_mapping = {'https://docs.python.org/': None}
autodoc_member_order = 'bysource'
autodoc_default_flags = ['members', 'undoc-members', 'show-inheritance']
| {
"content_hash": "72c69a6d9bbe4fa3ec13842a16427b6d",
"timestamp": "",
"source": "github",
"line_count": 52,
"max_line_length": 74,
"avg_line_length": 19.576923076923077,
"alnum_prop": 0.6296660117878192,
"repo_name": "macrat/PyGenSound",
"id": "7e80b3ea25b42385c9971dc10fd040bad31487c3",
"size": "1018",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "doc/conf.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "102439"
}
],
"symlink_target": ""
} |
"""Python md to HTML converter with YAML
Usage:
pyconverter.py FILE [--pandoc_args | -h]
Arguments:
file the file to convert
Options:
-h, --help Show this screen.
--pandoc_args option list of arguments for pandoc
"""
from docopt import docopt
"""
pyconverter.py uses Pandoc internally to convert a markdown file to an html
file. The only reason this was necessary was to find a way to keep the damn
YAML front matter in the converted HTML file. You may pass arbitrary
arguments to pandoc via optional command line args.
"""
import yaml as ym
import frontmatter as fm
import pypandoc as pd
def main(_args):
""" Does the things."""
print("Converting {computer}".format(computer=_args["FILE"]))
if _args["--pandoc_args"]:
pdoc_args = _args["--pandoc_args"]
else:
filters = ['pandoc-citeproc']
pdoc_args = ['--mathjax',
'--smart'
]
filename = _args['FILE']
basename = filename.split('.')[0]
post = fm.load(filename)
filedate = post.metadata['date']
file_frontmatter = ('---\n' + ym.dump(post.metadata) + '---\n')
output = pd.convert(source=filename,
to='html5',
format='md',
extra_args=pdoc_args,
filters=filters)
out = (file_frontmatter + '\n' + output)
write_name = (str(filedate) + '-' + basename + '.html')
# writes file friend for Jekyll
with open(write_name, mode='w') as f:
f.write(out)
if __name__ == "__main__":
main(docopt(__doc__))
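# Example invocation (illustrative; it assumes pandoc and the pandoc-citeproc
# filter are installed and that post.md starts with a YAML front matter block
# containing a "date" field):
#
#     python pandoc_yaml_keeper.py post.md
#
# which writes a Jekyll-friendly "<date>-post.html" next to the source file.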
| {
"content_hash": "94e8039573f275ed9b922b40c9129b94",
"timestamp": "",
"source": "github",
"line_count": 59,
"max_line_length": 79,
"avg_line_length": 27.661016949152543,
"alnum_prop": 0.5735294117647058,
"repo_name": "xysmas/pandoc_yaml_keeper",
"id": "473e078fe9e417ed68333fa0ca3c6ae03c0efc0e",
"size": "1655",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "pandoc_yaml_keeper.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "21439"
}
],
"symlink_target": ""
} |
"""Helper module to factorize the conditional multiprocessing import logic
We use a distinct module to simplify import statements and avoid introducing
circular dependencies (for instance for the assert_spawning name).
"""
import os
import sys
import warnings
# Obtain possible configuration from the environment, assuming 1 (on)
# by default; a value of 0 disables multiprocessing (None). This should
# fail with an instructive error if some non 0/1 value is set.
mp = int(os.environ.get('JOBLIB_MULTIPROCESSING', 1)) or None
if mp:
try:
import multiprocessing as mp
except ImportError:
mp = None
# 2nd stage: validate that locking is available on the system and
# issue a warning if not
if mp is not None:
try:
# Use the spawn context
if sys.version_info < (3, 3):
Semaphore = mp.Semaphore
else:
# Using mp.Semaphore has a border effect and set the default
# backend for multiprocessing. To avoid that, we use the 'spawn'
# context which is available on all supported platforms.
ctx = mp.get_context('spawn')
Semaphore = ctx.Semaphore
_sem = Semaphore()
del _sem # cleanup
except (ImportError, OSError) as e:
mp = None
warnings.warn('%s. joblib will operate in serial mode' % (e,))
# 3rd stage: backward compat for the assert_spawning helper
if mp is not None:
try:
# Python 3.4+
from multiprocessing.context import assert_spawning
except ImportError:
from multiprocessing.forking import assert_spawning
else:
assert_spawning = None
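# Illustrative sketch only (not part of joblib): downstream code is expected
# to check the possibly-None ``mp`` before choosing a parallel code path.
if __name__ == '__main__':
    if mp is None:
        print('multiprocessing unavailable or disabled; running serially')
    else:
        print('multiprocessing available with %d CPUs' % mp.cpu_count())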
| {
"content_hash": "1e62eeacb38590174afc204e9d464ae7",
"timestamp": "",
"source": "github",
"line_count": 49,
"max_line_length": 76,
"avg_line_length": 32.795918367346935,
"alnum_prop": 0.6664592408214064,
"repo_name": "vortex-ape/scikit-learn",
"id": "be642b869febe1c94b2ba8a26c25ba537a017c90",
"size": "1607",
"binary": false,
"copies": "12",
"ref": "refs/heads/master",
"path": "sklearn/externals/joblib/_multiprocessing_helpers.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Batchfile",
"bytes": "3366"
},
{
"name": "C",
"bytes": "394787"
},
{
"name": "C++",
"bytes": "140225"
},
{
"name": "Makefile",
"bytes": "1588"
},
{
"name": "PowerShell",
"bytes": "17312"
},
{
"name": "Python",
"bytes": "6351428"
},
{
"name": "Shell",
"bytes": "8687"
}
],
"symlink_target": ""
} |
import asyncio
from octobot import Bot
from octobot.plugins import PluginManager
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.set_debug(True)
bot = Bot.new(loop, 'localhost', 6667)
loop.run_forever()
#plugin_manager = PluginManager(['plugins'])
#plugin_manager.find_plugins()
| {
"content_hash": "343cbebfcb7582d7a179c39008476c5a",
"timestamp": "",
"source": "github",
"line_count": 12,
"max_line_length": 48,
"avg_line_length": 26.75,
"alnum_prop": 0.6728971962616822,
"repo_name": "Thezomg/OctoBot",
"id": "25ed55a9793d5133b8dc324679566b0002dfc0be",
"size": "321",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "main.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "43356"
}
],
"symlink_target": ""
} |
import os
import sys
#find out the file location within the sources tree
this_module_dir_path = os.path.abspath ( os.path.dirname( sys.modules[__name__].__file__) )
#find out gccxml location
gccxml_09_path = os.path.join( this_module_dir_path, '..', '..', '..', 'gccxml_bin', 'v09', sys.platform, 'bin' )
#add pygccxml package to Python path
sys.path.append( os.path.join( this_module_dir_path, '..', '..' ) )
from pygccxml import parser
from pygccxml import declarations
#configure GCC-XML parser
config = parser.config_t( gccxml_path=gccxml_09_path )
#parsing source file
decls = parser.parse( ['example.hpp'], config )
global_ns = declarations.get_global_namespace( decls )
#get object that describes unittests namespace
unittests = global_ns.namespace( 'unittests' )
print '"unittests" declarations: '
declarations.print_declarations( unittests )
#print all base and derived class names
for class_ in unittests.classes():
print 'class "%s" hierarchy information:' % class_.name
print '\tbase classes : ', `[base.related_class.name for base in class_.bases]`
print '\tderived classes: ', `[derive.related_class.name for derive in class_.derived]`
print '\n'
#pygccxml has a very powerful query api:
#select multiple declarations
run_functions = unittests.member_functions( 'run' )
print 'the namespace contains %d "run" member functions' % len(run_functions)
print 'they are: '
for f in run_functions:
print '\t' + declarations.full_name( f )
#select single declaration - all next statements will return same object
#vector< unittests::test_case* >
#you can select the class using "full" name
test_container_1 = global_ns.class_( 'vector<unittests::test_case*, std::allocator<unittests::test_case*> >' )
#you can select the class using partial name
test_container_2 = global_ns.class_( 'vector< unittests::test_case* >' )
#you can define your own "match" criteria
test_container_3 = global_ns.class_( lambda decl: 'vector' in decl.name )
is_same_object = test_container_1 is test_container_2 \
and test_container_2 is test_container_3
print "Does all test_container_* refer to the same object? ", str(is_same_object)
| {
"content_hash": "36a2ebf78a66c31d292781614dbe9be7",
"timestamp": "",
"source": "github",
"line_count": 60,
"max_line_length": 113,
"avg_line_length": 37.31666666666667,
"alnum_prop": 0.6976328718177758,
"repo_name": "avaitla/Haskell-to-C---Bridge",
"id": "b0306a2e9e139d51591e82706648576f30c8f542",
"size": "2435",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "pygccxml-1.0.0/docs/example/example.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Assembly",
"bytes": "1194590"
},
{
"name": "C",
"bytes": "40073785"
},
{
"name": "C++",
"bytes": "2198628"
},
{
"name": "Haskell",
"bytes": "22377"
},
{
"name": "JavaScript",
"bytes": "10874"
},
{
"name": "Perl",
"bytes": "1373"
},
{
"name": "Python",
"bytes": "696243"
},
{
"name": "Shell",
"bytes": "1623468"
}
],
"symlink_target": ""
} |
import time
import functools
class TimeCost(object):
def __init__(self, unit='s', precision=4, logger=None):
self.start = None
self.end = None
self.total = 0
self.unit = unit
self.precision = precision
self.__unitfactor = {'s': 1,
'ms': 1000,
'us': 1000000}
self.logger = logger
def __call__(self, f):
@functools.wraps(f)
def wrapped(*args, **kwargs):
with self:
return f(*args, **kwargs)
return wrapped
def __enter__(self):
if self.unit not in self.__unitfactor:
raise KeyError('Unsupported time unit.')
if self.precision < 0:
            raise ValueError('precision must be >= 0')
self.start = time.time()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.end = time.time()
self.total = (self.end - self.start) * self.__unitfactor[self.unit]
if self.precision != 0:
self.total = round(self.total, self.precision)
else:
self.total = int(self.total)
if self.logger:
self.logger.info('this cost {0}{1}'.format(self.total, self.unit))
def __str__(self):
return 'this cost {0}{1}'.format(self.total, self.unit)
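# Usage sketch (not part of the original module; the numbers and names below
# are purely illustrative): TimeCost works both as a context manager and as
# a decorator, and can optionally report through a logger.
if __name__ == '__main__':
    import logging
    logging.basicConfig(level=logging.INFO)
    with TimeCost(unit='ms', precision=2) as tc:
        sum(range(1000000))
    print(tc)  # e.g. "this cost 12.34ms"
    @TimeCost(unit='us', logger=logging.getLogger('timecost'))
    def busy_work():
        return sum(range(100000))
    busy_work()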
| {
"content_hash": "13b513739ee642b7a063d805b21be482",
"timestamp": "",
"source": "github",
"line_count": 43,
"max_line_length": 78,
"avg_line_length": 30.46511627906977,
"alnum_prop": 0.5290076335877862,
"repo_name": "rfyiamcool/TimeCost",
"id": "e814bd1b8421bf0773a2b834ac7a70a72d5ad159",
"size": "1310",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "timecost.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "2787"
}
],
"symlink_target": ""
} |
from os.path import dirname as _dirname, basename as _basename, isfile as _isfile
import glob as _glob
# Dynamically import every public name from each sibling module in this
# package (every *.py file whose name does not start with an underscore),
# i.e. the equivalent of writing "from .<module> import *" for each one.
exec('\n'.join(map(lambda name: "from ." + name + " import *",
                   [_basename(f)[:-3] for f in _glob.glob(_dirname(__file__) + "/*.py") \
                    if _isfile(f) and not _basename(f).startswith('_')])))
| {
"content_hash": "c0cb1ee5fad6e8882b7d17c576b7fc21",
"timestamp": "",
"source": "github",
"line_count": 6,
"max_line_length": 89,
"avg_line_length": 55.5,
"alnum_prop": 0.5615615615615616,
"repo_name": "TimeSynth/TimeSynth",
"id": "2e5d4599b4e546cdac354aa4d073b9fb307f3e15",
"size": "333",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "timesynth/signals/__init__.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Jupyter Notebook",
"bytes": "914203"
},
{
"name": "Python",
"bytes": "31422"
}
],
"symlink_target": ""
} |
import IECore
import Gaffer
import GafferUI
from Qt import QtGui
from Qt import QtWidgets
from Qt import QtCore
class Button( GafferUI.Widget ) :
__palette = None
def __init__( self, text="", image=None, hasFrame=True, highlightOnOver=True, **kw ) :
GafferUI.Widget.__init__( self, QtWidgets.QPushButton(), **kw )
self.__highlightForHover = False
self._qtWidget().setAttribute( QtCore.Qt.WA_LayoutUsesWidgetRect )
self._qtWidget().setFocusPolicy( QtCore.Qt.TabFocus )
# allow return and enter keys to click button
self._qtWidget().setAutoDefault( True )
self.setText( text )
self.setImage( image )
self.setHasFrame( hasFrame )
# using a WeakMethod to avoid circular references which would otherwise
# never be broken.
self._qtWidget().clicked.connect( Gaffer.WeakMethod( self.__clicked ) )
self.__clickedSignal = GafferUI.WidgetSignal()
# buttons appear to totally ignore the etch-disabled-text stylesheet option,
# and we really don't like the etching. the only effective way of disabling it
# seems to be to apply this palette which makes the etched text transparent.
if Button.__palette is None :
Button.__palette = QtGui.QPalette( QtWidgets.QApplication.instance().palette( self._qtWidget() ) )
Button.__palette.setColor( QtGui.QPalette.Disabled, QtGui.QPalette.Light, QtGui.QColor( 0, 0, 0, 0 ) )
self._qtWidget().setPalette( Button.__palette )
if highlightOnOver :
self.enterSignal().connect( Gaffer.WeakMethod( self.__enter ), scoped = False )
self.leaveSignal().connect( Gaffer.WeakMethod( self.__leave ), scoped = False )
def setHighlighted( self, highlighted ) :
GafferUI.Widget.setHighlighted( self, highlighted )
self.__updateIcon()
def setText( self, text ) :
assert( isinstance( text, str ) )
self._qtWidget().setText( text )
def getText( self ) :
return self._qtWidget().text()
def setImage( self, imageOrImageFileName ) :
assert( isinstance( imageOrImageFileName, ( str, GafferUI.Image, type( None ) ) ) )
if isinstance( imageOrImageFileName, str ) :
# Avoid our image getting parented to the wrong thing
# if our caller is in a `with container` block.
GafferUI.Widget._pushParent( None )
# Make sure we don't break if an image is missing
try:
self.__image = GafferUI.Image( imageOrImageFileName )
except Exception as e:
IECore.msg( IECore.Msg.Level.Error, "GafferUI.Button",
"Could not read image for icon : " + str( e )
)
GafferUI.Widget._popParent()
else :
self.__image = imageOrImageFileName
self.__updateIcon()
def getImage( self ) :
return self.__image
def setHasFrame( self, hasFrame ) :
self._qtWidget().setProperty( "gafferWithFrame", hasFrame )
self._qtWidget().setSizePolicy(
QtWidgets.QSizePolicy.Minimum if hasFrame else QtWidgets.QSizePolicy.Fixed,
QtWidgets.QSizePolicy.Fixed
)
self._repolish()
def getHasFrame( self ) :
return self._qtWidget().property( "gafferWithFrame" )
def setEnabled( self, enabled ) :
# Once we're disabled, mouse leave events will be skipped, and we'll
# remain in a highlighted state once re-enabled.
if not enabled and self.__highlightForHover :
self.__highlightForHover = False
self.__updateIcon()
GafferUI.Widget.setEnabled( self, enabled )
def clickedSignal( self ) :
return self.__clickedSignal
def __clicked( self, *unusedArgs ) : # currently PyQt passes a "checked" argument and PySide doesn't
# workaround problem whereby not all text fields will have committed their contents
# into plugs when the button is pressed - this occurs particularly in the OpDialogue, and causes
# the op to run without the values the user sees in the ui. normally editingFinished is emitted by
# the text widget itself on a loss of focus, but unfortunately clicking on a button doesn't cause that
# focus loss. so we helpfully emit the signal ourselves here.
focusWidget = GafferUI.Widget._owner( QtWidgets.QApplication.focusWidget() )
if focusWidget is not None and hasattr( focusWidget, "editingFinishedSignal" ) :
focusWidget.editingFinishedSignal()( focusWidget )
self.clickedSignal()( self )
def __updateIcon( self ) :
if self.__image is None :
self._qtWidget().setIcon( QtGui.QIcon() )
return
# Qt's built-in disabled state generation doesn't work well with dark schemes
# There is no built-in support for QtGui.QIcon.Active in the default
# painter, which is why we have to juggle it here.
icon = self.__image._qtIcon( highlighted = self.getHighlighted() or self.__highlightForHover )
self._qtWidget().setIcon( icon )
self._qtWidget().setIconSize( self.__image._qtPixmap().size() )
def __enter( self, widget ) :
self.__highlightForHover = True
self.__updateIcon()
def __leave( self, widget ) :
self.__highlightForHover = False
self.__updateIcon()
| {
"content_hash": "e865398ded00635412a25229fd54c8a2",
"timestamp": "",
"source": "github",
"line_count": 151,
"max_line_length": 105,
"avg_line_length": 32.12582781456954,
"alnum_prop": 0.7161410018552876,
"repo_name": "GafferHQ/gaffer",
"id": "1b59a13f0bd89e4954de7bc86992dfe689397ae5",
"size": "6716",
"binary": false,
"copies": "3",
"ref": "refs/heads/main",
"path": "python/GafferUI/Button.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Batchfile",
"bytes": "5790"
},
{
"name": "C",
"bytes": "61993"
},
{
"name": "C++",
"bytes": "9572701"
},
{
"name": "CMake",
"bytes": "85201"
},
{
"name": "GLSL",
"bytes": "6208"
},
{
"name": "Python",
"bytes": "10280178"
},
{
"name": "Ruby",
"bytes": "419"
},
{
"name": "Shell",
"bytes": "14580"
}
],
"symlink_target": ""
} |
import pygame
from spritesheet import SpriteSheet
from enemy import Enemy
from directions import Directions
from utils import *
class Spider(Enemy):
def __init__(self, width, height, x, y):
Enemy.__init__(self, width, 20, x, y)
self.dir = 'D'
self.speed = 1
# initialize sprite lists
ss = SpriteSheet(path.join(get_art_dir(), 'Spider', 'Spider_spritesheet.png'), 4)
self.sprites_walk_left = ss.get_sprites(size=(30, 20))
self.sprites_walk_right = [pygame.transform.flip(s, True, False) for s in self.sprites_walk_left]
self.image = self.sprites_walk_left[0]
def update(self, c):
# Always face the player
if self.rect.x < c.player.rect.x:
self.heading = Directions.Right
else:
self.heading = Directions.Left
self.update_sprites()
# Check aggro
if not self.checkAggro(c, False):
if self.dir == 'U':
self.delta_y = -self.speed
else:
self.delta_y = self.speed
else:
dist = c.player.rect.y - self.rect.y
if dist > 0:
self.dir = 'D'
self.delta_y = self.speed
else:
self.dir = 'U'
self.delta_y = -self.speed
pl = c.lvl_current.platform_list
# collision detection in y
self.rect.y += self.delta_y
collide_list = pygame.sprite.spritecollide(self, pl, False)
for platform in collide_list:
if self.delta_y > 0:
self.rect.bottom = platform.rect.top
self.dir = "U"
elif self.delta_y < 0:
self.rect.top = platform.rect.bottom
self.dir = "D"
self.delta_y = 0
# Don't let the spider move above the top boundary
if self.rect.top <= 0:
self.rect.top = 0
self.dir = "D"
# collision detection in x
# self.rect.x += self.delta_x
# collide_list = pygame.sprite.spritecollide(self, pl, False)
# for platform in collide_list:
# if self.delta_x > 0: # dir = "R"
# self.rect.right = platform.rect.left
# elif self.delta_x < 0: # dir = "L"
# self.rect.left = platform.rect.right
# self.delta_x = 0
def get_sprites(self):
ret = None
if self.heading == Directions.Left:
ret = self.sprites_walk_left
elif self.heading == Directions.Right:
ret = self.sprites_walk_right
return ret
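# --- Illustration (not part of spider.py) --------------------------------------
# The vertical movement decision in Spider.update() reduces to the small,
# framework-free helper below: patrol up or down in the current direction while
# the player is outside aggro range, otherwise chase the player's y position.
# The helper name and signature are hypothetical; this is a sketch of the logic
# only, not code used by the game.
def _vertical_step(direction, speed, aggro, dy_to_player):
    """Return (new_direction, delta_y) for one update tick."""
    if not aggro:
        # Keep patrolling in the current direction.
        return direction, -speed if direction == 'U' else speed
    # Aggro: head towards the player's vertical position.
    if dy_to_player > 0:
        return 'D', speed
    return 'U', -speed

assert _vertical_step('U', 1, aggro=False, dy_to_player=0) == ('U', -1)
assert _vertical_step('D', 1, aggro=True, dy_to_player=-5) == ('U', -1)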
| {
"content_hash": "d11819c7edc136102b5ddbdddb4dd01f",
"timestamp": "",
"source": "github",
"line_count": 81,
"max_line_length": 99,
"avg_line_length": 26.123456790123456,
"alnum_prop": 0.6483931947069943,
"repo_name": "450W16/MODACT",
"id": "fbae5471068bd67f5bb416d2b6e11b122dbbb7af",
"size": "2116",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "src/characters/spider.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "63777"
}
],
"symlink_target": ""
} |
from __future__ import annotations
import socket
from functools import lru_cache
from airflow.configuration import conf
# patched version of socket.getfqdn() - see https://github.com/python/cpython/issues/49254
@lru_cache(maxsize=None)
def getfqdn(name=""):
"""Get fully qualified domain name from name.
An empty argument is interpreted as meaning the local host.
"""
name = name.strip()
if not name or name == "0.0.0.0":
name = socket.gethostname()
try:
addrs = socket.getaddrinfo(name, None, 0, socket.SOCK_DGRAM, 0, socket.AI_CANONNAME)
except OSError:
pass
else:
for addr in addrs:
if addr[3]:
name = addr[3]
break
return name
def get_host_ip_address():
"""Fetch host ip address."""
return socket.gethostbyname(getfqdn())
def get_hostname():
"""
Fetch the hostname using the callable from the config or using
`airflow.utils.net.getfqdn` as a fallback.
"""
return conf.getimport("core", "hostname_callable", fallback="airflow.utils.net.getfqdn")()
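# --- Usage sketch (not part of airflow/utils/net.py) ---------------------------
# getfqdn() resolves the canonical host name via getaddrinfo(AI_CANONNAME) and
# caches the result, get_host_ip_address() maps that name to an IP address, and
# get_hostname() lets the "core" section's hostname_callable option override the
# default. The guarded demo below is a sketch only and assumes the module is run
# in an environment where Airflow's configuration can be imported.
if __name__ == "__main__":
    print(getfqdn())              # e.g. "host.example.com"
    print(get_host_ip_address())  # e.g. "192.0.2.10"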
| {
"content_hash": "693725b50fb6fe9c13ce22cbda74690b",
"timestamp": "",
"source": "github",
"line_count": 40,
"max_line_length": 94,
"avg_line_length": 27.475,
"alnum_prop": 0.6460418562329391,
"repo_name": "apache/airflow",
"id": "57bd9008b95d88710261c3ede0016ce16779fee2",
"size": "1886",
"binary": false,
"copies": "2",
"ref": "refs/heads/main",
"path": "airflow/utils/net.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "25980"
},
{
"name": "Dockerfile",
"bytes": "71458"
},
{
"name": "HCL",
"bytes": "3786"
},
{
"name": "HTML",
"bytes": "172957"
},
{
"name": "JavaScript",
"bytes": "143915"
},
{
"name": "Jinja",
"bytes": "38911"
},
{
"name": "Jupyter Notebook",
"bytes": "5482"
},
{
"name": "Mako",
"bytes": "1339"
},
{
"name": "Python",
"bytes": "23697738"
},
{
"name": "R",
"bytes": "313"
},
{
"name": "Shell",
"bytes": "211306"
},
{
"name": "TypeScript",
"bytes": "521019"
}
],
"symlink_target": ""
} |
import sys
import unittest
import tempfile, os, shutil
from gppylib.commands.base import CommandResult, Command, ExecutionError
from gppylib.operations.backup_utils import *
from gppylib.operations.restore import *
from gppylib.operations.restore import _build_gpdbrestore_cmd_line
from gppylib.mainUtils import ExceptionNoStackTraceNeeded
from mock import patch, MagicMock, Mock, mock_open, call, ANY
from . import setup_fake_gparray
class RestoreTestCase(unittest.TestCase):
def setUp(self):
context = Context()
with patch('gppylib.gparray.GpArray.initFromCatalog', return_value=setup_fake_gparray()):
context = Context()
context.target_db='testdb'
context.include_dump_tables_file='/tmp/table_list.txt'
context.master_datadir='/data/master/p1'
context.backup_dir=None
context.batch_default=None
context.timestamp = '20160101010101'
context.no_analyze = True
context.drop_db = True
context.master_port = 5432
self.context = context
self.restore = RestoreDatabase(self.context)
self.validate_timestamp = ValidateTimestamp(self.context)
def test_GetDbName_default(self):
""" Basic test """
with tempfile.NamedTemporaryFile() as f:
f.write("""
--
-- Database creation
--
CREATE DATABASE monkey WITH TEMPLATE = template0 ENCODING = 'UTF8' OWNER = thisguy;
""")
f.flush()
self.assertTrue(GetDbName(f.name).run() == "monkey")
def test_GetDbName_line_check(self):
""" Verify that GetDbName looks no further than 50 lines. """
with tempfile.NamedTemporaryFile() as f:
for i in range(0, 50):
f.write("crap\n")
f.write("CREATE DATABASE monkey")
f.flush()
try:
GetDbName(f.name).run()
except GetDbName.DbNameGiveUp:
return
self.fail("DbNameGiveUp should have been raised.")
def test_GetDbName_no_name(self):
""" Verify that GetDbName fails when cdatabase file ends prematurely. """
with tempfile.NamedTemporaryFile() as f:
f.write("this is the whole file")
f.flush()
try:
GetDbName(f.name).run()
except GetDbName.DbNameNotFound:
return
self.fail("DbNameNotFound should have been raised.")
@patch('gppylib.operations.restore.RestoreDatabase._process_createdb', side_effect=ExceptionNoStackTraceNeeded('Failed to create database'))
@patch('time.sleep')
def test_multitry_createdb_create_fails(self, mock1, mock2):
self.assertRaises(ExceptionNoStackTraceNeeded, self.restore._multitry_createdb)
@patch('gppylib.operations.restore.RestoreDatabase._process_createdb')
def test_multitry_createdb_default(self, mock):
self.restore._multitry_createdb()
@patch('gppylib.operations.restore.get_partition_list', return_value=[('public', 't1'), ('public', 't2'), ('public', 't3')])
@patch('gppylib.operations.restore.get_incremental_restore_timestamps', return_value=['20160101010101', '20160101010111'])
@patch('gppylib.operations.restore.get_dirty_table_file_contents', return_value=['public.t1', 'public.t2'])
def test_create_restore_plan_default(self, mock1, mock2, mock3):
expected = ["20160101010111:", "20160101010101:public.t1,public.t2", "20160101000000:public.t3"]
self.context.full_dump_timestamp = '20160101000000'
m = mock_open()
with patch('__builtin__.open', m, create=True):
plan_file = create_restore_plan(self.context)
result = m()
self.assertEqual(len(expected), len(result.write.call_args_list))
for i in range(len(expected)):
self.assertEqual(call(expected[i]+'\n'), result.write.call_args_list[i])
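# --- Note on the plan file format (illustration, not an original test) ---------
# The expected lines above show the restore plan format exercised throughout
# this suite: one "timestamp:table[,table...]" line per backup, newest
# increment first and the full backup's timestamp last, e.g.
#
#     20160101010111:
#     20160101010101:public.t1,public.t2
#     20160101000000:public.t3
#
# An empty table list after the colon means nothing needs to be restored from
# that particular backup.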
@patch('gppylib.operations.restore.get_incremental_restore_timestamps', return_value=['20160101010101', '20160101010111'])
@patch('gppylib.operations.restore.get_dirty_table_file_contents', return_value=['public.t1', 'public.t2'])
def test_create_restore_plan_empty_list(self, mock1, mock2):
expected = ["20160101010111:", "20160101010101:", "20160101000000:"]
self.context.full_dump_timestamp = '20160101000000'
m = mock_open()
with patch('__builtin__.open', m, create=True):
plan_file = create_restore_plan(self.context)
result = m()
self.assertEqual(len(expected), len(result.write.call_args_list))
for i in range(len(expected)):
self.assertEqual(call(expected[i]+'\n'), result.write.call_args_list[i])
@patch('gppylib.operations.restore.get_partition_list', return_value=[])
@patch('gppylib.operations.restore.get_full_timestamp_for_incremental', return_value='20120101000000')
@patch('gppylib.operations.restore.get_incremental_restore_timestamps', return_value=['20160101010101', '20160101010111'])
@patch('gppylib.operations.restore.get_dirty_table_file_contents', return_value=['public.t1', 'public.t2'])
@patch('gppylib.operations.restore.create_plan_file_contents')
def test_create_restore_plan_empty_list_with_nbu(self, mock1, mock2, mock3, mock4, mock5):
self.context.netbackup_service_host = 'mdw'
self.context.netbackup_block_size = '1024'
m = mock_open()
with patch('__builtin__.open', m, create=True):
plan_file = create_restore_plan(self.context)
result = m()
self.assertEqual(len(result.write.call_args_list), 0)
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['20160101010110', '20160101010109', '20160101010108', '20160101010107', '20160101010106', '20160101010105', '20160101010104', '20160101010103', '20160101010102', '20160101010101'])
def test_get_incremental_restore_timestamps_midway(self, mock):
self.context.full_dump_timestamp = '20160101010101'
self.context.timestamp = '20160101010105'
increments = get_incremental_restore_timestamps(self.context)
self.assertEqual(increments, ['20160101010105', '20160101010104', '20160101010103', '20160101010102', '20160101010101'])
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['20160101010110', '20160101010109', '20160101010108', '20160101010107', '20160101010106', '20160101010105', '20160101010104', '20160101010103', '20160101010102', '20160101010101'])
def test_get_incremental_restore_timestamps_latest(self, mock):
self.context.full_dump_timestamp = '20160101010101'
self.context.timestamp = '20160101010110'
increments = get_incremental_restore_timestamps(self.context)
self.assertEqual(increments, ['20160101010110', '20160101010109', '20160101010108', '20160101010107', '20160101010106', '20160101010105', '20160101010104', '20160101010103', '20160101010102', '20160101010101'])
@patch('gppylib.operations.restore.get_lines_from_file', return_value=[])
def test_get_incremental_restore_timestamps_earliest(self, mock):
self.context.full_dump_timestamp = '20160101010101'
self.context.timestamp = '20160101010100'
increments = get_incremental_restore_timestamps(self.context)
self.assertEqual(increments, [])
@patch('gppylib.operations.restore.get_lines_from_file', side_effect=[['public.t1'], ['public.t1', 'public.t2', 'public.t3'], ['public.t2', 'public.t4']])
def test_create_plan_file_contents_with_file(self, mock):
table_set_from_metadata_file = ['public.t1', 'public.t2', 'public.t3', 'public.t4']
incremental_restore_timestamps = ['20160101010113', '20160101010101', '20160101010111']
latest_full_timestamp = '20160101010110'
expected_output = {'20160101010113': ['public.t1'], '20160101010101': ['public.t2', 'public.t3'], '20160101010111': ['public.t4'], '20160101010110': []}
file_contents = create_plan_file_contents(self.context, table_set_from_metadata_file, incremental_restore_timestamps, latest_full_timestamp)
self.assertEqual(file_contents, expected_output)
def test_create_plan_file_contents_no_file(self):
table_set_from_metadata_file = ['public.t1', 'public.t2', 'public.t3', 'public.t4']
incremental_restore_timestamps = []
latest_full_timestamp = '20160101010110'
expected_output = {'20160101010110': ['public.t1', 'public.t2', 'public.t3', 'public.t4']}
file_contents = create_plan_file_contents(self.context, table_set_from_metadata_file, incremental_restore_timestamps, latest_full_timestamp)
self.assertEqual(file_contents, expected_output)
@patch('gppylib.operations.restore.get_lines_from_file', side_effect=[['public.t1'], ['public.t1', 'public.t2', 'public.t3'], ['public.t2', 'public.t4']])
def test_create_plan_file_contents_no_metadata(self, mock):
table_set_from_metadata_file = []
incremental_restore_timestamps = ['20160101010113', '20160101010101', '20160101010111']
latest_full_timestamp = '20160101010110'
expected_output = {'20160101010101': [], '20160101010113': [], '20160101010111': [], '20160101010110': []}
file_contents = create_plan_file_contents(self.context, table_set_from_metadata_file, incremental_restore_timestamps, latest_full_timestamp)
self.assertEqual(file_contents, expected_output)
@patch('gppylib.operations.restore.get_lines_from_file', side_effect=[['public.t1'], ['public.t1', 'public.t2', 'public.t3'], ['public.t2', 'public.t4']])
@patch('gppylib.operations.restore.restore_file_with_nbu')
def test_create_plan_file_contents_with_nbu(self, mock1, mock2):
self.context.netbackup_service_host = 'mdw'
self.context.netbackup_block_size = '1024'
table_set_from_metadata_file = []
incremental_restore_timestamps = ['20160101010113', '20160101010101', '20160101010111']
latest_full_timestamp = '20160101010110'
expected_output = {'20160101010101': [], '20160101010113': [], '20160101010111': [], '20160101010110': []}
file_contents = create_plan_file_contents(self.context, table_set_from_metadata_file, incremental_restore_timestamps, latest_full_timestamp)
self.assertEqual(file_contents, expected_output)
@patch('gppylib.operations.restore.write_lines_to_file')
def test_write_to_plan_file_default(self, mock1):
plan_file = 'blah'
plan_file_contents = {'20160101010113': ['public.t1'],
'20160101010101': ['public.t2', 'public.t3'],
'20160101010111': ['public.t4']}
expected_output = ['20160101010113:public.t1',
'20160101010111:public.t4',
'20160101010101:public.t2,public.t3']
file_contents = write_to_plan_file(plan_file_contents, plan_file)
self.assertEqual(expected_output, file_contents)
@patch('gppylib.operations.restore.write_lines_to_file')
def test_write_to_plan_file_empty_list(self, mock1):
plan_file = 'blah'
plan_file_contents = {}
expected_output = []
file_contents = write_to_plan_file(plan_file_contents, plan_file)
self.assertEqual(expected_output, file_contents)
@patch('gppylib.operations.restore.write_lines_to_file')
def test_write_to_plan_file_no_plan_file(self, mock1):
plan_file = None
plan_file_contents = {}
with self.assertRaisesRegexp(Exception, 'Invalid plan file .*'):
write_to_plan_file(plan_file_contents, plan_file)
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['public.t1', 'public.t2'])
def test_get_partition_list_default(self, mock):
partition_list = get_partition_list(self.context)
self.assertEqual(partition_list, [('public', 't1'), ('public', 't2')])
@patch('gppylib.operations.restore.get_lines_from_file', return_value=[])
def test_get_partition_list_no_partitions(self, mock):
partition_list = get_partition_list(self.context)
self.assertEqual(partition_list, [])
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['Backup Type: Incremental'])
@patch('os.path.isfile', return_value=True)
def test_is_incremental_restore_default(self, mock1, mock2):
self.assertTrue(is_incremental_restore(self.context))
@patch('gppylib.operations.restore.get_lines_from_file')
@patch('gppylib.operations.restore.check_backup_type', return_value=True)
@patch('os.path.isfile', return_value=True)
def test_is_incremental_restore_bypass_file_incremental(self, mock1, mock2, mock3):
self.assertTrue(is_incremental_restore(self.context))
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['Backup Type: Full'])
def test_is_incremental_restore_full_backup(self, mock1, mock2):
self.assertFalse(is_incremental_restore(self.context))
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.restore.get_lines_from_file')
@patch('gppylib.operations.restore.check_backup_type', return_value=False)
def test_is_incremental_restore_bypass_file_full(self, mock1, mock2, mock3):
self.assertFalse(is_incremental_restore(self.context))
@patch('os.path.isfile', return_value=False)
def test_is_incremental_restore_no_file(self, mock1):
self.assertFalse(is_incremental_restore(self.context))
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['Backup Type: Full'])
def test_is_full_restore_default(self, mock1, mock2):
self.assertTrue(is_full_restore(self.context))
@patch('gppylib.operations.restore.get_lines_from_file')
@patch('gppylib.operations.restore.check_backup_type', return_value=True)
@patch('os.path.isfile', return_value=True)
def test_is_full_restore_bypass_file_full(self, mock1, mock2, mock3):
self.assertTrue(is_full_restore(self.context))
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['Backup Type: Incremental'])
def test_is_full_restore_incremental(self, mock1, mock2):
self.assertFalse(is_full_restore(self.context))
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.restore.get_lines_from_file')
@patch('gppylib.operations.restore.check_backup_type', return_value=False)
def test_is_full_restore_bypass_file_incremental(self, mock1, mock2, mock3):
self.assertFalse(is_full_restore(self.context))
@patch('os.path.isfile', return_value=False)
def test_is_full_restore_no_file(self, mock1):
filename = self.context.generate_filename("report")
with self.assertRaisesRegexp(Exception, 'Report file %s does not exist' % filename):
is_full_restore(self.context)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_schema_only_restore_string_default(self, mock1, mock2):
table_filter_file = None
full_restore_with_filter = False
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" -s %s' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_schema_only_restore_string_no_compression(self, mock1, mock2):
self.context.compress = False
table_filter_file = None
full_restore_with_filter = False
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p -d "testdb" -s %s' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=True)
def test_create_schema_only_restore_string_backup_dir(self, mock1, mock2, mock3):
table_filter_file = None
full_restore_with_filter = False
self.context.report_status_dir = "/data/master/p1/db_dumps/20160101"
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/data/master/p1/db_dumps/20160101 --status=/data/master/p1/db_dumps/20160101 --gp-c -d "testdb" -s %s' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=False)
def test_create_schema_only_restore_string_prefix(self, mock1, mock2, mock3):
self.context.dump_prefix = 'bar_'
table_filter_file = 'filter_file1'
metadata_file = self.context.generate_filename("metadata")
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --prefix=bar_ --gp-f=%s --gp-c -d "testdb" -s %s' % (table_filter_file, metadata_file)
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=False)
def test_create_schema_only_restore_string_no_filter_file(self, mock1, mock2, mock3):
table_filter_file = None
metadata_file = self.context.generate_filename("metadata")
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" -s %s' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_schema_only_restore_string_different_status_dir(self, mock1, mock2):
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = False
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" -s %s' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_schema_only_restore_string_status_dir_with_filter(self, mock1, mock2):
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = True
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" -s %s -P' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_schema_only_restore_string_with_nbu(self, mock1, mock2):
table_filter_file = None
full_restore_with_filter = False
self.context.netbackup_service_host = "mdw"
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" --netbackup-service-host=mdw -s %s' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_schema_only_restore_string_with_ddboost(self, mock1, mock2):
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = True
self.context.ddboost = True
self.context.dump_dir = '/backup/DCA-35'
metadata_file = self.context.generate_filename("metadata")
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=/backup/DCA-35/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" --ddboost -s %s -P' % metadata_file
restore_line = self.restore.create_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_post_data_schema_only_restore_string_default(self, mock1, mock2):
table_filter_file = None
full_restore_with_filter = True
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" -P'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=True)
def test_create_post_data_schema_only_restore_string_no_filter(self, mock1, mock2, mock3):
table_filter_file = None
full_restore_with_filter = False
self.context.report_status_dir="/data/master/p1/db_dumps/20160101"
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/data/master/p1/db_dumps/20160101 --status=/data/master/p1/db_dumps/20160101 --gp-c -d "testdb"'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=False)
def test_create_post_data_schema_only_restore_string_with_prefix(self, mock1, mock2, mock3):
self.context.dump_prefix = 'bar_'
table_filter_file = None
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --prefix=bar_ --gp-c -d "testdb"'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=False)
def test_create_post_data_schema_only_restore_string_with_prefix_and_filter(self, mock1, mock2, mock3):
self.context.dump_prefix = 'bar_'
table_filter_file = 'filter_file1'
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --prefix=bar_ --gp-f=%s --gp-c -d "testdb"' % (table_filter_file)
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=False)
def test_create_post_data_schema_only_restore_string_no_backup_dir(self, mock1, mock2, mock3):
table_filter_file = None
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb"'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_post_data_schema_only_restore_string_different_status_dir(self, mock1, mock2):
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb"'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_post_data_schema_only_restore_string_status_dir_and_filter(self, mock1, mock2):
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = True
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" -P'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_post_data_schema_only_restore_string_with_ddboost(self, mock1, mock2):
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = True
self.context.ddboost = True
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" --ddboost -P'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_post_data_schema_only_restore_string_with_nbu(self, mock1, mock2):
table_filter_file = None
full_restore_with_filter = True
self.context.backup_dir = None
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = 1024
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" --netbackup-service-host=mdw --netbackup-block-size=1024 -P'
restore_line = self.restore.create_post_data_schema_only_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_build_gpdbrestore_cmd_line_default(self, mock1, mock2):
ts = '20160101010101'
self.context.backup_dir = None
expected_output = 'gpdbrestore -t 20160101010101 --table-file foo -a -v --noplan --noanalyze --noaostats --no-validate-table-name'
restore_line = _build_gpdbrestore_cmd_line(self.context, ts, 'foo')
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_build_gpdbrestore_cmd_line_backup_dir(self, mock1, mock2):
ts = '20160101010101'
self.context.backup_dir = '/tmp'
expected_output = 'gpdbrestore -t 20160101010101 --table-file foo -a -v --noplan --noanalyze --noaostats --no-validate-table-name -u /tmp'
restore_line = _build_gpdbrestore_cmd_line(self.context, ts, 'foo')
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_build_gpdbrestore_cmd_line_report_status_dir(self, mock1, mock2):
ts = '20160101010101'
self.context.backup_dir = None
self.context.report_status_dir = '/tmp'
expected_output = 'gpdbrestore -t 20160101010101 --table-file foo -a -v --noplan --noanalyze --noaostats --no-validate-table-name --report-status-dir=/tmp'
restore_line = _build_gpdbrestore_cmd_line(self.context, ts, 'foo')
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_build_gpdbrestore_cmd_line_redirected_restore(self, mock1, mock2):
ts = '20160101010101'
self.context.backup_dir = None
self.context.redirected_restore_db = "redb"
expected_output = 'gpdbrestore -t 20160101010101 --table-file foo -a -v --noplan --noanalyze --noaostats --no-validate-table-name --redirect=redb'
restore_line = _build_gpdbrestore_cmd_line(self.context, ts, 'foo')
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_build_gpdbrestore_cmd_line_with_ddboost(self, mock1, mock2):
ts = '20160101010101'
self.context.backup_dir = None
self.context.ddboost = True
self.context.report_status_dir = '/tmp'
expected_output = 'gpdbrestore -t 20160101010101 --table-file foo -a -v --noplan --noanalyze --noaostats --no-validate-table-name --report-status-dir=/tmp --ddboost'
restore_line = _build_gpdbrestore_cmd_line(self.context, ts, 'foo')
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.dbconn.DbURL')
@patch('gppylib.operations.restore.dbconn.connect')
@patch('gppylib.operations.restore.execSQL')
@patch('gppylib.operations.restore.RestoreDatabase.get_full_tables_in_schema', return_value=['"public"."tablename1"', '"public"."tablename2"', '"public"."tablename3"'])
def test_truncate_restore_tables_restore_schemas(self, mock1, mock2, mock3, mock4):
self.context.restore_schemas = ['public']
self.restore.truncate_restore_tables()
calls = [call(ANY,'Truncate "public"."tablename1"'), call(ANY,'Truncate "public"."tablename2"'), call(ANY,'Truncate "public"."tablename3"')]
mock2.assert_has_calls(calls)
@patch('gppylib.operations.restore.dbconn.DbURL')
@patch('gppylib.operations.restore.dbconn.connect')
@patch('gppylib.operations.restore.escape_string', side_effect=['public', 'ao1', 'testschema', 'heap1'])
@patch('gppylib.operations.restore.execSQL')
@patch('gppylib.operations.restore.execSQLForSingleton', return_value='t')
def test_truncate_restore_tables_restore_tables(self, mock1, mock2, mock3, mock4, mock5):
self.context.restore_tables = ['public.ao1', 'testschema.heap1']
self.restore.truncate_restore_tables()
calls = [call(ANY,'Truncate "public"."ao1"'), call(ANY,'Truncate "testschema"."heap1"')]
mock2.assert_has_calls(calls)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_no_filter_file(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = None
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_default(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = '/tmp/foo'
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-f=/tmp/foo --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_with_ddboost(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = None
full_restore_with_filter = False
self.context.ddboost = True
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" --ddboost -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_different_status_dir(self, mock1, mock2):
self.context.no_plan = True
self.context.report_status_dir = '/tmp'
table_filter_file = None
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_no_filter(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = None
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_with_filter_file(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = '/tmp/foo'
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-f=/tmp/foo --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_ddboost_and_prefix(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = None
self.context.dump_prefix = 'bar_'
full_restore_with_filter = False
self.context.ddboost = True
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --prefix=bar_ --gp-c -d "testdb" --ddboost -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
@patch('gppylib.operations.backup_utils.Context.backup_dir_is_writable', return_value=True)
def test_create_restore_string_backup_dir(self, mock1, mock2, mock3):
self.context.no_plan = True
table_filter_file = None
self.context.backup_dir = '/tmp'
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=/tmp/db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp/db_dumps/20160101 --status=/tmp/db_dumps/20160101 --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_no_ao_stats(self, mock1, mock2):
self.context.no_plan = True
self.context.no_ao_stats = True
table_filter_file = None
self.context.report_status_dir = '/tmp'
self.context.backup_dir = '/foo'
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=/foo/db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" -a --gp-nostats'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_with_plan(self, mock1, mock2):
table_filter_file = None
self.context.report_status_dir = '/tmp'
self.context.backup_dir = '/foo'
full_restore_with_filter = True
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=/foo/db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_with_nbu(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = None
self.context.report_status_dir = '/tmp'
self.context.backup_dir = '/foo'
self.context.netbackup_service_host = "mdw"
full_restore_with_filter = False
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=/foo/db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-r=/tmp --status=/tmp --gp-c -d "testdb" --netbackup-service-host=mdw -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter)
self.assertEqual(restore_line, expected_output)
# Test to verify the command line for gp_restore
@patch('gppylib.operations.restore.socket.gethostname', return_value='host')
@patch('gppylib.operations.restore.getpass.getuser', return_value='user')
def test_create_restore_string_change_schema(self, mock1, mock2):
self.context.no_plan = True
table_filter_file = None
full_restore_with_filter = False
change_schema_file = 'newschema'
expected_output = 'gp_restore -i -h host -p 5432 -U user --gp-d=db_dumps/20160101 --gp-i --gp-k=20160101010101 --gp-l=p --gp-c -d "testdb" --change-schema-file=newschema -a'
restore_line = self.restore.create_standard_restore_string(table_filter_file, full_restore_with_filter, change_schema_file)
self.assertEqual(restore_line, expected_output)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo')
def test_get_plan_file_contents_no_file(self, mock1):
with self.assertRaisesRegexp(Exception, 'Plan file foo does not exist'):
get_plan_file_contents(self.context)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo')
@patch('gppylib.operations.restore.get_lines_from_file', return_value=[])
@patch('os.path.isfile', return_value=True)
def test_get_plan_file_contents_empty_file(self, mock1, mock2, mock3):
with self.assertRaisesRegexp(Exception, 'Plan file foo has no contents'):
get_plan_file_contents(self.context)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo')
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['20160101010101:t1,t2', '20160101010111:t3,t4', '20160101121210:t5,t6,t7'])
@patch('os.path.isfile', return_value=True)
def test_get_plan_file_contents_default(self, mock1, mock2, mock3):
expected_output = [('20160101010101','t1,t2'), ('20160101010111','t3,t4'), ('20160101121210','t5,t6,t7')]
output = get_plan_file_contents(self.context)
self.assertEqual(output, expected_output)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo')
@patch('gppylib.operations.restore.get_lines_from_file', return_value=['20160101010101:', '20160101010111', '20160101121210:'])
@patch('os.path.isfile', return_value=True)
def test_get_plan_file_contents_invalid_format(self, mock1, mock2, mock3):
with self.assertRaisesRegexp(Exception, 'Invalid plan file format'):
get_plan_file_contents(self.context)
@patch('gppylib.operations.restore.get_plan_file_contents', return_value=[('20160101010101', 't1,t2'), ('20160101010111', 't3,t4'), ('20160101121210', 't5,t6,t7')])
@patch('gppylib.operations.restore.Command.run')
@patch('gppylib.operations.restore.update_ao_statistics')
def test_restore_incremental_data_only_default(self, mock1, mock2, mock3):
results = self.restore.restore_incremental_data_only()
self.assertTrue(results)
@patch('gppylib.operations.restore.get_plan_file_contents', return_value=[('20160101010101', ''), ('20160101010111', ''), ('20160101121210', '')])
@patch('os.path.isfile', return_value=True)
@patch('gppylib.operations.restore.update_ao_statistics')
def test_restore_incremental_data_only_no_tables(self, mock1, mock2, mock3):
with self.assertRaisesRegexp(Exception, 'There were no tables to restore. Check the plan file contents for restore timestamp 20160101010101'):
self.restore.restore_incremental_data_only()
@patch('gppylib.operations.restore.get_plan_file_contents', return_value=[('20160101010101', 't1,t2'), ('20160101010111', 't3,t4'), ('20160101121210', 't5,t6,t7')])
@patch('gppylib.operations.restore.Command.run', side_effect=Exception('Error executing gpdbrestore'))
@patch('gppylib.operations.restore.update_ao_statistics')
def test_restore_incremental_data_only_error(self, mock1, mock2, mock3):
with self.assertRaisesRegexp(Exception, 'Error executing gpdbrestore'):
self.restore.restore_incremental_data_only()
def test_create_filter_file_no_tables(self):
self.context.restore_tables = None
self.assertEqual(self.restore.create_filter_file(), None)
@patch('gppylib.operations.restore.get_all_segment_addresses', return_value=['host1'])
@patch('gppylib.operations.restore.scp_file_to_hosts')
def test_create_filter_file_default(self, m1, m2):
self.context.restore_tables = ['public.ao1', 'testschema.heap1']
m = mock_open()
with patch('tempfile.NamedTemporaryFile', m, create=True):
fname = self.restore.create_filter_file()
result = m()
self.assertEqual(len(self.context.restore_tables), len(result.write.call_args_list))
for i in range(len(self.context.restore_tables)):
self.assertEqual(call(self.context.restore_tables[i]+'\n'), result.write.call_args_list[i])
@patch('gppylib.operations.restore.get_lines_from_file', return_value = ['public.t1', 'public.t2', 'public.t3'])
@patch('os.path.isfile', return_value = True)
def test_get_restore_tables_from_table_file_default(self, mock1, mock2):
table_file = '/foo'
expected_result = ['public.t1', 'public.t2', 'public.t3']
result = get_restore_tables_from_table_file(table_file)
self.assertEqual(expected_result, result)
@patch('os.path.isfile', return_value = False)
def test_get_restore_tables_from_table_file_no_file(self, mock):
table_file = '/foo'
expected_result = ['public.t1', 'public.t2', 'public.t3']
with self.assertRaisesRegexp(Exception, 'Table file does not exist'):
result = get_restore_tables_from_table_file(table_file)
def test_check_table_name_format_and_duplicate_missing_schema(self):
table_list = ['publicao1', 'public.ao2']
with self.assertRaisesRegexp(Exception, 'No schema name supplied'):
check_table_name_format_and_duplicate(table_list, None)
def test_check_table_name_format_and_duplicate_default(self):
table_list = ['public.ao1', 'public.ao2']
check_table_name_format_and_duplicate(table_list, [])
def test_check_table_name_format_and_duplicate_no_tables(self):
table_list = []
schema_list = []
check_table_name_format_and_duplicate(table_list, schema_list)
def test_check_table_name_format_and_duplicate_duplicate_tables(self):
table_list = ['public.ao1', 'public.ao1']
resolved_list, _ = check_table_name_format_and_duplicate(table_list, [])
self.assertEqual(resolved_list, ['public.ao1'])
def test_check_table_name_format_and_duplicate_funny_chars(self):
table_list = [' `"@#$%^&( )_|:;<>?/-+={}[]*1Aa . `"@#$%^&( )_|:;<>?/-+={}[]*1Aa ', 'schema.ao1']
schema_list = ['schema']
resolved_table_list, resolved_schema_list = check_table_name_format_and_duplicate(table_list, schema_list)
self.assertEqual(resolved_table_list, [' `"@#$%^&( )_|:;<>?/-+={}[]*1Aa . `"@#$%^&( )_|:;<>?/-+={}[]*1Aa '])
self.assertEqual(resolved_schema_list, ['schema'])
def test_validate_tablenames_exist_in_dump_file_no_tables(self):
dumped_tables = []
table_list = ['schema.ao']
with self.assertRaisesRegexp(Exception, 'No dumped tables to restore.'):
validate_tablenames_exist_in_dump_file(table_list, dumped_tables)
def test_validate_tablenames_exist_in_dump_file_one_table(self):
dumped_tables = [('schema', 'ao', 'gpadmin')]
table_list = ['schema.ao']
validate_tablenames_exist_in_dump_file(table_list, dumped_tables)
def test_validate_tablenames_exist_in_dump_file_nonexistent_table(self):
dumped_tables = [('schema', 'ao', 'gpadmin')]
table_list = ['schema.ao', 'schema.co']
with self.assertRaisesRegexp(Exception, "Tables \['schema.co'\] not found in backup"):
validate_tablenames_exist_in_dump_file(table_list, dumped_tables)
def test_get_restore_table_list_default(self):
table_list = ['public.ao_table', 'public.ao_table2', 'public.co_table', 'public.heap_table']
restore_tables = ['public.ao_table2', 'public.co_table']
m = mock_open()
with patch('tempfile.NamedTemporaryFile', m, create=True):
result = get_restore_table_list(table_list, restore_tables)
result = m()
self.assertEqual(len(restore_tables), len(result.write.call_args_list))
for i in range(len(restore_tables)):
self.assertEqual(call(restore_tables[i]+'\n'), result.write.call_args_list[i])
def test_get_restore_table_list_no_restore_tables(self):
table_list = ['public.ao_table', 'public.ao_table2', 'public.co_table', 'public.heap_table']
restore_tables = None
m = mock_open()
with patch('tempfile.NamedTemporaryFile', m, create=True):
result = get_restore_table_list(table_list, restore_tables)
result = m()
self.assertEqual(len(table_list), len(result.write.call_args_list))
for i in range(len(table_list)):
self.assertEqual(call(table_list[i]+'\n'), result.write.call_args_list[i])
def test_get_restore_table_list_extra_restore_tables(self):
table_list = ['public.ao_table', 'public.ao_table2', 'public.co_table', 'public.heap_table']
restore_tables = ['public.ao_table2', 'public.co_table', 'public.ao_table3']
expected = ['public.ao_table2', 'public.co_table']
m = mock_open()
with patch('tempfile.NamedTemporaryFile', m, create=True):
result = get_restore_table_list(table_list, restore_tables)
result = m()
self.assertEqual(len(expected), len(result.write.call_args_list))
for i in range(len(expected)):
self.assertEqual(call(expected[i]+'\n'), result.write.call_args_list[i])
def test_validate_restore_tables_list_default(self):
plan_file_contents = [('20160101121213', 'public.t1'), ('20160101010101', 'public.t2,public.t3'), ('20160101010101', 'public.t4')]
restore_tables = ['public.t1', 'public.t2']
validate_restore_tables_list(plan_file_contents, restore_tables)
def test_validate_restore_tables_list_invalid_tables(self):
plan_file_contents = [('20160101121213', 'public.t1'), ('20160101010101', 'public.t2,public.t3'), ('20160101010101', 'public.t4')]
restore_tables = ['public.t5', 'public.t2']
with self.assertRaisesRegexp(Exception, 'Invalid tables for -T option: The following tables were not found in plan file'):
validate_restore_tables_list(plan_file_contents, restore_tables)
@patch('os.path.exists', return_value=False)
def test_restore_global_no_file(self, mock):
with self.assertRaisesRegexp(Exception, 'Unable to locate global file /data/master/p1/db_dumps/20160101/gp_global_-1_1_20160101010101 in dump set'):
self.restore._restore_global(self.context)
@patch('os.path.exists', return_value=True)
@patch('gppylib.commands.gp.Psql.run')
def test_restore_global_default(self, mock1, mock2):
self.restore._restore_global(self.context) # should not error out
@patch('gppylib.operations.restore.escape_string', return_value='schema.table')
@patch('gppylib.operations.restore.execSQLForSingleton')
@patch('pygresql.pgdb.pgdbCnx.commit')
def test_update_ao_stat_func_default(self, m1, m2, m3):
conn = None
ao_schema = 'schema'
ao_table = 'table'
counter = 1
batch_size = 1000
update_ao_stat_func(self.context, conn, ao_schema, ao_table, counter, batch_size)
@patch('gppylib.operations.restore.escape_string', return_value='schema.table')
@patch('gppylib.operations.restore.execSQLForSingleton')
@patch('pygresql.pgdb.pgdbCnx.commit')
def test_update_ao_stat_func_near_batch_size(self, m1, m2, m3):
conn = None
ao_table = 'table'
ao_schema = 'schema'
counter = 999
batch_size = 1000
update_ao_stat_func(self.context, conn, ao_schema, ao_table, counter, batch_size)
@patch('gppylib.operations.restore.escape_string', return_value='schema.table')
@patch('gppylib.operations.restore.execSQLForSingleton')
@patch('pygresql.pgdb.pgdbCnx.commit')
def test_update_ao_stat_func_equal_batch_size(self, m1, m2, m3):
conn = None
ao_table = 'table'
ao_schema = 'schema'
counter = 1000
batch_size = 1000
with self.assertRaisesRegexp(AttributeError, "'NoneType' object has no attribute 'commit'"):
update_ao_stat_func(self.context, conn, ao_schema, ao_table, counter, batch_size)
@patch('gppylib.operations.restore.escape_string', return_value='schema.table')
@patch('gppylib.operations.restore.execSQLForSingleton')
@patch('pygresql.pgdb.pgdbCnx.commit')
def test_update_ao_stat_func_over_batch_size(self, m1, m2, m3):
conn = None
ao_table = 'table'
ao_schema = 'schema'
counter = 1001
batch_size = 1000
update_ao_stat_func(self.context, conn, ao_schema, ao_table, counter, batch_size)
@patch('gppylib.operations.restore.escape_string', return_value='schema.table')
@patch('gppylib.operations.restore.execSQLForSingleton')
@patch('pygresql.pgdb.pgdbCnx.commit')
def test_update_ao_stat_func_double_batch_size(self, m1, m2, m3):
conn = None
ao_table = 'table'
ao_schema = 'schema'
counter = 2000
batch_size = 1000
with self.assertRaisesRegexp(AttributeError, "'NoneType' object has no attribute 'commit'"):
update_ao_stat_func(self.context, conn, ao_schema, ao_table, counter, batch_size)
@patch('gppylib.operations.restore.execute_sql', return_value=[['t1', 'public']])
@patch('gppylib.operations.restore.dbconn.connect')
@patch('gppylib.operations.restore.update_ao_stat_func')
def test_update_ao_statistics_default(self, m1, m2, m3):
restored_tables = []
update_ao_statistics(self.context, restored_tables)
update_ao_statistics(self.context, restored_tables=['public.t1'], restored_schema=[], restore_all=False)
update_ao_statistics(self.context, restored_tables=[], restored_schema=['public'], restore_all=False)
update_ao_statistics(self.context, restored_tables=[], restored_schema=[], restore_all=True)
def test_generate_restored_tables_no_table(self):
results = [['t1','public'], ['t2', 'public'], ['foo', 'bar']]
tables = generate_restored_tables(results, restored_tables=[], restored_schema=[], restore_all=False)
self.assertEqual(tables, set())
def test_generate_restored_tables_specified_table(self):
results = [['t1','public'], ['t2', 'public'], ['foo', 'bar']]
tables = generate_restored_tables(results, restored_tables=['public.t1'], restored_schema=[], restore_all=False)
self.assertEqual(tables, set([('public','t1')]))
def test_generate_restored_tables_specified_schema(self):
results = [['t1','public'], ['t2', 'public'], ['foo', 'bar']]
tables = generate_restored_tables(results, restored_tables=[], restored_schema=['public'], restore_all=False)
self.assertEqual(tables, set([('public','t1'), ('public', 't2')]))
def test_generate_restored_tables_full_restore(self):
results = [['t1','public'], ['t2', 'public'], ['foo', 'bar']]
tables = generate_restored_tables(results, restored_tables=[], restored_schema=[], restore_all=True)
self.assertEqual(tables, set([('public','t1'), ('public', 't2'), ('bar', 'foo')]))
@patch('gppylib.operations.restore.dbconn.connect')
@patch('gppylib.db.dbconn.execSQLForSingleton', return_value=5)
def test_check_gp_toolkit_true(self, m1, m2):
self.assertTrue(self.restore.check_gp_toolkit())
@patch('gppylib.operations.restore.dbconn.connect')
@patch('gppylib.db.dbconn.execSQLForSingleton', return_value=0)
def test_check_gp_toolkit_false(self, m1, m2):
self.assertFalse(self.restore.check_gp_toolkit())
@patch('gppylib.operations.backup_utils.dbconn.DbURL')
@patch('gppylib.operations.backup_utils.dbconn.connect')
@patch('gppylib.operations.restore.execSQL')
def test_analyze_restore_tables_default(self, mock1, mock2, mock3):
self.context.restore_tables = ['public.t1', 'public.t2']
self.restore._analyze_restore_tables()
@patch('gppylib.operations.restore.execSQL', side_effect=Exception('analyze failed'))
@patch('gppylib.operations.backup_utils.dbconn.DbURL')
@patch('gppylib.operations.backup_utils.dbconn.connect')
def test_analyze_restore_tables_analyze_failed(self, mock1, mock2, mock3):
self.context.restore_tables = ['public.t1', 'public.t2']
self.assertRaises(Exception, self.restore._analyze_restore_tables)
@patch('gppylib.operations.backup_utils.execSQL')
@patch('gppylib.operations.backup_utils.dbconn.DbURL', side_effect=Exception('Failed'))
@patch('gppylib.operations.backup_utils.dbconn.connect')
def test_analyze_restore_tables_connection_failed(self, mock1, mock2, mock3):
self.context.restore_tables = ['public.t1', 'public.t2']
self.assertRaises(Exception, self.restore._analyze_restore_tables)
@patch('gppylib.operations.backup_utils.dbconn.DbURL')
@patch('gppylib.operations.backup_utils.dbconn.connect')
@patch('gppylib.operations.restore.execSQL')
def test_analyze_restore_tables_three_batches(self, mock1, mock2, mock3):
self.context.restore_tables = ['public.t%d' % i for i in range(3002)]
expected_batch_count = 3
batch_count = self.restore._analyze_restore_tables()
self.assertEqual(batch_count, expected_batch_count)
@patch('gppylib.operations.backup_utils.dbconn.DbURL')
@patch('gppylib.operations.backup_utils.dbconn.connect')
@patch('gppylib.operations.backup_utils.dbconn.execSQL')
def test_analyze_restore_tables_change_schema(self, mock1, mock2, mock3):
self.context.restore_tables = ['public.t1', 'public.t2']
self.context.change_schema = 'newschema'
self.restore._analyze_restore_tables()
@patch('gppylib.operations.restore.execSQL', side_effect=Exception())
@patch('gppylib.operations.backup_utils.dbconn.DbURL')
@patch('gppylib.operations.backup_utils.dbconn.connect')
def test_analyze_restore_tables_execSQL_failed(self, mock1, mock2, mock3):
self.context.target_db = 'db1'
self.context.restore_tables = ['public.t1', 'public.t2']
self.assertRaisesRegexp(Exception, 'Issue with \'ANALYZE\' of restored table \'"public"."t1"\' in \'db1\' database', self.restore._analyze_restore_tables)
@patch('gppylib.operations.restore.restore_file_with_nbu')
def test_restore_state_files_with_nbu_default(self, mock1):
self.context.netbackup_service_host = "mdw"
restore_state_files_with_nbu(self.context)
self.assertEqual(mock1.call_count, 3)
calls = ["ao", "co", "last_operation"]
for i in range(len(mock1.call_args_list)):
self.assertEqual(mock1.call_args_list[i], call(self.context, calls[i]))
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_default(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
cmdStr = "gp_bsa_restore_agent --netbackup-service-host mdw --netbackup-filename /tmp/foo_schema > /tmp/foo_schema"
with patch.object(Command, '__init__', return_value=None) as cmd:
restore_file_with_nbu(self.context, "schema")
cmd.assert_called_with("restoring metadata files to master", cmdStr)
self.assertEqual(mock2.call_count, 1)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_no_filetype(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = 100
cmdStr = "gp_bsa_restore_agent --netbackup-service-host mdw --netbackup-block-size 100 --netbackup-filename /tmp/foo_schema > /tmp/foo_schema"
with patch.object(Command, '__init__', return_value=None) as cmd:
restore_file_with_nbu(self.context, path="/tmp/foo_schema")
cmd.assert_called_with("restoring metadata files to master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_no_path(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = 100
cmdStr = "gp_bsa_restore_agent --netbackup-service-host mdw --netbackup-block-size 100 --netbackup-filename /tmp/foo_schema > /tmp/foo_schema"
with patch.object(Command, '__init__', return_value=None) as cmd:
restore_file_with_nbu(self.context, "schema")
cmd.assert_called_with("restoring metadata files to master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_both_args(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, 'Cannot supply both a file type and a file path to restore_file_with_nbu'):
restore_file_with_nbu(self.context, "schema", "/tmp/foo_schema")
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_neither_arg(self, mock1, mock2):
with self.assertRaisesRegexp(Exception, 'Cannot call restore_file_with_nbu with no type or path argument'):
restore_file_with_nbu(self.context)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_block_size(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_block_size = 1024
cmdStr = "gp_bsa_restore_agent --netbackup-service-host mdw --netbackup-block-size 1024 --netbackup-filename /tmp/foo_schema > /tmp/foo_schema"
with patch.object(Command, '__init__', return_value=None) as cmd:
restore_file_with_nbu(self.context, "schema")
cmd.assert_called_with("restoring metadata files to master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_keyword(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
self.context.netbackup_keyword = "foo"
cmdStr = "gp_bsa_restore_agent --netbackup-service-host mdw --netbackup-filename /tmp/foo_schema > /tmp/foo_schema"
with patch.object(Command, '__init__', return_value=None) as cmd:
restore_file_with_nbu(self.context, "schema")
cmd.assert_called_with("restoring metadata files to master", cmdStr)
@patch('gppylib.operations.backup_utils.Context.generate_filename', return_value='/tmp/foo_schema')
@patch('gppylib.commands.base.Command.run')
def test_restore_file_with_nbu_segment(self, mock1, mock2):
self.context.netbackup_service_host = "mdw"
cmdStr = "gp_bsa_restore_agent --netbackup-service-host mdw --netbackup-filename /tmp/foo_schema > /tmp/foo_schema"
with patch.object(Command, '__init__', return_value=None) as cmd:
restore_file_with_nbu(self.context, "schema", hostname="sdw")
from gppylib.commands.base import REMOTE
cmd.assert_called_with("restoring metadata files to segment", cmdStr, ctxt=REMOTE, remoteHost="sdw")
@patch('gppylib.gparray.GpDB.getSegmentHostName', return_value='sdw')
def test_restore_config_files_with_nbu_default(self, mock1):
with patch('gppylib.operations.restore.restore_file_with_nbu', side_effect=my_counter) as nbu_mock:
global i
i = 0
self.context.netbackup_service_host = "mdw"
self.context.netbackup_policy = "test_policy"
self.context.netbackup_schedule = "test_schedule"
restore_config_files_with_nbu(self.context)
args, _ = nbu_mock.call_args_list[0]
self.assertEqual(args[1], "master_config")
for id, seg in enumerate(mock1.mock_segs):
self.assertEqual(seg.get_active_primary.call_count, 1)
self.assertEqual(seg.get_primary_dbid.call_count, 1)
args, _ = nbu_mock.call_args_list[id]
self.assertEqual(i, 3)
if __name__ == '__main__':
unittest.main()
i = 0
def my_counter(*args, **kwargs):
global i
i += 1
return Mock()
| {
"content_hash": "f72bef3c3b2bcfb463f31046b4d6f6d5",
"timestamp": "",
"source": "github",
"line_count": 1138,
"max_line_length": 255,
"avg_line_length": 59.61775043936731,
"alnum_prop": 0.6807281302970005,
"repo_name": "yuanzhao/gpdb",
"id": "2844f0f44e6e700b4a92c0c44477f61d4d4ecfa3",
"size": "67945",
"binary": false,
"copies": "10",
"ref": "refs/heads/master",
"path": "gpMgmt/bin/gppylib/operations/test/unit/test_unit_restore.py",
"mode": "33261",
"license": "apache-2.0",
"language": [
{
"name": "Assembly",
"bytes": "5665"
},
{
"name": "Batchfile",
"bytes": "11492"
},
{
"name": "C",
"bytes": "35389544"
},
{
"name": "C++",
"bytes": "3974251"
},
{
"name": "CMake",
"bytes": "17118"
},
{
"name": "CSS",
"bytes": "7407"
},
{
"name": "Csound Score",
"bytes": "179"
},
{
"name": "DTrace",
"bytes": "1150"
},
{
"name": "Fortran",
"bytes": "14777"
},
{
"name": "GDB",
"bytes": "576"
},
{
"name": "Gherkin",
"bytes": "744167"
},
{
"name": "HTML",
"bytes": "191406"
},
{
"name": "Java",
"bytes": "268244"
},
{
"name": "JavaScript",
"bytes": "23969"
},
{
"name": "Lex",
"bytes": "196444"
},
{
"name": "M4",
"bytes": "103687"
},
{
"name": "Makefile",
"bytes": "436624"
},
{
"name": "PLSQL",
"bytes": "242673"
},
{
"name": "PLpgSQL",
"bytes": "5494374"
},
{
"name": "Perl",
"bytes": "3895289"
},
{
"name": "Perl 6",
"bytes": "14956"
},
{
"name": "Python",
"bytes": "8758314"
},
{
"name": "Roff",
"bytes": "51338"
},
{
"name": "Ruby",
"bytes": "26724"
},
{
"name": "SQLPL",
"bytes": "3827063"
},
{
"name": "Shell",
"bytes": "559654"
},
{
"name": "XS",
"bytes": "8405"
},
{
"name": "XSLT",
"bytes": "5779"
},
{
"name": "Yacc",
"bytes": "494720"
}
],
"symlink_target": ""
} |
from django.core.mail import send_mail
import string
import random
from django.conf import settings
def createEmailBody(request,aff):
tmp = """Dear """+request.user.get_full_name()+"""
Your confirmation code to confirm your affiliation at """ +aff.org+""" is:
"""+aff.token+"""
Use this code to confirm your affiliation at
"""+settings.AFFILIATIONS_URL+"""
Sincerely,
Admin
"""
return tmp
def sendEmail(request,aff):
body = createEmailBody(request,aff)
email = aff.account+"@"+aff.org
    if not settings.AFFILIATIONS_CODE_SUBJECT:
        subject = "Confirmation Code"
    else:
        subject = settings.AFFILIATIONS_CODE_SUBJECT
    fromemail = settings.AFFILIATIONS_FROM_EMAIL
send_mail(subject, body, fromemail,[email], fail_silently=False)
def updateAffiliationCode(aff):
token=''.join([random.choice(string.ascii_letters + string.digits) for n in xrange(12)])
aff.token=token
aff.save() | {
"content_hash": "c02da2e3c3f6c3824ecfb08b1edc55d8",
"timestamp": "",
"source": "github",
"line_count": 33,
"max_line_length": 89,
"avg_line_length": 27.151515151515152,
"alnum_prop": 0.7310267857142857,
"repo_name": "salimm/django-affiliations",
"id": "d49519e806e6d84b8090506a76906a8c79f1388f",
"size": "896",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "build/lib/affiliations/utils.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "JavaScript",
"bytes": "13379"
},
{
"name": "Python",
"bytes": "13563"
}
],
"symlink_target": ""
} |
import types
import weakref
from .lock import allocate_lock
from .error import CDefError, VerificationError, VerificationMissing
# type qualifiers
Q_CONST = 0x01
Q_RESTRICT = 0x02
Q_VOLATILE = 0x04
def qualify(quals, replace_with):
if quals & Q_CONST:
replace_with = ' const ' + replace_with.lstrip()
if quals & Q_VOLATILE:
replace_with = ' volatile ' + replace_with.lstrip()
if quals & Q_RESTRICT:
# It seems that __restrict is supported by gcc and msvc.
# If you hit some different compiler, add a #define in
# _cffi_include.h for it (and in its copies, documented there)
replace_with = ' __restrict ' + replace_with.lstrip()
return replace_with
class BaseTypeByIdentity(object):
is_array_type = False
is_raw_function = False
def get_c_name(self, replace_with='', context='a C file', quals=0):
result = self.c_name_with_marker
assert result.count('&') == 1
# some logic duplication with ffi.getctype()... :-(
replace_with = replace_with.strip()
if replace_with:
if replace_with.startswith('*') and '&[' in result:
replace_with = '(%s)' % replace_with
elif not replace_with[0] in '[(':
replace_with = ' ' + replace_with
replace_with = qualify(quals, replace_with)
result = result.replace('&', replace_with)
if '$' in result:
raise VerificationError(
"cannot generate '%s' in %s: unknown type name"
% (self._get_c_name(), context))
return result
def _get_c_name(self):
return self.c_name_with_marker.replace('&', '')
def has_c_name(self):
return '$' not in self._get_c_name()
def is_integer_type(self):
return False
def get_cached_btype(self, ffi, finishlist, can_delay=False):
try:
BType = ffi._cached_btypes[self]
except KeyError:
BType = self.build_backend_type(ffi, finishlist)
BType2 = ffi._cached_btypes.setdefault(self, BType)
assert BType2 is BType
return BType
def __repr__(self):
return '<%s>' % (self._get_c_name(),)
def _get_items(self):
return [(name, getattr(self, name)) for name in self._attrs_]
class BaseType(BaseTypeByIdentity):
def __eq__(self, other):
return (self.__class__ == other.__class__ and
self._get_items() == other._get_items())
def __ne__(self, other):
return not self == other
def __hash__(self):
return hash((self.__class__, tuple(self._get_items())))
class VoidType(BaseType):
_attrs_ = ()
def __init__(self):
self.c_name_with_marker = 'void&'
def build_backend_type(self, ffi, finishlist):
return global_cache(self, ffi, 'new_void_type')
void_type = VoidType()
class BasePrimitiveType(BaseType):
def is_complex_type(self):
return False
class PrimitiveType(BasePrimitiveType):
_attrs_ = ('name',)
ALL_PRIMITIVE_TYPES = {
'char': 'c',
'short': 'i',
'int': 'i',
'long': 'i',
'long long': 'i',
'signed char': 'i',
'unsigned char': 'i',
'unsigned short': 'i',
'unsigned int': 'i',
'unsigned long': 'i',
'unsigned long long': 'i',
'float': 'f',
'double': 'f',
'long double': 'f',
'float _Complex': 'j',
'double _Complex': 'j',
'_Bool': 'i',
# the following types are not primitive in the C sense
'wchar_t': 'c',
'char16_t': 'c',
'char32_t': 'c',
'int8_t': 'i',
'uint8_t': 'i',
'int16_t': 'i',
'uint16_t': 'i',
'int32_t': 'i',
'uint32_t': 'i',
'int64_t': 'i',
'uint64_t': 'i',
'int_least8_t': 'i',
'uint_least8_t': 'i',
'int_least16_t': 'i',
'uint_least16_t': 'i',
'int_least32_t': 'i',
'uint_least32_t': 'i',
'int_least64_t': 'i',
'uint_least64_t': 'i',
'int_fast8_t': 'i',
'uint_fast8_t': 'i',
'int_fast16_t': 'i',
'uint_fast16_t': 'i',
'int_fast32_t': 'i',
'uint_fast32_t': 'i',
'int_fast64_t': 'i',
'uint_fast64_t': 'i',
'intptr_t': 'i',
'uintptr_t': 'i',
'intmax_t': 'i',
'uintmax_t': 'i',
'ptrdiff_t': 'i',
'size_t': 'i',
'ssize_t': 'i',
}
def __init__(self, name):
assert name in self.ALL_PRIMITIVE_TYPES
self.name = name
self.c_name_with_marker = name + '&'
def is_char_type(self):
return self.ALL_PRIMITIVE_TYPES[self.name] == 'c'
def is_integer_type(self):
return self.ALL_PRIMITIVE_TYPES[self.name] == 'i'
def is_float_type(self):
return self.ALL_PRIMITIVE_TYPES[self.name] == 'f'
def is_complex_type(self):
return self.ALL_PRIMITIVE_TYPES[self.name] == 'j'
def build_backend_type(self, ffi, finishlist):
return global_cache(self, ffi, 'new_primitive_type', self.name)
class UnknownIntegerType(BasePrimitiveType):
_attrs_ = ('name',)
def __init__(self, name):
self.name = name
self.c_name_with_marker = name + '&'
def is_integer_type(self):
return True
def build_backend_type(self, ffi, finishlist):
raise NotImplementedError("integer type '%s' can only be used after "
"compilation" % self.name)
class UnknownFloatType(BasePrimitiveType):
_attrs_ = ('name', )
def __init__(self, name):
self.name = name
self.c_name_with_marker = name + '&'
def build_backend_type(self, ffi, finishlist):
raise NotImplementedError("float type '%s' can only be used after "
"compilation" % self.name)
class BaseFunctionType(BaseType):
_attrs_ = ('args', 'result', 'ellipsis', 'abi')
def __init__(self, args, result, ellipsis, abi=None):
self.args = args
self.result = result
self.ellipsis = ellipsis
self.abi = abi
#
reprargs = [arg._get_c_name() for arg in self.args]
if self.ellipsis:
reprargs.append('...')
reprargs = reprargs or ['void']
replace_with = self._base_pattern % (', '.join(reprargs),)
if abi is not None:
replace_with = replace_with[:1] + abi + ' ' + replace_with[1:]
self.c_name_with_marker = (
self.result.c_name_with_marker.replace('&', replace_with))
class RawFunctionType(BaseFunctionType):
# Corresponds to a C type like 'int(int)', which is the C type of
# a function, but not a pointer-to-function. The backend has no
# notion of such a type; it's used temporarily by parsing.
_base_pattern = '(&)(%s)'
is_raw_function = True
def build_backend_type(self, ffi, finishlist):
raise CDefError("cannot render the type %r: it is a function "
"type, not a pointer-to-function type" % (self,))
def as_function_pointer(self):
return FunctionPtrType(self.args, self.result, self.ellipsis, self.abi)
class FunctionPtrType(BaseFunctionType):
_base_pattern = '(*&)(%s)'
def build_backend_type(self, ffi, finishlist):
result = self.result.get_cached_btype(ffi, finishlist)
args = []
for tp in self.args:
args.append(tp.get_cached_btype(ffi, finishlist))
abi_args = ()
if self.abi == "__stdcall":
if not self.ellipsis: # __stdcall ignored for variadic funcs
try:
abi_args = (ffi._backend.FFI_STDCALL,)
except AttributeError:
pass
return global_cache(self, ffi, 'new_function_type',
tuple(args), result, self.ellipsis, *abi_args)
def as_raw_function(self):
return RawFunctionType(self.args, self.result, self.ellipsis, self.abi)
class PointerType(BaseType):
_attrs_ = ('totype', 'quals')
def __init__(self, totype, quals=0):
self.totype = totype
self.quals = quals
extra = qualify(quals, " *&")
if totype.is_array_type:
extra = "(%s)" % (extra.lstrip(),)
self.c_name_with_marker = totype.c_name_with_marker.replace('&', extra)
def build_backend_type(self, ffi, finishlist):
BItem = self.totype.get_cached_btype(ffi, finishlist, can_delay=True)
return global_cache(self, ffi, 'new_pointer_type', BItem)
voidp_type = PointerType(void_type)
def ConstPointerType(totype):
return PointerType(totype, Q_CONST)
const_voidp_type = ConstPointerType(void_type)
class NamedPointerType(PointerType):
_attrs_ = ('totype', 'name')
def __init__(self, totype, name, quals=0):
PointerType.__init__(self, totype, quals)
self.name = name
self.c_name_with_marker = name + '&'
class ArrayType(BaseType):
_attrs_ = ('item', 'length')
is_array_type = True
def __init__(self, item, length):
self.item = item
self.length = length
#
if length is None:
brackets = '&[]'
elif length == '...':
brackets = '&[/*...*/]'
else:
brackets = '&[%s]' % length
self.c_name_with_marker = (
self.item.c_name_with_marker.replace('&', brackets))
def resolve_length(self, newlength):
return ArrayType(self.item, newlength)
def build_backend_type(self, ffi, finishlist):
if self.length == '...':
raise CDefError("cannot render the type %r: unknown length" %
(self,))
self.item.get_cached_btype(ffi, finishlist) # force the item BType
BPtrItem = PointerType(self.item).get_cached_btype(ffi, finishlist)
return global_cache(self, ffi, 'new_array_type', BPtrItem, self.length)
char_array_type = ArrayType(PrimitiveType('char'), None)
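# Hedged illustration (not part of the original cffi module): how the '&' marker in
# c_name_with_marker is expanded into a C declarator by get_c_name().  The strings
# in the comment below were worked out by hand from qualify() and get_c_name() above
# and are indicative only; the helper name is illustrative.
def _example_c_names():
    const_int_ptr = PointerType(PrimitiveType('int'), Q_CONST)
    char_buf = ArrayType(PrimitiveType('char'), 16)
    # expected to render as 'int const * p' and 'char buf[16]'
    return const_int_ptr.get_c_name('p'), char_buf.get_c_name('buf')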
class StructOrUnionOrEnum(BaseTypeByIdentity):
_attrs_ = ('name',)
forcename = None
def build_c_name_with_marker(self):
name = self.forcename or '%s %s' % (self.kind, self.name)
self.c_name_with_marker = name + '&'
def force_the_name(self, forcename):
self.forcename = forcename
self.build_c_name_with_marker()
def get_official_name(self):
assert self.c_name_with_marker.endswith('&')
return self.c_name_with_marker[:-1]
class StructOrUnion(StructOrUnionOrEnum):
fixedlayout = None
completed = 0
partial = False
packed = 0
def __init__(self, name, fldnames, fldtypes, fldbitsize, fldquals=None):
self.name = name
self.fldnames = fldnames
self.fldtypes = fldtypes
self.fldbitsize = fldbitsize
self.fldquals = fldquals
self.build_c_name_with_marker()
def anonymous_struct_fields(self):
if self.fldtypes is not None:
for name, type in zip(self.fldnames, self.fldtypes):
if name == '' and isinstance(type, StructOrUnion):
yield type
def enumfields(self, expand_anonymous_struct_union=True):
fldquals = self.fldquals
if fldquals is None:
fldquals = (0,) * len(self.fldnames)
for name, type, bitsize, quals in zip(self.fldnames, self.fldtypes,
self.fldbitsize, fldquals):
if (name == '' and isinstance(type, StructOrUnion)
and expand_anonymous_struct_union):
# nested anonymous struct/union
for result in type.enumfields():
yield result
else:
yield (name, type, bitsize, quals)
def force_flatten(self):
# force the struct or union to have a declaration that lists
# directly all fields returned by enumfields(), flattening
# nested anonymous structs/unions.
names = []
types = []
bitsizes = []
fldquals = []
for name, type, bitsize, quals in self.enumfields():
names.append(name)
types.append(type)
bitsizes.append(bitsize)
fldquals.append(quals)
self.fldnames = tuple(names)
self.fldtypes = tuple(types)
self.fldbitsize = tuple(bitsizes)
self.fldquals = tuple(fldquals)
def get_cached_btype(self, ffi, finishlist, can_delay=False):
BType = StructOrUnionOrEnum.get_cached_btype(self, ffi, finishlist,
can_delay)
if not can_delay:
self.finish_backend_type(ffi, finishlist)
return BType
def finish_backend_type(self, ffi, finishlist):
if self.completed:
if self.completed != 2:
raise NotImplementedError("recursive structure declaration "
"for '%s'" % (self.name,))
return
BType = ffi._cached_btypes[self]
#
self.completed = 1
#
if self.fldtypes is None:
pass # not completing it: it's an opaque struct
#
elif self.fixedlayout is None:
fldtypes = [tp.get_cached_btype(ffi, finishlist)
for tp in self.fldtypes]
lst = list(zip(self.fldnames, fldtypes, self.fldbitsize))
extra_flags = ()
if self.packed:
if self.packed == 1:
extra_flags = (8,) # SF_PACKED
else:
extra_flags = (0, self.packed)
ffi._backend.complete_struct_or_union(BType, lst, self,
-1, -1, *extra_flags)
#
else:
fldtypes = []
fieldofs, fieldsize, totalsize, totalalignment = self.fixedlayout
for i in range(len(self.fldnames)):
fsize = fieldsize[i]
ftype = self.fldtypes[i]
#
if isinstance(ftype, ArrayType) and ftype.length == '...':
# fix the length to match the total size
BItemType = ftype.item.get_cached_btype(ffi, finishlist)
nlen, nrest = divmod(fsize, ffi.sizeof(BItemType))
if nrest != 0:
self._verification_error(
"field '%s.%s' has a bogus size?" % (
self.name, self.fldnames[i] or '{}'))
ftype = ftype.resolve_length(nlen)
self.fldtypes = (self.fldtypes[:i] + (ftype,) +
self.fldtypes[i+1:])
#
BFieldType = ftype.get_cached_btype(ffi, finishlist)
if isinstance(ftype, ArrayType) and ftype.length is None:
assert fsize == 0
else:
bitemsize = ffi.sizeof(BFieldType)
if bitemsize != fsize:
self._verification_error(
"field '%s.%s' is declared as %d bytes, but is "
"really %d bytes" % (self.name,
self.fldnames[i] or '{}',
bitemsize, fsize))
fldtypes.append(BFieldType)
#
lst = list(zip(self.fldnames, fldtypes, self.fldbitsize, fieldofs))
ffi._backend.complete_struct_or_union(BType, lst, self,
totalsize, totalalignment)
self.completed = 2
def _verification_error(self, msg):
raise VerificationError(msg)
def check_not_partial(self):
if self.partial and self.fixedlayout is None:
raise VerificationMissing(self._get_c_name())
def build_backend_type(self, ffi, finishlist):
self.check_not_partial()
finishlist.append(self)
#
return global_cache(self, ffi, 'new_%s_type' % self.kind,
self.get_official_name(), key=self)
class StructType(StructOrUnion):
kind = 'struct'
class UnionType(StructOrUnion):
kind = 'union'
class EnumType(StructOrUnionOrEnum):
kind = 'enum'
partial = False
partial_resolved = False
def __init__(self, name, enumerators, enumvalues, baseinttype=None):
self.name = name
self.enumerators = enumerators
self.enumvalues = enumvalues
self.baseinttype = baseinttype
self.build_c_name_with_marker()
def force_the_name(self, forcename):
StructOrUnionOrEnum.force_the_name(self, forcename)
if self.forcename is None:
name = self.get_official_name()
self.forcename = '$' + name.replace(' ', '_')
def check_not_partial(self):
if self.partial and not self.partial_resolved:
raise VerificationMissing(self._get_c_name())
def build_backend_type(self, ffi, finishlist):
self.check_not_partial()
base_btype = self.build_baseinttype(ffi, finishlist)
return global_cache(self, ffi, 'new_enum_type',
self.get_official_name(),
self.enumerators, self.enumvalues,
base_btype, key=self)
def build_baseinttype(self, ffi, finishlist):
if self.baseinttype is not None:
return self.baseinttype.get_cached_btype(ffi, finishlist)
#
if self.enumvalues:
smallest_value = min(self.enumvalues)
largest_value = max(self.enumvalues)
else:
import warnings
try:
# XXX! The goal is to ensure that the warnings.warn()
# will not suppress the warning. We want to get it
# several times if we reach this point several times.
__warningregistry__.clear()
except NameError:
pass
warnings.warn("%r has no values explicitly defined; "
"guessing that it is equivalent to 'unsigned int'"
% self._get_c_name())
smallest_value = largest_value = 0
if smallest_value < 0: # needs a signed type
sign = 1
candidate1 = PrimitiveType("int")
candidate2 = PrimitiveType("long")
else:
sign = 0
candidate1 = PrimitiveType("unsigned int")
candidate2 = PrimitiveType("unsigned long")
btype1 = candidate1.get_cached_btype(ffi, finishlist)
btype2 = candidate2.get_cached_btype(ffi, finishlist)
size1 = ffi.sizeof(btype1)
size2 = ffi.sizeof(btype2)
if (smallest_value >= ((-1) << (8*size1-1)) and
largest_value < (1 << (8*size1-sign))):
return btype1
if (smallest_value >= ((-1) << (8*size2-1)) and
largest_value < (1 << (8*size2-sign))):
return btype2
raise CDefError("%s values don't all fit into either 'long' "
"or 'unsigned long'" % self._get_c_name())
def unknown_type(name, structname=None):
if structname is None:
structname = '$%s' % name
tp = StructType(structname, None, None, None)
tp.force_the_name(name)
tp.origin = "unknown_type"
return tp
def unknown_ptr_type(name, structname=None):
if structname is None:
structname = '$$%s' % name
tp = StructType(structname, None, None, None)
return NamedPointerType(tp, name)
global_lock = allocate_lock()
_typecache_cffi_backend = weakref.WeakValueDictionary()
def get_typecache(backend):
# returns _typecache_cffi_backend if backend is the _cffi_backend
# module, or type(backend).__typecache if backend is an instance of
# CTypesBackend (or some FakeBackend class during tests)
if isinstance(backend, types.ModuleType):
return _typecache_cffi_backend
with global_lock:
if not hasattr(type(backend), '__typecache'):
type(backend).__typecache = weakref.WeakValueDictionary()
return type(backend).__typecache
def global_cache(srctype, ffi, funcname, *args, **kwds):
key = kwds.pop('key', (funcname, args))
assert not kwds
try:
return ffi._typecache[key]
except KeyError:
pass
try:
res = getattr(ffi._backend, funcname)(*args)
except NotImplementedError as e:
raise NotImplementedError("%s: %r: %s" % (funcname, srctype, e))
# note that setdefault() on WeakValueDictionary is not atomic
# and contains a rare bug (http://bugs.python.org/issue19542);
# we have to use a lock and do it ourselves
cache = ffi._typecache
with global_lock:
res1 = cache.get(key)
if res1 is None:
cache[key] = res
return res
else:
return res1
def pointer_cache(ffi, BType):
return global_cache('?', ffi, 'new_pointer_type', BType)
def attach_exception_info(e, name):
if e.args and type(e.args[0]) is str:
e.args = ('%s: %s' % (name, e.args[0]),) + e.args[1:]
| {
"content_hash": "663a13e58883c281fdbb695bc29bd401",
"timestamp": "",
"source": "github",
"line_count": 614,
"max_line_length": 79,
"avg_line_length": 35.31270358306189,
"alnum_prop": 0.5422470251821788,
"repo_name": "LukeMurphey/splunk-google-drive",
"id": "5f1b0d2b7641385bb8389cf2780ce155bf2bd502",
"size": "21682",
"binary": false,
"copies": "10",
"ref": "refs/heads/master",
"path": "src/bin/google_drive_app/cffi/model.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "C",
"bytes": "39392"
},
{
"name": "CSS",
"bytes": "10235"
},
{
"name": "JavaScript",
"bytes": "11701"
},
{
"name": "Python",
"bytes": "4299895"
}
],
"symlink_target": ""
} |
from django.apps import AppConfig
class MRRBenchmarkConfig(AppConfig):
name = 'apps.evaluators.MRR.benchmark'
label = 'MRRBenchmark'
| {
"content_hash": "e64b664881a4d9f2337645bb28ebbd5c",
"timestamp": "",
"source": "github",
"line_count": 6,
"max_line_length": 42,
"avg_line_length": 23.833333333333332,
"alnum_prop": 0.7552447552447552,
"repo_name": "DiegoCorrea/ouvidoMusical",
"id": "a46ef2314b27e6a3b4d6a66d21dce2230e92d5db",
"size": "143",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "apps/evaluators/MRR/benchmark/apps.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "182332"
},
{
"name": "Shell",
"bytes": "51486"
}
],
"symlink_target": ""
} |
import os
import re
import tempfile
try:
import subprocess32 as subprocess
except Exception:
import subprocess
from pandaharvester.harvestercore import core_utils
from .base_stager import BaseStager
from pandaharvester.harvestermover import mover_utils
# logger
baseLogger = core_utils.setup_logger('gridftp_stager')
# stager plugin with GridFTP
"""
-- example of plugin config
"stager":{
"name":"GridFtpStager",
"module":"pandaharvester.harvesterstager.gridftp_stager",
"objstoreID_ES":117,
# base path for local access to the files to be copied
"srcOldBasePath":"/tmp/workdirs",
# base path for access through source GridFTP server to the files to be copied
"srcNewBasePath":"gsiftp://dcdum02.aglt2.org/pnfs/aglt2.org",
# base path for destination GridFTP server
"dstBasePath":"gsiftp://dcgftp.usatlas.bnl.gov:2811/pnfs/usatlas.bnl.gov/atlasscratchdisk/rucio",
# max number of attempts
maxAttempts: 3,
# options for globus-url-copy
"gulOpts":"-verify-checksum -v"
}
"""
class GridFtpStager(BaseStager):
# constructor
def __init__(self, **kwarg):
self.scopeForTmp = 'transient'
self.pathConvention = 1000
self.objstoreID = None
self.gulOpts = None
self.maxAttempts = 3
BaseStager.__init__(self, **kwarg)
# check status
def check_stage_out_status(self, jobspec):
for fileSpec in jobspec.get_output_file_specs(skip_done=True):
fileSpec.status = 'finished'
fileSpec.pathConvention = self.pathConvention
fileSpec.objstoreID = self.objstoreID
return True, ''
# trigger stage out
def trigger_stage_out(self, jobspec):
# make logger
tmpLog = self.make_logger(baseLogger, 'PandaID={0}'.format(jobspec.PandaID),
method_name='trigger_stage_out')
tmpLog.debug('start')
# loop over all files
gucInput = None
for fileSpec in jobspec.outFiles:
# skip if already done
if fileSpec.status in ['finished', 'failed']:
continue
# scope
if fileSpec.fileType in ['es_output', 'zip_output']:
scope = self.scopeForTmp
else:
scope = fileSpec.fileAttributes.get('scope')
if scope is None:
scope = fileSpec.scope
# construct source and destination paths
srcPath = re.sub(self.srcOldBasePath, self.srcNewBasePath, fileSpec.path)
dstPath = mover_utils.construct_file_path(self.dstBasePath, scope, fileSpec.lfn)
# make input for globus-url-copy
if gucInput is None:
gucInput = tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='_guc_out.tmp')
gucInput.write("{0} {1}\n".format(srcPath, dstPath))
fileSpec.attemptNr += 1
# nothing to transfer
if gucInput is None:
tmpLog.debug('done with no transfers')
return True, ''
# transfer
gucInput.close()
args = ['globus-url-copy', '-f', gucInput.name, '-cd']
if self.gulOpts is not None:
args += self.gulOpts.split()
try:
            tmpLog.debug('execute ' + ' '.join(args))
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
return_code = p.returncode
if stdout is not None:
stdout = stdout.replace('\n', ' ')
if stderr is not None:
stderr = stderr.replace('\n', ' ')
tmpLog.debug("stdout: %s" % stdout)
tmpLog.debug("stderr: %s" % stderr)
except Exception:
core_utils.dump_error_message(tmpLog)
return_code = 1
os.remove(gucInput.name)
if return_code == 0:
tmpLog.debug('succeeded')
return True, ''
else:
errMsg = 'failed with {0}'.format(return_code)
tmpLog.error(errMsg)
# check attemptNr
            for fileSpec in jobspec.outFiles:
if fileSpec.attemptNr >= self.maxAttempts:
errMsg = 'gave up due to max attempts'
tmpLog.error(errMsg)
return (False, errMsg)
return None, errMsg
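# Hedged illustration (not part of the original plugin): constructing the stager
# directly from the example configuration documented at the top of this module.
# It assumes BaseStager keeps unrecognised keyword arguments as attributes, which
# is how srcOldBasePath / srcNewBasePath / dstBasePath are read in trigger_stage_out;
# the helper name is illustrative only.
def _example_build_stager():
    return GridFtpStager(
        objstoreID_ES=117,
        srcOldBasePath='/tmp/workdirs',
        srcNewBasePath='gsiftp://dcdum02.aglt2.org/pnfs/aglt2.org',
        dstBasePath='gsiftp://dcgftp.usatlas.bnl.gov:2811/pnfs/usatlas.bnl.gov'
                    '/atlasscratchdisk/rucio',
        maxAttempts=3,
        gulOpts='-verify-checksum -v',
    )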
| {
"content_hash": "e821479fbd5c1a101ea886141860a819",
"timestamp": "",
"source": "github",
"line_count": 117,
"max_line_length": 105,
"avg_line_length": 38.17948717948718,
"alnum_prop": 0.5854040743228117,
"repo_name": "PanDAWMS/panda-harvester",
"id": "48f1158fac308c23d01f9dfd6033894c7f935f3c",
"size": "4467",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "pandaharvester/harvesterstager/gridftp_stager.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "1650803"
},
{
"name": "Shell",
"bytes": "21117"
}
],
"symlink_target": ""
} |
"""This example deletes a campaign by setting the status to 'REMOVED'.
To get campaigns, run get_campaigns.py.
The LoadFromStorage method is pulling credentials and properties from a
"googleads.yaml" file. By default, it looks for this file in your home
directory. For more information, see the "Caching authentication information"
section of our README.
"""
from googleads import adwords
CAMPAIGN_ID = 'INSERT_CAMPAIGN_ID_HERE'
def main(client, campaign_id):
# Initialize appropriate service.
campaign_service = client.GetService('CampaignService', version='v201809')
# Construct operations and delete campaign.
operations = [{
'operator': 'SET',
'operand': {
'id': campaign_id,
'status': 'REMOVED'
}
}]
result = campaign_service.mutate(operations)
# Display results.
for campaign in result['value']:
print('Campaign with name "%s" and id "%s" was deleted.'
% (campaign['name'], campaign['id']))
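# Hedged variant (not part of the original example): LoadFromStorage also accepts an
# explicit path when "googleads.yaml" is not in the home directory.  The helper name
# and the path argument are illustrative only.
def load_client_from(config_path):
    return adwords.AdWordsClient.LoadFromStorage(config_path)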
if __name__ == '__main__':
# Initialize client object.
adwords_client = adwords.AdWordsClient.LoadFromStorage()
main(adwords_client, CAMPAIGN_ID)
| {
"content_hash": "945e21e950a727c7a4c2dba068c12e96",
"timestamp": "",
"source": "github",
"line_count": 42,
"max_line_length": 77,
"avg_line_length": 26.904761904761905,
"alnum_prop": 0.6920353982300885,
"repo_name": "googleads/googleads-python-lib",
"id": "778f9dc6d148f53052743e39824a104ab9c37fa2",
"size": "1752",
"binary": false,
"copies": "1",
"ref": "refs/heads/main",
"path": "examples/adwords/v201809/basic_operations/remove_campaign.py",
"mode": "33261",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "403821"
}
],
"symlink_target": ""
} |
from __future__ import absolute_import, division, print_function, unicode_literals
from binascii import unhexlify
import time
from django.conf import settings
from django.db import models
from django.utils.encoding import force_text
from django_otp.models import Device
from django_otp.oath import TOTP
from django_otp.util import random_hex, hex_validator
def default_key():
return force_text(random_hex(20))
def key_validator(value):
return hex_validator()(value)
class TOTPDevice(Device):
"""
A generic TOTP :class:`~django_otp.models.Device`. The model fields mostly
correspond to the arguments to :func:`django_otp.oath.totp`. They all have
sensible defaults, including the key, which is randomly generated.
.. attribute:: key
*CharField*: A hex-encoded secret key of up to 40 bytes. (Default: 20
random bytes)
.. attribute:: step
*PositiveSmallIntegerField*: The time step in seconds. (Default: 30)
.. attribute:: t0
*BigIntegerField*: The Unix time at which to begin counting steps.
(Default: 0)
.. attribute:: digits
*PositiveSmallIntegerField*: The number of digits to expect in a token
(6 or 8). (Default: 6)
.. attribute:: tolerance
*PositiveSmallIntegerField*: The number of time steps in the past or
future to allow. For example, if this is 1, we'll accept any of three
tokens: the current one, the previous one, and the next one. (Default:
1)
.. attribute:: drift
*SmallIntegerField*: The number of time steps the prover is known to
deviate from our clock. If :setting:`OTP_TOTP_SYNC` is ``True``, we'll
update this any time we match a token that is not the current one.
(Default: 0)
.. attribute:: last_t
*BigIntegerField*: The time step of the last verified token. To avoid
verifying the same token twice, this will be updated on each successful
verification. Only tokens at a higher time step will be verified
subsequently. (Default: -1)
"""
key = models.CharField(max_length=80, validators=[key_validator], default=default_key, help_text="A hex-encoded secret key of up to 40 bytes.")
step = models.PositiveSmallIntegerField(default=30, help_text="The time step in seconds.")
t0 = models.BigIntegerField(default=0, help_text="The Unix time at which to begin counting steps.")
digits = models.PositiveSmallIntegerField(choices=[(6, 6), (8, 8)], default=6, help_text="The number of digits to expect in a token.")
tolerance = models.PositiveSmallIntegerField(default=1, help_text="The number of time steps in the past or future to allow.")
drift = models.SmallIntegerField(default=0, help_text="The number of time steps the prover is known to deviate from our clock.")
last_t = models.BigIntegerField(default=-1, help_text="The t value of the latest verified token. The next token must be at a higher time step.")
class Meta(Device.Meta):
verbose_name = "TOTP device"
@property
def bin_key(self):
"""
The secret key as a binary string.
"""
return unhexlify(self.key.encode())
def verify_token(self, token):
OTP_TOTP_SYNC = getattr(settings, 'OTP_TOTP_SYNC', True)
try:
token = int(token)
except Exception:
verified = False
else:
key = self.bin_key
totp = TOTP(key, self.step, self.t0, self.digits)
totp.time = time.time()
for offset in range(-self.tolerance, self.tolerance + 1):
totp.drift = self.drift + offset
if (totp.t() > self.last_t) and (totp.token() == token):
self.last_t = totp.t()
if (offset != 0) and OTP_TOTP_SYNC:
self.drift += offset
self.save()
verified = True
break
else:
verified = False
return verified
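# Hedged usage sketch (not part of the original module): how the fields documented
# above fit together when verifying a token.  Assumes a configured Django project
# with a `user` instance; the helper name and the device name are illustrative only.
def _example_totp_flow(user):
    device = TOTPDevice.objects.create(user=user, name='example')
    # Generate the token for the current time step with the same parameters the
    # device stores, using the TOTP helper already imported above.
    token = TOTP(device.bin_key, device.step, device.t0, device.digits).token()
    return device.verify_token(token)  # True for a previously unused, current token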
| {
"content_hash": "2f089b17f967cbc244df331108e71996",
"timestamp": "",
"source": "github",
"line_count": 114,
"max_line_length": 148,
"avg_line_length": 35.6140350877193,
"alnum_prop": 0.638423645320197,
"repo_name": "altanawealth/django-otp",
"id": "7b2baeea2d40f99613b53e44daa8b9c6b9c0e168",
"size": "4060",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "django_otp/plugins/otp_totp/models.py",
"mode": "33188",
"license": "bsd-2-clause",
"language": [
{
"name": "HTML",
"bytes": "16149"
},
{
"name": "Python",
"bytes": "97807"
}
],
"symlink_target": ""
} |
import unittest
import uuid
from airflow.providers.amazon.aws.hooks.aws_dynamodb_hook import AwsDynamoDBHook
try:
from moto import mock_dynamodb2
except ImportError:
mock_dynamodb2 = None
class TestDynamoDBHook(unittest.TestCase):
@unittest.skipIf(mock_dynamodb2 is None, 'mock_dynamodb2 package not present')
@mock_dynamodb2
def test_get_conn_returns_a_boto3_connection(self):
hook = AwsDynamoDBHook(aws_conn_id='aws_default')
self.assertIsNotNone(hook.get_conn())
@unittest.skipIf(mock_dynamodb2 is None, 'mock_dynamodb2 package not present')
@mock_dynamodb2
def test_insert_batch_items_dynamodb_table(self):
hook = AwsDynamoDBHook(aws_conn_id='aws_default',
table_name='test_airflow', table_keys=['id'], region_name='us-east-1')
# this table needs to be created in production
table = hook.get_conn().create_table(
TableName='test_airflow',
KeySchema=[
{
'AttributeName': 'id',
'KeyType': 'HASH'
},
],
AttributeDefinitions=[
{
'AttributeName': 'id',
'AttributeType': 'S'
}
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
)
table = hook.get_conn().Table('test_airflow')
items = [{'id': str(uuid.uuid4()), 'name': 'airflow'}
for _ in range(10)]
hook.write_batch_data(items)
table.meta.client.get_waiter(
'table_exists').wait(TableName='test_airflow')
self.assertEqual(table.item_count, 10)
if __name__ == '__main__':
unittest.main()
| {
"content_hash": "5d16f4658fb0f3dc007556fa3cb817ae",
"timestamp": "",
"source": "github",
"line_count": 61,
"max_line_length": 101,
"avg_line_length": 29.868852459016395,
"alnum_prop": 0.5592755214050494,
"repo_name": "wileeam/airflow",
"id": "f72efe4f9f448f396364ed1b99e237c63630816c",
"size": "2612",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "tests/providers/amazon/aws/hooks/test_aws_dynamodb_hook.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "CSS",
"bytes": "13715"
},
{
"name": "Dockerfile",
"bytes": "17179"
},
{
"name": "HTML",
"bytes": "148281"
},
{
"name": "JavaScript",
"bytes": "25233"
},
{
"name": "Jupyter Notebook",
"bytes": "2933"
},
{
"name": "Mako",
"bytes": "1339"
},
{
"name": "Python",
"bytes": "9763694"
},
{
"name": "Shell",
"bytes": "221331"
},
{
"name": "TSQL",
"bytes": "879"
}
],
"symlink_target": ""
} |
"""
Tests for `api.defined_names`.
"""
import textwrap
from jedi import api
from ..helpers import TestCase
class TestDefinedNames(TestCase):
def assert_definition_names(self, definitions, names):
assert [d.name for d in definitions] == names
def check_defined_names(self, source, names):
definitions = api.names(textwrap.dedent(source))
self.assert_definition_names(definitions, names)
return definitions
def test_get_definitions_flat(self):
self.check_defined_names("""
import module
class Class:
pass
def func():
pass
data = None
""", ['module', 'Class', 'func', 'data'])
def test_dotted_assignment(self):
self.check_defined_names("""
x = Class()
x.y.z = None
""", ['x', 'z']) # TODO is this behavior what we want?
def test_multiple_assignment(self):
self.check_defined_names("""
x = y = None
""", ['x', 'y'])
def test_multiple_imports(self):
self.check_defined_names("""
from module import a, b
from another_module import *
""", ['a', 'b'])
def test_nested_definitions(self):
definitions = self.check_defined_names("""
class Class:
def f():
pass
def g():
pass
""", ['Class'])
subdefinitions = definitions[0].defined_names()
self.assert_definition_names(subdefinitions, ['f', 'g'])
self.assertEqual([d.full_name for d in subdefinitions],
['Class.f', 'Class.g'])
def test_nested_class(self):
definitions = self.check_defined_names("""
class L1:
class L2:
class L3:
def f(): pass
def f(): pass
def f(): pass
def f(): pass
""", ['L1', 'f'])
subdefs = definitions[0].defined_names()
subsubdefs = subdefs[0].defined_names()
self.assert_definition_names(subdefs, ['L2', 'f'])
self.assert_definition_names(subsubdefs, ['L3', 'f'])
self.assert_definition_names(subsubdefs[0].defined_names(), ['f'])
def test_follow_imports():
# github issue #344
imp = api.defined_names('import datetime')[0]
assert imp.name == 'datetime'
datetime_names = [str(d.name) for d in imp.defined_names()]
assert 'timedelta' in datetime_names
| {
"content_hash": "a53a6db5938caf1efe174d88f6787300",
"timestamp": "",
"source": "github",
"line_count": 82,
"max_line_length": 74,
"avg_line_length": 29.853658536585368,
"alnum_prop": 0.551062091503268,
"repo_name": "jonashaag/jedi",
"id": "3eb413566a723819a71b4824193c9b9625f70b7e",
"size": "2448",
"binary": false,
"copies": "14",
"ref": "refs/heads/master",
"path": "test/test_api/test_defined_names.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "C",
"bytes": "294"
},
{
"name": "Python",
"bytes": "676396"
}
],
"symlink_target": ""
} |
from django.conf import settings
from django.core.exceptions import MiddlewareNotUsed
from django.test import RequestFactory, SimpleTestCase, override_settings
from django.test.utils import patch_logger
from . import middleware as mw
@override_settings(ROOT_URLCONF='middleware_exceptions.urls')
class MiddlewareTests(SimpleTestCase):
def tearDown(self):
mw.log = []
@override_settings(MIDDLEWARE=['middleware_exceptions.middleware.ProcessViewNoneMiddleware'])
def test_process_view_return_none(self):
response = self.client.get('/middleware_exceptions/view/')
self.assertEqual(mw.log, ['processed view normal_view'])
self.assertEqual(response.content, b'OK')
@override_settings(MIDDLEWARE=['middleware_exceptions.middleware.ProcessViewMiddleware'])
def test_process_view_return_response(self):
response = self.client.get('/middleware_exceptions/view/')
self.assertEqual(response.content, b'Processed view normal_view')
@override_settings(MIDDLEWARE=['middleware_exceptions.middleware.TemplateResponseMiddleware'])
def test_process_template_response(self):
response = self.client.get('/middleware_exceptions/template_response/')
self.assertEqual(response.content, b'template-response middleware')
@override_settings(MIDDLEWARE=['middleware_exceptions.middleware.LogMiddleware'])
def test_view_exception_converted_before_middleware(self):
response = self.client.get('/middleware_exceptions/permission_denied/')
self.assertEqual(mw.log, [(response.status_code, response.content)])
self.assertEqual(response.status_code, 403)
@override_settings(MIDDLEWARE=['middleware_exceptions.middleware.ProcessExceptionMiddleware'])
def test_view_exception_handled_by_process_exception(self):
response = self.client.get('/middleware_exceptions/error/')
self.assertEqual(response.content, b'Exception caught')
@override_settings(MIDDLEWARE=[
'middleware_exceptions.middleware.ProcessExceptionLogMiddleware',
'middleware_exceptions.middleware.ProcessExceptionMiddleware',
])
def test_response_from_process_exception_short_circuits_remainder(self):
response = self.client.get('/middleware_exceptions/error/')
self.assertEqual(mw.log, [])
self.assertEqual(response.content, b'Exception caught')
@override_settings(MIDDLEWARE=[
'middleware_exceptions.middleware.LogMiddleware',
'middleware_exceptions.middleware.NotFoundMiddleware',
])
def test_exception_in_middleware_converted_before_prior_middleware(self):
response = self.client.get('/middleware_exceptions/view/')
self.assertEqual(mw.log, [(404, response.content)])
self.assertEqual(response.status_code, 404)
@override_settings(MIDDLEWARE=['middleware_exceptions.middleware.ProcessExceptionMiddleware'])
def test_exception_in_render_passed_to_process_exception(self):
response = self.client.get('/middleware_exceptions/exception_in_render/')
self.assertEqual(response.content, b'Exception caught')
@override_settings(ROOT_URLCONF='middleware_exceptions.urls')
class RootUrlconfTests(SimpleTestCase):
@override_settings(ROOT_URLCONF=None)
def test_missing_root_urlconf(self):
# Removing ROOT_URLCONF is safe, as override_settings will restore
# the previously defined settings.
del settings.ROOT_URLCONF
with self.assertRaises(AttributeError):
self.client.get("/middleware_exceptions/view/")
class MyMiddleware(object):
def __init__(self, get_response=None):
raise MiddlewareNotUsed
def process_request(self, request):
pass
class MyMiddlewareWithExceptionMessage(object):
def __init__(self, get_response=None):
raise MiddlewareNotUsed('spam eggs')
def process_request(self, request):
pass
@override_settings(
DEBUG=True,
ROOT_URLCONF='middleware_exceptions.urls',
MIDDLEWARE=['django.middleware.common.CommonMiddleware'],
)
class MiddlewareNotUsedTests(SimpleTestCase):
rf = RequestFactory()
def test_raise_exception(self):
request = self.rf.get('middleware_exceptions/view/')
with self.assertRaises(MiddlewareNotUsed):
MyMiddleware().process_request(request)
@override_settings(MIDDLEWARE=['middleware_exceptions.tests.MyMiddleware'])
def test_log(self):
with patch_logger('django.request', 'debug') as calls:
self.client.get('/middleware_exceptions/view/')
self.assertEqual(len(calls), 1)
self.assertEqual(
calls[0],
"MiddlewareNotUsed: 'middleware_exceptions.tests.MyMiddleware'"
)
@override_settings(MIDDLEWARE=['middleware_exceptions.tests.MyMiddlewareWithExceptionMessage'])
def test_log_custom_message(self):
with patch_logger('django.request', 'debug') as calls:
self.client.get('/middleware_exceptions/view/')
self.assertEqual(len(calls), 1)
self.assertEqual(
calls[0],
"MiddlewareNotUsed('middleware_exceptions.tests.MyMiddlewareWithExceptionMessage'): spam eggs"
)
@override_settings(DEBUG=False)
def test_do_not_log_when_debug_is_false(self):
with patch_logger('django.request', 'debug') as calls:
self.client.get('/middleware_exceptions/view/')
self.assertEqual(len(calls), 0)
| {
"content_hash": "31ded3e33b6a8b45b80e766b8ae66d54",
"timestamp": "",
"source": "github",
"line_count": 133,
"max_line_length": 106,
"avg_line_length": 41.097744360902254,
"alnum_prop": 0.7136845956824003,
"repo_name": "loic/django",
"id": "f6a7e24e591ccf7661cdff8ce0117ae79996e57e",
"size": "5466",
"binary": false,
"copies": "4",
"ref": "refs/heads/master",
"path": "tests/middleware_exceptions/tests.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "52439"
},
{
"name": "HTML",
"bytes": "173525"
},
{
"name": "JavaScript",
"bytes": "451010"
},
{
"name": "Makefile",
"bytes": "125"
},
{
"name": "Python",
"bytes": "11934726"
},
{
"name": "Shell",
"bytes": "809"
},
{
"name": "Smarty",
"bytes": "130"
}
],
"symlink_target": ""
} |
"""
byceps.blueprints.user.profile.views
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:Copyright: 2006-2020 Jochen Kupperschmidt
:License: Modified BSD, see LICENSE for details.
"""
from operator import attrgetter
from flask import abort, g
from ....services.orga_team import service as orga_team_service
from ....services.ticketing import attendance_service, ticket_service
from ....services.user import service as user_service
from ....services.user_badge import service as badge_service
from ....util.framework.blueprint import create_blueprint
from ....util.framework.templating import templated
blueprint = create_blueprint('user_profile', __name__)
@blueprint.route('/<uuid:user_id>')
@templated
def view(user_id):
"""Show a user's profile."""
user = user_service.find_active_user(user_id, include_avatar=True)
if user is None:
abort(404)
badges_with_awarding_quantity = badge_service.get_badges_for_user(user.id)
orga_team_membership = orga_team_service.find_membership_for_party(
user.id, g.party_id
)
_current_party_tickets = ticket_service.find_tickets_used_by_user(
user.id, g.party_id
)
current_party_tickets = [t for t in _current_party_tickets if not t.revoked]
attended_parties = attendance_service.get_attended_parties(user.id)
attended_parties.sort(key=attrgetter('starts_at'), reverse=True)
return {
'user': user,
'badges_with_awarding_quantity': badges_with_awarding_quantity,
'orga_team_membership': orga_team_membership,
'current_party_tickets': current_party_tickets,
'attended_parties': attended_parties,
}
| {
"content_hash": "04f59e8a5fa5d38ba42ffd8a21174b24",
"timestamp": "",
"source": "github",
"line_count": 52,
"max_line_length": 80,
"avg_line_length": 31.75,
"alnum_prop": 0.6941247728649304,
"repo_name": "m-ober/byceps",
"id": "7dfe65544f9b8f7980452703a7cc0d5e1fa20ce1",
"size": "1651",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "byceps/blueprints/user/profile/views.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "38499"
},
{
"name": "Dockerfile",
"bytes": "1302"
},
{
"name": "HTML",
"bytes": "369989"
},
{
"name": "JavaScript",
"bytes": "9483"
},
{
"name": "Python",
"bytes": "1152996"
}
],
"symlink_target": ""
} |
import os
from absl import app, flags
from selenium import webdriver
import test_util
FLAGS = flags.FLAGS
flags.DEFINE_string('user_data_dir', None, 'Need specify user data dir to test')
flags.mark_flag_as_required('user_data_dir')
def main(argv):
policy_url = "chrome://policy"
version_url = "chrome://version"
  # Verify the user data dir does not exist before launching Chrome
print "User data before running chrome is " + str(
os.path.isdir(FLAGS.user_data_dir))
# Launch real Chrome
os.system('start chrome --remote-debugging-port=9222')
options = webdriver.ChromeOptions()
# Add option for connecting chromedriver with Chrome
options.add_experimental_option("debuggerAddress", "localhost:9222")
driver = test_util.create_chrome_webdriver(chrome_options=options)
try:
# Verify User Data Dir in chrome://policy page
driver.get(policy_url)
print driver.find_element_by_css_selector('html').text.encode('utf-8')
    # Verify the User Data Dir used in chrome://version
driver.get(version_url)
print "Profile path is " + driver.find_element_by_id("profile_path").text
# Verify if UserDataDir folder is created
print "User data dir creation is " + str(os.path.isdir(FLAGS.user_data_dir))
except Exception as error:
print error
finally:
driver.quit()
os.system('taskkill /f /im chrome.exe')
if __name__ == '__main__':
app.run(main)
| {
"content_hash": "16713712d57c1c4e792a3829f25146ec",
"timestamp": "",
"source": "github",
"line_count": 49,
"max_line_length": 80,
"avg_line_length": 28.857142857142858,
"alnum_prop": 0.7072135785007072,
"repo_name": "endlessm/chromium-browser",
"id": "6a89f7cfd603fa75011d65191627c635a812b010",
"size": "1577",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "chrome/test/enterprise/e2e/policy/user_data_dir/user_data_dir_webdriver.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [],
"symlink_target": ""
} |
from util import sieve
primes = set(sieve(1000000))
def rotations(i):
result = []
s = str(i)
for j in xrange(len(s)):
result.append(int(s[j:] + s[:j]))
return result
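# Hedged check (not part of the original solution): rotations() yields every cyclic
# rotation of the digits, which circular() then tests against the prime set, e.g.:
assert rotations(197) == [197, 971, 719]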
def circular(prime):
r = set(rotations(prime))
return len(r) == len(r & primes)
def solve():
return sum(circular(prime) for prime in primes)
if __name__ == '__main__':
print solve()
| {
"content_hash": "93ad7947845d1a64f412c848b613984a",
"timestamp": "",
"source": "github",
"line_count": 20,
"max_line_length": 51,
"avg_line_length": 20.65,
"alnum_prop": 0.559322033898305,
"repo_name": "elemel/project-euler",
"id": "7c3c37c1c3c2a6cf97b3917c5a0d675f1aca6c87",
"size": "413",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "src/35.py",
"mode": "33188",
"license": "mit",
"language": [],
"symlink_target": ""
} |
from __future__ import unicode_literals
import time
import json
from django.db import models
from django.utils.dateformat import format
class ActionType(models.Model):
"""This define type of action for action log record."""
name = models.CharField(
max_length=100, blank=False, null=False, unique=True
)
description = models.CharField(
max_length=255, blank=True, null=False
)
enabled = models.BooleanField(default=False)
def __unicode__(self):
return '%s (%s)' % (self.name, self.enabled, )
class Meta:
app_label = 'action_log'
db_table = 'action_log__action_type'
class ActionRecordQuerySet(models.QuerySet):
"""Create record to describe performed action at a given time.
For example this can be user logged in or performed some action
like search, activate option, changed password ...
We will use simple string for user identifier. This is
generally an username for auth application but we don't user
user object to prevent relations with other apps! Also, this is
not a mandatory param since there are actions that don't have
anything to do with user. Those are system action like cron jobs
or sending promotions to users ...
If there is no action with provided name, it will be created.
It will be enabled by default to prevent losing data.
"""
def create_record(self, action_name, username=None, payload=None):
"""Create action record of provided action type.
@param:action_name is string of action type name
@param:username username of a user performing the action
@param:payload this is optional additional data for action
"""
action_type, created = ActionType.objects.get_or_create(
name=action_name
)
# if this is a new action, enable it by default
if created:
action_type.enabled = True
action_type.save()
# only create record for enabled actions
if not action_type.enabled:
return None
if payload is not None:
try:
payload = json.dumps(payload)
except Exception:
payload = None
return self.create(
action_type=action_type, username=username, payload=payload
)
class ActionRecord(models.Model):
"""This is action log record."""
action_type = models.ForeignKey(ActionType, blank=False, null=False)
username = models.CharField(max_length=50, blank=True, null=True)
date_created = models.DateTimeField(auto_now_add=True)
payload = models.TextField(blank=True, null=True, default=None)
objects = ActionRecordQuerySet.as_manager()
def __unicode__(self):
return '%s by %s@%s' % (
self.action_type, self.username, self.date_created,
)
class Meta:
app_label = 'action_log'
db_table = 'action_log__action_record'
# https://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
def dump(self, fields):
defaults = {
'id': self.id,
'action': self.action_type.name,
'username': self.username,
'date_created': format(self.date_created, 'U'),
'date_created_ISO8601': format(
self.date_created, 'c'
),
# 'date_created': int(time.mktime(
# self.date_created.timetuple()
# )),
'payload': self.payload,
}
return {
field: defaults[field]
for field in defaults if field in fields
}
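# Hedged usage sketch (not part of the original module): how application code might
# call the create_record() helper described in the ActionRecordQuerySet docstring.
# Assumes a configured Django project; the helper name, action name, username and
# payload values are illustrative only.
def _example_log_login(username):
    record = ActionRecord.objects.create_record(
        'user_logged_in', username=username, payload={'ip': '127.0.0.1'}
    )
    # create_record() returns None when the action type exists but is disabled.
    return record.dump(['id', 'action', 'username']) if record else None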
| {
"content_hash": "3fcec8d14bdba99afe98f5d19fe387bd",
"timestamp": "",
"source": "github",
"line_count": 110,
"max_line_length": 72,
"avg_line_length": 33.00909090909091,
"alnum_prop": 0.6232442853208483,
"repo_name": "bradojevic/django-action-log",
"id": "3aa6bb4d60feb723341aea3f26e56846faaf2875",
"size": "3655",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "action_log/models.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "8338"
}
],
"symlink_target": ""
} |
"""
Tests for importing .sdf files
"""
from __future__ import division
from __future__ import unicode_literals
__author__ = "Joseph Gomes"
__copyright__ = "Copyright 2016, Stanford University"
__license__ = "MIT"
import os
import unittest
import tempfile
import shutil
import deepchem as dc
class TestFeaturizedSamples(unittest.TestCase):
"""
Test Featurized Samples class.
"""
  def random_test_train_valid_test_split_from_sdf(self):
    """Test train/valid/test splitting of singletask CoulombMatrixEig-featurized samples from an .sdf file."""
splittype = "random"
input_transforms = []
output_transforms = ["normalize"]
model_params = {}
tasks = ["atomization_energy"]
task_type = "regression"
task_types = {task: task_type for task in tasks}
current_dir = os.path.dirname(os.path.abspath(__file__))
input_file = os.path.join(current_dir, "data/water.sdf")
featurizer = dc.feat.CoulombMatrixEig(6, remove_hydrogens=False)
loader = dc.data.SDFLoader(
tasks=tasks,
smiles_field="smiles",
mol_field="mol",
featurizer=featurizer)
dataset = loader.featurize(input_file)
# Splits featurized samples into train/test
splitter = dc.splits.RandomSplitter()
train_dataset, valid_dataset, test_dataset = \
splitter.train_valid_test_split(dataset)
assert len(train_dataset) == 8
assert len(valid_dataset) == 1
assert len(test_dataset) == 1
| {
"content_hash": "0f8157d561b3214e4967f441068514b8",
"timestamp": "",
"source": "github",
"line_count": 50,
"max_line_length": 70,
"avg_line_length": 28.32,
"alnum_prop": 0.6779661016949152,
"repo_name": "Agent007/deepchem",
"id": "337d8d790fd7bbfa194f1b9f2b969308f74256a2",
"size": "1416",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "deepchem/feat/tests/test_sdf_reader.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "16453"
},
{
"name": "HTML",
"bytes": "20618"
},
{
"name": "Jupyter Notebook",
"bytes": "59756"
},
{
"name": "Python",
"bytes": "2129306"
},
{
"name": "Shell",
"bytes": "11976"
}
],
"symlink_target": ""
} |
""" Prepare images to go out to the LabelMe site.
"""
import os
import urlparse
import zipfile
import numpy as np
from scipy.misc import imread, imsave
import pyfacades
import pyfacades.models.driving_12x360x480.model
import pyfacades.models.driving_12x360x480.segment
from pyfacades.rectify import rectify
from pyfacades.util import find_files, channels_first
def rename_file(filename, basedir):
newname = os.path.relpath(filename, basedir)
newname = newname.replace('/', '-')
newname = os.path.splitext(newname)[0] + '.jpg'
return newname
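# For example (an illustrative path, not from this project's data):
#     rename_file('/data/city/01/orthographic.png', '/data')
# returns 'city-01-orthographic.jpg': the path relative to basedir with
# slashes replaced by dashes and the extension forced to .jpg.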
def rectify_building(image, meta=None):
labels = pyfacades.models.driving_12x360x480.segment.process_strip(channels_first(image))
non_rectangular = building = np.argmax(labels, 0) == pyfacades.models.driving_12x360x480.model.BUILDING
h = pyfacades.rectify.Homography(image, mask=non_rectangular)
if meta is not None:
meta['labels'] = labels
meta['building'] = building
meta['homography'] = h
return h.rectified
def process_files(files, basedir='./data', debug=False, rectify=False,
outdir='./data/for-labelme', **kwargs):
attempts = 0
n = len(files)
print "Rectify is set to", rectify
try:
os.makedirs(outdir)
except OSError as e:
pass
if debug:
try:
os.makedirs(os.path.join(outdir, 'debug'))
except OSError as e:
# Directory already exists
pass
for i, f in enumerate(files):
try:
newbasename = rename_file(f, basedir)
newname = os.path.join(outdir, newbasename)
print i + 1, 'of', n, newname
image = imread(f)
if rectify:
try:
meta = {}
rectified = rectify_building(image, meta)
if debug:
import pylab as pl
h = meta['homography']
pl.suptitle('u:{} d:{} l:{} r:{}'.format(h.du, h.dd, h.dl, h.dr))
pl.subplot(221)
pl.imshow(image)
pl.axis('off')
pl.subplot(222)
pl.imshow(meta['building'])
pl.axis('off')
pl.subplot(223)
h.plot_original()
pl.subplot(224)
h.plot_rectified()
pl.savefig(os.path.join(outdir, 'debug', newbasename))
imsave(newname, rectified)
except Exception as e:
print e
pass
else:
imsave(newname, image)
except Exception as e:
print e
def is_url(url):
return urlparse.urlparse(url).scheme != ""
def is_zip(filename):
return zipfile.is_zipfile(filename)
def process_dir(dir, pattern='orthographic.png', debug=False, **kwargs):
files = find_files(dir, pattern)
print "Found", len(files), "files."
process_files(files, basedir=dir, debug=debug, **kwargs)
def process_zip(path_to_zip_file, out_dir=None, pattern='orthographic.png', debug=False, **kwargs):
if out_dir is None:
out_dir = os.path.splitext(path_to_zip_file)[0]
try:
os.makedirs(out_dir)
except Exception as e:
print e
print type(e)
zip_ref = zipfile.ZipFile(path_to_zip_file, 'r')
print "Extracting contents of the zip file.."
zip_ref.extractall(out_dir)
zip_ref.close()
files = find_files(out_dir, pattern=pattern)
process_files(files, debug=debug, **kwargs)
def process_url(url, filename=None, out_dir=None, pattern='orthographic.png', debug=False, **kwargs):
import urllib
filename, headers = urllib.urlretrieve(url, filename=filename)
process_zip(filename, out_dir, pattern=pattern, debug=debug, **kwargs)
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('dir', nargs='?', default='-')
parser.add_argument('--pat', type=str, default='orthographic.png')
parser.add_argument('--debug', action='store_true')
parser.add_argument('--rectify', dest='rectify', action='store_true')
parser.add_argument('--no-rectify', dest='rectify', action='store_false')
parser.add_argument('--gpu', type=int, help="Which GPU to use (we are not using the CPU)", default=0)
parser.add_argument('--cpu', action='store_true', help="Use the CPU (and not the GPU)")
args = parser.parse_args()
# init caffe
import caffe
if args.cpu:
caffe.set_mode_cpu()
else:
caffe.set_device(args.gpu)
caffe.set_mode_gpu()
print os.getcwd()
if args.dir == '-':
args.dir = raw_input("Enter a source url, zip, or folder:")
print args.dir
options = dict(
rectify=args.rectify,
pattern=args.pat,
debug=args.debug
)
if is_url(args.dir):
print "Processing from a URL..."
process_url(args.dir, **options)
elif is_zip(args.dir):
print "Processing from a ZIP"
process_zip(args.dir, **options)
elif os.path.isdir(args.dir):
print "Processing a Directory"
process_dir(args.dir, **options)
else:
print "Did not recognize argument as a URL, ZIP, or Folder..."
print "Now you need to copy the images to the labelme server and rebuild the db"
if __name__ == '__main__':
main()
| {
"content_hash": "22c3b5c3f83461a5da506427ce1a9d82",
"timestamp": "",
"source": "github",
"line_count": 180,
"max_line_length": 107,
"avg_line_length": 30.711111111111112,
"alnum_prop": 0.5815846599131693,
"repo_name": "jfemiani/facade-segmentation",
"id": "e746698b0ee2f0b0cdaf9c3b15e5b3258af81efe",
"size": "5528",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "pyfacades/labelme/prepare.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Jupyter Notebook",
"bytes": "6417"
},
{
"name": "Python",
"bytes": "151545"
},
{
"name": "Shell",
"bytes": "3399"
}
],
"symlink_target": ""
} |
from setuptools import setup
# see requirements.txt for dependencies
setup(
name = "servee_blog",
version = "0.1.1.dev10",
author = "Servee LLC, Bibilion (the base) originally by Eldarion",
author_email = "[email protected]",
    description = "The Servee Blog was originally extracted from servee_blog",
long_description = open("README.rst").read(),
license = "BSD",
url = "http://github.com/servee/servee_blog",
packages = [
"servee_blog",
"servee_blog.templatetags",
"servee_blog.migrations",
],
package_data = {
"servee_blog": [
"templates/servee_blog/*.xml",
"templates/servee_blog/*.html",
"fixtures/*.json",
]
},
classifiers = [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Framework :: Django",
]
)
| {
"content_hash": "34444270e6fac2dd829df2a3082d6b8a",
"timestamp": "",
"source": "github",
"line_count": 34,
"max_line_length": 77,
"avg_line_length": 31.294117647058822,
"alnum_prop": 0.581766917293233,
"repo_name": "kellycreativetech/servee_blog",
"id": "a848f755fb3c13655d9656f17c862e376e5334dd",
"size": "1064",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "setup.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Python",
"bytes": "60476"
}
],
"symlink_target": ""
} |
DWI_ANALYSIS_DIR = "/speechlab/subjects/pool_dwi/"
BASE_TRACULA_CFG = "/speechlab/software/pool_dwi/base.cfg"
# FS_SUBJECTS_DIR = "/speechlab/subjects/pool"
FIX_BVECS_SCRIPT = "/speechlab/software/pool_dwi/fix_bvecs.py"
# INIT_FLIRT_MATS must be an array
INIT_FLIRT_MATS = ["/speechlab/software/pool_dwi/_defaults/d2a.mat"]
#=== FS auto parcellation paradigms ===#
# name: name of the parcellation
# gcs: classifier gcs file name, use {hemi} for hemisphere name
# ctab: color table name
SURF_CLASSIFIERS = {"name": ["aparc12"],
                    "gcs": ["/speechlab/software/pool_dwi/_defaults/{hemi}.slFRSatlas17.gcs"],
                    "ctab": ["/speechlab/software/pool_dwi/_defaults/slFRS17.ctab"],
                    "list_py": ["aparc12"]}
# In order to ensure a homogeneous parcellation across all subjects,
# the script will stipulate that the specified version of FreeSurfer
# is used for mris_ca_label
PARC_FS_VER = "5.0.0"
TRACULA_DOEDDY = 0
TRACULA_DOROTVECS = 0
TRACULA_THR_BET = 0.3
# Depths at which average WM FA will be extracted #
# This may also be useful for other analyses in the future. #
# Must be positive integers <=5. #
WM_DEPTHS = [1, 2, 3]
# List of tensor measures to be extracted in steps including roi_tensor
# Need to match dtifit_*.nii.gz file names in dmri directory (generated by dtifit)
TENSOR_MEASURES = ["FA", "MD"]
SUBJECT_MASTER_CODE_FILE = "/speechlab/2/jtour/SID/Master_Code.xls"
FNIRT_TEMPLATE = "/speechlab/software/fsl64/data/standard/MNI152_T1_1mm_brain.nii.gz"
FNIRT_CNF = "/speechlab/5/scai/RHY/rhythm-fmri/fmri_code/MNI152_T1_1mm_brain.cnf"
SUBCORT_LABEL_LIST = "ASAP_subcortical_labels.txt"
SUBCORT_TRACT_SEEDS = ["Left-Thalamus-Proper", "Left-Caudate",
"Left-Putamen", "Left-Pallidum",
"Right-Thalamus-Proper", "Right-Caudate",
"Right-Putamen", "Right-Pallidum"]
if __name__ == "__main__":
from scipy.io import savemat
from scai_utils import check_file, info_log
matFN = __file__.replace(".py", ".mat")
analysisSettings = {"DWI_ANALYSIS_DIR": DWI_ANALYSIS_DIR,
"BASE_TRACULA_CFG": BASE_TRACULA_CFG,
"SUBJECT_MASTER_CODE_FILE": SUBJECT_MASTER_CODE_FILE}
savemat(matFN, analysisSettings)
check_file(matFN)
info_log("Saved analysis settings to mat file: %s" % matFN)
| {
"content_hash": "3eb056361a213399263bc0f4fa264bc7",
"timestamp": "",
"source": "github",
"line_count": 59,
"max_line_length": 96,
"avg_line_length": 40.59322033898305,
"alnum_prop": 0.6613778705636744,
"repo_name": "shanqing-cai/dwi_workflow",
"id": "0050b8670482397d93880e5c378d07c7abf19cf7",
"size": "2395",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "dwi_analysis_settings.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "Matlab",
"bytes": "48324"
},
{
"name": "Python",
"bytes": "133379"
},
{
"name": "Shell",
"bytes": "1800"
}
],
"symlink_target": ""
} |
def square(n):
# Return `n` times `n`.
return n * n
# TODO: Talk about automated testing
# TODO: Show what happens if the function returned `n`.
# TODO: Show what happens if the function returned `n + n`.
# TODO: Show what happens if the function returned `n ** n`.
if square(0) == 0:
print('Thumbs up.')
else:
print('Thumbs down.')
if square(1) == 1:
print('Thumbs up.')
else:
print('Thumbs down.')
if square(2) == 4:
print('Thumbs up.')
else:
print('Thumbs down.')
if square(3) == 9:
print('Thumbs up.')
else:
print('Thumbs down.')
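# A minimal sketch (not part of the original lesson) of how the repeated
# if/else checks above could be automated with assertions:
for n, expected in [(0, 0), (1, 1), (2, 4), (3, 9)]:
    assert square(n) == expected, 'square of {} should be {}'.format(n, expected)
print('All automated checks passed.')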
| {
"content_hash": "c9542ecb98be0955bad7fc53e5367044",
"timestamp": "",
"source": "github",
"line_count": 27,
"max_line_length": 60,
"avg_line_length": 20.296296296296298,
"alnum_prop": 0.6441605839416058,
"repo_name": "regnart-tech-club/programming-concepts",
"id": "6a02812134c442cd3c53d51ba541d9c644ea249b",
"size": "689",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "course-2:combining-building-blocks/subject-2:functions/topic-1:Defining and calling functions/lesson-7.1:Square multiplication solution.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "40873"
}
],
"symlink_target": ""
} |
from django import forms
from cms.models import Page
from cms.utils.urlutils import static_with_version
from .wizard_pool import entry_choices, wizard_pool
def step2_form_factory(mixin_cls, entry_form_class, attrs=None):
"""
    Combines a form mixin with a form class and sets attrs on the resulting class.
This is used to provide a common behavior/logic for all wizard content
forms.
"""
if attrs is None:
attrs = {}
# class name is hardcoded to be consistent with the step 1 form.
# this is meant to be used only in the context of the form wizard.
class_name = 'WizardStep2Form'
meta_class = type(entry_form_class)
FormClass = meta_class(class_name, (mixin_cls, entry_form_class), attrs)
return FormClass
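# Usage sketch (illustrative; ArticleWizardForm is a hypothetical content
# form, not something defined in this module):
#
#     FormClass = step2_form_factory(WizardStep2BaseForm, ArticleWizardForm)
#     form = FormClass(wizard_page=page, wizard_user=request.user,
#                      wizard_language='en')
#
# The resulting class behaves like ArticleWizardForm but pops the wizard_*
# kwargs and exposes required_fields/optional_fields via BaseFormMixin.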
class BaseFormMixin(object):
has_separate_optional_fields = False
def __init__(self, *args, **kwargs):
self.page = kwargs.pop('wizard_page', None)
self.user = kwargs.pop('wizard_user', None)
self.language_code = kwargs.pop('wizard_language')
super(BaseFormMixin, self).__init__(*args, **kwargs)
@property
def required_fields(self):
return [f for f in self.visible_fields() if f.field.required]
@property
def optional_fields(self):
return [f for f in self.visible_fields() if not f.field.required]
class WizardOptionWidgets(forms.RadioSelect):
template_name = 'cms/wizards/wizardoptionwidget.html'
def create_option(self, name, value, label, selected, index, subindex=None, attrs=None):
wizard = wizard_pool.get_entry(value)
attrs.update(wizard.widget_attributes)
return super(WizardOptionWidgets, self).create_option(name, value, label, selected, index, subindex, attrs)
class WizardStep1Form(BaseFormMixin, forms.Form):
class Media:
css = {
'all': (
static_with_version('cms/css/cms.wizard.css'),
)
}
js = (
static_with_version('cms/js/dist/bundle.admin.base.min.js'),
'cms/js/modules/cms.wizards.js',
)
page = forms.ModelChoiceField(
queryset=Page.objects.all(),
required=False,
widget=forms.HiddenInput
)
language = forms.CharField(widget=forms.HiddenInput)
entry = forms.ChoiceField(choices=[], widget=WizardOptionWidgets())
def __init__(self, *args, **kwargs):
super(WizardStep1Form, self).__init__(*args, **kwargs)
# set the entries here to get an up to date list of entries.
self.fields['entry'].choices = entry_choices(user=self.user,
page=self.page)
def get_wizard_entries(self):
for entry in self['entry']:
wizard = wizard_pool.get_entry(entry.choice_value)
yield(entry, wizard)
class WizardStep2BaseForm(BaseFormMixin):
user = None
| {
"content_hash": "c818ef839cc4ba6c8a977c848cbce92a",
"timestamp": "",
"source": "github",
"line_count": 87,
"max_line_length": 115,
"avg_line_length": 33.172413793103445,
"alnum_prop": 0.6420651420651421,
"repo_name": "timgraham/django-cms",
"id": "426de8e1c0fd1fbc1e245697a920433e6eb3cbcc",
"size": "2910",
"binary": false,
"copies": "2",
"ref": "refs/heads/develop",
"path": "cms/wizards/forms.py",
"mode": "33261",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "105760"
},
{
"name": "JavaScript",
"bytes": "672964"
},
{
"name": "PHP",
"bytes": "2156"
},
{
"name": "Python",
"bytes": "2258825"
},
{
"name": "XSLT",
"bytes": "5122"
}
],
"symlink_target": ""
} |
from oslo_config import cfg
COMMON_OPTIONS = [
cfg.ListOpt(
'gearman',
default=['localhost:4730'],
help=('List of gearman servers in HOST:PORT format')
),
cfg.ListOpt(
'db_uris',
default=['influx://localhost:8086/crawler'],
        help=('List of InfluxDB servers '
              'in schema://user:password@host:port/db_name format')
),
]
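# Usage sketch (a minimal, assumed wiring of these options through the
# standard oslo.config entry point; the real project may register them
# differently):
#
#     import sys
#     from oslo_config import cfg
#     cfg.CONF.register_opts(COMMON_OPTIONS)
#     cfg.CONF(sys.argv[1:], project='crawler')
#     print(cfg.CONF.gearman)   # e.g. ['localhost:4730']
#     print(cfg.CONF.db_uris)   # e.g. ['influx://localhost:8086/crawler']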
| {
"content_hash": "6032d7c994585cf2e59aed8da4be76db",
"timestamp": "",
"source": "github",
"line_count": 16,
"max_line_length": 64,
"avg_line_length": 24.625,
"alnum_prop": 0.5761421319796954,
"repo_name": "dukov/simplecrawler",
"id": "b43970993bbfe743a7bc2e2d61ebea272b953185",
"size": "940",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "crawler/config.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "21533"
},
{
"name": "Shell",
"bytes": "621"
}
],
"symlink_target": ""
} |
from __future__ import print_function
import collections
from datetime import datetime
import functools
import os
import sys
import pytz
import re
import matplotlib
matplotlib.use('Agg')
import pylab
pylab.rcParams['figure.figsize'] = (35.0, 12.0)
import pandas as pd
from bcbio import utils
from bcbio.graph.collectl import load_collectl
def get_bcbio_nodes(path):
"""Fetch the local nodes (-c local) that contain collectl files from
the bcbio log file.
    :returns: A dict keyed by the unique (non-FQDN) local hostnames
where collectl raw logs can be found.
"""
with open(path, 'r') as file_handle:
hosts = collections.defaultdict(dict)
for line in file_handle:
matches = re.search(r'\]\s([^:]+):', line)
if not matches:
continue
hosts[matches.group(1)]
return hosts
def get_bcbio_timings(path):
"""Fetch timing information from a bcbio log file."""
with open(path, 'r') as file_handle:
steps = {}
for line in file_handle:
matches = re.search(r'^\[([^\]]+)\] ([^:]+: .*)', line)
if not matches:
continue
tstamp = matches.group(1)
msg = matches.group(2)
if not msg.find('Timing: ') >= 0:
continue
when = datetime.strptime(tstamp, '%Y-%m-%dT%H:%MZ').replace(
tzinfo=pytz.timezone('UTC'))
step = msg.split(":")[-1].strip()
steps[when] = step
return steps
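# Illustration (a hypothetical log line; the format is inferred from the
# regex and strptime pattern above):
#     "[2016-01-15T10:30Z] localhost: Timing: alignment"
# adds {datetime(2016, 1, 15, 10, 30, tzinfo=UTC): 'alignment'} to steps.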
def this_and_prev(iterable):
"""Walk an iterable, returning the current and previous items
as a two-tuple."""
try:
item = next(iterable)
while True:
next_item = next(iterable)
yield item, next_item
item = next_item
except StopIteration:
return
def delta_from_prev(prev_values, tstamps, value):
try:
prev_val = next(prev_values)
cur_tstamp, prev_tstamp = next(tstamps)
except StopIteration:
return 0
# Take the difference from the previous value and divide by the interval
# since the previous sample, so we always return values in units/second.
return (value - prev_val) / (cur_tstamp - prev_tstamp).seconds
def calc_deltas(data_frame, series=None):
"""Many of collectl's data values are cumulative (monotonically
increasing), so subtract the previous value to determine the value
for the current interval.
"""
series = series or []
data_frame = data_frame.sort(ascending=False)
for s in series:
prev_values = iter(data_frame[s])
# Burn the first value, so the first row we call delta_from_prev()
# for gets its previous value from the second row in the series,
# and so on.
next(prev_values)
data_frame[s] = data_frame[s].apply(functools.partial(
delta_from_prev, iter(prev_values),
this_and_prev(iter(data_frame.index))))
return data_frame
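# Worked example (made-up numbers): if a cumulative counter reads
# [100, 400, 1000] at timestamps 60 s apart, the deltas computed by
# delta_from_prev() become rates of (400-100)/60 = 5/s and (1000-400)/60 = 10/s.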
def remove_outliers(series, stddev):
"""Remove the outliers from a series."""
return series[(series - series.mean()).abs() < stddev * series.std()]
def prep_for_graph(data_frame, series=None, delta_series=None, smoothing=None,
outlier_stddev=None):
"""Prepare a dataframe for graphing by calculating deltas for
series that need them, resampling, and removing outliers.
"""
series = series or []
delta_series = delta_series or []
graph = calc_deltas(data_frame, delta_series)
for s in series + delta_series:
if smoothing:
graph[s] = graph[s].resample(smoothing)
if outlier_stddev:
graph[s] = remove_outliers(graph[s], outlier_stddev)
return graph[series + delta_series]
def add_common_plot_features(plot, steps):
"""Add plot features common to all plots, such as bcbio step
information.
"""
plot.yaxis.set_tick_params(labelright=True)
plot.set_xlabel('')
ymax = plot.get_ylim()[1]
ticks = {}
for tstamp, step in steps.iteritems():
if step == 'finished':
continue
plot.vlines(tstamp, 0, ymax, linestyles='dashed')
ticks[tstamp] = step
tick_kvs = sorted(ticks.iteritems())
top_axis = plot.twiny()
top_axis.set_xlim(*plot.get_xlim())
top_axis.set_xticks([k for k, v in tick_kvs])
top_axis.set_xticklabels([v for k, v in tick_kvs],
rotation=45, ha='left', size=16)
plot.set_ylim(0)
return plot
def graph_cpu(data_frame, steps, num_cpus):
graph = prep_for_graph(
data_frame, delta_series=['cpu_user', 'cpu_sys', 'cpu_wait'])
graph['cpu_user'] /= 100.0
graph['cpu_sys'] /= 100.0
graph['cpu_wait'] /= 100.0
plot = graph.plot()
plot.set_ylabel('CPU core usage')
plot.set_ylim(0, num_cpus)
add_common_plot_features(plot, steps)
return plot
def graph_net_bytes(data_frame, steps, ifaces):
series = []
for iface in ifaces:
series.extend(['{}_rbyte'.format(iface), '{}_tbyte'.format(iface)])
graph = prep_for_graph(data_frame, delta_series=series)
for iface in ifaces:
old_series = '{}_rbyte'.format(iface)
new_series = '{}_receive'.format(iface)
graph[new_series] = graph[old_series] * 8 / 1024 / 1024
del graph[old_series]
old_series = '{}_tbyte'.format(iface)
new_series = '{}_transmit'.format(iface)
graph[new_series] = graph[old_series] * 8 / 1024 / 1024
del graph[old_series]
plot = graph.plot()
plot.set_ylabel('mbits/s')
add_common_plot_features(plot, steps)
return plot
def graph_net_pkts(data_frame, steps, ifaces):
series = []
for iface in ifaces:
series.extend(['{}_rpkt'.format(iface), '{}_tpkt'.format(iface)])
graph = prep_for_graph(data_frame, delta_series=series)
plot = graph.plot()
plot.set_ylabel('packets/s')
add_common_plot_features(plot, steps)
return plot
def graph_memory(data_frame, steps, total_mem):
graph = prep_for_graph(
data_frame, series=['mem_total', 'mem_free', 'mem_buffers',
'mem_cached'])
free_memory = graph['mem_free'] + graph['mem_buffers'] + \
graph['mem_cached']
graph = (graph['mem_total'] - free_memory) / 1024 / 1024
plot = graph.plot()
plot.set_ylabel('gbytes')
plot.set_ylim(0, total_mem)
add_common_plot_features(plot, steps)
return plot
def graph_disk_io(data_frame, steps, disks):
series = []
for disk in disks:
series.extend([
'{}_sectors_read'.format(disk),
'{}_sectors_written'.format(disk),
])
graph = prep_for_graph(data_frame, delta_series=series, outlier_stddev=2)
for disk in disks:
old_series = '{}_sectors_read'.format(disk)
new_series = '{}_read'.format(disk)
graph[new_series] = graph[old_series] * 512 / 1024 / 1024
del graph[old_series]
old_series = '{}_sectors_written'.format(disk)
new_series = '{}_write'.format(disk)
graph[new_series] = graph[old_series] * 512 / 1024 / 1024
del graph[old_series]
plot = graph.plot()
plot.set_ylabel('mbytes/s')
add_common_plot_features(plot, steps)
return plot
def log_time_frame(bcbio_log):
"""The bcbio running time frame.
:return: an instance of :class collections.namedtuple:
        with the following fields: start, end and steps
"""
output = collections.namedtuple("Time", ["start", "end", "steps"])
bcbio_timings = get_bcbio_timings(bcbio_log)
return output(min(bcbio_timings), max(bcbio_timings), bcbio_timings)
def rawfile_within_timeframe(rawfile, timeframe):
""" Checks whether the given raw filename timestamp falls within [start, end] timeframe.
"""
matches = re.search(r'-(\d{8})-', rawfile)
if matches:
ftime = datetime.strptime(matches.group(1), "%Y%m%d")
ftime = pytz.utc.localize(ftime)
return ftime >= timeframe[0] and ftime <= timeframe[1]
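# Example (hypothetical filename): for 'node01-20150601-120000.raw.gz' the
# date stamp 20150601 is parsed as a UTC datetime and compared against the
# bcbio run's [start, end] window; files without a date stamp return None.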
def resource_usage(bcbio_log, cluster, rawdir, verbose):
"""Generate system statistics from bcbio runs.
Parse the obtained files and put the information in
a :class pandas.DataFrame:.
:param bcbio_log: local path to bcbio log file written by the run
:param cluster:
:param rawdir: directory to put raw data files
:param verbose: increase verbosity
:return: a tuple with three dictionaries, the first one contains
an instance of :pandas.DataFrame: for each host, the second one
contains information regarding the hardware configuration and
the last one contains information regarding timing.
:type return: tuple
"""
data_frames = {}
hardware_info = {}
time_frame = log_time_frame(bcbio_log)
for collectl_file in sorted(os.listdir(rawdir)):
if not collectl_file.endswith('.raw.gz'):
continue
# Only load filenames within sampling timerange (gathered from bcbio_log time_frame)
if rawfile_within_timeframe(collectl_file, time_frame):
collectl_path = os.path.join(rawdir, collectl_file)
data, hardware = load_collectl(
collectl_path, time_frame.start, time_frame.end)
if len(data) == 0:
#raise ValueError("No data present in collectl file %s, mismatch in timestamps between raw collectl and log file?", collectl_path)
continue
host = re.sub(r'-\d{8}-\d{6}\.raw\.gz$', '', collectl_file)
hardware_info[host] = hardware
if host not in data_frames:
data_frames[host] = data
else:
data_frames[host] = pd.concat([data_frames[host], data])
return (data_frames, hardware_info, time_frame.steps)
def generate_graphs(data_frames, hardware_info, steps, outdir,
verbose=False):
"""Generate all graphs for a bcbio run."""
for host, data_frame in data_frames.iteritems():
if verbose:
print('Generating CPU graph for {}...'.format(host))
graph = graph_cpu(data_frame, steps, hardware_info[host]['num_cpus'])
graph.get_figure().savefig(
os.path.join(outdir, '{}_cpu.png'.format(host)),
bbox_inches='tight', pad_inches=0.25)
pylab.close()
ifaces = set([series.split('_')[0]
for series in data_frame.keys()
if series.startswith(('eth', 'ib'))])
if verbose:
print('Generating network graphs for {}...'.format(host))
graph = graph_net_bytes(data_frame, steps, ifaces)
graph.get_figure().savefig(
os.path.join(outdir, '{}_net_bytes.png'.format(host)),
bbox_inches='tight', pad_inches=0.25)
pylab.close()
graph = graph_net_pkts(data_frame, steps, ifaces)
graph.get_figure().savefig(
os.path.join(outdir, '{}_net_pkts.png'.format(host)),
bbox_inches='tight', pad_inches=0.25)
pylab.close()
if verbose:
print('Generating memory graph for {}...'.format(host))
graph = graph_memory(data_frame, steps, hardware_info[host]["memory"])
graph.get_figure().savefig(
os.path.join(outdir, '{}_memory.png'.format(host)),
bbox_inches='tight', pad_inches=0.25)
pylab.close()
if verbose:
print('Generating storage I/O graph for {}...'.format(host))
drives = set([
series.split('_')[0]
for series in data_frame.keys()
if series.startswith(('sd', 'vd', 'hd', 'xvd'))
])
graph = graph_disk_io(data_frame, steps, drives)
graph.get_figure().savefig(
os.path.join(outdir, '{}_disk_io.png'.format(host)),
bbox_inches='tight', pad_inches=0.25)
pylab.close()
def add_subparser(subparsers):
parser = subparsers.add_parser(
"graph",
help=("Generate system graphs (CPU/memory/network/disk I/O "
"consumption) from bcbio runs"))
parser.add_argument(
"log",
help="Local path to bcbio log file written by the run.")
parser.add_argument(
"-o", "--outdir", default="monitoring/graphs",
help="Directory to write graphs to.")
parser.add_argument(
"-r", "--rawdir", default="monitoring/collectl", required=True,
help="Directory to put raw collectl data files.")
parser.add_argument(
"-v", "--verbose", action="store_true", default=False,
help="Emit verbose output")
return parser
| {
"content_hash": "f75f543bc8b5288cae30e98741fb1e35",
"timestamp": "",
"source": "github",
"line_count": 393,
"max_line_length": 146,
"avg_line_length": 32.38422391857507,
"alnum_prop": 0.6006914433880726,
"repo_name": "elkingtonmcb/bcbio-nextgen",
"id": "a33cb5a8a19035bf8800e8a770699748436c6fd5",
"size": "12727",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "bcbio/graph/graph.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "1485192"
},
{
"name": "Ruby",
"bytes": "624"
},
{
"name": "Shell",
"bytes": "14156"
}
],
"symlink_target": ""
} |
import sys
from textwrap import dedent
import py
import pytest
import tox
import tox.config
from tox.config import * # noqa
from tox.venv import VirtualEnv
class TestVenvConfig:
def test_config_parsing_minimal(self, tmpdir, newconfig):
config = newconfig([], """
[testenv:py1]
""")
assert len(config.envconfigs) == 1
assert config.toxworkdir.realpath() == tmpdir.join(".tox").realpath()
assert config.envconfigs['py1'].basepython == sys.executable
assert config.envconfigs['py1'].deps == []
assert config.envconfigs['py1'].platform == ".*"
def test_config_parsing_multienv(self, tmpdir, newconfig):
config = newconfig([], """
[tox]
toxworkdir = %s
indexserver =
xyz = xyz_repo
[testenv:py1]
deps=hello
[testenv:py2]
deps=
world1
:xyz:http://hello/world
""" % (tmpdir, ))
assert config.toxworkdir == tmpdir
assert len(config.envconfigs) == 2
assert config.envconfigs['py1'].envdir == tmpdir.join("py1")
dep = config.envconfigs['py1'].deps[0]
assert dep.name == "hello"
assert dep.indexserver is None
assert config.envconfigs['py2'].envdir == tmpdir.join("py2")
dep1, dep2 = config.envconfigs['py2'].deps
assert dep1.name == "world1"
assert dep2.name == "http://hello/world"
assert dep2.indexserver.name == "xyz"
assert dep2.indexserver.url == "xyz_repo"
def test_envdir_set_manually(self, tmpdir, newconfig):
config = newconfig([], """
[testenv:devenv]
envdir = devenv
""")
envconfig = config.envconfigs['devenv']
assert envconfig.envdir == tmpdir.join('devenv')
def test_envdir_set_manually_with_substitutions(self, tmpdir, newconfig):
config = newconfig([], """
[testenv:devenv]
envdir = {toxworkdir}/foobar
""")
envconfig = config.envconfigs['devenv']
assert envconfig.envdir == config.toxworkdir.join('foobar')
def test_force_dep_version(self, initproj):
"""
Make sure we can override dependencies configured in tox.ini when using the command line
option --force-dep.
"""
initproj("example123-0.5", filedefs={
'tox.ini': '''
[tox]
[testenv]
deps=
dep1==1.0
dep2>=2.0
dep3
dep4==4.0
'''
})
config = parseconfig(
['--force-dep=dep1==1.5', '--force-dep=dep2==2.1',
'--force-dep=dep3==3.0'])
assert config.option.force_dep == [
'dep1==1.5', 'dep2==2.1', 'dep3==3.0']
assert [str(x) for x in config.envconfigs['python'].deps] == [
'dep1==1.5', 'dep2==2.1', 'dep3==3.0', 'dep4==4.0',
]
def test_is_same_dep(self):
"""
        Ensure parseini._is_same_dep works correctly on a few samples.
"""
assert DepOption._is_same_dep('pkg_hello-world3==1.0', 'pkg_hello-world3')
assert DepOption._is_same_dep('pkg_hello-world3==1.0', 'pkg_hello-world3>=2.0')
assert DepOption._is_same_dep('pkg_hello-world3==1.0', 'pkg_hello-world3>2.0')
assert DepOption._is_same_dep('pkg_hello-world3==1.0', 'pkg_hello-world3<2.0')
assert DepOption._is_same_dep('pkg_hello-world3==1.0', 'pkg_hello-world3<=2.0')
assert not DepOption._is_same_dep('pkg_hello-world3==1.0', 'otherpkg>=2.0')
class TestConfigPlatform:
def test_config_parse_platform(self, newconfig):
config = newconfig([], """
[testenv:py1]
platform = linux2
""")
assert len(config.envconfigs) == 1
assert config.envconfigs['py1'].platform == "linux2"
def test_config_parse_platform_rex(self, newconfig, mocksession, monkeypatch):
config = newconfig([], """
[testenv:py1]
platform = a123|b123
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['py1']
venv = VirtualEnv(envconfig, session=mocksession)
assert not venv.matching_platform()
monkeypatch.setattr(sys, "platform", "a123")
assert venv.matching_platform()
monkeypatch.setattr(sys, "platform", "b123")
assert venv.matching_platform()
monkeypatch.undo()
assert not venv.matching_platform()
@pytest.mark.parametrize("plat", ["win", "lin", ])
def test_config_parse_platform_with_factors(self, newconfig, plat, monkeypatch):
monkeypatch.setattr(sys, "platform", "win32")
config = newconfig([], """
[tox]
envlist = py27-{win,lin,osx}
[testenv]
platform =
win: win32
lin: linux2
""")
assert len(config.envconfigs) == 3
platform = config.envconfigs['py27-' + plat].platform
expected = {"win": "win32", "lin": "linux2"}.get(plat)
assert platform == expected
class TestConfigPackage:
def test_defaults(self, tmpdir, newconfig):
config = newconfig([], "")
assert config.setupdir.realpath() == tmpdir.realpath()
assert config.toxworkdir.realpath() == tmpdir.join(".tox").realpath()
envconfig = config.envconfigs['python']
assert envconfig.args_are_paths
assert not envconfig.recreate
assert not envconfig.pip_pre
def test_defaults_distshare(self, tmpdir, newconfig):
config = newconfig([], "")
assert config.distshare == config.homedir.join(".tox", "distshare")
def test_defaults_changed_dir(self, tmpdir, newconfig):
tmpdir.mkdir("abc").chdir()
config = newconfig([], "")
assert config.setupdir.realpath() == tmpdir.realpath()
assert config.toxworkdir.realpath() == tmpdir.join(".tox").realpath()
def test_project_paths(self, tmpdir, newconfig):
config = newconfig("""
[tox]
toxworkdir=%s
""" % tmpdir)
assert config.toxworkdir == tmpdir
class TestParseconfig:
def test_search_parents(self, tmpdir):
b = tmpdir.mkdir("a").mkdir("b")
toxinipath = tmpdir.ensure("tox.ini")
old = b.chdir()
try:
config = parseconfig([])
finally:
old.chdir()
assert config.toxinipath == toxinipath
def test_get_homedir(monkeypatch):
monkeypatch.setattr(py.path.local, "_gethomedir",
classmethod(lambda x: {}[1]))
assert not get_homedir()
monkeypatch.setattr(py.path.local, "_gethomedir",
classmethod(lambda x: 0 / 0))
assert not get_homedir()
monkeypatch.setattr(py.path.local, "_gethomedir",
classmethod(lambda x: "123"))
assert get_homedir() == "123"
class TestGetcontextname:
def test_blank(self, monkeypatch):
monkeypatch.setattr(os, "environ", {})
assert getcontextname() is None
def test_jenkins(self, monkeypatch):
monkeypatch.setattr(os, "environ", {"JENKINS_URL": "xyz"})
assert getcontextname() == "jenkins"
def test_hudson_legacy(self, monkeypatch):
monkeypatch.setattr(os, "environ", {"HUDSON_URL": "xyz"})
assert getcontextname() == "jenkins"
class TestIniParserAgainstCommandsKey:
"""Test parsing commands with substitutions"""
def test_command_substitution_from_other_section(self, newconfig):
config = newconfig("""
[section]
key = whatever
[testenv]
commands =
echo {[section]key}
""")
reader = SectionReader("testenv", config._cfg)
x = reader.getargvlist("commands")
assert x == [["echo", "whatever"]]
    def test_command_substitution_from_other_section_multiline(self, newconfig):
        """Ensure that multiline commands referenced from another section are
        injected as multiple commands."""
config = newconfig("""
[section]
commands =
cmd1 param11 param12
# comment is omitted
cmd2 param21 \
param22
[base]
commands = cmd 1 \
2 3 4
cmd 2
[testenv]
commands =
{[section]commands}
{[section]commands}
# comment is omitted
echo {[base]commands}
""")
reader = SectionReader("testenv", config._cfg)
x = reader.getargvlist("commands")
assert x == [
"cmd1 param11 param12".split(),
"cmd2 param21 param22".split(),
"cmd1 param11 param12".split(),
"cmd2 param21 param22".split(),
["echo", "cmd", "1", "2", "3", "4", "cmd", "2"],
]
class TestIniParser:
def test_getstring_single(self, tmpdir, newconfig):
config = newconfig("""
[section]
key=value
""")
reader = SectionReader("section", config._cfg)
x = reader.getstring("key")
assert x == "value"
assert not reader.getstring("hello")
x = reader.getstring("hello", "world")
assert x == "world"
def test_missing_substitution(self, tmpdir, newconfig):
config = newconfig("""
[mydefault]
key2={xyz}
""")
reader = SectionReader("mydefault", config._cfg, fallbacksections=['mydefault'])
assert reader is not None
with py.test.raises(tox.exception.ConfigError):
reader.getstring("key2")
def test_getstring_fallback_sections(self, tmpdir, newconfig):
config = newconfig("""
[mydefault]
key2=value2
[section]
key=value
""")
reader = SectionReader("section", config._cfg, fallbacksections=['mydefault'])
x = reader.getstring("key2")
assert x == "value2"
x = reader.getstring("key3")
assert not x
x = reader.getstring("key3", "world")
assert x == "world"
def test_getstring_substitution(self, tmpdir, newconfig):
config = newconfig("""
[mydefault]
key2={value2}
[section]
key={value}
""")
reader = SectionReader("section", config._cfg, fallbacksections=['mydefault'])
reader.addsubstitutions(value="newvalue", value2="newvalue2")
x = reader.getstring("key2")
assert x == "newvalue2"
x = reader.getstring("key3")
assert not x
x = reader.getstring("key3", "{value2}")
assert x == "newvalue2"
def test_getlist(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
item1
{item2}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(item1="not", item2="grr")
x = reader.getlist("key2")
assert x == ['item1', 'grr']
def test_getdict(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
key1=item1
key2={item2}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(item1="not", item2="grr")
x = reader.getdict("key2")
assert 'key1' in x
assert 'key2' in x
assert x['key1'] == 'item1'
assert x['key2'] == 'grr'
x = reader.getdict("key3", {1: 2})
assert x == {1: 2}
def test_getstring_environment_substitution(self, monkeypatch, newconfig):
monkeypatch.setenv("KEY1", "hello")
config = newconfig("""
[section]
key1={env:KEY1}
key2={env:KEY2}
""")
reader = SectionReader("section", config._cfg)
x = reader.getstring("key1")
assert x == "hello"
with py.test.raises(tox.exception.ConfigError):
reader.getstring("key2")
def test_getstring_environment_substitution_with_default(self, monkeypatch, newconfig):
monkeypatch.setenv("KEY1", "hello")
config = newconfig("""
[section]
key1={env:KEY1:DEFAULT_VALUE}
key2={env:KEY2:DEFAULT_VALUE}
key3={env:KEY3:}
""")
reader = SectionReader("section", config._cfg)
x = reader.getstring("key1")
assert x == "hello"
x = reader.getstring("key2")
assert x == "DEFAULT_VALUE"
x = reader.getstring("key3")
assert x == ""
def test_value_matches_section_substituion(self):
assert is_section_substitution("{[setup]commands}")
def test_value_doesn_match_section_substitution(self):
assert is_section_substitution("{[ ]commands}") is None
assert is_section_substitution("{[setup]}") is None
assert is_section_substitution("{[setup] commands}") is None
def test_getstring_other_section_substitution(self, newconfig):
config = newconfig("""
[section]
key = rue
[testenv]
key = t{[section]key}
""")
reader = SectionReader("testenv", config._cfg)
x = reader.getstring("key")
assert x == "true"
def test_argvlist(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
cmd1 {item1} {item2}
cmd2 {item2}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(item1="with space", item2="grr")
# py.test.raises(tox.exception.ConfigError,
# "reader.getargvlist('key1')")
assert reader.getargvlist('key1') == []
x = reader.getargvlist("key2")
assert x == [["cmd1", "with", "space", "grr"],
["cmd2", "grr"]]
def test_argvlist_windows_escaping(self, tmpdir, newconfig):
config = newconfig("""
[section]
comm = py.test {posargs}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions([r"hello\this"])
argv = reader.getargv("comm")
assert argv == ["py.test", "hello\\this"]
def test_argvlist_multiline(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
cmd1 {item1} \ # a comment
{item2}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(item1="with space", item2="grr")
# py.test.raises(tox.exception.ConfigError,
# "reader.getargvlist('key1')")
assert reader.getargvlist('key1') == []
x = reader.getargvlist("key2")
assert x == [["cmd1", "with", "space", "grr"]]
def test_argvlist_quoting_in_command(self, tmpdir, newconfig):
config = newconfig("""
[section]
key1=
cmd1 'with space' \ # a comment
'after the comment'
""")
reader = SectionReader("section", config._cfg)
x = reader.getargvlist("key1")
assert x == [["cmd1", "with space", "after the comment"]]
def test_argvlist_positional_substitution(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
cmd1 []
cmd2 {posargs:{item2} \
other}
""")
reader = SectionReader("section", config._cfg)
posargs = ['hello', 'world']
reader.addsubstitutions(posargs, item2="value2")
# py.test.raises(tox.exception.ConfigError,
# "reader.getargvlist('key1')")
assert reader.getargvlist('key1') == []
argvlist = reader.getargvlist("key2")
assert argvlist[0] == ["cmd1"] + posargs
assert argvlist[1] == ["cmd2"] + posargs
reader = SectionReader("section", config._cfg)
reader.addsubstitutions([], item2="value2")
# py.test.raises(tox.exception.ConfigError,
# "reader.getargvlist('key1')")
assert reader.getargvlist('key1') == []
argvlist = reader.getargvlist("key2")
assert argvlist[0] == ["cmd1"]
assert argvlist[1] == ["cmd2", "value2", "other"]
def test_argvlist_quoted_posargs(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
cmd1 --foo-args='{posargs}'
cmd2 -f '{posargs}'
cmd3 -f {posargs}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(["foo", "bar"])
assert reader.getargvlist('key1') == []
x = reader.getargvlist("key2")
assert x == [["cmd1", "--foo-args=foo bar"],
["cmd2", "-f", "foo bar"],
["cmd3", "-f", "foo", "bar"]]
def test_argvlist_posargs_with_quotes(self, tmpdir, newconfig):
config = newconfig("""
[section]
key2=
cmd1 -f {posargs}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(["foo", "'bar", "baz'"])
assert reader.getargvlist('key1') == []
x = reader.getargvlist("key2")
assert x == [["cmd1", "-f", "foo", "bar baz"]]
def test_positional_arguments_are_only_replaced_when_standing_alone(self, tmpdir, newconfig):
config = newconfig("""
[section]
key=
cmd0 []
cmd1 -m '[abc]'
cmd2 -m '\'something\'' []
cmd3 something[]else
""")
reader = SectionReader("section", config._cfg)
posargs = ['hello', 'world']
reader.addsubstitutions(posargs)
argvlist = reader.getargvlist('key')
assert argvlist[0] == ['cmd0'] + posargs
assert argvlist[1] == ['cmd1', '-m', '[abc]']
assert argvlist[2] == ['cmd2', '-m', "something"] + posargs
assert argvlist[3] == ['cmd3', 'something[]else']
def test_substitution_with_multiple_words(self, newconfig):
inisource = """
[section]
key = py.test -n5 --junitxml={envlogdir}/junit-{envname}.xml []
"""
config = newconfig(inisource)
reader = SectionReader("section", config._cfg)
posargs = ['hello', 'world']
reader.addsubstitutions(posargs, envlogdir='ENV_LOG_DIR', envname='ENV_NAME')
expected = [
'py.test', '-n5', '--junitxml=ENV_LOG_DIR/junit-ENV_NAME.xml', 'hello', 'world'
]
assert reader.getargvlist('key')[0] == expected
def test_getargv(self, newconfig):
config = newconfig("""
[section]
key=some command "with quoting"
""")
reader = SectionReader("section", config._cfg)
expected = ['some', 'command', 'with quoting']
assert reader.getargv('key') == expected
def test_getpath(self, tmpdir, newconfig):
config = newconfig("""
[section]
path1={HELLO}
""")
reader = SectionReader("section", config._cfg)
reader.addsubstitutions(toxinidir=tmpdir, HELLO="mypath")
x = reader.getpath("path1", tmpdir)
assert x == tmpdir.join("mypath")
def test_getbool(self, tmpdir, newconfig):
config = newconfig("""
[section]
key1=True
key2=False
key1a=true
key2a=falsE
key5=yes
""")
reader = SectionReader("section", config._cfg)
assert reader.getbool("key1") is True
assert reader.getbool("key1a") is True
assert reader.getbool("key2") is False
assert reader.getbool("key2a") is False
py.test.raises(KeyError, 'reader.getbool("key3")')
py.test.raises(tox.exception.ConfigError, 'reader.getbool("key5")')
class TestConfigTestEnv:
def test_commentchars_issue33(self, tmpdir, newconfig):
config = newconfig("""
[testenv] # hello
deps = http://abc#123
commands=
python -c "x ; y"
""")
envconfig = config.envconfigs["python"]
assert envconfig.deps[0].name == "http://abc#123"
assert envconfig.commands[0] == ["python", "-c", "x ; y"]
def test_defaults(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
commands=
xyz --abc
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
assert envconfig.commands == [["xyz", "--abc"]]
assert envconfig.changedir == config.setupdir
assert envconfig.sitepackages is False
assert envconfig.usedevelop is False
assert envconfig.ignore_errors is False
assert envconfig.envlogdir == envconfig.envdir.join("log")
assert list(envconfig.setenv.keys()) == ['PYTHONHASHSEED']
hashseed = envconfig.setenv['PYTHONHASHSEED']
assert isinstance(hashseed, str)
# The following line checks that hashseed parses to an integer.
int_hashseed = int(hashseed)
# hashseed is random by default, so we can't assert a specific value.
assert int_hashseed > 0
def test_sitepackages_switch(self, tmpdir, newconfig):
config = newconfig(["--sitepackages"], "")
envconfig = config.envconfigs['python']
assert envconfig.sitepackages is True
def test_installpkg_tops_develop(self, newconfig):
config = newconfig(["--installpkg=abc"], """
[testenv]
usedevelop = True
""")
assert not config.envconfigs["python"].usedevelop
def test_specific_command_overrides(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
commands=xyz
[testenv:py]
commands=abc
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['py']
assert envconfig.commands == [["abc"]]
def test_whitelist_externals(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
whitelist_externals = xyz
commands=xyz
[testenv:x]
[testenv:py]
whitelist_externals = xyz2
commands=abc
""")
assert len(config.envconfigs) == 2
envconfig = config.envconfigs['py']
assert envconfig.commands == [["abc"]]
assert envconfig.whitelist_externals == ["xyz2"]
envconfig = config.envconfigs['x']
assert envconfig.whitelist_externals == ["xyz"]
def test_changedir(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
changedir=xyz
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
assert envconfig.changedir.basename == "xyz"
assert envconfig.changedir == config.toxinidir.join("xyz")
def test_ignore_errors(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
ignore_errors=True
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
assert envconfig.ignore_errors is True
def test_envbindir(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
basepython=python
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
assert envconfig.envpython == envconfig.envbindir.join("python")
@pytest.mark.parametrize("bp", ["jython", "pypy", "pypy3"])
def test_envbindir_jython(self, tmpdir, newconfig, bp):
config = newconfig("""
[testenv]
basepython=%s
""" % bp)
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
# on win32 and linux virtualenv uses "bin" for pypy/jython
assert envconfig.envbindir.basename == "bin"
if bp == "jython":
assert envconfig.envpython == envconfig.envbindir.join(bp)
def test_setenv_overrides(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
setenv =
PYTHONPATH = something
ANOTHER_VAL=else
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
assert 'PYTHONPATH' in envconfig.setenv
assert 'ANOTHER_VAL' in envconfig.setenv
assert envconfig.setenv['PYTHONPATH'] == 'something'
assert envconfig.setenv['ANOTHER_VAL'] == 'else'
@pytest.mark.parametrize("plat", ["win32", "linux2"])
def test_passenv_as_multiline_list(self, tmpdir, newconfig, monkeypatch, plat):
monkeypatch.setattr(sys, "platform", plat)
monkeypatch.setenv("A123A", "a")
monkeypatch.setenv("A123B", "b")
monkeypatch.setenv("BX23", "0")
config = newconfig("""
[testenv]
passenv =
A123*
# isolated comment
B?23
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
if plat == "win32":
assert "PATHEXT" in envconfig.passenv
assert "SYSTEMDRIVE" in envconfig.passenv
assert "SYSTEMROOT" in envconfig.passenv
assert "TEMP" in envconfig.passenv
assert "TMP" in envconfig.passenv
else:
assert "TMPDIR" in envconfig.passenv
assert "PATH" in envconfig.passenv
assert "PIP_INDEX_URL" in envconfig.passenv
assert "LANG" in envconfig.passenv
assert "A123A" in envconfig.passenv
assert "A123B" in envconfig.passenv
@pytest.mark.parametrize("plat", ["win32", "linux2"])
def test_passenv_as_space_separated_list(self, tmpdir, newconfig, monkeypatch, plat):
monkeypatch.setattr(sys, "platform", plat)
monkeypatch.setenv("A123A", "a")
monkeypatch.setenv("A123B", "b")
monkeypatch.setenv("BX23", "0")
config = newconfig("""
[testenv]
passenv =
# comment
A123* B?23
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
if plat == "win32":
assert "PATHEXT" in envconfig.passenv
assert "SYSTEMDRIVE" in envconfig.passenv
assert "SYSTEMROOT" in envconfig.passenv
assert "TEMP" in envconfig.passenv
assert "TMP" in envconfig.passenv
else:
assert "TMPDIR" in envconfig.passenv
assert "PATH" in envconfig.passenv
assert "PIP_INDEX_URL" in envconfig.passenv
assert "LANG" in envconfig.passenv
assert "A123A" in envconfig.passenv
assert "A123B" in envconfig.passenv
def test_passenv_with_factor(self, tmpdir, newconfig, monkeypatch):
monkeypatch.setenv("A123A", "a")
monkeypatch.setenv("A123B", "b")
monkeypatch.setenv("A123C", "c")
monkeypatch.setenv("A123D", "d")
monkeypatch.setenv("BX23", "0")
monkeypatch.setenv("CCA43", "3")
monkeypatch.setenv("CB21", "4")
config = newconfig("""
[tox]
envlist = {x1,x2}
[testenv]
passenv =
x1: A123A CC*
x1: CB21
# passed to both environments
A123C
x2: A123B A123D
""")
assert len(config.envconfigs) == 2
assert "A123A" in config.envconfigs["x1"].passenv
assert "A123C" in config.envconfigs["x1"].passenv
assert "CCA43" in config.envconfigs["x1"].passenv
assert "CB21" in config.envconfigs["x1"].passenv
assert "A123B" not in config.envconfigs["x1"].passenv
assert "A123D" not in config.envconfigs["x1"].passenv
assert "BX23" not in config.envconfigs["x1"].passenv
assert "A123B" in config.envconfigs["x2"].passenv
assert "A123D" in config.envconfigs["x2"].passenv
assert "A123A" not in config.envconfigs["x2"].passenv
assert "A123C" in config.envconfigs["x2"].passenv
assert "CCA43" not in config.envconfigs["x2"].passenv
assert "CB21" not in config.envconfigs["x2"].passenv
assert "BX23" not in config.envconfigs["x2"].passenv
def test_passenv_from_global_env(self, tmpdir, newconfig, monkeypatch):
monkeypatch.setenv("A1", "a1")
monkeypatch.setenv("A2", "a2")
monkeypatch.setenv("TOX_TESTENV_PASSENV", "A1")
config = newconfig("""
[testenv]
passenv = A2
""")
env = config.envconfigs["python"]
assert "A1" in env.passenv
assert "A2" in env.passenv
def test_changedir_override(self, tmpdir, newconfig):
config = newconfig("""
[testenv]
changedir=xyz
[testenv:python]
changedir=abc
basepython=python2.6
""")
assert len(config.envconfigs) == 1
envconfig = config.envconfigs['python']
assert envconfig.changedir.basename == "abc"
assert envconfig.changedir == config.setupdir.join("abc")
def test_install_command_setting(self, newconfig):
config = newconfig("""
[testenv]
install_command=some_install {packages}
""")
envconfig = config.envconfigs['python']
assert envconfig.install_command == [
'some_install', '{packages}']
def test_install_command_must_contain_packages(self, newconfig):
py.test.raises(tox.exception.ConfigError, newconfig, """
[testenv]
install_command=pip install
""")
def test_install_command_substitutions(self, newconfig):
config = newconfig("""
[testenv]
install_command=some_install --arg={toxinidir}/foo \
{envname} {opts} {packages}
""")
envconfig = config.envconfigs['python']
assert envconfig.install_command == [
'some_install', '--arg=%s/foo' % config.toxinidir, 'python',
'{opts}', '{packages}']
def test_pip_pre(self, newconfig):
config = newconfig("""
[testenv]
pip_pre=true
""")
envconfig = config.envconfigs['python']
assert envconfig.pip_pre
def test_pip_pre_cmdline_override(self, newconfig):
config = newconfig(
['--pre'],
"""
[testenv]
pip_pre=false
""")
envconfig = config.envconfigs['python']
assert envconfig.pip_pre
def test_downloadcache(self, newconfig, monkeypatch):
monkeypatch.delenv("PIP_DOWNLOAD_CACHE", raising=False)
config = newconfig("""
[testenv]
downloadcache=thecache
""")
envconfig = config.envconfigs['python']
assert envconfig.downloadcache.basename == 'thecache'
def test_downloadcache_env_override(self, newconfig, monkeypatch):
monkeypatch.setenv("PIP_DOWNLOAD_CACHE", 'fromenv')
config = newconfig("""
[testenv]
downloadcache=somepath
""")
envconfig = config.envconfigs['python']
assert envconfig.downloadcache.basename == "fromenv"
def test_downloadcache_only_if_in_config(self, newconfig, tmpdir,
monkeypatch):
monkeypatch.setenv("PIP_DOWNLOAD_CACHE", tmpdir)
config = newconfig('')
envconfig = config.envconfigs['python']
assert not envconfig.downloadcache
def test_simple(tmpdir, newconfig):
config = newconfig("""
[testenv:py26]
basepython=python2.6
[testenv:py27]
basepython=python2.7
""")
assert len(config.envconfigs) == 2
assert "py26" in config.envconfigs
assert "py27" in config.envconfigs
def test_substitution_error(tmpdir, newconfig):
py.test.raises(tox.exception.ConfigError, newconfig, """
[testenv:py27]
basepython={xyz}
""")
def test_substitution_defaults(tmpdir, newconfig):
config = newconfig("""
[testenv:py27]
commands =
{toxinidir}
{toxworkdir}
{envdir}
{envbindir}
{envtmpdir}
{envpython}
{homedir}
{distshare}
{envlogdir}
""")
conf = config.envconfigs['py27']
argv = conf.commands
assert argv[0][0] == config.toxinidir
assert argv[1][0] == config.toxworkdir
assert argv[2][0] == conf.envdir
assert argv[3][0] == conf.envbindir
assert argv[4][0] == conf.envtmpdir
assert argv[5][0] == conf.envpython
assert argv[6][0] == str(config.homedir)
assert argv[7][0] == config.homedir.join(".tox", "distshare")
assert argv[8][0] == conf.envlogdir
def test_substitution_positional(self, newconfig):
inisource = """
[testenv:py27]
commands =
cmd1 [hello] \
world
cmd1 {posargs:hello} \
world
"""
conf = newconfig([], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "[hello]", "world"]
assert argv[1] == ["cmd1", "hello", "world"]
conf = newconfig(['brave', 'new'], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "[hello]", "world"]
assert argv[1] == ["cmd1", "brave", "new", "world"]
def test_substitution_noargs_issue240(self, newconfig):
inisource = """
[testenv]
commands = echo {posargs:foo}
"""
conf = newconfig([""], inisource).envconfigs['python']
argv = conf.commands
assert argv[0] == ["echo"]
def test_posargs_backslashed_or_quoted(self, tmpdir, newconfig):
inisource = """
[testenv:py27]
commands =
echo "\{posargs\}" = {posargs}
echo "posargs = " "{posargs}"
"""
conf = newconfig([], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ['echo', '\\{posargs\\}', '=']
assert argv[1] == ['echo', 'posargs = ', ""]
conf = newconfig(['dog', 'cat'], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ['echo', '\\{posargs\\}', '=', 'dog', 'cat']
assert argv[1] == ['echo', 'posargs = ', 'dog cat']
def test_rewrite_posargs(self, tmpdir, newconfig):
inisource = """
[testenv:py27]
args_are_paths = True
changedir = tests
commands = cmd1 {posargs:hello}
"""
conf = newconfig([], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "hello"]
conf = newconfig(["tests/hello"], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "tests/hello"]
tmpdir.ensure("tests", "hello")
conf = newconfig(["tests/hello"], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "hello"]
def test_rewrite_simple_posargs(self, tmpdir, newconfig):
inisource = """
[testenv:py27]
args_are_paths = True
changedir = tests
commands = cmd1 {posargs}
"""
conf = newconfig([], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1"]
conf = newconfig(["tests/hello"], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "tests/hello"]
tmpdir.ensure("tests", "hello")
conf = newconfig(["tests/hello"], inisource).envconfigs['py27']
argv = conf.commands
assert argv[0] == ["cmd1", "hello"]
def test_take_dependencies_from_other_testenv(self, newconfig):
inisource = """
[testenv]
deps=
pytest
pytest-cov
[testenv:py27]
deps=
{[testenv]deps}
fun
"""
conf = newconfig([], inisource).envconfigs['py27']
packages = [dep.name for dep in conf.deps]
assert packages == ['pytest', 'pytest-cov', 'fun']
def test_take_dependencies_from_other_section(self, newconfig):
inisource = """
[testing:pytest]
deps=
pytest
pytest-cov
[testing:mock]
deps=
mock
[testenv]
deps=
{[testing:pytest]deps}
{[testing:mock]deps}
fun
"""
conf = newconfig([], inisource)
env = conf.envconfigs['python']
packages = [dep.name for dep in env.deps]
assert packages == ['pytest', 'pytest-cov', 'mock', 'fun']
def test_multilevel_substitution(self, newconfig):
inisource = """
[testing:pytest]
deps=
pytest
pytest-cov
[testing:mock]
deps=
mock
[testing]
deps=
{[testing:pytest]deps}
{[testing:mock]deps}
[testenv]
deps=
{[testing]deps}
fun
"""
conf = newconfig([], inisource)
env = conf.envconfigs['python']
packages = [dep.name for dep in env.deps]
assert packages == ['pytest', 'pytest-cov', 'mock', 'fun']
def test_recursive_substitution_cycle_fails(self, newconfig):
inisource = """
[testing:pytest]
deps=
{[testing:mock]deps}
[testing:mock]
deps=
{[testing:pytest]deps}
[testenv]
deps=
{[testing:pytest]deps}
"""
py.test.raises(ValueError, newconfig, [], inisource)
    def test_single_value_from_other_section(self, newconfig, tmpdir):
inisource = """
[common]
changedir = testing
[testenv]
changedir = {[common]changedir}
"""
conf = newconfig([], inisource).envconfigs['python']
assert conf.changedir.basename == 'testing'
assert conf.changedir.dirpath().realpath() == tmpdir.realpath()
def test_factors(self, newconfig):
inisource = """
[tox]
envlist = a-x,b
[testenv]
deps=
dep-all
a: dep-a
b: dep-b
x: dep-x
"""
conf = newconfig([], inisource)
configs = conf.envconfigs
assert [dep.name for dep in configs['a-x'].deps] == \
["dep-all", "dep-a", "dep-x"]
assert [dep.name for dep in configs['b'].deps] == ["dep-all", "dep-b"]
def test_factor_ops(self, newconfig):
inisource = """
[tox]
envlist = {a,b}-{x,y}
[testenv]
deps=
a,b: dep-a-or-b
a-x: dep-a-and-x
{a,b}-y: dep-ab-and-y
"""
configs = newconfig([], inisource).envconfigs
get_deps = lambda env: [dep.name for dep in configs[env].deps]
assert get_deps("a-x") == ["dep-a-or-b", "dep-a-and-x"]
assert get_deps("a-y") == ["dep-a-or-b", "dep-ab-and-y"]
assert get_deps("b-x") == ["dep-a-or-b"]
assert get_deps("b-y") == ["dep-a-or-b", "dep-ab-and-y"]
def test_default_factors(self, newconfig):
inisource = """
[tox]
envlist = py{26,27,33,34}-dep
[testenv]
deps=
dep: dep
"""
conf = newconfig([], inisource)
configs = conf.envconfigs
for name, config in configs.items():
assert config.basepython == 'python%s.%s' % (name[2], name[3])
@pytest.mark.issue188
def test_factors_in_boolean(self, newconfig):
inisource = """
[tox]
envlist = py{27,33}
[testenv]
recreate =
py27: True
"""
configs = newconfig([], inisource).envconfigs
assert configs["py27"].recreate
assert not configs["py33"].recreate
@pytest.mark.issue190
def test_factors_in_setenv(self, newconfig):
inisource = """
[tox]
envlist = py27,py26
[testenv]
setenv =
py27: X = 1
"""
configs = newconfig([], inisource).envconfigs
assert configs["py27"].setenv["X"] == "1"
assert "X" not in configs["py26"].setenv
@pytest.mark.issue191
def test_factor_use_not_checked(self, newconfig):
inisource = """
[tox]
envlist = py27-{a,b}
[testenv]
deps = b: test
"""
configs = newconfig([], inisource).envconfigs
assert set(configs.keys()) == set(['py27-a', 'py27-b'])
@pytest.mark.issue198
def test_factors_groups_touch(self, newconfig):
inisource = """
[tox]
envlist = {a,b}{-x,}
[testenv]
deps=
a,b,x,y: dep
"""
configs = newconfig([], inisource).envconfigs
assert set(configs.keys()) == set(['a', 'a-x', 'b', 'b-x'])
def test_period_in_factor(self, newconfig):
inisource = """
[tox]
envlist = py27-{django1.6,django1.7}
[testenv]
deps =
django1.6: Django==1.6
django1.7: Django==1.7
"""
configs = newconfig([], inisource).envconfigs
assert sorted(configs) == ["py27-django1.6", "py27-django1.7"]
assert [d.name for d in configs["py27-django1.6"].deps] \
== ["Django==1.6"]
class TestGlobalOptions:
def test_notest(self, newconfig):
config = newconfig([], "")
assert not config.option.notest
config = newconfig(["--notest"], "")
assert config.option.notest
def test_verbosity(self, newconfig):
config = newconfig([], "")
assert config.option.verbosity == 0
config = newconfig(["-v"], "")
assert config.option.verbosity == 1
config = newconfig(["-vv"], "")
assert config.option.verbosity == 2
def test_substitution_jenkins_default(self, tmpdir,
monkeypatch, newconfig):
monkeypatch.setenv("HUDSON_URL", "xyz")
config = newconfig("""
[testenv:py27]
commands =
{distshare}
""")
conf = config.envconfigs['py27']
argv = conf.commands
expect_path = config.toxworkdir.join("distshare")
assert argv[0][0] == expect_path
def test_substitution_jenkins_context(self, tmpdir, monkeypatch, newconfig):
monkeypatch.setenv("HUDSON_URL", "xyz")
monkeypatch.setenv("WORKSPACE", tmpdir)
config = newconfig("""
[tox:jenkins]
distshare = {env:WORKSPACE}/hello
[testenv:py27]
commands =
{distshare}
""")
conf = config.envconfigs['py27']
argv = conf.commands
assert argv[0][0] == config.distshare
assert config.distshare == tmpdir.join("hello")
def test_sdist_specification(self, tmpdir, newconfig):
config = newconfig("""
[tox]
sdistsrc = {distshare}/xyz.zip
""")
assert config.sdistsrc == config.distshare.join("xyz.zip")
config = newconfig([], "")
assert not config.sdistsrc
def test_env_selection(self, tmpdir, newconfig, monkeypatch):
inisource = """
[tox]
envlist = py26
[testenv:py26]
basepython=python2.6
[testenv:py31]
basepython=python3.1
[testenv:py27]
basepython=python2.7
"""
# py.test.raises(tox.exception.ConfigError,
# "newconfig(['-exyz'], inisource)")
config = newconfig([], inisource)
assert config.envlist == ["py26"]
config = newconfig(["-epy31"], inisource)
assert config.envlist == ["py31"]
monkeypatch.setenv("TOXENV", "py31,py26")
config = newconfig([], inisource)
assert config.envlist == ["py31", "py26"]
monkeypatch.setenv("TOXENV", "ALL")
config = newconfig([], inisource)
assert config.envlist == ['py26', 'py27', 'py31']
config = newconfig(["-eALL"], inisource)
assert config.envlist == ['py26', 'py27', 'py31']
def test_py_venv(self, tmpdir, newconfig, monkeypatch):
config = newconfig(["-epy"], "")
env = config.envconfigs['py']
assert str(env.basepython) == sys.executable
def test_default_environments(self, tmpdir, newconfig, monkeypatch):
envs = "py26,py27,py32,py33,py34,py35,py36,jython,pypy,pypy3"
inisource = """
[tox]
envlist = %s
""" % envs
config = newconfig([], inisource)
envlist = envs.split(",")
assert config.envlist == envlist
for name in config.envlist:
env = config.envconfigs[name]
if name == "jython":
assert env.basepython == "jython"
elif name.startswith("pypy"):
assert env.basepython == name
else:
assert name.startswith("py")
bp = "python%s.%s" % (name[2], name[3])
assert env.basepython == bp
def test_envlist_expansion(self, newconfig):
inisource = """
[tox]
envlist = py{26,27},docs
"""
config = newconfig([], inisource)
assert config.envlist == ["py26", "py27", "docs"]
def test_envlist_cross_product(self, newconfig):
inisource = """
[tox]
envlist = py{26,27}-dep{1,2}
"""
config = newconfig([], inisource)
assert config.envlist == \
["py26-dep1", "py26-dep2", "py27-dep1", "py27-dep2"]
def test_envlist_multiline(self, newconfig):
inisource = """
[tox]
envlist =
py27
py34
"""
config = newconfig([], inisource)
assert config.envlist == \
["py27", "py34"]
def test_minversion(self, tmpdir, newconfig, monkeypatch):
inisource = """
[tox]
minversion = 3.0
"""
config = newconfig([], inisource)
assert config.minversion == "3.0"
def test_skip_missing_interpreters_true(self, tmpdir, newconfig, monkeypatch):
inisource = """
[tox]
skip_missing_interpreters = True
"""
config = newconfig([], inisource)
assert config.option.skip_missing_interpreters
def test_skip_missing_interpreters_false(self, tmpdir, newconfig, monkeypatch):
inisource = """
[tox]
skip_missing_interpreters = False
"""
config = newconfig([], inisource)
assert not config.option.skip_missing_interpreters
def test_defaultenv_commandline(self, tmpdir, newconfig, monkeypatch):
config = newconfig(["-epy27"], "")
env = config.envconfigs['py27']
assert env.basepython == "python2.7"
assert not env.commands
def test_defaultenv_partial_override(self, tmpdir, newconfig, monkeypatch):
inisource = """
[tox]
envlist = py27
[testenv:py27]
commands= xyz
"""
config = newconfig([], inisource)
env = config.envconfigs['py27']
assert env.basepython == "python2.7"
assert env.commands == [['xyz']]
class TestHashseedOption:
def _get_envconfigs(self, newconfig, args=None, tox_ini=None,
make_hashseed=None):
if args is None:
args = []
if tox_ini is None:
tox_ini = """
[testenv]
"""
if make_hashseed is None:
make_hashseed = lambda: '123456789'
original_make_hashseed = tox.config.make_hashseed
tox.config.make_hashseed = make_hashseed
try:
config = newconfig(args, tox_ini)
finally:
tox.config.make_hashseed = original_make_hashseed
return config.envconfigs
def _get_envconfig(self, newconfig, args=None, tox_ini=None):
envconfigs = self._get_envconfigs(newconfig, args=args,
tox_ini=tox_ini)
return envconfigs["python"]
def _check_hashseed(self, envconfig, expected):
assert envconfig.setenv == {'PYTHONHASHSEED': expected}
def _check_testenv(self, newconfig, expected, args=None, tox_ini=None):
envconfig = self._get_envconfig(newconfig, args=args, tox_ini=tox_ini)
self._check_hashseed(envconfig, expected)
def test_default(self, tmpdir, newconfig):
self._check_testenv(newconfig, '123456789')
def test_passing_integer(self, tmpdir, newconfig):
args = ['--hashseed', '1']
self._check_testenv(newconfig, '1', args=args)
def test_passing_string(self, tmpdir, newconfig):
args = ['--hashseed', 'random']
self._check_testenv(newconfig, 'random', args=args)
def test_passing_empty_string(self, tmpdir, newconfig):
args = ['--hashseed', '']
self._check_testenv(newconfig, '', args=args)
@pytest.mark.xfail(sys.version_info >= (3, 2),
reason="at least Debian python 3.2/3.3 have a bug: "
"http://bugs.python.org/issue11884")
def test_passing_no_argument(self, tmpdir, newconfig):
"""Test that passing no arguments to --hashseed is not allowed."""
args = ['--hashseed']
try:
self._check_testenv(newconfig, '', args=args)
except SystemExit:
e = sys.exc_info()[1]
assert e.code == 2
return
assert False # getting here means we failed the test.
def test_setenv(self, tmpdir, newconfig):
"""Check that setenv takes precedence."""
tox_ini = """
[testenv]
setenv =
PYTHONHASHSEED = 2
"""
self._check_testenv(newconfig, '2', tox_ini=tox_ini)
args = ['--hashseed', '1']
self._check_testenv(newconfig, '2', args=args, tox_ini=tox_ini)
def test_noset(self, tmpdir, newconfig):
args = ['--hashseed', 'noset']
envconfig = self._get_envconfig(newconfig, args=args)
assert envconfig.setenv == {}
def test_noset_with_setenv(self, tmpdir, newconfig):
tox_ini = """
[testenv]
setenv =
PYTHONHASHSEED = 2
"""
args = ['--hashseed', 'noset']
self._check_testenv(newconfig, '2', args=args, tox_ini=tox_ini)
def test_one_random_hashseed(self, tmpdir, newconfig):
"""Check that different testenvs use the same random seed."""
tox_ini = """
[testenv:hash1]
[testenv:hash2]
"""
next_seed = [1000]
# This function is guaranteed to generate a different value each time.
def make_hashseed():
next_seed[0] += 1
return str(next_seed[0])
# Check that make_hashseed() works.
assert make_hashseed() == '1001'
envconfigs = self._get_envconfigs(newconfig, tox_ini=tox_ini,
make_hashseed=make_hashseed)
self._check_hashseed(envconfigs["hash1"], '1002')
# Check that hash2's value is not '1003', for example.
self._check_hashseed(envconfigs["hash2"], '1002')
def test_setenv_in_one_testenv(self, tmpdir, newconfig):
"""Check using setenv in one of multiple testenvs."""
tox_ini = """
[testenv:hash1]
setenv =
PYTHONHASHSEED = 2
[testenv:hash2]
"""
envconfigs = self._get_envconfigs(newconfig, tox_ini=tox_ini)
self._check_hashseed(envconfigs["hash1"], '2')
self._check_hashseed(envconfigs["hash2"], '123456789')
class TestIndexServer:
def test_indexserver(self, tmpdir, newconfig):
config = newconfig("""
[tox]
indexserver =
name1 = XYZ
name2 = ABC
""")
assert config.indexserver['default'].url is None
assert config.indexserver['name1'].url == "XYZ"
assert config.indexserver['name2'].url == "ABC"
def test_parse_indexserver(self, newconfig):
inisource = """
[tox]
indexserver =
default = http://pypi.testrun.org
name1 = whatever
"""
config = newconfig([], inisource)
assert config.indexserver['default'].url == "http://pypi.testrun.org"
assert config.indexserver['name1'].url == "whatever"
config = newconfig(['-i', 'qwe'], inisource)
assert config.indexserver['default'].url == "qwe"
assert config.indexserver['name1'].url == "whatever"
config = newconfig(['-i', 'name1=abc', '-i', 'qwe2'], inisource)
assert config.indexserver['default'].url == "qwe2"
assert config.indexserver['name1'].url == "abc"
config = newconfig(["-i", "ALL=xzy"], inisource)
assert len(config.indexserver) == 2
assert config.indexserver["default"].url == "xzy"
assert config.indexserver["name1"].url == "xzy"
def test_multiple_homedir_relative_local_indexservers(self, newconfig):
inisource = """
[tox]
indexserver =
default = file://{homedir}/.pip/downloads/simple
local1 = file://{homedir}/.pip/downloads/simple
local2 = file://{toxinidir}/downloads/simple
pypi = http://pypi.python.org/simple
"""
config = newconfig([], inisource)
expected = "file://%s/.pip/downloads/simple" % config.homedir
assert config.indexserver['default'].url == expected
assert config.indexserver['local1'].url == config.indexserver['default'].url
class TestParseEnv:
def test_parse_recreate(self, newconfig):
inisource = ""
config = newconfig([], inisource)
assert not config.envconfigs['python'].recreate
config = newconfig(['--recreate'], inisource)
assert config.envconfigs['python'].recreate
config = newconfig(['-r'], inisource)
assert config.envconfigs['python'].recreate
inisource = """
[testenv:hello]
recreate = True
"""
config = newconfig([], inisource)
assert config.envconfigs['hello'].recreate
class TestCmdInvocation:
def test_help(self, cmd):
result = cmd.run("tox", "-h")
assert not result.ret
result.stdout.fnmatch_lines([
"*help*",
])
def test_version(self, cmd):
result = cmd.run("tox", "--version")
assert not result.ret
stdout = result.stdout.str()
assert tox.__version__ in stdout
assert "imported from" in stdout
def test_listenvs(self, cmd, initproj):
initproj('listenvs', filedefs={
'tox.ini': '''
[tox]
envlist=py26,py27,py33,pypy,docs
[testenv:notincluded]
changedir = whatever
[testenv:docs]
changedir = docs
''',
})
result = cmd.run("tox", "-l")
result.stdout.fnmatch_lines("""
*py26*
*py27*
*py33*
*pypy*
*docs*
""")
def test_config_specific_ini(self, tmpdir, cmd):
ini = tmpdir.ensure("hello.ini")
result = cmd.run("tox", "-c", ini, "--showconfig")
assert not result.ret
result.stdout.fnmatch_lines([
"*config-file*hello.ini*",
])
def test_no_tox_ini(self, cmd, initproj):
initproj("noini-0.5", )
result = cmd.run("tox")
assert result.ret
result.stderr.fnmatch_lines([
"*ERROR*tox.ini*not*found*",
])
def test_showconfig_with_force_dep_version(self, cmd, initproj):
initproj('force_dep_version', filedefs={
'tox.ini': '''
[tox]
[testenv]
deps=
dep1==2.3
dep2
''',
})
result = cmd.run("tox", "--showconfig")
assert result.ret == 0
result.stdout.fnmatch_lines([
r'*deps*dep1==2.3, dep2*',
])
# override dep1 specific version, and force version for dep2
result = cmd.run("tox", "--showconfig", "--force-dep=dep1",
"--force-dep=dep2==5.0")
assert result.ret == 0
result.stdout.fnmatch_lines([
r'*deps*dep1, dep2==5.0*',
])
@pytest.mark.parametrize("cmdline,envlist", [
("-e py26", ['py26']),
("-e py26,py33", ['py26', 'py33']),
("-e py26,py26", ['py26', 'py26']),
("-e py26,py33 -e py33,py27", ['py26', 'py33', 'py33', 'py27'])
])
def test_env_spec(cmdline, envlist):
args = cmdline.split()
config = parseconfig(args)
assert config.envlist == envlist
class TestCommandParser:
def test_command_parser_for_word(self):
p = CommandParser('word')
# import pytest; pytest.set_trace()
assert list(p.words()) == ['word']
def test_command_parser_for_posargs(self):
p = CommandParser('[]')
assert list(p.words()) == ['[]']
def test_command_parser_for_multiple_words(self):
p = CommandParser('w1 w2 w3 ')
assert list(p.words()) == ['w1', ' ', 'w2', ' ', 'w3']
def test_command_parser_for_substitution_with_spaces(self):
p = CommandParser('{sub:something with spaces}')
assert list(p.words()) == ['{sub:something with spaces}']
def test_command_parser_with_complex_word_set(self):
complex_case = (
'word [] [literal] {something} {some:other thing} w{ord} w{or}d w{ord} '
'w{o:rd} w{o:r}d {w:or}d w[]ord {posargs:{a key}}')
p = CommandParser(complex_case)
parsed = list(p.words())
expected = [
'word', ' ', '[]', ' ', '[literal]', ' ', '{something}', ' ', '{some:other thing}',
' ', 'w', '{ord}', ' ', 'w', '{or}', 'd', ' ', 'w', '{ord}', ' ', 'w', '{o:rd}', ' ',
'w', '{o:r}', 'd', ' ', '{w:or}', 'd',
' ', 'w[]ord', ' ', '{posargs:{a key}}',
]
assert parsed == expected
def test_command_with_runs_of_whitespace(self):
cmd = "cmd1 {item1}\n {item2}"
p = CommandParser(cmd)
parsed = list(p.words())
assert parsed == ['cmd1', ' ', '{item1}', '\n ', '{item2}']
def test_command_with_split_line_in_subst_arguments(self):
cmd = dedent(""" cmd2 {posargs:{item2}
other}""")
p = CommandParser(cmd)
parsed = list(p.words())
assert parsed == ['cmd2', ' ', '{posargs:{item2}\n other}']
def test_command_parsing_for_issue_10(self):
cmd = "nosetests -v -a !deferred --with-doctest []"
p = CommandParser(cmd)
parsed = list(p.words())
assert parsed == [
'nosetests', ' ', '-v', ' ', '-a', ' ', '!deferred', ' ',
'--with-doctest', ' ', '[]'
]
@pytest.mark.skipif("sys.platform != 'win32'")
def test_commands_with_backslash(self, newconfig):
config = newconfig([r"hello\world"], """
[testenv:py26]
commands = some {posargs}
""")
envconfig = config.envconfigs["py26"]
assert envconfig.commands[0] == ["some", r"hello\world"]
| {
"content_hash": "3b74ea789e84546b59c6e75c3e5d70d3",
"timestamp": "",
"source": "github",
"line_count": 1731,
"max_line_length": 97,
"avg_line_length": 35.0421721548238,
"alnum_prop": 0.5385274819479706,
"repo_name": "samstav/circleci-python-sandbox",
"id": "8aa7abd5e82e07d3614ae2bdf4c9a2cd4f1d1af2",
"size": "60658",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "tox/tests/test_config.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Makefile",
"bytes": "4749"
},
{
"name": "Python",
"bytes": "257976"
}
],
"symlink_target": ""
} |
"""Footnotes for documents using chicago manual style.
By default specific languages forgo the symbol based footnotes.
https://en.wikipedia.org/wiki/Dagger_(typography) :
While daggers are freely used in English-language texts, they are often avoided
in other languages because of their similarity to the Christian cross. In
German, for example, daggers are commonly employed only to indicate a person's
death or the extinction of a word, language, species or the like.
http://graphicdesign.stackexchange.com/questions/10892/footnote-typographic-conventions :
In Germany the two footnote signs * † have also the meaning of born (*) and died (†)
"""
import collections
import re
SYMBOLS = [
'*',
'†',
'‡',
'§',
'||',
'¶',
'#',
]
NUMERICAL_SYMBOLS = {
'0': '⁰',
'1': '¹',
'2': '²',
'3': '³',
'4': '⁴',
'5': '⁵',
'6': '⁶',
'7': '⁷',
'8': '⁸',
'9': '⁹',
}
NUMERIC_LOCALES_REGEX = re.compile(r'_(DE)$', re.IGNORECASE)
class Error(Exception):
def __init__(self, message):
super(Error, self).__init__(message)
self.message = message
class DuplicateSymbolError(Error, KeyError):
pass
def symbol_generator(symbols=SYMBOLS):
loop_count = 1
loop_index = 0
while True:
if loop_index > len(symbols) - 1:
loop_count += 1
loop_index = 0
yield symbols[loop_index] * loop_count
loop_index += 1
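# --- Editor's illustration: a hedged sketch, not part of the original module.
# It shows what symbol_generator() yields: one pass over the Chicago-style
# symbol list, then the same symbols doubled, tripled, and so on.  The helper
# name _demo_symbol_generator is hypothetical.
def _demo_symbol_generator():
    gen = symbol_generator()
    first_nine = [next(gen) for _ in range(9)]
    # One full pass over SYMBOLS plus the start of the doubled pass.
    assert first_nine == ['*', '†', '‡', '§', '||', '¶', '#', '**', '††']
    return first_nine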
def numeric_symbol_generator():
index = 1
while True:
yield ''.join([NUMERICAL_SYMBOLS[char] for char in str(index)])
index += 1
def numeric_generator():
index = 1
while True:
yield index
index += 1
class Footnotes:
def __init__(self, locale, symbols=None, use_numeric=None,
use_numeric_symbols=None, numeric_locales_pattern=None):
self.symbol_to_footnote = collections.OrderedDict()
self.symbols = symbols or SYMBOLS
self.locale = locale
self.use_numeric = use_numeric
self.use_numeric_symbols = use_numeric_symbols
if numeric_locales_pattern is not None:
self.numeric_locales_pattern = re.compile(
numeric_locales_pattern, re.IGNORECASE)
else:
self.numeric_locales_pattern = NUMERIC_LOCALES_REGEX
self.reset()
def __getitem__(self, key):
return self.symbol_to_footnote[key]
def __iter__(self):
        return iter(self.symbol_to_footnote.items())
def __len__(self):
return len(self.symbol_to_footnote)
@property
def footnotes(self):
return self.symbol_to_footnote
@property
def is_numeric(self):
return (self.use_numeric
or self.use_numeric_symbols
or self.is_numeric_territory)
@property
def is_numeric_territory(self):
if self.locale is None:
return False
# When explicitly not using numbers, do not use numeric.
if self.use_numeric_symbols is False or self.use_numeric is False:
return False
return self.numeric_locales_pattern.search(self.locale) is not None
def add(self, value, custom_symbol=None):
for symbol, note_value in self.symbol_to_footnote.items():
if value == note_value:
return symbol
if custom_symbol:
# Check that the symbol is not in use.
if custom_symbol in self.symbol_to_footnote.keys():
raise DuplicateSymbolError(
'Custom symbol already in use: {}'.format(custom_symbol))
self.symbol_to_footnote[custom_symbol] = value
return custom_symbol
# Doesn't exist, add as new symbol.
symbol = next(self.generator)
self.symbol_to_footnote[symbol] = value
return symbol
def index(self, key):
return list(self.symbol_to_footnote.keys()).index(key)
def items(self):
return self.symbol_to_footnote.items()
def reset(self):
self.symbol_to_footnote = collections.OrderedDict()
if self.is_numeric:
if self.use_numeric:
                self.generator = numeric_generator()
else:
                self.generator = numeric_symbol_generator()
else:
self.generator = symbol_generator(self.symbols)
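# --- Editor's illustration: a hedged usage sketch, not part of the original
# module.  The locale strings 'en_US' and 'de_DE' and the helper name
# _demo_footnotes are example values; the class only matches the locale
# against NUMERIC_LOCALES_REGEX (suffix '_DE' by default).
def _demo_footnotes():
    # English-style locale: symbol footnotes (*, †, ‡, ...).
    english = Footnotes('en_US')
    assert english.add('First note') == '*'
    assert english.add('Second note') == '†'
    # Adding the same value again returns the symbol already assigned to it.
    assert english.add('First note') == '*'
    # German locale: falls back to superscript numbers because daggers carry
    # the meaning "died" in German typography (see the module docstring).
    german = Footnotes('de_DE')
    assert german.add('Erste Fussnote') == '¹'
    assert german.add('Zweite Fussnote') == '²'
    # Re-using a custom symbol for a different value is rejected.
    english.add('Third note', custom_symbol='!')
    try:
        english.add('Fourth note', custom_symbol='!')
    except DuplicateSymbolError:
        pass
    else:
        raise AssertionError('expected DuplicateSymbolError')
    return english.footnotes, german.footnotes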
| {
"content_hash": "6296e2bd80917a574b8c7459ce3b2244",
"timestamp": "",
"source": "github",
"line_count": 153,
"max_line_length": 89,
"avg_line_length": 28.287581699346404,
"alnum_prop": 0.607902033271719,
"repo_name": "grow/grow",
"id": "e84c7cc6ec3cf60110ed2054c530489351a80312",
"size": "4379",
"binary": false,
"copies": "1",
"ref": "refs/heads/main",
"path": "grow/pods/footnotes.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "377"
},
{
"name": "Dockerfile",
"bytes": "2301"
},
{
"name": "HTML",
"bytes": "12074"
},
{
"name": "JavaScript",
"bytes": "5183"
},
{
"name": "Makefile",
"bytes": "2962"
},
{
"name": "Python",
"bytes": "1102607"
},
{
"name": "Sass",
"bytes": "7080"
},
{
"name": "Shell",
"bytes": "1835"
}
],
"symlink_target": ""
} |
"""
Integer parameter type testcases - INT32
List of tested functions :
--------------------------
- [setParameter] function
- [getParameter] function
Initial Settings :
------------------
INT32 :
- size = 32
- range : [-1000, 1000]
Test cases :
------------
- INT32 parameter min value = -1000
- INT32 parameter min value out of bounds = -1001
- INT32 parameter max value = 1000
- INT32 parameter max value out of bounds = 1001
- INT32 parameter in nominal case = 50
"""
import commands
from Util.PfwUnitTestLib import PfwTestCase
from Util import ACTLogging
log=ACTLogging.Logger()
# Test of type INT32 - range [-1000, 1000]
class TestCases(PfwTestCase):
def setUp(self):
self.param_name = "/Test/Test/TEST_DIR/INT32"
self.pfw.sendCmd("setTuningMode", "on")
def tearDown(self):
self.pfw.sendCmd("setTuningMode", "off")
def test_Nominal_Case(self):
"""
Testing INT32 in nominal case = 50
----------------------------------
Test case description :
~~~~~~~~~~~~~~~~~~~~~~~
- set INT32 parameter in nominal case = 50
Tested commands :
~~~~~~~~~~~~~~~~~
- [setParameter] function
Used commands :
~~~~~~~~~~~~~~~
- [getParameter] function
Expected result :
~~~~~~~~~~~~~~~~~
- INT32 parameter set to 50
- Blackboard and filesystem values checked
"""
print self.test_Nominal_Case.__doc__
print "INFO : INT32 parameter in nominal case = 50"
value = "50"
hex_value = "0x32"
#Set parameter value
out, err = self.pfw.sendCmd("setParameter", self.param_name, value)
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out == "Done", out
#Check parameter value on blackboard
out, err = self.pfw.sendCmd("getParameter", self.param_name, "")
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out == value, "BLACKBOARD : Incorrect value for %s, expected: %s, found: %s" % (self.param_name, value, out)
#Check parameter value on filesystem
assert commands.getoutput('cat $PFW_RESULT/INT32') == hex_value, "FILESYSTEM : parameter update error"
print "INFO : test OK"
def test_TypeMin(self):
"""
Testing INT32 minimal value = -1000
-----------------------------------
Test case description :
~~~~~~~~~~~~~~~~~~~~~~~
- set INT32 parameter min value = -1000
Tested commands :
~~~~~~~~~~~~~~~~~
- [setParameter] function
Used commands :
~~~~~~~~~~~~~~~
- [getParameter] function
Expected result :
~~~~~~~~~~~~~~~~~
- INT32 parameter set to -1000
- Blackboard and filesystem values checked
"""
print self.test_TypeMin.__doc__
print "INFO : INT32 parameter min value = -1000"
value = "-1000"
hex_value = "0xfffffc18"
#Set parameter value
out, err = self.pfw.sendCmd("setParameter", self.param_name, value)
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out == "Done", out
#Check parameter value on blackboard
out, err = self.pfw.sendCmd("getParameter", self.param_name, "")
assert err == None, "PFW : Error when setting parameter %s : %s" % (self.param_name, err)
assert out == value, "BLACKBOARD : Incorrect value for %s, expected: %s, found: %s" % (self.param_name, value, out)
#Check parameter value on filesystem
assert commands.getoutput('cat $PFW_RESULT/INT32') == hex_value, "FILESYSTEM : parameter update error"
print "INFO : test OK"
def test_TypeMin_Overflow(self):
"""
Testing INT32 parameter value out of negative range
---------------------------------------------------
Test case description :
~~~~~~~~~~~~~~~~~~~~~~~
- set INT32 to -1001
Tested commands :
~~~~~~~~~~~~~~~~~
- [setParameter] function
Used commands :
~~~~~~~~~~~~~~~
- [getParameter] function
Expected result :
~~~~~~~~~~~~~~~~~
- error detected
- INT32 parameter not updated
- Blackboard and filesystem values checked
"""
print self.test_TypeMin_Overflow.__doc__
print "INFO : INT32 parameter min value out of bounds = -1001"
value = "-1001"
param_check = commands.getoutput('cat $PFW_RESULT/INT32')
#Set parameter value
out, err = self.pfw.sendCmd("setParameter", self.param_name, value)
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out != "Done", "PFW : Error not detected when setting parameter %s out of bounds" % (self.param_name)
#Check parameter value on filesystem
        assert commands.getoutput('cat $PFW_RESULT/INT32') == param_check, "FILESYSTEM : Forbidden parameter change"
print "INFO : test OK"
def test_TypeMax(self):
"""
Testing INT32 parameter maximum value
-------------------------------------
Test case description :
~~~~~~~~~~~~~~~~~~~~~~~
- set INT32 to 1000
Tested commands :
~~~~~~~~~~~~~~~~~
- [setParameter] function
Used commands :
~~~~~~~~~~~~~~~
- [getParameter] function
Expected result :
~~~~~~~~~~~~~~~~~
- INT32 parameter set to 1000
- Blackboard and filesystem values checked
"""
print self.test_TypeMax.__doc__
print "INFO : INT32 parameter max value = 1000"
value = "1000"
hex_value = "0x3e8"
#Set parameter value
out, err = self.pfw.sendCmd("setParameter", self.param_name, value)
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out == "Done", out
#Check parameter value on blackboard
out, err = self.pfw.sendCmd("getParameter", self.param_name, "")
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out == value, "BLACKBOARD : Incorrect value for %s, expected: %s, found: %s" % (self.param_name, value, out)
#Check parameter value on filesystem
assert commands.getoutput('cat $PFW_RESULT/INT32') == hex_value, "FILESYSTEM : parameter update error"
print "INFO : test OK"
def test_TypeMax_Overflow(self):
"""
Testing INT32 parameter value out of positive range
---------------------------------------------------
Test case description :
~~~~~~~~~~~~~~~~~~~~~~~
- set INT32 to 1001
Tested commands :
~~~~~~~~~~~~~~~~~
- [setParameter] function
Used commands :
~~~~~~~~~~~~~~~
- [getParameter] function
Expected result :
~~~~~~~~~~~~~~~~~
- error detected
- INT32 parameter not updated
- Blackboard and filesystem values checked
"""
print self.test_TypeMax_Overflow.__doc__
print "INFO : INT32 parameter max value out of bounds = 1001"
value = "1001"
param_check = commands.getoutput('cat $PFW_RESULT/INT32')
#Set parameter value
out, err = self.pfw.sendCmd("setParameter", self.param_name, value)
assert err == None, "Error when setting parameter %s : %s" % (self.param_name, err)
assert out != "Done", "PFW : Error not detected when setting parameter %s out of bounds" % (self.param_name)
#Check parameter value on filesystem
        assert commands.getoutput('cat $PFW_RESULT/INT32') == param_check, "FILESYSTEM : Forbidden parameter change"
print "INFO : test OK"
| {
"content_hash": "8bdc761153104e41f32105140266da3e",
"timestamp": "",
"source": "github",
"line_count": 199,
"max_line_length": 123,
"avg_line_length": 41.814070351758794,
"alnum_prop": 0.5239754837158995,
"repo_name": "zchokri/parameter-framework",
"id": "579e4f8d58ee48d2b0963c0ea91c1f3d89417693",
"size": "9886",
"binary": false,
"copies": "10",
"ref": "refs/heads/master",
"path": "test/functional-tests/PfwTestCase/Types/tINT32.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "C",
"bytes": "18110"
},
{
"name": "C++",
"bytes": "1169449"
},
{
"name": "CMake",
"bytes": "43450"
},
{
"name": "Makefile",
"bytes": "34278"
},
{
"name": "Python",
"bytes": "679012"
}
],
"symlink_target": ""
} |
"""
.. py:currentmodule:: pysemeels.test_raw_spectrum
.. moduleauthor:: Hendrix Demers <[email protected]>
Tests for the module :py:mod:`pysemeels.raw_spectrum`.
"""
###############################################################################
# Copyright 2017 Hendrix Demers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
# Standard library modules.
import unittest
import os
import sys
# Third party modules.
import h5py
import numpy as np
import pytest
# Local modules.
# Project modules.
from pysemeels import get_current_module_path
from pysemeels.raw_spectrum import RawSpectrum, HDF5_GROUP_EXTRA_PARAMETERS, HDF5_GROUP_EELS_PARAMETERS, \
HDF5_DATASET_ENERGIES_keV, HDF5_DATASET_RAW_COUNTS, HDF5_DATASET_GAIN_CORRECTIONS, HDF5_DATASET_DARK_CURRENTS
from pysemeels.hdf5_sem_parameters import HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V
from pysemeels.hitachi.eels_su.elv_text_file import *
from pysemeels.tools.hdf5_file_labels import *
from tests import is_bad_file
# Globals and constants variables.
class TestRawSpectrum(unittest.TestCase):
"""
TestCase class for the module `pysemeels.raw_spectrum`.
"""
def setUp(self):
"""
Setup method.
"""
unittest.TestCase.setUp(self)
if sys.platform != "win32":
pytest.skip("only run on windows")
self.name_ref = "TestRawSpectrum"
self.spectrum = RawSpectrum(self.name_ref)
self.test_data_path = get_current_module_path(__file__, '../test_data')
self.name_import_ref = "30kV_7eV"
self.import_sem_parameters = {HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V: 30.0e3}
self.elv_file_path = os.path.join(self.test_data_path, "hitachi/eels_su/30kV_7eV.elv")
if is_bad_file(self.elv_file_path):
pytest.skip("File not found: {}".format(self.elv_file_path))
filepath = self.elv_file_path
name = os.path.splitext(os.path.basename(filepath))[0]
self.spectrum_import = RawSpectrum(name)
self.spectrum_import.import_data(filepath, self.import_sem_parameters)
def tearDown(self):
"""
Teardown method.
"""
unittest.TestCase.tearDown(self)
def testSkeleton(self):
"""
First test to check if the testcase is working with the testing framework.
"""
# self.fail("Test if the testcase is working.")
        self.assertTrue(True)
def test_init(self):
"""
Test __init__ method.
"""
name_ref = "TestRawSpectrum_init"
spectrum = RawSpectrum(name_ref)
self.assertEqual(name_ref, spectrum.name)
# self.fail("Test if the testcase is working.")
def test_write_hdf5(self):
"""
Test write_hdf5 method.
"""
filepath = os.path.join(self.test_data_path, "test_raw_spectrum_write_hdf5.hdf5")
with h5py.File(filepath, "w") as hdf5_file:
self.spectrum.write_hdf5(hdf5_file)
self.assertTrue(self.name_ref in hdf5_file)
root_group = hdf5_file[self.name_ref]
os.remove(filepath)
# self.fail("Test if the testcase is working.")
def test_read_hdf5(self):
"""
Test read_hdf5 method.
"""
filepath = os.path.join(self.test_data_path, "test_raw_spectrum_read_hdf5.hdf5")
if is_bad_file(filepath):
pytest.skip("File not found: {}".format(filepath))
with h5py.File(filepath, "r") as hdf5_file:
self.spectrum.read_hdf5(hdf5_file)
# self.fail("Test if the testcase is working.")
def test_read_hdf5_bad_project(self):
"""
Test read_hdf5 method with a different project name.
"""
name_ref = "TestProject_init"
spectrum = RawSpectrum(name_ref)
filepath = os.path.join(self.test_data_path, "test_raw_spectrum_read_hdf5.hdf5")
if is_bad_file(filepath):
pytest.skip("File not found: {}".format(filepath))
with h5py.File(filepath, "r") as hdf5_file:
self.assertRaises(ValueError, spectrum.read_hdf5, hdf5_file)
# self.fail("Test if the testcase is working.")
def test_import_data(self):
"""
Test import_data method.
"""
self.elv_file_path = os.path.join(self.test_data_path, "hitachi/eels_su/30kV_7eV.elv")
if is_bad_file(self.elv_file_path):
pytest.skip("File not found: {}".format(self.elv_file_path))
filepath = self.elv_file_path
name = os.path.splitext(os.path.basename(filepath))[0]
spectrum = RawSpectrum(name)
spectrum.import_data(filepath)
self.assertEqual(-32.00, spectrum.energies_eV[0])
self.assertEqual(2282, spectrum.raw_counts[0])
self.assertEqual(21.84, spectrum.energies_eV[-1])
self.assertEqual(0, spectrum.raw_counts[-1])
self.assertEqual(1024, len(spectrum.energies_eV))
self.assertEqual(1024, len(spectrum.raw_counts))
self.assertEqual(0.918375, spectrum.gain_corrections[0])
self.assertEqual(0.000000, spectrum.gain_corrections[-1])
self.assertEqual(1024, len(spectrum.gain_corrections))
self.assertEqual(2313, spectrum.dark_currents[0])
self.assertEqual(0, spectrum.dark_currents[-1])
self.assertEqual(1024, len(spectrum.dark_currents))
# self.fail("Test if the testcase is working.")
def test_import_data_write_hdf5(self):
"""
Test import_data method.
"""
self.assertEqual(-32.00, self.spectrum_import.energies_eV[0])
self.assertEqual(2282, self.spectrum_import.raw_counts[0])
self.assertEqual(21.84, self.spectrum_import.energies_eV[-1])
self.assertEqual(0, self.spectrum_import.raw_counts[-1])
self.assertEqual(1024, len(self.spectrum_import.energies_eV))
self.assertEqual(1024, len(self.spectrum_import.raw_counts))
self.assertEqual(0.918375, self.spectrum_import.gain_corrections[0])
self.assertEqual(0.000000, self.spectrum_import.gain_corrections[-1])
self.assertEqual(1024, len(self.spectrum_import.gain_corrections))
self.assertEqual(2313, self.spectrum_import.dark_currents[0])
self.assertEqual(0, self.spectrum_import.dark_currents[-1])
self.assertEqual(1024, len(self.spectrum_import.dark_currents))
filepath = os.path.join(self.test_data_path, "test_raw_spectrum_import_data_write_hdf5.hdf5")
with h5py.File(filepath, "w") as hdf5_file:
self.spectrum_import.write_hdf5(hdf5_file)
self.assertTrue(self.name_import_ref in hdf5_file)
root_group = hdf5_file[self.name_import_ref]
self.assertTrue(HDF5_DATASET_ENERGIES_keV in root_group)
self.assertTrue(HDF5_DATASET_RAW_COUNTS in root_group)
self.assertTrue(HDF5_DATASET_GAIN_CORRECTIONS in root_group)
self.assertTrue(HDF5_DATASET_DARK_CURRENTS in root_group)
self.assertTrue(HDF5_GROUP_EXTRA_PARAMETERS in root_group)
self.assertTrue(HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V in root_group[HDF5_GROUP_EXTRA_PARAMETERS].attrs)
self.assertTrue(HDF5_GROUP_EELS_PARAMETERS in root_group)
self.assertTrue(HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V in root_group[HDF5_GROUP_EELS_PARAMETERS].attrs)
os.remove(filepath)
# self.fail("Test if the testcase is working.")
def test_import_data_read_hdf5(self):
"""
Test read_hdf5 method.
"""
filepath = self.elv_file_path
name = os.path.splitext(os.path.basename(filepath))[0]
spectrum_read = RawSpectrum(name)
filepath = os.path.join(self.test_data_path, "test_raw_spectrum_import_data_read_hdf5.hdf5")
if is_bad_file(filepath):
pytest.skip("File not found: {}".format(filepath))
with h5py.File(filepath, "r") as hdf5_file:
spectrum_read.read_hdf5(hdf5_file)
self.assertEqual(self.name_import_ref, spectrum_read.name)
self.assertTrue(HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V in spectrum_read.extra_parameters)
self.assertAlmostEqual(self.import_sem_parameters[HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V],
spectrum_read.extra_parameters[HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V])
self.assertEqual(-32.00, spectrum_read.energies_eV[0])
self.assertEqual(21.84, spectrum_read.energies_eV[-1])
self.assertEqual(1024, len(spectrum_read.energies_eV))
self.assertEqual(2282, spectrum_read.raw_counts[0])
self.assertEqual(0, spectrum_read.raw_counts[-1])
self.assertEqual(1024, len(spectrum_read.raw_counts))
self.assertEqual(0.918375, spectrum_read.gain_corrections[0])
self.assertEqual(0.000000, spectrum_read.gain_corrections[-1])
self.assertEqual(1024, len(spectrum_read.gain_corrections))
self.assertEqual(2313, spectrum_read.dark_currents[0])
self.assertEqual(0, spectrum_read.dark_currents[-1])
self.assertEqual(1024, len(spectrum_read.dark_currents))
self.assertEqual(-33.755274261603375, spectrum_read.counts[0])
self.assertTrue(np.isnan(spectrum_read.counts)[-1])
self.assertEqual(1024, len(spectrum_read.counts))
parameters = spectrum_read.eels_parameters
self.assertEqual("SU-EELS", parameters[HDF5_ATTRIBUTE_MODEL])
self.assertEqual(0.0, parameters[HDF5_ATTRIBUTE_SAMPLE_HEIGHT_mm])
self.assertEqual(r"D:\2017\System_baseline_march2017\30kV_7eV.elv", parameters[HDF5_ATTRIBUTE_FILEPATH])
self.assertEqual("", parameters[HDF5_ATTRIBUTE_COMMENT])
self.assertEqual("01/Mar/2017", parameters[HDF5_ATTRIBUTE_DATE])
self.assertEqual("10:59", parameters[HDF5_ATTRIBUTE_TIME])
self.assertEqual(30000, parameters[HDF5_ATTRIBUTE_ACCELERATING_VOLTAGE_V])
self.assertEqual(7.0, parameters[HDF5_ATTRIBUTE_ENERGY_WIDTH_eV])
self.assertEqual(0.0, parameters[HDF5_ATTRIBUTE_ENERGY_LOSS_eV])
self.assertEqual(500, parameters[HDF5_ATTRIBUTE_SPEED_us])
self.assertEqual("01/Mar/2017", parameters[HDF5_DATE])
self.assertEqual("10:59", parameters[HDF5_TIME])
self.assertEqual("", parameters[HDF5_COMMENT])
self.assertEqual(500.0, parameters[HDF5_ACQUISITION_SPEED])
self.assertEqual(0.0, parameters[HDF5_ENERGY_LOSS])
self.assertEqual(98.7, parameters[HDF5_RAW])
self.assertEqual(7.0, parameters[HDF5_ENERGY_WIDTH_eV])
self.assertEqual(586, parameters[HDF5_DUAL_DET_POSITION])
self.assertEqual(133, parameters[HDF5_DUAL_DET_POST])
self.assertEqual(608, parameters[HDF5_DUAL_DET_CENTER])
self.assertEqual(13575, parameters[HDF5_Q1])
self.assertEqual(3850, parameters[HDF5_Q1S])
self.assertEqual(0, parameters[HDF5_Q2])
self.assertEqual(0, parameters[HDF5_Q2S])
self.assertEqual(2700, parameters[HDF5_Q3])
self.assertEqual(2900, parameters[HDF5_H1])
self.assertEqual(6150, parameters[HDF5_H1S])
self.assertEqual(-600, parameters[HDF5_H2])
self.assertEqual(350, parameters[HDF5_H2S])
self.assertEqual(0, parameters[HDF5_H4])
self.assertEqual(0, parameters[HDF5_ELV_X])
self.assertEqual(0, parameters[HDF5_ELV_Y])
self.assertEqual(259, parameters[HDF5_SPECTRUM_ALIGNMENT_X])
self.assertEqual(0, parameters[HDF5_SPECTRUM_ALIGNMENT_Y])
self.assertEqual(-1500, parameters[HDF5_DET_SPEC_ALIGNMENT_X])
self.assertEqual(470, parameters[HDF5_DET_SPEC_ALIGNMENT_Y])
self.assertEqual(-1500, parameters[HDF5_DET_MAP_ALIGNMENT_X])
self.assertEqual(1500, parameters[HDF5_DET_MAP_ALIGNMENT_Y])
self.assertEqual(37443, parameters[HDF5_MAGNIFICATION])
# self.fail("Test if the testcase is working.")
| {
"content_hash": "598662e6b52aea3e6aab8ff5277db9c7",
"timestamp": "",
"source": "github",
"line_count": 316,
"max_line_length": 116,
"avg_line_length": 40.23101265822785,
"alnum_prop": 0.6449303862188311,
"repo_name": "drix00/pysemeels",
"id": "0cefca131402615c4fc9207308cc3ef63f7d2756",
"size": "12760",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "tests/test_raw_spectrum.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Makefile",
"bytes": "2392"
},
{
"name": "PostScript",
"bytes": "130"
},
{
"name": "Python",
"bytes": "283546"
}
],
"symlink_target": ""
} |
import os.path
import json
def load_last_toc_from_file(filename):
last_stories_fetched = list()
if os.path.exists(filename):
with open(filename, 'r') as f:
last_stories_fetched = [tuple(i) for i in json.load(f)]
return last_stories_fetched
def save_toc_to_file(toc, filename):
with open(filename, 'w') as f:
json.dump(toc, f)
def filter_only_new_stories(frontpage_stories, filename):
last_stories_fetched = load_last_toc_from_file(filename)
new_stories = set(frontpage_stories) - set(last_stories_fetched)
save_toc_to_file(frontpage_stories, filename)
return list(new_stories)
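# --- Editor's illustration: a hedged usage sketch, not part of the original
# module.  The story tuples, URLs and the temporary file path are made-up
# example values, and _demo_filter_only_new_stories is a hypothetical name.
def _demo_filter_only_new_stories(toc_path='/tmp/last_frontpage_toc.json'):
    import os
    if os.path.exists(toc_path):
        os.remove(toc_path)  # start from a clean slate for the demo
    first_fetch = [('Story A', 'http://example.com/a'),
                   ('Story B', 'http://example.com/b')]
    second_fetch = [('Story B', 'http://example.com/b'),
                    ('Story C', 'http://example.com/c')]
    # First run: no saved table of contents yet, so every story is "new".
    assert set(filter_only_new_stories(first_fetch, toc_path)) == set(first_fetch)
    # Second run: only the story that was not in the previously saved TOC.
    assert filter_only_new_stories(second_fetch, toc_path) == \
        [('Story C', 'http://example.com/c')]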
| {
"content_hash": "011e7c32360c19d65376a3cd8b24bb83",
"timestamp": "",
"source": "github",
"line_count": 22,
"max_line_length": 68,
"avg_line_length": 29.227272727272727,
"alnum_prop": 0.6796267496111975,
"repo_name": "sevas/csxj-crawler",
"id": "1204784c50f67ccb87aef3d0c5aa95d4575b6cc5",
"size": "643",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "csxj/utils.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "46500"
},
{
"name": "Python",
"bytes": "648599"
},
{
"name": "Shell",
"bytes": "276"
}
],
"symlink_target": ""
} |
from datetime import timedelta
from django.utils import timezone
from django.db.models import Q
from rest_framework.views import APIView, Response
from common.models import County, Constituency, Ward
from chul.models import CommunityHealthUnit
from ..models import (
OwnerType,
Owner,
FacilityStatus,
FacilityType,
Facility
)
from ..views import QuerysetFilterMixin
class DashBoard(QuerysetFilterMixin, APIView):
queryset = Facility.objects.all()
def get_chu_count_in_county_summary(self, county):
return CommunityHealthUnit.objects.filter(
facility__ward__constituency__county=county).count()
def get_chu_count_in_constituency_summary(self, const):
return CommunityHealthUnit.objects.filter(
facility__ward__constituency=const).count()
def get_chu_count_in_ward_summary(self, ward):
return CommunityHealthUnit.objects.filter(
facility__ward=ward).count()
def get_facility_county_summary(self):
counties = County.objects.all()
facility_county_summary = {}
for county in counties:
facility_county_count = self.get_queryset().filter(
ward__constituency__county=county).count()
facility_county_summary[str(county.name)] = facility_county_count
top_10_counties = sorted(
facility_county_summary.items(),
key=lambda x: x[1], reverse=True)[0:20]
top_10_counties_summary = []
for item in top_10_counties:
county = County.objects.get(name=item[0])
chu_count = self.get_chu_count_in_county_summary(county)
top_10_counties_summary.append(
{
"name": item[0],
"count": item[1],
"chu_count": chu_count
})
return top_10_counties_summary if self.request.user.is_national else []
def get_facility_constituency_summary(self):
constituencies = Constituency.objects.filter(
county=self.request.user.county)
constituencies = constituencies if self.request.user.county else []
facility_constituency_summary = {}
for const in constituencies:
facility_const_count = self.get_queryset().filter(
ward__constituency=const).count()
facility_constituency_summary[
str(const.name)] = facility_const_count
top_10_consts = sorted(
facility_constituency_summary.items(),
key=lambda x: x[1], reverse=True)[0:20]
top_10_consts_summary = []
for item in top_10_consts:
const = Constituency.objects.get(name=item[0])
chu_count = self.get_chu_count_in_constituency_summary(const)
top_10_consts_summary.append(
{
"name": item[0],
"count": item[1],
"chu_count": chu_count
})
return top_10_consts_summary
def get_facility_ward_summary(self):
wards = Ward.objects.filter(
constituency=self.request.user.constituency) \
if self.request.user.constituency else []
facility_ward_summary = {}
for ward in wards:
facility_ward_count = self.get_queryset().filter(
ward=ward).count()
facility_ward_summary[
str(ward.name + "|" + str(ward.code))] = facility_ward_count
top_10_wards = sorted(
facility_ward_summary.items(),
key=lambda x: x[1], reverse=True)[0:20]
top_10_wards_summary = []
for item in top_10_wards:
ward = Ward.objects.get(code=item[0].split('|')[1])
chu_count = self.get_chu_count_in_ward_summary(ward)
top_10_wards_summary.append(
{
"name": item[0].split('|')[0],
"count": item[1],
"chu_count": chu_count
})
return top_10_wards_summary
def get_facility_type_summary(self):
facility_types = FacilityType.objects.all()
facility_type_summary = []
for facility_type in facility_types:
facility_type_summary.append(
{
"name": str(facility_type.name),
"count": self.get_queryset().filter(
facility_type=facility_type).count()
})
facility_type_summary_sorted = sorted(
facility_type_summary,
            key=lambda x: x["count"], reverse=True)[0:5]
return facility_type_summary_sorted
def get_facility_owner_summary(self):
owners = Owner.objects.all()
facility_owners_summary = []
for owner in owners:
facility_owners_summary.append(
{
"name": owner.name,
"count": self.get_queryset().filter(
owner=owner).count()
})
return facility_owners_summary
def get_facility_status_summary(self):
statuses = FacilityStatus.objects.all()
status_summary = []
for status in statuses:
status_summary.append(
{
"name": status.name,
"count": self.get_queryset().filter(
operation_status=status).count()
})
return status_summary
def get_facility_owner_types_summary(self):
owner_types = OwnerType.objects.all()
owner_types_summary = []
for owner_type in owner_types:
owner_types_summary.append(
{
"name": owner_type.name,
"count": self.get_queryset().filter(
owner__owner_type=owner_type).count()
})
return owner_types_summary
def get_recently_created_facilities(self):
right_now = timezone.now()
last_week = self.request.query_params.get('last_week', None)
last_month = self.request.query_params.get('last_month', None)
last_three_months = self.request.query_params.get(
'last_three_months', None)
three_months_ago = right_now - timedelta(days=90)
if last_week:
weekly = right_now - timedelta(days=7)
return self.get_queryset().filter(
created__gte=weekly).count()
if last_month:
monthly = right_now - timedelta(days=30)
return self.get_queryset().filter(
created__gte=monthly).count()
if last_three_months:
return self.get_queryset().filter(
created__gte=three_months_ago).count()
return self.get_queryset().filter(
created__gte=three_months_ago).count()
def get_recently_created_chus(self):
right_now = timezone.now()
last_week = self.request.query_params.get('last_week', None)
last_month = self.request.query_params.get('last_month', None)
last_three_months = self.request.query_params.get(
'last_three_months', None)
three_months_ago = right_now - timedelta(days=90)
if last_week:
weekly = right_now - timedelta(days=7)
return CommunityHealthUnit.objects.filter(
facility__in=self.get_queryset(),
created__gte=weekly).count()
if last_month:
monthly = right_now - timedelta(days=30)
return CommunityHealthUnit.objects.filter(
facility__in=self.get_queryset(),
created__gte=monthly).count()
if last_three_months:
return CommunityHealthUnit.objects.filter(
facility__in=self.get_queryset(),
date_established__gte=three_months_ago).count()
return CommunityHealthUnit.objects.filter(
facility__in=self.get_queryset(),
date_established__gte=three_months_ago).count()
def facilities_pending_approval_count(self):
updated_pending_approval = self.get_queryset().filter(has_edits=True)
newly_created = self.queryset.filter(approved=False, rejected=False)
return len(
list(set(list(updated_pending_approval) + list(newly_created)))
)
def get_chus_pending_approval(self):
"""
Get the number of CHUs pending approval
"""
return CommunityHealthUnit.objects.filter(
Q(is_approved=False, is_rejected=False) |
Q(has_edits=True)).distinct().count()
def get_rejected_chus(self):
"""
Get the number of CHUs that have been rejected
"""
return CommunityHealthUnit.objects.filter(is_rejected=True).count()
def get_rejected_facilities_count(self):
return self.get_queryset().filter(rejected=True).count()
def get_closed_facilities_count(self):
return self.get_queryset().filter(closed=True).count()
def get(self, *args, **kwargs):
user = self.request.user
data = {
"total_facilities": self.get_queryset().count(),
"county_summary": self.get_facility_county_summary()
if user.is_national else [],
"constituencies_summary": self.get_facility_constituency_summary()
if user.county else [],
"wards_summary": self.get_facility_ward_summary()
if user.constituency else [],
"owners_summary": self.get_facility_owner_summary(),
"types_summary": self.get_facility_type_summary(),
"status_summary": self.get_facility_status_summary(),
"owner_types": self.get_facility_owner_types_summary(),
"recently_created": self.get_recently_created_facilities(),
"recently_created_chus": self.get_recently_created_chus(),
"pending_updates": self.facilities_pending_approval_count(),
"rejected_facilities_count": self.get_rejected_facilities_count(),
"closed_facilities_count": self.get_closed_facilities_count(),
"rejected_chus": self.get_rejected_chus(),
"chus_pending_approval": self.get_chus_pending_approval(),
"total_chus": CommunityHealthUnit.objects.filter(
facility__in=self.get_queryset()).count()
}
fields = self.request.query_params.get("fields", None)
if fields:
required = fields.split(",")
required_data = {
i: data[i] for i in data if i in required
}
return Response(required_data)
return Response(data)
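# --- Editor's illustration: a hedged sketch, not part of the original module.
# The "fields" query-parameter handling at the end of get() is a plain
# dictionary filter; the payload values and the _demo_fields_filter name are
# made-up examples, only the filtering expression mirrors the view code above.
def _demo_fields_filter():
    data = {
        "total_facilities": 120,
        "total_chus": 45,
        "recently_created": 7,
    }
    fields = "total_facilities,total_chus"  # e.g. ?fields=total_facilities,total_chus
    required = fields.split(",")
    required_data = {i: data[i] for i in data if i in required}
    assert required_data == {"total_facilities": 120, "total_chus": 45}
    return required_data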
| {
"content_hash": "5910e54286e5a555628b7e7b30fdd269",
"timestamp": "",
"source": "github",
"line_count": 275,
"max_line_length": 79,
"avg_line_length": 39.11272727272727,
"alnum_prop": 0.5740981777612495,
"repo_name": "Nyto035/Konza_backend",
"id": "3831cbe6ead4aec8e6ad5b0c729365e9a02e6a33",
"size": "10756",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "facilities/views/facility_dashboard.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "34"
},
{
"name": "HTML",
"bytes": "55286"
},
{
"name": "JavaScript",
"bytes": "1285"
},
{
"name": "PLpgSQL",
"bytes": "4394"
},
{
"name": "Python",
"bytes": "1030238"
},
{
"name": "Ruby",
"bytes": "1251"
},
{
"name": "Shell",
"bytes": "1455"
}
],
"symlink_target": ""
} |
import warnings
from django.conf import settings
from django.contrib.auth.models import Permission, Group
from django.contrib.contenttypes.models import ContentType
from django.db import connection
from django.db import models
from django.db.transaction import set_autocommit
from south.db import db
from south.v2 import DataMigration
try:
from django.contrib.auth import get_user_model
except ImportError: # django < 1.5
from django.contrib.auth.models import User
else:
User = get_user_model()
user_orm_label = '%s.%s' % (User._meta.app_label, User._meta.object_name)
user_model_label = '%s.%s' % (User._meta.app_label, User._meta.model_name)
user_ptr_name = '%s_ptr' % User._meta.object_name.lower()
class Migration(DataMigration):
def forwards(self, orm):
if connection.vendor == 'sqlite':
set_autocommit(True)
ph_model = orm['cms.Placeholder']
page_model = orm['cms.Page']
user_model = orm[settings.AUTH_USER_MODEL]
try:
ph_ctype = ContentType.objects.get(app_label=ph_model._meta.app_label, model=ph_model._meta.model_name)
page_ctype = ContentType.objects.get(app_label=page_model._meta.app_label, model=page_model._meta.model_name)
permission, __ = Permission.objects.get_or_create(
codename='use_structure', content_type=ph_ctype, name=u"Can use Structure mode")
page_permission = Permission.objects.get(codename='change_page', content_type=page_ctype)
for user in user_model.objects.filter(is_superuser=False, is_staff=True):
if user.user_permissions.filter(codename='change_page', content_type=page_ctype).exists():
user.user_permissions.add(permission.pk)
for group in Group.objects.all():
if page_permission in group.permissions.all():
group.permissions.add(permission)
except Exception:
warnings.warn(u'Cannot migrate users to use_structure permission, please add the permission manually')
def backwards(self, orm):
ph_model = orm['cms.Placeholder']
user_model = orm[settings.AUTH_USER_MODEL]
ph_ctype = ContentType.objects.get(app_label=ph_model._meta.app_label, model=ph_model._meta.model_name)
try:
permission, ___ = Permission.objects.get_or_create(
codename='use_structure', content_type=ph_ctype, name=u"Can use Structure mode")
for user in user_model.objects.filter(is_superuser=False, is_staff=True):
user.user_permissions.remove(permission)
for group in Group.objects.all():
if permission in group.permissions.all():
group.permissions.remove(permission)
except Exception:
warnings.warn(u'use_structure not removed from all the users, please check the permission manually')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
user_model_label: {
'Meta': {'object_name': User.__name__, 'db_table': "'%s'" % User._meta.db_table},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "u'user_set'", 'blank': 'True', 'to': u"orm['auth.Group']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "u'user_set'", 'blank': 'True', 'to': u"orm['auth.Permission']"}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'cms.aliaspluginmodel': {
'Meta': {'object_name': 'AliasPluginModel', '_ormbases': ['cms.CMSPlugin']},
'alias_placeholder': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'alias_placeholder'", 'null': 'True', 'to': "orm['cms.Placeholder']"}),
u'cmsplugin_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['cms.CMSPlugin']", 'unique': 'True', 'primary_key': 'True'}),
'plugin': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'alias_reference'", 'null': 'True', 'to': "orm['cms.CMSPlugin']"})
},
'cms.cmsplugin': {
'Meta': {'object_name': 'CMSPlugin'},
'changed_date': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'creation_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'depth': ('django.db.models.fields.PositiveIntegerField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'max_length': '15', 'db_index': 'True'}),
'numchild': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cms.CMSPlugin']", 'null': 'True', 'blank': 'True'}),
'path': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'placeholder': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cms.Placeholder']", 'null': 'True'}),
'plugin_type': ('django.db.models.fields.CharField', [], {'max_length': '50', 'db_index': 'True'}),
'position': ('django.db.models.fields.PositiveSmallIntegerField', [], {'null': 'True', 'blank': 'True'})
},
'cms.globalpagepermission': {
'Meta': {'object_name': 'GlobalPagePermission'},
'can_add': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change_advanced_settings': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_change_permissions': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_delete': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_move_page': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_publish': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_recover_page': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_view': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'group': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'sites': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['sites.Site']", 'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['%s']" % user_orm_label, 'null': 'True', 'blank': 'True'})
},
'cms.page': {
'Meta': {'ordering': "('path',)", 'unique_together': "(('publisher_is_draft', 'site', 'application_namespace'), ('reverse_id', 'site', 'publisher_is_draft'))", 'object_name': 'Page'},
'application_namespace': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'application_urls': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '200', 'null': 'True', 'blank': 'True'}),
'changed_by': ('django.db.models.fields.CharField', [], {'max_length': '70'}),
'changed_date': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'created_by': ('django.db.models.fields.CharField', [], {'max_length': '70'}),
'creation_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'depth': ('django.db.models.fields.PositiveIntegerField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'in_navigation': ('django.db.models.fields.BooleanField', [], {'default': 'True', 'db_index': 'True'}),
'is_home': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'languages': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'limit_visibility_in_menu': ('django.db.models.fields.SmallIntegerField', [], {'default': 'None', 'null': 'True', 'db_index': 'True', 'blank': 'True'}),
'login_required': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'navigation_extenders': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '80', 'null': 'True', 'blank': 'True'}),
'numchild': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'children'", 'null': 'True', 'to': "orm['cms.Page']"}),
'path': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'placeholders': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['cms.Placeholder']", 'symmetrical': 'False'}),
'publication_date': ('django.db.models.fields.DateTimeField', [], {'db_index': 'True', 'null': 'True', 'blank': 'True'}),
'publication_end_date': ('django.db.models.fields.DateTimeField', [], {'db_index': 'True', 'null': 'True', 'blank': 'True'}),
'publisher_is_draft': ('django.db.models.fields.BooleanField', [], {'default': 'True', 'db_index': 'True'}),
'publisher_public': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'publisher_draft'", 'unique': 'True', 'null': 'True', 'to': "orm['cms.Page']"}),
'reverse_id': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '40', 'null': 'True', 'blank': 'True'}),
'revision_id': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'djangocms_pages'", 'to': u"orm['sites.Site']"}),
'soft_root': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'template': ('django.db.models.fields.CharField', [], {'default': "'INHERIT'", 'max_length': '100'}),
'xframe_options': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
'cms.pagepermission': {
'Meta': {'object_name': 'PagePermission'},
'can_add': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_change_advanced_settings': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_change_permissions': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'can_delete': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_move_page': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_publish': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'can_view': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'grant_on': ('django.db.models.fields.IntegerField', [], {'default': '5'}),
'group': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'page': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cms.Page']", 'null': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['%s']" % user_orm_label, 'null': 'True', 'blank': 'True'})
},
'cms.pageuser': {
'Meta': {'object_name': 'PageUser', '_ormbases': [user_orm_label]},
'created_by': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'created_users'", 'to': u"orm['%s']" % user_orm_label}),
u'user_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['%s']" % user_orm_label, 'unique': 'True', 'primary_key': 'True'})
},
'cms.pageusergroup': {
'Meta': {'object_name': 'PageUserGroup', '_ormbases': [u'auth.Group']},
'created_by': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'created_usergroups'", 'to': u"orm['%s']" % user_orm_label}),
u'group_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['auth.Group']", 'unique': 'True', 'primary_key': 'True'})
},
'cms.placeholder': {
'Meta': {'object_name': 'Placeholder'},
'default_width': ('django.db.models.fields.PositiveSmallIntegerField', [], {'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'slot': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'})
},
'cms.placeholderreference': {
'Meta': {'object_name': 'PlaceholderReference', '_ormbases': ['cms.CMSPlugin']},
u'cmsplugin_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['cms.CMSPlugin']", 'unique': 'True', 'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'placeholder_ref': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cms.Placeholder']", 'null': 'True'})
},
'cms.staticplaceholder': {
'Meta': {'unique_together': "(('code', 'site'),)", 'object_name': 'StaticPlaceholder'},
'code': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'creation_method': ('django.db.models.fields.CharField', [], {'default': "'code'", 'max_length': '20', 'blank': 'True'}),
'dirty': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'draft': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'static_draft'", 'null': 'True', 'to': "orm['cms.Placeholder']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'public': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'static_public'", 'null': 'True', 'to': "orm['cms.Placeholder']"}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['sites.Site']", 'null': 'True', 'blank': 'True'})
},
'cms.title': {
'Meta': {'unique_together': "(('language', 'page'),)", 'object_name': 'Title'},
'creation_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'has_url_overwrite': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'max_length': '15', 'db_index': 'True'}),
'menu_title': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'meta_description': ('django.db.models.fields.TextField', [], {'max_length': '155', 'null': 'True', 'blank': 'True'}),
'page': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'title_set'", 'to': "orm['cms.Page']"}),
'page_title': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'path': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'published': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'publisher_is_draft': ('django.db.models.fields.BooleanField', [], {'default': 'True', 'db_index': 'True'}),
'publisher_public': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'publisher_draft'", 'unique': 'True', 'null': 'True', 'to': "orm['cms.Title']"}),
'publisher_state': ('django.db.models.fields.SmallIntegerField', [], {'default': '0', 'db_index': 'True'}),
'redirect': ('django.db.models.fields.CharField', [], {'max_length': '2048', 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'cms.usersettings': {
'Meta': {'object_name': 'UserSettings'},
'clipboard': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cms.Placeholder']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.CharField', [], {'max_length': '10'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'djangocms_usersettings'", 'unique': 'True', 'to': u"orm['%s']" % user_orm_label})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'sites.site': {
'Meta': {'ordering': "(u'domain',)", 'object_name': 'Site', 'db_table': "u'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['cms']
| {
"content_hash": "8e7fa173bf0fc148fc655eebc1c5273a",
"timestamp": "",
"source": "github",
"line_count": 251,
"max_line_length": 195,
"avg_line_length": 82.2191235059761,
"alnum_prop": 0.5691718757571352,
"repo_name": "qnub/django-cms",
"id": "5d1f03023758e15ef5628d22d5a98bb771721e61",
"size": "20661",
"binary": false,
"copies": "4",
"ref": "refs/heads/develop",
"path": "cms/south_migrations/0075_use_structure.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "CSS",
"bytes": "123335"
},
{
"name": "HTML",
"bytes": "95408"
},
{
"name": "JavaScript",
"bytes": "583157"
},
{
"name": "Logos",
"bytes": "10461"
},
{
"name": "Python",
"bytes": "3514236"
},
{
"name": "XSLT",
"bytes": "5917"
}
],
"symlink_target": ""
} |
"""Unit tests for the "get_feed" module."""
import unittest
from unittest import mock
from google.auth.transport import requests
from . import get_feed
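# AuthorizedSession and Response are both mocked below, so these tests
# exercise get_feed's request/response handling without making any network
# calls; status codes and JSON bodies are injected per test case.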
class GetFeedTest(unittest.TestCase):
@mock.patch.object(requests, "AuthorizedSession", autospec=True)
@mock.patch.object(requests.requests, "Response", autospec=True)
def test_http_error(self, mock_response, mock_session):
mock_session.request.return_value = mock_response
type(mock_response).status_code = mock.PropertyMock(return_value=400)
mock_response.raise_for_status.side_effect = (
requests.requests.exceptions.HTTPError())
with self.assertRaises(requests.requests.exceptions.HTTPError):
get_feed.get_feed(mock_session, "feed name")
@mock.patch.object(requests, "AuthorizedSession", autospec=True)
@mock.patch.object(requests.requests, "Response", autospec=True)
def test_happy_path(self, mock_response, mock_session):
mock_session.request.return_value = mock_response
type(mock_response).status_code = mock.PropertyMock(return_value=200)
expected_feed = {
"feedDetails": {
"cortexXdrSettings": {
"hostname": "api-XXXX.xdr.XX.paloaltonetworks.com",
},
"feedType": "CORTEX_XDR",
},
"name": "feed name",
}
mock_response.json.return_value = expected_feed
actual_feed = get_feed.get_feed(mock_session, "feed name")
self.assertEqual(actual_feed, expected_feed)
if __name__ == "__main__":
unittest.main()
| {
"content_hash": "83118049c6dfeb43217f5320bf5dc2d6",
"timestamp": "",
"source": "github",
"line_count": 45,
"max_line_length": 73,
"avg_line_length": 33.75555555555555,
"alnum_prop": 0.6872942725477288,
"repo_name": "chronicle/api-samples-python",
"id": "b2add94667e32e4a68838f9198c30d6a84caeed6",
"size": "2095",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "feeds/get_feed_test.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "556471"
}
],
"symlink_target": ""
} |
import urllib2, config, re
from bs4 import BeautifulSoup
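# Fetches the portal's IP listing for config.ip and prints "ips correct" when
# the response body contains both the IP and a "creator" field, otherwise
# prints the exception and "ips false".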
def httpget():
req = urllib2.Request(config.hportal_ips + config.ip + "&mine=1")
try:
response = urllib2.urlopen(req)
data = response.read()
print data
print data.index(config.ip)
print data.index("creator")
print "ips correct"
except Exception, e:
print e
print "ips false" | {
"content_hash": "4531fe97b02a0e43e1fe1d0890051943",
"timestamp": "",
"source": "github",
"line_count": 15,
"max_line_length": 69,
"avg_line_length": 27.2,
"alnum_prop": 0.6102941176470589,
"repo_name": "miaolujing/python_script",
"id": "d72f043c7d40c29dd42e13a1fa2eefd24ac573e3",
"size": "475",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "hawkeye_autotest/selenium/tool.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "80006"
}
],
"symlink_target": ""
} |
"""Implementation of Cluster Resolvers for Cloud TPUs."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves.urllib.request import Request
from six.moves.urllib.request import urlopen
from tensorflow.contrib.cluster_resolver.python.training.cluster_resolver import ClusterResolver
from tensorflow.python.training import server_lib
from tensorflow.python.util import compat
_GOOGLE_API_CLIENT_INSTALLED = True
try:
from googleapiclient import discovery # pylint: disable=g-import-not-at-top
from oauth2client.client import GoogleCredentials # pylint: disable=g-import-not-at-top
except ImportError:
_GOOGLE_API_CLIENT_INSTALLED = False
_GKE_ENV_VARIABLE = 'KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS'
_ENDPOINTS_SEPARATOR = ','
_DEFAULT_ENV_VARIABLE = 'TPU_NAME'
_DISCOVERY_SERVICE_URL_ENV_VARIABLE = 'TPU_API_DISCOVERY_URL'
class TPUClusterResolver(ClusterResolver):
"""Cluster Resolver for Google Cloud TPUs.
This is an implementation of cluster resolvers for the Google Cloud TPU
  service. As Cloud TPUs are in alpha, you will need to specify an API definition
file for this to consume, in addition to a list of Cloud TPUs in your Google
Cloud Platform project.
"""
def _requestComputeMetadata(self, path):
req = Request('http://metadata/computeMetadata/v1/%s' % path,
headers={'Metadata-Flavor': 'Google'})
resp = urlopen(req)
return compat.as_bytes(resp.read())
def _shouldResolve(self):
if (self._tpu == compat.as_bytes('') or
self._tpu == compat.as_bytes('local') or
self._tpu.startswith(compat.as_bytes('/bns')) or
self._tpu.startswith(compat.as_bytes('grpc://'))):
return False
return True
@staticmethod
def _inGke():
"""When running in GKE, the environment variable will be set."""
return _GKE_ENV_VARIABLE in os.environ
@staticmethod
def _gkeEndpoints():
return os.environ[_GKE_ENV_VARIABLE]
@staticmethod
def _envVarFallback():
if _DEFAULT_ENV_VARIABLE in os.environ:
return os.environ[_DEFAULT_ENV_VARIABLE]
return None
@staticmethod
def _discoveryUrl():
return os.environ.get(_DISCOVERY_SERVICE_URL_ENV_VARIABLE)
def __init__(self,
tpu=None,
zone=None,
project=None,
job_name='worker',
coordinator_name=None,
coordinator_address=None,
credentials='default',
service=None,
discovery_url=None):
"""Creates a new TPUClusterResolver object.
The ClusterResolver will then use the parameters to query the Cloud TPU APIs
for the IP addresses and ports of each Cloud TPU listed.
Args:
tpu: Either a string, or a list of strings corresponding to the TPUs to
use. If the single string is the empty string, the string 'local', or a
string that begins with 'grpc://' or '/bns', then it is assumed to not
correspond with a Cloud TPU and will instead be passed as the session
master and no ClusterSpec propagation will be done.
zone: Zone where the TPUs are located. If omitted or empty, we will assume
that the zone of the TPU is the same as the zone of the GCE VM, which we
will try to discover from the GCE metadata service.
project: Name of the GCP project containing Cloud TPUs. If omitted or
empty, we will try to discover the project name of the GCE VM from the
GCE metadata service.
job_name: Name of the TensorFlow job the TPUs belong to.
coordinator_name: The name to use for the coordinator. Set to None if the
coordinator should not be included in the computed ClusterSpec.
coordinator_address: The address of the coordinator (typically an ip:port
pair). If set to None, a TF server will be started. If coordinator_name
is None, a TF server will not be started even if coordinator_address is
None.
credentials: GCE Credentials. If None, then we use default credentials
        from the oauth2client.
service: The GCE API object returned by the googleapiclient.discovery
function. If you specify a custom service object, then the credentials
parameter will be ignored.
discovery_url: A URL template that points to the location of
the discovery service. It should have two parameters {api} and
{apiVersion} that when filled in produce an absolute URL to the
discovery document for that service. The environment variable
'TPU_API_DISCOVERY_URL' will override this.
Raises:
ImportError: If the googleapiclient is not installed.
ValueError: If no TPUs are specified.
"""
if isinstance(tpu, list):
if not tpu:
raise ValueError('At least one TPU must be specified.')
if len(tpu) != 1:
raise NotImplementedError(
'Using multiple TPUs in a single session is not yet implemented')
tpu = tpu[0]
in_gke = self._inGke()
# When using GKE with Cloud TPUs, the env variable will be set.
if tpu is None:
if in_gke:
tpu = self._gkeEndpoints()
else:
tpu = self._envVarFallback()
if tpu is None:
raise ValueError('Please provide a TPU Name to connect to.')
self._tpu = compat.as_bytes(tpu) # self._tpu is always bytes
self._job_name = job_name
self._credentials = credentials
should_resolve = self._shouldResolve()
if not project and should_resolve:
project = compat.as_str(
self._requestComputeMetadata('project/project-id'))
if not zone and should_resolve:
zone_path = compat.as_str(self._requestComputeMetadata('instance/zone'))
zone = zone_path.split('/')[-1]
self._project = project
self._zone = zone
if credentials == 'default' and should_resolve:
if _GOOGLE_API_CLIENT_INSTALLED:
self._credentials = GoogleCredentials.get_application_default()
if service is None and should_resolve:
if not _GOOGLE_API_CLIENT_INSTALLED:
raise ImportError('googleapiclient and oauth2client must be installed '
'before using the TPU cluster resolver. Execute: '
'`pip install --upgrade google-api-python-client` '
'and `pip install --upgrade oauth2client` to '
'install with pip.')
final_discovery_url = self._discoveryUrl() or discovery_url
if final_discovery_url:
self._service = discovery.build(
'tpu', 'v1alpha1',
credentials=self._credentials,
discoveryServiceUrl=final_discovery_url)
else:
self._service = discovery.build(
'tpu', 'v1alpha1',
credentials=self._credentials)
else:
self._service = service
self._coordinator_name = coordinator_name
if coordinator_name and not coordinator_address and (should_resolve or
in_gke):
self._start_local_server()
else:
self._coordinator_address = coordinator_address
def master(self):
"""Get the Master string to be used for the session.
In the normal case, this returns the grpc path (grpc://1.2.3.4:8470) of
    the first instance in the ClusterSpec returned by the cluster_spec function.
If a non-TPU name is used when constructing a TPUClusterResolver, that will
    be returned instead (e.g. if the tpus argument's value when constructing
this TPUClusterResolver was 'grpc://10.240.1.2:8470',
'grpc://10.240.1.2:8470' will be returned).
Returns:
string, the connection string to use when creating a session.
Raises:
ValueError: If none of the TPUs specified exists.
"""
if not self._shouldResolve():
return self._tpu.split(compat.as_bytes(_ENDPOINTS_SEPARATOR))[0]
job_tasks = self.cluster_spec().job_tasks(self._job_name)
if not job_tasks:
      raise ValueError('No TPUs with the specified names exist.')
return 'grpc://' + job_tasks[0]
def get_master(self):
return self.master()
def cluster_spec(self):
"""Returns a ClusterSpec object based on the latest TPU information.
We retrieve the information from the GCE APIs every time this method is
called.
Returns:
A ClusterSpec containing host information returned from Cloud TPUs.
Raises:
RuntimeError: If the provided TPU is not healthy.
"""
############################################################################
# There are 5 potential cases this code must handle:
# 1. [Normal case.] We should resolve the TPU name to a set of tasks, and
# a. Create a ClusterSpec that includes the coordinator job
# b. Create a ClusterSpec without the coordinator job.
# 2. [GKE / No API Access.] We should not resolve the TPU name to a set of
# tasks and
# a. Create a ClusterSpec with the coordinator
# b. Create a ClusterSpec without the coordinator
# 3. [Other (legacy non-gRPC).] We should return an empty ClusterSpec.
############################################################################
if self._shouldResolve():
# Case 1.
full_name = 'projects/%s/locations/%s/nodes/%s' % (
self._project, self._zone, compat.as_text(self._tpu))
request = self._service.projects().locations().nodes().get(name=full_name)
response = request.execute()
if 'state' in response and response['state'] != 'READY':
raise RuntimeError('TPU "%s" is not yet ready; state: "%s"' %
(compat.as_text(self._tpu), response['state']))
if 'health' in response and response['health'] != 'HEALTHY':
raise RuntimeError('TPU "%s" is unhealthy: "%s"' %
(compat.as_text(self._tpu), response['health']))
if 'networkEndpoints' in response:
worker_list = [
'%s:%s' % (endpoint['ipAddress'], endpoint['port'])
for endpoint in response['networkEndpoints']
]
else:
# Fall back to the deprecated response format
instance_url = '%s:%s' % (response['ipAddress'], response['port'])
worker_list = [instance_url]
cluster_spec = {self._job_name: worker_list}
else:
if not self._tpu.startswith(compat.as_bytes('grpc://')):
# Case 3.
return None
# Case 2.
cluster_spec = {
self._job_name: [
x[len(compat.as_bytes('grpc://')):]
for x in self._tpu.split(compat.as_bytes(_ENDPOINTS_SEPARATOR))
]
}
if self._coordinator_address:
# {1, 2}.a
cluster_spec[self._coordinator_name] = [self._coordinator_address]
return server_lib.ClusterSpec(cluster_spec)
def _start_local_server(self):
address = self._requestComputeMetadata('instance/network-interfaces/0/ip')
self._server = server_lib.Server(
{
'local': ['0.0.0.0:0']
}, protocol='grpc', config=None, start=True)
# self._server.target is of the form: grpc://ipaddress:port
target = compat.as_bytes(self._server.target)
splits = target.split(compat.as_bytes(':'))
assert len(splits) == 3, self._server.target
assert splits[0] == compat.as_bytes('grpc'), self._server.target
self._coordinator_port = compat.as_text(splits[2])
self._coordinator_address = '%s:%s' % (
address, compat.as_text(self._coordinator_port))
def __deepcopy__(self, memo):
# TODO(b/73668574): Remove this once RunConfig avoids performing deepcopy.
return self
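# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the original module). It assumes a
# pre-resolved gRPC endpoint, so no Cloud TPU API call, GCE metadata lookup,
# or credentials are required; the address 10.240.1.2:8470 is illustrative
# only. This exercises "Case 2" of cluster_spec() above: a name starting with
# 'grpc://' is passed through instead of being resolved.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
  resolver = TPUClusterResolver(tpu='grpc://10.240.1.2:8470')
  # master() returns the endpoint unresolved because _shouldResolve() is False.
  print(resolver.master())
  # cluster_spec() contains a single 'worker' task pointing at the endpoint.
  print(resolver.cluster_spec())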
| {
"content_hash": "acb675a70306948674304b102d0a8e48",
"timestamp": "",
"source": "github",
"line_count": 303,
"max_line_length": 96,
"avg_line_length": 38.62706270627063,
"alnum_prop": 0.6418318523581682,
"repo_name": "aselle/tensorflow",
"id": "1ab150d74ac00c5f9acf3c9399880708b2f62b1e",
"size": "12389",
"binary": false,
"copies": "3",
"ref": "refs/heads/master",
"path": "tensorflow/contrib/cluster_resolver/python/training/tpu_cluster_resolver.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Batchfile",
"bytes": "9258"
},
{
"name": "C",
"bytes": "321697"
},
{
"name": "C#",
"bytes": "7259"
},
{
"name": "C++",
"bytes": "46003590"
},
{
"name": "CMake",
"bytes": "207738"
},
{
"name": "Dockerfile",
"bytes": "6905"
},
{
"name": "Go",
"bytes": "1210133"
},
{
"name": "HTML",
"bytes": "4680032"
},
{
"name": "Java",
"bytes": "829230"
},
{
"name": "Jupyter Notebook",
"bytes": "2578736"
},
{
"name": "LLVM",
"bytes": "6536"
},
{
"name": "Makefile",
"bytes": "52243"
},
{
"name": "Objective-C",
"bytes": "15650"
},
{
"name": "Objective-C++",
"bytes": "99265"
},
{
"name": "PHP",
"bytes": "2140"
},
{
"name": "Perl",
"bytes": "7536"
},
{
"name": "PureBasic",
"bytes": "25356"
},
{
"name": "Python",
"bytes": "39898642"
},
{
"name": "Ruby",
"bytes": "533"
},
{
"name": "Shell",
"bytes": "447009"
},
{
"name": "Smarty",
"bytes": "6870"
}
],
"symlink_target": ""
} |
from __future__ import print_function
from googleapiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools
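# OAuth 2.0 bootstrap: reuse cached credentials from storage.json when valid,
# otherwise run the installed-app flow against client_id.json, then print the
# name and MIME type of the first page of the user's Drive files.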
SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
flow = client.flow_from_clientsecrets('client_id.json', SCOPES)
creds = tools.run_flow(flow, store)
DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http()))
files = DRIVE.files().list().execute().get('files', [])
for f in files:
print(f['name'], f['mimeType'])
| {
"content_hash": "3126078b7a7d785a6c9e40517b6c0987",
"timestamp": "",
"source": "github",
"line_count": 17,
"max_line_length": 68,
"avg_line_length": 34.705882352941174,
"alnum_prop": 0.7203389830508474,
"repo_name": "googleworkspace/gsuite-apis-intro",
"id": "48d28ea9b2404dfa8e4dad7e8e8a6c55b1da6795",
"size": "1165",
"binary": false,
"copies": "1",
"ref": "refs/heads/main",
"path": "drive_list.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "2773"
}
],
"symlink_target": ""
} |
ordering_of_classes = [
"Point",
"Segment",
"Ray",
"Line",
"Triangle",
"RegularPolygon",
"Polygon",
"Circle",
"Ellipse",
"Curve"
]
class GeometryEntity(tuple):
"""The base class for any geometrical entity."""
def __new__(cls, *args, **kwargs):
return tuple.__new__(cls, args)
def __getnewargs__(self):
return tuple(self)
@staticmethod
def do_intersection(e1, e2):
"""
Determines the intersection between two geometrical entities. Returns
a list of all of the intersections.
"""
try:
return e1.intersection(e2)
except Exception:
pass
try:
return e2.intersection(e1)
except NotImplementedError:
n1,n2 = type(e1).__name__, type(e2).__name__
raise NotImplementedError("Unable to determine intersection between '%s' and '%s'" % (n1, n2))
def is_similar(self, other):
"""
Return True if self and other are similar. Two entities are similar
if a uniform scaling (enlarging or shrinking) of one of the entities
will allow one to obtain the other.
Notes:
======
- This method is not intended to be used directly but rather
through the are_similar() method found in util.py.
- An entity is not required to implement this method.
- If two different types of entities can be similar, it is only
required that one of them be able to determine this.
"""
raise NotImplementedError()
def intersection(self, o):
"""
Returns a list of all of the intersections of this entity and another
entity.
Notes:
======
- This method is not intended to be used directly but rather
through the intersection() method found in util.py.
- An entity is not required to implement this method.
- If two different types of entities can intersect, it is only
required that one of them be able to determine this.
"""
raise NotImplementedError()
@staticmethod
def extract_entities(args, remove_duplicates=True):
"""
Takes a set of arguments and extracts all of the GeometryEntity
instances (recursively). Returns a tuple of all of the instances
found.
Notes:
======
- Duplicates of entities are removed if the remove_duplicates
argument is set to True, otherwise duplicates remain.
The default is True.
- Anything that is not a GeometryEntity instance is simply
ignored.
- Ordering of arguments is always maintained. If duplicates
are removed then the entry with the lowest index is kept.
"""
ret = list()
for arg in args:
if isinstance(arg, GeometryEntity):
ret.append(arg)
elif isinstance(arg, (list, tuple, set)):
ret.extend(GeometryEntity.extract_entities(arg))
if remove_duplicates:
temp = set(ret)
ind, n = 0, len(ret)
for counter in xrange(0, n):
x = ret[ind]
if x in temp:
temp.remove(x)
ind += 1
else:
del ret[ind]
return tuple(ret)
def __ne__(self, o):
return not self.__eq__(o)
def __radd__(self, a):
return a.__add__(self)
def __rsub__(self, a):
return a.__sub__(self)
def __rmul__(self, a):
return a.__mul__(self)
def __rdiv__(self, a):
return a.__div__(self)
def __str__(self):
from sympy.printing import sstr
        return type(self).__name__ + sstr(tuple(self))
def __repr__(self):
from sympy.printing import srepr
return type(self).__name__ + srepr(tuple(self))
def __cmp__(self, other):
n1 = self.__class__.__name__
n2 = other.__class__.__name__
c = cmp(n1, n2)
if not c: return 0
i1 = -1
for cls in self.__class__.__mro__:
try:
i1 = ordering_of_classes.index(cls.__name__)
break
except ValueError:
i1 = -1
if i1 == -1: return c
i2 = -1
for cls in other.__class__.__mro__:
try:
i2 = ordering_of_classes.index(cls.__name__)
break
except ValueError:
i2 = -1
if i2 == -1: return c
return cmp(i1, i2)
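# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the original module; runs under the
# Python 2 interpreter this module targets). The Dot subclass is purely
# illustrative and only exists to demonstrate extract_entities().
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    class Dot(GeometryEntity):
        pass
    a, b = Dot(1, 2), Dot(3, 4)
    # Nested tuples/lists are flattened recursively; the second occurrence of
    # `a` is dropped because remove_duplicates defaults to True, while the
    # original argument order is preserved.
    assert GeometryEntity.extract_entities([a, (b, [a])]) == (a, b)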
| {
"content_hash": "ad742df2ef23e0d312256dd983369c3e",
"timestamp": "",
"source": "github",
"line_count": 154,
"max_line_length": 106,
"avg_line_length": 30.337662337662337,
"alnum_prop": 0.5303938356164384,
"repo_name": "mattpap/sympy-polys",
"id": "ca29dc093faf0a53f70641676e88f5009ff890c3",
"size": "4734",
"binary": false,
"copies": "4",
"ref": "refs/heads/master",
"path": "sympy/geometry/entity.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Python",
"bytes": "8476904"
},
{
"name": "Scheme",
"bytes": "125"
}
],
"symlink_target": ""
} |
"""
It is unfortunately not well documented how stubs and annotations work in Jedi.
If somebody needs an introduction, please let me know.
"""
| {
"content_hash": "9cef0a1f3b15596c2f03c75b986283e9",
"timestamp": "",
"source": "github",
"line_count": 4,
"max_line_length": 79,
"avg_line_length": 35.75,
"alnum_prop": 0.7692307692307693,
"repo_name": "sserrot/champion_relationships",
"id": "5c86b7b349988f5dd5e0e6c3e957c59918ff997d",
"size": "143",
"binary": false,
"copies": "9",
"ref": "refs/heads/master",
"path": "venv/Lib/site-packages/jedi/inference/gradual/__init__.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "128"
},
{
"name": "HTML",
"bytes": "18324224"
},
{
"name": "Jupyter Notebook",
"bytes": "9131072"
},
{
"name": "Python",
"bytes": "10702"
}
],
"symlink_target": ""
} |
import psycopg2
import time, os
conn = psycopg2.connect(database='postgis_22_DBSpatial', user='postgres', password='salilbatra', host='127.0.0.1', port='5432')
#print("opened database very very successfully")
file = open('resultPostgis.geojson', 'w')
cur = conn.cursor()
start = time.time()
#print(datetime.time)
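# The four spatial-join variants below (ST_Contains, _ST_Contains,
# ST_Intersects, _ST_Intersects) are executed back to back; the elapsed time
# printed at the end covers all four executes, and only the last result set
# is fetched and written to the output file.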
cur.execute("""SELECT ng.name, cs.latitude, cs.longitude
FROM "neighborhoods" AS ng, "collisions" AS cs
WHERE ST_Contains(ng.geom,cs.geom);""")
cur.execute("""SELECT ng.name, cs.latitude, cs.longitude
FROM "neighborhoods" AS ng, "collisions" AS cs
WHERE _ST_Contains(ng.geom,cs.geom);""")
cur.execute("""SELECT ng.name, cs.latitude, cs.longitude
FROM "neighborhoods" AS ng, "collisions" AS cs
WHERE ST_Intersects(ng.geom,cs.geom);""")
cur.execute("""SELECT ng.name, cs.latitude, cs.longitude
FROM "neighborhoods" AS ng, "collisions" AS cs
WHERE _ST_Intersects(ng.geom,cs.geom);""")
#cur.execute("""SELECT ng.geom FROM "neighborhoods" as ng WHERE ng.name = 'SOMERTON'""")
end = time.time()-start
for row in cur.fetchall():
file.write(str(row))
print(end) | {
"content_hash": "b2d3110862b2c42c121f8bdd39b05ba9",
"timestamp": "",
"source": "github",
"line_count": 30,
"max_line_length": 127,
"avg_line_length": 41,
"alnum_prop": 0.6292682926829268,
"repo_name": "salilbatra/Spatial-joins-on-relational-database-and-key-value-store",
"id": "87ead41f692a4235cd9a8178a79305c4705c47ee",
"size": "1230",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "Postgres.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "5031"
}
],
"symlink_target": ""
} |
from oslo_config import cfg
from kuryr.lib import config as kuryr_config
from kuryr.tests.unit import base
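# Sanity-checks the defaults registered by kuryr.lib.config: the neutron
# group's subnetpools and endpoint type, the deployment type, and the default
# binding driver.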
class ConfigurationTest(base.TestCase):
def test_defaults(self):
neutron_group = getattr(cfg.CONF, kuryr_config.neutron_group.name)
self.assertEqual('kuryr',
neutron_group.default_subnetpool_v4)
self.assertEqual('kuryr6',
neutron_group.default_subnetpool_v6)
self.assertEqual('public',
neutron_group.endpoint_type)
self.assertEqual('baremetal',
cfg.CONF.deployment_type)
self.assertEqual('kuryr.lib.binding.drivers.veth',
cfg.CONF.binding.default_driver)
| {
"content_hash": "abf59840486bcb955db3076f55e8fd66",
"timestamp": "",
"source": "github",
"line_count": 21,
"max_line_length": 74,
"avg_line_length": 35.42857142857143,
"alnum_prop": 0.6115591397849462,
"repo_name": "openstack/kuryr",
"id": "9bf4efb374e499f83731a39cf46eda972de38321",
"size": "1317",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "kuryr/tests/unit/test_config.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Dockerfile",
"bytes": "45"
},
{
"name": "Python",
"bytes": "102976"
},
{
"name": "Shell",
"bytes": "9499"
}
],
"symlink_target": ""
} |
import time
import datetime
import dateutil
import stripe
import hashlib
import redis
import uuid
import mongoengine as mongo
from pprint import pprint
from django.db import models
from django.db import IntegrityError
from django.db.utils import DatabaseError
from django.db.models.signals import post_save
from django.db.models import Sum, Avg, Count
from django.conf import settings
from django.contrib.auth import authenticate
from django.contrib.auth.models import User
from django.core.mail import mail_admins
from django.core.mail import EmailMultiAlternatives
from django.core.urlresolvers import reverse
from django.template.loader import render_to_string
from apps.rss_feeds.models import Feed, MStory, MStarredStory
from apps.rss_feeds.tasks import SchedulePremiumSetup
from apps.feed_import.models import GoogleReaderImporter, OPMLExporter
from apps.reader.models import UserSubscription
from apps.reader.models import RUserStory
from utils import log as logging
from utils import json_functions as json
from utils.user_functions import generate_secret_token
from utils.feed_functions import chunks
from vendor.timezones.fields import TimeZoneField
from vendor.paypal.standard.ipn.signals import subscription_signup, payment_was_successful, recurring_payment
from vendor.paypal.standard.ipn.signals import payment_was_flagged
from vendor.paypal.standard.ipn.models import PayPalIPN
from vendor.paypalapi.interface import PayPalInterface
from vendor.paypalapi.exceptions import PayPalAPIResponseError
from zebra.signals import zebra_webhook_customer_subscription_created
from zebra.signals import zebra_webhook_charge_succeeded
class Profile(models.Model):
user = models.OneToOneField(User, unique=True, related_name="profile")
is_premium = models.BooleanField(default=False)
premium_expire = models.DateTimeField(blank=True, null=True)
send_emails = models.BooleanField(default=True)
preferences = models.TextField(default="{}")
view_settings = models.TextField(default="{}")
collapsed_folders = models.TextField(default="[]")
feed_pane_size = models.IntegerField(default=242)
tutorial_finished = models.BooleanField(default=False)
hide_getting_started = models.NullBooleanField(default=False, null=True, blank=True)
has_setup_feeds = models.NullBooleanField(default=False, null=True, blank=True)
has_found_friends = models.NullBooleanField(default=False, null=True, blank=True)
has_trained_intelligence = models.NullBooleanField(default=False, null=True, blank=True)
last_seen_on = models.DateTimeField(default=datetime.datetime.now)
last_seen_ip = models.CharField(max_length=50, blank=True, null=True)
dashboard_date = models.DateTimeField(default=datetime.datetime.now)
timezone = TimeZoneField(default="America/New_York")
secret_token = models.CharField(max_length=12, blank=True, null=True)
stripe_4_digits = models.CharField(max_length=4, blank=True, null=True)
stripe_id = models.CharField(max_length=24, blank=True, null=True)
def __unicode__(self):
return "%s <%s> (Premium: %s)" % (self.user, self.user.email, self.is_premium)
@property
def unread_cutoff(self, force_premium=False):
if self.is_premium or force_premium:
return datetime.datetime.utcnow() - datetime.timedelta(days=settings.DAYS_OF_UNREAD)
return datetime.datetime.utcnow() - datetime.timedelta(days=settings.DAYS_OF_UNREAD_FREE)
@property
def unread_cutoff_premium(self):
return datetime.datetime.utcnow() - datetime.timedelta(days=settings.DAYS_OF_UNREAD)
def canonical(self):
return {
'is_premium': self.is_premium,
'preferences': json.decode(self.preferences),
'tutorial_finished': self.tutorial_finished,
'hide_getting_started': self.hide_getting_started,
'has_setup_feeds': self.has_setup_feeds,
'has_found_friends': self.has_found_friends,
'has_trained_intelligence': self.has_trained_intelligence,
'dashboard_date': self.dashboard_date
}
def save(self, *args, **kwargs):
if not self.secret_token:
self.secret_token = generate_secret_token(self.user.username, 12)
try:
super(Profile, self).save(*args, **kwargs)
except DatabaseError:
print " ---> Profile not saved. Table isn't there yet."
def delete_user(self, confirm=False, fast=False):
if not confirm:
print " ---> You must pass confirm=True to delete this user."
return
try:
self.cancel_premium()
except:
logging.user(self.user, "~BR~SK~FWError cancelling premium renewal for: %s" % self.user.username)
from apps.social.models import MSocialProfile, MSharedStory, MSocialSubscription
from apps.social.models import MActivity, MInteraction
try:
social_profile = MSocialProfile.objects.get(user_id=self.user.pk)
logging.user(self.user, "Unfollowing %s followings and %s followers" %
(social_profile.following_count,
social_profile.follower_count))
for follow in social_profile.following_user_ids:
social_profile.unfollow_user(follow)
for follower in social_profile.follower_user_ids:
follower_profile = MSocialProfile.objects.get(user_id=follower)
follower_profile.unfollow_user(self.user.pk)
social_profile.delete()
except MSocialProfile.DoesNotExist:
logging.user(self.user, " ***> No social profile found. S'ok, moving on.")
pass
shared_stories = MSharedStory.objects.filter(user_id=self.user.pk)
logging.user(self.user, "Deleting %s shared stories" % shared_stories.count())
for story in shared_stories:
try:
if not fast:
original_story = MStory.objects.get(story_hash=story.story_hash)
original_story.sync_redis()
except MStory.DoesNotExist:
pass
story.delete()
subscriptions = MSocialSubscription.objects.filter(subscription_user_id=self.user.pk)
logging.user(self.user, "Deleting %s social subscriptions" % subscriptions.count())
subscriptions.delete()
interactions = MInteraction.objects.filter(user_id=self.user.pk)
logging.user(self.user, "Deleting %s interactions for user." % interactions.count())
interactions.delete()
interactions = MInteraction.objects.filter(with_user_id=self.user.pk)
logging.user(self.user, "Deleting %s interactions with user." % interactions.count())
interactions.delete()
activities = MActivity.objects.filter(user_id=self.user.pk)
logging.user(self.user, "Deleting %s activities for user." % activities.count())
activities.delete()
activities = MActivity.objects.filter(with_user_id=self.user.pk)
logging.user(self.user, "Deleting %s activities with user." % activities.count())
activities.delete()
starred_stories = MStarredStory.objects.filter(user_id=self.user.pk)
logging.user(self.user, "Deleting %s starred stories." % starred_stories.count())
starred_stories.delete()
logging.user(self.user, "Deleting user: %s" % self.user)
self.user.delete()
def activate_premium(self, never_expire=False):
from apps.profile.tasks import EmailNewPremium
EmailNewPremium.delay(user_id=self.user.pk)
self.is_premium = True
self.save()
self.user.is_active = True
self.user.save()
subs = UserSubscription.objects.filter(user=self.user)
for sub in subs:
if sub.active: continue
sub.active = True
try:
sub.save()
except (IntegrityError, Feed.DoesNotExist):
pass
try:
scheduled_feeds = [sub.feed.pk for sub in subs]
except Feed.DoesNotExist:
scheduled_feeds = []
        logging.user(self.user, "~SN~FMTasking the immediate premium setup of ~SB%s~SN feeds..." %
len(scheduled_feeds))
SchedulePremiumSetup.apply_async(kwargs=dict(feed_ids=scheduled_feeds))
UserSubscription.queue_new_feeds(self.user)
self.setup_premium_history()
if never_expire:
self.premium_expire = None
self.save()
logging.user(self.user, "~BY~SK~FW~SBNEW PREMIUM ACCOUNT! WOOHOO!!! ~FR%s subscriptions~SN!" % (subs.count()))
return True
def deactivate_premium(self):
self.is_premium = False
self.save()
subs = UserSubscription.objects.filter(user=self.user)
for sub in subs:
sub.active = False
try:
sub.save()
# Don't bother recalculating feed's subs, as it will do that on next fetch
# sub.feed.setup_feed_for_premium_subscribers()
except (IntegrityError, Feed.DoesNotExist):
pass
logging.user(self.user, "~BY~FW~SBBOO! Deactivating premium account: ~FR%s subscriptions~SN!" % (subs.count()))
def activate_free(self):
if self.user.is_active:
return
self.user.is_active = True
self.user.save()
self.send_new_user_queue_email()
def setup_premium_history(self, alt_email=None, check_premium=False, force_expiration=False):
paypal_payments = []
stripe_payments = []
existing_history = PaymentHistory.objects.filter(user=self.user,
payment_provider__in=['paypal', 'stripe'])
if existing_history.count():
logging.user(self.user, "~BY~SN~FRDeleting~FW existing history: ~SB%s payments" % existing_history.count())
existing_history.delete()
# Record Paypal payments
paypal_payments = PayPalIPN.objects.filter(custom=self.user.username,
payment_status='Completed',
txn_type='subscr_payment')
if not paypal_payments.count():
paypal_payments = PayPalIPN.objects.filter(payer_email=self.user.email,
payment_status='Completed',
txn_type='subscr_payment')
if alt_email and not paypal_payments.count():
paypal_payments = PayPalIPN.objects.filter(payer_email=alt_email,
payment_status='Completed',
txn_type='subscr_payment')
if paypal_payments.count():
# Make sure this doesn't happen again, so let's use Paypal's email.
self.user.email = alt_email
self.user.save()
seen_txn_ids = set()
for payment in paypal_payments:
if payment.txn_id in seen_txn_ids: continue
seen_txn_ids.add(payment.txn_id)
PaymentHistory.objects.create(user=self.user,
payment_date=payment.payment_date,
payment_amount=payment.payment_gross,
payment_provider='paypal')
# Record Stripe payments
if self.stripe_id:
stripe.api_key = settings.STRIPE_SECRET
stripe_customer = stripe.Customer.retrieve(self.stripe_id)
stripe_payments = stripe.Charge.all(customer=stripe_customer.id).data
for payment in stripe_payments:
created = datetime.datetime.fromtimestamp(payment.created)
if payment.status == 'failed': continue
PaymentHistory.objects.create(user=self.user,
payment_date=created,
payment_amount=payment.amount / 100.0,
payment_provider='stripe')
# Calculate payments in last year, then add together
payment_history = PaymentHistory.objects.filter(user=self.user)
last_year = datetime.datetime.now() - datetime.timedelta(days=364)
recent_payments_count = 0
oldest_recent_payment_date = None
free_lifetime_premium = False
for payment in payment_history:
if payment.payment_amount == 0:
free_lifetime_premium = True
if payment.payment_date > last_year:
recent_payments_count += 1
if not oldest_recent_payment_date or payment.payment_date < oldest_recent_payment_date:
oldest_recent_payment_date = payment.payment_date
if free_lifetime_premium:
self.premium_expire = None
self.save()
elif oldest_recent_payment_date:
new_premium_expire = (oldest_recent_payment_date +
datetime.timedelta(days=365*recent_payments_count))
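            # Illustrative example (not from the original code): with two
            # payments inside the last 364 days and the older one made 300
            # days ago, the candidate expiration lands 730 days after that
            # older payment, i.e. roughly 430 days from now.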
# Only move premium expire forward, never earlier. Also set expiration if not premium.
if (force_expiration or
(check_premium and not self.premium_expire) or
(self.premium_expire and new_premium_expire > self.premium_expire)):
self.premium_expire = new_premium_expire
self.save()
logging.user(self.user, "~BY~SN~FWFound ~SB~FB%s paypal~FW~SN and ~SB~FC%s stripe~FW~SN payments (~SB%s payments expire: ~SN~FB%s~FW)" % (
len(paypal_payments), len(stripe_payments), len(payment_history), self.premium_expire))
if (check_premium and not self.is_premium and
(not self.premium_expire or self.premium_expire > datetime.datetime.now())):
self.activate_premium()
def refund_premium(self, partial=False):
refunded = False
if self.stripe_id:
stripe.api_key = settings.STRIPE_SECRET
stripe_customer = stripe.Customer.retrieve(self.stripe_id)
stripe_payments = stripe.Charge.all(customer=stripe_customer.id).data
if partial:
stripe_payments[0].refund(amount=1200)
refunded = 12
else:
stripe_payments[0].refund()
self.cancel_premium()
refunded = stripe_payments[0].amount/100
logging.user(self.user, "~FRRefunding stripe payment: $%s" % refunded)
else:
self.cancel_premium()
paypal_opts = {
'API_ENVIRONMENT': 'PRODUCTION',
'API_USERNAME': settings.PAYPAL_API_USERNAME,
'API_PASSWORD': settings.PAYPAL_API_PASSWORD,
'API_SIGNATURE': settings.PAYPAL_API_SIGNATURE,
'API_CA_CERTS': False,
}
paypal = PayPalInterface(**paypal_opts)
transactions = PayPalIPN.objects.filter(custom=self.user.username,
txn_type='subscr_payment'
).order_by('-payment_date')
if not transactions:
transactions = PayPalIPN.objects.filter(payer_email=self.user.email,
txn_type='subscr_payment'
).order_by('-payment_date')
if transactions:
transaction = transactions[0]
refund = paypal.refund_transaction(transaction.txn_id)
try:
refunded = int(float(refund.raw['TOTALREFUNDEDAMOUNT'][0]))
except KeyError:
refunded = int(transaction.payment_gross)
logging.user(self.user, "~FRRefunding paypal payment: $%s" % refunded)
else:
logging.user(self.user, "~FRCouldn't refund paypal payment: not found by username or email")
refunded = 0
return refunded
def cancel_premium(self):
paypal_cancel = self.cancel_premium_paypal()
stripe_cancel = self.cancel_premium_stripe()
return paypal_cancel or stripe_cancel
def cancel_premium_paypal(self):
transactions = PayPalIPN.objects.filter(custom=self.user.username,
txn_type='subscr_signup')
if not transactions:
return
paypal_opts = {
'API_ENVIRONMENT': 'PRODUCTION',
'API_USERNAME': settings.PAYPAL_API_USERNAME,
'API_PASSWORD': settings.PAYPAL_API_PASSWORD,
'API_SIGNATURE': settings.PAYPAL_API_SIGNATURE,
'API_CA_CERTS': False,
}
paypal = PayPalInterface(**paypal_opts)
transaction = transactions[0]
profileid = transaction.subscr_id
try:
paypal.manage_recurring_payments_profile_status(profileid=profileid, action='Cancel')
except PayPalAPIResponseError:
logging.user(self.user, "~FRUser ~SBalready~SN canceled Paypal subscription")
else:
logging.user(self.user, "~FRCanceling Paypal subscription")
return True
def cancel_premium_stripe(self):
if not self.stripe_id:
return
stripe.api_key = settings.STRIPE_SECRET
stripe_customer = stripe.Customer.retrieve(self.stripe_id)
try:
stripe_customer.cancel_subscription()
except stripe.InvalidRequestError:
logging.user(self.user, "~FRFailed to cancel Stripe subscription")
logging.user(self.user, "~FRCanceling Stripe subscription")
return True
@classmethod
def clear_dead_spammers(self, days=30, confirm=False):
users = User.objects.filter(date_joined__gte=datetime.datetime.now()-datetime.timedelta(days=days)).order_by('-date_joined')
usernames = set()
for user in users:
opens = UserSubscription.objects.filter(user=user).aggregate(sum=Sum('feed_opens'))['sum']
reads = RUserStory.read_story_count(user.pk)
if opens is None and not reads:
usernames.add(user.username)
print user.username, user.email, opens, reads
if not confirm: return
for username in usernames:
u = User.objects.get(username=username)
u.profile.delete_user(confirm=True)
RNewUserQueue.user_count()
RNewUserQueue.activate_all()
@classmethod
def count_feed_subscribers(self, feed_id=None, user_id=None, verbose=False):
SUBSCRIBER_EXPIRE = datetime.datetime.now() - datetime.timedelta(days=settings.SUBSCRIBER_EXPIRE)
r = redis.Redis(connection_pool=settings.REDIS_FEED_SUB_POOL)
entire_feed_counted = False
if verbose:
logging.debug(" ---> ~SN~FBCounting subscribers for feed:~SB~FM%s~SN~FB user:~SB~FM%s" % (feed_id, user_id))
if feed_id:
feed_ids = [feed_id]
elif user_id:
feed_ids = [us['feed_id'] for us in UserSubscription.objects.filter(user=user_id, active=True).values('feed_id')]
else:
assert False, "feed_id or user_id required"
if feed_id and not user_id:
entire_feed_counted = True
for feed_id in feed_ids:
total = 0
premium = 0
active = 0
active_premium = 0
key = 's:%s' % feed_id
premium_key = 'sp:%s' % feed_id
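            # Each key is a Redis sorted set of subscriber user ids scored by
            # last_seen timestamp ('sp:' tracks premium subscribers only);
            # muted subscriptions get a score of 0, and a sentinel member of
            # -1 records when the feed was last fully recounted.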
if user_id:
active = UserSubscription.objects.get(feed_id=feed_id, user_id=user_id).only('active').active
user_ids = dict([(user_id, active)])
else:
user_ids = dict([(us.user_id, us.active)
for us in UserSubscription.objects.filter(feed_id=feed_id).only('user', 'active')])
profiles = Profile.objects.filter(user_id__in=user_ids.keys()).values('user_id', 'last_seen_on', 'is_premium')
feed = Feed.get_by_id(feed_id)
if entire_feed_counted:
r.delete(key)
r.delete(premium_key)
for profiles_group in chunks(profiles, 20):
pipeline = r.pipeline()
for profile in profiles_group:
last_seen_on = int(profile['last_seen_on'].strftime('%s'))
muted_feed = not bool(user_ids[profile['user_id']])
if muted_feed:
last_seen_on = 0
pipeline.zadd(key, profile['user_id'], last_seen_on)
total += 1
if profile['is_premium']:
pipeline.zadd(premium_key, profile['user_id'], last_seen_on)
premium += 1
else:
pipeline.zrem(premium_key, profile['user_id'])
if profile['last_seen_on'] > SUBSCRIBER_EXPIRE and not muted_feed:
active += 1
if profile['is_premium']:
active_premium += 1
pipeline.execute()
if entire_feed_counted:
now = int(datetime.datetime.now().strftime('%s'))
r.zadd(key, -1, now)
r.zadd(premium_key, -1, now)
logging.info(" ---> [%-30s] ~SN~FBCounting subscribers, storing in ~SBredis~SN: ~FMt:~SB~FM%s~SN a:~SB%s~SN p:~SB%s~SN ap:~SB%s" %
(feed.title[:30], total, active, premium, active_premium))
@classmethod
def count_all_feed_subscribers_for_user(self, user):
SUBSCRIBER_EXPIRE = datetime.datetime.now() - datetime.timedelta(days=settings.SUBSCRIBER_EXPIRE)
r = redis.Redis(connection_pool=settings.REDIS_FEED_SUB_POOL)
if not isinstance(user, User):
user = User.objects.get(pk=user)
active_feed_ids = [us['feed_id'] for us in UserSubscription.objects.filter(user=user.pk, active=True).values('feed_id')]
muted_feed_ids = [us['feed_id'] for us in UserSubscription.objects.filter(user=user.pk, active=False).values('feed_id')]
        logging.user(user, "~SN~FBRefreshing user last_seen_on for ~SB%s~SN/~SB%s subscriptions~SN" %
(len(active_feed_ids), len(muted_feed_ids)))
for feed_ids in [active_feed_ids, muted_feed_ids]:
for feeds_group in chunks(feed_ids, 20):
pipeline = r.pipeline()
for feed_id in feeds_group:
key = 's:%s' % feed_id
premium_key = 'sp:%s' % feed_id
last_seen_on = int(user.profile.last_seen_on.strftime('%s'))
if feed_ids is muted_feed_ids:
last_seen_on = 0
pipeline.zadd(key, user.pk, last_seen_on)
if user.profile.is_premium:
pipeline.zadd(premium_key, user.pk, last_seen_on)
else:
pipeline.zrem(premium_key, user.pk)
pipeline.execute()
def import_reader_starred_items(self, count=20):
importer = GoogleReaderImporter(self.user)
importer.import_starred_items(count=count)
def send_new_user_email(self):
if not self.user.email or not self.send_emails:
return
user = self.user
text = render_to_string('mail/email_new_account.txt', locals())
html = render_to_string('mail/email_new_account.xhtml', locals())
subject = "Welcome to NewsBlur, %s" % (self.user.username)
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending email for new user: %s" % self.user.email)
def send_opml_export_email(self, reason=None, force=False):
if not self.user.email:
return
emails_sent = MSentEmail.objects.filter(receiver_user_id=self.user.pk,
email_type='opml_export')
day_ago = datetime.datetime.now() - datetime.timedelta(days=1)
for email in emails_sent:
if email.date_sent > day_ago and not force:
logging.user(self.user, "~SN~FMNot sending opml export email, already sent today.")
return
MSentEmail.record(receiver_user_id=self.user.pk, email_type='opml_export')
exporter = OPMLExporter(self.user)
opml = exporter.process()
params = {
'feed_count': UserSubscription.objects.filter(user=self.user).count(),
'reason': reason,
}
user = self.user
text = render_to_string('mail/email_opml_export.txt', params)
html = render_to_string('mail/email_opml_export.xhtml', params)
subject = "Backup OPML file of your NewsBlur sites"
        filename = 'NewsBlur Subscriptions - %s.xml' % datetime.datetime.now().strftime('%Y-%m-%d')
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.attach(filename, opml, 'text/xml')
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending OPML backup email to: %s" % self.user.email)
def send_first_share_to_blurblog_email(self, force=False):
from apps.social.models import MSocialProfile, MSharedStory
if not self.user.email:
return
params = dict(receiver_user_id=self.user.pk, email_type='first_share')
try:
sent_email = MSentEmail.objects.get(**params)
if not force:
# Return if email already sent
return
except MSentEmail.DoesNotExist:
sent_email = MSentEmail.objects.create(**params)
social_profile = MSocialProfile.objects.get(user_id=self.user.pk)
params = {
'shared_stories': MSharedStory.objects.filter(user_id=self.user.pk).count(),
'blurblog_url': social_profile.blurblog_url,
'blurblog_rss': social_profile.blurblog_rss
}
user = self.user
text = render_to_string('mail/email_first_share_to_blurblog.txt', params)
html = render_to_string('mail/email_first_share_to_blurblog.xhtml', params)
subject = "Your shared stories on NewsBlur are available on your Blurblog"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending first share to blurblog email to: %s" % self.user.email)
def send_new_premium_email(self, force=False):
subs = UserSubscription.objects.filter(user=self.user)
message = """Woohoo!
User: %(user)s
Feeds: %(feeds)s
Sincerely,
NewsBlur""" % {'user': self.user.username, 'feeds': subs.count()}
# mail_admins('New premium account', message, fail_silently=True)
if not self.user.email or not self.send_emails:
return
params = dict(receiver_user_id=self.user.pk, email_type='new_premium')
try:
sent_email = MSentEmail.objects.get(**params)
if not force:
# Return if email already sent
return
except MSentEmail.DoesNotExist:
sent_email = MSentEmail.objects.create(**params)
user = self.user
text = render_to_string('mail/email_new_premium.txt', locals())
html = render_to_string('mail/email_new_premium.xhtml', locals())
subject = "Thanks for going premium on NewsBlur!"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending email for new premium: %s" % self.user.email)
def send_forgot_password_email(self, email=None):
if not self.user.email and not email:
print "Please provide an email address."
return
if not self.user.email and email:
self.user.email = email
self.user.save()
user = self.user
text = render_to_string('mail/email_forgot_password.txt', locals())
html = render_to_string('mail/email_forgot_password.xhtml', locals())
subject = "Forgot your password on NewsBlur?"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending email for forgotten password: %s" % self.user.email)
def send_new_user_queue_email(self, force=False):
if not self.user.email:
print "Please provide an email address."
return
params = dict(receiver_user_id=self.user.pk, email_type='new_user_queue')
try:
sent_email = MSentEmail.objects.get(**params)
if not force:
# Return if email already sent
return
except MSentEmail.DoesNotExist:
sent_email = MSentEmail.objects.create(**params)
user = self.user
text = render_to_string('mail/email_new_user_queue.txt', locals())
html = render_to_string('mail/email_new_user_queue.xhtml', locals())
subject = "Your free account is now ready to go on NewsBlur"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending email for new user queue: %s" % self.user.email)
def send_upload_opml_finished_email(self, feed_count):
if not self.user.email:
print "Please provide an email address."
return
user = self.user
text = render_to_string('mail/email_upload_opml_finished.txt', locals())
html = render_to_string('mail/email_upload_opml_finished.xhtml', locals())
subject = "Your OPML upload is complete. Get going with NewsBlur!"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send()
logging.user(self.user, "~BB~FM~SBSending email for OPML upload: %s" % self.user.email)
def send_import_reader_finished_email(self, feed_count):
if not self.user.email:
print "Please provide an email address."
return
user = self.user
text = render_to_string('mail/email_import_reader_finished.txt', locals())
html = render_to_string('mail/email_import_reader_finished.xhtml', locals())
subject = "Your Google Reader import is complete. Get going with NewsBlur!"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send()
logging.user(self.user, "~BB~FM~SBSending email for Google Reader import: %s" % self.user.email)
def send_import_reader_starred_finished_email(self, feed_count, starred_count):
if not self.user.email:
print "Please provide an email address."
return
user = self.user
text = render_to_string('mail/email_import_reader_starred_finished.txt', locals())
html = render_to_string('mail/email_import_reader_starred_finished.xhtml', locals())
subject = "Your Google Reader starred stories import is complete. Get going with NewsBlur!"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send()
logging.user(self.user, "~BB~FM~SBSending email for Google Reader starred stories import: %s" % self.user.email)
def send_launch_social_email(self, force=False):
if not self.user.email or not self.send_emails:
logging.user(self.user, "~FM~SB~FRNot~FM sending launch social email for user, %s: %s" % (self.user.email and 'opt-out: ' or 'blank', self.user.email))
return
params = dict(receiver_user_id=self.user.pk, email_type='launch_social')
try:
sent_email = MSentEmail.objects.get(**params)
if not force:
# Return if email already sent
logging.user(self.user, "~FM~SB~FRNot~FM sending launch social email for user, sent already: %s" % self.user.email)
return
except MSentEmail.DoesNotExist:
sent_email = MSentEmail.objects.create(**params)
delta = datetime.datetime.now() - self.last_seen_on
months_ago = delta.days / 30
user = self.user
data = dict(user=user, months_ago=months_ago)
text = render_to_string('mail/email_launch_social.txt', data)
html = render_to_string('mail/email_launch_social.xhtml', data)
subject = "NewsBlur is now a social news reader"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
logging.user(self.user, "~BB~FM~SBSending launch social email for user: %s months, %s" % (months_ago, self.user.email))
def grace_period_email_sent(self, force=False):
emails_sent = MSentEmail.objects.filter(receiver_user_id=self.user.pk,
email_type='premium_expire_grace')
        sent_cutoff = datetime.datetime.now() - datetime.timedelta(days=360)
        for email in emails_sent:
            if email.date_sent > sent_cutoff and not force:
logging.user(self.user, "~SN~FMNot sending premium expire grace email, already sent before.")
return True
def send_premium_expire_grace_period_email(self, force=False):
if not self.user.email:
logging.user(self.user, "~FM~SB~FRNot~FM~SN sending premium expire grace for user: %s" % (self.user))
return
if self.grace_period_email_sent(force=force):
return
if self.premium_expire < datetime.datetime.now():
self.premium_expire = datetime.datetime.now()
self.save()
delta = datetime.datetime.now() - self.last_seen_on
months_ago = delta.days / 30
user = self.user
data = dict(user=user, months_ago=months_ago)
text = render_to_string('mail/email_premium_expire_grace.txt', data)
html = render_to_string('mail/email_premium_expire_grace.xhtml', data)
subject = "Your premium account on NewsBlur has one more month!"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
MSentEmail.record(receiver_user_id=self.user.pk, email_type='premium_expire_grace')
logging.user(self.user, "~BB~FM~SBSending premium expire grace email for user: %s months, %s" % (months_ago, self.user.email))
def send_premium_expire_email(self, force=False):
if not self.user.email:
logging.user(self.user, "~FM~SB~FRNot~FM sending premium expire for user: %s" % (self.user))
return
emails_sent = MSentEmail.objects.filter(receiver_user_id=self.user.pk,
email_type='premium_expire')
        sent_cutoff = datetime.datetime.now() - datetime.timedelta(days=360)
        for email in emails_sent:
            if email.date_sent > sent_cutoff and not force:
logging.user(self.user, "~FM~SBNot sending premium expire email, already sent before.")
return
delta = datetime.datetime.now() - self.last_seen_on
months_ago = delta.days / 30
user = self.user
data = dict(user=user, months_ago=months_ago)
text = render_to_string('mail/email_premium_expire.txt', data)
html = render_to_string('mail/email_premium_expire.xhtml', data)
subject = "Your premium account on NewsBlur has expired"
msg = EmailMultiAlternatives(subject, text,
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
to=['%s <%s>' % (user, user.email)])
msg.attach_alternative(html, "text/html")
msg.send(fail_silently=True)
MSentEmail.record(receiver_user_id=self.user.pk, email_type='premium_expire')
logging.user(self.user, "~BB~FM~SBSending premium expire email for user: %s months, %s" % (months_ago, self.user.email))
def autologin_url(self, next=None):
return reverse('autologin', kwargs={
'username': self.user.username,
'secret': self.secret_token
}) + ('?' + next + '=1' if next else '')
@classmethod
def doublecheck_paypal_payments(cls, days=14):
payments = PayPalIPN.objects.filter(txn_type='subscr_payment',
updated_at__gte=datetime.datetime.now()
- datetime.timedelta(days)
).order_by('-created_at')
for payment in payments:
try:
profile = Profile.objects.get(user__username=payment.custom)
except Profile.DoesNotExist:
logging.debug(" ---> ~FRCouldn't find user: ~SB~FC%s" % payment.custom)
continue
profile.setup_premium_history(check_premium=True)
def create_profile(sender, instance, created, **kwargs):
if created:
Profile.objects.create(user=instance)
else:
Profile.objects.get_or_create(user=instance)
post_save.connect(create_profile, sender=User)
def paypal_signup(sender, **kwargs):
ipn_obj = sender
try:
user = User.objects.get(username__iexact=ipn_obj.custom)
except User.DoesNotExist:
user = User.objects.get(email__iexact=ipn_obj.payer_email)
logging.user(user, "~BC~SB~FBPaypal subscription signup")
try:
if not user.email:
user.email = ipn_obj.payer_email
user.save()
except:
pass
user.profile.activate_premium()
subscription_signup.connect(paypal_signup)
def paypal_payment_history_sync(sender, **kwargs):
ipn_obj = sender
try:
user = User.objects.get(username__iexact=ipn_obj.custom)
except User.DoesNotExist:
user = User.objects.get(email__iexact=ipn_obj.payer_email)
logging.user(user, "~BC~SB~FBPaypal subscription payment")
try:
user.profile.setup_premium_history(check_premium=True)
except:
return {"code": -1, "message": "User doesn't exist."}
payment_was_successful.connect(paypal_payment_history_sync)
def paypal_payment_was_flagged(sender, **kwargs):
ipn_obj = sender
try:
user = User.objects.get(username__iexact=ipn_obj.custom)
except User.DoesNotExist:
if ipn_obj.payer_email:
user = User.objects.get(email__iexact=ipn_obj.payer_email)
try:
user.profile.setup_premium_history(check_premium=True)
logging.user(user, "~BC~SB~FBPaypal subscription payment flagged")
except:
return {"code": -1, "message": "User doesn't exist."}
payment_was_flagged.connect(paypal_payment_was_flagged)
def paypal_recurring_payment_history_sync(sender, **kwargs):
ipn_obj = sender
try:
user = User.objects.get(username__iexact=ipn_obj.custom)
except User.DoesNotExist:
user = User.objects.get(email__iexact=ipn_obj.payer_email)
logging.user(user, "~BC~SB~FBPaypal subscription recurring payment")
try:
user.profile.setup_premium_history(check_premium=True)
except:
return {"code": -1, "message": "User doesn't exist."}
recurring_payment.connect(paypal_recurring_payment_history_sync)
def stripe_signup(sender, full_json, **kwargs):
stripe_id = full_json['data']['object']['customer']
try:
profile = Profile.objects.get(stripe_id=stripe_id)
logging.user(profile.user, "~BC~SB~FBStripe subscription signup")
profile.activate_premium()
except Profile.DoesNotExist:
return {"code": -1, "message": "User doesn't exist."}
zebra_webhook_customer_subscription_created.connect(stripe_signup)
def stripe_payment_history_sync(sender, full_json, **kwargs):
stripe_id = full_json['data']['object']['customer']
try:
profile = Profile.objects.get(stripe_id=stripe_id)
logging.user(profile.user, "~BC~SB~FBStripe subscription payment")
profile.setup_premium_history(check_premium=True)
except Profile.DoesNotExist:
return {"code": -1, "message": "User doesn't exist."}
zebra_webhook_charge_succeeded.connect(stripe_payment_history_sync)
def change_password(user, old_password, new_password, only_check=False):
user_db = authenticate(username=user.username, password=old_password)
if user_db is None:
blank = blank_authenticate(user.username)
if blank and not only_check:
user.set_password(new_password or user.username)
user.save()
if user_db is None:
user_db = authenticate(username=user.username, password=user.username)
if not user_db:
return -1
else:
if not only_check:
user_db.set_password(new_password)
user_db.save()
return 1
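# A minimal sketch (assumed usage, not part of the original module): verify a password
# without changing anything by passing only_check=True; change_password returns 1 on
# success and -1 when the old password does not match.
def _example_password_check(user, old_password):
    return change_password(user, old_password, new_password="", only_check=True) == 1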
def blank_authenticate(username, password=""):
    try:
        user = User.objects.get(username__iexact=username)
    except User.DoesNotExist:
        return
    # Accounts created without a password store Django's unusable-password marker.
    if user.password == "!":
        return user
    # Legacy password format: "<algorithm>$<salt>$<hash>" (old Django SHA1 scheme).
    algorithm, salt, hash = user.password.split('$', 2)
    encoded_blank = hashlib.sha1(salt + password).hexdigest()
    encoded_username = authenticate(username=username, password=username)
    # Treat the account as blank if the stored hash matches an empty password, or if
    # the username itself works as the password.
    if encoded_blank == hash or encoded_username == user:
        return user
class MSentEmail(mongo.Document):
sending_user_id = mongo.IntField()
receiver_user_id = mongo.IntField()
email_type = mongo.StringField()
date_sent = mongo.DateTimeField(default=datetime.datetime.now)
meta = {
'collection': 'sent_emails',
'allow_inheritance': False,
'indexes': ['sending_user_id', 'receiver_user_id', 'email_type'],
}
def __unicode__(self):
return "%s sent %s email to %s" % (self.sending_user_id, self.email_type, self.receiver_user_id)
@classmethod
def record(cls, email_type, receiver_user_id, sending_user_id=None):
cls.objects.create(email_type=email_type,
receiver_user_id=receiver_user_id,
sending_user_id=sending_user_id)
class PaymentHistory(models.Model):
user = models.ForeignKey(User, related_name='payments')
payment_date = models.DateTimeField()
payment_amount = models.IntegerField()
payment_provider = models.CharField(max_length=20)
def __unicode__(self):
return "[%s] $%s/%s" % (self.payment_date.strftime("%Y-%m-%d"), self.payment_amount,
self.payment_provider)
class Meta:
ordering = ['-payment_date']
def canonical(self):
return {
'payment_date': self.payment_date.strftime('%Y-%m-%d'),
'payment_amount': self.payment_amount,
'payment_provider': self.payment_provider,
}
@classmethod
def report(cls, months=24):
def _counter(start_date, end_date):
payments = PaymentHistory.objects.filter(payment_date__gte=start_date, payment_date__lte=end_date)
payments = payments.aggregate(avg=Avg('payment_amount'),
sum=Sum('payment_amount'),
count=Count('user'))
print "%s-%02d-%02d - %s-%02d-%02d:\t$%.2f\t$%-6s\t%-4s" % (
start_date.year, start_date.month, start_date.day,
end_date.year, end_date.month, end_date.day,
round(payments['avg'], 2), payments['sum'], payments['count'])
return payments['sum']
print "\nMonthly Totals:"
month_totals = {}
for m in reversed(range(months)):
now = datetime.datetime.now()
start_date = datetime.datetime(now.year, now.month, 1) - dateutil.relativedelta.relativedelta(months=m)
end_time = start_date + datetime.timedelta(days=31)
end_date = datetime.datetime(end_time.year, end_time.month, 1) - datetime.timedelta(seconds=1)
total = _counter(start_date, end_date)
month_totals[start_date.strftime("%Y-%m")] = total
print "\nYearly Totals:"
year_totals = {}
years = datetime.datetime.now().year - 2009
for y in reversed(range(years)):
now = datetime.datetime.now()
start_date = datetime.datetime(now.year, 1, 1) - dateutil.relativedelta.relativedelta(years=y)
end_time = start_date + datetime.timedelta(days=365)
end_date = datetime.datetime(end_time.year, end_time.month, 30) - datetime.timedelta(seconds=1)
if end_date > now: end_date = now
year_totals[now.year - y] = _counter(start_date, end_date)
total = cls.objects.all().aggregate(sum=Sum('payment_amount'))
print "\nTotal: $%s" % total['sum']
class MGiftCode(mongo.Document):
gifting_user_id = mongo.IntField()
receiving_user_id = mongo.IntField()
gift_code = mongo.StringField(max_length=12)
duration_days = mongo.IntField()
payment_amount = mongo.IntField()
created_date = mongo.DateTimeField(default=datetime.datetime.now)
meta = {
'collection': 'gift_codes',
'allow_inheritance': False,
'indexes': ['gifting_user_id', 'receiving_user_id', 'created_date'],
}
def __unicode__(self):
return "%s gifted %s on %s: %s (redeemed %s times)" % (self.gifting_user_id, self.receiving_user_id, self.created_date, self.gift_code, self.redeemed)
@property
def redeemed(self):
redeemed_code = MRedeemedCode.objects.filter(gift_code=self.gift_code)
return len(redeemed_code)
@staticmethod
def create_code(gift_code=None):
u = unicode(uuid.uuid4())
code = u[:8] + u[9:13]
if gift_code:
code = gift_code + code[len(gift_code):]
return code
@classmethod
def add(cls, gift_code=None, duration=0, gifting_user_id=None, receiving_user_id=None, payment=0):
return cls.objects.create(gift_code=cls.create_code(gift_code),
gifting_user_id=gifting_user_id,
receiving_user_id=receiving_user_id,
duration_days=duration,
payment_amount=payment)
class MRedeemedCode(mongo.Document):
user_id = mongo.IntField()
gift_code = mongo.StringField()
redeemed_date = mongo.DateTimeField(default=datetime.datetime.now)
meta = {
'collection': 'redeemed_codes',
'allow_inheritance': False,
'indexes': ['user_id', 'gift_code', 'redeemed_date'],
}
def __unicode__(self):
return "%s redeemed %s on %s" % (self.user_id, self.gift_code, self.redeemed_date)
@classmethod
def record(cls, user_id, gift_code):
cls.objects.create(user_id=user_id,
gift_code=gift_code)
@classmethod
def redeem(cls, user, gift_code):
newsblur_gift_code = MGiftCode.objects.filter(gift_code__iexact=gift_code)
if newsblur_gift_code:
newsblur_gift_code = newsblur_gift_code[0]
PaymentHistory.objects.create(user=user,
payment_date=datetime.datetime.now(),
payment_amount=newsblur_gift_code.payment_amount,
payment_provider='newsblur-gift')
else:
# Thinkup / Good Web Bundle
PaymentHistory.objects.create(user=user,
payment_date=datetime.datetime.now(),
payment_amount=12,
payment_provider='good-web-bundle')
cls.record(user.pk, gift_code)
user.profile.activate_premium()
logging.user(user, "~FG~BBRedeeming gift code: %s~FW" % gift_code)
class RNewUserQueue:
KEY = "new_user_queue"
@classmethod
def activate_next(cls):
count = cls.user_count()
if not count:
return
user_id = cls.pop_user()
try:
user = User.objects.get(pk=user_id)
except User.DoesNotExist:
logging.debug("~FRCan't activate free account, can't find user ~SB%s~SN. ~FB%s still in queue." % (user_id, count-1))
return
logging.user(user, "~FBActivating free account (%s). %s still in queue." % (user.email, (count-1)))
user.profile.activate_free()
@classmethod
def activate_all(cls):
count = cls.user_count()
if not count:
logging.debug("~FBNo users to activate, sleeping...")
return
for i in range(count):
cls.activate_next()
@classmethod
def add_user(cls, user_id):
r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
now = time.time()
r.zadd(cls.KEY, user_id, now)
@classmethod
def user_count(cls):
r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
count = r.zcard(cls.KEY)
return count
@classmethod
def user_position(cls, user_id):
r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
position = r.zrank(cls.KEY, user_id)
if position >= 0:
return position + 1
@classmethod
def pop_user(cls):
r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
user = r.zrange(cls.KEY, 0, 0)[0]
r.zrem(cls.KEY, user)
return user
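# A minimal sketch (assumed usage, not part of the original module): enqueue a newly
# registered user, report the queue position, and let a periodic task activate
# accounts in FIFO order.
def _example_queue_new_user(user):
    RNewUserQueue.add_user(user.pk)
    position = RNewUserQueue.user_position(user.pk)
    logging.user(user, "Queued for free account activation: %s of %s" % (
                 position, RNewUserQueue.user_count()))
    RNewUserQueue.activate_next()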
| {
"content_hash": "ebe9abf61f57dcf8bb7a53c5c20f8faa",
"timestamp": "",
"source": "github",
"line_count": 1187,
"max_line_length": 163,
"avg_line_length": 45.014321819713565,
"alnum_prop": 0.5787543045366073,
"repo_name": "Suninus/NewsBlur",
"id": "5b7c0d6a0f104e1b2f8903413ad0d0ec916f4970",
"size": "53432",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "apps/profile/models.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "C",
"bytes": "4431"
},
{
"name": "C++",
"bytes": "2926"
},
{
"name": "CSS",
"bytes": "674585"
},
{
"name": "CoffeeScript",
"bytes": "6451"
},
{
"name": "HTML",
"bytes": "266332"
},
{
"name": "Java",
"bytes": "700898"
},
{
"name": "JavaScript",
"bytes": "1561448"
},
{
"name": "M",
"bytes": "47696"
},
{
"name": "Nginx",
"bytes": "897"
},
{
"name": "Objective-C",
"bytes": "3716549"
},
{
"name": "Perl",
"bytes": "55598"
},
{
"name": "Python",
"bytes": "2385592"
},
{
"name": "R",
"bytes": "527"
},
{
"name": "Ruby",
"bytes": "870"
},
{
"name": "Shell",
"bytes": "40018"
}
],
"symlink_target": ""
} |
import subprocess
import tensorflow as tf
import time
import sys
flags = tf.flags
flags.DEFINE_string("port1", "2222", "port of worker1")
flags.DEFINE_string("port2", "2223", "port of worker2")
flags.DEFINE_string("task", "", "internal use")
FLAGS = flags.FLAGS
# setup local cluster from flags
host = "127.0.0.1:"
cluster = {"worker": [host+FLAGS.port1, host+FLAGS.port2]}
clusterspec = tf.train.ClusterSpec(cluster).as_cluster_def()
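# A rough usage sketch (the file name is assumed from this repo's path and may differ):
# running the script with no --task flag forks the two worker servers as subprocesses,
# runs the computations on each, and then shuts both servers down via the shared queues.
#
#     python shutdown_server.py --port1=2222 --port2=2223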
if __name__=='__main__':
if not FLAGS.task: # start servers and run client
# launch distributed service
def runcmd(cmd): subprocess.Popen(cmd, shell=True, stderr=subprocess.STDOUT)
runcmd("python %s --task=0"%(sys.argv[0]))
runcmd("python %s --task=1"%(sys.argv[0]))
time.sleep(1)
    # connect to the servers, run a computation on each, then bring them down via the shared queues
sess = tf.Session("grpc://"+host+FLAGS.port1)
queue0 = tf.FIFOQueue(1, tf.int32, shared_name="queue0")
queue1 = tf.FIFOQueue(1, tf.int32, shared_name="queue1")
with tf.device("/job:worker/task:0"):
add_op0 = tf.add(tf.ones([2,3]), tf.ones([2,3]))
with tf.device("/job:worker/task:1"):
add_op1 = tf.add(tf.ones([2,3]), tf.ones([2,3]))
print("Running computation on server 0")
print(sess.run(add_op0))
print("Running computation on server 1")
print(sess.run(add_op1))
print("Bringing down server 0")
sess.run(queue0.enqueue(1))
print("Bringing down server 1")
sess.run(queue1.enqueue(1))
else: # Launch TensorFlow server
server = tf.train.Server(clusterspec, config=None,
job_name="worker",
task_index=int(FLAGS.task))
print("Starting server "+FLAGS.task)
sess = tf.Session(server.target)
queue = tf.FIFOQueue(1, tf.int32, shared_name="queue"+FLAGS.task)
sess.run(queue.dequeue())
print("Terminating server"+FLAGS.task) | {
"content_hash": "b76ff2fc699b47b1d7ef649e9b599eb2",
"timestamp": "",
"source": "github",
"line_count": 53,
"max_line_length": 82,
"avg_line_length": 35.75471698113208,
"alnum_prop": 0.6327176781002638,
"repo_name": "xiayuan/tensorflow-server",
"id": "873cc69d6cbee47f0eaba4250c04f87e077b6276",
"size": "1895",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "shutdown_server.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Python",
"bytes": "60223"
}
],
"symlink_target": ""
} |
"custom command to build doc.zip file"
#=============================================================================
# imports
#=============================================================================
# core
import os
from distutils import dir_util
from distutils.cmd import Command
from distutils.errors import *
from distutils.spawn import spawn
# local
__all__ = [
"docdist"
]
#=============================================================================
# command
#=============================================================================
class docdist(Command):
description = "create zip file containing standalone html docs"
user_options = [
('build-dir=', None, 'Build directory'),
('dist-dir=', 'd',
"directory to put the source distribution archive(s) in "
"[default: dist]"),
('format=', 'f',
"archive format to create (tar, ztar, gztar, zip)"),
('sign', 's', 'sign files using gpg'),
('identity=', 'i', 'GPG identity used to sign files'),
]
def initialize_options(self):
self.build_dir = None
self.dist_dir = None
self.format = None
self.keep_temp = False
self.sign = False
self.identity = None
def finalize_options(self):
if self.identity and not self.sign:
raise DistutilsOptionError(
"Must use --sign for --identity to have meaning"
)
if self.build_dir is None:
cmd = self.get_finalized_command('build')
self.build_dir = os.path.join(cmd.build_base, 'docdist')
if not self.dist_dir:
self.dist_dir = "dist"
if not self.format:
self.format = "zip"
def run(self):
# call build sphinx to build docs
self.run_command("build_sphinx")
cmd = self.get_finalized_command("build_sphinx")
source_dir = cmd.builder_target_dir
# copy to directory with appropriate name
dist = self.distribution
arc_name = "%s-docs-%s" % (dist.get_name(), dist.get_version())
tmp_dir = os.path.join(self.build_dir, arc_name)
if os.path.exists(tmp_dir):
dir_util.remove_tree(tmp_dir, dry_run=self.dry_run)
self.copy_tree(source_dir, tmp_dir, preserve_symlinks=True)
# make archive from dir
arc_base = os.path.join(self.dist_dir, arc_name)
self.arc_filename = self.make_archive(arc_base, self.format,
self.build_dir)
# Sign if requested
if self.sign:
gpg_args = ["gpg", "--detach-sign", "-a", self.arc_filename]
if self.identity:
gpg_args[2:2] = ["--local-user", self.identity]
spawn(gpg_args,
dry_run=self.dry_run)
# cleanup
if not self.keep_temp:
dir_util.remove_tree(tmp_dir, dry_run=self.dry_run)
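# A minimal sketch of how this command is typically wired up (package names assumed,
# not part of this module); the docdist class above is registered via cmdclass:
#
#     from setuptools import setup
#     from passlib._setup.docdist import docdist
#
#     setup(
#         name="passlib",
#         cmdclass={"docdist": docdist},
#     )
#
# and then invoked as: python setup.py docdist --format=zip [--sign --identity=KEYID]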
#=============================================================================
# eof
#=============================================================================
| {
"content_hash": "abd0d506dcd84206a9a005225a711f2b",
"timestamp": "",
"source": "github",
"line_count": 87,
"max_line_length": 78,
"avg_line_length": 36.689655172413794,
"alnum_prop": 0.4692982456140351,
"repo_name": "Glottotopia/aagd",
"id": "98a12c49eaf23a26636ce287d4b9c7221ccd3d4b",
"size": "3192",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "moin/local/moin/MoinMoin/support/passlib/_setup/docdist.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "ASP",
"bytes": "152885"
},
{
"name": "CSS",
"bytes": "454208"
},
{
"name": "ColdFusion",
"bytes": "438820"
},
{
"name": "HTML",
"bytes": "1998354"
},
{
"name": "Java",
"bytes": "510468"
},
{
"name": "JavaScript",
"bytes": "6505329"
},
{
"name": "Lasso",
"bytes": "72399"
},
{
"name": "Makefile",
"bytes": "10216"
},
{
"name": "PHP",
"bytes": "259528"
},
{
"name": "Perl",
"bytes": "137186"
},
{
"name": "Python",
"bytes": "13713475"
},
{
"name": "Shell",
"bytes": "346"
},
{
"name": "XSLT",
"bytes": "15970"
}
],
"symlink_target": ""
} |
"""User friendly container for Cloud Spanner Database."""
import copy
import functools
import re
import threading
from google.api_core.gapic_v1 import client_info
import google.auth.credentials
from google.protobuf.struct_pb2 import Struct
from google.cloud.exceptions import NotFound
import six
# pylint: disable=ungrouped-imports
from google.cloud.spanner_v1 import __version__
from google.cloud.spanner_v1._helpers import _make_value_pb
from google.cloud.spanner_v1._helpers import _metadata_with_prefix
from google.cloud.spanner_v1.batch import Batch
from google.cloud.spanner_v1.gapic.spanner_client import SpannerClient
from google.cloud.spanner_v1.keyset import KeySet
from google.cloud.spanner_v1.pool import BurstyPool
from google.cloud.spanner_v1.pool import SessionCheckout
from google.cloud.spanner_v1.session import Session
from google.cloud.spanner_v1.snapshot import _restart_on_unavailable
from google.cloud.spanner_v1.snapshot import Snapshot
from google.cloud.spanner_v1.streamed import StreamedResultSet
from google.cloud.spanner_v1.proto.transaction_pb2 import (
TransactionSelector, TransactionOptions)
# pylint: enable=ungrouped-imports
_CLIENT_INFO = client_info.ClientInfo(
client_library_version=__version__)
SPANNER_DATA_SCOPE = 'https://www.googleapis.com/auth/spanner.data'
_DATABASE_NAME_RE = re.compile(
r'^projects/(?P<project>[^/]+)/'
r'instances/(?P<instance_id>[a-z][-a-z0-9]*)/'
r'databases/(?P<database_id>[a-z][a-z0-9_\-]*[a-z0-9])$'
)
class Database(object):
"""Representation of a Cloud Spanner Database.
We can use a :class:`Database` to:
* :meth:`create` the database
* :meth:`reload` the database
* :meth:`update` the database
* :meth:`drop` the database
:type database_id: str
:param database_id: The ID of the database.
:type instance: :class:`~google.cloud.spanner_v1.instance.Instance`
:param instance: The instance that owns the database.
:type ddl_statements: list of string
:param ddl_statements: (Optional) DDL statements, excluding the
CREATE DATABASE statement.
:type pool: concrete subclass of
:class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`.
:param pool: (Optional) session pool to be used by database. If not
passed, the database will construct an instance of
:class:`~google.cloud.spanner_v1.pool.BurstyPool`.
"""
_spanner_api = None
def __init__(self, database_id, instance, ddl_statements=(), pool=None):
self.database_id = database_id
self._instance = instance
self._ddl_statements = _check_ddl_statements(ddl_statements)
self._local = threading.local()
if pool is None:
pool = BurstyPool()
self._pool = pool
pool.bind(self)
@classmethod
def from_pb(cls, database_pb, instance, pool=None):
"""Creates an instance of this class from a protobuf.
:type database_pb:
:class:`google.spanner.v2.spanner_instance_admin_pb2.Instance`
        :param database_pb: An instance protobuf object.
:type instance: :class:`~google.cloud.spanner_v1.instance.Instance`
:param instance: The instance that owns the database.
:type pool: concrete subclass of
:class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`.
:param pool: (Optional) session pool to be used by database.
:rtype: :class:`Database`
:returns: The database parsed from the protobuf response.
:raises ValueError:
if the instance name does not match the expected format
or if the parsed project ID does not match the project ID
on the instance's client, or if the parsed instance ID does
not match the instance's ID.
"""
match = _DATABASE_NAME_RE.match(database_pb.name)
if match is None:
raise ValueError('Database protobuf name was not in the '
'expected format.', database_pb.name)
if match.group('project') != instance._client.project:
raise ValueError('Project ID on database does not match the '
'project ID on the instance\'s client')
instance_id = match.group('instance_id')
if instance_id != instance.instance_id:
raise ValueError('Instance ID on database does not match the '
'Instance ID on the instance')
database_id = match.group('database_id')
return cls(database_id, instance, pool=pool)
@property
def name(self):
"""Database name used in requests.
.. note::
This property will not change if ``database_id`` does not, but the
return value is not cached.
The database name is of the form
``"projects/../instances/../databases/{database_id}"``
:rtype: str
:returns: The database name.
"""
return self._instance.name + '/databases/' + self.database_id
@property
def ddl_statements(self):
"""DDL Statements used to define database schema.
See
cloud.google.com/spanner/docs/data-definition-language
:rtype: sequence of string
:returns: the statements
"""
return self._ddl_statements
@property
def spanner_api(self):
"""Helper for session-related API calls."""
if self._spanner_api is None:
credentials = self._instance._client.credentials
if isinstance(credentials, google.auth.credentials.Scoped):
credentials = credentials.with_scopes((SPANNER_DATA_SCOPE,))
self._spanner_api = SpannerClient(
credentials=credentials,
client_info=_CLIENT_INFO,
)
return self._spanner_api
def __eq__(self, other):
if not isinstance(other, self.__class__):
return NotImplemented
return (other.database_id == self.database_id and
other._instance == self._instance)
def __ne__(self, other):
return not self == other
def create(self):
"""Create this database within its instance
        Includes any configured schema assigned to :attr:`ddl_statements`.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase
:rtype: :class:`~google.api_core.operation.Operation`
:returns: a future used to poll the status of the create request
:raises Conflict: if the database already exists
:raises NotFound: if the instance owning the database does not exist
"""
api = self._instance._client.database_admin_api
metadata = _metadata_with_prefix(self.name)
db_name = self.database_id
if '-' in db_name:
db_name = '`%s`' % (db_name,)
future = api.create_database(
parent=self._instance.name,
create_statement='CREATE DATABASE %s' % (db_name,),
extra_statements=list(self._ddl_statements),
metadata=metadata,
)
return future
def exists(self):
"""Test whether this database exists.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDDL
:rtype: bool
:returns: True if the database exists, else false.
"""
api = self._instance._client.database_admin_api
metadata = _metadata_with_prefix(self.name)
try:
api.get_database_ddl(self.name, metadata=metadata)
except NotFound:
return False
return True
def reload(self):
"""Reload this database.
Refresh any configured schema into :attr:`ddl_statements`.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDDL
:raises NotFound: if the database does not exist
"""
api = self._instance._client.database_admin_api
metadata = _metadata_with_prefix(self.name)
response = api.get_database_ddl(self.name, metadata=metadata)
self._ddl_statements = tuple(response.statements)
def update_ddl(self, ddl_statements):
"""Update DDL for this database.
Apply any configured schema from :attr:`ddl_statements`.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase
:type ddl_statements: Sequence[str]
:param ddl_statements: a list of DDL statements to use on this database
:rtype: :class:`google.api_core.operation.Operation`
:returns: an operation instance
:raises NotFound: if the database does not exist
"""
client = self._instance._client
api = client.database_admin_api
metadata = _metadata_with_prefix(self.name)
future = api.update_database_ddl(
self.name, ddl_statements, '', metadata=metadata)
return future
def drop(self):
"""Drop this database.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase
"""
api = self._instance._client.database_admin_api
metadata = _metadata_with_prefix(self.name)
api.drop_database(self.name, metadata=metadata)
def execute_partitioned_dml(
self, dml, params=None, param_types=None):
"""Execute a partitionable DML statement.
:type dml: str
:param dml: DML statement
:type params: dict, {str -> column value}
:param params: values for parameter replacement. Keys must match
the names used in ``dml``.
:type param_types: dict[str -> Union[dict, .types.Type]]
:param param_types:
(Optional) maps explicit types for one or more param values;
required if parameters are passed.
:rtype: int
:returns: Count of rows affected by the DML statement.
"""
if params is not None:
if param_types is None:
raise ValueError(
"Specify 'param_types' when passing 'params'.")
params_pb = Struct(fields={
key: _make_value_pb(value) for key, value in params.items()})
else:
params_pb = None
api = self.spanner_api
txn_options = TransactionOptions(
partitioned_dml=TransactionOptions.PartitionedDml())
metadata = _metadata_with_prefix(self.name)
with SessionCheckout(self._pool) as session:
txn = api.begin_transaction(
session.name, txn_options, metadata=metadata)
txn_selector = TransactionSelector(id=txn.id)
restart = functools.partial(
api.execute_streaming_sql,
session.name,
dml,
transaction=txn_selector,
params=params_pb,
param_types=param_types,
metadata=metadata)
iterator = _restart_on_unavailable(restart)
result_set = StreamedResultSet(iterator)
list(result_set) # consume all partials
return result_set.stats.row_count_lower_bound
def session(self, labels=None):
"""Factory to create a session for this database.
:type labels: dict (str -> str) or None
:param labels: (Optional) user-assigned labels for the session.
:rtype: :class:`~google.cloud.spanner_v1.session.Session`
:returns: a session bound to this database.
"""
return Session(self, labels=labels)
def snapshot(self, **kw):
"""Return an object which wraps a snapshot.
The wrapper *must* be used as a context manager, with the snapshot
as the value returned by the wrapper.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.v1#google.spanner.v1.TransactionOptions.ReadOnly
:type kw: dict
:param kw:
Passed through to
:class:`~google.cloud.spanner_v1.snapshot.Snapshot` constructor.
:rtype: :class:`~google.cloud.spanner_v1.database.SnapshotCheckout`
:returns: new wrapper
"""
return SnapshotCheckout(self, **kw)
def batch(self):
"""Return an object which wraps a batch.
The wrapper *must* be used as a context manager, with the batch
as the value returned by the wrapper.
:rtype: :class:`~google.cloud.spanner_v1.database.BatchCheckout`
:returns: new wrapper
"""
return BatchCheckout(self)
def batch_snapshot(self, read_timestamp=None, exact_staleness=None):
"""Return an object which wraps a batch read / query.
:type read_timestamp: :class:`datetime.datetime`
:param read_timestamp: Execute all reads at the given timestamp.
:type exact_staleness: :class:`datetime.timedelta`
:param exact_staleness: Execute all reads at a timestamp that is
``exact_staleness`` old.
:rtype: :class:`~google.cloud.spanner_v1.database.BatchSnapshot`
:returns: new wrapper
"""
return BatchSnapshot(
self,
read_timestamp=read_timestamp,
exact_staleness=exact_staleness,
)
def run_in_transaction(self, func, *args, **kw):
"""Perform a unit of work in a transaction, retrying on abort.
:type func: callable
:param func: takes a required positional argument, the transaction,
and additional positional / keyword arguments as supplied
by the caller.
:type args: tuple
:param args: additional positional arguments to be passed to ``func``.
:type kw: dict
:param kw: optional keyword arguments to be passed to ``func``.
If passed, "timeout_secs" will be removed and used to
override the default timeout.
:rtype: :class:`datetime.datetime`
:returns: timestamp of committed transaction
"""
# Sanity check: Is there a transaction already running?
# If there is, then raise a red flag. Otherwise, mark that this one
# is running.
if getattr(self._local, 'transaction_running', False):
raise RuntimeError('Spanner does not support nested transactions.')
self._local.transaction_running = True
# Check out a session and run the function in a transaction; once
# done, flip the sanity check bit back.
try:
with SessionCheckout(self._pool) as session:
return session.run_in_transaction(func, *args, **kw)
finally:
self._local.transaction_running = False
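# A minimal usage sketch (table and column names assumed, not part of the original
# module): Database.run_in_transaction retries the supplied callable if the
# transaction aborts, while execute_partitioned_dml issues a partitionable statement
# outside of a regular read-write transaction.
def _example_database_writes(database):
    def _insert_scores(transaction):
        transaction.insert(
            table='players',
            columns=('player_id', 'score'),
            values=[(1, 100), (2, 250)],
        )
    database.run_in_transaction(_insert_scores)
    # Partitioned DML suits large-scale, idempotent updates.
    return database.execute_partitioned_dml(
        "UPDATE players SET score = 0 WHERE true")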
class BatchCheckout(object):
"""Context manager for using a batch from a database.
Inside the context manager, checks out a session from the database,
creates a batch from it, making the batch available.
Caller must *not* use the batch to perform API requests outside the scope
of the context manager.
:type database: :class:`~google.cloud.spanner.database.Database`
:param database: database to use
"""
def __init__(self, database):
self._database = database
self._session = self._batch = None
def __enter__(self):
"""Begin ``with`` block."""
session = self._session = self._database._pool.get()
batch = self._batch = Batch(session)
return batch
def __exit__(self, exc_type, exc_val, exc_tb):
"""End ``with`` block."""
try:
if exc_type is None:
self._batch.commit()
finally:
self._database._pool.put(self._session)
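# A minimal usage sketch (table and column names assumed): BatchCheckout is normally
# obtained via Database.batch() and used as a context manager; the queued mutations
# are committed on a clean exit from the "with" block.
def _example_batch_insert(database):
    with database.batch() as batch:
        batch.insert(
            table='players',
            columns=('player_id', 'score'),
            values=[(3, 175)],
        )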
class SnapshotCheckout(object):
"""Context manager for using a snapshot from a database.
Inside the context manager, checks out a session from the database,
creates a snapshot from it, making the snapshot available.
Caller must *not* use the snapshot to perform API requests outside the
scope of the context manager.
:type database: :class:`~google.cloud.spanner.database.Database`
:param database: database to use
:type kw: dict
:param kw:
Passed through to
:class:`~google.cloud.spanner_v1.snapshot.Snapshot` constructor.
"""
def __init__(self, database, **kw):
self._database = database
self._session = None
self._kw = kw
def __enter__(self):
"""Begin ``with`` block."""
session = self._session = self._database._pool.get()
return Snapshot(session, **self._kw)
def __exit__(self, exc_type, exc_val, exc_tb):
"""End ``with`` block."""
self._database._pool.put(self._session)
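# A minimal usage sketch (SQL and table name assumed): SnapshotCheckout is obtained
# via Database.snapshot() and yields a snapshot for reads at a single timestamp.
def _example_snapshot_query(database):
    with database.snapshot() as snapshot:
        results = snapshot.execute_sql('SELECT player_id, score FROM players')
        return list(results)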
class BatchSnapshot(object):
"""Wrapper for generating and processing read / query batches.
:type database: :class:`~google.cloud.spanner.database.Database`
:param database: database to use
:type read_timestamp: :class:`datetime.datetime`
:param read_timestamp: Execute all reads at the given timestamp.
:type exact_staleness: :class:`datetime.timedelta`
:param exact_staleness: Execute all reads at a timestamp that is
``exact_staleness`` old.
"""
def __init__(self, database, read_timestamp=None, exact_staleness=None):
self._database = database
self._session = None
self._snapshot = None
self._read_timestamp = read_timestamp
self._exact_staleness = exact_staleness
@classmethod
def from_dict(cls, database, mapping):
"""Reconstruct an instance from a mapping.
:type database: :class:`~google.cloud.spanner.database.Database`
:param database: database to use
:type mapping: mapping
:param mapping: serialized state of the instance
:rtype: :class:`BatchSnapshot`
"""
instance = cls(database)
session = instance._session = database.session()
session._session_id = mapping['session_id']
snapshot = instance._snapshot = session.snapshot()
snapshot._transaction_id = mapping['transaction_id']
return instance
def to_dict(self):
"""Return state as a dictionary.
Result can be used to serialize the instance and reconstitute
it later using :meth:`from_dict`.
:rtype: dict
"""
session = self._get_session()
snapshot = self._get_snapshot()
return {
'session_id': session._session_id,
'transaction_id': snapshot._transaction_id,
}
def _get_session(self):
"""Create session as needed.
.. note::
Caller is responsible for cleaning up the session after
all partitions have been processed.
"""
if self._session is None:
session = self._session = self._database.session()
session.create()
return self._session
def _get_snapshot(self):
"""Create snapshot if needed."""
if self._snapshot is None:
self._snapshot = self._get_session().snapshot(
read_timestamp=self._read_timestamp,
exact_staleness=self._exact_staleness,
multi_use=True)
self._snapshot.begin()
return self._snapshot
def read(self, *args, **kw):
"""Convenience method: perform read operation via snapshot.
See :meth:`~google.cloud.spanner_v1.snapshot.Snapshot.read`.
"""
return self._get_snapshot().read(*args, **kw)
def execute_sql(self, *args, **kw):
"""Convenience method: perform query operation via snapshot.
See :meth:`~google.cloud.spanner_v1.snapshot.Snapshot.execute_sql`.
"""
return self._get_snapshot().execute_sql(*args, **kw)
def generate_read_batches(
self, table, columns, keyset,
index='', partition_size_bytes=None, max_partitions=None):
"""Start a partitioned batch read operation.
Uses the ``PartitionRead`` API request to initiate the partitioned
read. Returns a list of batch information needed to perform the
actual reads.
:type table: str
:param table: name of the table from which to fetch data
:type columns: list of str
:param columns: names of columns to be retrieved
:type keyset: :class:`~google.cloud.spanner_v1.keyset.KeySet`
:param keyset: keys / ranges identifying rows to be retrieved
:type index: str
:param index: (Optional) name of index to use, rather than the
table's primary key
:type partition_size_bytes: int
:param partition_size_bytes:
(Optional) desired size for each partition generated. The service
uses this as a hint, the actual partition size may differ.
:type max_partitions: int
:param max_partitions:
(Optional) desired maximum number of partitions generated. The
service uses this as a hint, the actual number of partitions may
differ.
:rtype: iterable of dict
:returns:
            mappings of information used to perform actual partitioned reads via
            :meth:`process_read_batch`.
"""
partitions = self._get_snapshot().partition_read(
table=table, columns=columns, keyset=keyset, index=index,
partition_size_bytes=partition_size_bytes,
max_partitions=max_partitions)
read_info = {
'table': table,
'columns': columns,
'keyset': keyset._to_dict(),
'index': index,
}
for partition in partitions:
yield {'partition': partition, 'read': read_info.copy()}
def process_read_batch(self, batch):
"""Process a single, partitioned read.
:type batch: mapping
:param batch:
one of the mappings returned from an earlier call to
:meth:`generate_read_batches`.
:rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet`
:returns: a result set instance which can be used to consume rows.
"""
kwargs = copy.deepcopy(batch['read'])
keyset_dict = kwargs.pop('keyset')
kwargs['keyset'] = KeySet._from_dict(keyset_dict)
return self._get_snapshot().read(
partition=batch['partition'], **kwargs)
def generate_query_batches(
self, sql, params=None, param_types=None,
partition_size_bytes=None, max_partitions=None):
"""Start a partitioned query operation.
Uses the ``PartitionQuery`` API request to start a partitioned
query operation. Returns a list of batch information needed to
        perform the actual queries.
:type sql: str
:param sql: SQL query statement
:type params: dict, {str -> column value}
:param params: values for parameter replacement. Keys must match
the names used in ``sql``.
:type param_types: dict[str -> Union[dict, .types.Type]]
:param param_types:
(Optional) maps explicit types for one or more param values;
required if parameters are passed.
:type partition_size_bytes: int
:param partition_size_bytes:
(Optional) desired size for each partition generated. The service
uses this as a hint, the actual partition size may differ.
:type max_partitions: int
:param max_partitions:
(Optional) desired maximum number of partitions generated. The
service uses this as a hint, the actual number of partitions may
differ.
:rtype: iterable of dict
:returns:
            mappings of information used to perform actual partitioned queries via
            :meth:`process_query_batch`.
"""
partitions = self._get_snapshot().partition_query(
sql=sql, params=params, param_types=param_types,
partition_size_bytes=partition_size_bytes,
max_partitions=max_partitions)
query_info = {'sql': sql}
if params:
query_info['params'] = params
query_info['param_types'] = param_types
for partition in partitions:
yield {'partition': partition, 'query': query_info}
def process_query_batch(self, batch):
"""Process a single, partitioned query.
:type batch: mapping
:param batch:
one of the mappings returned from an earlier call to
:meth:`generate_query_batches`.
:rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet`
:returns: a result set instance which can be used to consume rows.
"""
return self._get_snapshot().execute_sql(
partition=batch['partition'], **batch['query'])
def process(self, batch):
"""Process a single, partitioned query or read.
:type batch: mapping
:param batch:
one of the mappings returned from an earlier call to
:meth:`generate_query_batches`.
:rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet`
:returns: a result set instance which can be used to consume rows.
:raises ValueError: if batch does not contain either 'read' or 'query'
"""
if 'query' in batch:
return self.process_query_batch(batch)
if 'read' in batch:
return self.process_read_batch(batch)
raise ValueError("Invalid batch")
def close(self):
"""Clean up underlying session.
.. note::
If the transaction has been shared across multiple machines,
calling this on any machine would invalidate the transaction
everywhere. Ideally this would be called when data has been read
from all the partitions.
"""
if self._session is not None:
self._session.delete()
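# A minimal sketch (table and column names assumed): generate partitioned read batches
# on one machine, process each batch (possibly elsewhere, after round-tripping the
# snapshot through to_dict()/from_dict()), then clean up the shared session.
def _example_batch_read(database):
    snapshot = database.batch_snapshot()
    try:
        for batch in snapshot.generate_read_batches(
                table='players',
                columns=('player_id', 'score'),
                keyset=KeySet(all_=True)):
            for row in snapshot.process(batch):
                pass  # consume each row
    finally:
        snapshot.close()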
def _check_ddl_statements(value):
"""Validate DDL Statements used to define database schema.
See
https://cloud.google.com/spanner/docs/data-definition-language
:type value: list of string
    :param value: DDL statements, excluding the 'CREATE DATABASE' statement
:rtype: tuple
:returns: tuple of validated DDL statement strings.
:raises ValueError:
if elements in ``value`` are not strings, or if ``value`` contains
a ``CREATE DATABASE`` statement.
"""
if not all(isinstance(line, six.string_types) for line in value):
raise ValueError("Pass a list of strings")
if any('create database' in line.lower() for line in value):
raise ValueError("Do not pass a 'CREATE DATABASE' statement")
return tuple(value)
| {
"content_hash": "d55720ba9a51148dca5503d35af2eca8",
"timestamp": "",
"source": "github",
"line_count": 766,
"max_line_length": 149,
"avg_line_length": 35.907310704960835,
"alnum_prop": 0.6230139974550082,
"repo_name": "jonparrott/google-cloud-python",
"id": "6fb367d3ab87178dd64aaac5ed6b498c33e943e0",
"size": "28101",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "spanner/google/cloud/spanner_v1/database.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Batchfile",
"bytes": "3366"
},
{
"name": "PowerShell",
"bytes": "7195"
},
{
"name": "Protocol Buffer",
"bytes": "62009"
},
{
"name": "Python",
"bytes": "3459300"
},
{
"name": "Shell",
"bytes": "7548"
}
],
"symlink_target": ""
} |
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import six
import logging
import numpy as np
from .utils import SynHandlerMod, SynHandlerEcho
import uuid
import pytest
logger = logging.getLogger(__name__)
def test_context():
with SynHandlerMod('', (4, 2)) as hand:
for j in range(1, 5):
assert np.all(hand(j) < j)
def test_register_fail(fs):
fsa = fs
with fsa.handler_context({'syn-mod': SynHandlerMod}):
        # shouldn't raise, it is a no-op as it is registering
        # the same class with the same name
fsa.register_handler('syn-mod', SynHandlerMod)
# should raise as it is trying to change the registered class
with pytest.raises(RuntimeError):
fsa.register_handler('syn-mod', SynHandlerEcho)
def test_context_manager_replace(fs):
fsa = fs
# nuke anything already registered, just to be safe.
while len(fs.handler_reg):
for k in list(fs.handler_reg):
fs.deregister_handler(k)
# check syn-mod not in the registry
assert 'syn-mod' not in fsa.handler_reg
# put syn-mod in with context manager
with fsa.handler_context({'syn-mod': SynHandlerMod}):
# check that it is the version we expect
assert fsa.handler_reg['syn-mod'] is SynHandlerMod
# over-ride syn-mod with a second context manager
with fsa.handler_context({'syn-mod': SynHandlerEcho}):
# check that we get the second one
assert fsa.handler_reg['syn-mod'] is SynHandlerEcho
# and that it correctly rollsback to first value
assert fsa.handler_reg['syn-mod'] is SynHandlerMod
# and is empty again when we are done.
assert 'syn-mod' not in fsa.handler_reg
def test_deregister(fs):
test_reg = fs.handler_reg
test_spec_name = str(uuid.uuid4())
fs.register_handler(test_spec_name, SynHandlerMod)
assert test_reg[test_spec_name] is SynHandlerMod
fs.deregister_handler(test_spec_name)
assert test_spec_name not in test_reg
| {
"content_hash": "270b7a5f92d7cb9d241684c6105004f5",
"timestamp": "",
"source": "github",
"line_count": 65,
"max_line_length": 69,
"avg_line_length": 31.953846153846154,
"alnum_prop": 0.6605681271064034,
"repo_name": "ericdill/databroker",
"id": "b8ca49e3715fc5ae07de14e152f69c3d10f2e0da",
"size": "4560",
"binary": false,
"copies": "3",
"ref": "refs/heads/master",
"path": "databroker/assets/tests/test_retrieve.py",
"mode": "33188",
"license": "bsd-3-clause",
"language": [
{
"name": "Python",
"bytes": "171404"
},
{
"name": "Shell",
"bytes": "38"
}
],
"symlink_target": ""
} |
from spotally.spotally-bot import * | {
"content_hash": "f429e4b36047d59e9818d685d6322e24",
"timestamp": "",
"source": "github",
"line_count": 1,
"max_line_length": 35,
"avg_line_length": 35,
"alnum_prop": 0.8285714285714286,
"repo_name": "TheChugnut/spotally",
"id": "f1e0311a339274b892fc61fa0cb1a839bb678fe3",
"size": "35",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "src/spotally/__init__.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "5788"
}
],
"symlink_target": ""
} |
import rospy
import tf
from static_transform_manager.srv import ( SetTransformation, SetTransformationResponse,
StopTransformation, StopTransformationResponse
)
class StaticTransformationsManager(object):
def __init__(self, freq):
self._tf_broadcaster = tf.TransformBroadcaster()
self._rater = rospy.Rate(freq)
self._running = True
self._transforms = {} # Indexed by child name
# set up services
        self._start_srv = rospy.Service(rospy.get_name()+"/set_tf",
                                        SetTransformation,
                                        self._start_transform_srv)
        self._stop_srv = rospy.Service(rospy.get_name()+"/stop_tf",
                                       StopTransformation,
                                       self._stop_transform_srv)
pass
def _start_transform_srv(self, req):
resp = SetTransformationResponse()
self._transforms[req.transform.child_frame_id] = req.transform
resp.is_ok = True
resp.response = "Frame added"
return resp
    def _stop_transform_srv(self, req):
resp = StopTransformationResponse()
        if req.child_frame_id in self._transforms:
self._transforms.pop(req.child_frame_id)
resp.response = "ok"
resp.is_ok = True
else:
resp.response = "frame not known by this managerx"
resp.is_ok = False
return resp
def start(self):
while self._running and not rospy.is_shutdown():
self._rater.sleep()
for transform in self._transforms.values():
self._tf_broadcaster.sendTransform((transform.transform.translation.x,
transform.transform.translation.y,
transform.transform.translation.z),
(transform.transform.rotation.x,
transform.transform.rotation.y,
transform.transform.rotation.z,
transform.transform.rotation.w),
rospy.Time.now(),
transform.child_frame_id,
transform.header.frame_id)
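# A minimal client sketch (assumed usage, not part of the original node): ask the
# manager to start broadcasting a static transform via its set_tf service. Run this
# from a separate node (after rospy.init_node) while the manager above is running.
def example_set_static_tf():
    from geometry_msgs.msg import TransformStamped
    rospy.wait_for_service('/static_transforms_manager/set_tf')
    set_tf = rospy.ServiceProxy('/static_transforms_manager/set_tf', SetTransformation)
    transform = TransformStamped()
    transform.header.frame_id = 'map'
    transform.child_frame_id = 'my_static_frame'
    transform.transform.rotation.w = 1.0  # identity rotation
    response = set_tf(transform)
    rospy.loginfo(response.response)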
if __name__ == '__main__':
''' Main Program '''
rospy.init_node("static_transforms_manager")
manager = StaticTransformationsManager(30)
manager.start()
| {
"content_hash": "7074798a90f3f13fea6ff04570929bd1",
"timestamp": "",
"source": "github",
"line_count": 62,
"max_line_length": 89,
"avg_line_length": 44.306451612903224,
"alnum_prop": 0.48380050964688753,
"repo_name": "strands-project/strands_apps",
"id": "d06b3c2c742af4ae84e47eefa6f76a2c2fe6c404",
"size": "2769",
"binary": false,
"copies": "3",
"ref": "refs/heads/indigo-devel",
"path": "static_transform_manager/scripts/static_tf_services.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "C++",
"bytes": "32677"
},
{
"name": "CMake",
"bytes": "24963"
},
{
"name": "Python",
"bytes": "122957"
},
{
"name": "Shell",
"bytes": "649"
}
],
"symlink_target": ""
} |
"""
A contextualization script required by CloudMan (http://cloudman.irb.hr/).
The script is automatically run at instance startup (via an `upstart` job;
see `/etc/init/cloudman.conf`).
Requires:
PyYAML http://pyyaml.org/wiki/PyYAMLDocumentation (easy_install pyyaml)
boto http://code.google.com/p/boto/ (easy_install boto)
Assumptions:
`DEFAULT_BUCKET_NAME` and `DEFAULT_BOOT_SCRIPT_NAME` are publicly accessible
and do not require any form of authentication.
"""
import hashlib
import logging
import os
import random
import subprocess
import sys
import time
import urllib2
import yaml
from urlparse import urlparse
from boto.s3.key import Key
from boto.s3.connection import S3Connection
from boto.exception import S3ResponseError
from boto.s3.connection import OrdinaryCallingFormat
logging.getLogger('boto').setLevel(logging.INFO) # Only log boto messages >=INFO
log = None
USER_DATA_URL = 'http://169.254.169.254/latest/user-data'
# Local path destination used for storing/reading any files created herein
LOCAL_PATH = '/tmp/cm'
# Local file with user data (UD) formatted by this script
USER_DATA_FILE_NAME = 'userData.yaml'
# The final/processed UD file
USER_DATA_FILE = os.path.join(LOCAL_PATH, USER_DATA_FILE_NAME)
# Local file containing UD in its original format
USER_DATA_ORIG = os.path.join(LOCAL_PATH, 'original_%s' % USER_DATA_FILE_NAME)
# Object store root URL for retrieving CloudMan-related files
SERVICE_ROOT = 'http://s3.amazonaws.com/'
# The default bucket name used for retrieving the required files from the object
# store. Ensure this bucket is accessible to anyone!
DEFAULT_BUCKET_NAME = 'cloudman'
# Name of the boot script that will be invoked to continue the boot process
# Ensure this file is accessible to anyone in the public bucket!
DEFAULT_BOOT_SCRIPT_NAME = 'cm_boot.py'
CLOUDMAN_HOME = '/mnt/cm'
# ====================== Utility methods ======================
def _setup_logging():
# Logging setup
formatter = logging.Formatter("[%(levelname)s] %(module)s:%(lineno)d %(asctime)s: %(message)s")
console = logging.StreamHandler() # log to console - used during testing
# console.setLevel(logging.INFO) # accepts >INFO levels
console.setFormatter(formatter)
# log_file = logging.FileHandler(os.path.join(LOCAL_PATH, "%s.log" % os.path.splitext(sys.argv[0])[0]), 'w')
# log_file.setLevel(logging.DEBUG) # accepts all levels
# log_file.setFormatter(formatter)
log = logging.root
log.addHandler(console)
# log.addHandler(log_file)
log.setLevel(logging.DEBUG)
return log
def _get_user_data():
ud = ''
for i in range(0, 5):
try:
log.info("Getting user data from '%s', attempt %s" % (USER_DATA_URL, i))
fp = urllib2.urlopen(USER_DATA_URL)
ud = fp.read()
fp.close()
log.debug("Saving user data in its original format to file '%s'" % USER_DATA_ORIG)
with open(USER_DATA_ORIG, 'w') as ud_orig:
ud_orig.write(ud)
if ud:
log.debug("Got user data")
return ud
except IOError:
log.info("User data not found. Setting it to empty.")
return ''
# Used for testing
# return 'http://s3.amazonaws.com/cloudman/cm_boot'
# return ''
# return "gc_dev1|<account_key>|<secret_key>|somePWD"
# with open('sample.yaml') as ud_yaml:
# ud = ud_yaml.read()
if ud == '':
log.debug("Received empty/no user data")
return ud
def _get_bucket_name(cluster_name, access_key):
"""Compose bucket name based on the user-provided cluster name and user access key"""
m = hashlib.md5()
m.update(cluster_name + access_key)
return "cm-" + m.hexdigest()
def _isurl(path):
"""Test if path is a net location. Tests the scheme and netloc."""
# BUG : URLs require a scheme string ('http://') to be used.
# www.google.com will fail.
# Should we prepend the scheme for those that don't have it and
# test that also?
scheme, netloc, upath, uparams, uquery, ufrag = urlparse(path)
return bool(scheme and netloc)
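# A couple of illustrative cases for the check above:
#   _isurl('http://s3.amazonaws.com/cloudman/cm_boot.py')  -> True
#   _isurl('www.google.com')                               -> False (no scheme)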
def _get_s3_conn(ud):
try:
if 'cloud_type' in ud and ud['cloud_type'] != 'ec2':
# If the user has specified a cloud type other than EC2,
# create an s3 connection using the info from their user data
log.debug('Establishing boto S3 connection to a custom Object Store')
try:
s3_conn = S3Connection(aws_access_key_id=ud['access_key'],
aws_secret_access_key=ud['secret_key'],
is_secure=ud.get('is_secure', True),
host=ud.get('s3_host', ''),
port=ud.get('s3_port', 8888),
calling_format=OrdinaryCallingFormat(),
path=ud.get('s3_conn_path', '/'))
except S3ResponseError, e:
log.error("Trouble connecting to a custom Object Store. User data: {0}; Exception: {1}"
.format(ud, e))
else:
# Use the default Amazon S3 connection
log.debug('Establishing boto S3 connection to Amazon')
s3_conn = S3Connection(ud['access_key'], ud['secret_key'])
except Exception, e:
log.error("Exception getting S3 connection: %s" % e)
return None
return s3_conn
def _bucket_exists(s3_conn, bucket_name):
bucket = None
for i in range(1, 6):
try:
# log.debug("Looking for bucket '%s'" % bucket_name)
bucket = s3_conn.lookup(bucket_name)
break
except S3ResponseError:
log.error("Bucket '%s' not found, attempt %s/5" % (bucket_name, i+1))
time.sleep(2)
if bucket is not None:
log.debug("Cluster bucket '%s' found." % bucket_name)
return True
else:
log.debug("Cluster bucket '%s' not found." % bucket_name)
return False
def _remote_file_exists(s3_conn, bucket_name, remote_filename):
b = None
for i in range(0, 5):
try:
b = s3_conn.get_bucket(bucket_name)
break
except S3ResponseError:
log.error("Problem connecting to bucket '%s', attempt %s/5" % (bucket_name, i))
time.sleep(2)
if b is not None:
k = Key(b, remote_filename)
if k.exists():
return True
return False
def _save_file_to_bucket(s3_conn, bucket_name, remote_filename, local_file, force=False):
local_file = os.path.join(LOCAL_PATH, local_file)
# log.debug( "Establishing handle with bucket '%s'..." % bucket_name)
b = None
for i in range(0, 5):
try:
b = s3_conn.get_bucket(bucket_name)
break
except S3ResponseError, e:
log.error("Problem connecting to bucket '%s', attempt %s/5" % (bucket_name, i))
time.sleep(2)
if b is not None:
# log.debug("Establishing handle with key object '%s'..." % remote_filename)
k = Key(b, remote_filename)
if k.exists() and not force:
log.debug("Remote file '%s' already exists. Not overwriting it." % remote_filename)
return True
log.debug("Attempting to save local file '%s' to bucket '%s' as '%s'"
% (local_file, bucket_name, remote_filename))
try:
k.set_contents_from_filename(local_file)
log.info("Successfully saved file '%s' to bucket '%s'." % (remote_filename, bucket_name))
return True
except S3ResponseError, e:
log.error("Failed to save file local file '%s' to bucket '%s' as file '%s': %s"
% (local_file, bucket_name, remote_filename, e))
return False
else:
return False
def _get_file_from_bucket(s3_conn, bucket_name, remote_filename, local_filename):
local_filename = os.path.join(LOCAL_PATH, local_filename)
try:
# log.debug("Establishing handle with bucket '%s'" % bucket_name)
b = s3_conn.get_bucket(bucket_name)
# log.debug("Establishing handle with file object '%s'" % remote_filename)
k = Key(b, remote_filename)
log.debug("Attempting to retrieve file '%s' from bucket '%s'" % (remote_filename, bucket_name))
if k.exists():
k.get_contents_to_filename(local_filename)
log.info("Successfully retrieved file '%s' from bucket '%s' to '%s'."
% (remote_filename, bucket_name, local_filename))
return True
else:
log.error("File '%s' in bucket '%s' not found." % (remote_filename, bucket_name))
return False
except S3ResponseError, e:
log.error("Failed to get file '%s' from bucket '%s': %s" % (remote_filename, bucket_name, e))
return False
def _get_file_from_url(url):
local_filename = os.path.join(LOCAL_PATH, os.path.split(url)[1])
log.info("Getting boot script from '%s' and saving it locally to '%s'" % (url, local_filename))
try:
f = urllib2.urlopen(url)
with open(local_filename, 'w') as local_file:
local_file.write(f.read())
os.chmod(local_filename, 0744)
if f:
log.debug("Got boot script from '%s'" % url)
return True
return False
except IOError:
log.error("Boot script at '%s' not found." % url)
return False
def _get_boot_script(ud):
# Test if cluster bucket exists; if it does not, resort to the default
# bucket for downloading the boot script
use_default_bucket = ud.get("use_default_bucket", False)
if 'bucket_default' in ud:
default_bucket_name = ud['bucket_default']
else:
default_bucket_name = DEFAULT_BUCKET_NAME
    if not use_default_bucket and 'bucket_cluster' in ud and \
ud['access_key'] is not None and ud['secret_key'] is not None:
s3_conn = _get_s3_conn(ud)
# Check if cluster bucket exists or use the default one
if not _bucket_exists(s3_conn, ud['bucket_cluster']) or \
not _remote_file_exists(s3_conn, ud['bucket_cluster'], ud['boot_script_name']):
log.debug("Using default bucket '%s'" % default_bucket_name)
use_default_bucket = True
else:
log.debug("Using cluster bucket '%s'" % ud['bucket_cluster'])
use_default_bucket = False
else:
log.debug("bucket_cluster not specified or no credentials provided; defaulting to bucket '%s'"
% default_bucket_name)
use_default_bucket = True
# If using cluster bucket, use credentials because the boot script may not be accessible to everyone
got_boot_script = False
if use_default_bucket is False:
log.debug("Trying to get boot script '%s' from cluster bucket '%s'"
% (ud['boot_script_name'], ud.get('bucket_cluster', None)))
got_boot_script = _get_file_from_bucket(s3_conn, ud['bucket_cluster'],
ud['boot_script_name'],
DEFAULT_BOOT_SCRIPT_NAME)
if got_boot_script:
os.chmod(os.path.join(LOCAL_PATH, DEFAULT_BOOT_SCRIPT_NAME), 0744)
# If did not get the boot script, fall back on the publicly available one
if not got_boot_script or use_default_bucket:
boot_script_url = os.path.join(_get_default_bucket_url(ud), ud.get('boot_script_name',
DEFAULT_BOOT_SCRIPT_NAME))
log.debug("Could not get boot script '%s' from cluster bucket '%s'; "
"retrieving the public one from bucket url '%s'"
% (ud['boot_script_name'], ud.get('bucket_cluster', None), boot_script_url))
got_boot_script = _get_file_from_url(boot_script_url)
if got_boot_script:
log.debug("Saved boot script to '%s'" % os.path.join(LOCAL_PATH, DEFAULT_BOOT_SCRIPT_NAME))
# Save the downloaded boot script to cluster bucket for future invocations
use_object_store = ud.get("use_object_store", True)
if use_object_store and 'bucket_cluster' in ud and ud['bucket_cluster']:
s3_conn = _get_s3_conn(ud)
if _bucket_exists(s3_conn, ud['bucket_cluster']) and \
not _remote_file_exists(s3_conn, ud['bucket_cluster'], ud['boot_script_name']):
_save_file_to_bucket(s3_conn, ud['bucket_cluster'], ud['boot_script_name'],
DEFAULT_BOOT_SCRIPT_NAME)
return True
log.debug("**Could not get the boot script**")
return False
def _run_boot_script(boot_script_name):
script = os.path.join(LOCAL_PATH, boot_script_name)
log.info("Running boot script '%s'" % script)
process = subprocess.Popen(script, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
log.debug("Successfully ran boot script '%s'" % script)
return True
else:
log.error("Error running boot script '%s'. Process returned code '%s' and following stderr: %s"
% (script, process.returncode, stderr))
return False
def _create_basic_user_data_file():
# Create a basic YAML file that is expected by CloudMan
with open(USER_DATA_FILE, 'w') as ud_file:
ud_formatted = {'access_key': None,
'boot_script_name': DEFAULT_BOOT_SCRIPT_NAME,
'boot_script_path': LOCAL_PATH,
'bucket_default': DEFAULT_BUCKET_NAME,
'bucket_cluster': None,
'cloudman_home': CLOUDMAN_HOME,
'cluster_name': 'aGalaxyCloudManCluster_%s' % random.randrange(1, 9999999),
'role': 'master',
'secret_key': None}
yaml.dump(ud_formatted, ud_file, default_flow_style=False)
return ud_formatted
def _get_default_bucket_url(ud=None):
if ud and 'bucket_default' in ud:
default_bucket_name = ud['bucket_default']
else:
default_bucket_name = DEFAULT_BUCKET_NAME
    # TODO: Check if the bucket 'default_bucket_name' is accessible to everyone
# because it is being accessed as a URL
if ud:
bucket_url = ud.get("default_bucket_url", None)
else:
bucket_url = None
if not bucket_url:
bucket_url = os.path.join(SERVICE_ROOT, default_bucket_name)
log.debug("Default bucket url: %s" % bucket_url)
return bucket_url
def _user_exists(username):
""" Check if the given username exists as a system user
"""
with open('/etc/passwd', 'r') as f:
ep = f.read()
return ep.find(username) > 0
def _allow_password_logins(passwd):
for user in ["ubuntu", "galaxy"]:
if _user_exists(user):
log.info("Setting up password-based login for user '{0}'".format(user))
p1 = subprocess.Popen(["echo", "%s:%s" % (user, passwd)], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["chpasswd"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
p2.communicate()[0]
cl = ["sed", "-i", "s/^PasswordAuthentication .*/PasswordAuthentication yes/",
"/etc/ssh/sshd_config"]
subprocess.check_call(cl)
cl = ["/usr/sbin/service", "ssh", "reload"]
subprocess.check_call(cl)
def _handle_freenx(passwd):
# Check if FreeNX is installed on the image before trying to configure it
cl = "/usr/bin/dpkg --get-selections | /bin/grep freenx"
retcode = subprocess.call(cl, shell=True)
if retcode == 0:
log.info("Setting up FreeNX")
cl = ["dpkg-reconfigure", "-pcritical", "freenx-server"]
# On slower/small instance types, there can be a conflict when running
# debconf so try this a few times
for i in range(5):
retcode = subprocess.call(cl)
if retcode == 0:
break
else:
time.sleep(5)
else:
log.info("freenx-server is not installed; not configuring it")
def _run(cmd):
if not cmd:
log.error("Trying to run an empty command? '{0}'".format(cmd))
return False
try:
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
log.debug("Successfully ran '%s'" % cmd)
if stdout:
return stdout
else:
return True
else:
log.error("Error running '%s'. Process returned code '%s' and following stderr: '%s'"
% (cmd, process.returncode, stderr))
return False
except Exception, e:
log.error("Exception running '%s': '%s'" % (cmd, e))
return False
def _ensure_ephemeral_disk_mounted():
"""
Make sure `/mnt` is a mounted device vs. just being part of `/`.
At least some AWS instance types (e.g., r3) do not auto-mount what's in
    `/etc/fstab` so make sure the ephemeral disks are in fact mounted.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreTrimSupport
"""
if not _run('mountpoint -q /mnt'):
        device = '/dev/xvdb'  # Most AWS instances have this device
if os.path.exists(device):
log.debug("/mnt is not a mountpoint; will try to mount it from {0}"
.format(device))
_run('mkfs.xfs {0}'.format(device))
_run('mount -o discard {0} /mnt'.format(device))
else:
log.warning("Mountpoint /mnt not available and no device {0}"
.format(device))
else:
log.debug("/mnt is already a mountpoint so not mounting it again.")
# ====================== Actions methods ======================
def _handle_empty():
log.info("Received empty user data; assuming default contextualization")
_create_basic_user_data_file() # This file is expected by CloudMan
# Get & run boot script
file_url = os.path.join(_get_default_bucket_url(), DEFAULT_BOOT_SCRIPT_NAME)
log.debug("Resorting to the default bucket to get the boot script: %s" % file_url)
_get_file_from_url(file_url)
_run_boot_script(DEFAULT_BOOT_SCRIPT_NAME)
def _handle_url(url):
log.info("Handling user data provided URL: '%s'" % url)
_get_file_from_url(url)
boot_script_name = os.path.split(url)[1]
_run_boot_script(boot_script_name)
# http://stackoverflow.com/questions/823196/yaml-merge-in-python
def _merge(specific, default):
"""
Recursively merges two yaml produced data structures,
a more specific input (`specific`) and defaults
(`default`).
"""
if isinstance(specific, dict) and isinstance(default, dict):
for k, v in default.iteritems():
if k not in specific:
specific[k] = v
else:
specific[k] = _merge(specific[k], v)
return specific
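# Sketch of the merge semantics (hypothetical values):
#   _merge({'cluster_name': 'c1'}, {'cluster_name': 'img', 'role': 'master'})
#   -> {'cluster_name': 'c1', 'role': 'master'}
# Keys missing from the user-supplied data are filled in from the image
# defaults, while user-supplied values always win.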
def _load_user_data(user_data):
""" Loads user data into dict (using pyyaml). If machine image
contains default data this is loaded and populated in resulting
data structure as well. These separate options are merged using
the `_merge` function above and priority is always given to
user supplied options.
"""
ud = yaml.load(user_data)
if ud == user_data:
# Bad user data, cannot merge default
return ud
default_user_data_path = \
os.path.join(os.path.dirname(os.path.abspath(__file__)), 'IMAGE_USER_DATA')
if os.path.exists(default_user_data_path):
image_ud = yaml.load(open(default_user_data_path, 'r').read())
if image_ud:
ud = _merge(ud, image_ud)
return ud
def _handle_yaml(user_data):
""" Process user data in YAML format"""
log.info("Handling user data in YAML format.")
ud = _load_user_data(user_data)
# Handle bad user data as a string
if ud == user_data:
return _handle_empty()
# Allow password based logins. Do so also in case only NX is being setup.
if "freenxpass" in ud or "password" in ud:
passwd = ud.get("freenxpass", None) or ud.get("password", None)
_allow_password_logins(passwd)
# Handle freenx passwords and the case with only a NX password sent
if "freenxpass" in ud:
_handle_freenx(ud["freenxpass"])
if len(ud) == 1:
return _handle_empty()
# Create a YAML file from user data and store it as USER_DATA_FILE
# This code simply ensures fields required by CloudMan are in the
# created file. Any other fields that might be included as user data
# are also included in the created USER_DATA_FILE
if ud.get('no_start', None) is not None:
log.info("Received 'no_start' user data option. Not doing anything else.")
return
if 'cluster_name' not in ud:
log.warning("The provided user data should contain cluster_name field.")
ud['cluster_name'] = 'aCloudManCluster_%s' % random.randrange(1, 9999999)
elif ud['cluster_name'] == '':
log.warning("The cluster_name field of user data should not be empty.")
ud['cluster_name'] = 'aCloudManCluster_%s' % random.randrange(1, 9999999)
if 'access_key' not in ud:
log.info("The provided user data does not contain access_key field; setting it to None..")
ud['access_key'] = None
elif ud['access_key'] == '' or ud['access_key'] is None:
log.warning("The access_key field of user data should not be empty; setting it to None.")
ud['access_key'] = None
if 'secret_key' not in ud:
log.info("The provided user data does not contain secret_key field; setting it to None.")
ud['secret_key'] = None
elif ud['secret_key'] == '' or ud['secret_key'] is None:
log.warning("The secret_key field of user data should not be empty; setting it to None.")
ud['secret_key'] = None
if 'password' not in ud:
log.warning("The provided user data should contain password field.")
elif ud['password'] == '':
log.warning("The password field of user data should not be empty.")
else: # ensure the password is a string
ud['password'] = str(ud['password'])
if 'bucket_default' not in ud:
log.debug("The provided user data does not contain bucket_default field; setting it to '%s'."
% DEFAULT_BUCKET_NAME)
ud['bucket_default'] = DEFAULT_BUCKET_NAME
elif ud['bucket_default'] == '':
log.warning("The bucket_default field of user data was empty; setting it to '%s'."
% DEFAULT_BUCKET_NAME)
ud['bucket_default'] = DEFAULT_BUCKET_NAME
if 'bucket_cluster' not in ud:
if ud['access_key'] is not None and ud['secret_key'] is not None:
ud['bucket_cluster'] = _get_bucket_name(ud['cluster_name'], ud['access_key'])
if 'role' not in ud:
ud['role'] = 'master'
if 'cloudman_home' not in ud:
ud['cloudman_home'] = CLOUDMAN_HOME
if 'boot_script_name' not in ud:
ud['boot_script_name'] = DEFAULT_BOOT_SCRIPT_NAME
ud['boot_script_path'] = LOCAL_PATH # Marks where boot script was saved
log.debug("Composed user data: %s" % ud)
with open(USER_DATA_FILE, 'w') as ud_yaml:
yaml.dump(ud, ud_yaml, default_flow_style=False)
_ensure_ephemeral_disk_mounted()
# Get & run boot script
if _get_boot_script(ud):
_run_boot_script(DEFAULT_BOOT_SCRIPT_NAME)
# ====================== Driver code ======================
def _parse_user_data(ud):
if ud == '':
_handle_empty()
elif _isurl(ud):
_handle_url(ud)
else: # default to yaml
_handle_yaml(ud)
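# The three user data forms dispatched above, with hypothetical examples:
#   ''                                             -> _handle_empty()
#   'http://s3.amazonaws.com/cloudman/cm_boot.py'  -> _handle_url()
#   'cluster_name: c1\npassword: somePWD'          -> _handle_yaml()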
def main():
if not os.path.exists(LOCAL_PATH):
os.mkdir(LOCAL_PATH)
global log
log = _setup_logging()
ud = _get_user_data()
_parse_user_data(ud)
log.info("---> %s done <---" % sys.argv[0])
if __name__ == "__main__":
main()
| {
"content_hash": "48051774c9935bb4a9d99d8ee0130340",
"timestamp": "",
"source": "github",
"line_count": 604,
"max_line_length": 112,
"avg_line_length": 40.19867549668874,
"alnum_prop": 0.6021004942339374,
"repo_name": "IntersectAustralia/galaxy-cloudman-playbook",
"id": "9881198959a777ffc9184d553997f237ac283840",
"size": "24302",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "roles/galaxyprojectdotorg.cloudman-image/files/ec2autorun.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "HTML",
"bytes": "2812"
},
{
"name": "Python",
"bytes": "63655"
},
{
"name": "Shell",
"bytes": "12421"
},
{
"name": "VimL",
"bytes": "1331"
}
],
"symlink_target": ""
} |
import sys
from exceptions import Exception
import pcapy
from pcapy import open_offline
from impacket.ImpactDecoder import EthDecoder, LinuxSLLDecoder
class Connection:
"""This class can be used as a key in a dictionary to select a connection
given a pair of peers. Two connections are considered the same if both
    peers are equal, regardless of the order in which they were passed to
    the class constructor.
"""
def __init__(self, p1, p2):
"""This constructor takes two tuples, one for each peer. The first
element in each tuple is the IP address as a string, and the
second is the port as an integer.
"""
self.p1 = p1
self.p2 = p2
def getFilename(self):
"""Utility function that returns a filename composed by the IP
addresses and ports of both peers.
"""
return '%s.%d-%s.%d.pcap'%(self.p1[0],self.p1[1],self.p2[0],self.p2[1])
def __cmp__(self, other):
if ((self.p1 == other.p1 and self.p2 == other.p2)
or (self.p1 == other.p2 and self.p2 == other.p1)):
return 0
else:
return -1
def __hash__(self):
return (hash(self.p1[0]) ^ hash(self.p1[1])
^ hash(self.p2[0]) ^ hash(self.p2[1]))
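# Illustration of the symmetric equality described in the docstring above
# (addresses and ports are made up):
#   a = Connection(('10.0.0.1', 1234), ('10.0.0.2', 80))
#   b = Connection(('10.0.0.2', 80), ('10.0.0.1', 1234))
#   a == b              # True: peer order does not matter
#   hash(a) == hash(b)  # True: both map to the same dictionary entry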
class Decoder:
def __init__(self, pcapObj):
# Query the type of the link and instantiate a decoder accordingly.
datalink = pcapObj.datalink()
if pcapy.DLT_EN10MB == datalink:
self.decoder = EthDecoder()
elif pcapy.DLT_LINUX_SLL == datalink:
self.decoder = LinuxSLLDecoder()
else:
raise Exception("Datalink type not supported: " % datalink)
self.pcap = pcapObj
self.connections = {}
def start(self):
# Sniff ad infinitum.
# PacketHandler shall be invoked by pcap for every packet.
self.pcap.loop(0, self.packetHandler)
def packetHandler(self, hdr, data):
"""Handles an incoming pcap packet. This method only knows how
to recognize TCP/IP connections.
Be sure that only TCP packets are passed onto this handler (or
fix the code to ignore the others).
Setting r"ip proto \tcp" as part of the pcap filter expression
suffices, and there shouldn't be any problem combining that with
other expressions.
"""
# Use the ImpactDecoder to turn the rawpacket into a hierarchy
# of ImpactPacket instances.
p = self.decoder.decode(data)
ip = p.child()
tcp = ip.child()
# Build a distinctive key for this pair of peers.
src = (ip.get_ip_src(), tcp.get_th_sport() )
dst = (ip.get_ip_dst(), tcp.get_th_dport() )
con = Connection(src,dst)
        # If there isn't an entry associated yet with this connection,
# open a new pcapdumper and create an association.
if not self.connections.has_key(con):
fn = con.getFilename()
print "Found a new connection, storing into:", fn
try:
dumper = self.pcap.dump_open(fn)
except pcapy.PcapError, e:
print "Can't write packet to:", fn
return
self.connections[con] = dumper
# Write the packet to the corresponding file.
self.connections[con].dump(hdr, data)
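# As the packetHandler docstring notes, the BPF filter set in main() below can
# be combined with other expressions; e.g. (hypothetical)
#   p.setfilter(r'ip proto \tcp and port 80')
# would restrict the capture to TCP traffic on port 80 before it reaches
# packetHandler.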
def main(filename):
# Open file
p = open_offline(filename)
# At the moment the callback only accepts TCP/IP packets.
p.setfilter(r'ip proto \tcp')
print "Reading from %s: linktype=%d" % (filename, p.datalink())
# Start decoding process.
Decoder(p).start()
# Process command-line arguments.
if __name__ == '__main__':
if len(sys.argv) <= 1:
print "Usage: %s <filename>" % sys.argv[0]
sys.exit(1)
main(sys.argv[1])
| {
"content_hash": "458a57a2dc5de3ee5f08be080ee1b9ca",
"timestamp": "",
"source": "github",
"line_count": 120,
"max_line_length": 79,
"avg_line_length": 31.991666666666667,
"alnum_prop": 0.6017191977077364,
"repo_name": "tholum/PiBunny",
"id": "05841fbe00ab137dee53997d4d8608b4accca372",
"size": "4396",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "system.d/library/tools_installer/tools_to_install/impacket/examples/split.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "Batchfile",
"bytes": "3527"
},
{
"name": "HTML",
"bytes": "195334"
},
{
"name": "JavaScript",
"bytes": "1156309"
},
{
"name": "PowerShell",
"bytes": "5359"
},
{
"name": "Python",
"bytes": "6368546"
},
{
"name": "Shell",
"bytes": "40720"
},
{
"name": "Visual Basic",
"bytes": "5660"
}
],
"symlink_target": ""
} |
from __future__ import print_function
from collections import deque
from datetime import datetime
import operator
import pytest
from numpy import nan, random
import numpy as np
from pandas.compat import lrange, range
from pandas import compat
from pandas import (DataFrame, Series, MultiIndex, Timestamp,
date_range)
import pandas.core.common as com
import pandas.io.formats.printing as printing
import pandas as pd
from pandas.util.testing import (assert_numpy_array_equal,
assert_series_equal,
assert_frame_equal)
import pandas.util.testing as tm
from pandas.tests.frame.common import (TestData, _check_mixed_float,
_check_mixed_int)
class TestDataFrameOperators(TestData):
def test_operators(self):
garbage = random.random(4)
colSeries = Series(garbage, index=np.array(self.frame.columns))
idSum = self.frame + self.frame
seriesSum = self.frame + colSeries
for col, series in compat.iteritems(idSum):
for idx, val in compat.iteritems(series):
origVal = self.frame[col][idx] * 2
if not np.isnan(val):
assert val == origVal
else:
assert np.isnan(origVal)
for col, series in compat.iteritems(seriesSum):
for idx, val in compat.iteritems(series):
origVal = self.frame[col][idx] + colSeries[col]
if not np.isnan(val):
assert val == origVal
else:
assert np.isnan(origVal)
added = self.frame2 + self.frame2
expected = self.frame2 * 2
assert_frame_equal(added, expected)
df = DataFrame({'a': ['a', None, 'b']})
assert_frame_equal(df + df, DataFrame({'a': ['aa', np.nan, 'bb']}))
# Test for issue #10181
for dtype in ('float', 'int64'):
frames = [
DataFrame(dtype=dtype),
DataFrame(columns=['A'], dtype=dtype),
DataFrame(index=[0], dtype=dtype),
]
for df in frames:
assert (df + df).equals(df)
assert_frame_equal(df + df, df)
def test_ops_np_scalar(self):
vals, xs = np.random.rand(5, 3), [nan, 7, -23, 2.718, -3.14, np.inf]
f = lambda x: DataFrame(x, index=list('ABCDE'),
columns=['jim', 'joe', 'jolie'])
df = f(vals)
for x in xs:
assert_frame_equal(df / np.array(x), f(vals / x))
assert_frame_equal(np.array(x) * df, f(vals * x))
assert_frame_equal(df + np.array(x), f(vals + x))
assert_frame_equal(np.array(x) - df, f(x - vals))
def test_operators_boolean(self):
# GH 5808
# empty frames, non-mixed dtype
result = DataFrame(index=[1]) & DataFrame(index=[1])
assert_frame_equal(result, DataFrame(index=[1]))
result = DataFrame(index=[1]) | DataFrame(index=[1])
assert_frame_equal(result, DataFrame(index=[1]))
result = DataFrame(index=[1]) & DataFrame(index=[1, 2])
assert_frame_equal(result, DataFrame(index=[1, 2]))
result = DataFrame(index=[1], columns=['A']) & DataFrame(
index=[1], columns=['A'])
assert_frame_equal(result, DataFrame(index=[1], columns=['A']))
result = DataFrame(True, index=[1], columns=['A']) & DataFrame(
True, index=[1], columns=['A'])
assert_frame_equal(result, DataFrame(True, index=[1], columns=['A']))
result = DataFrame(True, index=[1], columns=['A']) | DataFrame(
True, index=[1], columns=['A'])
assert_frame_equal(result, DataFrame(True, index=[1], columns=['A']))
# boolean ops
result = DataFrame(1, index=[1], columns=['A']) | DataFrame(
True, index=[1], columns=['A'])
assert_frame_equal(result, DataFrame(1, index=[1], columns=['A']))
def f():
DataFrame(1.0, index=[1], columns=['A']) | DataFrame(
True, index=[1], columns=['A'])
pytest.raises(TypeError, f)
def f():
DataFrame('foo', index=[1], columns=['A']) | DataFrame(
True, index=[1], columns=['A'])
pytest.raises(TypeError, f)
def test_operators_none_as_na(self):
df = DataFrame({"col1": [2, 5.0, 123, None],
"col2": [1, 2, 3, 4]}, dtype=object)
ops = [operator.add, operator.sub, operator.mul, operator.truediv]
# since filling converts dtypes from object, changed expected to be
# object
for op in ops:
filled = df.fillna(np.nan)
result = op(df, 3)
expected = op(filled, 3).astype(object)
expected[com.isna(expected)] = None
assert_frame_equal(result, expected)
result = op(df, df)
expected = op(filled, filled).astype(object)
expected[com.isna(expected)] = None
assert_frame_equal(result, expected)
result = op(df, df.fillna(7))
assert_frame_equal(result, expected)
result = op(df.fillna(7), df)
assert_frame_equal(result, expected, check_dtype=False)
def test_comparison_invalid(self):
def check(df, df2):
for (x, y) in [(df, df2), (df2, df)]:
pytest.raises(TypeError, lambda: x == y)
pytest.raises(TypeError, lambda: x != y)
pytest.raises(TypeError, lambda: x >= y)
pytest.raises(TypeError, lambda: x > y)
pytest.raises(TypeError, lambda: x < y)
pytest.raises(TypeError, lambda: x <= y)
# GH4968
# invalid date/int comparisons
df = DataFrame(np.random.randint(10, size=(10, 1)), columns=['a'])
df['dates'] = date_range('20010101', periods=len(df))
df2 = df.copy()
df2['dates'] = df['a']
check(df, df2)
df = DataFrame(np.random.randint(10, size=(10, 2)), columns=['a', 'b'])
df2 = DataFrame({'a': date_range('20010101', periods=len(
df)), 'b': date_range('20100101', periods=len(df))})
check(df, df2)
def test_timestamp_compare(self):
# make sure we can compare Timestamps on the right AND left hand side
# GH4982
df = DataFrame({'dates1': date_range('20010101', periods=10),
'dates2': date_range('20010102', periods=10),
'intcol': np.random.randint(1000000000, size=10),
'floatcol': np.random.randn(10),
'stringcol': list(tm.rands(10))})
df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT
ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq',
'ne': 'ne'}
for left, right in ops.items():
left_f = getattr(operator, left)
right_f = getattr(operator, right)
# no nats
expected = left_f(df, Timestamp('20010109'))
result = right_f(Timestamp('20010109'), df)
assert_frame_equal(result, expected)
# nats
expected = left_f(df, Timestamp('nat'))
result = right_f(Timestamp('nat'), df)
assert_frame_equal(result, expected)
def test_modulo(self):
# GH3590, modulo as ints
p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]})
        # this is technically wrong as the integer portion is coerced to float
expected = DataFrame({'first': Series([0, 0, 0, 0], dtype='float64'),
'second': Series([np.nan, np.nan, np.nan, 0])})
result = p % p
assert_frame_equal(result, expected)
        # numpy has a slightly different (wrong) treatment
with np.errstate(all='ignore'):
arr = p.values % p.values
result2 = DataFrame(arr, index=p.index,
columns=p.columns, dtype='float64')
result2.iloc[0:3, 1] = np.nan
assert_frame_equal(result2, expected)
result = p % 0
expected = DataFrame(np.nan, index=p.index, columns=p.columns)
assert_frame_equal(result, expected)
        # numpy has a slightly different (wrong) treatment
with np.errstate(all='ignore'):
arr = p.values.astype('float64') % 0
result2 = DataFrame(arr, index=p.index, columns=p.columns)
assert_frame_equal(result2, expected)
# not commutative with series
p = DataFrame(np.random.randn(10, 5))
s = p[0]
res = s % p
res2 = p % s
assert not np.array_equal(res.fillna(0), res2.fillna(0))
def test_div(self):
# integer div, but deal with the 0's (GH 9144)
p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]})
result = p / p
expected = DataFrame({'first': Series([1.0, 1.0, 1.0, 1.0]),
'second': Series([nan, nan, nan, 1])})
assert_frame_equal(result, expected)
with np.errstate(all='ignore'):
arr = p.values.astype('float') / p.values
result2 = DataFrame(arr, index=p.index,
columns=p.columns)
assert_frame_equal(result2, expected)
result = p / 0
expected = DataFrame(np.inf, index=p.index, columns=p.columns)
expected.iloc[0:3, 1] = nan
assert_frame_equal(result, expected)
        # numpy has a slightly different (wrong) treatment
with np.errstate(all='ignore'):
arr = p.values.astype('float64') / 0
result2 = DataFrame(arr, index=p.index,
columns=p.columns)
assert_frame_equal(result2, expected)
p = DataFrame(np.random.randn(10, 5))
s = p[0]
res = s / p
res2 = p / s
assert not np.array_equal(res.fillna(0), res2.fillna(0))
def test_logical_operators(self):
def _check_bin_op(op):
result = op(df1, df2)
expected = DataFrame(op(df1.values, df2.values), index=df1.index,
columns=df1.columns)
assert result.values.dtype == np.bool_
assert_frame_equal(result, expected)
def _check_unary_op(op):
result = op(df1)
expected = DataFrame(op(df1.values), index=df1.index,
columns=df1.columns)
assert result.values.dtype == np.bool_
assert_frame_equal(result, expected)
df1 = {'a': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True},
'b': {'a': False, 'b': True, 'c': False,
'd': False, 'e': False},
'c': {'a': False, 'b': False, 'c': True,
'd': False, 'e': False},
'd': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True},
'e': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}}
df2 = {'a': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False},
'b': {'a': False, 'b': True, 'c': False,
'd': False, 'e': False},
'c': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False},
'd': {'a': False, 'b': False, 'c': False,
'd': True, 'e': False},
'e': {'a': False, 'b': False, 'c': False,
'd': False, 'e': True}}
df1 = DataFrame(df1)
df2 = DataFrame(df2)
_check_bin_op(operator.and_)
_check_bin_op(operator.or_)
_check_bin_op(operator.xor)
# operator.neg is deprecated in numpy >= 1.9
_check_unary_op(operator.inv)
@pytest.mark.parametrize('op,res', [('__eq__', False),
('__ne__', True)])
def test_logical_typeerror_with_non_valid(self, op, res):
# we are comparing floats vs a string
result = getattr(self.frame, op)('foo')
assert bool(result.all().all()) is res
def test_logical_with_nas(self):
d = DataFrame({'a': [np.nan, False], 'b': [True, True]})
# GH4947
# bool comparisons should return bool
result = d['a'] | d['b']
expected = Series([False, True])
assert_series_equal(result, expected)
# GH4604, automatic casting here
result = d['a'].fillna(False) | d['b']
expected = Series([True, True])
assert_series_equal(result, expected)
result = d['a'].fillna(False, downcast=False) | d['b']
expected = Series([True, True])
assert_series_equal(result, expected)
def test_neg(self):
# what to do?
assert_frame_equal(-self.frame, -1 * self.frame)
def test_invert(self):
assert_frame_equal(-(self.frame < 0), ~(self.frame < 0))
def test_arith_flex_frame(self):
ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod']
if not compat.PY3:
aliases = {}
else:
aliases = {'div': 'truediv'}
for op in ops:
try:
alias = aliases.get(op, op)
f = getattr(operator, alias)
result = getattr(self.frame, op)(2 * self.frame)
exp = f(self.frame, 2 * self.frame)
assert_frame_equal(result, exp)
# vs mix float
result = getattr(self.mixed_float, op)(2 * self.mixed_float)
exp = f(self.mixed_float, 2 * self.mixed_float)
assert_frame_equal(result, exp)
_check_mixed_float(result, dtype=dict(C=None))
# vs mix int
if op in ['add', 'sub', 'mul']:
result = getattr(self.mixed_int, op)(2 + self.mixed_int)
exp = f(self.mixed_int, 2 + self.mixed_int)
# no overflow in the uint
dtype = None
if op in ['sub']:
dtype = dict(B='uint64', C=None)
elif op in ['add', 'mul']:
dtype = dict(C=None)
assert_frame_equal(result, exp)
_check_mixed_int(result, dtype=dtype)
# rops
r_f = lambda x, y: f(y, x)
result = getattr(self.frame, 'r' + op)(2 * self.frame)
exp = r_f(self.frame, 2 * self.frame)
assert_frame_equal(result, exp)
# vs mix float
result = getattr(self.mixed_float, op)(
2 * self.mixed_float)
exp = f(self.mixed_float, 2 * self.mixed_float)
assert_frame_equal(result, exp)
_check_mixed_float(result, dtype=dict(C=None))
result = getattr(self.intframe, op)(2 * self.intframe)
exp = f(self.intframe, 2 * self.intframe)
assert_frame_equal(result, exp)
# vs mix int
if op in ['add', 'sub', 'mul']:
result = getattr(self.mixed_int, op)(
2 + self.mixed_int)
exp = f(self.mixed_int, 2 + self.mixed_int)
# no overflow in the uint
dtype = None
if op in ['sub']:
dtype = dict(B='uint64', C=None)
elif op in ['add', 'mul']:
dtype = dict(C=None)
assert_frame_equal(result, exp)
_check_mixed_int(result, dtype=dtype)
except:
printing.pprint_thing("Failing operation %r" % op)
raise
# ndim >= 3
ndim_5 = np.ones(self.frame.shape + (3, 4, 5))
msg = "Unable to coerce to Series/DataFrame"
with tm.assert_raises_regex(ValueError, msg):
f(self.frame, ndim_5)
with tm.assert_raises_regex(ValueError, msg):
getattr(self.frame, op)(ndim_5)
# res_add = self.frame.add(self.frame)
# res_sub = self.frame.sub(self.frame)
# res_mul = self.frame.mul(self.frame)
# res_div = self.frame.div(2 * self.frame)
# assert_frame_equal(res_add, self.frame + self.frame)
# assert_frame_equal(res_sub, self.frame - self.frame)
# assert_frame_equal(res_mul, self.frame * self.frame)
# assert_frame_equal(res_div, self.frame / (2 * self.frame))
const_add = self.frame.add(1)
assert_frame_equal(const_add, self.frame + 1)
# corner cases
result = self.frame.add(self.frame[:0])
assert_frame_equal(result, self.frame * np.nan)
result = self.frame[:0].add(self.frame)
assert_frame_equal(result, self.frame * np.nan)
with tm.assert_raises_regex(NotImplementedError, 'fill_value'):
self.frame.add(self.frame.iloc[0], fill_value=3)
with tm.assert_raises_regex(NotImplementedError, 'fill_value'):
self.frame.add(self.frame.iloc[0], axis='index', fill_value=3)
def test_binary_ops_align(self):
# test aligning binary ops
# GH 6681
index = MultiIndex.from_product([list('abc'),
['one', 'two', 'three'],
[1, 2, 3]],
names=['first', 'second', 'third'])
df = DataFrame(np.arange(27 * 3).reshape(27, 3),
index=index,
columns=['value1', 'value2', 'value3']).sort_index()
idx = pd.IndexSlice
for op in ['add', 'sub', 'mul', 'div', 'truediv']:
opa = getattr(operator, op, None)
if opa is None:
continue
x = Series([1.0, 10.0, 100.0], [1, 2, 3])
result = getattr(df, op)(x, level='third', axis=0)
expected = pd.concat([opa(df.loc[idx[:, :, i], :], v)
for i, v in x.iteritems()]).sort_index()
assert_frame_equal(result, expected)
x = Series([1.0, 10.0], ['two', 'three'])
result = getattr(df, op)(x, level='second', axis=0)
expected = (pd.concat([opa(df.loc[idx[:, i], :], v)
for i, v in x.iteritems()])
.reindex_like(df).sort_index())
assert_frame_equal(result, expected)
# GH9463 (alignment level of dataframe with series)
midx = MultiIndex.from_product([['A', 'B'], ['a', 'b']])
df = DataFrame(np.ones((2, 4), dtype='int64'), columns=midx)
s = pd.Series({'a': 1, 'b': 2})
df2 = df.copy()
df2.columns.names = ['lvl0', 'lvl1']
s2 = s.copy()
s2.index.name = 'lvl1'
# different cases of integer/string level names:
res1 = df.mul(s, axis=1, level=1)
res2 = df.mul(s2, axis=1, level=1)
res3 = df2.mul(s, axis=1, level=1)
res4 = df2.mul(s2, axis=1, level=1)
res5 = df2.mul(s, axis=1, level='lvl1')
res6 = df2.mul(s2, axis=1, level='lvl1')
exp = DataFrame(np.array([[1, 2, 1, 2], [1, 2, 1, 2]], dtype='int64'),
columns=midx)
for res in [res1, res2]:
assert_frame_equal(res, exp)
exp.columns.names = ['lvl0', 'lvl1']
for res in [res3, res4, res5, res6]:
assert_frame_equal(res, exp)
def test_arith_mixed(self):
left = DataFrame({'A': ['a', 'b', 'c'],
'B': [1, 2, 3]})
result = left + left
expected = DataFrame({'A': ['aa', 'bb', 'cc'],
'B': [2, 4, 6]})
assert_frame_equal(result, expected)
def test_arith_getitem_commute(self):
df = DataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]})
self._test_op(df, operator.add)
self._test_op(df, operator.sub)
self._test_op(df, operator.mul)
self._test_op(df, operator.truediv)
self._test_op(df, operator.floordiv)
self._test_op(df, operator.pow)
self._test_op(df, lambda x, y: y + x)
self._test_op(df, lambda x, y: y - x)
self._test_op(df, lambda x, y: y * x)
self._test_op(df, lambda x, y: y / x)
self._test_op(df, lambda x, y: y ** x)
self._test_op(df, lambda x, y: x + y)
self._test_op(df, lambda x, y: x - y)
self._test_op(df, lambda x, y: x * y)
self._test_op(df, lambda x, y: x / y)
self._test_op(df, lambda x, y: x ** y)
@staticmethod
def _test_op(df, op):
result = op(df, 1)
if not df.columns.is_unique:
raise ValueError("Only unique columns supported by this test")
for col in result.columns:
assert_series_equal(result[col], op(df[col], 1))
def test_bool_flex_frame(self):
data = np.random.randn(5, 3)
other_data = np.random.randn(5, 3)
df = DataFrame(data)
other = DataFrame(other_data)
ndim_5 = np.ones(df.shape + (1, 3))
# Unaligned
def _check_unaligned_frame(meth, op, df, other):
part_o = other.loc[3:, 1:].copy()
rs = meth(part_o)
xp = op(df, part_o.reindex(index=df.index, columns=df.columns))
assert_frame_equal(rs, xp)
# DataFrame
assert df.eq(df).values.all()
assert not df.ne(df).values.any()
for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
f = getattr(df, op)
o = getattr(operator, op)
# No NAs
assert_frame_equal(f(other), o(df, other))
_check_unaligned_frame(f, o, df, other)
# ndarray
assert_frame_equal(f(other.values), o(df, other.values))
# scalar
assert_frame_equal(f(0), o(df, 0))
# NAs
msg = "Unable to coerce to Series/DataFrame"
assert_frame_equal(f(np.nan), o(df, np.nan))
with tm.assert_raises_regex(ValueError, msg):
f(ndim_5)
# Series
def _test_seq(df, idx_ser, col_ser):
idx_eq = df.eq(idx_ser, axis=0)
col_eq = df.eq(col_ser)
idx_ne = df.ne(idx_ser, axis=0)
col_ne = df.ne(col_ser)
assert_frame_equal(col_eq, df == Series(col_ser))
assert_frame_equal(col_eq, -col_ne)
assert_frame_equal(idx_eq, -idx_ne)
assert_frame_equal(idx_eq, df.T.eq(idx_ser).T)
assert_frame_equal(col_eq, df.eq(list(col_ser)))
assert_frame_equal(idx_eq, df.eq(Series(idx_ser), axis=0))
assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0))
idx_gt = df.gt(idx_ser, axis=0)
col_gt = df.gt(col_ser)
idx_le = df.le(idx_ser, axis=0)
col_le = df.le(col_ser)
assert_frame_equal(col_gt, df > Series(col_ser))
assert_frame_equal(col_gt, -col_le)
assert_frame_equal(idx_gt, -idx_le)
assert_frame_equal(idx_gt, df.T.gt(idx_ser).T)
idx_ge = df.ge(idx_ser, axis=0)
col_ge = df.ge(col_ser)
idx_lt = df.lt(idx_ser, axis=0)
col_lt = df.lt(col_ser)
assert_frame_equal(col_ge, df >= Series(col_ser))
assert_frame_equal(col_ge, -col_lt)
assert_frame_equal(idx_ge, -idx_lt)
assert_frame_equal(idx_ge, df.T.ge(idx_ser).T)
idx_ser = Series(np.random.randn(5))
col_ser = Series(np.random.randn(3))
_test_seq(df, idx_ser, col_ser)
# list/tuple
_test_seq(df, idx_ser.values, col_ser.values)
# NA
df.loc[0, 0] = np.nan
rs = df.eq(df)
assert not rs.loc[0, 0]
rs = df.ne(df)
assert rs.loc[0, 0]
rs = df.gt(df)
assert not rs.loc[0, 0]
rs = df.lt(df)
assert not rs.loc[0, 0]
rs = df.ge(df)
assert not rs.loc[0, 0]
rs = df.le(df)
assert not rs.loc[0, 0]
# complex
arr = np.array([np.nan, 1, 6, np.nan])
arr2 = np.array([2j, np.nan, 7, None])
df = DataFrame({'a': arr})
df2 = DataFrame({'a': arr2})
rs = df.gt(df2)
assert not rs.values.any()
rs = df.ne(df2)
assert rs.values.all()
arr3 = np.array([2j, np.nan, None])
df3 = DataFrame({'a': arr3})
rs = df3.gt(2j)
assert not rs.values.any()
# corner, dtype=object
df1 = DataFrame({'col': ['foo', np.nan, 'bar']})
df2 = DataFrame({'col': ['foo', datetime.now(), 'bar']})
result = df1.ne(df2)
exp = DataFrame({'col': [False, True, False]})
assert_frame_equal(result, exp)
    def test_return_dtypes_bool_op_constant(self):
# GH15077
df = DataFrame({'x': [1, 2, 3], 'y': [1., 2., 3.]})
const = 2
# not empty DataFrame
for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
result = getattr(df, op)(const).get_dtype_counts()
tm.assert_series_equal(result, Series([2], ['bool']))
# empty DataFrame
empty = df.iloc[:0]
for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
result = getattr(empty, op)(const).get_dtype_counts()
tm.assert_series_equal(result, Series([2], ['bool']))
def test_dti_tz_convert_to_utc(self):
base = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
'2011-01-03'], tz='UTC')
idx1 = base.tz_convert('Asia/Tokyo')[:2]
idx2 = base.tz_convert('US/Eastern')[1:]
df1 = DataFrame({'A': [1, 2]}, index=idx1)
df2 = DataFrame({'A': [1, 1]}, index=idx2)
exp = DataFrame({'A': [np.nan, 3, np.nan]}, index=base)
assert_frame_equal(df1 + df2, exp)
def test_arith_flex_series(self):
df = self.simple
row = df.xs('a')
col = df['two']
# after arithmetic refactor, add truediv here
ops = ['add', 'sub', 'mul', 'mod']
for op in ops:
f = getattr(df, op)
op = getattr(operator, op)
assert_frame_equal(f(row), op(df, row))
assert_frame_equal(f(col, axis=0), op(df.T, col).T)
# special case for some reason
assert_frame_equal(df.add(row, axis=None), df + row)
# cases which will be refactored after big arithmetic refactor
assert_frame_equal(df.div(row), df / row)
assert_frame_equal(df.div(col, axis=0), (df.T / col).T)
# broadcasting issue in GH7325
df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype='int64')
expected = DataFrame([[nan, np.inf], [1.0, 1.5], [1.0, 1.25]])
result = df.div(df[0], axis='index')
assert_frame_equal(result, expected)
df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype='float64')
expected = DataFrame([[np.nan, np.inf], [1.0, 1.5], [1.0, 1.25]])
result = df.div(df[0], axis='index')
assert_frame_equal(result, expected)
def test_arith_non_pandas_object(self):
df = self.simple
val1 = df.xs('a').values
added = DataFrame(df.values + val1, index=df.index, columns=df.columns)
assert_frame_equal(df + val1, added)
added = DataFrame((df.values.T + val1).T,
index=df.index, columns=df.columns)
assert_frame_equal(df.add(val1, axis=0), added)
val2 = list(df['two'])
added = DataFrame(df.values + val2, index=df.index, columns=df.columns)
assert_frame_equal(df + val2, added)
added = DataFrame((df.values.T + val2).T, index=df.index,
columns=df.columns)
assert_frame_equal(df.add(val2, axis='index'), added)
val3 = np.random.rand(*df.shape)
added = DataFrame(df.values + val3, index=df.index, columns=df.columns)
assert_frame_equal(df.add(val3), added)
@pytest.mark.parametrize('values', [[1, 2], (1, 2), np.array([1, 2]),
range(1, 3), deque([1, 2])])
def test_arith_alignment_non_pandas_object(self, values):
# GH 17901
df = DataFrame({'A': [1, 1], 'B': [1, 1]})
expected = DataFrame({'A': [2, 2], 'B': [3, 3]})
result = df + values
assert_frame_equal(result, expected)
def test_combineFrame(self):
frame_copy = self.frame.reindex(self.frame.index[::2])
del frame_copy['D']
frame_copy['C'][:5] = nan
added = self.frame + frame_copy
indexer = added['A'].valid().index
exp = (self.frame['A'] * 2).copy()
tm.assert_series_equal(added['A'].valid(), exp.loc[indexer])
exp.loc[~exp.index.isin(indexer)] = np.nan
tm.assert_series_equal(added['A'], exp.loc[added['A'].index])
assert np.isnan(added['C'].reindex(frame_copy.index)[:5]).all()
# assert(False)
assert np.isnan(added['D']).all()
self_added = self.frame + self.frame
tm.assert_index_equal(self_added.index, self.frame.index)
added_rev = frame_copy + self.frame
assert np.isnan(added['D']).all()
assert np.isnan(added_rev['D']).all()
# corner cases
# empty
plus_empty = self.frame + self.empty
assert np.isnan(plus_empty.values).all()
empty_plus = self.empty + self.frame
assert np.isnan(empty_plus.values).all()
empty_empty = self.empty + self.empty
assert empty_empty.empty
# out of order
reverse = self.frame.reindex(columns=self.frame.columns[::-1])
assert_frame_equal(reverse + self.frame, self.frame * 2)
# mix vs float64, upcast
added = self.frame + self.mixed_float
_check_mixed_float(added, dtype='float64')
added = self.mixed_float + self.frame
_check_mixed_float(added, dtype='float64')
# mix vs mix
added = self.mixed_float + self.mixed_float2
_check_mixed_float(added, dtype=dict(C=None))
added = self.mixed_float2 + self.mixed_float
_check_mixed_float(added, dtype=dict(C=None))
# with int
added = self.frame + self.mixed_int
_check_mixed_float(added, dtype='float64')
def test_combineSeries(self):
# Series
series = self.frame.xs(self.frame.index[0])
added = self.frame + series
for key, s in compat.iteritems(added):
assert_series_equal(s, self.frame[key] + series[key])
larger_series = series.to_dict()
larger_series['E'] = 1
larger_series = Series(larger_series)
larger_added = self.frame + larger_series
for key, s in compat.iteritems(self.frame):
assert_series_equal(larger_added[key], s + series[key])
assert 'E' in larger_added
assert np.isnan(larger_added['E']).all()
# no upcast needed
added = self.mixed_float + series
_check_mixed_float(added)
# vs mix (upcast) as needed
added = self.mixed_float + series.astype('float32')
_check_mixed_float(added, dtype=dict(C=None))
added = self.mixed_float + series.astype('float16')
_check_mixed_float(added, dtype=dict(C=None))
# these raise with numexpr.....as we are adding an int64 to an
# uint64....weird vs int
# added = self.mixed_int + (100*series).astype('int64')
# _check_mixed_int(added, dtype = dict(A = 'int64', B = 'float64', C =
# 'int64', D = 'int64'))
# added = self.mixed_int + (100*series).astype('int32')
# _check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C =
# 'int32', D = 'int64'))
# TimeSeries
ts = self.tsframe['A']
# 10890
# we no longer allow auto timeseries broadcasting
        # and require explicit broadcasting
added = self.tsframe.add(ts, axis='index')
for key, col in compat.iteritems(self.tsframe):
result = col + ts
assert_series_equal(added[key], result, check_names=False)
assert added[key].name == key
if col.name == ts.name:
assert result.name == 'A'
else:
assert result.name is None
smaller_frame = self.tsframe[:-5]
smaller_added = smaller_frame.add(ts, axis='index')
tm.assert_index_equal(smaller_added.index, self.tsframe.index)
smaller_ts = ts[:-5]
smaller_added2 = self.tsframe.add(smaller_ts, axis='index')
assert_frame_equal(smaller_added, smaller_added2)
# length 0, result is all-nan
result = self.tsframe.add(ts[:0], axis='index')
expected = DataFrame(np.nan, index=self.tsframe.index,
columns=self.tsframe.columns)
assert_frame_equal(result, expected)
# Frame is all-nan
result = self.tsframe[:0].add(ts, axis='index')
expected = DataFrame(np.nan, index=self.tsframe.index,
columns=self.tsframe.columns)
assert_frame_equal(result, expected)
# empty but with non-empty index
frame = self.tsframe[:1].reindex(columns=[])
result = frame.mul(ts, axis='index')
assert len(result) == len(ts)
def test_combineFunc(self):
result = self.frame * 2
tm.assert_numpy_array_equal(result.values, self.frame.values * 2)
# vs mix
result = self.mixed_float * 2
for c, s in compat.iteritems(result):
tm.assert_numpy_array_equal(
s.values, self.mixed_float[c].values * 2)
_check_mixed_float(result, dtype=dict(C=None))
result = self.empty * 2
assert result.index is self.empty.index
assert len(result.columns) == 0
def test_comparisons(self):
df1 = tm.makeTimeDataFrame()
df2 = tm.makeTimeDataFrame()
row = self.simple.xs('a')
ndim_5 = np.ones(df1.shape + (1, 1, 1))
def test_comp(func):
result = func(df1, df2)
tm.assert_numpy_array_equal(result.values,
func(df1.values, df2.values))
with tm.assert_raises_regex(ValueError,
'Wrong number of dimensions'):
func(df1, ndim_5)
result2 = func(self.simple, row)
tm.assert_numpy_array_equal(result2.values,
func(self.simple.values, row.values))
result3 = func(self.frame, 0)
tm.assert_numpy_array_equal(result3.values,
func(self.frame.values, 0))
with tm.assert_raises_regex(ValueError,
'Can only compare identically'
'-labeled DataFrame'):
func(self.simple, self.simple[:2])
test_comp(operator.eq)
test_comp(operator.ne)
test_comp(operator.lt)
test_comp(operator.gt)
test_comp(operator.ge)
test_comp(operator.le)
def test_comparison_protected_from_errstate(self):
missing_df = tm.makeDataFrame()
missing_df.iloc[0]['A'] = np.nan
with np.errstate(invalid='ignore'):
expected = missing_df.values < 0
with np.errstate(invalid='raise'):
result = (missing_df < 0).values
tm.assert_numpy_array_equal(result, expected)
def test_string_comparison(self):
df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}])
mask_a = df.a > 1
assert_frame_equal(df[mask_a], df.loc[1:1, :])
assert_frame_equal(df[-mask_a], df.loc[0:0, :])
mask_b = df.b == "foo"
assert_frame_equal(df[mask_b], df.loc[0:0, :])
assert_frame_equal(df[-mask_b], df.loc[1:1, :])
def test_float_none_comparison(self):
df = DataFrame(np.random.randn(8, 3), index=lrange(8),
columns=['A', 'B', 'C'])
pytest.raises(TypeError, df.__eq__, None)
def test_boolean_comparison(self):
# GH 4576
# boolean comparisons with a tuple/list give unexpected results
df = DataFrame(np.arange(6).reshape((3, 2)))
b = np.array([2, 2])
b_r = np.atleast_2d([2, 2])
b_c = b_r.T
l = (2, 2, 2)
tup = tuple(l)
# gt
expected = DataFrame([[False, False], [False, True], [True, True]])
result = df > b
assert_frame_equal(result, expected)
result = df.values > b
assert_numpy_array_equal(result, expected.values)
result = df > l
assert_frame_equal(result, expected)
result = df > tup
assert_frame_equal(result, expected)
result = df > b_r
assert_frame_equal(result, expected)
result = df.values > b_r
assert_numpy_array_equal(result, expected.values)
pytest.raises(ValueError, df.__gt__, b_c)
pytest.raises(ValueError, df.values.__gt__, b_c)
# ==
expected = DataFrame([[False, False], [True, False], [False, False]])
result = df == b
assert_frame_equal(result, expected)
result = df == l
assert_frame_equal(result, expected)
result = df == tup
assert_frame_equal(result, expected)
result = df == b_r
assert_frame_equal(result, expected)
result = df.values == b_r
assert_numpy_array_equal(result, expected.values)
pytest.raises(ValueError, lambda: df == b_c)
assert not np.array_equal(df.values, b_c)
# with alignment
df = DataFrame(np.arange(6).reshape((3, 2)),
columns=list('AB'), index=list('abc'))
expected.index = df.index
expected.columns = df.columns
result = df == l
assert_frame_equal(result, expected)
result = df == tup
assert_frame_equal(result, expected)
def test_boolean_comparison_error(self):
# GH 4576
# boolean comparisons with a tuple/list give unexpected results
df = DataFrame(np.arange(6).reshape((3, 2)))
# not shape compatible
pytest.raises(ValueError, lambda: df == (2, 2))
pytest.raises(ValueError, lambda: df == [2, 2])
def test_combine_generic(self):
df1 = self.frame
df2 = self.frame.loc[self.frame.index[:-5], ['A', 'B', 'C']]
combined = df1.combine(df2, np.add)
combined2 = df2.combine(df1, np.add)
assert combined['D'].isna().all()
assert combined2['D'].isna().all()
chunk = combined.loc[combined.index[:-5], ['A', 'B', 'C']]
chunk2 = combined2.loc[combined2.index[:-5], ['A', 'B', 'C']]
exp = self.frame.loc[self.frame.index[:-5],
['A', 'B', 'C']].reindex_like(chunk) * 2
assert_frame_equal(chunk, exp)
assert_frame_equal(chunk2, exp)
def test_inplace_ops_alignment(self):
# inplace ops / ops alignment
# GH 8511
columns = list('abcdefg')
X_orig = DataFrame(np.arange(10 * len(columns))
.reshape(-1, len(columns)),
columns=columns, index=range(10))
Z = 100 * X_orig.iloc[:, 1:-1].copy()
block1 = list('bedcf')
subs = list('bcdef')
# add
X = X_orig.copy()
result1 = (X[block1] + Z).reindex(columns=subs)
X[block1] += Z
result2 = X.reindex(columns=subs)
X = X_orig.copy()
result3 = (X[block1] + Z[block1]).reindex(columns=subs)
X[block1] += Z[block1]
result4 = X.reindex(columns=subs)
assert_frame_equal(result1, result2)
assert_frame_equal(result1, result3)
assert_frame_equal(result1, result4)
# sub
X = X_orig.copy()
result1 = (X[block1] - Z).reindex(columns=subs)
X[block1] -= Z
result2 = X.reindex(columns=subs)
X = X_orig.copy()
result3 = (X[block1] - Z[block1]).reindex(columns=subs)
X[block1] -= Z[block1]
result4 = X.reindex(columns=subs)
assert_frame_equal(result1, result2)
assert_frame_equal(result1, result3)
assert_frame_equal(result1, result4)
def test_inplace_ops_identity(self):
# GH 5104
# make sure that we are actually changing the object
s_orig = Series([1, 2, 3])
df_orig = DataFrame(np.random.randint(0, 5, size=10).reshape(-1, 5))
# no dtype change
s = s_orig.copy()
s2 = s
s += 1
assert_series_equal(s, s2)
assert_series_equal(s_orig + 1, s)
assert s is s2
assert s._data is s2._data
df = df_orig.copy()
df2 = df
df += 1
assert_frame_equal(df, df2)
assert_frame_equal(df_orig + 1, df)
assert df is df2
assert df._data is df2._data
# dtype change
s = s_orig.copy()
s2 = s
s += 1.5
assert_series_equal(s, s2)
assert_series_equal(s_orig + 1.5, s)
df = df_orig.copy()
df2 = df
df += 1.5
assert_frame_equal(df, df2)
assert_frame_equal(df_orig + 1.5, df)
assert df is df2
assert df._data is df2._data
# mixed dtype
arr = np.random.randint(0, 10, size=5)
df_orig = DataFrame({'A': arr.copy(), 'B': 'foo'})
df = df_orig.copy()
df2 = df
df['A'] += 1
expected = DataFrame({'A': arr.copy() + 1, 'B': 'foo'})
assert_frame_equal(df, expected)
assert_frame_equal(df2, expected)
assert df._data is df2._data
df = df_orig.copy()
df2 = df
df['A'] += 1.5
expected = DataFrame({'A': arr.copy() + 1.5, 'B': 'foo'})
assert_frame_equal(df, expected)
assert_frame_equal(df2, expected)
assert df._data is df2._data
@pytest.mark.parametrize('op', ['add', 'and', 'div', 'floordiv', 'mod',
'mul', 'or', 'pow', 'sub', 'truediv',
'xor'])
def test_inplace_ops_identity2(self, op):
if compat.PY3 and op == 'div':
return
df = DataFrame({'a': [1., 2., 3.],
'b': [1, 2, 3]})
operand = 2
if op in ('and', 'or', 'xor'):
# cannot use floats for boolean ops
df['a'] = [True, False, True]
df_copy = df.copy()
iop = '__i{}__'.format(op)
op = '__{}__'.format(op)
# no id change and value is correct
getattr(df, iop)(operand)
expected = getattr(df_copy, op)(operand)
assert_frame_equal(df, expected)
expected = id(df)
assert id(df) == expected
def test_alignment_non_pandas(self):
index = ['A', 'B', 'C']
columns = ['X', 'Y', 'Z']
df = pd.DataFrame(np.random.randn(3, 3), index=index, columns=columns)
align = pd.core.ops._align_method_FRAME
for val in [[1, 2, 3], (1, 2, 3), np.array([1, 2, 3], dtype=np.int64),
range(1, 4)]:
tm.assert_series_equal(align(df, val, 'index'),
Series([1, 2, 3], index=df.index))
tm.assert_series_equal(align(df, val, 'columns'),
Series([1, 2, 3], index=df.columns))
# length mismatch
msg = 'Unable to coerce to Series, length must be 3: given 2'
for val in [[1, 2], (1, 2), np.array([1, 2]), range(1, 3)]:
with tm.assert_raises_regex(ValueError, msg):
align(df, val, 'index')
with tm.assert_raises_regex(ValueError, msg):
align(df, val, 'columns')
val = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
tm.assert_frame_equal(align(df, val, 'index'),
DataFrame(val, index=df.index,
columns=df.columns))
tm.assert_frame_equal(align(df, val, 'columns'),
DataFrame(val, index=df.index,
columns=df.columns))
# shape mismatch
msg = 'Unable to coerce to DataFrame, shape must be'
val = np.array([[1, 2, 3], [4, 5, 6]])
with tm.assert_raises_regex(ValueError, msg):
align(df, val, 'index')
with tm.assert_raises_regex(ValueError, msg):
align(df, val, 'columns')
val = np.zeros((3, 3, 3))
with pytest.raises(ValueError):
align(df, val, 'index')
with pytest.raises(ValueError):
align(df, val, 'columns')
| {
"content_hash": "87b026aef75a4722689aea2d270182b9",
"timestamp": "",
"source": "github",
"line_count": 1249,
"max_line_length": 79,
"avg_line_length": 36.056845476381106,
"alnum_prop": 0.516598201398912,
"repo_name": "NixaSoftware/CVis",
"id": "986ba543141929a024cd8098cccc5f76819c35d7",
"size": "45060",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "venv/lib/python2.7/site-packages/pandas/tests/frame/test_operators.py",
"mode": "33261",
"license": "apache-2.0",
"language": [
{
"name": "Assembly",
"bytes": "160965"
},
{
"name": "Batchfile",
"bytes": "45451"
},
{
"name": "C",
"bytes": "4818355"
},
{
"name": "C#",
"bytes": "40804"
},
{
"name": "C++",
"bytes": "145737889"
},
{
"name": "CMake",
"bytes": "53495"
},
{
"name": "CSS",
"bytes": "287550"
},
{
"name": "CWeb",
"bytes": "174166"
},
{
"name": "Cuda",
"bytes": "26749"
},
{
"name": "Fortran",
"bytes": "9668"
},
{
"name": "HTML",
"bytes": "155266453"
},
{
"name": "IDL",
"bytes": "14"
},
{
"name": "JavaScript",
"bytes": "225380"
},
{
"name": "Lex",
"bytes": "1231"
},
{
"name": "M4",
"bytes": "29689"
},
{
"name": "Makefile",
"bytes": "1560105"
},
{
"name": "Max",
"bytes": "36857"
},
{
"name": "Objective-C",
"bytes": "4303"
},
{
"name": "Objective-C++",
"bytes": "218"
},
{
"name": "PHP",
"bytes": "59030"
},
{
"name": "Perl",
"bytes": "23580"
},
{
"name": "Perl 6",
"bytes": "7975"
},
{
"name": "Python",
"bytes": "28662237"
},
{
"name": "QML",
"bytes": "593"
},
{
"name": "Rebol",
"bytes": "354"
},
{
"name": "Roff",
"bytes": "8039"
},
{
"name": "Shell",
"bytes": "376471"
},
{
"name": "Smarty",
"bytes": "2045"
},
{
"name": "Tcl",
"bytes": "1172"
},
{
"name": "TeX",
"bytes": "13404"
},
{
"name": "XSLT",
"bytes": "746813"
},
{
"name": "Yacc",
"bytes": "18910"
}
],
"symlink_target": ""
} |
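The pandas tests above hinge on one guarantee: an augmented assignment such as ``df += 1`` must mutate the existing object (same identity, same underlying block manager) rather than rebind the name to a fresh frame, even when the operation forces a dtype change. A minimal sketch of that behaviour outside the test suite, assuming only numpy and pandas are installed::

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'a': np.arange(3)})   # starts as an int64 column
    alias = df                               # second name bound to the same object

    df += 1.5                                # in-place add that upcasts int64 -> float64

    # The name still points at the original object, and the alias sees the new values.
    assert df is alias
    assert (alias['a'] == [1.5, 2.5, 3.5]).all()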
"""Tests for the BSBLan device config flow."""
import aiohttp
from homeassistant import data_entry_flow
from homeassistant.components.bsblan import config_flow
from homeassistant.components.bsblan.const import CONF_DEVICE_IDENT, CONF_PASSKEY
from homeassistant.config_entries import SOURCE_USER
from homeassistant.const import (
CONF_HOST,
CONF_PASSWORD,
CONF_PORT,
CONF_USERNAME,
CONTENT_TYPE_JSON,
)
from homeassistant.core import HomeAssistant
from . import init_integration
from tests.common import load_fixture
from tests.test_util.aiohttp import AiohttpClientMocker
async def test_show_user_form(hass: HomeAssistant) -> None:
"""Test that the user set up form is served."""
result = await hass.config_entries.flow.async_init(
config_flow.DOMAIN,
context={"source": SOURCE_USER},
)
assert result["step_id"] == "user"
assert result["type"] == data_entry_flow.FlowResultType.FORM
async def test_connection_error(
hass: HomeAssistant, aioclient_mock: AiohttpClientMocker
) -> None:
"""Test we show user form on BSBLan connection error."""
aioclient_mock.post(
"http://example.local:80/1234/JQ?Parameter=6224,6225,6226",
exc=aiohttp.ClientError,
)
result = await hass.config_entries.flow.async_init(
config_flow.DOMAIN,
context={"source": SOURCE_USER},
data={
CONF_HOST: "example.local",
CONF_USERNAME: "nobody",
CONF_PASSWORD: "qwerty",
CONF_PASSKEY: "1234",
CONF_PORT: 80,
},
)
assert result["errors"] == {"base": "cannot_connect"}
assert result["step_id"] == "user"
assert result["type"] == data_entry_flow.FlowResultType.FORM
async def test_user_device_exists_abort(
hass: HomeAssistant, aioclient_mock: AiohttpClientMocker
) -> None:
"""Test we abort zeroconf flow if BSBLan device already configured."""
await init_integration(hass, aioclient_mock)
result = await hass.config_entries.flow.async_init(
config_flow.DOMAIN,
context={"source": SOURCE_USER},
data={
CONF_HOST: "example.local",
CONF_USERNAME: "nobody",
CONF_PASSWORD: "qwerty",
CONF_PASSKEY: "1234",
CONF_PORT: 80,
},
)
assert result["type"] == data_entry_flow.FlowResultType.ABORT
async def test_full_user_flow_implementation(
hass: HomeAssistant, aioclient_mock
) -> None:
"""Test the full manual user flow from start to finish."""
aioclient_mock.post(
"http://example.local:80/1234/JQ?Parameter=6224,6225,6226",
text=load_fixture("bsblan/info.json"),
headers={"Content-Type": CONTENT_TYPE_JSON},
)
result = await hass.config_entries.flow.async_init(
config_flow.DOMAIN,
context={"source": SOURCE_USER},
)
assert result["step_id"] == "user"
assert result["type"] == data_entry_flow.FlowResultType.FORM
result = await hass.config_entries.flow.async_configure(
result["flow_id"],
user_input={
CONF_HOST: "example.local",
CONF_USERNAME: "nobody",
CONF_PASSWORD: "qwerty",
CONF_PASSKEY: "1234",
CONF_PORT: 80,
},
)
assert result["data"][CONF_HOST] == "example.local"
assert result["data"][CONF_USERNAME] == "nobody"
assert result["data"][CONF_PASSWORD] == "qwerty"
assert result["data"][CONF_PASSKEY] == "1234"
assert result["data"][CONF_PORT] == 80
assert result["data"][CONF_DEVICE_IDENT] == "RVS21.831F/127"
assert result["title"] == "RVS21.831F/127"
assert result["type"] == data_entry_flow.FlowResultType.CREATE_ENTRY
entries = hass.config_entries.async_entries(config_flow.DOMAIN)
assert entries[0].unique_id == "RVS21.831F/127"
async def test_full_user_flow_implementation_without_auth(
hass: HomeAssistant, aioclient_mock
) -> None:
"""Test the full manual user flow from start to finish."""
aioclient_mock.post(
"http://example2.local:80/JQ?Parameter=6224,6225,6226",
text=load_fixture("bsblan/info.json"),
headers={"Content-Type": CONTENT_TYPE_JSON},
)
result = await hass.config_entries.flow.async_init(
config_flow.DOMAIN,
context={"source": SOURCE_USER},
)
assert result["step_id"] == "user"
assert result["type"] == data_entry_flow.FlowResultType.FORM
result = await hass.config_entries.flow.async_configure(
result["flow_id"],
user_input={
CONF_HOST: "example2.local",
CONF_PORT: 80,
},
)
assert result["data"][CONF_HOST] == "example2.local"
assert result["data"][CONF_USERNAME] is None
assert result["data"][CONF_PASSWORD] is None
assert result["data"][CONF_PASSKEY] is None
assert result["data"][CONF_PORT] == 80
assert result["data"][CONF_DEVICE_IDENT] == "RVS21.831F/127"
assert result["title"] == "RVS21.831F/127"
assert result["type"] == data_entry_flow.FlowResultType.CREATE_ENTRY
entries = hass.config_entries.async_entries(config_flow.DOMAIN)
assert entries[0].unique_id == "RVS21.831F/127"
| {
"content_hash": "8998796f9a9fac73fd36aec24659450c",
"timestamp": "",
"source": "github",
"line_count": 159,
"max_line_length": 81,
"avg_line_length": 32.685534591194966,
"alnum_prop": 0.6430633057533193,
"repo_name": "nkgilley/home-assistant",
"id": "b8efa960fcabd9b3ce441d8673cc69ab8ecb9587",
"size": "5197",
"binary": false,
"copies": "1",
"ref": "refs/heads/dev",
"path": "tests/components/bsblan/test_config_flow.py",
"mode": "33188",
"license": "apache-2.0",
"language": [
{
"name": "Dockerfile",
"bytes": "2963"
},
{
"name": "PLSQL",
"bytes": "840"
},
{
"name": "Python",
"bytes": "51597279"
},
{
"name": "Shell",
"bytes": "6252"
}
],
"symlink_target": ""
} |
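The two full-flow tests above mock different endpoints: with a passkey the integration is expected to query ``http://example.local:80/1234/JQ?Parameter=6224,6225,6226``, while without one the passkey path segment is simply dropped. A hypothetical helper illustrating that URL shape (the function name and signature are not part of the integration; only the URL pattern is taken from the mocked calls above)::

    def build_query_url(host, port, passkey, parameters):
        """Build a BSB-LAN JQ query URL in the shape the tests above mock."""
        base = f"http://{host}:{port}"
        if passkey:
            # The passkey, when present, becomes an extra path segment before /JQ.
            base = f"{base}/{passkey}"
        return f"{base}/JQ?Parameter={parameters}"

    assert (build_query_url("example.local", 80, "1234", "6224,6225,6226")
            == "http://example.local:80/1234/JQ?Parameter=6224,6225,6226")
    assert (build_query_url("example2.local", 80, None, "6224,6225,6226")
            == "http://example2.local:80/JQ?Parameter=6224,6225,6226")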
import rospy
from std_msgs.msg import String
import std_msgs.msg
from ros_quad_experimental.msg import Barometer
import Adafruit_BMP.BMP085 as BMP085


def talker():
    # Publish BMP085 barometer readings on the 'barometer' topic at 10 Hz.
    pub = rospy.Publisher('barometer', Barometer, queue_size=10)
    rospy.init_node('barometersensor', anonymous=True)
    r = rospy.Rate(10)  # 10 Hz
    sensor = BMP085.BMP085()
    while not rospy.is_shutdown():
        bar = Barometer()

        # One shared header stamps the message and its nested sensor fields.
        h = std_msgs.msg.Header()
        h.frame_id = "quad_test"
        h.stamp = rospy.Time.now()  # note: rospy.init_node() must be called before this works
        bar.header = h
        bar.pressure.header = h
        bar.pressure_sea_level.header = h
        bar.temperature.header = h

        bar.pressure.fluid_pressure = sensor.read_pressure()  # Pa
        bar.temperature.temperature = sensor.read_temperature()  # degrees Celsius
        bar.altitude = sensor.read_altitude()  # meters
        bar.pressure_sea_level.fluid_pressure = sensor.read_sealevel_pressure()  # Pa

        # rospy.get_time()
        # rospy.loginfo(bar)
        pub.publish(bar)
        r.sleep()


if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
| {
"content_hash": "4ba327a29b6437e33f38698a91219168",
"timestamp": "",
"source": "github",
"line_count": 36,
"max_line_length": 98,
"avg_line_length": 33.138888888888886,
"alnum_prop": 0.6345347862531433,
"repo_name": "wonkothesanest/ros_quad_experimental",
"id": "80bd7baf69b95630fdbab130dae74d146374c560",
"size": "2928",
"binary": false,
"copies": "1",
"ref": "refs/heads/master",
"path": "scripts/talker_pressure.py",
"mode": "33261",
"license": "mit",
"language": [
{
"name": "Python",
"bytes": "13952"
}
],
"symlink_target": ""
} |
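The talker above only publishes; nothing in the file shows how the ``barometer`` topic is consumed. A minimal matching subscriber, assuming the same ``ros_quad_experimental/Barometer`` message definition (the node and callback names here are illustrative, not part of the repository)::

    import rospy
    from ros_quad_experimental.msg import Barometer

    def on_barometer(msg):
        # Log the fields published by the talker: pressure (Pa), temperature (C), altitude (m).
        rospy.loginfo("pressure=%.1f Pa temperature=%.1f C altitude=%.2f m",
                      msg.pressure.fluid_pressure,
                      msg.temperature.temperature,
                      msg.altitude)

    def listener():
        rospy.init_node('barometer_listener', anonymous=True)
        rospy.Subscriber('barometer', Barometer, on_barometer)
        rospy.spin()

    if __name__ == '__main__':
        listener()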
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Task',
            fields=[
                ('id', models.AutoField(auto_created=True, verbose_name='ID', primary_key=True, serialize=False)),
                ('title', models.CharField(max_length=200)),
                ('pub_date', models.DateTimeField(verbose_name='date published')),
            ],
        ),
        migrations.CreateModel(
            name='User',
            fields=[
                ('id', models.AutoField(auto_created=True, verbose_name='ID', primary_key=True, serialize=False)),
            ],
        ),
        migrations.AddField(
            model_name='task',
            name='subscribers',
            field=models.ManyToManyField(to='taskzilla.User'),
        ),
    ]
| {
"content_hash": "fa0d93e9f79f4de0e674b683f0bb29c9",
"timestamp": "",
"source": "github",
"line_count": 31,
"max_line_length": 114,
"avg_line_length": 29.838709677419356,
"alnum_prop": 0.5459459459459459,
"repo_name": "static-code-generators/taskzilla",
"id": "5b80f7e9ad7f07f50b672ce7e19f5cb06aa3ecb4",
"size": "949",
"binary": false,
"copies": "2",
"ref": "refs/heads/master",
"path": "taskzilla/migrations/0001_initial.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "38"
},
{
"name": "HTML",
"bytes": "7846"
},
{
"name": "JavaScript",
"bytes": "484"
},
{
"name": "Python",
"bytes": "16124"
}
],
"symlink_target": ""
} |
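For readers who want the migration above in model form, here is a sketch of the ``models.py`` it could have been generated from. Field names and options are taken directly from the migration; the class layout itself is an assumption::

    from __future__ import unicode_literals

    from django.db import models


    class User(models.Model):
        # The migration only creates the auto primary key for this model.
        pass


    class Task(models.Model):
        title = models.CharField(max_length=200)
        pub_date = models.DateTimeField('date published')
        subscribers = models.ManyToManyField(User)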
from __future__ import unicode_literals, division, absolute_import
from builtins import *  # pylint: disable=unused-import, redefined-builtin

import pytest


@pytest.mark.online
class TestHeaders(object):
    config = """
        tasks:
          test_headers:
            text:
              url: http://httpbin.org/cookies
              entry:
                title: '\"title\": \"(.*)\"'
                url: '\"url\": \"(.*)\"'
              headers:
                Cookie: "title=blah; url=other"
    """

    def test_headers(self, execute_task):
        task = execute_task('test_headers', options={'nocache': True})
        assert task.find_entry(title='blah', url='other'), 'Entry should have been created.'
| {
"content_hash": "48a969e07774a9e9c51fa13a11074256",
"timestamp": "",
"source": "github",
"line_count": 23,
"max_line_length": 92,
"avg_line_length": 31.08695652173913,
"alnum_prop": 0.5566433566433566,
"repo_name": "dsemi/Flexget",
"id": "d14f4646e532e61a87545958d7c0d3e439fbb0a7",
"size": "715",
"binary": false,
"copies": "4",
"ref": "refs/heads/develop",
"path": "flexget/tests/test_headers.py",
"mode": "33188",
"license": "mit",
"language": [
{
"name": "CSS",
"bytes": "9267"
},
{
"name": "HTML",
"bytes": "49634"
},
{
"name": "JavaScript",
"bytes": "239926"
},
{
"name": "Python",
"bytes": "2781815"
},
{
"name": "SRecode Template",
"bytes": "3"
}
],
"symlink_target": ""
} |
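The Flexget test above relies on httpbin.org echoing the cookies it receives back as JSON, which is what the two regular expressions in the ``text`` plugin configuration match against. A quick way to see that response shape outside Flexget, assuming the ``requests`` package is available and httpbin.org is reachable::

    import requests

    resp = requests.get("http://httpbin.org/cookies",
                        headers={"Cookie": "title=blah; url=other"})
    # httpbin echoes the received cookies, e.g. {"cookies": {"title": "blah", "url": "other"}}
    print(resp.json())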