repo_name (string, 6-100 chars) | path (string, 4-294 chars) | copies (string, 1-5 chars) | size (string, 4-6 chars) | content (string, 606-896k chars) | license (string, 15 classes)
---|---|---|---|---|---|
mcus/SickRage | lib/sqlalchemy/ext/mutable.py | 76 | 22912 | # ext/mutable.py
# Copyright (C) 2005-2014 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Provide support for tracking of in-place changes to scalar values,
which are propagated into ORM change events on owning parent objects.
.. versionadded:: 0.7 :mod:`sqlalchemy.ext.mutable` replaces SQLAlchemy's
legacy approach to in-place mutations of scalar values; see
:ref:`07_migration_mutation_extension`.
.. _mutable_scalars:
Establishing Mutability on Scalar Column Values
===============================================
A typical example of a "mutable" structure is a Python dictionary.
Following the example introduced in :ref:`types_toplevel`, we
begin with a custom type that marshals Python dictionaries into
JSON strings before being persisted::
from sqlalchemy.types import TypeDecorator, VARCHAR
import json
class JSONEncodedDict(TypeDecorator):
"Represents an immutable structure as a json-encoded string."
impl = VARCHAR
def process_bind_param(self, value, dialect):
if value is not None:
value = json.dumps(value)
return value
def process_result_value(self, value, dialect):
if value is not None:
value = json.loads(value)
return value
The usage of ``json`` is only for the purposes of example. The
:mod:`sqlalchemy.ext.mutable` extension can be used
with any type whose target Python type may be mutable, including
:class:`.PickleType`, :class:`.postgresql.ARRAY`, etc.
When using the :mod:`sqlalchemy.ext.mutable` extension, the value itself
tracks all parents which reference it. Below, we illustrate a simple
version of the :class:`.MutableDict` dictionary object, which applies
the :class:`.Mutable` mixin to a plain Python dictionary::
from sqlalchemy.ext.mutable import Mutable
class MutableDict(Mutable, dict):
@classmethod
def coerce(cls, key, value):
"Convert plain dictionaries to MutableDict."
if not isinstance(value, MutableDict):
if isinstance(value, dict):
return MutableDict(value)
# this call will raise ValueError
return Mutable.coerce(key, value)
else:
return value
def __setitem__(self, key, value):
"Detect dictionary set events and emit change events."
dict.__setitem__(self, key, value)
self.changed()
def __delitem__(self, key):
"Detect dictionary del events and emit change events."
dict.__delitem__(self, key)
self.changed()
The above dictionary class takes the approach of subclassing the Python
built-in ``dict`` to produce a dict
subclass which routes all mutation events through ``__setitem__``. There are
variants on this approach, such as subclassing ``UserDict.UserDict`` or
``collections.MutableMapping``; the part that's important to this example is
that the :meth:`.Mutable.changed` method is called whenever an in-place
change to the datastructure takes place.
We also redefine the :meth:`.Mutable.coerce` method which will be used to
convert any values that are not instances of ``MutableDict``, such
as the plain dictionaries returned by the ``json`` module, into the
appropriate type. Defining this method is optional; we could just as well
have created our ``JSONEncodedDict`` such that it always returns an instance
of ``MutableDict``, and additionally ensured that all calling code
uses ``MutableDict`` explicitly. When :meth:`.Mutable.coerce` is not
overridden, any values applied to a parent object which are not instances
of the mutable type will raise a ``ValueError``.
Our new ``MutableDict`` type offers a class method
:meth:`~.Mutable.as_mutable` which we can use within column metadata
to associate with types. This method grabs the given type object or
class and associates a listener that will detect all future mappings
of this type, applying event listening instrumentation to the mapped
attribute. For example, with classical table metadata::
from sqlalchemy import Table, Column, Integer
my_data = Table('my_data', metadata,
Column('id', Integer, primary_key=True),
Column('data', MutableDict.as_mutable(JSONEncodedDict))
)
Above, :meth:`~.Mutable.as_mutable` returns an instance of ``JSONEncodedDict``
(if the type object was not an instance already), which will intercept any
attributes which are mapped against this type. Below we establish a simple
mapping against the ``my_data`` table::
from sqlalchemy.orm import mapper
class MyDataClass(object):
pass
# associates mutation listeners with MyDataClass.data
mapper(MyDataClass, my_data)
The ``MyDataClass.data`` member will now be notified of in place changes
to its value.
There's no difference in usage when using declarative::
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class MyDataClass(Base):
__tablename__ = 'my_data'
id = Column(Integer, primary_key=True)
data = Column(MutableDict.as_mutable(JSONEncodedDict))
Any in-place changes to the ``MyDataClass.data`` member
will flag the attribute as "dirty" on the parent object::
>>> from sqlalchemy.orm import Session
>>> sess = Session()
>>> m1 = MyDataClass(data={'value1':'foo'})
>>> sess.add(m1)
>>> sess.commit()
>>> m1.data['value1'] = 'bar'
>>> assert m1 in sess.dirty
The ``MutableDict`` can be associated with all future instances
of ``JSONEncodedDict`` in one step, using
:meth:`~.Mutable.associate_with`. This is similar to
:meth:`~.Mutable.as_mutable` except it will intercept all occurrences
of ``MutableDict`` in all mappings unconditionally, without
the need to declare it individually::
MutableDict.associate_with(JSONEncodedDict)
class MyDataClass(Base):
__tablename__ = 'my_data'
id = Column(Integer, primary_key=True)
data = Column(JSONEncodedDict)
Supporting Pickling
--------------------
The :mod:`sqlalchemy.ext.mutable` extension relies upon the placement of a
``weakref.WeakKeyDictionary`` upon the value object, which
stores a mapping of parent mapped objects keyed to the attribute name under
which they are associated with this value. ``WeakKeyDictionary`` objects are
not picklable, due to the fact that they contain weakrefs and function
callbacks. In our case, this is a good thing, since if this dictionary were
picklable, it could lead to an excessively large pickle size for our value
objects that are pickled by themselves outside of the context of the parent.
The developer's responsibility here is only to provide a ``__getstate__`` method
that excludes the :meth:`~MutableBase._parents` collection from the pickle
stream::
class MyMutableType(Mutable):
def __getstate__(self):
d = self.__dict__.copy()
d.pop('_parents', None)
return d
With our dictionary example, we need to return the contents of the dict itself
(and also restore them on __setstate__)::
class MutableDict(Mutable, dict):
# ....
def __getstate__(self):
return dict(self)
def __setstate__(self, state):
self.update(state)
In the case that our mutable value object is pickled as it is attached to one
or more parent objects that are also part of the pickle, the :class:`.Mutable`
mixin will re-establish the :attr:`.Mutable._parents` collection on each value
object as the owning parents themselves are unpickled.
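For example, given the ``MyDataClass`` mapping above, both the value and its
parent can be round-tripped through ``pickle`` (illustrative only)::
    import pickle
    m1 = MyDataClass(data={'value1': 'foo'})
    # the value alone pickles cleanly, since _parents is excluded
    data_copy = pickle.loads(pickle.dumps(m1.data))
    # pickled together with its parent, the _parents collection is
    # re-established on the unpickled copy
    m1_copy = pickle.loads(pickle.dumps(m1))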
.. _mutable_composites:
Establishing Mutability on Composites
=====================================
Composites are a special ORM feature which allow a single scalar attribute to
be assigned an object value which represents information "composed" from one
or more columns from the underlying mapped table. The usual example is that of
a geometric "point", and is introduced in :ref:`mapper_composite`.
.. versionchanged:: 0.7
The internals of :func:`.orm.composite` have been
greatly simplified and in-place mutation detection is no longer enabled by
default; instead, the user-defined value must detect changes on its own and
propagate them to all owning parents. The :mod:`sqlalchemy.ext.mutable`
extension provides the helper class :class:`.MutableComposite`, which is a
slight variant on the :class:`.Mutable` class.
As is the case with :class:`.Mutable`, the user-defined composite class
subclasses :class:`.MutableComposite` as a mixin, and detects and delivers
change events to its parents via the :meth:`.MutableComposite.changed` method.
In the case of a composite class, the detection is usually via the usage of
Python descriptors (i.e. ``@property``), or alternatively via the special
Python method ``__setattr__()``. Below we expand upon the ``Point`` class
introduced in :ref:`mapper_composite` to subclass :class:`.MutableComposite`
and to also route attribute set events via ``__setattr__`` to the
:meth:`.MutableComposite.changed` method::
from sqlalchemy.ext.mutable import MutableComposite
class Point(MutableComposite):
def __init__(self, x, y):
self.x = x
self.y = y
def __setattr__(self, key, value):
"Intercept set events"
# set the attribute
object.__setattr__(self, key, value)
# alert all parents to the change
self.changed()
def __composite_values__(self):
return self.x, self.y
def __eq__(self, other):
return isinstance(other, Point) and \\
other.x == self.x and \\
other.y == self.y
def __ne__(self, other):
return not self.__eq__(other)
The :class:`.MutableComposite` class uses a Python metaclass to automatically
establish listeners for any usage of :func:`.orm.composite` that specifies our
``Point`` type. Below, when ``Point`` is mapped to the ``Vertex`` class,
listeners are established which will route change events from ``Point``
objects to each of the ``Vertex.start`` and ``Vertex.end`` attributes::
from sqlalchemy.orm import composite, mapper
from sqlalchemy import Table, Column
vertices = Table('vertices', metadata,
Column('id', Integer, primary_key=True),
Column('x1', Integer),
Column('y1', Integer),
Column('x2', Integer),
Column('y2', Integer),
)
class Vertex(object):
pass
mapper(Vertex, vertices, properties={
'start': composite(Point, vertices.c.x1, vertices.c.y1),
'end': composite(Point, vertices.c.x2, vertices.c.y2)
})
Any in-place changes to the ``Vertex.start`` or ``Vertex.end`` members
will flag the attribute as "dirty" on the parent object::
>>> from sqlalchemy.orm import Session
>>> sess = Session()
>>> v1 = Vertex(start=Point(3, 4), end=Point(12, 15))
>>> sess.add(v1)
>>> sess.commit()
>>> v1.end.x = 8
>>> assert v1 in sess.dirty
Coercing Mutable Composites
---------------------------
The :meth:`.MutableBase.coerce` method is also supported on composite types.
In the case of :class:`.MutableComposite`, the :meth:`.MutableBase.coerce`
method is only called for attribute set operations, not load operations.
Overriding the :meth:`.MutableBase.coerce` method is essentially equivalent
to using a :func:`.validates` validation routine for all attributes which
make use of the custom composite type::
class Point(MutableComposite):
# other Point methods
# ...
@classmethod
def coerce(cls, key, value):
if isinstance(value, tuple):
value = Point(*value)
elif not isinstance(value, Point):
raise ValueError("tuple or Point expected")
return value
.. versionadded:: 0.7.10,0.8.0b2
Support for the :meth:`.MutableBase.coerce` method in conjunction with
objects of type :class:`.MutableComposite`.
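With a ``coerce`` method like the above in place, an attribute set operation
can accept a plain tuple in lieu of a ``Point`` (illustrative only)::
    v1.end = (8, 10)   # coerced into Point(8, 10) before assignment proceeds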
Supporting Pickling
--------------------
As is the case with :class:`.Mutable`, the :class:`.MutableComposite` helper
class uses a ``weakref.WeakKeyDictionary`` available via the
:meth:`MutableBase._parents` attribute which isn't picklable. If we need to
pickle instances of ``Point`` or its owning class ``Vertex``, we at least need
to define a ``__getstate__`` that doesn't include the ``_parents`` dictionary.
Below we define both a ``__getstate__`` and a ``__setstate__`` that package up
the minimal form of our ``Point`` class::
class Point(MutableComposite):
# ...
def __getstate__(self):
return self.x, self.y
def __setstate__(self, state):
self.x, self.y = state
As with :class:`.Mutable`, the :class:`.MutableComposite` augments the
pickling process of the parent's object-relational state so that the
:meth:`MutableBase._parents` collection is restored to all ``Point`` objects.
"""
from ..orm.attributes import flag_modified
from .. import event, types
from ..orm import mapper, object_mapper, Mapper
from ..util import memoized_property
import weakref
class MutableBase(object):
"""Common base class to :class:`.Mutable`
and :class:`.MutableComposite`.
"""
@memoized_property
def _parents(self):
"""Dictionary of parent object->attribute name on the parent.
This attribute is a so-called "memoized" property. It initializes
itself with a new ``weakref.WeakKeyDictionary`` the first time
it is accessed, returning the same object upon subsequent access.
"""
return weakref.WeakKeyDictionary()
@classmethod
def coerce(cls, key, value):
"""Given a value, coerce it into the target type.
Can be overridden by custom subclasses to coerce incoming
data into a particular type.
By default, raises ``ValueError``.
This method is called in different scenarios depending on if
the parent class is of type :class:`.Mutable` or of type
:class:`.MutableComposite`. In the case of the former, it is called
for both attribute-set operations as well as during ORM loading
operations. For the latter, it is only called during attribute-set
operations; the mechanics of the :func:`.composite` construct
handle coercion during load operations.
:param key: string name of the ORM-mapped attribute being set.
:param value: the incoming value.
:return: the method should return the coerced value, or raise
``ValueError`` if the coercion cannot be completed.
"""
if value is None:
return None
msg = "Attribute '%s' does not accept objects of type %s"
raise ValueError(msg % (key, type(value)))
@classmethod
def _listen_on_attribute(cls, attribute, coerce, parent_cls):
"""Establish this type as a mutation listener for the given
mapped descriptor.
"""
key = attribute.key
if parent_cls is not attribute.class_:
return
# rely on "propagate" here
parent_cls = attribute.class_
def load(state, *args):
"""Listen for objects loaded or refreshed.
Wrap the target data member's value with
``Mutable``.
"""
val = state.dict.get(key, None)
if val is not None:
if coerce:
val = cls.coerce(key, val)
state.dict[key] = val
val._parents[state.obj()] = key
def set(target, value, oldvalue, initiator):
"""Listen for set/replace events on the target
data member.
Establish a weak reference to the parent object
on the incoming value, remove it for the one
outgoing.
"""
if value is oldvalue:
return value
if not isinstance(value, cls):
value = cls.coerce(key, value)
if value is not None:
value._parents[target.obj()] = key
if isinstance(oldvalue, cls):
oldvalue._parents.pop(target.obj(), None)
return value
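        # The two hooks below participate in pickling of the *parent* object's
        # state: "pickle" stashes any mutable value present under this key so
        # that "unpickle" can re-establish its _parents entry after loading.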
def pickle(state, state_dict):
val = state.dict.get(key, None)
if val is not None:
if 'ext.mutable.values' not in state_dict:
state_dict['ext.mutable.values'] = []
state_dict['ext.mutable.values'].append(val)
def unpickle(state, state_dict):
if 'ext.mutable.values' in state_dict:
for val in state_dict['ext.mutable.values']:
val._parents[state.obj()] = key
event.listen(parent_cls, 'load', load,
raw=True, propagate=True)
event.listen(parent_cls, 'refresh', load,
raw=True, propagate=True)
event.listen(attribute, 'set', set,
raw=True, retval=True, propagate=True)
event.listen(parent_cls, 'pickle', pickle,
raw=True, propagate=True)
event.listen(parent_cls, 'unpickle', unpickle,
raw=True, propagate=True)
class Mutable(MutableBase):
"""Mixin that defines transparent propagation of change
events to a parent object.
See the example in :ref:`mutable_scalars` for usage information.
"""
def changed(self):
"""Subclasses should call this method whenever change events occur."""
for parent, key in self._parents.items():
flag_modified(parent, key)
@classmethod
def associate_with_attribute(cls, attribute):
"""Establish this type as a mutation listener for the given
mapped descriptor.
"""
cls._listen_on_attribute(attribute, True, attribute.class_)
@classmethod
def associate_with(cls, sqltype):
"""Associate this wrapper with all future mapped columns
of the given type.
This is a convenience method that calls
``associate_with_attribute`` automatically.
.. warning::
The listeners established by this method are *global*
to all mappers, and are *not* garbage collected. Only use
:meth:`.associate_with` for types that are permanent to an
application, not with ad-hoc types, otherwise this will cause unbounded
growth in memory usage.
"""
def listen_for_type(mapper, class_):
for prop in mapper.column_attrs:
if isinstance(prop.columns[0].type, sqltype):
cls.associate_with_attribute(getattr(class_, prop.key))
event.listen(mapper, 'mapper_configured', listen_for_type)
@classmethod
def as_mutable(cls, sqltype):
"""Associate a SQL type with this mutable Python type.
This establishes listeners that will detect ORM mappings against
the given type, adding mutation event trackers to those mappings.
The type is returned, unconditionally as an instance, so that
:meth:`.as_mutable` can be used inline::
Table('mytable', metadata,
Column('id', Integer, primary_key=True),
Column('data', MyMutableType.as_mutable(PickleType))
)
Note that the returned type is always an instance, even if a class
is given, and that only columns which are declared specifically with
that type instance receive additional instrumentation.
To associate a particular mutable type with all occurrences of a
particular type, use the :meth:`.Mutable.associate_with` classmethod
of the particular :class:`.Mutable` subclass to establish a global
association.
.. warning::
The listeners established by this method are *global*
to all mappers, and are *not* garbage collected. Only use
:meth:`.as_mutable` for types that are permanent to an application,
not with ad-hoc types, otherwise this will cause unbounded growth
in memory usage.
"""
sqltype = types.to_instance(sqltype)
def listen_for_type(mapper, class_):
for prop in mapper.column_attrs:
if prop.columns[0].type is sqltype:
cls.associate_with_attribute(getattr(class_, prop.key))
event.listen(mapper, 'mapper_configured', listen_for_type)
return sqltype
class MutableComposite(MutableBase):
"""Mixin that defines transparent propagation of change
events on a SQLAlchemy "composite" object to its
owning parent or parents.
See the example in :ref:`mutable_composites` for usage information.
"""
def changed(self):
"""Subclasses should call this method whenever change events occur."""
for parent, key in self._parents.items():
prop = object_mapper(parent).get_property(key)
for value, attr_name in zip(
self.__composite_values__(),
prop._attribute_keys):
setattr(parent, attr_name, value)
def _setup_composite_listener():
def _listen_for_type(mapper, class_):
for prop in mapper.iterate_properties:
if (hasattr(prop, 'composite_class') and
isinstance(prop.composite_class, type) and
issubclass(prop.composite_class, MutableComposite)):
prop.composite_class._listen_on_attribute(
getattr(class_, prop.key), False, class_)
if not event.contains(Mapper, "mapper_configured", _listen_for_type):
event.listen(Mapper, 'mapper_configured', _listen_for_type)
_setup_composite_listener()
class MutableDict(Mutable, dict):
"""A dictionary type that implements :class:`.Mutable`.
.. versionadded:: 0.8
"""
def __setitem__(self, key, value):
"""Detect dictionary set events and emit change events."""
dict.__setitem__(self, key, value)
self.changed()
def __delitem__(self, key):
"""Detect dictionary del events and emit change events."""
dict.__delitem__(self, key)
self.changed()
def clear(self):
dict.clear(self)
self.changed()
@classmethod
def coerce(cls, key, value):
"""Convert plain dictionary to MutableDict."""
if not isinstance(value, MutableDict):
if isinstance(value, dict):
return MutableDict(value)
return Mutable.coerce(key, value)
else:
return value
def __getstate__(self):
return dict(self)
def __setstate__(self, state):
self.update(state)
| gpl-3.0 |
FenceAtMHacks/flaskbackend | fence-api/flask/lib/python2.7/site-packages/pip/_vendor/cachecontrol/adapter.py | 87 | 3967 | import functools
from pip._vendor.requests.adapters import HTTPAdapter
from .controller import CacheController
from .cache import DictCache
from .filewrapper import CallbackFileWrapper
class CacheControlAdapter(HTTPAdapter):
invalidating_methods = set(['PUT', 'DELETE'])
def __init__(self, cache=None,
cache_etags=True,
controller_class=None,
serializer=None,
heuristic=None,
*args, **kw):
super(CacheControlAdapter, self).__init__(*args, **kw)
self.cache = cache or DictCache()
self.heuristic = heuristic
controller_factory = controller_class or CacheController
self.controller = controller_factory(
self.cache,
cache_etags=cache_etags,
serializer=serializer,
)
def send(self, request, **kw):
"""
Send a request. Use the request information to see if it
exists in the cache and cache the response if we need to and can.
"""
if request.method == 'GET':
cached_response = self.controller.cached_request(request)
if cached_response:
return self.build_response(request, cached_response, from_cache=True)
# check for etags and add headers if appropriate
request.headers.update(self.controller.conditional_headers(request))
resp = super(CacheControlAdapter, self).send(request, **kw)
return resp
def build_response(self, request, response, from_cache=False):
"""
Build a response by making a request or using the cache.
This will end up calling send and returning a potentially
cached response
"""
if not from_cache and request.method == 'GET':
# apply any expiration heuristics
if response.status == 304:
# We must have sent an ETag request. This could mean
# that we've been expired already or that we simply
# have an etag. In either case, we want to try and
# update the cache if that is the case.
cached_response = self.controller.update_cached_response(
request, response
)
if cached_response is not response:
from_cache = True
# We are done with the server response, read a
# possible response body (compliant servers will
# not return one, but we cannot be 100% sure) and
# release the connection back to the pool.
response.read(decode_content=False)
response.release_conn()
response = cached_response
else:
# Check for any heuristics that might update headers
# before trying to cache.
if self.heuristic:
response = self.heuristic.apply(response)
# Wrap the response file with a wrapper that will cache the
# response when the stream has been consumed.
response._fp = CallbackFileWrapper(
response._fp,
functools.partial(
self.controller.cache_response,
request,
response,
)
)
resp = super(CacheControlAdapter, self).build_response(
request, response
)
# See if we should invalidate the cache.
if request.method in self.invalidating_methods and resp.ok:
cache_url = self.controller.cache_url(request.url)
self.cache.delete(cache_url)
# Give the request a from_cache attr to let people use it
resp.from_cache = from_cache
return resp
def close(self):
self.cache.close()
super(CacheControlAdapter, self).close()
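# Minimal usage sketch; assumes the standalone requests/cachecontrol
# distributions rather than pip's vendored copies of them.
if __name__ == '__main__':  # pragma: no cover - illustrative only
    import requests
    sess = requests.Session()
    adapter = CacheControlAdapter(cache=DictCache())
    sess.mount('http://', adapter)
    sess.mount('https://', adapter)
    # a second GET against the same cacheable URL may be answered from cache
    resp = sess.get('http://example.com/')
    print(resp.from_cache)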
| mit |
puzan/ansible | test/units/modules/cloud/amazon/test_ec2_vpc_nat_gateway.py | 41 | 17149 | import pytest
import unittest
boto3 = pytest.importorskip("boto3")
botocore = pytest.importorskip("botocore")
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
import ansible.modules.cloud.amazon.ec2_vpc_nat_gateway as ng
Options = (
namedtuple(
'Options', [
'connection', 'module_path', 'forks', 'become', 'become_method',
'become_user', 'remote_user', 'private_key_file', 'ssh_common_args',
'sftp_extra_args', 'scp_extra_args', 'ssh_extra_args', 'verbosity',
'check'
]
)
)
# initialize needed objects
variable_manager = VariableManager()
loader = DataLoader()
options = (
Options(
connection='local',
module_path='cloud/amazon',
forks=1, become=None, become_method=None, become_user=None, check=True,
remote_user=None, private_key_file=None, ssh_common_args=None,
sftp_extra_args=None, scp_extra_args=None, ssh_extra_args=None,
verbosity=3
)
)
passwords = dict(vault_pass='')
aws_region = 'us-west-2'
# create inventory and pass to var manager
inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list='localhost')
variable_manager.set_inventory(inventory)
def run(play):
tqm = None
results = None
try:
tqm = TaskQueueManager(
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords,
stdout_callback='default',
)
results = tqm.run(play)
finally:
if tqm is not None:
tqm.cleanup()
return tqm, results
class AnsibleVpcNatGatewayTasks(unittest.TestCase):
def test_create_gateway_using_allocation_id(self):
play_source = dict(
name = "Create new nat gateway with eip allocation-id",
hosts = 'localhost',
gather_facts = 'no',
tasks = [
dict(
action=dict(
module='ec2_vpc_nat_gateway',
args=dict(
subnet_id='subnet-12345678',
allocation_id='eipalloc-12345678',
wait='yes',
region=aws_region,
)
),
register='nat_gateway',
),
dict(
action=dict(
module='debug',
args=dict(
msg='{{nat_gateway}}'
)
)
)
]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm, results = run(play)
self.failUnless(tqm._stats.ok['localhost'] == 2)
self.failUnless(tqm._stats.changed['localhost'] == 1)
def test_create_gateway_using_allocation_id_idempotent(self):
play_source = dict(
name = "Create new nat gateway with eip allocation-id",
hosts = 'localhost',
gather_facts = 'no',
tasks = [
dict(
action=dict(
module='ec2_vpc_nat_gateway',
args=dict(
subnet_id='subnet-123456789',
allocation_id='eipalloc-1234567',
wait='yes',
region=aws_region,
)
),
register='nat_gateway',
),
dict(
action=dict(
module='debug',
args=dict(
msg='{{nat_gateway}}'
)
)
)
]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm, results = run(play)
self.failUnless(tqm._stats.ok['localhost'] == 2)
self.assertFalse('localhost' in tqm._stats.changed)
def test_create_gateway_using_eip_address(self):
play_source = dict(
name = "Create new nat gateway with eip address",
hosts = 'localhost',
gather_facts = 'no',
tasks = [
dict(
action=dict(
module='ec2_vpc_nat_gateway',
args=dict(
subnet_id='subnet-12345678',
eip_address='55.55.55.55',
wait='yes',
region=aws_region,
)
),
register='nat_gateway',
),
dict(
action=dict(
module='debug',
args=dict(
msg='{{nat_gateway}}'
)
)
)
]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm, results = run(play)
self.failUnless(tqm._stats.ok['localhost'] == 2)
self.failUnless(tqm._stats.changed['localhost'] == 1)
def test_create_gateway_using_eip_address_idempotent(self):
play_source = dict(
name = "Create new nat gateway with eip address",
hosts = 'localhost',
gather_facts = 'no',
tasks = [
dict(
action=dict(
module='ec2_vpc_nat_gateway',
args=dict(
subnet_id='subnet-123456789',
eip_address='55.55.55.55',
wait='yes',
region=aws_region,
)
),
register='nat_gateway',
),
dict(
action=dict(
module='debug',
args=dict(
msg='{{nat_gateway}}'
)
)
)
]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm, results = run(play)
self.failUnless(tqm._stats.ok['localhost'] == 2)
self.assertFalse('localhost' in tqm._stats.changed)
def test_create_gateway_in_subnet_only_if_one_does_not_exist_already(self):
play_source = dict(
name = "Create new nat gateway only if one does not exist already",
hosts = 'localhost',
gather_facts = 'no',
tasks = [
dict(
action=dict(
module='ec2_vpc_nat_gateway',
args=dict(
if_exist_do_not_create='yes',
subnet_id='subnet-123456789',
wait='yes',
region=aws_region,
)
),
register='nat_gateway',
),
dict(
action=dict(
module='debug',
args=dict(
msg='{{nat_gateway}}'
)
)
)
]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm, results = run(play)
self.failUnless(tqm._stats.ok['localhost'] == 2)
self.assertFalse('localhost' in tqm._stats.changed)
def test_delete_gateway(self):
play_source = dict(
name = "Delete Nat Gateway",
hosts = 'localhost',
gather_facts = 'no',
tasks = [
dict(
action=dict(
module='ec2_vpc_nat_gateway',
args=dict(
nat_gateway_id='nat-123456789',
state='absent',
wait='yes',
region=aws_region,
)
),
register='nat_gateway',
),
dict(
action=dict(
module='debug',
args=dict(
msg='{{nat_gateway}}'
)
)
)
]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm, results = run(play)
self.failUnless(tqm._stats.ok['localhost'] == 2)
self.assertTrue('localhost' in tqm._stats.changed)
class AnsibleEc2VpcNatGatewayFunctions(unittest.TestCase):
def test_convert_to_lower(self):
example = ng.DRY_RUN_GATEWAY_UNCONVERTED
converted_example = ng.convert_to_lower(example[0])
keys = list(converted_example.keys())
keys.sort()
for i in range(len(keys)):
if i == 0:
self.assertEqual(keys[i], 'create_time')
if i == 1:
self.assertEqual(keys[i], 'nat_gateway_addresses')
gw_addresses_keys = list(converted_example[keys[i]][0].keys())
gw_addresses_keys.sort()
for j in range(len(gw_addresses_keys)):
if j == 0:
self.assertEqual(gw_addresses_keys[j], 'allocation_id')
if j == 1:
self.assertEqual(gw_addresses_keys[j], 'network_interface_id')
if j == 2:
self.assertEqual(gw_addresses_keys[j], 'private_ip')
if j == 3:
self.assertEqual(gw_addresses_keys[j], 'public_ip')
if i == 2:
self.assertEqual(keys[i], 'nat_gateway_id')
if i == 3:
self.assertEqual(keys[i], 'state')
if i == 4:
self.assertEqual(keys[i], 'subnet_id')
if i == 5:
self.assertEqual(keys[i], 'vpc_id')
def test_get_nat_gateways(self):
client = boto3.client('ec2', region_name=aws_region)
success, err_msg, stream = (
ng.get_nat_gateways(client, 'subnet-123456789', check_mode=True)
)
should_return = ng.DRY_RUN_GATEWAYS
self.assertTrue(success)
self.assertEqual(stream, should_return)
def test_get_nat_gateways_no_gateways_found(self):
client = boto3.client('ec2', region_name=aws_region)
success, err_msg, stream = (
ng.get_nat_gateways(client, 'subnet-1234567', check_mode=True)
)
self.assertTrue(success)
self.assertEqual(stream, [])
def test_wait_for_status(self):
client = boto3.client('ec2', region_name=aws_region)
success, err_msg, gws = (
ng.wait_for_status(
client, 5, 'nat-123456789', 'available', check_mode=True
)
)
should_return = ng.DRY_RUN_GATEWAYS[0]
self.assertTrue(success)
self.assertEqual(gws, should_return)
def test_wait_for_status_to_timeout(self):
client = boto3.client('ec2', region_name=aws_region)
success, err_msg, gws = (
ng.wait_for_status(
client, 2, 'nat-12345678', 'available', check_mode=True
)
)
self.assertFalse(success)
self.assertEqual(gws, {})
def test_gateway_in_subnet_exists_with_allocation_id(self):
client = boto3.client('ec2', region_name=aws_region)
gws, err_msg = (
ng.gateway_in_subnet_exists(
client, 'subnet-123456789', 'eipalloc-1234567', check_mode=True
)
)
should_return = ng.DRY_RUN_GATEWAYS
self.assertEqual(gws, should_return)
def test_gateway_in_subnet_exists_with_allocation_id_does_not_exist(self):
client = boto3.client('ec2', region_name=aws_region)
gws, err_msg = (
ng.gateway_in_subnet_exists(
client, 'subnet-123456789', 'eipalloc-123', check_mode=True
)
)
should_return = list()
self.assertEqual(gws, should_return)
def test_gateway_in_subnet_exists_without_allocation_id(self):
client = boto3.client('ec2', region_name=aws_region)
gws, err_msg = (
ng.gateway_in_subnet_exists(
client, 'subnet-123456789', check_mode=True
)
)
should_return = ng.DRY_RUN_GATEWAYS
self.assertEqual(gws, should_return)
def test_get_eip_allocation_id_by_address(self):
client = boto3.client('ec2', region_name=aws_region)
allocation_id, _ = (
ng.get_eip_allocation_id_by_address(
client, '55.55.55.55', check_mode=True
)
)
should_return = 'eipalloc-1234567'
self.assertEqual(allocation_id, should_return)
def test_get_eip_allocation_id_by_address_does_not_exist(self):
client = boto3.client('ec2', region_name=aws_region)
allocation_id, err_msg = (
ng.get_eip_allocation_id_by_address(
client, '52.52.52.52', check_mode=True
)
)
self.assertEqual(err_msg, 'EIP 52.52.52.52 does not exist')
self.assertTrue(allocation_id is None)
def test_allocate_eip_address(self):
client = boto3.client('ec2', region_name=aws_region)
success, err_msg, eip_id = (
ng.allocate_eip_address(
client, check_mode=True
)
)
self.assertTrue(success)
def test_release_address(self):
client = boto3.client('ec2', region_name=aws_region)
success, _ = (
ng.release_address(
client, 'eipalloc-1234567', check_mode=True
)
)
self.assertTrue(success)
def test_create(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, results = (
ng.create(
client, 'subnet-123456', 'eipalloc-1234567', check_mode=True
)
)
self.assertTrue(success)
self.assertTrue(changed)
def test_pre_create(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, results = (
ng.pre_create(
client, 'subnet-123456', check_mode=True
)
)
self.assertTrue(success)
self.assertTrue(changed)
def test_pre_create_idemptotent_with_allocation_id(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, results = (
ng.pre_create(
client, 'subnet-123456789', allocation_id='eipalloc-1234567', check_mode=True
)
)
self.assertTrue(success)
self.assertFalse(changed)
def test_pre_create_idemptotent_with_eip_address(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, results = (
ng.pre_create(
client, 'subnet-123456789', eip_address='55.55.55.55', check_mode=True
)
)
self.assertTrue(success)
self.assertFalse(changed)
def test_pre_create_idemptotent_if_exist_do_not_create(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, results = (
ng.pre_create(
client, 'subnet-123456789', if_exist_do_not_create=True, check_mode=True
)
)
self.assertTrue(success)
self.assertFalse(changed)
def test_delete(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, _ = (
ng.remove(
client, 'nat-123456789', check_mode=True
)
)
self.assertTrue(success)
self.assertTrue(changed)
def test_delete_and_release_ip(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, _ = (
ng.remove(
client, 'nat-123456789', release_eip=True, check_mode=True
)
)
self.assertTrue(success)
self.assertTrue(changed)
def test_delete_if_does_not_exist(self):
client = boto3.client('ec2', region_name=aws_region)
success, changed, err_msg, _ = (
ng.remove(
client, 'nat-12345', check_mode=True
)
)
self.assertFalse(success)
self.assertFalse(changed)
| gpl-3.0 |
adlius/osf.io | admin/nodes/urls.py | 6 | 2100 | from django.conf.urls import url
from admin.nodes import views
app_name = 'admin'
urlpatterns = [
url(r'^$', views.NodeFormView.as_view(),
name='search'),
url(r'^flagged_spam$', views.NodeFlaggedSpamList.as_view(),
name='flagged-spam'),
url(r'^known_spam$', views.NodeKnownSpamList.as_view(),
name='known-spam'),
url(r'^known_ham$', views.NodeKnownHamList.as_view(),
name='known-ham'),
url(r'^(?P<guid>[a-z0-9]+)/$', views.NodeView.as_view(),
name='node'),
url(r'^(?P<guid>[a-z0-9]+)/logs/$', views.AdminNodeLogView.as_view(),
name='node-logs'),
url(r'^registration_list/$', views.RegistrationListView.as_view(),
name='registrations'),
url(r'^stuck_registration_list/$', views.StuckRegistrationListView.as_view(),
name='stuck-registrations'),
url(r'^(?P<guid>[a-z0-9]+)/update_embargo/$',
views.RegistrationUpdateEmbargoView.as_view(), name='update_embargo'),
url(r'^(?P<guid>[a-z0-9]+)/remove/$', views.NodeDeleteView.as_view(),
name='remove'),
url(r'^(?P<guid>[a-z0-9]+)/restore/$', views.NodeDeleteView.as_view(),
name='restore'),
url(r'^(?P<guid>[a-z0-9]+)/confirm_spam/$', views.NodeConfirmSpamView.as_view(),
name='confirm-spam'),
url(r'^(?P<guid>[a-z0-9]+)/confirm_ham/$', views.NodeConfirmHamView.as_view(),
name='confirm-ham'),
url(r'^(?P<guid>[a-z0-9]+)/reindex_share_node/$', views.NodeReindexShare.as_view(),
name='reindex-share-node'),
url(r'^(?P<guid>[a-z0-9]+)/reindex_elastic_node/$', views.NodeReindexElastic.as_view(),
name='reindex-elastic-node'),
url(r'^(?P<guid>[a-z0-9]+)/restart_stuck_registrations/$', views.RestartStuckRegistrationsView.as_view(),
name='restart-stuck-registrations'),
url(r'^(?P<guid>[a-z0-9]+)/remove_stuck_registrations/$', views.RemoveStuckRegistrationsView.as_view(),
name='remove-stuck-registrations'),
url(r'^(?P<guid>[a-z0-9]+)/remove_user/(?P<user_id>[a-z0-9]+)/$',
views.NodeRemoveContributorView.as_view(), name='remove_user'),
]
| apache-2.0 |
camptocamp/odoo | addons/fetchmail/__init__.py | 437 | 1120 | #-*- coding:utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). All Rights Reserved
# [email protected]
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import fetchmail
import res_config
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
ttm/percolation | percolation/rdf/ontology.py | 1 | 7460 | import percolation as P
from .rdflib import NS
a=NS.rdf.type
def percolationSystem():
triples=[
(NS.per.CurrentStatus, a, NS.per.SystemStatus)
    ]
    return triples
def minimumTestOntology(context="minimum_ontology"):
triples=[
(NS.po.FacebookSnapshot,NS.rdfs.subClassOf,NS.po.Snapshot),
(NS.facebook.user,NS.rdfs.range,NS.po.Participant),
(NS.facebook.ego,NS.rdfs.domain,NS.po.FacebookSnapshot),
(NS.facebook.userID,NS.rdfs.subPropertyOf,NS.po.userID),
]
P.add(triples,context=context)
def minimumOntology(context="minimum_ontology"):
triples=rdfsTriples()
if context=="triples":
return triples
P.add(triples,context=context)
def rdfsTriples():
"""Sub Class/Property and range domain assertions"""
triples=[
(NS.po.onlineMetaXMLFile, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.onlineMetaXMLFile, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.FacebookSnapshot,NS.rdfs.subClassOf,NS.po.Snapshot),
(NS.po.onlineMetaXMLFile, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.onlineMetaTTLFile, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.MetaXMLFilename, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.MetaTTLFilename, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.onlineInteractionXMLFile,NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.onlineinteractionTTLFile,NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.interactionXMLFilename, NS.rdfs.subPropertyOf, NS.void.dataDump),
(NS.po.interactionTTLFilename, NS.rdfs.subPropertyOf, NS.void.dataDump),
]
return triples
def participantRDFSStructure(): # participant
triples=[
(NS.po.Participant, NS.rdfs.subClassOf, NS.foaf.Person),
(NS.gmane.Participant,NS.rdfs.subClassOf,NS.po.Participant),
(NS.facebook.Participant,NS.rdfs.subClassOf,NS.po.Participant),
(NS.tw.Participant,NS.rdfs.subClassOf,NS.po.Participant),
]
return triples
def snapshotRDFSStructure():
triples=[
(NS.po.InteractionSnapshot, NS.rdfs.subClassOf, NS.po.Snapshot), # fb, part, tw, irc, gmane, cidade
(NS.po.FriendshipSnapshot, NS.rdfs.subClassOf, NS.po.Snapshot), # fb, part
(NS.po.ReportSnapshot, NS.rdfs.subClassOf, NS.po.Snapshot), # aa
(NS.po.FacebookSnapshot, NS.rdfs.subClassOf, NS.po.Snapshot),
(NS.po.FacebookInteractionSnapshot, NS.rdfs.subClassOf, NS.po.FacebookSnapshot),
(NS.po.FacebookInteractionSnapshot, NS.rdfs.subClassOf, NS.po.InteractionSnapshot),
(NS.po.FacebookFriendshipSnapshot, NS.rdfs.subClassOf, NS.po.FacebookSnapshot),
(NS.po.FacebookFriendshipSnapshot, NS.rdfs.subClassOf, NS.po.FriendshipSnapshot),
(NS.po.TwitterSnapshot, NS.rdfs.subClassOf, NS.po.InteractionSnapshot),
(NS.po.GmaneSnapshot, NS.rdfs.subClassOf, NS.po.InteractionSnapshot),
(NS.po.IRCSnapshot, NS.rdfs.subClassOf, NS.po.InteractionSnapshot),
(NS.po.AASnapshot, NS.rdfs.subClassOf, NS.po.ReportSnapshot),
(NS.po.ParticipaSnapshot, NS.rdfs.subClassOf, NS.po.CompleteSnapshot),
(NS.po.CidadeDemocraticaSnapshot, NS.rdfs.subClassOf, NS.po.InteractionSnapshot),
]
return triples
def idRDFSStructure():
# User ID somente, na msg a ID eh a URI pois nao diferem em listas/grupos diferentes
# Mas IDs podem existir para grupos e pessoas, pois se repetem em datasets diferentes
triples=[
(NS.gmane.gmaneID, NS.rdfs.subPropertyOf, NS.po.auxID),
(NS.facebook.groupID, NS.rdfs.subPropertyOf, NS.po.auxID),
(NS.facebook.ID, NS.rdfs.subPropertyOf,NS.po.ID),
(NS.po.numericID, NS.rdfs.subPropertyOf,NS.po.ID),
(NS.po.stringID, NS.rdfs.subPropertyOf,NS.po.ID),
(NS.po.auxID, NS.rdfs.subPropertyOf,NS.po.ID),
(NS.facebook.numericID,NS.rdfs.subPropertyOf,NS.facebook.ID),
(NS.facebook.numericID,NS.rdfs.subPropertyOf,NS.po.numericID),
(NS.facebook.stringID, NS.rdfs.subPropertyOf,NS.facebook.ID),
(NS.facebook.stringID, NS.rdfs.subPropertyOf,NS.po.stringID),
(NS.gmane.stringID,NS.rdfs.subPropertyOf,NS.po.stringID),
(NS.gmane.email, NS.rdfs.subPropertyOf,NS.gmane.stringID),
(NS.tw.stringID,NS.rdfs.subPropertyOf,NS.po.stringID),
(NS.tw.email, NS.rdfs.subPropertyOf,NS.tw.stringID),
]
return triples
def fileRDFSStructure():
triples=[
(NS.po.interactionXMLFile, NS.rdfs.subPropertyOf,NS.po.defaultXML), # fb
(NS.po.rdfFile , NS.rdfs.subPropertyOf,NS.po.defaultXML), # twitter, gmane
(NS.po.friendshipXMLFile , NS.rdfs.subPropertyOf,NS.po.defaultXML), # fb
]
return triples
def graphRDFStructure():
triples=[
(NS.po.MetaNamedGraph, NS.rdfs.subClassOf,NS.po.NamedGraph),
(NS.po.TranslationNamedGraph, NS.rdfs.subClassOf, NS.po.NamedGraph),
(NS.po.metaGraph , NS.rdfs.subPropertyOf,NS.po.namedGraph), # fb
(NS.po.metaGraph , NS.rdfs.range,NS.po.MetaNamedGraph), # fb
(NS.po.translationGraph , NS.rdfs.subPropertyOf,NS.po.namedGraph), # fb
(NS.po.translationGraph , NS.rdfs.range,NS.po.TranslationNamedGraph), # fb
]
return triples
def messageRDFSStructure():
triples=[
(NS.gmane.Message,NS.rdfs.subClassOf,NS.po.Message),
(NS.tw.Message,NS.rdfs.subClassOf,NS.po.Message),
(NS.po.Message,NS.rdfs.subClassOf,NS.po.InteractionInstance),
    ]
    return triples
def interactionRDFSStructure():
triples=[
(NS.facebook.Interaction,NS.rdfs.subClassOf,NS.po.InteractionInstance),
(NS.gmane.Response,NS.rdfs.subClassOf,NS.po.InteractionInstance),
(NS.gmane.Retweet,NS.rdfs.subClassOf,NS.po.InteractionInstance),
(NS.facebook.nInterations, NS.rdfs.subPropertyOf,NS.facebook.nRelations),
]
return triples
def friendshipRDFSStructure():
triples=[
(NS.facebook.friendOf,NS.rdfs.subPropertyOf,NS.po.friendOf),
(NS.participa.friendOf,NS.rdfs.subPropertyOf,NS.po.friendOf),
(NS.facebook.nFriendships, NS.rdfs.subPropertyOf,NS.facebook.nRelations),
]
return triples
def friendshipOWLStructure():
triples=[
(NS.facebook.friendOf,a,NS.owl.SymmetricProperty),
]
return triples
def participantRelationRDFStructure():
triples=[
(NS.facebook.nRelations, NS.rdfs.subPropertyOf,NS.po.nRelations),
]
triples+=friendshipRDFSStructure()+interactionRDFSStructure()
return triples
def anonymizationRDFSStructure():
triples=[
(NS.facebook.anonymized, NS.rdfs.subPropertyOf,NS.po.anonymized),
(NS.facebook.friendshipsAnonymized, NS.rdfs.subPropertyOf,NS.facebook.anonymized),
(NS.facebook.interactionssAnonymized, NS.rdfs.subPropertyOf,NS.facebook.anonymized),
]
return triples
def todo():
todo="""type of relation retrievement: 1, 2 or 3
labels equivalence: irc, etc
date equivalence
interaction/relation uris equivalence
textual content equivalence
if text is available"""
return todo
| gpl-3.0 |
debsankha/bedtime-programming | ls222/visual-lotka.py | 1 | 5120 | #!/usr/bin/env python
from math import *
import thread
import random
import time
import pygtk
pygtk.require("2.0")
import gtk
import gtk.glade
import commands
import matplotlib.pyplot
import visual  # VPython; required by the animation path below (visual.sphere, visual.color)
class rodent:
def __init__(self):
self.time_from_last_childbirth=0
class felix:
def __init__(self):
self.size=0
self.is_virgin=1
self.reproduction_gap=0
self.time_from_last_childbirth=0
self.age=0
# print 'painted'
class gui_display:
def __init__(self):
self.gladefile='./lvshort.glade'
self.wTree = gtk.glade.XML(self.gladefile)
dic={"on_start_clicked":self.dynamics,"on_mainwin_destroy":gtk.main_quit}
self.wTree.signal_autoconnect(dic)
self.wTree.get_widget("mainwin").show()
self.wTree.get_widget("image").set_from_file("./start.png")
def visualize(self,catn,mousen):
# while True:
num=40
size=10
catno=catn*num**2/(catn+mousen)
cats=random.sample(range(num**2),catno)
for i in range(num**2):
if i in cats:
self.dic[i].color=visual.color.red
else :
self.dic[i].color=visual.color.green
def dynamics(self,*args,**kwargs):
self.wTree.get_widget("image").set_from_file("./wait.png")
print 'dynamics started'
mouse_size=20 #ind parameter
cat_mature_size=60 #ind parameter
# catch_rate=5*10**-4 #parameter
# cat_efficiency=0.8 #parameter
# a=0.2 #will get from slider
# c=0.2 #will get from slider
cat_catch_rate=self.wTree.get_widget("catchrate").get_value()*10**-4 #parameter
cat_efficiency=self.wTree.get_widget("efficiency").get_value() #parameter
a=self.wTree.get_widget("a").get_value() #parameter
c=self.wTree.get_widget("c").get_value() #parameter
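        # Parameter meanings, as used by the simulation loop below:
        #   a              - each mouse reproduces once every 1/a time units
        #   c              - scales each cat's per-step death probability (rises with cat population)
        #   cat_catch_rate - rate coefficient for mice caught per cat per time step
        #   cat_efficiency - fraction of a caught mouse's size gained by the cat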
mouse_no=1000
cat_no=1000
t=0
tmax=200
dt=1
timeli=[]
miceli=[]
catli=[]
mice=[rodent() for i in range(mouse_no)]
cats=[felix() for i in range(cat_no)]
catn=len(cats)
mousen=len(mice)
self.dic={}
num=40
size=10
catno=catn*num**2/(catn+mousen)
disp_cats=random.sample(range(num**2),catno)
if self.wTree.get_widget("anim").get_active()==1:
print 'yay!'
for i in range(num**2):
coords=((i%num)*size*2-num*size,(i/num)*size*2-num*size)
if i in disp_cats:
self.dic[i]=visual.sphere(pos=coords,radius=size,color=visual.color.red)
else :
self.dic[i]=visual.sphere(pos=coords,radius=size,color=visual.color.green)
print self.dic
catn=len(cats)
mousen=len(mice)
data=open('tempdata.dat','w')
timestart=time.time()
while (len(mice)>0 or len(cats)>0) and t<tmax and (time.time()-timestart)<60:
# print time.time()-timestart
catn=len(cats)
mousen=len(mice)
if self.wTree.get_widget("anim").get_active()==1:
print 'yay!'
# self.visualize(catn,mousen)
thread.start_new_thread(self.visualize,(catn,mousen))
for mouse in mice:
if mouse.time_from_last_childbirth>=1/a:
mouse.time_from_last_childbirth=0
mice.append(rodent())
mouse.time_from_last_childbirth+=dt
ind=0
while ind<len(cats):
cat=cats[ind]
cat.age+=dt
num=cat_catch_rate*dt*len(mice)
for i in range(int(num)):
caught=random.randint(0,len(mice)-1)
cat.size+=mouse_size*cat_efficiency #size increases
mice.pop(caught)
if (num-int(num))>random.uniform(0,1):
caught=random.randint(0,len(mice)-1)
cat.size+=mouse_size*cat_efficiency #size increases
mice.pop(caught)
if cat.size>cat_mature_size:
if cat.is_virgin:
cat.is_virgin=0
cat.reproduction_gap=cat.age
cats.append(felix())
else :
if cat.time_from_last_childbirth>cat.reproduction_gap:
cats.append(felix())
cat.time_from_last_childbirth=0
if cat.is_virgin==0:
cat.time_from_last_childbirth+=dt
if len(cats)>0:
if c*dt*2*atan(0.05*len(cats))/pi>random.uniform(0,1):
cats.pop(ind)
else :
ind+=1
else :
ind+=1
timeli.append(t)
miceli.append(len(mice))
catli.append(len(cats))
print t,'\t',len(mice),'\t',len(cats)
print >> data, t,'\t',len(mice),'\t',len(cats)
t+=dt
data.close()
upper_limit=1.2*len(mice)
pltfile=open('lv.plt','w')
print >> pltfile,"""se te png
se o "/tmp/lv.png"
unse ke
#se yrange [0:%f]
se xl "Time"
se yl "Number of Prey/Predator"
p 'tempdata.dat' u 1:2 w l,'tempdata.dat' u 1:3 w l
"""%upper_limit
pltfile.close()
commands.getoutput('gnuplot lv.plt')
self.wTree.get_widget("image").set_from_file("/tmp/lv.png")
print 'dynamics ended'
reload(matplotlib.pyplot)
matplotlib.pyplot.plot(timeli,miceli,'g-',timeli,catli,'r-')
matplotlib.pyplot.xlabel("Time")
matplotlib.pyplot.ylabel("Number of mice and cats")
matplotlib.pyplot.show()
gui=gui_display()
gtk.main()
#dynamics()
#import matplotlib.pyplot as plt
#plt.plot(timeli,miceli,'go',timeli,catli,'ro')
#plt.show()
| gpl-3.0 |
freedomtan/tensorflow | tensorflow/python/ops/confusion_matrix.py | 14 | 10762 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Confusion matrix related utilities."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.util import deprecation
from tensorflow.python.util import dispatch
from tensorflow.python.util.tf_export import tf_export
def remove_squeezable_dimensions(
labels, predictions, expected_rank_diff=0, name=None):
"""Squeeze last dim if ranks differ from expected by exactly 1.
In the common case where we expect shapes to match, `expected_rank_diff`
defaults to 0, and we squeeze the last dimension of the larger rank if they
differ by 1.
But, for example, if `labels` contains class IDs and `predictions` contains 1
probability per class, we expect `predictions` to have 1 more dimension than
`labels`, so `expected_rank_diff` would be 1. In this case, we'd squeeze
`labels` if `rank(predictions) - rank(labels) == 0`, and
`predictions` if `rank(predictions) - rank(labels) == 2`.
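  For example (illustrative shapes only): with the default
  `expected_rank_diff=0`, `labels` of shape `[batch, 1]` and `predictions` of
  shape `[batch]` result in `labels` being squeezed to shape `[batch]`, while
  `predictions` is returned unchanged.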
This will use static shape if available. Otherwise, it will add graph
operations, which could result in a performance hit.
Args:
labels: Label values, a `Tensor` whose dimensions match `predictions`.
predictions: Predicted values, a `Tensor` of arbitrary dimensions.
expected_rank_diff: Expected result of `rank(predictions) - rank(labels)`.
name: Name of the op.
Returns:
Tuple of `labels` and `predictions`, possibly with last dim squeezed.
"""
with ops.name_scope(name, 'remove_squeezable_dimensions',
[labels, predictions]):
predictions = ops.convert_to_tensor(predictions)
labels = ops.convert_to_tensor(labels)
predictions_shape = predictions.get_shape()
predictions_rank = predictions_shape.ndims
labels_shape = labels.get_shape()
labels_rank = labels_shape.ndims
if (labels_rank is not None) and (predictions_rank is not None):
# Use static rank.
rank_diff = predictions_rank - labels_rank
if (rank_diff == expected_rank_diff + 1 and
predictions_shape.dims[-1].is_compatible_with(1)):
predictions = array_ops.squeeze(predictions, [-1])
elif (rank_diff == expected_rank_diff - 1 and
labels_shape.dims[-1].is_compatible_with(1)):
labels = array_ops.squeeze(labels, [-1])
return labels, predictions
# Use dynamic rank.
rank_diff = array_ops.rank(predictions) - array_ops.rank(labels)
if (predictions_rank is None) or (
predictions_shape.dims[-1].is_compatible_with(1)):
predictions = control_flow_ops.cond(
math_ops.equal(expected_rank_diff + 1, rank_diff),
lambda: array_ops.squeeze(predictions, [-1]),
lambda: predictions)
if (labels_rank is None) or (
labels_shape.dims[-1].is_compatible_with(1)):
labels = control_flow_ops.cond(
math_ops.equal(expected_rank_diff - 1, rank_diff),
lambda: array_ops.squeeze(labels, [-1]),
lambda: labels)
return labels, predictions
@tf_export('math.confusion_matrix', v1=[])
@dispatch.add_dispatch_support
def confusion_matrix(labels,
predictions,
num_classes=None,
weights=None,
dtype=dtypes.int32,
name=None):
"""Computes the confusion matrix from predictions and labels.
The matrix columns represent the prediction labels and the rows represent the
real labels. The confusion matrix is always a 2-D array of shape `[n, n]`,
where `n` is the number of valid labels for a given classification task. Both
prediction and labels must be 1-D arrays of the same shape in order for this
function to work.
If `num_classes` is `None`, then `num_classes` will be set to one plus the
maximum value in either predictions or labels. Class labels are expected to
start at 0. For example, if `num_classes` is 3, then the possible labels
would be `[0, 1, 2]`.
If `weights` is not `None`, then each prediction contributes its
corresponding weight to the total value of the confusion matrix cell.
For example:
```python
tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>
[[0 0 0 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 0 0 1]]
```
Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`,
resulting in a 5x5 confusion matrix.
Args:
labels: 1-D `Tensor` of real labels for the classification task.
predictions: 1-D `Tensor` of predictions for a given classification.
num_classes: The possible number of labels the classification task can
have. If this value is not provided, it will be calculated
using both predictions and labels array.
weights: An optional `Tensor` whose shape matches `predictions`.
dtype: Data type of the confusion matrix.
name: Scope name.
Returns:
A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion
matrix, where `n` is the number of possible labels in the classification
task.
Raises:
ValueError: If both predictions and labels are not 1-D vectors and have
mismatched shapes, or if `weights` is not `None` and its shape doesn't
match `predictions`.
"""
with ops.name_scope(name, 'confusion_matrix',
(predictions, labels, num_classes, weights)) as name:
labels, predictions = remove_squeezable_dimensions(
ops.convert_to_tensor(labels, name='labels'),
ops.convert_to_tensor(
predictions, name='predictions'))
predictions = math_ops.cast(predictions, dtypes.int64)
labels = math_ops.cast(labels, dtypes.int64)
# Sanity checks - underflow or overflow can cause memory corruption.
labels = control_flow_ops.with_dependencies(
[check_ops.assert_non_negative(
labels, message='`labels` contains negative values')],
labels)
predictions = control_flow_ops.with_dependencies(
[check_ops.assert_non_negative(
predictions, message='`predictions` contains negative values')],
predictions)
if num_classes is None:
num_classes = math_ops.maximum(math_ops.reduce_max(predictions),
math_ops.reduce_max(labels)) + 1
else:
num_classes_int64 = math_ops.cast(num_classes, dtypes.int64)
labels = control_flow_ops.with_dependencies(
[check_ops.assert_less(
labels, num_classes_int64, message='`labels` out of bound')],
labels)
predictions = control_flow_ops.with_dependencies(
[check_ops.assert_less(
predictions, num_classes_int64,
message='`predictions` out of bound')],
predictions)
if weights is not None:
weights = ops.convert_to_tensor(weights, name='weights')
predictions.get_shape().assert_is_compatible_with(weights.get_shape())
weights = math_ops.cast(weights, dtype)
shape = array_ops.stack([num_classes, num_classes])
indices = array_ops.stack([labels, predictions], axis=1)
values = (array_ops.ones_like(predictions, dtype)
if weights is None else weights)
return array_ops.scatter_nd(
indices=indices,
updates=values,
shape=math_ops.cast(shape, dtypes.int64))
@tf_export(v1=['math.confusion_matrix', 'confusion_matrix'])
@dispatch.add_dispatch_support
@deprecation.deprecated_endpoints('confusion_matrix', 'train.confusion_matrix')
def confusion_matrix_v1(labels,
predictions,
num_classes=None,
dtype=dtypes.int32,
name=None,
weights=None):
"""Computes the confusion matrix from predictions and labels.
The matrix columns represent the prediction labels and the rows represent the
real labels. The confusion matrix is always a 2-D array of shape `[n, n]`,
where `n` is the number of valid labels for a given classification task. Both
prediction and labels must be 1-D arrays of the same shape in order for this
function to work.
If `num_classes` is `None`, then `num_classes` will be set to one plus the
maximum value in either predictions or labels. Class labels are expected to
start at 0. For example, if `num_classes` is 3, then the possible labels
would be `[0, 1, 2]`.
If `weights` is not `None`, then each prediction contributes its
corresponding weight to the total value of the confusion matrix cell.
For example:
```python
tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>
[[0 0 0 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 0 0 1]]
```
Note that the possible labels are assumed to be `[0, 1, 2, 3, 4]`,
resulting in a 5x5 confusion matrix.
Args:
labels: 1-D `Tensor` of real labels for the classification task.
predictions: 1-D `Tensor` of predictions for a given classification.
num_classes: The possible number of labels the classification task can have.
If this value is not provided, it will be calculated using both
      predictions and labels arrays.
dtype: Data type of the confusion matrix.
name: Scope name.
weights: An optional `Tensor` whose shape matches `predictions`.
Returns:
A `Tensor` of type `dtype` with shape `[n, n]` representing the confusion
matrix, where `n` is the number of possible labels in the classification
task.
Raises:
    ValueError: If `predictions` and `labels` are not 1-D vectors of the same
      shape, or if `weights` is not `None` and its shape doesn't match
      `predictions`.
"""
return confusion_matrix(labels, predictions, num_classes, weights, dtype,
name)
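# Illustrative usage sketch (not part of the original TensorFlow module),
# assuming TensorFlow is importable and executing eagerly; the values mirror
# the docstring example above.
def _confusion_matrix_usage_sketch():
  import tensorflow as tf
  labels = [1, 2, 4]
  predictions = [2, 2, 4]
  # Unweighted: each (label, prediction) pair adds 1 to its cell; the result
  # is the 5x5 matrix shown in the docstring.
  unweighted = tf.math.confusion_matrix(labels, predictions)
  # Weighted: each pair contributes its weight instead of 1.
  weighted = tf.math.confusion_matrix(
      labels, predictions, weights=[0.5, 0.5, 2.0], dtype=tf.float32)
  return unweighted, weighted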
| apache-2.0 |
kaiyou/docker-py | tests/unit/errors_test.py | 2 | 3097 | import unittest
import requests
from docker.errors import (APIError, DockerException,
create_unexpected_kwargs_error)
class APIErrorTest(unittest.TestCase):
def test_api_error_is_caught_by_dockerexception(self):
try:
raise APIError("this should be caught by DockerException")
except DockerException:
pass
def test_status_code_200(self):
"""The status_code property is present with 200 response."""
resp = requests.Response()
resp.status_code = 200
err = APIError('', response=resp)
assert err.status_code == 200
def test_status_code_400(self):
"""The status_code property is present with 400 response."""
resp = requests.Response()
resp.status_code = 400
err = APIError('', response=resp)
assert err.status_code == 400
def test_status_code_500(self):
"""The status_code property is present with 500 response."""
resp = requests.Response()
resp.status_code = 500
err = APIError('', response=resp)
assert err.status_code == 500
def test_is_server_error_200(self):
"""Report not server error on 200 response."""
resp = requests.Response()
resp.status_code = 200
err = APIError('', response=resp)
assert err.is_server_error() is False
def test_is_server_error_300(self):
"""Report not server error on 300 response."""
resp = requests.Response()
resp.status_code = 300
err = APIError('', response=resp)
assert err.is_server_error() is False
def test_is_server_error_400(self):
"""Report not server error on 400 response."""
resp = requests.Response()
resp.status_code = 400
err = APIError('', response=resp)
assert err.is_server_error() is False
def test_is_server_error_500(self):
"""Report server error on 500 response."""
resp = requests.Response()
resp.status_code = 500
err = APIError('', response=resp)
assert err.is_server_error() is True
def test_is_client_error_500(self):
"""Report not client error on 500 response."""
resp = requests.Response()
resp.status_code = 500
err = APIError('', response=resp)
assert err.is_client_error() is False
def test_is_client_error_400(self):
"""Report client error on 400 response."""
resp = requests.Response()
resp.status_code = 400
err = APIError('', response=resp)
assert err.is_client_error() is True
class CreateUnexpectedKwargsErrorTest(unittest.TestCase):
def test_create_unexpected_kwargs_error_single(self):
e = create_unexpected_kwargs_error('f', {'foo': 'bar'})
assert str(e) == "f() got an unexpected keyword argument 'foo'"
def test_create_unexpected_kwargs_error_multiple(self):
e = create_unexpected_kwargs_error('f', {'foo': 'bar', 'baz': 'bosh'})
assert str(e) == "f() got unexpected keyword arguments 'baz', 'foo'"
| apache-2.0 |
blaze/distributed | distributed/protocol/tests/test_collection_cuda.py | 1 | 2448 | import pytest
from distributed.protocol import serialize, deserialize
from dask.dataframe.utils import assert_eq
import pandas as pd
@pytest.mark.parametrize("collection", [tuple, dict])
@pytest.mark.parametrize("y,y_serializer", [(50, "cuda"), (None, "pickle")])
def test_serialize_cupy(collection, y, y_serializer):
cupy = pytest.importorskip("cupy")
x = cupy.arange(100)
if y is not None:
y = cupy.arange(y)
if issubclass(collection, dict):
header, frames = serialize(
{"x": x, "y": y}, serializers=("cuda", "dask", "pickle")
)
else:
header, frames = serialize((x, y), serializers=("cuda", "dask", "pickle"))
t = deserialize(header, frames, deserializers=("cuda", "dask", "pickle", "error"))
assert header["is-collection"] is True
sub_headers = header["sub-headers"]
assert sub_headers[0]["serializer"] == "cuda"
assert sub_headers[1]["serializer"] == y_serializer
assert isinstance(t, collection)
assert ((t["x"] if isinstance(t, dict) else t[0]) == x).all()
if y is None:
assert (t["y"] if isinstance(t, dict) else t[1]) is None
else:
assert ((t["y"] if isinstance(t, dict) else t[1]) == y).all()
@pytest.mark.parametrize("collection", [tuple, dict])
@pytest.mark.parametrize(
"df2,df2_serializer",
[(pd.DataFrame({"C": [3, 4, 5], "D": [2.5, 3.5, 4.5]}), "cuda"), (None, "pickle")],
)
def test_serialize_pandas_pandas(collection, df2, df2_serializer):
cudf = pytest.importorskip("cudf")
df1 = cudf.DataFrame({"A": [1, 2, None], "B": [1.0, 2.0, None]})
if df2 is not None:
df2 = cudf.from_pandas(df2)
if issubclass(collection, dict):
header, frames = serialize(
{"df1": df1, "df2": df2}, serializers=("cuda", "dask", "pickle")
)
else:
header, frames = serialize((df1, df2), serializers=("cuda", "dask", "pickle"))
t = deserialize(header, frames, deserializers=("cuda", "dask", "pickle"))
assert header["is-collection"] is True
sub_headers = header["sub-headers"]
assert sub_headers[0]["serializer"] == "cuda"
assert sub_headers[1]["serializer"] == df2_serializer
assert isinstance(t, collection)
assert_eq(t["df1"] if isinstance(t, dict) else t[0], df1)
if df2 is None:
assert (t["df2"] if isinstance(t, dict) else t[1]) is None
else:
assert_eq(t["df2"] if isinstance(t, dict) else t[1], df2)
| bsd-3-clause |
LTD-Beget/tornado | tornado/util.py | 2 | 13519 | """Miscellaneous utility functions and classes.
This module is used internally by Tornado. It is not necessarily expected
that the functions and classes defined here will be useful to other
applications, but they are documented here in case they are.
The one public-facing part of this module is the `Configurable` class
and its `~Configurable.configure` method, which becomes a part of the
interface of its subclasses, including `.AsyncHTTPClient`, `.IOLoop`,
and `.Resolver`.
"""
from __future__ import absolute_import, division, print_function, with_statement
import array
import os
import sys
import zlib
try:
xrange # py2
except NameError:
xrange = range # py3
# inspect.getargspec() raises DeprecationWarnings in Python 3.5.
# The two functions have compatible interfaces for the parts we need.
try:
from inspect import getfullargspec as getargspec # py3
except ImportError:
from inspect import getargspec # py2
class ObjectDict(dict):
"""Makes a dictionary behave like an object, with attribute-style access.
"""
def __getattr__(self, name):
try:
return self[name]
except KeyError:
raise AttributeError(name)
def __setattr__(self, name, value):
self[name] = value
class GzipDecompressor(object):
"""Streaming gzip decompressor.
The interface is like that of `zlib.decompressobj` (without some of the
    optional arguments), but it understands gzip headers and checksums.
"""
def __init__(self):
# Magic parameter makes zlib module understand gzip header
# http://stackoverflow.com/questions/1838699/how-can-i-decompress-a-gzip-stream-with-zlib
# This works on cpython and pypy, but not jython.
self.decompressobj = zlib.decompressobj(16 + zlib.MAX_WBITS)
def decompress(self, value, max_length=None):
"""Decompress a chunk, returning newly-available data.
Some data may be buffered for later processing; `flush` must
be called when there is no more input data to ensure that
all data was processed.
If ``max_length`` is given, some input data may be left over
in ``unconsumed_tail``; you must retrieve this value and pass
it back to a future call to `decompress` if it is not empty.
"""
        return self.decompressobj.decompress(value, max_length or 0)
@property
def unconsumed_tail(self):
"""Returns the unconsumed portion left over
"""
return self.decompressobj.unconsumed_tail
def flush(self):
"""Return any remaining buffered data not yet returned by decompress.
Also checks for errors such as truncated input.
No other methods may be called on this object after `flush`.
"""
return self.decompressobj.flush()
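# Illustrative usage sketch (not part of the original Tornado module): feeds a
# gzip payload through GzipDecompressor in bounded steps, re-feeding
# ``unconsumed_tail`` as the docstrings above describe, and flushes at the end.
def _gzip_decompressor_sketch():
    import gzip
    import io
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as f:
        f.write(b"hello " * 1000)
    compressed = buf.getvalue()
    decompressor = GzipDecompressor()
    output = []
    data = compressed
    while data:
        # Cap each step at 64 decompressed bytes; any input not yet processed
        # shows up in unconsumed_tail and must be passed back in.
        output.append(decompressor.decompress(data, 64))
        data = decompressor.unconsumed_tail
    output.append(decompressor.flush())
    return b"".join(output)  # == b"hello " * 1000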
# Fake unicode literal support: Python 3.2 doesn't have the u'' marker for
# literal strings, and alternative solutions like "from __future__ import
# unicode_literals" have other problems (see PEP 414). u() can be applied
# to ascii strings that include \u escapes (but they must not contain
# literal non-ascii characters).
if not isinstance(b'', type('')):
def u(s):
return s
unicode_type = str
basestring_type = str
else:
def u(s):
return s.decode('unicode_escape')
# These names don't exist in py3, so use noqa comments to disable
# warnings in flake8.
unicode_type = unicode # noqa
basestring_type = basestring # noqa
def import_object(name):
"""Imports an object by name.
import_object('x') is equivalent to 'import x'.
import_object('x.y.z') is equivalent to 'from x.y import z'.
>>> import tornado.escape
>>> import_object('tornado.escape') is tornado.escape
True
>>> import_object('tornado.escape.utf8') is tornado.escape.utf8
True
>>> import_object('tornado') is tornado
True
>>> import_object('tornado.missing_module')
Traceback (most recent call last):
...
ImportError: No module named missing_module
"""
if isinstance(name, unicode_type) and str is not unicode_type:
# On python 2 a byte string is required.
name = name.encode('utf-8')
if name.count('.') == 0:
return __import__(name, None, None)
parts = name.split('.')
obj = __import__('.'.join(parts[:-1]), None, None, [parts[-1]], 0)
try:
return getattr(obj, parts[-1])
except AttributeError:
raise ImportError("No module named %s" % parts[-1])
# Deprecated alias that was used before we dropped py25 support.
# Left here in case anyone outside Tornado is using it.
bytes_type = bytes
if sys.version_info > (3,):
exec("""
def raise_exc_info(exc_info):
raise exc_info[1].with_traceback(exc_info[2])
def exec_in(code, glob, loc=None):
if isinstance(code, str):
code = compile(code, '<string>', 'exec', dont_inherit=True)
exec(code, glob, loc)
""")
else:
exec("""
def raise_exc_info(exc_info):
raise exc_info[0], exc_info[1], exc_info[2]
def exec_in(code, glob, loc=None):
if isinstance(code, basestring):
# exec(string) inherits the caller's future imports; compile
# the string first to prevent that.
code = compile(code, '<string>', 'exec', dont_inherit=True)
exec code in glob, loc
""")
def errno_from_exception(e):
"""Provides the errno from an Exception object.
    There are cases where the ``errno`` attribute is not set, so we pull
    the errno out of ``args``; but if someone instantiates an Exception
    without any args, indexing into ``args`` would fail. This function
    abstracts all of that behavior to give you a safe way to get the
    errno.
"""
if hasattr(e, 'errno'):
return e.errno
elif e.args:
return e.args[0]
else:
return None
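# Illustrative usage sketch (not part of the original Tornado module): the
# typical pattern for errno_from_exception inside an ``except`` block, assuming
# a POSIX-style ``errno`` module is available.
def _errno_from_exception_sketch(path):
    import errno
    try:
        with open(path) as f:
            return f.read()
    except IOError as e:
        if errno_from_exception(e) == errno.ENOENT:
            return None  # a missing file is expected; anything else re-raises
        raise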
class Configurable(object):
"""Base class for configurable interfaces.
A configurable interface is an (abstract) class whose constructor
acts as a factory function for one of its implementation subclasses.
The implementation subclass as well as optional keyword arguments to
its initializer can be set globally at runtime with `configure`.
By using the constructor as the factory method, the interface
looks like a normal class, `isinstance` works as usual, etc. This
pattern is most useful when the choice of implementation is likely
to be a global decision (e.g. when `~select.epoll` is available,
always use it instead of `~select.select`), or when a
previously-monolithic class has been split into specialized
subclasses.
Configurable subclasses must define the class methods
`configurable_base` and `configurable_default`, and use the instance
method `initialize` instead of ``__init__``.
"""
__impl_class = None
__impl_kwargs = None
def __new__(cls, *args, **kwargs):
base = cls.configurable_base()
init_kwargs = {}
if cls is base:
impl = cls.configured_class()
if base.__impl_kwargs:
init_kwargs.update(base.__impl_kwargs)
else:
impl = cls
init_kwargs.update(kwargs)
instance = super(Configurable, cls).__new__(impl)
# initialize vs __init__ chosen for compatibility with AsyncHTTPClient
# singleton magic. If we get rid of that we can switch to __init__
# here too.
instance.initialize(*args, **init_kwargs)
return instance
@classmethod
def configurable_base(cls):
"""Returns the base class of a configurable hierarchy.
        This will normally return the class in which it is defined
        (which is *not* necessarily the same as the ``cls`` classmethod
        parameter).
"""
raise NotImplementedError()
@classmethod
def configurable_default(cls):
"""Returns the implementation class to be used if none is configured."""
raise NotImplementedError()
def initialize(self):
"""Initialize a `Configurable` subclass instance.
Configurable classes should use `initialize` instead of ``__init__``.
.. versionchanged:: 4.2
Now accepts positional arguments in addition to keyword arguments.
"""
@classmethod
def configure(cls, impl, **kwargs):
"""Sets the class to use when the base class is instantiated.
Keyword arguments will be saved and added to the arguments passed
to the constructor. This can be used to set global defaults for
some parameters.
"""
base = cls.configurable_base()
if isinstance(impl, (unicode_type, bytes)):
impl = import_object(impl)
if impl is not None and not issubclass(impl, cls):
raise ValueError("Invalid subclass of %s" % cls)
base.__impl_class = impl
base.__impl_kwargs = kwargs
@classmethod
def configured_class(cls):
"""Returns the currently configured class."""
base = cls.configurable_base()
if cls.__impl_class is None:
base.__impl_class = cls.configurable_default()
return base.__impl_class
@classmethod
def _save_configuration(cls):
base = cls.configurable_base()
return (base.__impl_class, base.__impl_kwargs)
@classmethod
def _restore_configuration(cls, saved):
base = cls.configurable_base()
base.__impl_class = saved[0]
base.__impl_kwargs = saved[1]
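# Illustrative sketch (not part of the original Tornado module): a minimal
# Configurable hierarchy. ``ExampleTransport`` is the configurable base, so
# instantiating it yields whichever implementation has been configured, or the
# default otherwise. All names here are made up for the sketch.
class ExampleTransport(Configurable):
    @classmethod
    def configurable_base(cls):
        return ExampleTransport
    @classmethod
    def configurable_default(cls):
        return PlainExampleTransport
    def initialize(self, timeout=10):
        self.timeout = timeout
class PlainExampleTransport(ExampleTransport):
    pass
class SecureExampleTransport(ExampleTransport):
    pass
# ExampleTransport() is a PlainExampleTransport by default; after
# ExampleTransport.configure(SecureExampleTransport, timeout=30) it is a
# SecureExampleTransport whose initialize() received timeout=30.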
class ArgReplacer(object):
"""Replaces one value in an ``args, kwargs`` pair.
Inspects the function signature to find an argument by name
whether it is passed by position or keyword. For use in decorators
and similar wrappers.
"""
def __init__(self, func, name):
self.name = name
try:
self.arg_pos = self._getargnames(func).index(name)
except ValueError:
# Not a positional parameter
self.arg_pos = None
def _getargnames(self, func):
try:
return getargspec(func).args
except TypeError:
if hasattr(func, 'func_code'):
# Cython-generated code has all the attributes needed
# by inspect.getargspec (when the
# @cython.binding(True) directive is used), but the
# inspect module only works with ordinary functions.
# Inline the portion of getargspec that we need here.
code = func.func_code
return code.co_varnames[:code.co_argcount]
raise
def get_old_value(self, args, kwargs, default=None):
"""Returns the old value of the named argument without replacing it.
Returns ``default`` if the argument is not present.
"""
if self.arg_pos is not None and len(args) > self.arg_pos:
return args[self.arg_pos]
else:
return kwargs.get(self.name, default)
def replace(self, new_value, args, kwargs):
"""Replace the named argument in ``args, kwargs`` with ``new_value``.
Returns ``(old_value, args, kwargs)``. The returned ``args`` and
``kwargs`` objects may not be the same as the input objects, or
the input objects may be mutated.
If the named argument was not found, ``new_value`` will be added
to ``kwargs`` and None will be returned as ``old_value``.
"""
if self.arg_pos is not None and len(args) > self.arg_pos:
# The arg to replace is passed positionally
old_value = args[self.arg_pos]
args = list(args) # *args is normally a tuple
args[self.arg_pos] = new_value
else:
# The arg to replace is either omitted or passed by keyword.
old_value = kwargs.get(self.name)
kwargs[self.name] = new_value
return old_value, args, kwargs
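# Illustrative sketch (not part of the original Tornado module): ArgReplacer
# used in a decorator to wrap a ``callback`` argument whether it is passed
# positionally or by keyword. The decorator and argument names are made up.
def _log_callback_sketch(func):
    replacer = ArgReplacer(func, 'callback')
    def wrapper(*args, **kwargs):
        original = replacer.get_old_value(args, kwargs)
        def logged(*cb_args, **cb_kwargs):
            # Wrapped callback: log, then delegate to the original if present.
            print('callback fired')
            if original is not None:
                return original(*cb_args, **cb_kwargs)
        _, new_args, new_kwargs = replacer.replace(logged, args, kwargs)
        return func(*new_args, **new_kwargs)
    return wrapper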
def timedelta_to_seconds(td):
"""Equivalent to td.total_seconds() (introduced in python 2.7)."""
return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10 ** 6) / float(10 ** 6)
def _websocket_mask_python(mask, data):
"""Websocket masking function.
`mask` is a `bytes` object of length 4; `data` is a `bytes` object of any length.
Returns a `bytes` object of the same length as `data` with the mask applied
as specified in section 5.3 of RFC 6455.
This pure-python implementation may be replaced by an optimized version when available.
"""
mask = array.array("B", mask)
unmasked = array.array("B", data)
for i in xrange(len(data)):
unmasked[i] = unmasked[i] ^ mask[i % 4]
if hasattr(unmasked, 'tobytes'):
# tostring was deprecated in py32. It hasn't been removed,
# but since we turn on deprecation warnings in our tests
# we need to use the right one.
return unmasked.tobytes()
else:
return unmasked.tostring()
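# Illustrative sketch (not part of the original Tornado module): masking is a
# plain XOR, so applying the same 4-byte mask twice restores the payload
# (RFC 6455, section 5.3).
def _websocket_mask_sketch():
    mask = b"\x01\x02\x03\x04"
    payload = b"hello websocket"
    masked = _websocket_mask_python(mask, payload)
    assert _websocket_mask_python(mask, masked) == payload
    return masked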
if (os.environ.get('TORNADO_NO_EXTENSION') or
os.environ.get('TORNADO_EXTENSION') == '0'):
# These environment variables exist to make it easier to do performance
# comparisons; they are not guaranteed to remain supported in the future.
_websocket_mask = _websocket_mask_python
else:
try:
from tornado.speedups import websocket_mask as _websocket_mask
except ImportError:
if os.environ.get('TORNADO_EXTENSION') == '1':
raise
_websocket_mask = _websocket_mask_python
def doctests():
import doctest
return doctest.DocTestSuite()
| apache-2.0 |
cs243iitg/vehicle-webapp | webapp/vms/forms.py | 1 | 15620 | from vms import models
from django.contrib.auth.models import User
from django import forms
from django.forms.extras.widgets import SelectDateWidget
from django.contrib.admin.widgets import AdminSplitDateTime
from django.utils.translation import ugettext_lazy as _
from datetimewidget.widgets import DateTimeWidget, DateWidget, DateTimeInput, TimeInput
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Fieldset, ButtonHolder, Submit
from crispy_forms.bootstrap import TabHolder, Tab, Div, Field
from crispy_forms.bootstrap import AppendedText, PrependedText, InlineCheckboxes
from crispy_forms.bootstrap import Accordion, AccordionGroup
from django.contrib.auth import forms as UserForms
from django.core.validators import RegexValidator
from datetime import datetime
from bootstrap3_datetime.widgets import DateTimePicker
class DocumentForm(forms.Form):
docfile = forms.FileField(
label='Select a file'
)
class StudentCycleForm(forms.ModelForm):
class Meta:
model = models.StudentCycle
exclude = ('user','cycle_pass_no')
def __init__(self, *args, **kwargs):
super(StudentCycleForm, self).__init__(*args, **kwargs)
for index, field in enumerate(self.fields):
self.fields[field].widget.attrs.update({
'class': 'form-control',
'tabindex': index+1,
})
class BusTimingForm(forms.ModelForm):
from_time = forms.DateTimeField(required=True, widget=DateTimePicker(options={"format": "DD-MM-YYYY HH:mm", "pickSeconds":True}))
class Meta:
model = models.BusTiming
fields = ['bus_route', 'from_time', 'bus_no', 'starting_point', 'ending_point', 'availability','working_day']
# widgets = {
# 'from_time': forms.TimeInput(format='%H:%M'),
# }
def __init__(self, *args, **kwargs):
super(BusTimingForm, self).__init__(*args, **kwargs)
for index, field in enumerate(self.fields):
self.fields[field].widget.attrs.update({
'class': 'form-control',
'tabindex': index+1,
})
# self.fields['from_time'].widget = TimeInput(attrs={
# 'class':'form-control',
# 'tabindex':index+1,
# 'placeholder': 'HH:MM',
# })
class SuspiciousVehicleForm(forms.ModelForm):
"""
    User form for reporting a suspicious vehicle
"""
class Meta:
model = models.SuspiciousVehicle
exclude = ('reporter',)
widgets = {
'remarks': forms.Textarea(attrs={'rows':6}),
}
labels = {
'vehicle_image': _('Vehicle Photo'),
'vehicle_number': _('Vehicle Number'),
'vehicle_type': _('Vehicle Type'),
'vehicle_model': _('Vehicle Model'),
}
def __init__(self, *args, **kwargs):
super(SuspiciousVehicleForm, self).__init__(*args, **kwargs)
for index, field in enumerate(self.fields):
self.fields[field].widget.attrs.update({
'class': 'form-control',
'tabindex': index+1,
})
class PersonPassForm(forms.ModelForm):
"""
    Admin form for blocking passes
"""
class Meta:
model = models.PersonPass
exclude = ('is_blocked','reason')
widgets = {
'expiry_date': SelectDateWidget(years=range(2000, 2030)),#(usel10n = True, bootstrap_version=3,),
'issue_date': SelectDateWidget(years=range(2000, 2030)),#(usel10n = True, bootstrap_version=3,),
}
labels = {
'user_photo': _('Your photo'),
'old_card_reference': _('Old Card Number'),
'age': _('Age'),
'pass_number': _('Pass Number'),
'name': _('Name'),
'identified_by': _('Office'),
'work_area': _('Work Area'),
'working_time': _('Working Time'),
'nature_of_work': _('Job'),
}
class TheftForm(forms.ModelForm):
"""
User form for reporting theft
"""
theft_time = forms.DateTimeField(required=True, widget=DateTimePicker(options={"format": "DD-MM-YYYY HH:mm", "pickSeconds":True}))
class Meta:
model = models.TheftReport
exclude = ('reporter', 'status','stud_vehicle','emp_vehicle')
widgets = {
# 'theft_time': DateTimeWidget(usel10n = True, bootstrap_version=3),
# 'theft_time':DateTimeInput(format="%d-%m-%Y %H:%M"),
'remarks': forms.Textarea(attrs={'rows':6}),
}
def __init__(self, *args, **kwargs):
super(TheftForm, self).__init__(*args, **kwargs)
for index, field in enumerate(self.fields):
self.fields[field].widget.attrs.update({
'class': 'form-control',
'tabindex': index+1,
})
self.fields['theft_time'].widget = DateTimeInput(attrs={
'class':'form-control',
'tabindex':index+1,
'placeholder': 'DD-MM-YYYY hh:mm',
})
class StudentVehicleForm(forms.ModelForm):
"""
    Student form for registering a vehicle
"""
# date_of_birth = forms.DateTimeField(required=True, widget=DateTimePicker(options={"format": "DD-MM-YYYY", "pickTime":False}))
class Meta:
model = models.StudentVehicle
exclude = ('registered_with_security_section', 'user', 'issue_date', 'expiry_date')
dateOptions = {
'startView': 4,
}
widgets = {
# 'date_of_birth':DateTimePicker(options={"format": "DD-MM-YYYY", "pickTime":False}),
'date_of_birth': SelectDateWidget(years=range(1950, datetime.now().year)),#(usel10n = True, bootstrap_version=3, options = dateOptions),
'insurance_valid_upto': SelectDateWidget(years=range(datetime.now().year, 2035)), #(usel10n = True, bootstrap_version=3, options = dateOptions),
'driving_license_issue_date': SelectDateWidget(years=range(1950, datetime.now().year)), #(usel10n = True, bootstrap_version=3, options = dateOptions),
'driving_license_expiry_date': SelectDateWidget(years=range(datetime.now().year, 2035)), #(usel10n = True, bootstrap_version=3, options = dateOptions),
'remarks': forms.Textarea(attrs={'rows':6}),
'address_of_communication': forms.Textarea(attrs={'rows':4}),
'permanent_address': forms.Textarea(attrs={'rows':4}),
'declaration': forms.Textarea(attrs={'rows':6,
'readonly':True,
'style':'resize:none;',}),
}
labels = {
'user_photo': _('Your photo'),
'address_of_communication': _('Address'),
'address_of_communication_district': _('District'),
'address_of_communication_state': _('State'),
'address_of_communication_pincode': _('Pincode'),
'permanent_address': _('Address'),
'permanent_address_district': _('District'),
'permanent_address_state': _('State'),
'permanent_address_pincode': _('Pincode'),
'parents_contact_no': _('Contact number'),
'parents_emailid': _('Email ID'),
'vehicle_registration_number': _('Registration Number'),
'driving_license_number': _('License number'),
'driving_license_issue_date': _('Issue date'),
'driving_license_expiry_date': _('Expiry Date'),
'driving_license': _('Scanned copy'),
}
def __init__(self, *args, **kwargs):
super(StudentVehicleForm, self).__init__(*args, **kwargs)
for index, field in enumerate(self.fields):
self.fields[field].widget.attrs.update({
'class': 'form-control',
'tabindex': index+1,
})
for field in self.fields.values():
field.error_messages = {'required':''}
self.helper = FormHelper()
self.helper.form_id = 'id_student_vehicle_form'
self.helper.form_class = 'form-horizontal'
self.helper.label_class = 'col-md-2 col-md-offset-1'
self.helper.field_class = 'col-md-4'
self.helper.form_method = 'post'
self.helper.form_action = '/vms/users/submit-vehicle-registration/'
self.helper.layout = Layout(
TabHolder(
Tab('Personal Details',
'name',
'roll_number',
'department',
'programme',
'date_of_birth',
'hostel_name',
'room_number',
'mobile_number',
'user_photo',
'identity_card',
),
Tab('Contact',
Accordion(
AccordionGroup('Address of communication',
'address_of_communication',
'address_of_communication_district',
'address_of_communication_state',
'address_of_communication_pincode',
),
AccordionGroup('Permanent Address',
'permanent_address',
'permanent_address_district',
'permanent_address_state',
'permanent_address_pincode',
),
AccordionGroup('Parent/Guardian Details',
'parents_contact_no',
'parents_emailid',
),
),
),
Tab('Vehicle Details',
'vehicle_registration_number',
#'registered_with_security_section',
'color',
'make_and_model',
'chassis_number',
'engine_number',
'registered_in_the_name_of',
'relation_with_owner',
'vehicle_insurance_no',
'insurance_valid_upto',
'vehicle_registration_card',
'vehicle_insurance',
'vehicle_photo',
),
Tab('Driving License',
'driving_license_number',
'driving_license_issue_date',
'driving_license_expiry_date',
'driving_license',
'declaration'
)
),
ButtonHolder(
Submit('submit', 'Submit',
css_class='btn-primary col-md-offset-5 form-submit')
)
)
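    # Illustrative sketch (not part of the original app): how a view might
    # drive StudentVehicleForm. The template name and redirect target are
    # assumptions made up for this sketch; imports are local so the module's
    # own imports stay unchanged.
def _student_vehicle_view_sketch(request):
    from django.http import HttpResponseRedirect
    from django.shortcuts import render_to_response
    from django.template import RequestContext
    if request.method == 'POST':
        form = StudentVehicleForm(request.POST, request.FILES)
        if form.is_valid():
            vehicle = form.save(commit=False)
            vehicle.user = request.user  # 'user' is excluded from the form
            vehicle.save()
            return HttpResponseRedirect('/vms/')
    else:
        form = StudentVehicleForm()
    return render_to_response('vms/register_vehicle.html', {'form': form},
                              context_instance=RequestContext(request))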
class EmployeeVehicleForm(forms.ModelForm):
"""
    Employee form for registering a vehicle
"""
# date_of_birth=forms.DateField(widget=SelectDateWidget, initial="DD-MM-YYYY")
# insurance_valid_upto = forms.DateField(widget=SelectDateWidget, initial="DD-MM-YYYY")
# driving_license_issue_date = forms.DateField(widget=SelectDateWidget, initial="DD-MM-YYYY")
# driving_license_expiry_date = forms.DateField(widget=SelectDateWidget, initial="DD-MM-YYYY")
class Meta:
model = models.EmployeeVehicle
exclude = ('registered_with_security_section', 'user', 'issue_date', 'expiry_date')
dateOptions = {
'startView': 4,
}
widgets = {
'date_of_birth': SelectDateWidget(years=range(1950, datetime.now().year)), #DateWidget(usel10n = True, bootstrap_version=3,
# options = dateOptions),
'insurance_valid_upto': SelectDateWidget(years=range(datetime.now().year, 2035)), #DateWidget(usel10n = True, bootstrap_version=3,options = dateOptions),
'driving_license_issue_date':SelectDateWidget(years=range(1950, datetime.now().year)), # DateWidget(usel10n = True, bootstrap_version=3, options = dateOptions),
'driving_license_expiry_date': SelectDateWidget(years=range(datetime.now().year, 2035)), #DateWidget(usel10n = True, bootstrap_version=3, options = dateOptions),
'remarks': forms.Textarea(attrs={'rows':6}),
'address_of_communication': forms.Textarea(attrs={'rows':4}),
'permanent_address': forms.Textarea(attrs={'rows':4}),
'declaration': forms.Textarea(attrs={'rows':6,
'readonly':True,
'style':'resize:none;',}),
}
labels = {
'user_photo': _('Your photo'),
'vehicle_registration_number': _('Registration Number'),
'driving_license_number': _('License number'),
'driving_license_issue_date': _('Issue date'),
'driving_license_expiry_date': _('Expiry Date'),
'driving_license': _('Scanned copy'),
}
def __init__(self, *args, **kwargs):
super(EmployeeVehicleForm, self).__init__(*args, **kwargs)
for index, field in enumerate(self.fields):
self.fields[field].widget.attrs.update({
'class': 'form-control',
'tabindex': index+1,
})
for field in self.fields.values():
field.error_messages = {'required':''}
self.helper = FormHelper()
self.helper.form_id = 'id_student_vehicle_form'
self.helper.form_class = 'form-horizontal'
self.helper.label_class = 'col-md-2 col-md-offset-1'
self.helper.field_class = 'col-md-4'
self.helper.form_method = 'post'
self.helper.form_action = '/vms/users/submit-vehicle-registration/'
self.helper.layout = Layout(
TabHolder(
Tab('Personal Details',
'name',
'employee_no',
'department',
'date_of_birth',
'block_number',
'flat_number',
'mobile_number',
'user_photo',
'identity_card',
'parking_slot_no',
),
Tab('Vehicle Details',
'vehicle_registration_number',
'color',
'make_and_model',
'chassis_number',
'engine_number',
'registered_in_the_name_of',
'vehicle_insurance_no',
'insurance_valid_upto',
'vehicle_registration_card',
'vehicle_insurance',
'vehicle_photo',
),
Tab('Driving License',
'driving_license_number',
'driving_license_issue_date',
'driving_license_expiry_date',
'driving_license',
'declaration'
)
),
ButtonHolder(
Submit('submit', 'Submit',
css_class='btn-primary col-md-offset-5 form-submit')
)
)
| mit |
kernel64/AutobahnPython | examples/websocket/streaming/message_based_server.py | 27 | 1622 | ###############################################################################
##
## Copyright 2011 Tavendo GmbH
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
###############################################################################
import hashlib
from twisted.internet import reactor
from autobahn.websocket import WebSocketServerFactory, \
WebSocketServerProtocol, \
listenWS
class MessageBasedHashServerProtocol(WebSocketServerProtocol):
"""
Message-based WebSockets server that computes a SHA-256 for every
message it receives and sends back the computed digest.
"""
def onMessage(self, message, binary):
sha256 = hashlib.sha256()
sha256.update(message)
digest = sha256.hexdigest()
self.sendMessage(digest)
print "Sent digest for message: %s" % digest
if __name__ == '__main__':
factory = WebSocketServerFactory("ws://localhost:9000")
factory.protocol = MessageBasedHashServerProtocol
listenWS(factory)
reactor.run()
| apache-2.0 |
ringemup/satchmo | satchmo/apps/payment/views/contact.py | 6 | 5160 | ####################################################################
# First step in the order process - capture all the demographic info
#####################################################################
from django import http
from django.core import urlresolvers
from django.shortcuts import render_to_response
from django.template import RequestContext
from livesettings import config_get_group, config_value
from satchmo_store.contact import CUSTOMER_ID
from satchmo_store.contact.forms import area_choices_for_country
from satchmo_store.contact.models import Contact
from payment.decorators import cart_has_minimum_order
from payment.forms import ContactInfoForm, PaymentContactInfoForm
from satchmo_store.shop.models import Cart, Config, Order
from satchmo_utils.dynamic import lookup_url
from signals_ahoy.signals import form_initialdata
import logging
log = logging.getLogger('satchmo_store.contact.contact')
def authentication_required(request, template='shop/checkout/authentication_required.html'):
return render_to_response(
template, {}, context_instance = RequestContext(request)
)
def contact_info(request, **kwargs):
"""View which collects demographic information from customer."""
#First verify that the cart exists and has items
tempCart = Cart.objects.from_request(request)
if tempCart.numItems == 0:
return render_to_response('shop/checkout/empty_cart.html',
context_instance=RequestContext(request))
if not request.user.is_authenticated() and config_value('SHOP', 'AUTHENTICATION_REQUIRED'):
url = urlresolvers.reverse('satchmo_checkout_auth_required')
thisurl = urlresolvers.reverse('satchmo_checkout-step1')
return http.HttpResponseRedirect(url + "?next=" + thisurl)
init_data = {}
shop = Config.objects.get_current()
if request.user.is_authenticated():
if request.user.email:
init_data['email'] = request.user.email
if request.user.first_name:
init_data['first_name'] = request.user.first_name
if request.user.last_name:
init_data['last_name'] = request.user.last_name
try:
contact = Contact.objects.from_request(request, create=False)
except Contact.DoesNotExist:
contact = None
try:
order = Order.objects.from_request(request)
if order.discount_code:
init_data['discount'] = order.discount_code
except Order.DoesNotExist:
pass
if request.method == "POST":
new_data = request.POST.copy()
if not tempCart.is_shippable:
new_data['copy_address'] = True
form = PaymentContactInfoForm(data=new_data, shop=shop, contact=contact, shippable=tempCart.is_shippable,
initial=init_data, cart=tempCart)
if form.is_valid():
if contact is None and request.user and request.user.is_authenticated():
contact = Contact(user=request.user)
custID = form.save(request, cart=tempCart, contact=contact)
request.session[CUSTOMER_ID] = custID
modulename = new_data['paymentmethod']
if not modulename.startswith('PAYMENT_'):
modulename = 'PAYMENT_' + modulename
paymentmodule = config_get_group(modulename)
url = lookup_url(paymentmodule, 'satchmo_checkout-step2')
return http.HttpResponseRedirect(url)
else:
log.debug("Form errors: %s", form.errors)
else:
if contact:
#If a person has their contact info, make sure we populate it in the form
for item in contact.__dict__.keys():
init_data[item] = getattr(contact,item)
if contact.shipping_address:
for item in contact.shipping_address.__dict__.keys():
init_data["ship_"+item] = getattr(contact.shipping_address,item)
if contact.billing_address:
for item in contact.billing_address.__dict__.keys():
init_data[item] = getattr(contact.billing_address,item)
if contact.primary_phone:
init_data['phone'] = contact.primary_phone.phone
else:
# Allow them to login from this page.
request.session.set_test_cookie()
#Request additional init_data
form_initialdata.send(sender=PaymentContactInfoForm, initial=init_data,
contact=contact, cart=tempCart, shop=shop)
form = PaymentContactInfoForm(
shop=shop,
contact=contact,
shippable=tempCart.is_shippable,
initial=init_data,
cart=tempCart)
if shop.in_country_only:
only_country = shop.sales_country
else:
only_country = None
context = RequestContext(request, {
'form': form,
'country': only_country,
'paymentmethod_ct': len(form.fields['paymentmethod'].choices)
})
return render_to_response('shop/checkout/form.html',
context_instance=context)
contact_info_view = cart_has_minimum_order()(contact_info)
| bsd-3-clause |
gibiansky/tensorflow | tensorflow/python/kernel_tests/reduction_ops_test.py | 21 | 33412 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Functional tests for reduction ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gradient_checker
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
class ReducedShapeTest(test.TestCase):
def _check(self, shape, axes, result):
output = math_ops.reduced_shape(shape, axes=axes)
self.assertAllEqual(output.eval(), result)
def testSimple(self):
with self.test_session():
self._check([3], [], [3])
self._check([3], [0], [1])
self._check([5, 3], [], [5, 3])
self._check([5, 3], [0], [1, 3])
self._check([5, 3], [1], [5, 1])
self._check([5, 3], [0, 1], [1, 1])
def testZeros(self):
"""Check that reduced_shape does the right thing with zero dimensions."""
with self.test_session():
self._check([0], [], [0])
self._check([0], [0], [1])
self._check([0, 3], [], [0, 3])
self._check([0, 3], [0], [1, 3])
self._check([0, 3], [1], [0, 1])
self._check([0, 3], [0, 1], [1, 1])
self._check([3, 0], [], [3, 0])
self._check([3, 0], [0], [1, 0])
self._check([3, 0], [1], [3, 1])
self._check([3, 0], [0, 1], [1, 1])
def testNegAxes(self):
with self.test_session():
self._check([10, 10, 10], [-1], [10, 10, 1])
self._check([10, 10, 10], [-1, 2], [10, 10, 1])
self._check([10, 10, 10], [-1, -1], [10, 10, 1])
self._check([10, 10, 10], [-1, 0], [1, 10, 1])
self._check([10, 10, 10], [-3], [1, 10, 10])
class SumReductionTest(test.TestCase):
def _compare(self,
x,
reduction_axes,
keep_dims,
use_gpu=False,
feed_dict=None):
np_ans = x
if reduction_axes is None:
np_ans = np.sum(np_ans, keepdims=keep_dims)
else:
reduction_axes = np.array(reduction_axes).astype(np.int32)
for ra in reduction_axes.ravel()[::-1]:
np_ans = np.sum(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session(use_gpu=use_gpu) as sess:
tf_ans = math_ops.reduce_sum(x, reduction_axes, keep_dims)
out = sess.run(tf_ans, feed_dict)
self.assertAllClose(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes, feed_dict=None):
if reduction_axes is not None and np.shape(reduction_axes) == (1,):
# Test scalar reduction_axes argument
self._compareAll(x, reduction_axes[0])
self._compare(x, reduction_axes, False, use_gpu=True, feed_dict=feed_dict)
self._compare(x, reduction_axes, False, use_gpu=False, feed_dict=feed_dict)
self._compare(x, reduction_axes, True, use_gpu=True, feed_dict=feed_dict)
self._compare(x, reduction_axes, True, use_gpu=False, feed_dict=feed_dict)
def testInfinity(self):
for dtype in [np.float32, np.float64]:
for special_value_x in [-np.inf, np.inf]:
for special_value_y in [-np.inf, np.inf]:
np_arr = np.array([special_value_x, special_value_y]).astype(dtype)
self._compareAll(np_arr, None)
def testFloatReduce1D(self):
# Create a 1D array of floats
np_arr = np.arange(1, 6).reshape([5]).astype(np.float32)
self._compareAll(np_arr, [0])
def testFloatReduce2D(self):
# Create a 2D array of floats and reduce across all possible
# dimensions
np_arr = np.arange(0, 10).reshape([2, 5]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [0, 1])
def testFloatReduce3D(self):
# Create a 3D array of floats and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
self._compareAll(np_arr, [-1])
self._compareAll(np_arr, [-1, -3])
self._compareAll(np_arr, [-1, 1])
def testFloatReduce4D(self):
# Create a 4D array of floats and reduce across some
# dimensions
np_arr = np.arange(0, 210).reshape([2, 3, 5, 7]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
# Need specialization for reduce(4D, [0, 2])
# self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
self._compareAll(np_arr, [1, 2, 3])
self._compareAll(np_arr, [0, 1, 2, 3])
def testFloatReduce5D(self):
# Create a 5D array of floats and reduce across some dimensions
np_arr = np.arange(0, 840).reshape([2, 3, 5, 7, 4]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
# Need specialization for reduce(4D, [0, 2])
# self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
self._compareAll(np_arr, [1, 2, 3])
self._compareAll(np_arr, [0, 1, 2, 3])
self._compareAll(np_arr, [1, 2, 3, 4])
self._compareAll(np_arr, [0, 1, 2, 3, 4])
# Simple tests for various types.
def testDoubleReduce1D(self):
np_arr = np.arange(1, 6).reshape([5]).astype(np.float64)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
def testInt32Reduce1D(self):
np_arr = np.arange(1, 6).reshape([5]).astype(np.int32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
def testComplex64Reduce1D(self):
np_arr = np.arange(1, 6).reshape([5]).astype(np.complex64)
self._compare(np_arr, [], False)
self._compare(np_arr, [0], False)
def testComplex128Reduce1D(self):
np_arr = np.arange(1, 6).reshape([5]).astype(np.complex128)
self._compare(np_arr, [], False)
self._compare(np_arr, [0], False)
def testInvalidIndex(self):
np_arr = np.arange(0, 10).reshape([2, 5]).astype(np.float32)
input_tensor = ops.convert_to_tensor(np_arr)
with self.assertRaisesWithPredicateMatch(
ValueError, lambda e: "Invalid reduction dimension" in str(e)):
math_ops.reduce_sum(input_tensor, [-3])
with self.assertRaisesWithPredicateMatch(
ValueError, lambda e: "Invalid reduction dimension" in str(e)):
math_ops.reduce_sum(input_tensor, [2])
with self.assertRaisesWithPredicateMatch(
ValueError, lambda e: "Invalid reduction dimension" in str(e)):
math_ops.reduce_sum(input_tensor, [0, 2])
def testPartialShapes(self):
np.random.seed(1618)
# Input shape is unknown.
reduction_axes = [1, 2]
c_unknown = array_ops.placeholder(dtypes.float32)
s_unknown = math_ops.reduce_sum(c_unknown, reduction_axes)
self.assertEqual(tensor_shape.unknown_shape(), s_unknown.get_shape())
np_input = np.random.randn(3, 3, 3)
self._compareAll(np_input, reduction_axes, {c_unknown: np_input})
# Input shape only has known rank.
c_known_rank = array_ops.placeholder(dtypes.float32)
c_known_rank.set_shape(tensor_shape.unknown_shape(ndims=3))
s_known_rank = math_ops.reduce_sum(
c_known_rank, reduction_axes, keep_dims=True)
self.assertEqual(3, s_known_rank.get_shape().ndims)
np_input = np.random.randn(3, 3, 3)
self._compareAll(np_input, reduction_axes, {c_known_rank: np_input})
# Reduction indices are unknown.
unknown_indices = array_ops.placeholder(dtypes.int32)
c_unknown_indices = constant_op.constant([[10.0], [20.0]])
s_unknown_indices = math_ops.reduce_sum(
c_unknown_indices, unknown_indices, keep_dims=False)
self.assertEqual(tensor_shape.unknown_shape(),
s_unknown_indices.get_shape())
s_unknown_indices_keep = math_ops.reduce_sum(
c_unknown_indices, unknown_indices, keep_dims=True)
self.assertEqual(2, s_unknown_indices_keep.get_shape().ndims)
# Int64??
def _compareGradient(self, shape, sum_shape, reduction_axes):
if reduction_axes is not None and np.shape(reduction_axes) == (1,):
# Test scalar reduction_axes argument
self._compareGradient(shape, sum_shape, reduction_axes[0])
x = np.arange(1.0, 49.0).reshape(shape).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_sum(t, reduction_axes)
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, shape, su, sum_shape, x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient(self):
self._compareGradient([2, 3, 4, 2], [2, 2], [1, 2])
def testGradient2(self):
self._compareGradient([2, 3, 4, 2], [2, 4, 2], [1])
def testGradient3(self):
self._compareGradient([2, 3, 4, 2], [2, 3, 2], [2])
def testGradient4(self):
self._compareGradient([2, 3, 4, 2], [], None)
def testGradient5(self):
self._compareGradient([2, 3, 4, 2], [3, 4, 2], 0)
def testHighRank(self):
# Do a bunch of random high dimensional reductions
np.random.seed(42)
for _ in range(20):
rank = np.random.randint(4, 10 + 1)
axes, = np.nonzero(np.random.randint(2, size=rank))
shape = tuple(np.random.randint(1, 3 + 1, size=rank))
data = np.random.randint(1024, size=shape)
self._compareAll(data, axes)
# Check some particular axis patterns
for rank in 4, 7, 10:
shape = tuple(np.random.randint(1, 3 + 1, size=rank))
data = np.random.randint(1024, size=shape)
for axes in ([], np.arange(rank), np.arange(0, rank, 2),
np.arange(1, rank, 2)):
self._compareAll(data, axes)
def testExpand(self):
# Reduce an empty tensor to a nonempty tensor
x = np.zeros((5, 0))
self._compareAll(x, [1])
def testEmptyGradients(self):
with self.test_session():
x = array_ops.zeros([0, 3])
y = math_ops.reduce_sum(x, [1])
error = gradient_checker.compute_gradient_error(x, [0, 3], y, [0])
self.assertEqual(error, 0)
def testDegenerate(self):
for use_gpu in False, True:
with self.test_session(use_gpu=use_gpu):
for dtype in (dtypes.float16, dtypes.float32, dtypes.float64,
dtypes.complex64, dtypes.complex128):
# A large number is needed to get Eigen to die
x = array_ops.zeros((0, 9938), dtype=dtype)
y = math_ops.reduce_sum(x, [0])
self.assertAllEqual(y.eval(), np.zeros(9938))
class MeanReductionTest(test.TestCase):
def _compare(self, x, reduction_axes, keep_dims, use_gpu=False):
np_ans = x
if reduction_axes is None:
np_ans = np.mean(np_ans, keepdims=keep_dims)
else:
reduction_axes = np.array(reduction_axes).astype(np.int32)
count = 1
for ra in reduction_axes.ravel()[::-1]:
np_ans = np.sum(np_ans, axis=ra, keepdims=keep_dims)
count *= x.shape[ra]
np_ans /= count
with self.test_session(use_gpu=use_gpu):
tf_ans = math_ops.reduce_mean(x, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllClose(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes):
self._compare(x, reduction_axes, False, use_gpu=True)
self._compare(x, reduction_axes, True, use_gpu=True)
self._compare(x, reduction_axes, False, use_gpu=False)
self._compare(x, reduction_axes, True, use_gpu=False)
def testFloatReduce3D(self):
# Create a 3D array of floats and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testInfinity(self):
for dtype in [np.float32, np.float64]:
for special_value_x in [-np.inf, np.inf]:
for special_value_y in [-np.inf, np.inf]:
np_arr = np.array([special_value_x, special_value_y]).astype(dtype)
self._compareAll(np_arr, None)
def testDoubleReduce3D(self):
# Create a 3D array of doubles and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float64)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testGradient(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float32)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_mean(t, [1, 2])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
su = math_ops.reduce_mean(t, [0, 1, 2, 3])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [1], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
su = math_ops.reduce_mean(t, [])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 3, 4, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
su = math_ops.reduce_mean(t, 0)
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [3, 4, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
def testEmptyGradients(self):
with self.test_session():
x = array_ops.zeros([0, 3])
y = math_ops.reduce_mean(x, [1])
error = gradient_checker.compute_gradient_error(x, [0, 3], y, [0])
self.assertEqual(error, 0)
def testDegenerate(self):
for use_gpu in False, True:
with self.test_session(use_gpu=use_gpu):
for dtype in (dtypes.float16, dtypes.float32, dtypes.float64):
# A large number is needed to get Eigen to die
x = array_ops.zeros((0, 9938), dtype=dtype)
y = math_ops.reduce_mean(x, [0]).eval()
self.assertEqual(y.shape, (9938,))
self.assertTrue(np.all(np.isnan(y)))
class ProdReductionTest(test.TestCase):
def _compare(self, x, reduction_axes, keep_dims):
np_ans = x
if reduction_axes is None:
np_ans = np.prod(np_ans, keepdims=keep_dims)
else:
for ra in reduction_axes[::-1]:
np_ans = np.prod(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session():
if reduction_axes is not None:
reduction_axes = np.array(reduction_axes).astype(np.int32)
tf_ans = math_ops.reduce_prod(x, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllClose(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes):
self._compare(x, reduction_axes, False)
self._compare(x, reduction_axes, True)
def testInfinity(self):
for dtype in [np.float32, np.float64]:
for special_value_x in [-np.inf, np.inf]:
for special_value_y in [-np.inf, np.inf]:
np_arr = np.array([special_value_x, special_value_y]).astype(dtype)
self._compareAll(np_arr, None)
def testFloatReduce3D(self):
# Create a 3D array of floats and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def _compareGradient(self, x):
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_prod(t, [])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, x.shape, su, [2, 3, 4, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
su = math_ops.reduce_prod(t, [1, 2])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, x.shape, su, [2, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
su = math_ops.reduce_prod(t, [0, 1, 2, 3])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, x.shape, su, [1], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
su = math_ops.reduce_prod(t, 0)
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, x.shape, su, [3, 4, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-3, atol=1e-3)
def testGradientWithZeros(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float32) / 20.
# No zeros in input
self._compareGradient(x)
# Zero at beginning
x1 = x.copy()
x1[:, :, 0, :] = 0
self._compareGradient(x1)
# Zero at end
x2 = x.copy()
x2[:, :, -1, :] = 0
self._compareGradient(x2)
# Zero in middle
x3 = x.copy()
x3[:, :, 2, :] = 0
self._compareGradient(x3)
# All zeros
x4 = x.copy()
x4[:, :, :, :] = 0
self._compareGradient(x4)
def testEmptyGradients(self):
with self.test_session():
x = array_ops.zeros([0, 3])
y = math_ops.reduce_prod(x, [1])
error = gradient_checker.compute_gradient_error(x, [0, 3], y, [0])
self.assertEqual(error, 0)
def testDegenerate(self):
for use_gpu in False, True:
with self.test_session(use_gpu=use_gpu):
for dtype in (dtypes.float16, dtypes.float32, dtypes.float64):
# A large number is needed to get Eigen to die
x = array_ops.zeros((0, 9938), dtype=dtype)
y = math_ops.reduce_prod(x, [0])
self.assertAllEqual(y.eval(), np.ones(9938))
class MinReductionTest(test.TestCase):
def _compare(self, x, reduction_axes, keep_dims, use_gpu=False):
np_ans = x
if reduction_axes is None:
np_ans = np.amin(np_ans, keepdims=keep_dims)
else:
for ra in reduction_axes[::-1]:
np_ans = np.amin(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session(use_gpu=use_gpu):
if reduction_axes is not None:
reduction_axes = np.array(reduction_axes).astype(np.int32)
tf_ans = math_ops.reduce_min(x, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllClose(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes):
self._compare(x, reduction_axes, False, use_gpu=True)
self._compare(x, reduction_axes, False, use_gpu=False)
self._compare(x, reduction_axes, True, use_gpu=True)
self._compare(x, reduction_axes, True, use_gpu=False)
def testInfinity(self):
for dtype in [np.float32, np.float64]:
for special_value_x in [-np.inf, np.inf]:
for special_value_y in [-np.inf, np.inf]:
np_arr = np.array([special_value_x, special_value_y]).astype(dtype)
self._compareAll(np_arr, None)
def testFloatReduce3D(self):
# Create a 3D array of floats and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testDoubleReduce3D(self):
# Create a 3D array of doubles and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float64)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testGradient(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_min(t, [1, 2])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient2(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_min(t, [1])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 4, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient3(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_min(t, [2])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 3, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient4(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_min(t)
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [1], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testEmptyGradients(self):
with self.test_session():
x = array_ops.zeros([0, 3])
y = math_ops.reduce_min(x, [1])
error = gradient_checker.compute_gradient_error(x, [0, 3], y, [0])
self.assertEqual(error, 0)
class MaxReductionTest(test.TestCase):
def _compare(self, x, reduction_axes, keep_dims, use_gpu=False):
np_ans = x
if reduction_axes is None:
np_ans = np.amax(np_ans, keepdims=keep_dims)
else:
for ra in reduction_axes[::-1]:
np_ans = np.amax(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session(use_gpu=use_gpu):
if reduction_axes is not None:
reduction_axes = np.array(reduction_axes).astype(np.int32)
tf_ans = math_ops.reduce_max(x, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllClose(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes):
self._compare(x, reduction_axes, False, use_gpu=True)
self._compare(x, reduction_axes, False, use_gpu=False)
self._compare(x, reduction_axes, True, use_gpu=True)
self._compare(x, reduction_axes, True, use_gpu=False)
def testInfinity(self):
for dtype in [np.float32, np.float64]:
for special_value_x in [-np.inf, np.inf]:
for special_value_y in [-np.inf, np.inf]:
np_arr = np.array([special_value_x, special_value_y]).astype(dtype)
self._compareAll(np_arr, None)
def testFloatReduce3D(self):
# Create a 3D array of floats and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testDoubleReduce3D(self):
# Create a 3D array of doubles and reduce across all possible
# dimensions
np_arr = np.arange(0, 30).reshape([2, 3, 5]).astype(np.float64)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testGradient(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_max(t, [1, 2])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient2(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_max(t, [1])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 4, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient3(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_max(t, [2])
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [2, 3, 2], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testGradient4(self):
s = [2, 3, 4, 2]
x = np.arange(1.0, 49.0).reshape(s).astype(np.float64)
with self.test_session():
t = ops.convert_to_tensor(x)
su = math_ops.reduce_max(t)
jacob_t, jacob_n = gradient_checker.compute_gradient(
t, s, su, [1], x_init_value=x, delta=1)
self.assertAllClose(jacob_t, jacob_n, rtol=1e-8, atol=1e-8)
def testEmptyGradients(self):
with self.test_session():
x = array_ops.zeros([0, 3])
y = math_ops.reduce_max(x, [1])
error = gradient_checker.compute_gradient_error(x, [0, 3], y, [0])
self.assertEqual(error, 0)
class AllReductionTest(test.TestCase):
def _compare(self, x, reduction_axes, keep_dims, use_gpu=False):
np_ans = x
if reduction_axes is None:
np_ans = np.all(np_ans, keepdims=keep_dims)
else:
for ra in reduction_axes[::-1]:
np_ans = np.all(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session(use_gpu=use_gpu):
if reduction_axes is not None:
reduction_axes = np.array(reduction_axes).astype(np.int32)
tf_ans = math_ops.reduce_all(x, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllEqual(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes):
self._compare(x, reduction_axes, False, use_gpu=True)
self._compare(x, reduction_axes, False, use_gpu=False)
self._compare(x, reduction_axes, True, use_gpu=True)
self._compare(x, reduction_axes, True, use_gpu=False)
def testAll3D(self):
# Create a 3D array of bools and reduce across all possible
# dimensions
np_arr = (np.random.uniform(0, 1, 30) > 0.1).reshape([2, 3, 5])
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testEmpty(self):
self._compareAll([], [0])
class AnyReductionTest(test.TestCase):
def _compare(self, x, reduction_axes, keep_dims, use_gpu=False):
np_ans = x
if reduction_axes is None:
np_ans = np.any(np_ans, keepdims=keep_dims)
else:
for ra in reduction_axes[::-1]:
np_ans = np.any(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session(use_gpu=use_gpu):
if reduction_axes is not None:
reduction_axes = np.array(reduction_axes).astype(np.int32)
tf_ans = math_ops.reduce_any(x, reduction_axes, keep_dims)
out = tf_ans.eval()
self.assertAllEqual(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes):
self._compare(x, reduction_axes, False, use_gpu=True)
self._compare(x, reduction_axes, False, use_gpu=False)
self._compare(x, reduction_axes, True, use_gpu=True)
self._compare(x, reduction_axes, True, use_gpu=False)
def testAll3D(self):
# Create a 3D array of bools and reduce across all possible
# dimensions
np_arr = (np.random.uniform(0, 1, 30) > 0.9).reshape([2, 3, 5])
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
def testEmpty(self):
self._compareAll([], [0])
class CountNonzeroReductionTest(test.TestCase):
def _compare(self,
x,
reduction_axes,
keep_dims,
use_gpu=False,
feed_dict=None):
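    # The expected result is computed with NumPy by summing a 0/1 indicator
    # of the nonzero entries, which is what count_nonzero reduces over.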
np_ans = (x != 0).astype(np.int32)
if reduction_axes is None:
np_ans = np.sum(np_ans, keepdims=keep_dims)
else:
reduction_axes = np.array(reduction_axes).astype(np.int32)
for ra in reduction_axes.ravel()[::-1]:
np_ans = np.sum(np_ans, axis=ra, keepdims=keep_dims)
with self.test_session(use_gpu=use_gpu) as sess:
tf_ans = math_ops.count_nonzero(x, reduction_axes, keep_dims)
out = sess.run(tf_ans, feed_dict)
self.assertAllClose(np_ans, out)
self.assertShapeEqual(np_ans, tf_ans)
def _compareAll(self, x, reduction_axes, feed_dict=None):
if reduction_axes is not None and np.shape(reduction_axes) == (1,):
# Test scalar reduction_axes argument
self._compareAll(x, reduction_axes[0])
self._compare(x, reduction_axes, False, use_gpu=True, feed_dict=feed_dict)
self._compare(x, reduction_axes, False, use_gpu=False, feed_dict=feed_dict)
self._compare(x, reduction_axes, True, use_gpu=True, feed_dict=feed_dict)
self._compare(x, reduction_axes, True, use_gpu=False, feed_dict=feed_dict)
def testBoolReduce1D(self):
    # Create a 1D array of booleans
np_arr = np.asarray([False, False, True, False, False, True])
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
def testFloatReduce1D(self):
# Create a 1D array of floats
np_arr = np.asarray([0.0, 1.0, -1.0, 0.0, 0.0, 3.0]).astype(np.float32)
self._compareAll(np_arr, [0])
def testFloatReduce4D(self):
# Create a 4D array of floats and reduce across some
# dimensions
np_arr = np.floor(np.arange(0.0, 210.0) / 100.0).reshape(
[2, 3, 5, 7]).astype(np.float32)
self._compareAll(np_arr, None)
self._compareAll(np_arr, [])
self._compareAll(np_arr, [0])
self._compareAll(np_arr, [1])
self._compareAll(np_arr, [2])
self._compareAll(np_arr, [0, 1])
self._compareAll(np_arr, [1, 2])
# Need specialization for reduce(4D, [0, 2])
# self._compareAll(np_arr, [0, 2])
self._compareAll(np_arr, [0, 1, 2])
self._compareAll(np_arr, [1, 2, 3])
self._compareAll(np_arr, [0, 1, 2, 3])
def testExpand(self):
# Reduce an empty tensor to a nonempty tensor
x = np.zeros((5, 0))
self._compareAll(x, [1])
def testDegenerate(self):
for use_gpu in False, True:
with self.test_session(use_gpu=use_gpu):
for dtype in (dtypes.bool,):
# A large number is needed to get Eigen to die
x = array_ops.zeros((0, 9938), dtype=dtype)
y = math_ops.count_nonzero(x, [0])
self.assertAllEqual(y.eval(), np.zeros(9938))
if __name__ == "__main__":
test.main()
| apache-2.0 |
xianggong/m2c_unit_test | test/operator/remainder_short16short16/compile.py | 1861 | 4430 | #!/usr/bin/python
import os
import subprocess
import re
def runCommand(command):
p = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
p.wait()
return iter(p.stdout.readline, b'')
def dumpRunCommand(command, dump_file_name, postfix):
dumpFile = open(dump_file_name + postfix, "w+")
dumpFile.write(command + "\n")
for line in runCommand(command.split()):
dumpFile.write(line)
def rmFile(file_name):
cmd = "rm -rf " + file_name
runCommand(cmd.split())
def rnm_ir(file_name):
# Append all unnamed variable with prefix 'tmp_'
ir_file_name = file_name + ".ll"
if os.path.isfile(ir_file_name):
        fo = open(ir_file_name, "r+")
lines = fo.readlines()
fo.seek(0)
fo.truncate()
for line in lines:
# Add entry block identifier
if "define" in line:
line += "entry:\n"
# Rename all unnamed variables
line = re.sub('\%([0-9]+)',
r'%tmp_\1',
line.rstrip())
# Also rename branch name
line = re.sub('(\;\ \<label\>\:)([0-9]+)',
r'tmp_\2:',
line.rstrip())
fo.write(line + '\n')
def gen_ir(file_name):
# Directories
root_dir = '../../../'
header_dir = root_dir + "inc/"
# Headers
header = " -I " + header_dir
header += " -include " + header_dir + "m2c_buildin_fix.h "
header += " -include " + header_dir + "clc/clc.h "
header += " -D cl_clang_storage_class_specifiers "
gen_ir = "clang -S -emit-llvm -O0 -target r600-- -mcpu=verde "
cmd_gen_ir = gen_ir + header + file_name + ".cl"
dumpRunCommand(cmd_gen_ir, file_name, ".clang.log")
def asm_ir(file_name):
if os.path.isfile(file_name + ".ll"):
# Command to assemble IR to bitcode
gen_bc = "llvm-as "
gen_bc_src = file_name + ".ll"
gen_bc_dst = file_name + ".bc"
cmd_gen_bc = gen_bc + gen_bc_src + " -o " + gen_bc_dst
runCommand(cmd_gen_bc.split())
def opt_bc(file_name):
if os.path.isfile(file_name + ".bc"):
        # Command to optimize bitcode
opt_bc = "opt --mem2reg "
opt_ir_src = file_name + ".bc"
opt_ir_dst = file_name + ".opt.bc"
cmd_opt_bc = opt_bc + opt_ir_src + " -o " + opt_ir_dst
runCommand(cmd_opt_bc.split())
def dis_bc(file_name):
if os.path.isfile(file_name + ".bc"):
# Command to disassemble bitcode
dis_bc = "llvm-dis "
dis_ir_src = file_name + ".opt.bc"
dis_ir_dst = file_name + ".opt.ll"
cmd_dis_bc = dis_bc + dis_ir_src + " -o " + dis_ir_dst
runCommand(cmd_dis_bc.split())
def m2c_gen(file_name):
if os.path.isfile(file_name + ".opt.bc"):
        # Command to generate Southern Islands assembly from bitcode with m2c (--llvm2si)
m2c_gen = "m2c --llvm2si "
m2c_gen_src = file_name + ".opt.bc"
cmd_m2c_gen = m2c_gen + m2c_gen_src
dumpRunCommand(cmd_m2c_gen, file_name, ".m2c.llvm2si.log")
# Remove file if size is 0
if os.path.isfile(file_name + ".opt.s"):
if os.path.getsize(file_name + ".opt.s") == 0:
rmFile(file_name + ".opt.s")
def m2c_bin(file_name):
if os.path.isfile(file_name + ".opt.s"):
        # Command to assemble the Southern Islands assembly into a binary with m2c (--si2bin)
m2c_bin = "m2c --si2bin "
m2c_bin_src = file_name + ".opt.s"
cmd_m2c_bin = m2c_bin + m2c_bin_src
dumpRunCommand(cmd_m2c_bin, file_name, ".m2c.si2bin.log")
def main():
# Commands
for file in os.listdir("./"):
if file.endswith(".cl"):
file_name = os.path.splitext(file)[0]
# Execute commands
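            # Pipeline: OpenCL C -> LLVM IR -> renamed IR -> bitcode ->
            # mem2reg-optimized bitcode -> readable IR -> SI assembly -> SI binary.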
gen_ir(file_name)
rnm_ir(file_name)
asm_ir(file_name)
opt_bc(file_name)
dis_bc(file_name)
m2c_gen(file_name)
m2c_bin(file_name)
if __name__ == "__main__":
main()
| gpl-2.0 |
nikolas/lettuce | tests/integration/lib/Django-1.2.5/django/core/mail/backends/filebased.py | 394 | 2485 | """Email backend that writes messages to a file."""
import datetime
import os
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.core.mail.backends.console import EmailBackend as ConsoleEmailBackend
class EmailBackend(ConsoleEmailBackend):
def __init__(self, *args, **kwargs):
self._fname = None
if 'file_path' in kwargs:
self.file_path = kwargs.pop('file_path')
else:
self.file_path = getattr(settings, 'EMAIL_FILE_PATH',None)
# Make sure self.file_path is a string.
if not isinstance(self.file_path, basestring):
raise ImproperlyConfigured('Path for saving emails is invalid: %r' % self.file_path)
self.file_path = os.path.abspath(self.file_path)
        # Make sure that self.file_path is a directory if it exists.
if os.path.exists(self.file_path) and not os.path.isdir(self.file_path):
raise ImproperlyConfigured('Path for saving email messages exists, but is not a directory: %s' % self.file_path)
        # Try to create it, if it does not exist.
elif not os.path.exists(self.file_path):
try:
os.makedirs(self.file_path)
except OSError, err:
raise ImproperlyConfigured('Could not create directory for saving email messages: %s (%s)' % (self.file_path, err))
# Make sure that self.file_path is writable.
if not os.access(self.file_path, os.W_OK):
raise ImproperlyConfigured('Could not write to directory: %s' % self.file_path)
# Finally, call super().
# Since we're using the console-based backend as a base,
# force the stream to be None, so we don't default to stdout
kwargs['stream'] = None
super(EmailBackend, self).__init__(*args, **kwargs)
def _get_filename(self):
"""Return a unique file name."""
if self._fname is None:
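            # The name combines a timestamp with this instance's id(), which
            # should keep separate backend instances from sharing a log file.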
timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
fname = "%s-%s.log" % (timestamp, abs(id(self)))
self._fname = os.path.join(self.file_path, fname)
return self._fname
def open(self):
if self.stream is None:
self.stream = open(self._get_filename(), 'a')
return True
return False
def close(self):
try:
if self.stream is not None:
self.stream.close()
finally:
self.stream = None
| gpl-3.0 |
tashaxe/Red-DiscordBot | lib/youtube_dl/extractor/abc.py | 24 | 6210 | from __future__ import unicode_literals
import re
from .common import InfoExtractor
from ..utils import (
ExtractorError,
js_to_json,
int_or_none,
parse_iso8601,
)
class ABCIE(InfoExtractor):
IE_NAME = 'abc.net.au'
_VALID_URL = r'https?://(?:www\.)?abc\.net\.au/news/(?:[^/]+/){1,2}(?P<id>\d+)'
_TESTS = [{
'url': 'http://www.abc.net.au/news/2014-11-05/australia-to-staff-ebola-treatment-centre-in-sierra-leone/5868334',
'md5': 'cb3dd03b18455a661071ee1e28344d9f',
'info_dict': {
'id': '5868334',
'ext': 'mp4',
'title': 'Australia to help staff Ebola treatment centre in Sierra Leone',
'description': 'md5:809ad29c67a05f54eb41f2a105693a67',
},
'skip': 'this video has expired',
}, {
'url': 'http://www.abc.net.au/news/2015-08-17/warren-entsch-introduces-same-sex-marriage-bill/6702326',
'md5': 'db2a5369238b51f9811ad815b69dc086',
'info_dict': {
'id': 'NvqvPeNZsHU',
'ext': 'mp4',
'upload_date': '20150816',
'uploader': 'ABC News (Australia)',
'description': 'Government backbencher Warren Entsch introduces a cross-party sponsored bill to legalise same-sex marriage, saying the bill is designed to promote "an inclusive Australia, not a divided one.". Read more here: http://ab.co/1Mwc6ef',
'uploader_id': 'NewsOnABC',
'title': 'Marriage Equality: Warren Entsch introduces same sex marriage bill',
},
'add_ie': ['Youtube'],
'skip': 'Not accessible from Travis CI server',
}, {
'url': 'http://www.abc.net.au/news/2015-10-23/nab-lifts-interest-rates-following-westpac-and-cba/6880080',
'md5': 'b96eee7c9edf4fc5a358a0252881cc1f',
'info_dict': {
'id': '6880080',
'ext': 'mp3',
'title': 'NAB lifts interest rates, following Westpac and CBA',
'description': 'md5:f13d8edc81e462fce4a0437c7dc04728',
},
}, {
'url': 'http://www.abc.net.au/news/2015-10-19/6866214',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
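        # The page embeds its media info as an inline JavaScript push such as
        # inlineVideoData.push({...}); capture the JSON argument below.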
mobj = re.search(
r'inline(?P<type>Video|Audio|YouTube)Data\.push\((?P<json_data>[^)]+)\);',
webpage)
if mobj is None:
expired = self._html_search_regex(r'(?s)class="expired-(?:video|audio)".+?<span>(.+?)</span>', webpage, 'expired', None)
if expired:
raise ExtractorError('%s said: %s' % (self.IE_NAME, expired), expected=True)
raise ExtractorError('Unable to extract video urls')
urls_info = self._parse_json(
mobj.group('json_data'), video_id, transform_source=js_to_json)
if not isinstance(urls_info, list):
urls_info = [urls_info]
if mobj.group('type') == 'YouTube':
return self.playlist_result([
self.url_result(url_info['url']) for url_info in urls_info])
formats = [{
'url': url_info['url'],
'vcodec': url_info.get('codec') if mobj.group('type') == 'Video' else 'none',
'width': int_or_none(url_info.get('width')),
'height': int_or_none(url_info.get('height')),
'tbr': int_or_none(url_info.get('bitrate')),
'filesize': int_or_none(url_info.get('filesize')),
} for url_info in urls_info]
self._sort_formats(formats)
return {
'id': video_id,
'title': self._og_search_title(webpage),
'formats': formats,
'description': self._og_search_description(webpage),
'thumbnail': self._og_search_thumbnail(webpage),
}
class ABCIViewIE(InfoExtractor):
IE_NAME = 'abc.net.au:iview'
_VALID_URL = r'https?://iview\.abc\.net\.au/programs/[^/]+/(?P<id>[^/?#]+)'
# ABC iview programs are normally available for 14 days only.
_TESTS = [{
'url': 'http://iview.abc.net.au/programs/diaries-of-a-broken-mind/ZX9735A001S00',
'md5': 'cde42d728b3b7c2b32b1b94b4a548afc',
'info_dict': {
'id': 'ZX9735A001S00',
'ext': 'mp4',
'title': 'Diaries Of A Broken Mind',
'description': 'md5:7de3903874b7a1be279fe6b68718fc9e',
'upload_date': '20161010',
'uploader_id': 'abc2',
'timestamp': 1476064920,
},
'skip': 'Video gone',
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_params = self._parse_json(self._search_regex(
r'videoParams\s*=\s*({.+?});', webpage, 'video params'), video_id)
title = video_params.get('title') or video_params['seriesTitle']
stream = next(s for s in video_params['playlist'] if s.get('type') == 'program')
formats = self._extract_akamai_formats(stream['hds-unmetered'], video_id)
self._sort_formats(formats)
subtitles = {}
src_vtt = stream.get('captions', {}).get('src-vtt')
if src_vtt:
subtitles['en'] = [{
'url': src_vtt,
'ext': 'vtt',
}]
return {
'id': video_id,
'title': title,
'description': self._html_search_meta(['og:description', 'twitter:description'], webpage),
'thumbnail': self._html_search_meta(['og:image', 'twitter:image:src'], webpage),
'duration': int_or_none(video_params.get('eventDuration')),
'timestamp': parse_iso8601(video_params.get('pubDate'), ' '),
'series': video_params.get('seriesTitle'),
'series_id': video_params.get('seriesHouseNumber') or video_id[:7],
'episode_number': int_or_none(self._html_search_meta('episodeNumber', webpage, default=None)),
'episode': self._html_search_meta('episode_title', webpage, default=None),
'uploader_id': video_params.get('channel'),
'formats': formats,
'subtitles': subtitles,
}
| gpl-3.0 |
halfcrazy/sqlalchemy | lib/sqlalchemy/testing/requirements.py | 42 | 19949 | # testing/requirements.py
# Copyright (C) 2005-2015 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Global database feature support policy.
Provides decorators to mark tests requiring specific feature support from the
target database.
External dialect test suites should subclass SuiteRequirements
to provide specific inclusion/exclusions.
"""
from . import exclusions
from .. import util
class Requirements(object):
pass
class SuiteRequirements(Requirements):
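    # Each property returns an exclusions rule: exclusions.open() means the
    # feature is assumed available, exclusions.closed() means tests that
    # require it are skipped unless a dialect's suite overrides the property.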
@property
def create_table(self):
"""target platform can emit basic CreateTable DDL."""
return exclusions.open()
@property
def drop_table(self):
"""target platform can emit basic DropTable DDL."""
return exclusions.open()
@property
def foreign_keys(self):
"""Target database must support foreign keys."""
return exclusions.open()
@property
def on_update_cascade(self):
""""target database must support ON UPDATE..CASCADE behavior in
foreign keys."""
return exclusions.open()
@property
def non_updating_cascade(self):
"""target database must *not* support ON UPDATE..CASCADE behavior in
foreign keys."""
return exclusions.closed()
@property
def deferrable_fks(self):
return exclusions.closed()
@property
def on_update_or_deferrable_fks(self):
# TODO: exclusions should be composable,
# somehow only_if([x, y]) isn't working here, negation/conjunctions
# getting confused.
return exclusions.only_if(
lambda: self.on_update_cascade.enabled or
self.deferrable_fks.enabled
)
@property
def self_referential_foreign_keys(self):
"""Target database must support self-referential foreign keys."""
return exclusions.open()
@property
def foreign_key_ddl(self):
"""Target database must support the DDL phrases for FOREIGN KEY."""
return exclusions.open()
@property
def named_constraints(self):
"""target database must support names for constraints."""
return exclusions.open()
@property
def subqueries(self):
"""Target database must support subqueries."""
return exclusions.open()
@property
def offset(self):
"""target database can render OFFSET, or an equivalent, in a
SELECT.
"""
return exclusions.open()
@property
def bound_limit_offset(self):
"""target database can render LIMIT and/or OFFSET using a bound
parameter
"""
return exclusions.open()
@property
def boolean_col_expressions(self):
"""Target database must support boolean expressions as columns"""
return exclusions.closed()
@property
def nullsordering(self):
"""Target backends that support nulls ordering."""
return exclusions.closed()
@property
def standalone_binds(self):
"""target database/driver supports bound parameters as column expressions
without being in the context of a typed column.
"""
return exclusions.closed()
@property
def intersect(self):
"""Target database must support INTERSECT or equivalent."""
return exclusions.closed()
@property
def except_(self):
"""Target database must support EXCEPT or equivalent (i.e. MINUS)."""
return exclusions.closed()
@property
def window_functions(self):
"""Target database must support window functions."""
return exclusions.closed()
@property
def autoincrement_insert(self):
"""target platform generates new surrogate integer primary key values
when insert() is executed, excluding the pk column."""
return exclusions.open()
@property
def fetch_rows_post_commit(self):
"""target platform will allow cursor.fetchone() to proceed after a
COMMIT.
Typically this refers to an INSERT statement with RETURNING which
is invoked within "autocommit". If the row can be returned
after the autocommit, then this rule can be open.
"""
return exclusions.open()
@property
def empty_inserts(self):
"""target platform supports INSERT with no values, i.e.
INSERT DEFAULT VALUES or equivalent."""
return exclusions.only_if(
lambda config: config.db.dialect.supports_empty_insert or
config.db.dialect.supports_default_values,
"empty inserts not supported"
)
@property
def insert_from_select(self):
"""target platform supports INSERT from a SELECT."""
return exclusions.open()
@property
def returning(self):
"""target platform supports RETURNING."""
return exclusions.only_if(
lambda config: config.db.dialect.implicit_returning,
"%(database)s %(does_support)s 'returning'"
)
@property
def duplicate_names_in_cursor_description(self):
"""target platform supports a SELECT statement that has
the same name repeated more than once in the columns list."""
return exclusions.open()
@property
def denormalized_names(self):
"""Target database must have 'denormalized', i.e.
UPPERCASE as case insensitive names."""
return exclusions.skip_if(
lambda config: not config.db.dialect.requires_name_normalize,
"Backend does not require denormalized names."
)
@property
def multivalues_inserts(self):
"""target database must support multiple VALUES clauses in an
INSERT statement."""
return exclusions.skip_if(
lambda config: not config.db.dialect.supports_multivalues_insert,
"Backend does not support multirow inserts."
)
@property
def implements_get_lastrowid(self):
""""target dialect implements the executioncontext.get_lastrowid()
method without reliance on RETURNING.
"""
return exclusions.open()
@property
def emulated_lastrowid(self):
""""target dialect retrieves cursor.lastrowid, or fetches
from a database-side function after an insert() construct executes,
within the get_lastrowid() method.
Only dialects that "pre-execute", or need RETURNING to get last
inserted id, would return closed/fail/skip for this.
"""
return exclusions.closed()
@property
def dbapi_lastrowid(self):
""""target platform includes a 'lastrowid' accessor on the DBAPI
cursor object.
"""
return exclusions.closed()
@property
def views(self):
"""Target database must support VIEWs."""
return exclusions.closed()
@property
def schemas(self):
"""Target database must support external schemas, and have one
named 'test_schema'."""
return exclusions.closed()
@property
def sequences(self):
"""Target database must support SEQUENCEs."""
return exclusions.only_if([
lambda config: config.db.dialect.supports_sequences
], "no sequence support")
@property
def sequences_optional(self):
"""Target database supports sequences, but also optionally
as a means of generating new PK values."""
return exclusions.only_if([
lambda config: config.db.dialect.supports_sequences and
config.db.dialect.sequences_optional
], "no sequence support, or sequences not optional")
@property
def reflects_pk_names(self):
return exclusions.closed()
@property
def table_reflection(self):
return exclusions.open()
@property
def view_column_reflection(self):
"""target database must support retrieval of the columns in a view,
similarly to how a table is inspected.
This does not include the full CREATE VIEW definition.
"""
return self.views
@property
def view_reflection(self):
"""target database must support inspection of the full CREATE VIEW definition.
"""
return self.views
@property
def schema_reflection(self):
return self.schemas
@property
def primary_key_constraint_reflection(self):
return exclusions.open()
@property
def foreign_key_constraint_reflection(self):
return exclusions.open()
@property
def temp_table_reflection(self):
return exclusions.open()
@property
def temp_table_names(self):
"""target dialect supports listing of temporary table names"""
return exclusions.closed()
@property
def temporary_tables(self):
"""target database supports temporary tables"""
return exclusions.open()
@property
def temporary_views(self):
"""target database supports temporary views"""
return exclusions.closed()
@property
def index_reflection(self):
return exclusions.open()
@property
def unique_constraint_reflection(self):
"""target dialect supports reflection of unique constraints"""
return exclusions.open()
@property
def duplicate_key_raises_integrity_error(self):
"""target dialect raises IntegrityError when reporting an INSERT
with a primary key violation. (hint: it should)
"""
return exclusions.open()
@property
def unbounded_varchar(self):
"""Target database must support VARCHAR with no length"""
return exclusions.open()
@property
def unicode_data(self):
"""Target database/dialect must support Python unicode objects with
non-ASCII characters represented, delivered as bound parameters
as well as in result rows.
"""
return exclusions.open()
@property
def unicode_ddl(self):
"""Target driver must support some degree of non-ascii symbol
names.
"""
return exclusions.closed()
@property
def datetime_literals(self):
"""target dialect supports rendering of a date, time, or datetime as a
literal string, e.g. via the TypeEngine.literal_processor() method.
"""
return exclusions.closed()
@property
def datetime(self):
"""target dialect supports representation of Python
datetime.datetime() objects."""
return exclusions.open()
@property
def datetime_microseconds(self):
"""target dialect supports representation of Python
datetime.datetime() with microsecond objects."""
return exclusions.open()
@property
def datetime_historic(self):
"""target dialect supports representation of Python
datetime.datetime() objects with historic (pre 1970) values."""
return exclusions.closed()
@property
def date(self):
"""target dialect supports representation of Python
datetime.date() objects."""
return exclusions.open()
@property
def date_coerces_from_datetime(self):
"""target dialect accepts a datetime object as the target
of a date column."""
return exclusions.open()
@property
def date_historic(self):
"""target dialect supports representation of Python
datetime.datetime() objects with historic (pre 1970) values."""
return exclusions.closed()
@property
def time(self):
"""target dialect supports representation of Python
datetime.time() objects."""
return exclusions.open()
@property
def time_microseconds(self):
"""target dialect supports representation of Python
datetime.time() with microsecond objects."""
return exclusions.open()
@property
def binary_comparisons(self):
"""target database/driver can allow BLOB/BINARY fields to be compared
against a bound parameter value.
"""
return exclusions.open()
@property
def binary_literals(self):
"""target backend supports simple binary literals, e.g. an
expression like::
SELECT CAST('foo' AS BINARY)
Where ``BINARY`` is the type emitted from :class:`.LargeBinary`,
e.g. it could be ``BLOB`` or similar.
Basically fails on Oracle.
"""
return exclusions.open()
@property
def precision_numerics_general(self):
"""target backend has general support for moderately high-precision
numerics."""
return exclusions.open()
@property
def precision_numerics_enotation_small(self):
"""target backend supports Decimal() objects using E notation
to represent very small values."""
return exclusions.closed()
@property
def precision_numerics_enotation_large(self):
"""target backend supports Decimal() objects using E notation
to represent very large values."""
return exclusions.closed()
@property
def precision_numerics_many_significant_digits(self):
"""target backend supports values with many digits on both sides,
such as 319438950232418390.273596, 87673.594069654243
"""
return exclusions.closed()
@property
def precision_numerics_retains_significant_digits(self):
"""A precision numeric type will return empty significant digits,
i.e. a value such as 10.000 will come back in Decimal form with
the .000 maintained."""
return exclusions.closed()
@property
def precision_generic_float_type(self):
"""target backend will return native floating point numbers with at
least seven decimal places when using the generic Float type.
"""
return exclusions.open()
@property
def floats_to_four_decimals(self):
"""target backend can return a floating-point number with four
significant digits (such as 15.7563) accurately
(i.e. without FP inaccuracies, such as 15.75629997253418).
"""
return exclusions.open()
@property
def fetch_null_from_numeric(self):
"""target backend doesn't crash when you try to select a NUMERIC
value that has a value of NULL.
Added to support Pyodbc bug #351.
"""
return exclusions.open()
@property
def text_type(self):
"""Target database must support an unbounded Text() "
"type such as TEXT or CLOB"""
return exclusions.open()
@property
def empty_strings_varchar(self):
"""target database can persist/return an empty string with a
varchar.
"""
return exclusions.open()
@property
def empty_strings_text(self):
"""target database can persist/return an empty string with an
unbounded text."""
return exclusions.open()
@property
def selectone(self):
"""target driver must support the literal statement 'select 1'"""
return exclusions.open()
@property
def savepoints(self):
"""Target database must support savepoints."""
return exclusions.closed()
@property
def two_phase_transactions(self):
"""Target database must support two-phase transactions."""
return exclusions.closed()
@property
def update_from(self):
"""Target must support UPDATE..FROM syntax"""
return exclusions.closed()
@property
def update_where_target_in_subquery(self):
"""Target must support UPDATE where the same table is present in a
subquery in the WHERE clause.
This is an ANSI-standard syntax that apparently MySQL can't handle,
such as:
UPDATE documents SET flag=1 WHERE documents.title IN
(SELECT max(documents.title) AS title
FROM documents GROUP BY documents.user_id
)
"""
return exclusions.open()
@property
def mod_operator_as_percent_sign(self):
"""target database must use a plain percent '%' as the 'modulus'
operator."""
return exclusions.closed()
@property
def percent_schema_names(self):
"""target backend supports weird identifiers with percent signs
in them, e.g. 'some % column'.
this is a very weird use case but often has problems because of
DBAPIs that use python formatting. It's not a critical use
case either.
"""
return exclusions.closed()
@property
def order_by_label_with_expression(self):
"""target backend supports ORDER BY a column label within an
expression.
Basically this::
select data as foo from test order by foo || 'bar'
Lots of databases including Postgresql don't support this,
so this is off by default.
"""
return exclusions.closed()
@property
def unicode_connections(self):
"""Target driver must support non-ASCII characters being passed at
all.
"""
return exclusions.open()
@property
def graceful_disconnects(self):
"""Target driver must raise a DBAPI-level exception, such as
InterfaceError, when the underlying connection has been closed
and the execute() method is called.
"""
return exclusions.open()
@property
def skip_mysql_on_windows(self):
"""Catchall for a large variety of MySQL on Windows failures"""
return exclusions.open()
@property
def ad_hoc_engines(self):
"""Test environment must allow ad-hoc engine/connection creation.
DBs that scale poorly for many connections, even when closed, i.e.
Oracle, may use the "--low-connections" option which flags this
requirement as not present.
"""
return exclusions.skip_if(
lambda config: config.options.low_connections)
@property
def timing_intensive(self):
return exclusions.requires_tag("timing_intensive")
@property
def memory_intensive(self):
return exclusions.requires_tag("memory_intensive")
@property
def threading_with_mock(self):
"""Mark tests that use threading and mock at the same time - stability
issues have been observed with coverage + python 3.3
"""
return exclusions.skip_if(
lambda config: util.py3k and config.options.has_coverage,
"Stability issues with coverage + py3k"
)
@property
def no_coverage(self):
"""Test should be skipped if coverage is enabled.
This is to block tests that exercise libraries that seem to be
sensitive to coverage, such as Postgresql notice logging.
"""
return exclusions.skip_if(
lambda config: config.options.has_coverage,
"Issues observed when coverage is enabled"
)
def _has_mysql_on_windows(self, config):
return False
def _has_mysql_fully_case_sensitive(self, config):
return False
@property
def sqlite(self):
return exclusions.skip_if(lambda: not self._has_sqlite())
@property
def cextensions(self):
return exclusions.skip_if(
lambda: not self._has_cextensions(), "C extensions not installed"
)
def _has_sqlite(self):
from sqlalchemy import create_engine
try:
create_engine('sqlite://')
return True
except ImportError:
return False
def _has_cextensions(self):
try:
from sqlalchemy import cresultproxy, cprocessors
return True
except ImportError:
return False
| mit |
dragon788/wordfreq | tests/test.py | 1 | 5277 | from wordfreq import (
word_frequency, available_languages, cB_to_freq,
top_n_list, random_words, random_ascii_words, tokenize
)
from nose.tools import (
eq_, assert_almost_equal, assert_greater, raises
)
def test_freq_examples():
# Stopwords are most common in the correct language
assert_greater(word_frequency('the', 'en'),
word_frequency('de', 'en'))
assert_greater(word_frequency('de', 'es'),
word_frequency('the', 'es'))
def test_languages():
# Make sure the number of available languages doesn't decrease
avail = available_languages()
assert_greater(len(avail), 15)
# Laughter is the universal language
for lang in avail:
if lang not in {'zh', 'ja'}:
# we do not have enough Chinese data
# Japanese people do not lol
assert_greater(word_frequency('lol', lang), 0)
# Make up a weirdly verbose language code and make sure
# we still get it
new_lang_code = '%s-001-x-fake-extension' % lang.upper()
assert_greater(word_frequency('lol', new_lang_code), 0)
def test_twitter():
avail = available_languages('twitter')
assert_greater(len(avail), 14)
for lang in avail:
assert_greater(word_frequency('rt', lang, 'twitter'),
word_frequency('rt', lang, 'combined'))
def test_minimums():
eq_(word_frequency('esquivalience', 'en'), 0)
eq_(word_frequency('esquivalience', 'en', minimum=1e-6), 1e-6)
eq_(word_frequency('the', 'en', minimum=1), 1)
def test_most_common_words():
# If something causes the most common words in well-supported languages to
# change, we should know.
def get_most_common(lang):
"""
Return the single most common word in the language.
"""
return top_n_list(lang, 1)[0]
eq_(get_most_common('ar'), 'في')
eq_(get_most_common('de'), 'die')
eq_(get_most_common('en'), 'the')
eq_(get_most_common('es'), 'de')
eq_(get_most_common('fr'), 'de')
eq_(get_most_common('it'), 'di')
eq_(get_most_common('ja'), 'の')
eq_(get_most_common('nl'), 'de')
eq_(get_most_common('pt'), 'de')
eq_(get_most_common('ru'), 'в')
eq_(get_most_common('tr'), 'bir')
eq_(get_most_common('zh'), '的')
def test_language_matching():
freq = word_frequency('的', 'zh')
eq_(word_frequency('的', 'zh-TW'), freq)
eq_(word_frequency('的', 'zh-CN'), freq)
eq_(word_frequency('的', 'zh-Hant'), freq)
eq_(word_frequency('的', 'zh-Hans'), freq)
eq_(word_frequency('的', 'yue-HK'), freq)
eq_(word_frequency('的', 'cmn'), freq)
def test_cB_conversion():
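    # cB_to_freq converts centibels (hundredths of a bel) to a proportion:
    # 0 cB maps to 1.0 and every additional -100 cB divides the value by 10.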
eq_(cB_to_freq(0), 1.)
assert_almost_equal(cB_to_freq(-100), 0.1)
assert_almost_equal(cB_to_freq(-600), 1e-6)
@raises(ValueError)
def test_failed_cB_conversion():
cB_to_freq(1)
def test_tokenization():
# We preserve apostrophes within words, so "can't" is a single word in the
# data
eq_(tokenize("I don't split at apostrophes, you see.", 'en'),
['i', "don't", 'split', 'at', 'apostrophes', 'you', 'see'])
# Certain punctuation does not inherently split a word.
eq_(tokenize("Anything is possible at zombo.com", 'en'),
['anything', 'is', 'possible', 'at', 'zombo.com'])
# Splits occur after symbols, and at splitting punctuation such as hyphens.
eq_(tokenize('😂test', 'en'), ['😂', 'test'])
eq_(tokenize("flip-flop", 'en'), ['flip', 'flop'])
def test_casefolding():
eq_(tokenize('WEISS', 'de'), ['weiss'])
eq_(tokenize('weiß', 'de'), ['weiss'])
eq_(tokenize('İstanbul', 'tr'), ['istanbul'])
eq_(tokenize('SIKISINCA', 'tr'), ['sıkısınca'])
def test_phrase_freq():
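    # For a hyphenated phrase the reciprocals of the token frequencies add up:
    # 1/f('flip-flop') == 1/f('flip') + 1/f('flop'), as asserted below.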
ff = word_frequency("flip-flop", 'en')
assert_greater(ff, 0)
assert_almost_equal(
1.0 / ff,
1.0 / word_frequency('flip', 'en') + 1.0 / word_frequency('flop', 'en')
)
def test_not_really_random():
# If your xkcd-style password comes out like this, maybe you shouldn't
# use it
eq_(random_words(nwords=4, lang='en', bits_per_word=0),
'the the the the')
# This not only tests random_ascii_words, it makes sure we didn't end
# up with 'eos' as a very common Japanese word
eq_(random_ascii_words(nwords=4, lang='ja', bits_per_word=0),
'rt rt rt rt')
@raises(ValueError)
def test_not_enough_ascii():
random_ascii_words(lang='zh')
def test_ar():
# Remove tatweels
eq_(
tokenize('متــــــــعب', 'ar'),
['متعب']
)
# Remove combining marks
eq_(
tokenize('حَرَكَات', 'ar'),
['حركات']
)
eq_(
tokenize('\ufefb', 'ar'), # An Arabic ligature...
['\u0644\u0627'] # ...that is affected by NFKC normalization
)
def test_ideographic_fallback():
# Try tokenizing Chinese text as English -- it should remain stuck together.
eq_(tokenize('中国文字', 'en'), ['中国文字'])
# When Japanese is tagged with the wrong language, it will be split
# at script boundaries.
ja_text = 'ひらがなカタカナromaji'
eq_(
tokenize(ja_text, 'en'),
['ひらがな', 'カタカナ', 'romaji']
)
| mit |
exelearning/iteexe | nevow/url.py | 14 | 16868 | # -*- test-case-name: "nevow.test.test_url" -*-
# Copyright (c) 2004 Divmod.
# See LICENSE for details.
"""URL parsing, construction and rendering.
"""
from __future__ import generators
import weakref
from nevow import inevow
from nevow.stan import raw
from nevow.flat import flatten, serialize
from nevow.context import WovenContext
import urlparse
import urllib
from twisted.web.util import redirectTo
def _uqf(query):
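    # Split the query string on '&' and unquote each piece: 'a=1&b' yields
    # ('a', '1') and ('b', None).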
for x in query.split('&'):
if '=' in x:
yield tuple( [raw(urllib.unquote(s)) for s in x.split('=')] )
elif x:
yield (raw(urllib.unquote(x)), None)
unquerify = lambda query: list(_uqf(query))
class URL(object):
def __init__(self, scheme='http', netloc='localhost', pathsegs=None, querysegs=None, fragment=''):
self.scheme = scheme
self.netloc = netloc
if pathsegs is None:
pathsegs = ['']
self._qpathlist = pathsegs
if querysegs is None:
querysegs = []
self._querylist = querysegs
self.fragment = fragment
path = property(lambda self: '/'.join(self._qpathlist))
def __eq__(self, other):
if not isinstance(other, self.__class__):
return NotImplemented
for attr in ['scheme', 'netloc', '_qpathlist', '_querylist', 'fragment']:
if getattr(self, attr) != getattr(other, attr):
return False
return True
def __ne__(self, other):
if not isinstance(other, self.__class__):
return NotImplemented
return not self.__eq__(other)
query = property(
lambda self: [y is None and x or '='.join((x,y))
for (x,y) in self._querylist]
)
def _pathMod(self, newpathsegs, newqueryparts):
return self.__class__(self.scheme, self.netloc, newpathsegs, newqueryparts, self.fragment)
## class methods used to build URL objects ##
def fromString(klass, st):
scheme, netloc, path, query, fragment = urlparse.urlsplit(st)
u = klass(
scheme, netloc,
[raw(urllib.unquote(seg)) for seg in path.split('/')[1:]],
unquerify(query), fragment)
return u
fromString = classmethod(fromString)
def fromRequest(klass, request):
import warnings
warnings.warn(
"[v0.4] URL.fromRequest will change behaviour soon. Use fromContext instead",
DeprecationWarning,
stacklevel=2)
uri = request.prePathURL()
if '?' in request.uri:
uri += '?' + request.uri.split('?')[-1]
return klass.fromString(uri)
fromRequest = classmethod(fromRequest)
def fromContext(klass, context):
'''Create a URL object that represents the current URL in the traversal
process.'''
request = inevow.IRequest(context)
uri = request.prePathURL()
if '?' in request.uri:
uri += '?' + request.uri.split('?')[-1]
return klass.fromString(uri)
fromContext = classmethod(fromContext)
## path manipulations ##
def pathList(self, unquote=False, copy=True):
result = self._qpathlist
if unquote:
result = map(urllib.unquote, result)
if copy:
result = result[:]
return result
def sibling(self, path):
"""Construct a url where the given path segment is a sibling of this url
"""
l = self.pathList()
l[-1] = path
return self._pathMod(l, self.queryList(0))
def child(self, path):
"""Construct a url where the given path segment is a child of this url
"""
l = self.pathList()
if l[-1] == '':
l[-1] = path
else:
l.append(path)
return self._pathMod(l, self.queryList(0))
def isRoot(self, pathlist):
return (pathlist == [''] or not pathlist)
def parent(self):
import warnings
warnings.warn(
"[v0.4] URL.parent has been deprecated and replaced with parentdir (which does what parent used to do) and up (which does what you probably thought parent would do ;-))",
DeprecationWarning,
stacklevel=2)
return self.parentdir()
def here(self):
import warnings
warnings.warn(
"URL.here() is deprecated, please use URL.curdir() instead!",
DeprecationWarning,
stacklevel=2)
return self.curdir()
def curdir(self):
"""Construct a url which is a logical equivalent to '.'
of the current url. For example:
>>> print URL.fromString('http://foo.com/bar').curdir()
http://foo.com/
>>> print URL.fromString('http://foo.com/bar/').curdir()
http://foo.com/bar/
"""
l = self.pathList()
if l[-1] != '':
l[-1] = ''
return self._pathMod(l, self.queryList(0))
def up(self):
"""Pop a URL segment from this url.
"""
l = self.pathList()
if len(l):
l.pop()
return self._pathMod(l, self.queryList(0))
def parentdir(self):
"""Construct a url which is the parent of this url's directory;
This is logically equivalent to '..' of the current url.
For example:
>>> print URL.fromString('http://foo.com/bar/file').parentdir()
http://foo.com/
>>> print URL.fromString('http://foo.com/bar/dir/').parentdir()
http://foo.com/bar/
"""
l = self.pathList()
if not self.isRoot(l) and l[-1] == '':
del l[-2]
else:
            # we are a file, such as http://example.com/foo/bar; our
# parent directory is http://example.com/
l.pop()
if self.isRoot(l): l.append('')
else: l[-1] = ''
return self._pathMod(l, self.queryList(0))
def click(self, href):
"""Build a path by merging 'href' and this path.
Return a path which is the URL where a browser would presumably
take you if you clicked on a link with an 'href' as given.
"""
scheme, netloc, path, query, fragment = urlparse.urlsplit(href)
if (scheme, netloc, path, query, fragment) == ('', '', '', '', ''):
return self
query = unquerify(query)
if scheme:
if path and path[0] == '/':
path = path[1:]
return URL(scheme, netloc, map(raw, path.split('/')), query, fragment)
else:
scheme = self.scheme
if not netloc:
netloc = self.netloc
if not path:
path = self.path
if not query:
query = self._querylist
if not fragment:
fragment = self.fragment
else:
if path[0] == '/':
path = path[1:]
else:
l = self.pathList()
l[-1] = path
path = '/'.join(l)
path = normURLPath(path)
return URL(scheme, netloc, map(raw, path.split('/')), query, fragment)
## query manipulation ##
def queryList(self, copy=True):
"""Return current query as a list of tuples."""
if copy:
return self._querylist[:]
return self._querylist
# FIXME: here we call str() on query arg values: is this right?
def add(self, name, value=None):
"""Add a query argument with the given value
None indicates that the argument has no value
"""
q = self.queryList()
q.append((name, value))
return self._pathMod(self.pathList(copy=False), q)
def replace(self, name, value=None):
"""Remove all existing occurrances of the query
argument 'name', *if it exists*, then add the argument
with the given value.
None indicates that the argument has no value
"""
ql = self.queryList(False)
## Preserve the original position of the query key in the list
i = 0
for (k, v) in ql:
if k == name:
break
i += 1
q = filter(lambda x: x[0] != name, ql)
q.insert(i, (name, value))
return self._pathMod(self.pathList(copy=False), q)
def remove(self, name):
"""Remove all query arguments with the given name
"""
return self._pathMod(
self.pathList(copy=False),
filter(
lambda x: x[0] != name, self.queryList(False)))
def clear(self, name=None):
"""Remove all existing query arguments
"""
if name is None:
q = []
else:
q = filter(lambda x: x[0] != name, self.queryList(False))
return self._pathMod(self.pathList(copy=False), q)
## scheme manipulation ##
def secure(self, secure=True, port=None):
"""Modify the scheme to https/http and return the new URL.
@param secure: choose between https and http, default to True (https)
@param port: port, override the scheme's normal port
"""
# Choose the scheme and default port.
if secure:
scheme, defaultPort = 'https', 443
else:
scheme, defaultPort = 'http', 80
# Rebuild the netloc with port if not default.
netloc = self.netloc.split(':',1)[0]
if port is not None and port != defaultPort:
netloc = '%s:%d' % (netloc, port)
return self.__class__(scheme, netloc, self._qpathlist, self._querylist, self.fragment)
## fragment/anchor manipulation
def anchor(self, anchor=None):
'''Modify the fragment/anchor and return a new URL. An anchor of
        None (the default) or '' (the empty string) will remove the current anchor.
'''
return self.__class__(self.scheme, self.netloc, self._qpathlist, self._querylist, anchor)
## object protocol override ##
def __str__(self):
return flatten(self)
def __repr__(self):
return (
'URL(scheme=%r, netloc=%r, pathsegs=%r, querysegs=%r, fragment=%r)'
% (self.scheme, self.netloc, self._qpathlist, self._querylist, self.fragment))
def normURLPath(path):
    '''Normalise the URL path by resolving segments of '.' and '..'.
'''
segs = []
addEmpty = False
pathSegs = path.split('/')
for seg in pathSegs:
if seg == '.':
pass
elif seg == '..':
if segs:
segs.pop()
else:
segs.append(seg)
if pathSegs[-1:] in (['.'],['..']):
segs.append('')
return '/'.join(segs)
class URLOverlay(object):
def __init__(self, urlaccessor, doc=None, dolater=None, keep=None):
"""A Proto like object for abstractly specifying urls in stan trees.
@param urlaccessor: a function which takes context and returns a URL
        @param doc: a string documenting this URLOverlay instance's usage
@param dolater: a list of tuples of (command, args, kw) where
command is a string, args is a tuple and kw is a dict; when the
URL is returned from urlaccessor during rendering, these
methods will be applied to the URL in order
"""
if doc is not None:
self.__doc__ = doc
self.urlaccessor = urlaccessor
if dolater is None:
dolater= []
self.dolater = dolater
if keep is None:
keep = []
self._keep = keep
def addCommand(self, cmd, args, kw):
dl = self.dolater[:]
dl.append((cmd, args, kw))
return self.__class__(self.urlaccessor, dolater=dl, keep=self._keep[:])
def keep(self, *args):
"""A list of arguments to carry over from the previous url.
"""
K = self._keep[:]
K.extend(args)
return self.__class__(self.urlaccessor, dolater=self.dolater[:], keep=K)
def createForwarder(cmd):
return lambda self, *args, **kw: self.addCommand(cmd, args, kw)
for cmd in [
'sibling', 'child', 'parent', 'here', 'curdir', 'click', 'add',
'replace', 'clear', 'remove', 'secure', 'anchor', 'up', 'parentdir'
]:
setattr(URLOverlay, cmd, createForwarder(cmd))
def hereaccessor(context):
return URL.fromContext(context).clear()
here = URLOverlay(
hereaccessor,
"A lazy url construction object representing the current page's URL. "
"The URL which will be used will be determined at render time by "
"looking at the request. Any query parameters will be "
"cleared automatically.")
def gethereaccessor(context):
return URL.fromContext(context)
gethere = URLOverlay(gethereaccessor,
"A lazy url construction object like 'here' except query parameters "
"are preserved. Useful for constructing a URL to this same object "
"when query parameters need to be preserved but modified slightly.")
def viewhereaccessor(context):
U = hereaccessor(context)
i = 1
while True:
try:
params = context.locate(inevow.IViewParameters, depth=i)
except KeyError:
break
for (cmd, args, kw) in iter(params):
U = getattr(U, cmd)(*args, **kw)
i += 1
return U
viewhere = URLOverlay(viewhereaccessor,
"A lazy url construction object like 'here' IViewParameters objects "
"are looked up in the context during rendering. Commands provided by "
"any found IViewParameters objects are applied to the URL object before "
"rendering it.")
def rootaccessor(context):
req = context.locate(inevow.IRequest)
root = req.getRootURL()
if root is None:
return URL.fromContext(context).click('/')
return URL.fromString(root)
root = URLOverlay(rootaccessor,
"A lazy URL construction object representing the root of the "
"application. Normally, this will just be the logical '/', but if "
"request.rememberRootURL() has previously been used in "
"the request traversal process, the url of the resource "
"where rememberRootURL was called will be used instead.")
def URLSerializer(original, context):
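    # Flatten a URL piece by piece: scheme and netloc, then each path segment,
    # query pair and the fragment, serialized in a child context flagged with
    # inURL=True.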
urlContext = WovenContext(parent=context, precompile=context.precompile, inURL=True)
if original.scheme:
yield "%s://%s" % (original.scheme, original.netloc)
for pathsegment in original._qpathlist:
yield '/'
yield serialize(pathsegment, urlContext)
query = original._querylist
if query:
yield '?'
first = True
for key, value in query:
if not first:
yield '&'
else:
first = False
yield serialize(key, urlContext)
if value is not None:
yield '='
yield serialize(value, urlContext)
if original.fragment:
yield "#"
yield serialize(original.fragment, urlContext)
def URLOverlaySerializer(original, context):
if context.precompile:
yield original
else:
url = original.urlaccessor(context)
for (cmd, args, kw) in original.dolater:
url = getattr(url, cmd)(*args, **kw)
req = context.locate(inevow.IRequest)
for key in original._keep:
for value in req.args.get(key, []):
url = url.add(key, value)
yield serialize(url, context)
## This is totally unfinished and doesn't work yet.
#class IURLGenerator(compy.Interface):
# pass
class URLGenerator:
#__implements__ = IURLGenerator,
def __init__(self):
self._objmap = weakref.WeakKeyDictionary()
def objectMountedAt(self, obj, at):
self._objmap[obj] = at
def url(self, obj):
try:
return self._objmap.get(obj, None)
except TypeError:
return None
__call__ = url
def __getstate__(self):
d = self.__dict__.copy()
del d['_objmap']
return d
def __setstate__(self, state):
self.__dict__ = state
self._objmap = weakref.WeakKeyDictionary()
class URLRedirectAdapter:
"""Adapt URL objects so that trying to render one causes a HTTP
redirect.
"""
__implements__ = inevow.IResource,
def __init__(self, original):
self.original = original
def locateChild(self, ctx, segments):
return self, ()
def renderHTTP(self, ctx):
# The URL may have deferred parts so flatten it
u = flatten(self.original, ctx)
# It might also be relative so resolve it against the current URL
# and flatten it again.
u = flatten(URL.fromContext(ctx).click(u), ctx)
return redirectTo(u, inevow.IRequest(ctx))
| gpl-2.0 |
ddd332/presto | presto-docs/target/sphinx/docutils/writers/xetex/__init__.py | 4 | 5079 | #!/usr/bin/env python
# -*- coding: utf8 -*-
# :Author: Günter Milde <[email protected]>
# :Revision: $Revision: 7389 $
# :Date: $Date: 2012-03-30 13:58:21 +0200 (Fre, 30 Mär 2012) $
# :Copyright: © 2010 Günter Milde.
# :License: Released under the terms of the `2-Clause BSD license`_, in short:
#
# Copying and distribution of this file, with or without modification,
# are permitted in any medium without royalty provided the copyright
# notice and this notice are preserved.
# This file is offered as-is, without any warranty.
#
# .. _2-Clause BSD license: http://www.spdx.org/licenses/BSD-2-Clause
"""
XeLaTeX document tree Writer.
A variant of Docutils' standard 'latex2e' writer producing output
suited for processing with XeLaTeX (http://tug.org/xetex/).
"""
__docformat__ = 'reStructuredText'
import os
import os.path
import re
import docutils
from docutils import frontend, nodes, utils, writers, languages
from docutils.writers import latex2e
class Writer(latex2e.Writer):
"""A writer for Unicode-based LaTeX variants (XeTeX, LuaTeX)"""
supported = ('xetex','xelatex','luatex')
"""Formats this writer supports."""
default_template = 'xelatex.tex'
default_preamble = '\n'.join([
r'% Linux Libertine (free, wide coverage, not only for Linux)',
r'\setmainfont{Linux Libertine O}',
r'\setsansfont{Linux Biolinum O}',
r'\setmonofont[HyphenChar=None]{DejaVu Sans Mono}',
])
config_section = 'xetex writer'
config_section_dependencies = ('writers', 'latex2e writer')
settings_spec = frontend.filter_settings_spec(
latex2e.Writer.settings_spec,
'font_encoding',
template=('Template file. Default: "%s".' % default_template,
['--template'], {'default': default_template, 'metavar': '<file>'}),
latex_preamble=('Customization by LaTeX code in the preamble. '
            'Default: select "Linux Libertine" fonts.',
['--latex-preamble'],
{'default': default_preamble}),
)
def __init__(self):
latex2e.Writer.__init__(self)
self.settings_defaults.update({'fontencoding': ''}) # use default (EU1 or EU2)
self.translator_class = XeLaTeXTranslator
class Babel(latex2e.Babel):
"""Language specifics for XeTeX.
Use `polyglossia` instead of `babel` and adapt settings.
"""
language_codes = latex2e.Babel.language_codes.copy()
# Additionally supported or differently named languages:
language_codes.update({
# code Polyglossia-name comment
'cop': 'coptic',
'de': 'german', # new spelling (de_1996)
'de_1901': 'ogerman', # old spelling
'dv': 'divehi', # Maldivian
'dsb': 'lsorbian',
'el_polyton': 'polygreek',
'fa': 'farsi',
'grc': 'ancientgreek',
'hsb': 'usorbian',
'sh-cyrl': 'serbian', # Serbo-Croatian, Cyrillic script
'sh-latn': 'croatian', # Serbo-Croatian, Latin script
'sq': 'albanian',
'sr': 'serbian', # Cyrillic script (sr-cyrl)
'th': 'thai',
'vi': 'vietnamese',
# zh-latn: ??? # Chinese Pinyin
})
# Languages without Polyglossia support:
for key in ('af', # 'afrikaans',
'de_at', # 'naustrian',
'de_at_1901', # 'austrian',
'fr_ca', # 'canadien',
'grc_ibycus', # 'ibycus', (Greek Ibycus encoding)
'sr-latn', # 'serbian script=latin'
):
del(language_codes[key])
def __init__(self, language_code, reporter):
self.language_code = language_code
self.reporter = reporter
self.language = self.language_name(language_code)
self.otherlanguages = {}
self.warn_msg = 'Language "%s" not supported by XeTeX (polyglossia).'
self.quote_index = 0
self.quotes = ('"', '"')
# language dependent configuration:
# double quotes are "active" in some languages (e.g. German).
self.literal_double_quote = u'"' # TODO: use \textquotedbl
def __call__(self):
setup = [r'\usepackage{polyglossia}',
r'\setdefaultlanguage{%s}' % self.language]
if self.otherlanguages:
setup.append(r'\setotherlanguages{%s}' %
','.join(self.otherlanguages.keys()))
return '\n'.join(setup)
class XeLaTeXTranslator(latex2e.LaTeXTranslator):
def __init__(self, document):
self.is_xetex = True # typeset with XeTeX or LuaTeX engine
latex2e.LaTeXTranslator.__init__(self, document, Babel)
if self.latex_encoding == 'utf8':
self.requirements.pop('_inputenc', None)
else:
self.requirements['_inputenc'] = (r'\XeTeXinputencoding %s '
% self.latex_encoding)
| apache-2.0 |
wasade/qiime | tests/test_plot_semivariogram.py | 1 | 12517 | #!/usr/bin/env python
__author__ = "Antonio Gonzalez Pena"
__copyright__ = "Copyright 2011, The QIIME Project"
__credits__ = ["Antonio Gonzalez Pena"]
__license__ = "GPL"
__version__ = "1.8.0-dev"
__maintainer__ = "Antonio Gonzalez Pena"
__email__ = "[email protected]"
from qiime.plot_semivariogram import hist_bins, fit_semivariogram
from unittest import TestCase, main
from numpy.testing import assert_almost_equal
from numpy import asarray
class FunctionTests(TestCase):
"""Tests of top-level functions"""
def test_hist_bins(self):
""" test hist_bins """
x = asarray(
[3.,
4.12310563,
4.24264069,
4.47213595,
5.,
5.,
5.,
5.,
5.38516481,
5.65685425,
6.40312424,
6.40312424,
6.70820393,
7.,
7.07106781,
7.07106781,
7.28010989,
7.81024968,
8.,
8.06225775,
8.06225775,
8.24621125,
9.,
9.48683298,
9.48683298,
9.89949494,
9.89949494,
10.,
10.04987562,
10.04987562])
bins = [2.0, 5.0, 7.5, 10.0, 11.0]
hist_res = [0., 8., 9., 11., 2.]
vals, hist = hist_bins(bins, x)
assert_almost_equal(vals, bins)
assert_almost_equal(hist, hist_res)
def test_reorder_samples(self):
""" test that regural and irregular order give the same results """
model = "linear"
# Test normal order
x_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
x = asarray(
[[0.0,
1.0,
2.0,
3.0,
4.0,
5.0],
[0.0,
0.0,
6.0,
7.0,
8.0,
9.0],
[0.0,
0.0,
0.0,
10.0,
11.0,
12.0],
[0.0,
0.0,
0.0,
0.0,
13.0,
14.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
15.0]])
y_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
y = asarray(
[[0.0,
1.0,
2.0,
3.0,
4.0,
5.0],
[0.0,
0.0,
6.0,
7.0,
8.0,
9.0],
[0.0,
0.0,
0.0,
10.0,
11.0,
12.0],
[0.0,
0.0,
0.0,
0.0,
13.0,
14.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
15.0]])
vals_exp = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 7.0]
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (x_lbl, x), model, [])
assert_almost_equal(x_vals, vals_exp)
assert_almost_equal(y_vals, vals_exp)
assert_almost_equal(x_fit, vals_exp)
assert_almost_equal(y_fit, vals_exp)
# Test altered
model = "linear"
# order = [5, 1, 3, 4, 0, 2]
x_lbl = ['s6', 's2', 's4', 's5', 's1', 's3']
x = asarray(
[[0.0,
0.0,
0.0,
0.0,
0.0,
0.0],
[9.0,
0.0,
7.0,
8.0,
0.0,
6.0],
[14.0,
0.0,
0.0,
13.0,
0.0,
0.0],
[15.0,
0.0,
0.0,
0.0,
0.0,
0.0],
[5.0,
1.0,
3.0,
4.0,
0.0,
2.0],
[12.0,
0.0,
10.0,
11.0,
0.0,
0.0]])
y_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
y = asarray(
[[0.0,
1.0,
2.0,
3.0,
4.0,
5.0],
[0.0,
0.0,
6.0,
7.0,
8.0,
9.0],
[0.0,
0.0,
0.0,
10.0,
11.0,
12.0],
[0.0,
0.0,
0.0,
0.0,
13.0,
14.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
15.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
0.0]])
vals_exp = [
1.,
2.,
3.,
4.,
5.,
6.,
7.,
8.,
9.,
10.,
11.,
12.,
13.,
14.,
15.]
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (y_lbl, y), model, [])
assert_almost_equal(x_vals, vals_exp)
assert_almost_equal(y_vals, vals_exp)
assert_almost_equal(x_fit, vals_exp)
assert_almost_equal(y_fit, vals_exp)
def test_models_semivariograms(self):
""" test the semivariogram fitting models """
# All models should return the same x_vals, y_vals, x_fit
# because we are using the same x
x_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
x = asarray(
[[0.0,
1.0,
2.0,
3.0,
4.0,
5.0],
[0.0,
0.0,
6.0,
7.0,
8.0,
9.0],
[0.0,
0.0,
0.0,
10.0,
11.0,
12.0],
[0.0,
0.0,
0.0,
0.0,
13.0,
14.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
15.0]])
vals_exp = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 7.0]
model = "nugget"
y_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
y = asarray(
[[0.0,
5.0,
5.0,
5.0,
5.0,
5.0],
[0.0,
0.0,
5.0,
5.0,
5.0,
5.0],
[0.0,
0.0,
0.0,
5.0,
5.0,
5.0],
[0.0,
0.0,
0.0,
0.0,
5.0,
5.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
5.0]])
y_vals_exp = [2.3000000143667378] * (len(x) * 2)
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (x_lbl, x), model, [])
assert_almost_equal(x_vals, vals_exp)
assert_almost_equal(y_vals, vals_exp)
assert_almost_equal(x_fit, vals_exp)
assert_almost_equal(y_fit, y_vals_exp)
model = "exponential"
y_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
y = asarray(
[[0.0,
1.0,
22.0,
33.0,
44.0,
55.0],
[0.0,
0.0,
66.0,
77.0,
88.0,
99.0],
[0.0,
0.0,
0.0,
1010.0,
1111.0,
1212.0],
[0.0,
0.0,
0.0,
0.0,
1313.0,
1414.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
1515.0]])
x_vals_exp = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 7.0]
y_vals_exp = [0.0, 0.0, 0.0, 0.0, 1.0, 22.0, 33.0, 44.0, 66.0, 77.0]
x_fit_exp = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 7.0]
y_fit_exp = [-1.481486808707005, -1.481486808707005, -1.481486808707005,
-1.481486808707005, 9.72783464904061, 20.937152199747878,
32.14646584698613, 43.355775583612704, 65.7743833464588,
76.983681369107]
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (y_lbl, y), model, [])
assert_almost_equal(x_vals, x_vals_exp)
assert_almost_equal(y_vals, y_vals_exp)
assert_almost_equal(x_fit, x_fit_exp)
assert_almost_equal(y_fit, y_fit_exp, decimal=2)
model = "gaussian"
y_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
y = asarray(
[[0.0,
1.0,
22.0,
33.0,
44.0,
55.0],
[0.0,
0.0,
66.0,
77.0,
88.0,
99.0],
[0.0,
0.0,
0.0,
1010.0,
1111.0,
1212.0],
[0.0,
0.0,
0.0,
0.0,
1313.0,
1414.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
1515.0]])
y_vals_exp = [0.17373844, 0.17373844, 0.17373844, 0.17373844,
0.54915099, 1.5597716 , 2.91606171, 4.2880578 ,
6.24509872, 6.74690541]
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (x_lbl, x), model, [])
assert_almost_equal(x_vals, vals_exp)
assert_almost_equal(y_vals, vals_exp)
assert_almost_equal(x_fit, vals_exp)
assert_almost_equal(y_fit, y_vals_exp, decimal=2)
model = "periodic"
y_lbl = ['s1', 's2', 's3', 's4', 's5', 's6']
y = asarray(
[[0.0,
1.0,
22.0,
33.0,
44.0,
55.0],
[0.0,
0.0,
66.0,
77.0,
88.0,
99.0],
[0.0,
0.0,
0.0,
1010.0,
1111.0,
1212.0],
[0.0,
0.0,
0.0,
0.0,
1313.0,
1414.0],
[0.0,
0.0,
0.0,
0.0,
0.0,
1515.0]])
y_vals_exp = [0.2324873886681871, 0.2324873886681871,
0.2324873886681871, 0.2324873886681871,
0.5528698895985695, 1.4508010363573784,
2.7491053124879112, 4.191607473962063,
6.39840364731269, 6.727263101495738]
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (x_lbl, x), model, [])
assert_almost_equal(x_vals, vals_exp)
assert_almost_equal(y_vals, vals_exp)
assert_almost_equal(x_fit, vals_exp)
assert_almost_equal(y_fit, y_vals_exp, decimal=2)
model = "linear"
y_lbl = x_lbl
y = x
x_vals, y_vals, x_fit, y_fit, func_text = fit_semivariogram(
(x_lbl, x), (x_lbl, x), model, [])
assert_almost_equal(x_vals, vals_exp)
assert_almost_equal(y_vals, vals_exp)
assert_almost_equal(x_fit, vals_exp)
assert_almost_equal(y_fit, vals_exp, decimal=2)
# run tests if called from command line
if __name__ == '__main__':
main()
| gpl-2.0 |
auready/docker-py | docker/utils/socket.py | 10 | 1771 | import errno
import os
import select
import struct
import six
try:
from ..transport import NpipeSocket
except ImportError:
NpipeSocket = type(None)
class SocketError(Exception):
pass
def read(socket, n=4096):
"""
Reads at most n bytes from socket
"""
recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)
# wait for data to become available
if not isinstance(socket, NpipeSocket):
select.select([socket], [], [])
try:
if hasattr(socket, 'recv'):
return socket.recv(n)
return os.read(socket.fileno(), n)
except EnvironmentError as e:
if e.errno not in recoverable_errors:
raise
def read_exactly(socket, n):
"""
Reads exactly n bytes from socket
Raises SocketError if there isn't enough data
"""
data = six.binary_type()
while len(data) < n:
next_data = read(socket, n - len(data))
if not next_data:
raise SocketError("Unexpected EOF")
data += next_data
return data
def next_frame_size(socket):
"""
Returns the size of the next frame of data waiting to be read from socket,
according to the protocol defined here:
https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/attach-to-a-container
"""
try:
data = read_exactly(socket, 8)
except SocketError:
return 0
_, actual = struct.unpack('>BxxxL', data)
return actual
def frames_iter(socket):
"""
Returns a generator of frames read from socket
"""
while True:
n = next_frame_size(socket)
if n == 0:
break
while n > 0:
result = read(socket, n)
n -= len(result)
yield result
| apache-2.0 |
SuperKogito/Cryptos | CryptosCode/ExitPage.py | 1 | 2699 | # -*- coding: utf-8 -*-
"""
Created on Sat Sep 23 19:05:35 2017
@author: SuperKogito
"""
# Define imports
import tkinter as tk
class ExitPage(tk.Frame):
""" Exit page class """
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
self.configure(background='black')
# Define main frame
self.main_frame = tk.Frame(self, background='black')
self.main_frame.pack(expand=1)
self.main_frame.pack()
# Define upper frame
upper_frame = tk.Frame(self.main_frame, width=300, height=50,
background='black')
upper_frame.grid(column=0, row=0)
# Define label
        exit_string = '\n\nAre you sure that you want to exit Cryptos?\n'
exit_label = tk.Label(upper_frame, text=exit_string,
background='black', foreground="white")
exit_label.pack(side="top", fill="x", pady=10)
# Define middle frame
middle_frame = tk.Frame(self.main_frame, background='black',
width=300, height=50)
middle_frame.grid(column=0, row=1)
# Define cancel button
cancel_button = tk.Button(middle_frame, text="Cancel",
command=lambda: controller.show_frame("PageOne"))
cancel_button.pack(side=tk.RIGHT)
# Define yes button
yes_button = tk.Button(middle_frame, text="Yes",
command=lambda: controller.quit_func())
yes_button.pack(side=tk.RIGHT, padx=5, pady=5)
# Configure the buttons
cancel_button.configure(background='black', foreground='white',
activebackground='#0080ff',
activeforeground='white')
yes_button.configure(background='black', foreground='white',
activebackground='#0080ff',
activeforeground='white')
# Define lower frame
lower_frame = tk.Frame(self.main_frame, background='black',
width=300, height=50)
lower_frame.grid(column=0, row=2)
# Define label
dev_text = (
"\nDeveloped by: SuperKogito\n"
"Gihthub repository: "
"https://github.com/SuperKogito/Cryptos"
)
self.developper_text = tk.Label(lower_frame,
text=dev_text,
background='black',
foreground='White')
self.developper_text.pack(side="bottom")
| mit |
scorphus/thefuck | thefuck/specific/git.py | 4 | 1128 | import re
from decorator import decorator
from ..utils import is_app
from ..shells import shell
@decorator
def git_support(fn, command):
"""Resolves git aliases and supports testing for both git and hub."""
# supports GitHub's `hub` command
# which is recommended to be used with `alias git=hub`
# but at this point, shell aliases have already been resolved
if not is_app(command, 'git', 'hub'):
return False
# perform git aliases expansion
if 'trace: alias expansion:' in command.output:
search = re.search("trace: alias expansion: ([^ ]*) => ([^\n]*)",
command.output)
alias = search.group(1)
# by default git quotes everything, for example:
# 'commit' '--amend'
# which is surprising and does not allow to easily test for
# eg. 'git commit'
expansion = ' '.join(shell.quote(part)
for part in shell.split_command(search.group(2)))
new_script = command.script.replace(alias, expansion)
command = command.update(script=new_script)
return fn(command)
| mit |
astrofrog/numpy | numpy/matlib.py | 90 | 9494 | import numpy as np
from numpy.matrixlib.defmatrix import matrix, asmatrix
# need * as we're copying the numpy namespace
from numpy import *
__version__ = np.__version__
__all__ = np.__all__[:] # copy numpy namespace
__all__ += ['rand', 'randn', 'repmat']
def empty(shape, dtype=None, order='C'):
"""
Return a new matrix of given shape and type, without initializing entries.
Parameters
----------
shape : int or tuple of int
Shape of the empty matrix.
dtype : data-type, optional
Desired output data-type.
order : {'C', 'F'}, optional
Whether to store multi-dimensional data in C (row-major) or
Fortran (column-major) order in memory.
See Also
--------
empty_like, zeros
Notes
-----
`empty`, unlike `zeros`, does not set the matrix values to zero,
and may therefore be marginally faster. On the other hand, it requires
the user to manually set all the values in the array, and should be
used with caution.
Examples
--------
>>> import numpy.matlib
>>> np.matlib.empty((2, 2)) # filled with random data
matrix([[ 6.76425276e-320, 9.79033856e-307],
[ 7.39337286e-309, 3.22135945e-309]]) #random
>>> np.matlib.empty((2, 2), dtype=int)
matrix([[ 6600475, 0],
[ 6586976, 22740995]]) #random
"""
return ndarray.__new__(matrix, shape, dtype, order=order)
def ones(shape, dtype=None, order='C'):
"""
Matrix of ones.
Return a matrix of given shape and type, filled with ones.
Parameters
----------
shape : {sequence of ints, int}
Shape of the matrix
dtype : data-type, optional
The desired data-type for the matrix, default is np.float64.
order : {'C', 'F'}, optional
Whether to store matrix in C- or Fortran-contiguous order,
default is 'C'.
Returns
-------
out : matrix
Matrix of ones of given shape, dtype, and order.
See Also
--------
ones : Array of ones.
matlib.zeros : Zero matrix.
Notes
-----
If `shape` has length one i.e. ``(N,)``, or is a scalar ``N``,
`out` becomes a single row matrix of shape ``(1,N)``.
Examples
--------
>>> np.matlib.ones((2,3))
matrix([[ 1., 1., 1.],
[ 1., 1., 1.]])
>>> np.matlib.ones(2)
matrix([[ 1., 1.]])
"""
a = ndarray.__new__(matrix, shape, dtype, order=order)
a.fill(1)
return a
def zeros(shape, dtype=None, order='C'):
"""
Return a matrix of given shape and type, filled with zeros.
Parameters
----------
shape : int or sequence of ints
Shape of the matrix
dtype : data-type, optional
The desired data-type for the matrix, default is float.
order : {'C', 'F'}, optional
Whether to store the result in C- or Fortran-contiguous order,
default is 'C'.
Returns
-------
out : matrix
Zero matrix of given shape, dtype, and order.
See Also
--------
numpy.zeros : Equivalent array function.
matlib.ones : Return a matrix of ones.
Notes
-----
If `shape` has length one i.e. ``(N,)``, or is a scalar ``N``,
`out` becomes a single row matrix of shape ``(1,N)``.
Examples
--------
>>> import numpy.matlib
>>> np.matlib.zeros((2, 3))
matrix([[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> np.matlib.zeros(2)
matrix([[ 0., 0.]])
"""
a = ndarray.__new__(matrix, shape, dtype, order=order)
a.fill(0)
return a
def identity(n,dtype=None):
"""
Returns the square identity matrix of given size.
Parameters
----------
n : int
Size of the returned identity matrix.
dtype : data-type, optional
Data-type of the output. Defaults to ``float``.
Returns
-------
out : matrix
`n` x `n` matrix with its main diagonal set to one,
and all other elements zero.
See Also
--------
numpy.identity : Equivalent array function.
matlib.eye : More general matrix identity function.
Examples
--------
>>> import numpy.matlib
>>> np.matlib.identity(3, dtype=int)
matrix([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
"""
a = array([1]+n*[0],dtype=dtype)
b = empty((n,n),dtype=dtype)
b.flat = a
return b
def eye(n,M=None, k=0, dtype=float):
"""
Return a matrix with ones on the diagonal and zeros elsewhere.
Parameters
----------
n : int
Number of rows in the output.
M : int, optional
Number of columns in the output, defaults to `n`.
k : int, optional
Index of the diagonal: 0 refers to the main diagonal,
a positive value refers to an upper diagonal,
and a negative value to a lower diagonal.
dtype : dtype, optional
Data-type of the returned matrix.
Returns
-------
I : matrix
A `n` x `M` matrix where all elements are equal to zero,
except for the `k`-th diagonal, whose values are equal to one.
See Also
--------
numpy.eye : Equivalent array function.
identity : Square identity matrix.
Examples
--------
>>> import numpy.matlib
>>> np.matlib.eye(3, k=1, dtype=float)
matrix([[ 0., 1., 0.],
[ 0., 0., 1.],
[ 0., 0., 0.]])
"""
return asmatrix(np.eye(n,M,k,dtype))
def rand(*args):
"""
Return a matrix of random values with given shape.
    Create a matrix of the given shape and populate it with
random samples from a uniform distribution over ``[0, 1)``.
Parameters
----------
\\*args : Arguments
Shape of the output.
If given as N integers, each integer specifies the size of one
dimension.
If given as a tuple, this tuple gives the complete shape.
Returns
-------
out : ndarray
The matrix of random values with shape given by `\\*args`.
See Also
--------
randn, numpy.random.rand
Examples
--------
>>> import numpy.matlib
>>> np.matlib.rand(2, 3)
matrix([[ 0.68340382, 0.67926887, 0.83271405],
[ 0.00793551, 0.20468222, 0.95253525]]) #random
>>> np.matlib.rand((2, 3))
matrix([[ 0.84682055, 0.73626594, 0.11308016],
[ 0.85429008, 0.3294825 , 0.89139555]]) #random
If the first argument is a tuple, other arguments are ignored:
>>> np.matlib.rand((2, 3), 4)
matrix([[ 0.46898646, 0.15163588, 0.95188261],
[ 0.59208621, 0.09561818, 0.00583606]]) #random
"""
if isinstance(args[0], tuple):
args = args[0]
return asmatrix(np.random.rand(*args))
def randn(*args):
"""
Return a random matrix with data from the "standard normal" distribution.
`randn` generates a matrix filled with random floats sampled from a
univariate "normal" (Gaussian) distribution of mean 0 and variance 1.
Parameters
----------
\\*args : Arguments
Shape of the output.
If given as N integers, each integer specifies the size of one
dimension. If given as a tuple, this tuple gives the complete shape.
Returns
-------
Z : matrix of floats
A matrix of floating-point samples drawn from the standard normal
distribution.
See Also
--------
rand, random.randn
Notes
-----
For random samples from :math:`N(\\mu, \\sigma^2)`, use:
``sigma * np.matlib.randn(...) + mu``
Examples
--------
>>> import numpy.matlib
>>> np.matlib.randn(1)
matrix([[-0.09542833]]) #random
>>> np.matlib.randn(1, 2, 3)
matrix([[ 0.16198284, 0.0194571 , 0.18312985],
[-0.7509172 , 1.61055 , 0.45298599]]) #random
Two-by-four matrix of samples from :math:`N(3, 6.25)`:
>>> 2.5 * np.matlib.randn((2, 4)) + 3
matrix([[ 4.74085004, 8.89381862, 4.09042411, 4.83721922],
[ 7.52373709, 5.07933944, -2.64043543, 0.45610557]]) #random
"""
if isinstance(args[0], tuple):
args = args[0]
return asmatrix(np.random.randn(*args))
def repmat(a, m, n):
"""
Repeat a 0-D to 2-D array or matrix MxN times.
Parameters
----------
a : array_like
The array or matrix to be repeated.
m, n : int
The number of times `a` is repeated along the first and second axes.
Returns
-------
out : ndarray
The result of repeating `a`.
Examples
--------
>>> import numpy.matlib
>>> a0 = np.array(1)
>>> np.matlib.repmat(a0, 2, 3)
array([[1, 1, 1],
[1, 1, 1]])
>>> a1 = np.arange(4)
>>> np.matlib.repmat(a1, 2, 2)
array([[0, 1, 2, 3, 0, 1, 2, 3],
[0, 1, 2, 3, 0, 1, 2, 3]])
>>> a2 = np.asmatrix(np.arange(6).reshape(2, 3))
>>> np.matlib.repmat(a2, 2, 3)
matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5],
[0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5]])
"""
a = asanyarray(a)
ndim = a.ndim
if ndim == 0:
origrows, origcols = (1,1)
elif ndim == 1:
origrows, origcols = (1, a.shape[0])
else:
origrows, origcols = a.shape
rows = origrows * m
cols = origcols * n
c = a.reshape(1,a.size).repeat(m, 0).reshape(rows, origcols).repeat(n,0)
return c.reshape(rows, cols)
| bsd-3-clause |
nicholaschris/landsatpy | utils.py | 1 | 2693 | import operator
import pandas as pd
import numpy as np
from numpy import ma
from scipy.misc import imresize
import scipy.ndimage as ndimage
from skimage.morphology import disk, dilation
def get_truth(input_one, input_two, comparison): # too much abstraction
ops = {'>': operator.gt,
'<': operator.lt,
'>=': operator.ge,
'<=': operator.le,
'=': operator.eq}
return ops[comparison](input_one, input_two)
def convert_to_celsius(brightness_temp_input):
return brightness_temp_input - 272.15
def calculate_percentile(input_masked_array, percentile):
flat_fill_input = input_masked_array.filled(np.nan).flatten()
df = pd.DataFrame(flat_fill_input)
percentile = df.quantile(percentile/100.0)
return percentile[0]
def save_object(obj, filename):
import pickle
with open(filename, 'wb') as output:
pickle.dump(obj, output)
def downsample(input_array, factor=4):
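    # 2x2 box average: each output pixel is the mean of a 2x2 block of the input,
    # halving both dimensions. Note that the ``factor`` argument is unused here.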
output_array = input_array[::2, ::2] / 4 + input_array[1::2, ::2] / 4 + input_array[::2, 1::2] / 4 + input_array[1::2, 1::2] / 4
return output_array
def dilate_boolean_array(input_array, disk_size=3):
selem = disk(disk_size)
dilated = dilation(input_array, selem)
return dilated
def get_resized_array(img, size):
lena = imresize(img, (size, size))
return lena
def interp_and_resize(array, new_length):
orig_y_length, orig_x_length = array.shape
interp_factor_y = new_length / orig_y_length
interp_factor_x = new_length / orig_x_length
y = round(interp_factor_y * orig_y_length)
x = round(interp_factor_x * orig_x_length)
# http://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html
new_indicies = np.mgrid[0:orig_y_length:y * 1j, 0:orig_x_length:x * 1j]
# order=1 indicates bilinear interpolation.
interp_array = ndimage.map_coordinates(array, new_indicies,
order=1, output=array.dtype)
interp_array = interp_array.reshape((y, x))
return interp_array
def parse_mtl(in_file):
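    # Landsat MTL metadata is plain text made of nested GROUP ... END_GROUP blocks
    # of "KEY = VALUE" lines; this builds a {group: {key: value}} dictionary and
    # returns it when a blank line or the END marker is reached.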
awesome = True
print(in_file)
mtl_dict = {}
with open(in_file, 'r') as f:
while awesome:
line = f.readline()
if line.strip() == '' or line.strip() == 'END':
return mtl_dict
elif 'END_GROUP' in line:
pass
elif 'GROUP' in line:
curr_group = line.split('=')[1].strip()
mtl_dict[curr_group] = {}
else:
attr, value = line.split('=')[0].strip(), line.split('=')[1].strip()
mtl_dict[curr_group][attr] = value
| mit |
guludo/ardupilot-1 | Tools/scripts/frame_sizes.py | 351 | 1117 | #!/usr/bin/env python
import re, sys, operator, os
code_line = re.compile(r"^\s*\d+:/")
frame_line = re.compile(r"^\s*\d+\s+/\* frame size = (\d+) \*/")
class frame(object):
def __init__(self, code, frame_size):
self.code = code
self.frame_size = int(frame_size)
frames = []
def process_lst(filename):
'''process one lst file'''
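    # Listing (.lst) files interleave source lines (matched by code_line) with
    # "/* frame size = N */" annotations; each annotation is paired with the
    # most recently seen source line.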
last_code = ''
h = open(filename, mode='r')
for line in h:
if code_line.match(line):
last_code = line.strip()
elif frame_line.match(line):
frames.append(frame(last_code, frame_line.match(line).group(1)))
h.close()
if len(sys.argv) > 1:
dname = sys.argv[1]
else:
dname = '.'
for root, dirs, files in os.walk(dname):
for f in files:
if f.endswith(".lst"):
process_lst(os.path.join(root, f))
sorted_frames = sorted(frames,
key=operator.attrgetter('frame_size'),
reverse=True)
print("FrameSize Code")
for frame in sorted_frames:
if frame.frame_size > 0:
print("%9u %s" % (frame.frame_size, frame.code))
| gpl-3.0 |
John-Hart/autorest | src/generator/AutoRest.Python.Tests/AcceptanceTests/model_flattening_tests.py | 6 | 11782 | # --------------------------------------------------------------------------
#
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the ""Software""), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# --------------------------------------------------------------------------
import unittest
import subprocess
import sys
import isodate
import tempfile
import json
from datetime import date, datetime, timedelta
import os
from os.path import dirname, pardir, join, realpath, sep, pardir
cwd = dirname(realpath(__file__))
root = realpath(join(cwd , pardir, pardir, pardir, pardir))
sys.path.append(join(root, "src" , "client" , "Python", "msrest"))
log_level = int(os.environ.get('PythonLogLevel', 30))
tests = realpath(join(cwd, pardir, "Expected", "AcceptanceTests"))
sys.path.append(join(tests, "ModelFlattening"))
from msrest.serialization import Deserializer
from msrest.exceptions import DeserializationError
from autorestresourceflatteningtestservice import AutoRestResourceFlatteningTestService
from autorestresourceflatteningtestservice.models import (
FlattenedProduct,
ErrorException,
ResourceCollection,
SimpleProduct,
FlattenParameterGroup)
class ModelFlatteningTests(unittest.TestCase):
def setUp(self):
self.client = AutoRestResourceFlatteningTestService(base_url="http://localhost:3000")
return super(ModelFlatteningTests, self).setUp()
def test_flattening_array(self):
#Array
result = self.client.get_array()
self.assertEqual(3, len(result))
# Resource 1
self.assertEqual("1", result[0].id)
self.assertEqual("OK", result[0].provisioning_state_values)
self.assertEqual("Product1", result[0].pname)
self.assertEqual("Flat", result[0].flattened_product_type)
self.assertEqual("Building 44", result[0].location)
self.assertEqual("Resource1", result[0].name)
self.assertEqual("Succeeded", result[0].provisioning_state)
self.assertEqual("Microsoft.Web/sites", result[0].type)
self.assertEqual("value1", result[0].tags["tag1"])
self.assertEqual("value3", result[0].tags["tag2"])
# Resource 2
self.assertEqual("2", result[1].id)
self.assertEqual("Resource2", result[1].name)
self.assertEqual("Building 44", result[1].location)
# Resource 3
self.assertEqual("3", result[2].id)
self.assertEqual("Resource3", result[2].name)
resourceArray = [
{
'location': "West US",
'tags': {"tag1":"value1", "tag2":"value3"}},
{
'location': "Building 44"}]
self.client.put_array(resourceArray)
def test_flattening_dictionary(self):
#Dictionary
resultDictionary = self.client.get_dictionary()
self.assertEqual(3, len(resultDictionary))
# Resource 1
self.assertEqual("1", resultDictionary["Product1"].id)
self.assertEqual("OK", resultDictionary["Product1"].provisioning_state_values)
self.assertEqual("Product1", resultDictionary["Product1"].pname)
self.assertEqual("Flat", resultDictionary["Product1"].flattened_product_type)
self.assertEqual("Building 44", resultDictionary["Product1"].location)
self.assertEqual("Resource1", resultDictionary["Product1"].name)
self.assertEqual("Succeeded", resultDictionary["Product1"].provisioning_state)
self.assertEqual("Microsoft.Web/sites", resultDictionary["Product1"].type)
self.assertEqual("value1", resultDictionary["Product1"].tags["tag1"])
self.assertEqual("value3", resultDictionary["Product1"].tags["tag2"])
# Resource 2
self.assertEqual("2", resultDictionary["Product2"].id)
self.assertEqual("Resource2", resultDictionary["Product2"].name)
self.assertEqual("Building 44", resultDictionary["Product2"].location)
# Resource 3
self.assertEqual("3", resultDictionary["Product3"].id)
self.assertEqual("Resource3", resultDictionary["Product3"].name)
resourceDictionary = {
"Resource1": {
'location': "West US",
'tags': {"tag1":"value1", "tag2":"value3"},
'pname': "Product1",
'flattened_product_type': "Flat"},
"Resource2": {
'location': "Building 44",
'pname': "Product2",
'flattened_product_type': "Flat"}}
self.client.put_dictionary(resourceDictionary)
def test_flattening_complex_object(self):
#ResourceCollection
resultResource = self.client.get_resource_collection()
#dictionaryofresources
self.assertEqual(3, len(resultResource.dictionaryofresources))
# Resource 1
self.assertEqual("1", resultResource.dictionaryofresources["Product1"].id)
self.assertEqual("OK", resultResource.dictionaryofresources["Product1"].provisioning_state_values)
self.assertEqual("Product1", resultResource.dictionaryofresources["Product1"].pname)
self.assertEqual("Flat", resultResource.dictionaryofresources["Product1"].flattened_product_type)
self.assertEqual("Building 44", resultResource.dictionaryofresources["Product1"].location)
self.assertEqual("Resource1", resultResource.dictionaryofresources["Product1"].name)
self.assertEqual("Succeeded", resultResource.dictionaryofresources["Product1"].provisioning_state)
self.assertEqual("Microsoft.Web/sites", resultResource.dictionaryofresources["Product1"].type)
self.assertEqual("value1", resultResource.dictionaryofresources["Product1"].tags["tag1"])
self.assertEqual("value3", resultResource.dictionaryofresources["Product1"].tags["tag2"])
# Resource 2
self.assertEqual("2", resultResource.dictionaryofresources["Product2"].id)
self.assertEqual("Resource2", resultResource.dictionaryofresources["Product2"].name)
self.assertEqual("Building 44", resultResource.dictionaryofresources["Product2"].location)
# Resource 3
self.assertEqual("3", resultResource.dictionaryofresources["Product3"].id)
self.assertEqual("Resource3", resultResource.dictionaryofresources["Product3"].name)
#arrayofresources
self.assertEqual(3, len(resultResource.arrayofresources))
# Resource 1
self.assertEqual("4", resultResource.arrayofresources[0].id)
self.assertEqual("OK", resultResource.arrayofresources[0].provisioning_state_values)
self.assertEqual("Product4", resultResource.arrayofresources[0].pname)
self.assertEqual("Flat", resultResource.arrayofresources[0].flattened_product_type)
self.assertEqual("Building 44", resultResource.arrayofresources[0].location)
self.assertEqual("Resource4", resultResource.arrayofresources[0].name)
self.assertEqual("Succeeded", resultResource.arrayofresources[0].provisioning_state)
self.assertEqual("Microsoft.Web/sites", resultResource.arrayofresources[0].type)
self.assertEqual("value1", resultResource.arrayofresources[0].tags["tag1"])
self.assertEqual("value3", resultResource.arrayofresources[0].tags["tag2"])
# Resource 2
self.assertEqual("5", resultResource.arrayofresources[1].id)
self.assertEqual("Resource5", resultResource.arrayofresources[1].name)
self.assertEqual("Building 44", resultResource.arrayofresources[1].location)
# Resource 3
self.assertEqual("6", resultResource.arrayofresources[2].id)
self.assertEqual("Resource6", resultResource.arrayofresources[2].name)
#productresource
self.assertEqual("7", resultResource.productresource.id)
self.assertEqual("Resource7", resultResource.productresource.name)
resourceDictionary = {
"Resource1": FlattenedProduct(
location = "West US",
tags = {"tag1":"value1", "tag2":"value3"},
pname = "Product1",
flattened_product_type = "Flat"),
"Resource2": FlattenedProduct(
location = "Building 44",
pname = "Product2",
flattened_product_type = "Flat")}
resourceComplexObject = ResourceCollection(
dictionaryofresources = resourceDictionary,
arrayofresources = [
FlattenedProduct(
location = "West US",
tags = {"tag1":"value1", "tag2":"value3"},
pname = "Product1",
flattened_product_type = "Flat"),
FlattenedProduct(
location = "East US",
pname = "Product2",
flattened_product_type = "Flat")],
productresource = FlattenedProduct(
location = "India",
pname = "Azure",
flattened_product_type = "Flat"))
self.client.put_resource_collection(resourceComplexObject)
def test_model_flattening_simple(self):
        simple_product = SimpleProduct(
product_id = "123",
description = "product description",
max_product_display_name = "max name",
odatavalue = "http://foo",
generic_value = "https://generic"
)
        result = self.client.put_simple_product(simple_product)
        self.assertEqual(result, simple_product)
def test_model_flattening_with_parameter_flattening(self):
simple_product = SimpleProduct(
product_id = "123",
description = "product description",
max_product_display_name = "max name",
odatavalue = "http://foo"
)
result = self.client.post_flattened_simple_product("123", "max name", "product description", None, "http://foo")
self.assertEqual(result, simple_product)
def test_model_flattening_with_grouping(self):
        simple_product = SimpleProduct(
product_id = "123",
description = "product description",
max_product_display_name = "max name",
odatavalue = "http://foo"
)
group = FlattenParameterGroup(
product_id = "123",
description = "product description",
max_product_display_name="max name",
odatavalue="http://foo",
name="groupproduct")
result = self.client.put_simple_product_with_grouping(group)
        self.assertEqual(result, simple_product)
if __name__ == '__main__':
unittest.main()
| mit |
ashishfinoit/django-rest-framework | tests/test_permissions.py | 68 | 18850 | from __future__ import unicode_literals
import base64
from django.contrib.auth.models import Group, Permission, User
from django.core.urlresolvers import ResolverMatch
from django.db import models
from django.test import TestCase
from django.utils import unittest
from rest_framework import (
HTTP_HEADER_ENCODING, authentication, generics, permissions, serializers,
status
)
from rest_framework.compat import get_model_name, guardian
from rest_framework.filters import DjangoObjectPermissionsFilter
from rest_framework.routers import DefaultRouter
from rest_framework.test import APIRequestFactory
from tests.models import BasicModel
factory = APIRequestFactory()
class BasicSerializer(serializers.ModelSerializer):
class Meta:
model = BasicModel
class RootView(generics.ListCreateAPIView):
queryset = BasicModel.objects.all()
serializer_class = BasicSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [permissions.DjangoModelPermissions]
class InstanceView(generics.RetrieveUpdateDestroyAPIView):
queryset = BasicModel.objects.all()
serializer_class = BasicSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [permissions.DjangoModelPermissions]
class GetQuerySetListView(generics.ListCreateAPIView):
serializer_class = BasicSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [permissions.DjangoModelPermissions]
def get_queryset(self):
return BasicModel.objects.all()
class EmptyListView(generics.ListCreateAPIView):
queryset = BasicModel.objects.none()
serializer_class = BasicSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [permissions.DjangoModelPermissions]
root_view = RootView.as_view()
api_root_view = DefaultRouter().get_api_root_view()
instance_view = InstanceView.as_view()
get_queryset_list_view = GetQuerySetListView.as_view()
empty_list_view = EmptyListView.as_view()
def basic_auth_header(username, password):
credentials = ('%s:%s' % (username, password))
base64_credentials = base64.b64encode(credentials.encode(HTTP_HEADER_ENCODING)).decode(HTTP_HEADER_ENCODING)
return 'Basic %s' % base64_credentials
class ModelPermissionsIntegrationTests(TestCase):
def setUp(self):
User.objects.create_user('disallowed', '[email protected]', 'password')
user = User.objects.create_user('permitted', '[email protected]', 'password')
user.user_permissions = [
Permission.objects.get(codename='add_basicmodel'),
Permission.objects.get(codename='change_basicmodel'),
Permission.objects.get(codename='delete_basicmodel')
]
user = User.objects.create_user('updateonly', '[email protected]', 'password')
user.user_permissions = [
Permission.objects.get(codename='change_basicmodel'),
]
self.permitted_credentials = basic_auth_header('permitted', 'password')
self.disallowed_credentials = basic_auth_header('disallowed', 'password')
self.updateonly_credentials = basic_auth_header('updateonly', 'password')
BasicModel(text='foo').save()
def test_has_create_permissions(self):
request = factory.post('/', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.permitted_credentials)
response = root_view(request, pk=1)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_api_root_view_discard_default_django_model_permission(self):
"""
We check that DEFAULT_PERMISSION_CLASSES can
apply to APIRoot view. More specifically we check expected behavior of
``_ignore_model_permissions`` attribute support.
"""
request = factory.get('/', format='json',
HTTP_AUTHORIZATION=self.permitted_credentials)
request.resolver_match = ResolverMatch('get', (), {})
response = api_root_view(request)
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_get_queryset_has_create_permissions(self):
request = factory.post('/', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.permitted_credentials)
response = get_queryset_list_view(request, pk=1)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
def test_has_put_permissions(self):
request = factory.put('/1', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.permitted_credentials)
response = instance_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_has_delete_permissions(self):
request = factory.delete('/1', HTTP_AUTHORIZATION=self.permitted_credentials)
response = instance_view(request, pk=1)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_does_not_have_create_permissions(self):
request = factory.post('/', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.disallowed_credentials)
response = root_view(request, pk=1)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_does_not_have_put_permissions(self):
request = factory.put('/1', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.disallowed_credentials)
response = instance_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_does_not_have_delete_permissions(self):
request = factory.delete('/1', HTTP_AUTHORIZATION=self.disallowed_credentials)
response = instance_view(request, pk=1)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_options_permitted(self):
request = factory.options(
'/',
HTTP_AUTHORIZATION=self.permitted_credentials
)
response = root_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('actions', response.data)
self.assertEqual(list(response.data['actions'].keys()), ['POST'])
request = factory.options(
'/1',
HTTP_AUTHORIZATION=self.permitted_credentials
)
response = instance_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('actions', response.data)
self.assertEqual(list(response.data['actions'].keys()), ['PUT'])
def test_options_disallowed(self):
request = factory.options(
'/',
HTTP_AUTHORIZATION=self.disallowed_credentials
)
response = root_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertNotIn('actions', response.data)
request = factory.options(
'/1',
HTTP_AUTHORIZATION=self.disallowed_credentials
)
response = instance_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertNotIn('actions', response.data)
def test_options_updateonly(self):
request = factory.options(
'/',
HTTP_AUTHORIZATION=self.updateonly_credentials
)
response = root_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertNotIn('actions', response.data)
request = factory.options(
'/1',
HTTP_AUTHORIZATION=self.updateonly_credentials
)
response = instance_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('actions', response.data)
self.assertEqual(list(response.data['actions'].keys()), ['PUT'])
def test_empty_view_does_not_assert(self):
request = factory.get('/1', HTTP_AUTHORIZATION=self.permitted_credentials)
response = empty_list_view(request, pk=1)
self.assertEqual(response.status_code, status.HTTP_200_OK)
class BasicPermModel(models.Model):
text = models.CharField(max_length=100)
class Meta:
app_label = 'tests'
permissions = (
('view_basicpermmodel', 'Can view basic perm model'),
# add, change, delete built in to django
)
class BasicPermSerializer(serializers.ModelSerializer):
class Meta:
model = BasicPermModel
# Custom object-level permission, that includes 'view' permissions
class ViewObjectPermissions(permissions.DjangoObjectPermissions):
perms_map = {
'GET': ['%(app_label)s.view_%(model_name)s'],
'OPTIONS': ['%(app_label)s.view_%(model_name)s'],
'HEAD': ['%(app_label)s.view_%(model_name)s'],
'POST': ['%(app_label)s.add_%(model_name)s'],
'PUT': ['%(app_label)s.change_%(model_name)s'],
'PATCH': ['%(app_label)s.change_%(model_name)s'],
'DELETE': ['%(app_label)s.delete_%(model_name)s'],
}
class ObjectPermissionInstanceView(generics.RetrieveUpdateDestroyAPIView):
queryset = BasicPermModel.objects.all()
serializer_class = BasicPermSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [ViewObjectPermissions]
object_permissions_view = ObjectPermissionInstanceView.as_view()
class ObjectPermissionListView(generics.ListAPIView):
queryset = BasicPermModel.objects.all()
serializer_class = BasicPermSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [ViewObjectPermissions]
object_permissions_list_view = ObjectPermissionListView.as_view()
class GetQuerysetObjectPermissionInstanceView(generics.RetrieveUpdateDestroyAPIView):
serializer_class = BasicPermSerializer
authentication_classes = [authentication.BasicAuthentication]
permission_classes = [ViewObjectPermissions]
def get_queryset(self):
return BasicPermModel.objects.all()
get_queryset_object_permissions_view = GetQuerysetObjectPermissionInstanceView.as_view()
@unittest.skipUnless(guardian, 'django-guardian not installed')
class ObjectPermissionsIntegrationTests(TestCase):
"""
Integration tests for the object level permissions API.
"""
def setUp(self):
from guardian.shortcuts import assign_perm
# create users
create = User.objects.create_user
users = {
'fullaccess': create('fullaccess', '[email protected]', 'password'),
'readonly': create('readonly', '[email protected]', 'password'),
'writeonly': create('writeonly', '[email protected]', 'password'),
'deleteonly': create('deleteonly', '[email protected]', 'password'),
}
# give everyone model level permissions, as we are not testing those
everyone = Group.objects.create(name='everyone')
model_name = get_model_name(BasicPermModel)
app_label = BasicPermModel._meta.app_label
f = '{0}_{1}'.format
perms = {
'view': f('view', model_name),
'change': f('change', model_name),
'delete': f('delete', model_name)
}
for perm in perms.values():
perm = '{0}.{1}'.format(app_label, perm)
assign_perm(perm, everyone)
everyone.user_set.add(*users.values())
# appropriate object level permissions
readers = Group.objects.create(name='readers')
writers = Group.objects.create(name='writers')
deleters = Group.objects.create(name='deleters')
model = BasicPermModel.objects.create(text='foo')
assign_perm(perms['view'], readers, model)
assign_perm(perms['change'], writers, model)
assign_perm(perms['delete'], deleters, model)
readers.user_set.add(users['fullaccess'], users['readonly'])
writers.user_set.add(users['fullaccess'], users['writeonly'])
deleters.user_set.add(users['fullaccess'], users['deleteonly'])
self.credentials = {}
for user in users.values():
self.credentials[user.username] = basic_auth_header(user.username, 'password')
# Delete
def test_can_delete_permissions(self):
request = factory.delete('/1', HTTP_AUTHORIZATION=self.credentials['deleteonly'])
response = object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_cannot_delete_permissions(self):
request = factory.delete('/1', HTTP_AUTHORIZATION=self.credentials['readonly'])
response = object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
# Update
def test_can_update_permissions(self):
request = factory.patch(
'/1', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.credentials['writeonly']
)
response = object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data.get('text'), 'foobar')
def test_cannot_update_permissions(self):
request = factory.patch(
'/1', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.credentials['deleteonly']
)
response = object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_cannot_update_permissions_non_existing(self):
request = factory.patch(
'/999', {'text': 'foobar'}, format='json',
HTTP_AUTHORIZATION=self.credentials['deleteonly']
)
response = object_permissions_view(request, pk='999')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
# Read
def test_can_read_permissions(self):
request = factory.get('/1', HTTP_AUTHORIZATION=self.credentials['readonly'])
response = object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_cannot_read_permissions(self):
request = factory.get('/1', HTTP_AUTHORIZATION=self.credentials['writeonly'])
response = object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_can_read_get_queryset_permissions(self):
"""
same as ``test_can_read_permissions`` but with a view
        that relies on ``.get_queryset()`` instead of ``.queryset``.
"""
request = factory.get('/1', HTTP_AUTHORIZATION=self.credentials['readonly'])
response = get_queryset_object_permissions_view(request, pk='1')
self.assertEqual(response.status_code, status.HTTP_200_OK)
# Read list
def test_can_read_list_permissions(self):
request = factory.get('/', HTTP_AUTHORIZATION=self.credentials['readonly'])
object_permissions_list_view.cls.filter_backends = (DjangoObjectPermissionsFilter,)
response = object_permissions_list_view(request)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data[0].get('id'), 1)
def test_cannot_read_list_permissions(self):
request = factory.get('/', HTTP_AUTHORIZATION=self.credentials['writeonly'])
object_permissions_list_view.cls.filter_backends = (DjangoObjectPermissionsFilter,)
response = object_permissions_list_view(request)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertListEqual(response.data, [])
class BasicPerm(permissions.BasePermission):
def has_permission(self, request, view):
return False
class BasicPermWithDetail(permissions.BasePermission):
message = 'Custom: You cannot access this resource'
def has_permission(self, request, view):
return False
class BasicObjectPerm(permissions.BasePermission):
def has_object_permission(self, request, view, obj):
return False
class BasicObjectPermWithDetail(permissions.BasePermission):
message = 'Custom: You cannot access this resource'
def has_object_permission(self, request, view, obj):
return False
class PermissionInstanceView(generics.RetrieveUpdateDestroyAPIView):
queryset = BasicModel.objects.all()
serializer_class = BasicSerializer
class DeniedView(PermissionInstanceView):
permission_classes = (BasicPerm,)
class DeniedViewWithDetail(PermissionInstanceView):
permission_classes = (BasicPermWithDetail,)
class DeniedObjectView(PermissionInstanceView):
permission_classes = (BasicObjectPerm,)
class DeniedObjectViewWithDetail(PermissionInstanceView):
permission_classes = (BasicObjectPermWithDetail,)
denied_view = DeniedView.as_view()
denied_view_with_detail = DeniedViewWithDetail.as_view()
denied_object_view = DeniedObjectView.as_view()
denied_object_view_with_detail = DeniedObjectViewWithDetail.as_view()
class CustomPermissionsTests(TestCase):
def setUp(self):
BasicModel(text='foo').save()
User.objects.create_user('username', '[email protected]', 'password')
credentials = basic_auth_header('username', 'password')
self.request = factory.get('/1', format='json', HTTP_AUTHORIZATION=credentials)
self.custom_message = 'Custom: You cannot access this resource'
def test_permission_denied(self):
response = denied_view(self.request, pk=1)
detail = response.data.get('detail')
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertNotEqual(detail, self.custom_message)
def test_permission_denied_with_custom_detail(self):
response = denied_view_with_detail(self.request, pk=1)
detail = response.data.get('detail')
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(detail, self.custom_message)
def test_permission_denied_for_object(self):
response = denied_object_view(self.request, pk=1)
detail = response.data.get('detail')
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertNotEqual(detail, self.custom_message)
def test_permission_denied_for_object_with_custom_detail(self):
response = denied_object_view_with_detail(self.request, pk=1)
detail = response.data.get('detail')
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(detail, self.custom_message)
| bsd-2-clause |
Brainiarc7/linux-3.18-parrot | Documentation/networking/cxacru-cf.py | 14668 | 1626 | #!/usr/bin/env python
# Copyright 2009 Simon Arlott
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; either version 2 of the License, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# Usage: cxacru-cf.py < cxacru-cf.bin
# Output: values string suitable for the sysfs adsl_config attribute
#
# Warning: cxacru-cf.bin with MD5 hash cdbac2689969d5ed5d4850f117702110
# contains mis-aligned values which will stop the modem from being able
# to make a connection. If the first and last two bytes are removed then
# the values become valid, but the modulation will be forced to ANSI
# T1.413 only which may not be appropriate.
#
# The original binary format is a packed list of le32 values.
import sys
import struct
i = 0
while True:
buf = sys.stdin.read(4)
if len(buf) == 0:
break
elif len(buf) != 4:
sys.stdout.write("\n")
sys.stderr.write("Error: read {0} not 4 bytes\n".format(len(buf)))
sys.exit(1)
if i > 0:
sys.stdout.write(" ")
sys.stdout.write("{0:x}={1}".format(i, struct.unpack("<I", buf)[0]))
i += 1
sys.stdout.write("\n")
| gpl-2.0 |
taigaio/taiga-back | taiga/projects/settings/migrations/0001_initial.py | 1 | 1897 | # -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2018-09-24 11:49
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import taiga.projects.settings.choices
class Migration(migrations.Migration):
initial = True
dependencies = [
('projects', '0061_auto_20180918_1355'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='UserProjectSettings',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('homepage', models.SmallIntegerField(choices=[(taiga.projects.settings.choices.Section(1), 'Timeline'), (taiga.projects.settings.choices.Section(2), 'Epics'), (taiga.projects.settings.choices.Section(3), 'Backlog'), (taiga.projects.settings.choices.Section(4), 'Kanban'), (taiga.projects.settings.choices.Section(5), 'Issues'), (taiga.projects.settings.choices.Section(6), 'TeamWiki')], default=taiga.projects.settings.choices.Section(1))),
('created_at', models.DateTimeField(default=django.utils.timezone.now)),
('modified_at', models.DateTimeField()),
('project', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='user_project_settings', to='projects.Project')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='user_project_settings', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['created_at'],
},
),
migrations.AlterUniqueTogether(
name='userprojectsettings',
unique_together=set([('project', 'user')]),
),
]
| agpl-3.0 |
mogers/buck | third-party/nailgun/pynailgun/ng.py | 17 | 19064 | #!/usr/bin/env python
#
# Copyright 2004-2015, Martian Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ctypes
import platform
import optparse
import os
import os.path
import Queue
import select
import socket
import struct
import sys
from threading import Condition, Event, Thread
# @author <a href="http://www.martiansoftware.com/contact.html">Marty Lamb</a>
# @author Pete Kirkham (Win32 port)
# @author Ben Hamilton (Python port)
#
# Please try to keep this working on Python 2.6.
NAILGUN_VERSION = '0.9.0'
BUFSIZE = 2048
NAILGUN_PORT_DEFAULT = 2113
CHUNK_HEADER_LEN = 5
CHUNKTYPE_STDIN = '0'
CHUNKTYPE_STDOUT = '1'
CHUNKTYPE_STDERR = '2'
CHUNKTYPE_STDIN_EOF = '.'
CHUNKTYPE_ARG = 'A'
CHUNKTYPE_LONGARG = 'L'
CHUNKTYPE_ENV = 'E'
CHUNKTYPE_DIR = 'D'
CHUNKTYPE_CMD = 'C'
CHUNKTYPE_EXIT = 'X'
CHUNKTYPE_SENDINPUT = 'S'
CHUNKTYPE_HEARTBEAT = 'H'
NSEC_PER_SEC = 1000000000
# 500 ms heartbeat timeout
HEARTBEAT_TIMEOUT_NANOS = NSEC_PER_SEC / 2
HEARTBEAT_TIMEOUT_SECS = HEARTBEAT_TIMEOUT_NANOS / (NSEC_PER_SEC * 1.0)
# We need to support Python 2.6 hosts which lack memoryview().
import __builtin__
HAS_MEMORYVIEW = 'memoryview' in dir(__builtin__)
EVENT_STDIN_CHUNK = 0
EVENT_STDIN_CLOSED = 1
EVENT_STDIN_EXCEPTION = 2
class NailgunException(Exception):
SOCKET_FAILED = 231
CONNECT_FAILED = 230
UNEXPECTED_CHUNKTYPE = 229
CONNECTION_BROKEN = 227
def __init__(self, message, code):
self.message = message
self.code = code
def __str__(self):
return self.message
class NailgunConnection(object):
'''Stateful object holding the connection to the Nailgun server.'''
def __init__(
self,
server_name,
server_port=None,
stdin=sys.stdin,
stdout=sys.stdout,
stderr=sys.stderr,
cwd=None):
self.socket = make_nailgun_socket(server_name, server_port, cwd)
self.stdin = stdin
self.stdout = stdout
self.stderr = stderr
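        # MSG_WAITALL makes recv() block until the full amount is read and
        # MSG_NOSIGNAL suppresses SIGPIPE on writes to a closed socket; both
        # flags are optional because not every platform defines them.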
self.recv_flags = 0
self.send_flags = 0
if hasattr(socket, 'MSG_WAITALL'):
self.recv_flags |= socket.MSG_WAITALL
if hasattr(socket, 'MSG_NOSIGNAL'):
self.send_flags |= socket.MSG_NOSIGNAL
self.header_buf = ctypes.create_string_buffer(CHUNK_HEADER_LEN)
self.buf = ctypes.create_string_buffer(BUFSIZE)
self.ready_to_send_condition = Condition()
self.sendtime_nanos = 0
self.exit_code = None
self.stdin_queue = Queue.Queue()
self.shutdown_event = Event()
self.stdin_thread = Thread(
target=stdin_thread_main,
args=(self.stdin, self.stdin_queue, self.shutdown_event, self.ready_to_send_condition))
self.stdin_thread.daemon = True
def send_command(
self,
cmd,
cmd_args=[],
filearg=None,
env=os.environ,
cwd=os.getcwd()):
'''
Sends the command and environment to the nailgun server, then loops forever
reading the response until the server sends an exit chunk.
Returns the exit value, or raises NailgunException on error.
'''
try:
return self._send_command_and_read_response(cmd, cmd_args, filearg, env, cwd)
except socket.error as e:
raise NailgunException(
'Server disconnected unexpectedly: {0}'.format(e),
NailgunException.CONNECTION_BROKEN)
def _send_command_and_read_response(self, cmd, cmd_args, filearg, env, cwd):
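        # Protocol order: argument, environment, working-directory and command
        # chunks go out first, then we loop processing server chunks (and any
        # buffered stdin) until an exit chunk sets self.exit_code.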
if filearg:
send_file_arg(filearg, self)
for cmd_arg in cmd_args:
send_chunk(cmd_arg, CHUNKTYPE_ARG, self)
send_env_var('NAILGUN_FILESEPARATOR', os.sep, self)
send_env_var('NAILGUN_PATHSEPARATOR', os.pathsep, self)
send_tty_format(self.stdin, self)
send_tty_format(self.stdout, self)
send_tty_format(self.stderr, self)
for k, v in env.iteritems():
send_env_var(k, v, self)
send_chunk(cwd, CHUNKTYPE_DIR, self)
send_chunk(cmd, CHUNKTYPE_CMD, self)
self.stdin_thread.start()
while self.exit_code is None:
self._process_next_chunk()
self._check_stdin_queue()
self.shutdown_event.set()
with self.ready_to_send_condition:
self.ready_to_send_condition.notify()
# We can't really join on self.stdin_thread, since
# there's no way to interrupt its call to sys.stdin.readline.
return self.exit_code
def _process_next_chunk(self):
'''
Processes the next chunk from the nailgun server.
'''
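        # Wait up to the heartbeat interval for server data, then send a
        # heartbeat chunk if nothing has been written recently so the server
        # knows this client is still alive.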
select_list = set([self.socket])
readable, _, exceptional = select.select(
select_list, [], select_list, HEARTBEAT_TIMEOUT_SECS)
if self.socket in readable:
process_nailgun_stream(self)
now = monotonic_time_nanos()
if now - self.sendtime_nanos > HEARTBEAT_TIMEOUT_NANOS:
send_heartbeat(self)
if self.socket in exceptional:
raise NailgunException(
'Server disconnected in select',
NailgunException.CONNECTION_BROKEN)
def _check_stdin_queue(self):
'''Check if the stdin thread has read anything.'''
while not self.stdin_queue.empty():
try:
(event_type, event_arg) = self.stdin_queue.get_nowait()
if event_type == EVENT_STDIN_CHUNK:
send_chunk(event_arg, CHUNKTYPE_STDIN, self)
elif event_type == EVENT_STDIN_CLOSED:
send_chunk('', CHUNKTYPE_STDIN_EOF, self)
elif event_type == EVENT_STDIN_EXCEPTION:
raise event_arg
except Queue.Empty:
break
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
try:
self.socket.close()
except socket.error:
pass
def monotonic_time_nanos():
'''Returns a monotonically-increasing timestamp value in nanoseconds.
The epoch of the return value is undefined. To use this, you must call
it more than once and calculate the delta between two calls.
'''
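# Example (a sketch): measure an elapsed interval.
#   start = monotonic_time_nanos()
#   ... do work ...
#   elapsed_ns = monotonic_time_nanos() - start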
# This function should be overwritten below on supported platforms.
raise Exception('Unsupported platform: ' + platform.system())
if platform.system() == 'Linux':
# From <linux/time.h>, available since 2.6.28 (released 24-Dec-2008).
CLOCK_MONOTONIC_RAW = 4
librt = ctypes.CDLL('librt.so.1', use_errno=True)
clock_gettime = librt.clock_gettime
class struct_timespec(ctypes.Structure):
_fields_ = [('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long)]
clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(struct_timespec)]
def _monotonic_time_nanos_linux():
t = struct_timespec()
clock_gettime(CLOCK_MONOTONIC_RAW, ctypes.byref(t))
return t.tv_sec * NSEC_PER_SEC + t.tv_nsec
monotonic_time_nanos = _monotonic_time_nanos_linux
elif platform.system() == 'Darwin':
# From <mach/mach_time.h>
KERN_SUCCESS = 0
libSystem = ctypes.CDLL('/usr/lib/libSystem.dylib', use_errno=True)
mach_timebase_info = libSystem.mach_timebase_info
class struct_mach_timebase_info(ctypes.Structure):
_fields_ = [('numer', ctypes.c_uint32), ('denom', ctypes.c_uint32)]
mach_timebase_info.argtypes = [ctypes.POINTER(struct_mach_timebase_info)]
mach_ti = struct_mach_timebase_info()
ret = mach_timebase_info(ctypes.byref(mach_ti))
if ret != KERN_SUCCESS:
raise Exception('Could not get mach_timebase_info, error: ' + str(ret))
mach_absolute_time = libSystem.mach_absolute_time
mach_absolute_time.restype = ctypes.c_uint64
def _monotonic_time_nanos_darwin():
return (mach_absolute_time() * mach_ti.numer) / mach_ti.denom
monotonic_time_nanos = _monotonic_time_nanos_darwin
elif platform.system() == 'Windows':
# From <Winbase.h>
perf_frequency = ctypes.c_uint64()
ctypes.windll.kernel32.QueryPerformanceFrequency(ctypes.byref(perf_frequency))
def _monotonic_time_nanos_windows():
perf_counter = ctypes.c_uint64()
ctypes.windll.kernel32.QueryPerformanceCounter(ctypes.byref(perf_counter))
return perf_counter.value * NSEC_PER_SEC / perf_frequency.value
monotonic_time_nanos = _monotonic_time_nanos_windows
elif sys.platform == 'cygwin':
k32 = ctypes.CDLL('Kernel32', use_errno=True)
perf_frequency = ctypes.c_uint64()
k32.QueryPerformanceFrequency(ctypes.byref(perf_frequency))
def _monotonic_time_nanos_cygwin():
perf_counter = ctypes.c_uint64()
k32.QueryPerformanceCounter(ctypes.byref(perf_counter))
return perf_counter.value * NSEC_PER_SEC / perf_frequency.value
monotonic_time_nanos = _monotonic_time_nanos_cygwin
def send_chunk(buf, chunk_type, nailgun_connection):
'''
Sends a chunk noting the specified payload size and chunk type.
'''
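# The header is packed as '>ic': a 4-byte big-endian payload length followed
# by a single chunk-type byte; the payload itself is sent right after.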
struct.pack_into('>ic', nailgun_connection.header_buf, 0, len(buf), chunk_type)
nailgun_connection.sendtime_nanos = monotonic_time_nanos()
nailgun_connection.socket.sendall(
nailgun_connection.header_buf.raw,
nailgun_connection.send_flags)
nailgun_connection.socket.sendall(buf, nailgun_connection.send_flags)
def send_env_var(name, value, nailgun_connection):
'''
Sends an environment variable in KEY=VALUE format.
'''
send_chunk('='.join((name, value)), CHUNKTYPE_ENV, nailgun_connection)
def send_tty_format(f, nailgun_connection):
'''
Sends a NAILGUN_TTY_# environment variable.
'''
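# For example, if the client's stdin (fd 0) is a terminal this sends
# NAILGUN_TTY_0=1, otherwise NAILGUN_TTY_0=0.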
if not f or not hasattr(f, 'fileno'):
return
fileno = f.fileno()
isatty = os.isatty(fileno)
send_env_var('NAILGUN_TTY_' + str(fileno), str(int(isatty)), nailgun_connection)
def send_file_arg(filename, nailgun_connection):
'''
Sends the contents of a file to the server.
'''
with open(filename) as f:
while True:
num_bytes = f.readinto(nailgun_connection.buf)
if not num_bytes:
break
send_chunk(
nailgun_connection.buf.raw[:num_bytes], CHUNKTYPE_LONGARG, nailgun_connection)
def recv_to_fd(dest_file, num_bytes, nailgun_connection):
'''
Receives num_bytes bytes from the nailgun socket and copies them to the specified file
object. Used to route data to stdout or stderr on the client.
'''
bytes_read = 0
while bytes_read < num_bytes:
bytes_to_read = min(len(nailgun_connection.buf), num_bytes - bytes_read)
bytes_received = nailgun_connection.socket.recv_into(
nailgun_connection.buf,
bytes_to_read,
nailgun_connection.recv_flags)
if dest_file:
dest_file.write(nailgun_connection.buf[:bytes_received])
bytes_read += bytes_received
def recv_to_buffer(num_bytes, buf, nailgun_connection):
'''
Receives num_bytes from the nailgun socket and writes them into the specified buffer.
'''
# We'd love to use socket.recv_into() everywhere to avoid
# unnecessary copies, but we need to support Python 2.6. The
# only way to provide an offset to recv_into() is to use
# memoryview(), which doesn't exist until Python 2.7.
if HAS_MEMORYVIEW:
recv_into_memoryview(num_bytes, memoryview(buf), nailgun_connection)
else:
recv_to_buffer_with_copy(num_bytes, buf, nailgun_connection)
def recv_into_memoryview(num_bytes, buf_view, nailgun_connection):
'''
Receives num_bytes from the nailgun socket and writes them into the specified memoryview
to avoid an extra copy.
'''
bytes_read = 0
while bytes_read < num_bytes:
bytes_received = nailgun_connection.socket.recv_into(
buf_view[bytes_read:],
num_bytes - bytes_read,
nailgun_connection.recv_flags)
if not bytes_received:
raise NailgunException(
'Server unexpectedly disconnected in recv_into()',
NailgunException.CONNECTION_BROKEN)
bytes_read += bytes_received
def recv_to_buffer_with_copy(num_bytes, buf, nailgun_connection):
'''
Receives num_bytes from the nailgun socket and writes them into the specified buffer.
'''
bytes_read = 0
while bytes_read < num_bytes:
recv_buf = nailgun_connection.socket.recv(
num_bytes - bytes_read,
nailgun_connection.recv_flags)
if not len(recv_buf):
raise NailgunException(
'Server unexpectedly disconnected in recv()',
NailgunException.CONNECTION_BROKEN)
buf[bytes_read:bytes_read + len(recv_buf)] = recv_buf
bytes_read += len(recv_buf)
def process_exit(exit_len, nailgun_connection):
'''
Receives an exit code from the nailgun server and sets nailgun_connection.exit_code
to indicate the client should exit.
'''
num_bytes = min(len(nailgun_connection.buf), exit_len)
recv_to_buffer(num_bytes, nailgun_connection.buf, nailgun_connection)
nailgun_connection.exit_code = int(''.join(nailgun_connection.buf.raw[:num_bytes]))
def send_heartbeat(nailgun_connection):
'''
Sends a heartbeat to the nailgun server to indicate the client is still alive.
'''
try:
send_chunk('', CHUNKTYPE_HEARTBEAT, nailgun_connection)
except IOError as e:
# The Nailgun C client ignores SIGPIPE etc. on heartbeats,
# so we do too. (This typically happens when shutting down.)
pass
def stdin_thread_main(stdin, queue, shutdown_event, ready_to_send_condition):
if not stdin:
return
try:
while not shutdown_event.is_set():
with ready_to_send_condition:
ready_to_send_condition.wait()
if shutdown_event.is_set():
break
# This is a bit cheesy, but there isn't a great way to
# portably tell Python to read as much as possible on
# stdin without blocking.
buf = stdin.readline()
if buf == '':
queue.put((EVENT_STDIN_CLOSED, None))
break
queue.put((EVENT_STDIN_CHUNK, buf))
except Exception as e:
queue.put((EVENT_STDIN_EXCEPTION, e))
def process_nailgun_stream(nailgun_connection):
'''
Processes a single chunk from the nailgun server.
'''
recv_to_buffer(
len(nailgun_connection.header_buf), nailgun_connection.header_buf, nailgun_connection)
(chunk_len, chunk_type) = struct.unpack_from('>ic', nailgun_connection.header_buf.raw)
if chunk_type == CHUNKTYPE_STDOUT:
recv_to_fd(nailgun_connection.stdout, chunk_len, nailgun_connection)
elif chunk_type == CHUNKTYPE_STDERR:
recv_to_fd(nailgun_connection.stderr, chunk_len, nailgun_connection)
elif chunk_type == CHUNKTYPE_EXIT:
process_exit(chunk_len, nailgun_connection)
elif chunk_type == CHUNKTYPE_SENDINPUT:
with nailgun_connection.ready_to_send_condition:
# Wake up the stdin thread and tell it to read as much data as possible.
nailgun_connection.ready_to_send_condition.notify()
else:
raise NailgunException(
'Unexpected chunk type: {0}'.format(chunk_type),
NailgunException.UNEXPECTED_CHUNKTYPE)
def make_nailgun_socket(nailgun_server, nailgun_port=None, cwd=None):
'''
Creates and returns a socket connection to the nailgun server.
'''
s = None
if nailgun_server.startswith('local:'):
try:
s = socket.socket(socket.AF_UNIX)
except socket.error as msg:
raise NailgunException(
'Could not create local socket connection to server: {0}'.format(msg),
NailgunException.SOCKET_FAILED)
socket_addr = nailgun_server[6:]
prev_cwd = os.getcwd()
try:
if cwd is not None:
os.chdir(cwd)
s.connect(socket_addr)
except socket.error as msg:
raise NailgunException(
'Could not connect to local server at {0}: {1}'.format(socket_addr, msg),
NailgunException.CONNECT_FAILED)
finally:
if cwd is not None:
os.chdir(prev_cwd)
else:
socket_addr = nailgun_server
socket_family = socket.AF_UNSPEC
for (af, socktype, proto, _, sa) in socket.getaddrinfo(
nailgun_server, nailgun_port, socket.AF_UNSPEC, socket.SOCK_STREAM):
try:
s = socket.socket(af, socktype, proto)
except socket.error as msg:
s = None
continue
try:
s.connect(sa)
except socket.error as msg:
s.close()
s = None
continue
break
if s is None:
raise NailgunException(
'Could not connect to server {0}:{1}'.format(nailgun_server, nailgun_port),
NailgunException.CONNECT_FAILED)
return s
def main():
'''
Main entry point to the nailgun client.
'''
default_nailgun_server = os.environ.get('NAILGUN_SERVER', '127.0.0.1')
default_nailgun_port = int(os.environ.get('NAILGUN_PORT', NAILGUN_PORT_DEFAULT))
parser = optparse.OptionParser(usage='%prog [options] cmd arg1 arg2 ...')
parser.add_option('--nailgun-server', default=default_nailgun_server)
parser.add_option('--nailgun-port', type='int', default=default_nailgun_port)
parser.add_option('--nailgun-filearg')
parser.add_option('--nailgun-showversion', action='store_true')
parser.add_option('--nailgun-help', action='help')
(options, args) = parser.parse_args()
if options.nailgun_showversion:
print 'NailGun client version ' + NAILGUN_VERSION
if len(args):
cmd = args.pop(0)
else:
cmd = os.path.basename(sys.argv[0])
# Pass any remaining command line arguments to the server.
cmd_args = args
try:
with NailgunConnection(
options.nailgun_server,
server_port=options.nailgun_port) as c:
exit_code = c.send_command(cmd, cmd_args, options.nailgun_filearg)
sys.exit(exit_code)
except NailgunException as e:
print >>sys.stderr, str(e)
sys.exit(e.code)
except KeyboardInterrupt as e:
pass
if __name__ == '__main__':
main()
| apache-2.0 |
chooyan-eng/ChooyanHttp | http_client.py | 1 | 2148 | import socket
class ChooyanHttpClient:
def request(host, port=80):
response = ChooyanResponse()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
request_str = 'GET / HTTP/1.1\r\nHost: %s\r\n\r\n' % (host)
s.send(request_str.encode('utf-8'))
headerbuffer = ResponseBuffer()
allbuffer = ResponseBuffer()
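# Keep receiving until the body is at least Content-Length bytes long;
# headerbuffer is only consulted until Content-Length has been parsed.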
while True:
chunk = s.recv(4096)
if not chunk:
break
allbuffer.append(chunk)
if response.content_length == -1:
headerbuffer.append(chunk)
response.content_length = ChooyanHttpClient.parse_contentlength(headerbuffer)
body = allbuffer.get_body()
if response.content_length != -1 and body is not None and len(body) >= response.content_length:
break
response.body = allbuffer.get_body()
response.responce_code = 200
s.close()
return response
def parse_contentlength(buffer):
while True:
line = buffer.read_line()
if line is None:
return -1
if line.startswith('Content-Length'):
return int(line.replace('Content-Length: ', ''))
class ChooyanResponse:
def __init__(self):
self.responce_code = None
self.body = None
self.content_length = -1
class ResponseBuffer:
def __init__(self):
self.data = b''
def append(self, data):
self.data += data
def read_line(self):
if self.data == b'':
return None
end_index = self.data.find(b'\r\n')
if end_index == -1:
ret = self.data
self.data = b''
else:
ret = self.data[:end_index]
self.data = self.data[end_index + len(b'\r\n'):]
return ret.decode('utf-8')
def get_body(self):
body_index = self.data.find(b'\r\n\r\n')
if body_index == -1:
return None
else:
return self.data[body_index + len(b'\r\n\r\n'):]
if __name__ == '__main__':
resp = ChooyanHttpClient.request('www.hasam.jp', 80)
if resp.responce_code == 200:
print(resp.body)
| apache-2.0 |
nttks/jenkins-test | cms/djangoapps/contentstore/views/entrance_exam.py | 7 | 8293 | """
Entrance Exams view module -- handles all requests related to entrance exam management via Studio
Intended to be utilized as an AJAX callback handler, versus a proper view/screen
"""
import json
import logging
from django.contrib.auth.decorators import login_required
from django_future.csrf import ensure_csrf_cookie
from django.http import HttpResponse
from django.test import RequestFactory
from contentstore.views.item import create_item, delete_item
from milestones import api as milestones_api
from models.settings.course_metadata import CourseMetadata
from opaque_keys.edx.keys import CourseKey, UsageKey
from opaque_keys import InvalidKeyError
from student.auth import has_course_author_access
from util.milestones_helpers import generate_milestone_namespace, NAMESPACE_CHOICES
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.exceptions import ItemNotFoundError
from django.conf import settings
__all__ = ['entrance_exam', ]
log = logging.getLogger(__name__)
@login_required
@ensure_csrf_cookie
def entrance_exam(request, course_key_string):
"""
The restful handler for entrance exams.
It allows retrieval of the entrance exam module (metadata) for a course,
as well as creating a new entrance exam and removing an existing one.
GET
Retrieves the entrance exam module (metadata) for the specified course
POST
Adds an entrance exam module to the specified course.
DELETE
Removes the entrance exam from the course
"""
course_key = CourseKey.from_string(course_key_string)
# Deny access if the user is valid, but they lack the proper object access privileges
if not has_course_author_access(request.user, course_key):
return HttpResponse(status=403)
# Retrieve the entrance exam module for the specified course (returns 404 if none found)
if request.method == 'GET':
return _get_entrance_exam(request, course_key)
# Create a new entrance exam for the specified course (returns 201 if created)
elif request.method == 'POST':
response_format = request.REQUEST.get('format', 'html')
http_accept = request.META.get('http_accept', '')
if response_format == 'json' or 'application/json' in http_accept:
ee_min_score = request.POST.get('entrance_exam_minimum_score_pct', None)
# if request contains empty value or none then save the default one.
entrance_exam_minimum_score_pct = float(settings.ENTRANCE_EXAM_MIN_SCORE_PCT)
if ee_min_score != '' and ee_min_score is not None:
entrance_exam_minimum_score_pct = float(ee_min_score)
return create_entrance_exam(request, course_key, entrance_exam_minimum_score_pct)
return HttpResponse(status=400)
# Remove the entrance exam module for the specified course (returns 204 regardless of existence)
elif request.method == 'DELETE':
return delete_entrance_exam(request, course_key)
# No other HTTP verbs/methods are supported at this time
else:
return HttpResponse(status=405)
def create_entrance_exam(request, course_key, entrance_exam_minimum_score_pct):
"""
API method to create an entrance exam.
First cleans out any existing entrance exam.
"""
_delete_entrance_exam(request, course_key)
return _create_entrance_exam(
request=request,
course_key=course_key,
entrance_exam_minimum_score_pct=entrance_exam_minimum_score_pct
)
def _create_entrance_exam(request, course_key, entrance_exam_minimum_score_pct=None):
"""
Internal workflow operation to create an entrance exam
"""
# Provide a default value for the minimum score percent if nothing specified
if entrance_exam_minimum_score_pct is None:
entrance_exam_minimum_score_pct = float(settings.ENTRANCE_EXAM_MIN_SCORE_PCT)
# Confirm the course exists
course = modulestore().get_course(course_key)
if course is None:
return HttpResponse(status=400)
# Create the entrance exam item (currently it's just a chapter)
payload = {
'category': "chapter",
'display_name': "Entrance Exam",
'parent_locator': unicode(course.location),
'is_entrance_exam': True,
'in_entrance_exam': True,
}
factory = RequestFactory()
internal_request = factory.post('/', json.dumps(payload), content_type="application/json")
internal_request.user = request.user
created_item = json.loads(create_item(internal_request).content)
# Set the entrance exam metadata flags for this course
# Reload the course so we don't overwrite the new child reference
course = modulestore().get_course(course_key)
metadata = {
'entrance_exam_enabled': True,
'entrance_exam_minimum_score_pct': entrance_exam_minimum_score_pct / 100,
'entrance_exam_id': created_item['locator'],
}
CourseMetadata.update_from_dict(metadata, course, request.user)
# Add an entrance exam milestone if one does not already exist
milestone_namespace = generate_milestone_namespace(
NAMESPACE_CHOICES['ENTRANCE_EXAM'],
course_key
)
milestones = milestones_api.get_milestones(milestone_namespace)
if len(milestones):
milestone = milestones[0]
else:
description = 'Autogenerated during {} entrance exam creation.'.format(unicode(course.id))
milestone = milestones_api.add_milestone({
'name': 'Completed Course Entrance Exam',
'namespace': milestone_namespace,
'description': description
})
relationship_types = milestones_api.get_milestone_relationship_types()
milestones_api.add_course_milestone(
unicode(course.id),
relationship_types['REQUIRES'],
milestone
)
milestones_api.add_course_content_milestone(
unicode(course.id),
created_item['locator'],
relationship_types['FULFILLS'],
milestone
)
return HttpResponse(status=201)
def _get_entrance_exam(request, course_key): # pylint: disable=W0613
"""
Internal workflow operation to retrieve an entrance exam
"""
course = modulestore().get_course(course_key)
if course is None:
return HttpResponse(status=400)
if not getattr(course, 'entrance_exam_id'):
return HttpResponse(status=404)
try:
exam_key = UsageKey.from_string(course.entrance_exam_id)
except InvalidKeyError:
return HttpResponse(status=404)
try:
exam_descriptor = modulestore().get_item(exam_key)
return HttpResponse(
_serialize_entrance_exam(exam_descriptor),
status=200, mimetype='application/json')
except ItemNotFoundError:
return HttpResponse(status=404)
def delete_entrance_exam(request, course_key):
"""
API method to delete an entrance exam
"""
return _delete_entrance_exam(request=request, course_key=course_key)
def _delete_entrance_exam(request, course_key):
"""
Internal workflow operation to remove an entrance exam
"""
store = modulestore()
course = store.get_course(course_key)
if course is None:
return HttpResponse(status=400)
course_children = store.get_items(
course_key,
qualifiers={'category': 'chapter'}
)
for course_child in course_children:
if course_child.is_entrance_exam:
delete_item(request, course_child.scope_ids.usage_id)
milestones_api.remove_content_references(unicode(course_child.scope_ids.usage_id))
# Reset the entrance exam flags on the course
# Reload the course so we have the latest state
course = store.get_course(course_key)
if getattr(course, 'entrance_exam_id'):
metadata = {
'entrance_exam_enabled': False,
'entrance_exam_minimum_score_pct': None,
'entrance_exam_id': None,
}
CourseMetadata.update_from_dict(metadata, course, request.user)
return HttpResponse(status=204)
def _serialize_entrance_exam(entrance_exam_module):
"""
Internal helper to convert an entrance exam module/object into JSON
"""
return json.dumps({
'locator': unicode(entrance_exam_module.location)
})
| agpl-3.0 |
amenonsen/ansible | lib/ansible/modules/network/aci/aci_contract_subject_to_filter.py | 26 | 9088 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = r'''
---
module: aci_contract_subject_to_filter
short_description: Bind Contract Subjects to Filters (vz:RsSubjFiltAtt)
description:
- Bind Contract Subjects to Filters on Cisco ACI fabrics.
version_added: '2.4'
options:
contract:
description:
- The name of the contract.
type: str
aliases: [ contract_name ]
filter:
description:
- The name of the Filter to bind to the Subject.
type: str
aliases: [ filter_name ]
log:
description:
- Determines if the binding should be set to log.
- The APIC defaults to C(none) when unset during creation.
type: str
choices: [ log, none ]
aliases: [ directive ]
subject:
description:
- The name of the Contract Subject.
type: str
aliases: [ contract_subject, subject_name ]
state:
description:
- Use C(present) or C(absent) for adding or removing.
- Use C(query) for listing an object or multiple objects.
type: str
choices: [ absent, present, query ]
default: present
tenant:
description:
- The name of the tenant.
type: str
required: yes
aliases: [ tenant_name ]
extends_documentation_fragment: aci
notes:
- The C(tenant), C(contract), C(subject), and C(filter_name) must exist before using this module in your playbook.
The M(aci_tenant), M(aci_contract), M(aci_contract_subject), and M(aci_filter) modules can be used for these.
seealso:
- module: aci_contract_subject
- module: aci_filter
- name: APIC Management Information Model reference
description: More information about the internal APIC class B(vz:RsSubjFiltAtt).
link: https://developer.cisco.com/docs/apic-mim-ref/
author:
- Jacob McGill (@jmcgill298)
'''
EXAMPLES = r'''
- name: Add a new contract subject to filter binding
aci_contract_subject_to_filter:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: test
filter: '{{ filter }}'
log: '{{ log }}'
state: present
delegate_to: localhost
- name: Remove an existing contract subject to filter binding
aci_contract_subject_to_filter:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: test
filter: '{{ filter }}'
log: '{{ log }}'
state: absent
delegate_to: localhost
- name: Query a specific contract subject to filter binding
aci_contract_subject_to_filter:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: test
filter: '{{ filter }}'
state: query
delegate_to: localhost
register: query_result
- name: Query all contract subject to filter bindings
aci_contract_subject_to_filter:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: test
state: query
delegate_to: localhost
register: query_result
'''
RETURN = r'''
current:
description: The existing configuration from the APIC after the module has finished
returned: success
type: list
sample:
[
{
"fvTenant": {
"attributes": {
"descr": "Production environment",
"dn": "uni/tn-production",
"name": "production",
"nameAlias": "",
"ownerKey": "",
"ownerTag": ""
}
}
}
]
error:
description: The error information as returned from the APIC
returned: failure
type: dict
sample:
{
"code": "122",
"text": "unknown managed object class foo"
}
raw:
description: The raw output returned by the APIC REST API (xml or json)
returned: parse error
type: str
sample: '<?xml version="1.0" encoding="UTF-8"?><imdata totalCount="1"><error code="122" text="unknown managed object class foo"/></imdata>'
sent:
description: The actual/minimal configuration pushed to the APIC
returned: info
type: list
sample:
{
"fvTenant": {
"attributes": {
"descr": "Production environment"
}
}
}
previous:
description: The original configuration from the APIC before the module has started
returned: info
type: list
sample:
[
{
"fvTenant": {
"attributes": {
"descr": "Production",
"dn": "uni/tn-production",
"name": "production",
"nameAlias": "",
"ownerKey": "",
"ownerTag": ""
}
}
}
]
proposed:
description: The assembled configuration from the user-provided parameters
returned: info
type: dict
sample:
{
"fvTenant": {
"attributes": {
"descr": "Production environment",
"name": "production"
}
}
}
filter_string:
description: The filter string used for the request
returned: failure or debug
type: str
sample: ?rsp-prop-include=config-only
method:
description: The HTTP method used for the request to the APIC
returned: failure or debug
type: str
sample: POST
response:
description: The HTTP response from the APIC
returned: failure or debug
type: str
sample: OK (30 bytes)
status:
description: The HTTP status from the APIC
returned: failure or debug
type: int
sample: 200
url:
description: The HTTP url used for the request to the APIC
returned: failure or debug
type: str
sample: https://10.11.12.13/api/mo/uni/tn-production.json
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.aci.aci import ACIModule, aci_argument_spec
def main():
argument_spec = aci_argument_spec()
argument_spec.update(
contract=dict(type='str', aliases=['contract_name']), # Not required for querying all objects
filter=dict(type='str', aliases=['filter_name']), # Not required for querying all objects
subject=dict(type='str', aliases=['contract_subject', 'subject_name']), # Not required for querying all objects
tenant=dict(type='str', aliases=['tenant_name']), # Not required for querying all objects
log=dict(type='str', choices=['log', 'none'], aliases=['directive']),
state=dict(type='str', default='present', choices=['absent', 'present', 'query']),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=[
['state', 'absent', ['contract', 'filter', 'subject', 'tenant']],
['state', 'present', ['contract', 'filter', 'subject', 'tenant']],
],
)
contract = module.params['contract']
filter_name = module.params['filter']
log = module.params['log']
subject = module.params['subject']
tenant = module.params['tenant']
state = module.params['state']
# Add subject_filter key to module.params for building the URL
module.params['subject_filter'] = filter_name
# Convert log to an empty string if 'none', as that is what the API expects; an empty string is not a good option to present to the user.
if log == 'none':
log = ''
aci = ACIModule(module)
aci.construct_url(
root_class=dict(
aci_class='fvTenant',
aci_rn='tn-{0}'.format(tenant),
module_object=tenant,
target_filter={'name': tenant},
),
subclass_1=dict(
aci_class='vzBrCP',
aci_rn='brc-{0}'.format(contract),
module_object=contract,
target_filter={'name': contract},
),
subclass_2=dict(
aci_class='vzSubj',
aci_rn='subj-{0}'.format(subject),
module_object=subject,
target_filter={'name': subject},
),
subclass_3=dict(
aci_class='vzRsSubjFiltAtt',
aci_rn='rssubjFiltAtt-{0}'.format(filter_name),
module_object=filter_name,
target_filter={'tnVzFilterName': filter_name},
),
)
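# The construct_url() call above targets the managed object at (roughly)
# uni/tn-<tenant>/brc-<contract>/subj-<subject>/rssubjFiltAtt-<filter>.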
aci.get_existing()
if state == 'present':
aci.payload(
aci_class='vzRsSubjFiltAtt',
class_config=dict(
tnVzFilterName=filter_name,
directives=log,
),
)
aci.get_diff(aci_class='vzRsSubjFiltAtt')
aci.post_config()
elif state == 'absent':
aci.delete_config()
# Remove subject_filter used to build URL from module.params
module.params.pop('subject_filter')
aci.exit_json()
if __name__ == "__main__":
main()
| gpl-3.0 |
mollstam/UnrealPy | UnrealPyEmbed/Development/Python/2015.08.07-Python2710-x64-Source-vs2015/Python27/Source/django-1.8.2/tests/auth_tests/test_hashers.py | 12 | 14727 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from unittest import skipUnless
from django.conf.global_settings import PASSWORD_HASHERS
from django.contrib.auth.hashers import (
UNUSABLE_PASSWORD_PREFIX, UNUSABLE_PASSWORD_SUFFIX_LENGTH,
BasePasswordHasher, PBKDF2PasswordHasher, PBKDF2SHA1PasswordHasher,
check_password, get_hasher, identify_hasher, is_password_usable,
make_password,
)
from django.test import SimpleTestCase
from django.test.utils import override_settings
from django.utils import six
try:
import crypt
except ImportError:
crypt = None
try:
import bcrypt
except ImportError:
bcrypt = None
class PBKDF2SingleIterationHasher(PBKDF2PasswordHasher):
iterations = 1
@override_settings(PASSWORD_HASHERS=PASSWORD_HASHERS)
class TestUtilsHashPass(SimpleTestCase):
def test_simple(self):
encoded = make_password('lètmein')
self.assertTrue(encoded.startswith('pbkdf2_sha256$'))
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
# Blank passwords
blank_encoded = make_password('')
self.assertTrue(blank_encoded.startswith('pbkdf2_sha256$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
def test_pbkdf2(self):
encoded = make_password('lètmein', 'seasalt', 'pbkdf2_sha256')
self.assertEqual(encoded,
'pbkdf2_sha256$20000$seasalt$oBSd886ysm3AqYun62DOdin8YcfbU1z9cksZSuLP9r0=')
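# The encoded value has the form '<algorithm>$<iterations>$<salt>$<hash>'.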
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "pbkdf2_sha256")
# Blank passwords
blank_encoded = make_password('', 'seasalt', 'pbkdf2_sha256')
self.assertTrue(blank_encoded.startswith('pbkdf2_sha256$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
def test_sha1(self):
encoded = make_password('lètmein', 'seasalt', 'sha1')
self.assertEqual(encoded,
'sha1$seasalt$cff36ea83f5706ce9aa7454e63e431fc726b2dc8')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "sha1")
# Blank passwords
blank_encoded = make_password('', 'seasalt', 'sha1')
self.assertTrue(blank_encoded.startswith('sha1$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
def test_md5(self):
encoded = make_password('lètmein', 'seasalt', 'md5')
self.assertEqual(encoded,
'md5$seasalt$3f86d0d3d465b7b458c231bf3555c0e3')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "md5")
# Blank passwords
blank_encoded = make_password('', 'seasalt', 'md5')
self.assertTrue(blank_encoded.startswith('md5$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
def test_unsalted_md5(self):
encoded = make_password('lètmein', '', 'unsalted_md5')
self.assertEqual(encoded, '88a434c88cca4e900f7874cd98123f43')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "unsalted_md5")
# Alternate unsalted syntax
alt_encoded = "md5$$%s" % encoded
self.assertTrue(is_password_usable(alt_encoded))
self.assertTrue(check_password('lètmein', alt_encoded))
self.assertFalse(check_password('lètmeinz', alt_encoded))
# Blank passwords
blank_encoded = make_password('', '', 'unsalted_md5')
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
def test_unsalted_sha1(self):
encoded = make_password('lètmein', '', 'unsalted_sha1')
self.assertEqual(encoded, 'sha1$$6d138ca3ae545631b3abd71a4f076ce759c5700b')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "unsalted_sha1")
# Raw SHA1 isn't acceptable
alt_encoded = encoded[6:]
self.assertFalse(check_password('lètmein', alt_encoded))
# Blank passwords
blank_encoded = make_password('', '', 'unsalted_sha1')
self.assertTrue(blank_encoded.startswith('sha1$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
@skipUnless(crypt, "no crypt module to generate password.")
def test_crypt(self):
encoded = make_password('lètmei', 'ab', 'crypt')
self.assertEqual(encoded, 'crypt$$ab1Hv2Lg7ltQo')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(check_password('lètmei', encoded))
self.assertFalse(check_password('lètmeiz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "crypt")
# Blank passwords
blank_encoded = make_password('', 'ab', 'crypt')
self.assertTrue(blank_encoded.startswith('crypt$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
@skipUnless(bcrypt, "bcrypt not installed")
def test_bcrypt_sha256(self):
encoded = make_password('lètmein', hasher='bcrypt_sha256')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(encoded.startswith('bcrypt_sha256$'))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "bcrypt_sha256")
# Verify that password truncation no longer works
password = ('VSK0UYV6FFQVZ0KG88DYN9WADAADZO1CTSIVDJUNZSUML6IBX7LN7ZS3R5'
'JGB3RGZ7VI7G7DJQ9NI8BQFSRPTG6UWTTVESA5ZPUN')
encoded = make_password(password, hasher='bcrypt_sha256')
self.assertTrue(check_password(password, encoded))
self.assertFalse(check_password(password[:72], encoded))
# Blank passwords
blank_encoded = make_password('', hasher='bcrypt_sha256')
self.assertTrue(blank_encoded.startswith('bcrypt_sha256$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
@skipUnless(bcrypt, "bcrypt not installed")
def test_bcrypt(self):
encoded = make_password('lètmein', hasher='bcrypt')
self.assertTrue(is_password_usable(encoded))
self.assertTrue(encoded.startswith('bcrypt$'))
self.assertTrue(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertEqual(identify_hasher(encoded).algorithm, "bcrypt")
# Blank passwords
blank_encoded = make_password('', hasher='bcrypt')
self.assertTrue(blank_encoded.startswith('bcrypt$'))
self.assertTrue(is_password_usable(blank_encoded))
self.assertTrue(check_password('', blank_encoded))
self.assertFalse(check_password(' ', blank_encoded))
def test_unusable(self):
encoded = make_password(None)
self.assertEqual(len(encoded), len(UNUSABLE_PASSWORD_PREFIX) + UNUSABLE_PASSWORD_SUFFIX_LENGTH)
self.assertFalse(is_password_usable(encoded))
self.assertFalse(check_password(None, encoded))
self.assertFalse(check_password(encoded, encoded))
self.assertFalse(check_password(UNUSABLE_PASSWORD_PREFIX, encoded))
self.assertFalse(check_password('', encoded))
self.assertFalse(check_password('lètmein', encoded))
self.assertFalse(check_password('lètmeinz', encoded))
self.assertRaises(ValueError, identify_hasher, encoded)
# Assert that the unusable passwords actually contain a random part.
# This might fail one day due to a hash collision.
self.assertNotEqual(encoded, make_password(None), "Random password collision?")
def test_unspecified_password(self):
"""
Makes sure specifying no plain password with a valid encoded password
returns `False`.
"""
self.assertFalse(check_password(None, make_password('lètmein')))
def test_bad_algorithm(self):
with self.assertRaises(ValueError):
make_password('lètmein', hasher='lolcat')
self.assertRaises(ValueError, identify_hasher, "lolcat$salt$hash")
def test_bad_encoded(self):
self.assertFalse(is_password_usable('lètmein_badencoded'))
self.assertFalse(is_password_usable(''))
def test_low_level_pbkdf2(self):
hasher = PBKDF2PasswordHasher()
encoded = hasher.encode('lètmein', 'seasalt2')
self.assertEqual(encoded,
'pbkdf2_sha256$20000$seasalt2$Flpve/uAcyo6+IFI6YAhjeABGPVbRQjzHDxRhqxewgw=')
self.assertTrue(hasher.verify('lètmein', encoded))
def test_low_level_pbkdf2_sha1(self):
hasher = PBKDF2SHA1PasswordHasher()
encoded = hasher.encode('lètmein', 'seasalt2')
self.assertEqual(encoded,
'pbkdf2_sha1$20000$seasalt2$pJt86NmjAweBY1StBvxCu7l1o9o=')
self.assertTrue(hasher.verify('lètmein', encoded))
def test_upgrade(self):
self.assertEqual('pbkdf2_sha256', get_hasher('default').algorithm)
for algo in ('sha1', 'md5'):
encoded = make_password('lètmein', hasher=algo)
state = {'upgraded': False}
def setter(password):
state['upgraded'] = True
self.assertTrue(check_password('lètmein', encoded, setter))
self.assertTrue(state['upgraded'])
def test_no_upgrade(self):
encoded = make_password('lètmein')
state = {'upgraded': False}
def setter():
state['upgraded'] = True
self.assertFalse(check_password('WRONG', encoded, setter))
self.assertFalse(state['upgraded'])
def test_no_upgrade_on_incorrect_pass(self):
self.assertEqual('pbkdf2_sha256', get_hasher('default').algorithm)
for algo in ('sha1', 'md5'):
encoded = make_password('lètmein', hasher=algo)
state = {'upgraded': False}
def setter():
state['upgraded'] = True
self.assertFalse(check_password('WRONG', encoded, setter))
self.assertFalse(state['upgraded'])
def test_pbkdf2_upgrade(self):
hasher = get_hasher('default')
self.assertEqual('pbkdf2_sha256', hasher.algorithm)
self.assertNotEqual(hasher.iterations, 1)
old_iterations = hasher.iterations
try:
# Generate a password with 1 iteration.
hasher.iterations = 1
encoded = make_password('letmein')
algo, iterations, salt, hash = encoded.split('$', 3)
self.assertEqual(iterations, '1')
state = {'upgraded': False}
def setter(password):
state['upgraded'] = True
# Check that no upgrade is triggered
self.assertTrue(check_password('letmein', encoded, setter))
self.assertFalse(state['upgraded'])
# Revert to the old iteration count and ...
hasher.iterations = old_iterations
# ... check if the password would get updated to the new iteration count.
self.assertTrue(check_password('letmein', encoded, setter))
self.assertTrue(state['upgraded'])
finally:
hasher.iterations = old_iterations
def test_pbkdf2_upgrade_new_hasher(self):
hasher = get_hasher('default')
self.assertEqual('pbkdf2_sha256', hasher.algorithm)
self.assertNotEqual(hasher.iterations, 1)
state = {'upgraded': False}
def setter(password):
state['upgraded'] = True
with self.settings(PASSWORD_HASHERS=[
'auth_tests.test_hashers.PBKDF2SingleIterationHasher']):
encoded = make_password('letmein')
algo, iterations, salt, hash = encoded.split('$', 3)
self.assertEqual(iterations, '1')
# Check that no upgrade is triggered
self.assertTrue(check_password('letmein', encoded, setter))
self.assertFalse(state['upgraded'])
# Revert to the old iteration count and check if the password would get
# updated to the new iteration count.
with self.settings(PASSWORD_HASHERS=[
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'auth_tests.test_hashers.PBKDF2SingleIterationHasher']):
self.assertTrue(check_password('letmein', encoded, setter))
self.assertTrue(state['upgraded'])
def test_load_library_no_algorithm(self):
with self.assertRaises(ValueError) as e:
BasePasswordHasher()._load_library()
self.assertEqual("Hasher 'BasePasswordHasher' doesn't specify a "
"library attribute", str(e.exception))
def test_load_library_importerror(self):
PlainHasher = type(str('PlainHasher'), (BasePasswordHasher,),
{'algorithm': 'plain', 'library': 'plain'})
# Python 3.3 adds quotes around module name
with six.assertRaisesRegex(self, ValueError,
"Couldn't load 'PlainHasher' algorithm library: No module named '?plain'?"):
PlainHasher()._load_library()
| mit |
PennPanda/xenproject | tools/libxl/gentypes.py | 8 | 27441 | #!/usr/bin/python
import sys
import re
import idl
def libxl_C_instance_of(ty, instancename):
if isinstance(ty, idl.Aggregate) and ty.typename is None:
if instancename is None:
return libxl_C_type_define(ty)
else:
return libxl_C_type_define(ty) + " " + instancename
s = ""
if isinstance(ty, idl.Array):
s += libxl_C_instance_of(ty.lenvar.type, ty.lenvar.name) + ";\n"
return s + ty.typename + " " + instancename
def libxl_C_type_define(ty, indent = ""):
s = ""
if isinstance(ty, idl.Enumeration):
if ty.typename is None:
s += "enum {\n"
else:
s += "typedef enum %s {\n" % ty.typename
for v in ty.values:
x = "%s = %d" % (v.name, v.value)
x = x.replace("\n", "\n ")
s += " " + x + ",\n"
if ty.typename is None:
s += "}"
else:
s += "} %s" % ty.typename
elif isinstance(ty, idl.Aggregate):
if isinstance(ty, idl.KeyedUnion):
s += libxl_C_instance_of(ty.keyvar.type, ty.keyvar.name) + ";\n"
if ty.typename is None:
s += "%s {\n" % ty.kind
else:
s += "typedef %s %s {\n" % (ty.kind, ty.typename)
for f in ty.fields:
if isinstance(ty, idl.KeyedUnion) and f.type is None: continue
x = libxl_C_instance_of(f.type, f.name)
if f.const:
x = "const " + x
x = x.replace("\n", "\n ")
s += " " + x + ";\n"
if ty.typename is None:
s += "}"
else:
s += "} %s" % ty.typename
else:
raise NotImplementedError("%s" % type(ty))
return s.replace("\n", "\n%s" % indent)
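# For an idl.Enumeration the function above emits roughly (a sketch, names
# are illustrative):
#   typedef enum libxl_foo {
#       LIBXL_FOO_BAR = 1,
#   } libxl_foo
# while Aggregates become struct/union typedefs with one member per field.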
def libxl_C_type_dispose(ty, v, indent = " ", parent = None):
s = ""
if isinstance(ty, idl.KeyedUnion):
if parent is None:
raise Exception("KeyedUnion type must have a parent")
s += "switch (%s) {\n" % (parent + ty.keyvar.name)
for f in ty.fields:
(nparent,fexpr) = ty.member(v, f, parent is None)
s += "case %s:\n" % f.enumname
if f.type is not None:
s += libxl_C_type_dispose(f.type, fexpr, indent + " ", nparent)
s += " break;\n"
s += "}\n"
elif isinstance(ty, idl.Array):
if parent is None:
raise Exception("Array type must have a parent")
if ty.elem_type.dispose_fn is not None:
s += "{\n"
s += " int i;\n"
s += " for (i=0; i<%s; i++)\n" % (parent + ty.lenvar.name)
s += libxl_C_type_dispose(ty.elem_type, v+"[i]",
indent + " ", parent)
if ty.dispose_fn is not None:
if ty.elem_type.dispose_fn is not None:
s += " "
s += "%s(%s);\n" % (ty.dispose_fn, ty.pass_arg(v, parent is None))
if ty.elem_type.dispose_fn is not None:
s += "}\n"
elif isinstance(ty, idl.Struct) and (parent is None or ty.dispose_fn is None):
for f in [f for f in ty.fields if not f.const]:
(nparent,fexpr) = ty.member(v, f, parent is None)
s += libxl_C_type_dispose(f.type, fexpr, "", nparent)
else:
if ty.dispose_fn is not None:
s += "%s(%s);\n" % (ty.dispose_fn, ty.pass_arg(v, parent is None))
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_type_copy(ty, v, w, indent = " ", vparent = None, wparent = None):
s = ""
if vparent is None:
s += "GC_INIT(ctx);\n";
if isinstance(ty, idl.KeyedUnion):
if vparent is None or wparent is None:
raise Exception("KeyedUnion type must have a parent")
s += "%s = %s;\n" % ((vparent + ty.keyvar.name), (wparent + ty.keyvar.name))
s += "switch (%s) {\n" % (wparent + ty.keyvar.name)
for f in ty.fields:
(vnparent,vfexpr) = ty.member(v, f, vparent is None)
(wnparent,wfexpr) = ty.member(w, f, wparent is None)
s += "case %s:\n" % f.enumname
if f.type is not None:
s += libxl_C_type_copy(f.type, vfexpr, wfexpr, indent + " ",
vnparent, wnparent)
s += " break;\n"
s += "}\n"
elif isinstance(ty, idl.Array):
if vparent is None or wparent is None:
raise Exception("Array type must have a parent")
s += "%s = libxl__calloc(NOGC, %s, sizeof(*%s));\n" % (ty.pass_arg(v, vparent is None),
(wparent + ty.lenvar.name),
ty.pass_arg(w, wparent is None))
s += "%s = %s;\n" % ((vparent + ty.lenvar.name), (wparent + ty.lenvar.name))
s += "{\n"
s += " int i;\n"
s += " for (i=0; i<%s; i++)\n" % (wparent + ty.lenvar.name)
s += libxl_C_type_copy(ty.elem_type, v+"[i]", w+"[i]",
indent + " ", vparent, wparent)
s += "}\n"
elif isinstance(ty, idl.Struct) and ((vparent is None and wparent is None) or ty.copy_fn is None):
for f in [f for f in ty.fields if not f.const and not f.type.private]:
(vnparent,vfexpr) = ty.member(v, f, vparent is None)
(wnparent,wfexpr) = ty.member(w, f, wparent is None)
s += libxl_C_type_copy(f.type, vfexpr, wfexpr, "", vnparent, wnparent)
else:
if ty.copy_fn is not None:
s += "%s(ctx, %s, %s);\n" % (ty.copy_fn,
ty.pass_arg(v, vparent is None, passby=idl.PASS_BY_REFERENCE),
ty.pass_arg(w, wparent is None, passby=idl.PASS_BY_REFERENCE))
else:
s += "%s = %s;\n" % (ty.pass_arg(v, vparent is None, passby=idl.PASS_BY_VALUE),
ty.pass_arg(w, wparent is None, passby=idl.PASS_BY_VALUE))
if vparent is None:
s += "GC_FREE;\n"
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_init_members(ty, nesting = 0):
"""Returns a list of members of ty which require a separate init"""
if isinstance(ty, idl.Aggregate):
return [f for f in ty.fields if not f.const and isinstance(f.type,idl.KeyedUnion)]
else:
return []
def _libxl_C_type_init(ty, v, indent = " ", parent = None, subinit=False):
s = ""
if isinstance(ty, idl.KeyedUnion):
if parent is None:
raise Exception("KeyedUnion type must have a parent")
if subinit:
s += "switch (%s) {\n" % (parent + ty.keyvar.name)
for f in ty.fields:
(nparent,fexpr) = ty.member(v, f, parent is None)
s += "case %s:\n" % f.enumname
if f.type is not None:
s += _libxl_C_type_init(f.type, fexpr, " ", nparent)
s += " break;\n"
s += "}\n"
else:
if ty.keyvar.init_val:
s += "%s = %s;\n" % (parent + ty.keyvar.name, ty.keyvar.init_val)
elif ty.keyvar.type.init_val:
s += "%s = %s;\n" % (parent + ty.keyvar.name, ty.keyvar.type.init_val)
elif isinstance(ty, idl.Struct) and (parent is None or ty.init_fn is None):
for f in [f for f in ty.fields if not f.const]:
(nparent,fexpr) = ty.member(v, f, parent is None)
if f.init_val is not None:
s += "%s = %s;\n" % (fexpr, f.init_val)
else:
s += _libxl_C_type_init(f.type, fexpr, "", nparent)
else:
if ty.init_val is not None:
s += "%s = %s;\n" % (ty.pass_arg(v, parent is None), ty.init_val)
elif ty.init_fn is not None:
s += "%s(%s);\n" % (ty.init_fn, ty.pass_arg(v, parent is None))
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_type_init(ty):
s = ""
s += "void %s(%s)\n" % (ty.init_fn, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE))
s += "{\n"
s += " memset(p, '\\0', sizeof(*p));\n"
s += _libxl_C_type_init(ty, "p")
s += "}\n"
s += "\n"
return s
def libxl_C_type_member_init(ty, field):
if not isinstance(field.type, idl.KeyedUnion):
raise Exception("Only KeyedUnion is supported for member init")
ku = field.type
s = ""
s += "void %s(%s, %s)\n" % (ty.init_fn + "_" + ku.keyvar.name,
ty.make_arg("p", passby=idl.PASS_BY_REFERENCE),
ku.keyvar.type.make_arg(ku.keyvar.name))
s += "{\n"
if ku.keyvar.init_val is not None:
init_val = ku.keyvar.init_val
elif ku.keyvar.type.init_val is not None:
init_val = ku.keyvar.type.init_val
else:
init_val = None
(nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
if init_val is not None:
s += " assert(%s == %s);\n" % (fexpr, init_val)
else:
s += " assert(!%s);\n" % (fexpr)
s += " %s = %s;\n" % (fexpr, ku.keyvar.name)
(nparent,fexpr) = ty.member(ty.pass_arg("p"), field, isref=True)
s += _libxl_C_type_init(ku, fexpr, parent=nparent, subinit=True)
s += "}\n"
s += "\n"
return s
def libxl_C_type_gen_map_key(f, parent, indent = ""):
s = ""
if isinstance(f.type, idl.KeyedUnion):
s += "switch (%s) {\n" % (parent + f.type.keyvar.name)
for x in f.type.fields:
v = f.type.keyvar.name + "." + x.name
s += "case %s:\n" % x.enumname
s += " s = yajl_gen_string(hand, (const unsigned char *)\"%s\", sizeof(\"%s\")-1);\n" % (v, v)
s += " if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
s += " break;\n"
s += "}\n"
else:
s += "s = yajl_gen_string(hand, (const unsigned char *)\"%s\", sizeof(\"%s\")-1);\n" % (f.name, f.name)
s += "if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def get_init_val(f):
if f.init_val is not None:
return f.init_val
elif f.type.init_val is not None:
return f.type.init_val
return None
def get_default_expr(f, nparent, fexpr):
if isinstance(f.type, idl.Aggregate):
return "1 /* always generate JSON output for aggregate type */"
if isinstance(f.type, idl.Array):
return "%s && %s" % (fexpr, nparent + f.type.lenvar.name)
init_val = get_init_val(f)
if init_val is not None:
return "%s != %s" % (fexpr, init_val)
if f.type.check_default_fn:
return "!%s(&%s)" % (f.type.check_default_fn, fexpr)
return "%s" % fexpr
def libxl_C_type_gen_json(ty, v, indent = " ", parent = None):
s = ""
if parent is None:
s += "yajl_gen_status s;\n"
if isinstance(ty, idl.Array):
if parent is None:
raise Exception("Array type must have a parent")
s += "{\n"
s += " int i;\n"
s += " s = yajl_gen_array_open(hand);\n"
s += " if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
s += " for (i=0; i<%s; i++) {\n" % (parent + ty.lenvar.name)
s += libxl_C_type_gen_json(ty.elem_type, v+"[i]",
indent + " ", parent)
s += " }\n"
s += " s = yajl_gen_array_close(hand);\n"
s += " if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
s += "}\n"
elif isinstance(ty, idl.Enumeration):
s += "s = libxl__yajl_gen_enum(hand, %s_to_string(%s));\n" % (ty.typename, ty.pass_arg(v, parent is None))
s += "if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
elif isinstance(ty, idl.KeyedUnion):
if parent is None:
raise Exception("KeyedUnion type must have a parent")
s += "switch (%s) {\n" % (parent + ty.keyvar.name)
for f in ty.fields:
(nparent,fexpr) = ty.member(v, f, parent is None)
s += "case %s:\n" % f.enumname
if f.type is not None:
s += libxl_C_type_gen_json(f.type, fexpr, indent + " ", nparent)
else:
s += " s = yajl_gen_map_open(hand);\n"
s += " if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
s += " s = yajl_gen_map_close(hand);\n"
s += " if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
s += " break;\n"
s += "}\n"
elif isinstance(ty, idl.Struct) and (parent is None or ty.json_gen_fn is None):
s += "s = yajl_gen_map_open(hand);\n"
s += "if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
for f in [f for f in ty.fields if not f.const and not f.type.private]:
(nparent,fexpr) = ty.member(v, f, parent is None)
default_expr = get_default_expr(f, nparent, fexpr)
s += "if (%s) {\n" % default_expr
s += libxl_C_type_gen_map_key(f, nparent, " ")
s += libxl_C_type_gen_json(f.type, fexpr, " ", nparent)
s += "}\n"
s += "s = yajl_gen_map_close(hand);\n"
s += "if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
else:
if ty.json_gen_fn is not None:
s += "s = %s(hand, %s);\n" % (ty.json_gen_fn, ty.pass_arg(v, parent is None))
s += "if (s != yajl_gen_status_ok)\n"
s += " goto out;\n"
if parent is None:
s += "out:\n"
s += "return s;\n"
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_type_to_json(ty, v, indent = " "):
s = ""
gen = "(libxl__gen_json_callback)&%s_gen_json" % ty.typename
s += "return libxl__object_to_json(ctx, \"%s\", %s, (void *)%s);\n" % (ty.typename, gen, ty.pass_arg(v, passby=idl.PASS_BY_REFERENCE))
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_type_parse_json(ty, w, v, indent = " ", parent = None, discriminator = None):
s = ""
if parent is None:
s += "int rc = 0;\n"
s += "const libxl__json_object *x = o;\n"
if isinstance(ty, idl.Array):
if parent is None:
raise Exception("Array type must have a parent")
if discriminator is not None:
raise Exception("Only KeyedUnion can have discriminator")
lenvar = parent + ty.lenvar.name
s += "{\n"
s += " libxl__json_object *t;\n"
s += " int i;\n"
s += " if (!libxl__json_object_is_array(x)) {\n"
s += " rc = -1;\n"
s += " goto out;\n"
s += " }\n"
s += " %s = x->u.array->count;\n" % lenvar
s += " %s = libxl__calloc(NOGC, %s, sizeof(*%s));\n" % (v, lenvar, v)
s += " if (!%s && %s != 0) {\n" % (v, lenvar)
s += " rc = -1;\n"
s += " goto out;\n"
s += " }\n"
s += " for (i=0; (t=libxl__json_array_get(x,i)); i++) {\n"
s += libxl_C_type_parse_json(ty.elem_type, "t", v+"[i]",
indent + " ", parent)
s += " }\n"
s += " if (i != %s) {\n" % lenvar
s += " rc = -1;\n"
s += " goto out;\n"
s += " }\n"
s += "}\n"
elif isinstance(ty, idl.Enumeration):
if discriminator is not None:
raise Exception("Only KeyedUnion can have discriminator")
s += "{\n"
s += " const char *enum_str;\n"
s += " if (!libxl__json_object_is_string(x)) {\n"
s += " rc = -1;\n"
s += " goto out;\n"
s += " }\n"
s += " enum_str = libxl__json_object_get_string(x);\n"
s += " rc = %s_from_string(enum_str, %s);\n" % (ty.typename, ty.pass_arg(v, parent is None, idl.PASS_BY_REFERENCE))
s += " if (rc)\n"
s += " goto out;\n"
s += "}\n"
elif isinstance(ty, idl.KeyedUnion):
if parent is None:
raise Exception("KeyedUnion type must have a parent")
if discriminator is None:
raise Exception("KeyedUnion type must have a discriminator")
for f in ty.fields:
if f.enumname != discriminator:
continue
(nparent,fexpr) = ty.member(v, f, parent is None)
if f.type is not None:
s += libxl_C_type_parse_json(f.type, w, fexpr, indent + " ", nparent)
elif isinstance(ty, idl.Struct) and (parent is None or ty.json_parse_fn is None):
if discriminator is not None:
raise Exception("Only KeyedUnion can have discriminator")
for f in [f for f in ty.fields if not f.const and not f.type.private]:
saved_var_name = "saved_%s" % f.name
s += "{\n"
s += " const libxl__json_object *%s = NULL;\n" % saved_var_name
s += " %s = x;\n" % saved_var_name
if isinstance(f.type, idl.KeyedUnion):
for x in f.type.fields:
s += " x = libxl__json_map_get(\"%s\", %s, JSON_MAP);\n" % \
(f.type.keyvar.name + "." + x.name, w)
s += " if (x) {\n"
(nparent, fexpr) = ty.member(v, f.type.keyvar, parent is None)
s += " %s_init_%s(%s, %s);\n" % (ty.typename, f.type.keyvar.name, v, x.enumname)
(nparent,fexpr) = ty.member(v, f, parent is None)
s += libxl_C_type_parse_json(f.type, "x", fexpr, " ", nparent, x.enumname)
s += " }\n"
else:
s += " x = libxl__json_map_get(\"%s\", %s, %s);\n" % (f.name, w, f.type.json_parse_type)
s += " if (x) {\n"
(nparent,fexpr) = ty.member(v, f, parent is None)
s += libxl_C_type_parse_json(f.type, "x", fexpr, " ", nparent)
s += " }\n"
s += " x = %s;\n" % saved_var_name
s += "}\n"
else:
if discriminator is not None:
raise Exception("Only KeyedUnion can have discriminator")
if ty.json_parse_fn is not None:
s += "rc = %s(gc, %s, &%s);\n" % (ty.json_parse_fn, w, v)
s += "if (rc)\n"
s += " goto out;\n"
if parent is None:
s += "out:\n"
s += "return rc;\n"
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_type_from_json(ty, v, w, indent = " "):
s = ""
parse = "(libxl__json_parse_callback)&%s_parse_json" % (ty.namespace + "_" + ty.rawname)
s += "return libxl__object_from_json(ctx, \"%s\", %s, %s, %s);\n" % (ty.typename, parse, v, w)
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_enum_to_string(ty, e, indent = " "):
s = ""
s += "switch(%s) {\n" % e
for v in ty.values:
s += " case %s:\n" % (v.name)
s += " return \"%s\";\n" % (v.valuename.lower())
s += " default:\n "
s += " return NULL;\n"
s += "}\n"
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_enum_strings(ty, indent=""):
s = ""
s += "libxl_enum_string_table %s_string_table[] = {\n" % (ty.typename)
for v in ty.values:
s += " { .s = \"%s\", .v = %s },\n" % (v.valuename.lower(), v.name)
s += " { NULL, -1 },\n"
s += "};\n"
s += "\n"
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
def libxl_C_enum_from_string(ty, str, e, indent = " "):
s = ""
s += "return libxl__enum_from_string(%s_string_table,\n" % ty.typename
s += " %s, (int *)%s);\n" % (str, e)
if s != "":
s = indent + s
return s.replace("\n", "\n%s" % indent).rstrip(indent)
if __name__ == '__main__':
if len(sys.argv) != 6:
print >>sys.stderr, "Usage: gentypes.py <idl> <header> <header-private> <header-json> <implementation>"
sys.exit(1)
(_, idlname, header, header_private, header_json, impl) = sys.argv
(builtins,types) = idl.parse(idlname)
print "outputting libxl type definitions to %s" % header
f = open(header, "w")
header_define = header.upper().replace('.','_')
f.write("""#ifndef %s
#define %s
/*
* DO NOT EDIT.
*
* This file is autogenerated by
* "%s"
*/
""" % (header_define, header_define, " ".join(sys.argv)))
for ty in types:
f.write(libxl_C_type_define(ty) + ";\n")
if ty.dispose_fn is not None:
f.write("%svoid %s(%s);\n" % (ty.hidden(), ty.dispose_fn, ty.make_arg("p")))
if ty.copy_fn is not None:
f.write("%svoid %s(libxl_ctx *ctx, %s, %s);\n" % (ty.hidden(), ty.copy_fn,
ty.make_arg("dst"), ty.make_arg("src")))
if ty.init_fn is not None:
f.write("%svoid %s(%s);\n" % (ty.hidden(), ty.init_fn, ty.make_arg("p")))
for field in libxl_init_members(ty):
if not isinstance(field.type, idl.KeyedUnion):
raise Exception("Only KeyedUnion is supported for member init")
ku = field.type
f.write("%svoid %s(%s, %s);\n" % (ty.hidden(), ty.init_fn + "_" + ku.keyvar.name,
ty.make_arg("p"),
ku.keyvar.type.make_arg(ku.keyvar.name)))
if ty.json_gen_fn is not None:
f.write("%schar *%s_to_json(libxl_ctx *ctx, %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p")))
if ty.json_parse_fn is not None:
f.write("%sint %s_from_json(libxl_ctx *ctx, %s, const char *s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
if isinstance(ty, idl.Enumeration):
f.write("%sconst char *%s_to_string(%s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p")))
f.write("%sint %s_from_string(const char *s, %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("e", passby=idl.PASS_BY_REFERENCE)))
f.write("%sextern libxl_enum_string_table %s_string_table[];\n" % (ty.hidden(), ty.typename))
f.write("\n")
f.write("""#endif /* %s */\n""" % (header_define))
f.close()
print "outputting libxl JSON definitions to %s" % header_json
f = open(header_json, "w")
header_json_define = header_json.upper().replace('.','_')
f.write("""#ifndef %s
#define %s
/*
* DO NOT EDIT.
*
* This file is autogenerated by
* "%s"
*/
""" % (header_json_define, header_json_define, " ".join(sys.argv)))
for ty in [ty for ty in types if ty.json_gen_fn is not None]:
f.write("%syajl_gen_status %s_gen_json(yajl_gen hand, %s);\n" % (ty.hidden(), ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
f.write("\n")
f.write("""#endif /* %s */\n""" % header_json_define)
f.close()
print "outputting libxl type internal definitions to %s" % header_private
f = open(header_private, "w")
header_private_define = header_private.upper().replace('.','_')
f.write("""#ifndef %s
#define %s
/*
* DO NOT EDIT.
*
* This file is autogenerated by
* "%s"
*/
""" % (header_private_define, header_private_define, " ".join(sys.argv)))
for ty in [ty for ty in types if ty.json_parse_fn is not None]:
f.write("%sint %s_parse_json(libxl__gc *gc, const libxl__json_object *o, %s);\n" % \
(ty.hidden(), ty.namespace + "_" + ty.rawname,
ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
f.write("\n")
f.write("""#endif /* %s */\n""" % header_json_define)
f.close()
print "outputting libxl type implementations to %s" % impl
f = open(impl, "w")
f.write("""
/* DO NOT EDIT.
*
* This file is autogenerated by
* "%s"
*/
#include "libxl_osdeps.h"
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "libxl_internal.h"
#define LIBXL_DTOR_POISON 0xa5
""" % " ".join(sys.argv))
for ty in [t for t in types if t.dispose_fn is not None and t.autogenerate_dispose_fn]:
f.write("void %s(%s)\n" % (ty.dispose_fn, ty.make_arg("p")))
f.write("{\n")
f.write(libxl_C_type_dispose(ty, "p"))
f.write(" memset(p, LIBXL_DTOR_POISON, sizeof(*p));\n")
f.write("}\n")
f.write("\n")
for ty in [t for t in types if t.copy_fn and t.autogenerate_copy_fn]:
f.write("void %s(libxl_ctx *ctx, %s, %s)\n" % (ty.copy_fn,
ty.make_arg("dst", passby=idl.PASS_BY_REFERENCE),
ty.make_arg("src", passby=idl.PASS_BY_REFERENCE)))
f.write("{\n")
f.write(libxl_C_type_copy(ty, "dst", "src"))
f.write("}\n")
f.write("\n")
for ty in [t for t in types if t.init_fn is not None and t.autogenerate_init_fn]:
f.write(libxl_C_type_init(ty))
for field in libxl_init_members(ty):
f.write(libxl_C_type_member_init(ty, field))
for ty in [t for t in types if isinstance(t,idl.Enumeration)]:
f.write("const char *%s_to_string(%s e)\n" % (ty.typename, ty.typename))
f.write("{\n")
f.write(libxl_C_enum_to_string(ty, "e"))
f.write("}\n")
f.write("\n")
f.write(libxl_C_enum_strings(ty))
f.write("int %s_from_string(const char *s, %s *e)\n" % (ty.typename, ty.typename))
f.write("{\n")
f.write(libxl_C_enum_from_string(ty, "s", "e"))
f.write("}\n")
f.write("\n")
for ty in [t for t in types if t.json_gen_fn is not None]:
f.write("yajl_gen_status %s_gen_json(yajl_gen hand, %s)\n" % (ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
f.write("{\n")
f.write(libxl_C_type_gen_json(ty, "p"))
f.write("}\n")
f.write("\n")
f.write("char *%s_to_json(libxl_ctx *ctx, %s)\n" % (ty.typename, ty.make_arg("p")))
f.write("{\n")
f.write(libxl_C_type_to_json(ty, "p"))
f.write("}\n")
f.write("\n")
for ty in [t for t in types if t.json_parse_fn is not None]:
f.write("int %s_parse_json(libxl__gc *gc, const libxl__json_object *%s, %s)\n" % \
(ty.namespace + "_" + ty.rawname,"o",ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
f.write("{\n")
f.write(libxl_C_type_parse_json(ty, "o", "p"))
f.write("}\n")
f.write("\n")
f.write("int %s_from_json(libxl_ctx *ctx, %s, const char *s)\n" % (ty.typename, ty.make_arg("p", passby=idl.PASS_BY_REFERENCE)))
f.write("{\n")
if not isinstance(ty, idl.Enumeration):
f.write(" %s_init(p);\n" % ty.typename)
f.write(libxl_C_type_from_json(ty, "p", "s"))
f.write("}\n")
f.write("\n")
f.close()
| gpl-2.0 |
normanyahq/Parameterized-Remote-Shell-Execution-Service | server.py | 1 | 1415 | from flask import Flask, request
from subprocess import Popen, PIPE
import json
app = Flask(__name__)
HelpMessage = """
Usage:
POST command to this URL with following payload:
{"file": "...", args:[...]}
We are using this format to keep it the same with NodeJs spawnSync
Example:
{"file": "ls", args: ["-l", "-a"]}
Test with curl:
curl -X POST -H "Content-type: application/json" --data '{"file": "ls", "args":["-a", "-l"]}' localhost:41414
"""
@app.route("/", methods=["POST", "GET"])
def commandExecutor():
if request.method == "GET":
return HelpMessage
elif request.method == "POST":
commandObject = request.get_json()
print ('Command Object: {}'.format(commandObject))
process = Popen([commandObject["file"]] + commandObject["args"],
stdin=PIPE,
stdout=PIPE,
stderr=PIPE)
(stdout, stderr) = process.communicate(input=commandObject.get("input", "").encode('utf-8'))
result = json.dumps({ "stdout": stdout,
"stderr": stderr,
"exit_code": process.returncode,
"error": process.returncode!=0})
print ("\tstdout: {}".format(stdout))
if stderr:
print ("\tstderr: {}".format(stderr))
print ("\tresult: {}".format(result))
return result
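# Hedged usage sketch (not part of the service itself): assuming the app is
# served on localhost:41414 as in the curl example above, a Python client
# could call it like this. The host/port and the app.run() invocation are
# assumptions taken from the help text, not from this file.
#
#   import requests
#   payload = {"file": "ls", "args": ["-a", "-l"], "input": ""}
#   resp = requests.post("http://localhost:41414/", json=payload)
#   print(resp.json()["stdout"])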
| mit |
vijayendrabvs/ssl-neutron | neutron/plugins/openvswitch/ovs_models_v2.py | 7 | 3864 | # Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.schema import UniqueConstraint
from neutron.db import models_v2
from neutron.db.models_v2 import model_base
from sqlalchemy import orm
class VlanAllocation(model_base.BASEV2):
"""Represents allocation state of vlan_id on physical network."""
__tablename__ = 'ovs_vlan_allocations'
physical_network = Column(String(64), nullable=False, primary_key=True)
vlan_id = Column(Integer, nullable=False, primary_key=True,
autoincrement=False)
allocated = Column(Boolean, nullable=False)
def __init__(self, physical_network, vlan_id):
self.physical_network = physical_network
self.vlan_id = vlan_id
self.allocated = False
def __repr__(self):
return "<VlanAllocation(%s,%d,%s)>" % (self.physical_network,
self.vlan_id, self.allocated)
class TunnelAllocation(model_base.BASEV2):
"""Represents allocation state of tunnel_id."""
__tablename__ = 'ovs_tunnel_allocations'
tunnel_id = Column(Integer, nullable=False, primary_key=True,
autoincrement=False)
allocated = Column(Boolean, nullable=False)
def __init__(self, tunnel_id):
self.tunnel_id = tunnel_id
self.allocated = False
def __repr__(self):
return "<TunnelAllocation(%d,%s)>" % (self.tunnel_id, self.allocated)
class NetworkBinding(model_base.BASEV2):
"""Represents binding of virtual network to physical realization."""
__tablename__ = 'ovs_network_bindings'
network_id = Column(String(36),
ForeignKey('networks.id', ondelete="CASCADE"),
primary_key=True)
# 'gre', 'vlan', 'flat', 'local'
network_type = Column(String(32), nullable=False)
physical_network = Column(String(64))
segmentation_id = Column(Integer) # tunnel_id or vlan_id
network = orm.relationship(
models_v2.Network,
backref=orm.backref("binding", lazy='joined',
uselist=False, cascade='delete'))
def __init__(self, network_id, network_type, physical_network,
segmentation_id):
self.network_id = network_id
self.network_type = network_type
self.physical_network = physical_network
self.segmentation_id = segmentation_id
def __repr__(self):
return "<NetworkBinding(%s,%s,%s,%d)>" % (self.network_id,
self.network_type,
self.physical_network,
self.segmentation_id)
class TunnelEndpoint(model_base.BASEV2):
"""Represents tunnel endpoint in RPC mode."""
__tablename__ = 'ovs_tunnel_endpoints'
__table_args__ = (
UniqueConstraint('id', name='uniq_ovs_tunnel_endpoints0id'),
)
ip_address = Column(String(64), primary_key=True)
id = Column(Integer, nullable=False)
def __init__(self, ip_address, id):
self.ip_address = ip_address
self.id = id
def __repr__(self):
return "<TunnelEndpoint(%s,%s)>" % (self.ip_address, self.id)
| apache-2.0 |
trondhindenes/ansible-modules-core | network/nxos/nxos_snmp_contact.py | 20 | 11922 | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
DOCUMENTATION = '''
---
module: nxos_snmp_contact
version_added: "2.2"
short_description: Manages SNMP contact info.
description:
- Manages SNMP contact information.
extends_documentation_fragment: nxos
author:
- Jason Edelman (@jedelman8)
- Gabriele Gerbino (@GGabriele)
notes:
- C(state=absent) removes the contact configuration if it is configured.
options:
contact:
description:
- Contact information.
required: true
state:
description:
- Manage the state of the resource.
required: true
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
# ensure snmp contact is configured
- nxos_snmp_contact:
contact: Test
state: present
host: {{ inventory_hostname }}
username: {{ un }}
password: {{ pwd }}
'''
RETURN = '''
proposed:
description: k/v pairs of parameters passed into module
returned: always
type: dict
sample: {"contact": "New_Test"}
existing:
description: k/v pairs of existing snmp contact
type: dict
sample: {"contact": "Test"}
end_state:
description: k/v pairs of snmp contact after module execution
returned: always
type: dict
sample: {"contact": "New_Test"}
updates:
description: commands sent to the device
returned: always
type: list
sample: ["snmp-server contact New_Test"]
changed:
description: check to see if a change was made on the device
returned: always
type: boolean
sample: true
'''
import json
# COMMON CODE FOR MIGRATION
import re
from ansible.module_utils.basic import get_exception
from ansible.module_utils.netcfg import NetworkConfig, ConfigLine
from ansible.module_utils.shell import ShellError
try:
from ansible.module_utils.nxos import get_module
except ImportError:
from ansible.module_utils.nxos import NetworkModule
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
class CustomNetworkConfig(NetworkConfig):
def expand_section(self, configobj, S=None):
if S is None:
S = list()
S.append(configobj)
for child in configobj.children:
if child in S:
continue
self.expand_section(child, S)
return S
def get_object(self, path):
for item in self.items:
if item.text == path[-1]:
parents = [p.text for p in item.parents]
if parents == path[:-1]:
return item
def to_block(self, section):
return '\n'.join([item.raw for item in section])
def get_section(self, path):
try:
section = self.get_section_objects(path)
return self.to_block(section)
except ValueError:
return list()
def get_section_objects(self, path):
if not isinstance(path, list):
path = [path]
obj = self.get_object(path)
if not obj:
raise ValueError('path does not exist in config')
return self.expand_section(obj)
def add(self, lines, parents=None):
"""Adds one or lines of configuration
"""
ancestors = list()
offset = 0
obj = None
## global config command
if not parents:
for line in to_list(lines):
item = ConfigLine(line)
item.raw = line
if item not in self.items:
self.items.append(item)
else:
for index, p in enumerate(parents):
try:
i = index + 1
obj = self.get_section_objects(parents[:i])[0]
ancestors.append(obj)
except ValueError:
# add parent to config
offset = index * self.indent
obj = ConfigLine(p)
obj.raw = p.rjust(len(p) + offset)
if ancestors:
obj.parents = list(ancestors)
ancestors[-1].children.append(obj)
self.items.append(obj)
ancestors.append(obj)
# add child objects
for line in to_list(lines):
# check if child already exists
for child in ancestors[-1].children:
if child.text == line:
break
else:
offset = len(parents) * self.indent
item = ConfigLine(line)
item.raw = line.rjust(len(line) + offset)
item.parents = ancestors
ancestors[-1].children.append(item)
self.items.append(item)
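# Hedged example (illustrative values, not taken from a real device config):
# add() appends global commands when no parents are given, and nests commands
# under their parent sections otherwise, creating missing parents on the way:
#
#   cfg = CustomNetworkConfig(indent=2)
#   cfg.add(['hostname switch01'])                      # global command
#   cfg.add(['description uplink'], parents=['interface Ethernet1/1'])
#   # -> 'interface Ethernet1/1' is created if absent, with the description
#   #    indented beneath it.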
def get_network_module(**kwargs):
try:
return get_module(**kwargs)
except NameError:
return NetworkModule(**kwargs)
def get_config(module, include_defaults=False):
config = module.params['config']
if not config:
try:
config = module.get_config()
except AttributeError:
defaults = module.params['include_defaults']
config = module.config.get_config(include_defaults=defaults)
return CustomNetworkConfig(indent=2, contents=config)
def load_config(module, candidate):
config = get_config(module)
commands = candidate.difference(config)
commands = [str(c).strip() for c in commands]
save_config = module.params['save']
result = dict(changed=False)
if commands:
if not module.check_mode:
try:
module.configure(commands)
except AttributeError:
module.config(commands)
if save_config:
try:
module.config.save_config()
except AttributeError:
module.execute(['copy running-config startup-config'])
result['changed'] = True
result['updates'] = commands
return result
# END OF COMMON CODE
def execute_config_command(commands, module):
try:
module.configure(commands)
except ShellError:
clie = get_exception()
module.fail_json(msg='Error sending CLI commands',
error=str(clie), commands=commands)
except AttributeError:
try:
commands.insert(0, 'configure')
module.cli.add_commands(commands, output='config')
module.cli.run_commands()
except ShellError:
clie = get_exception()
module.fail_json(msg='Error sending CLI commands',
error=str(clie), commands=commands)
def get_cli_body_ssh(command, response, module):
"""Get response for when transport=cli. This is kind of a hack and mainly
needed because these modules were originally written for NX-API. And
not every command supports "| json" when using cli/ssh. As such, we assume
if | json returns an XML string, it is a valid command, but that the
resource doesn't exist yet. Instead, the output will be a raw string
when issuing commands containing 'show run'.
"""
if 'xml' in response[0]:
body = []
elif 'show run' in command:
body = response
else:
try:
body = [json.loads(response[0])]
except ValueError:
module.fail_json(msg='Command does not support JSON output',
command=command)
return body
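# Hedged example (assumed responses, not captured from a device): the helper
# above distinguishes three cases:
#
#   get_cli_body_ssh('show snmp | json', ['<?xml ...>'], module)   # -> []
#   get_cli_body_ssh('show run snmp', ['snmp-server ...'], module) # -> the raw response
#   get_cli_body_ssh('show snmp | json', ['{"a": 1}'], module)     # -> [{'a': 1}]
#
# Anything else that is not valid JSON makes the module fail with
# 'Command does not support JSON output'.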
def execute_show(cmds, module, command_type=None):
command_type_map = {
'cli_show': 'json',
'cli_show_ascii': 'text'
}
try:
if command_type:
response = module.execute(cmds, command_type=command_type)
else:
response = module.execute(cmds)
except ShellError:
clie = get_exception()
module.fail_json(msg='Error sending {0}'.format(cmds),
error=str(clie))
except AttributeError:
try:
if command_type:
command_type = command_type_map.get(command_type)
module.cli.add_commands(cmds, output=command_type)
response = module.cli.run_commands()
else:
module.cli.add_commands(cmds, raw=True)
response = module.cli.run_commands()
except ShellError:
clie = get_exception()
module.fail_json(msg='Error sending {0}'.format(cmds),
error=str(clie))
return response
def execute_show_command(command, module, command_type='cli_show'):
if module.params['transport'] == 'cli':
if 'show run' not in command:
command += ' | json'
cmds = [command]
response = execute_show(cmds, module)
body = get_cli_body_ssh(command, response, module)
elif module.params['transport'] == 'nxapi':
cmds = [command]
body = execute_show(cmds, module, command_type=command_type)
return body
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def get_snmp_contact(module):
contact = {}
contact_regex = '.*snmp-server\scontact\s(?P<contact>\S+).*'
command = 'show run snmp'
body = execute_show_command(command, module, command_type='cli_show_ascii')[0]
try:
match_contact = re.match(contact_regex, body, re.DOTALL)
group_contact = match_contact.groupdict()
contact['contact'] = group_contact["contact"]
except AttributeError:
contact = {}
return contact
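# Hedged example (assumed CLI output, not from a live switch): given a running
# config containing the line 'snmp-server contact Test', the regex above is
# meant to produce {'contact': 'Test'}:
#
#   m = re.match('.*snmp-server\scontact\s(?P<contact>\S+).*',
#                'snmp-server contact Test', re.DOTALL)
#   m.groupdict()   # -> {'contact': 'Test'}
#
# When no contact is configured, match() returns None and the AttributeError
# branch falls back to an empty dict.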
def main():
argument_spec = dict(
contact=dict(required=True, type='str'),
state=dict(choices=['absent', 'present'],
default='present')
)
module = get_network_module(argument_spec=argument_spec,
supports_check_mode=True)
contact = module.params['contact']
state = module.params['state']
existing = get_snmp_contact(module)
changed = False
proposed = dict(contact=contact)
end_state = existing
commands = []
if state == 'absent':
if existing and existing['contact'] == contact:
commands.append('no snmp-server contact')
elif state == 'present':
if not existing or existing['contact'] != contact:
commands.append('snmp-server contact {0}'.format(contact))
cmds = flatten_list(commands)
if cmds:
if module.check_mode:
module.exit_json(changed=True, commands=cmds)
else:
changed = True
execute_config_command(cmds, module)
end_state = get_snmp_contact(module)
if 'configure' in cmds:
cmds.pop(0)
results = {}
results['proposed'] = proposed
results['existing'] = existing
results['end_state'] = end_state
results['updates'] = cmds
results['changed'] = changed
module.exit_json(**results)
if __name__ == '__main__':
main()
| gpl-3.0 |
daniponi/django | tests/custom_pk/models.py | 282 | 1272 | # -*- coding: utf-8 -*-
"""
Using a custom primary key
By default, Django adds an ``"id"`` field to each model. But you can override
this behavior by explicitly adding ``primary_key=True`` to a field.
"""
from __future__ import unicode_literals
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
from .fields import MyAutoField
@python_2_unicode_compatible
class Employee(models.Model):
employee_code = models.IntegerField(primary_key=True, db_column='code')
first_name = models.CharField(max_length=20)
last_name = models.CharField(max_length=20)
class Meta:
ordering = ('last_name', 'first_name')
def __str__(self):
return "%s %s" % (self.first_name, self.last_name)
@python_2_unicode_compatible
class Business(models.Model):
name = models.CharField(max_length=20, primary_key=True)
employees = models.ManyToManyField(Employee)
class Meta:
verbose_name_plural = 'businesses'
def __str__(self):
return self.name
@python_2_unicode_compatible
class Bar(models.Model):
id = MyAutoField(primary_key=True, db_index=True)
def __str__(self):
return repr(self.pk)
class Foo(models.Model):
bar = models.ForeignKey(Bar, models.CASCADE)
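# Hedged usage sketch (illustrative, not part of the test models): with an
# explicit primary key the caller supplies the key value directly instead of
# relying on an autogenerated "id", and ``pk`` aliases the custom field:
#
#   frank = Employee.objects.create(employee_code=1234,
#                                   first_name="Frank", last_name="Jones")
#   Employee.objects.get(pk=1234) == frank   # -> True
#   Business.objects.create(name="Sears")    # CharField primary key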
| bsd-3-clause |
WillisXChen/django-oscar | oscar/lib/python2.7/site-packages/django/contrib/gis/geos/base.py | 197 | 1660 | from ctypes import c_void_p
from django.contrib.gis.geos.error import GEOSException
# Trying to import GDAL libraries, if available. Have to place in
# try/except since this package may be used outside GeoDjango.
try:
from django.contrib.gis import gdal
except ImportError:
# A 'dummy' gdal module.
class GDALInfo(object):
HAS_GDAL = False
gdal = GDALInfo()
# NumPy supported?
try:
import numpy
except ImportError:
numpy = False
class GEOSBase(object):
"""
Base object for GEOS objects that has a pointer access property
that controls access to the underlying C pointer.
"""
# Initially the pointer is NULL.
_ptr = None
# Default allowed pointer type.
ptr_type = c_void_p
# Pointer access property.
def _get_ptr(self):
# Raise an exception if the pointer isn't valid -- we don't
# want to be passing NULL pointers to routines; that's very bad.
if self._ptr:
return self._ptr
else:
raise GEOSException('NULL GEOS %s pointer encountered.' % self.__class__.__name__)
def _set_ptr(self, ptr):
# Only allow the pointer to be set with pointers of the
# compatible type or None (NULL).
if ptr is None or isinstance(ptr, self.ptr_type):
self._ptr = ptr
else:
raise TypeError('Incompatible pointer type')
# Property for controlling access to the GEOS object pointers. Using
# this raises an exception when the pointer is NULL, thus preventing
# the C library from attempting to access an invalid memory location.
ptr = property(_get_ptr, _set_ptr)
| bsd-3-clause |
kwilliams-mo/iris | lib/iris/tests/test_plot.py | 1 | 32122 | # (C) British Crown Copyright 2010 - 2013, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
# import iris tests first so that some things can be initialised before
# importing anything else
import iris.tests as tests
from functools import wraps
import types
import warnings
import matplotlib.pyplot as plt
import numpy as np
import iris
import iris.coords as coords
import iris.plot as iplt
import iris.quickplot as qplt
import iris.symbols
import iris.tests.stock
import iris.tests.test_mapping as test_mapping
def simple_cube():
cube = iris.tests.stock.realistic_4d()
cube = cube[:, 0, 0, :]
cube.coord('time').guess_bounds()
return cube
class TestSimple(tests.GraphicsTest):
def test_points(self):
cube = simple_cube()
qplt.contourf(cube)
self.check_graphic()
def test_bounds(self):
cube = simple_cube()
qplt.pcolor(cube)
self.check_graphic()
class TestMissingCoord(tests.GraphicsTest):
def _check(self, cube):
qplt.contourf(cube)
self.check_graphic()
qplt.pcolor(cube)
self.check_graphic()
def test_no_u(self):
cube = simple_cube()
cube.remove_coord('grid_longitude')
self._check(cube)
def test_no_v(self):
cube = simple_cube()
cube.remove_coord('time')
self._check(cube)
def test_none(self):
cube = simple_cube()
cube.remove_coord('grid_longitude')
cube.remove_coord('time')
self._check(cube)
@iris.tests.skip_data
class TestMissingCS(tests.GraphicsTest):
@iris.tests.skip_data
def test_missing_cs(self):
cube = tests.stock.simple_pp()
cube.coord("latitude").coord_system = None
cube.coord("longitude").coord_system = None
qplt.contourf(cube)
qplt.plt.gca().coastlines()
self.check_graphic()
class TestHybridHeight(tests.GraphicsTest):
def setUp(self):
self.cube = iris.tests.stock.realistic_4d()[0, :15, 0, :]
def _check(self, plt_method, test_altitude=True):
plt_method(self.cube)
self.check_graphic()
plt_method(self.cube, coords=['level_height', 'grid_longitude'])
self.check_graphic()
plt_method(self.cube, coords=['grid_longitude', 'level_height'])
self.check_graphic()
if test_altitude:
plt_method(self.cube, coords=['grid_longitude', 'altitude'])
self.check_graphic()
plt_method(self.cube, coords=['altitude', 'grid_longitude'])
self.check_graphic()
def test_points(self):
self._check(qplt.contourf)
def test_bounds(self):
self._check(qplt.pcolor, test_altitude=False)
def test_orography(self):
qplt.contourf(self.cube)
iplt.orography_at_points(self.cube)
iplt.points(self.cube)
self.check_graphic()
coords = ['altitude', 'grid_longitude']
qplt.contourf(self.cube, coords=coords)
iplt.orography_at_points(self.cube, coords=coords)
iplt.points(self.cube, coords=coords)
self.check_graphic()
# TODO: Test bounds once they are supported.
with self.assertRaises(NotImplementedError):
qplt.pcolor(self.cube)
iplt.orography_at_bounds(self.cube)
iplt.outline(self.cube)
self.check_graphic()
class Test1dPlotMultiArgs(tests.GraphicsTest):
# tests for iris.plot using multi-argument calling convention
def setUp(self):
self.cube1d = _load_4d_testcube()[0, :, 0, 0]
self.draw_method = iplt.plot
def test_cube(self):
# just plot a cube against its dim coord
self.draw_method(self.cube1d) # altitude vs temp
self.check_graphic()
def test_coord(self):
# plot the altitude coordinate
self.draw_method(self.cube1d.coord('altitude'))
self.check_graphic()
def test_coord_cube(self):
# plot temperature against sigma
self.draw_method(self.cube1d.coord('sigma'), self.cube1d)
self.check_graphic()
def test_cube_coord(self):
# plot a vertical profile of temperature
self.draw_method(self.cube1d, self.cube1d.coord('altitude'))
self.check_graphic()
def test_coord_coord(self):
# plot two coordinates that are not mappable
self.draw_method(self.cube1d.coord('sigma'),
self.cube1d.coord('altitude'))
self.check_graphic()
def test_coord_coord_map(self):
# plot lat-lon aux coordinates of a trajectory, which draws a map
lon = iris.coords.AuxCoord([0, 5, 10, 15, 20, 25, 30, 35, 40, 45],
standard_name='longitude',
units='degrees_north')
lat = iris.coords.AuxCoord([45, 55, 50, 60, 55, 65, 60, 70, 65, 75],
standard_name='latitude',
units='degrees_north')
self.draw_method(lon, lat)
plt.gca().coastlines()
self.check_graphic()
def test_cube_cube(self):
# plot two phenomena against each other, in this case just dummy data
cube1 = self.cube1d.copy()
cube2 = self.cube1d.copy()
cube1.rename('some phenomenon')
cube2.rename('some other phenomenon')
cube1.units = iris.unit.Unit('no_unit')
cube2.units = iris.unit.Unit('no_unit')
cube1.data[:] = np.linspace(0, 1, 7)
cube2.data[:] = np.exp(cube1.data)
self.draw_method(cube1, cube2)
self.check_graphic()
def test_incompatible_objects(self):
# incompatible objects (not the same length) should raise an error
with self.assertRaises(ValueError):
self.draw_method(self.cube1d.coord('time'), (self.cube1d))
def test_multidimensional(self):
# multidimensional cubes are not allowed
cube = _load_4d_testcube()[0, :, :, 0]
with self.assertRaises(ValueError):
self.draw_method(cube)
def test_not_cube_or_coord(self):
# inputs must be cubes or coordinates, otherwise an error should be
# raised
xdim = np.arange(self.cube1d.shape[0])
with self.assertRaises(TypeError):
self.draw_method(xdim, self.cube1d)
def test_coords_deprecated(self):
# ensure a warning is raised if the old coords keyword argument is
# used, and make sure the plot produced is consistent with the old
# interface
msg = 'Missing deprecation warning for coords keyword.'
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
self.draw_method(self.cube1d, coords=['sigma'])
self.assertEqual(len(w), 1, msg)
self.check_graphic()
def test_coords_deprecation_too_many(self):
# in deprecation mode, too many coords is an error
with self.assertRaises(ValueError):
self.draw_method(self.cube1d, coords=['sigma', 'sigma'])
def test_coords_deprecation_invalid_span(self):
# in deprecation mode, a coordinate that doesn't span data is an error
with self.assertRaises(ValueError):
self.draw_method(self.cube1d, coords=['time'])
class Test1dQuickplotPlotMultiArgs(Test1dPlotMultiArgs):
# tests for iris.plot using multi-argument calling convention
def setUp(self):
self.cube1d = _load_4d_testcube()[0, :, 0, 0]
self.draw_method = qplt.plot
@tests.skip_data
class Test1dScatter(tests.GraphicsTest):
def setUp(self):
self.cube = iris.load_cube(
tests.get_data_path(('NAME', 'NAMEIII_trajectory.txt')),
'Temperature')
self.draw_method = iplt.scatter
def test_coord_coord(self):
x = self.cube.coord('longitude')
y = self.cube.coord('height')
c = self.cube.data
self.draw_method(x, y, c=c, edgecolor='none')
self.check_graphic()
def test_coord_coord_map(self):
x = self.cube.coord('longitude')
y = self.cube.coord('latitude')
c = self.cube.data
self.draw_method(x, y, c=c, edgecolor='none')
plt.gca().coastlines()
self.check_graphic()
def test_coord_cube(self):
x = self.cube.coord('latitude')
y = self.cube
c = self.cube.coord('Travel Time').points
self.draw_method(x, y, c=c, edgecolor='none')
self.check_graphic()
def test_cube_coord(self):
x = self.cube
y = self.cube.coord('height')
c = self.cube.coord('Travel Time').points
self.draw_method(x, y, c=c, edgecolor='none')
self.check_graphic()
def test_cube_cube(self):
x = iris.load_cube(
tests.get_data_path(('NAME', 'NAMEIII_trajectory.txt')),
'Rel Humidity')
y = self.cube
c = self.cube.coord('Travel Time').points
self.draw_method(x, y, c=c, edgecolor='none')
self.check_graphic()
def test_incompatible_objects(self):
# cubes/coordinates of different sizes cannot be plotted
x = self.cube
y = self.cube.coord('height')[:-1]
with self.assertRaises(ValueError):
self.draw_method(x, y)
def test_multidimensional(self):
# multidimensional cubes/coordinates are not allowed
x = _load_4d_testcube()[0, :, :, 0]
y = x.coord('model_level_number')
with self.assertRaises(ValueError):
self.draw_method(x, y)
def test_not_cube_or_coord(self):
# inputs must be cubes or coordinates
x = np.arange(self.cube.shape[0])
y = self.cube
with self.assertRaises(TypeError):
self.draw_method(x, y)
@tests.skip_data
class Test1dQuickplotScatter(Test1dScatter):
def setUp(self):
self.cube = iris.load_cube(
tests.get_data_path(('NAME', 'NAMEIII_trajectory.txt')),
'Temperature')
self.draw_method = qplt.scatter
@iris.tests.skip_data
class TestAttributePositive(tests.GraphicsTest):
def test_1d_positive_up(self):
path = tests.get_data_path(('NetCDF', 'ORCA2', 'votemper.nc'))
cube = iris.load_cube(path)
qplt.plot(cube.coord('depth'), cube[0, :, 60, 80])
self.check_graphic()
def test_1d_positive_down(self):
path = tests.get_data_path(('NetCDF', 'ORCA2', 'votemper.nc'))
cube = iris.load_cube(path)
qplt.plot(cube[0, :, 60, 80], cube.coord('depth'))
self.check_graphic()
def test_2d_positive_up(self):
path = tests.get_data_path(('NetCDF', 'testing',
'small_theta_colpex.nc'))
cube = iris.load_cube(path)[0, :, 42, :]
qplt.pcolormesh(cube)
self.check_graphic()
def test_2d_positive_down(self):
path = tests.get_data_path(('NetCDF', 'ORCA2', 'votemper.nc'))
cube = iris.load_cube(path)[0, :, 42, :]
qplt.pcolormesh(cube)
self.check_graphic()
# Caches _load_4d_testcube so subsequent calls are faster
def cache(fn, cache={}):
def inner(*args, **kwargs):
key = fn.__name__
if key not in cache:
cache[key] = fn(*args, **kwargs)
return cache[key]
return inner
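# Hedged note (illustrative): the cache above keys on the wrapped function's
# __name__, so each decorated loader runs once per process and later calls
# return the very same object:
#
#   @cache
#   def _load_expensive():
#       return make_cube()          # hypothetical expensive load
#   _load_expensive() is _load_expensive()   # -> True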
@cache
def _load_4d_testcube():
# Load example 4d data (TZYX).
test_cube = iris.tests.stock.realistic_4d()
# Replace forecast_period coord with a multi-valued version.
time_coord = test_cube.coord('time')
n_times = len(time_coord.points)
forecast_dims = test_cube.coord_dims(time_coord)
test_cube.remove_coord('forecast_period')
# Make up values (including bounds), to roughly match older testdata.
point_values = np.linspace((1 + 1.0 / 6), 2.0, n_times)
point_uppers = point_values + (point_values[1] - point_values[0])
bound_values = np.column_stack([point_values, point_uppers])
# NOTE: this must be a DimCoord
# - an equivalent AuxCoord produces different plots.
new_forecast_coord = iris.coords.DimCoord(
points=point_values,
bounds=bound_values,
standard_name='forecast_period',
units=iris.unit.Unit('hours')
)
test_cube.add_aux_coord(new_forecast_coord, forecast_dims)
# Heavily reduce dimensions for faster testing.
# NOTE: this makes ZYX non-contiguous. Doesn't seem to matter for now.
test_cube = test_cube[:, ::10, ::10, ::10]
return test_cube
@cache
def _load_wind_no_bounds():
# Load the COLPEX data => TZYX
path = tests.get_data_path(('PP', 'COLPEX', 'small_eastward_wind.pp'))
wind = iris.load_cube(path, 'eastward_wind')
# Remove bounds from all coords that have them.
wind.coord('grid_latitude').bounds = None
wind.coord('grid_longitude').bounds = None
wind.coord('level_height').bounds = None
wind.coord('sigma').bounds = None
return wind[:, :, :50, :50]
def _time_series(src_cube):
# Until we have plotting support for multiple axes on the same dimension,
# remove the time coordinate and its axis.
cube = src_cube.copy()
cube.remove_coord('time')
return cube
def _date_series(src_cube):
# Until we have plotting support for multiple axes on the same dimension,
# remove the forecast_period coordinate and its axis.
cube = src_cube.copy()
cube.remove_coord('forecast_period')
return cube
class SliceMixin(object):
"""Mixin class providing tests for each 2-dimensional permutation of axes.
Requires self.draw_method to be the relevant plotting function,
and self.results to be a dictionary containing the desired test results."""
def test_yx(self):
cube = self.wind[0, 0, :, :]
self.draw_method(cube)
self.check_graphic()
def test_zx(self):
cube = self.wind[0, :, 0, :]
self.draw_method(cube)
self.check_graphic()
def test_tx(self):
cube = _time_series(self.wind[:, 0, 0, :])
self.draw_method(cube)
self.check_graphic()
def test_zy(self):
cube = self.wind[0, :, :, 0]
self.draw_method(cube)
self.check_graphic()
def test_ty(self):
cube = _time_series(self.wind[:, 0, :, 0])
self.draw_method(cube)
self.check_graphic()
def test_tz(self):
cube = _time_series(self.wind[:, :, 0, 0])
self.draw_method(cube)
self.check_graphic()
@iris.tests.skip_data
class TestContour(tests.GraphicsTest, SliceMixin):
"""Test the iris.plot.contour routine."""
def setUp(self):
self.wind = _load_4d_testcube()
self.draw_method = iplt.contour
@iris.tests.skip_data
class TestContourf(tests.GraphicsTest, SliceMixin):
"""Test the iris.plot.contourf routine."""
def setUp(self):
self.wind = _load_4d_testcube()
self.draw_method = iplt.contourf
@iris.tests.skip_data
class TestPcolor(tests.GraphicsTest, SliceMixin):
"""Test the iris.plot.pcolor routine."""
def setUp(self):
self.wind = _load_4d_testcube()
self.draw_method = iplt.pcolor
@iris.tests.skip_data
class TestPcolormesh(tests.GraphicsTest, SliceMixin):
"""Test the iris.plot.pcolormesh routine."""
def setUp(self):
self.wind = _load_4d_testcube()
self.draw_method = iplt.pcolormesh
def check_warnings(method):
"""
Decorator that adds a catch_warnings and filter to assert
the method being decorated issues a UserWarning.
"""
@wraps(method)
def decorated_method(self, *args, **kwargs):
# Force reset of iris.coords warnings registry to avoid suppression of
# repeated warnings. warnings.resetwarnings() does not do this.
if hasattr(coords, '__warningregistry__'):
coords.__warningregistry__.clear()
# Check that method raises warning.
with warnings.catch_warnings():
warnings.simplefilter("error")
with self.assertRaises(UserWarning):
return method(self, *args, **kwargs)
return decorated_method
def ignore_warnings(method):
"""
Decorator that adds a catch_warnings and filter to suppress
any warnings issued by the method being decorated.
"""
@wraps(method)
def decorated_method(self, *args, **kwargs):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return method(self, *args, **kwargs)
return decorated_method
class CheckForWarningsMetaclass(type):
"""
Metaclass that adds a further test for each base class test
that checks that each test raises a UserWarning. Each base
class test is then overridden to ignore warnings in order to
check the underlying functionality.
"""
def __new__(cls, name, bases, local):
def add_decorated_methods(attr_dict, target_dict, decorator):
for key, value in attr_dict.items():
if (isinstance(value, types.FunctionType) and
key.startswith('test')):
new_key = '_'.join((key, decorator.__name__))
if new_key not in target_dict:
wrapped = decorator(value)
wrapped.__name__ = new_key
target_dict[new_key] = wrapped
else:
raise RuntimeError('An attribute called {!r} '
'already exists.'.format(new_key))
def override_with_decorated_methods(attr_dict, target_dict,
decorator):
for key, value in attr_dict.items():
if (isinstance(value, types.FunctionType) and
key.startswith('test')):
target_dict[key] = decorator(value)
# Add decorated versions of base methods
# to check for warnings.
for base in bases:
add_decorated_methods(base.__dict__, local, check_warnings)
# Override base methods to ignore warnings.
for base in bases:
override_with_decorated_methods(base.__dict__, local,
ignore_warnings)
return type.__new__(cls, name, bases, local)
@iris.tests.skip_data
class TestPcolorNoBounds(tests.GraphicsTest, SliceMixin):
"""
Test the iris.plot.pcolor routine on a cube with coordinates
that have no bounds.
"""
__metaclass__ = CheckForWarningsMetaclass
def setUp(self):
self.wind = _load_wind_no_bounds()
self.draw_method = iplt.pcolor
@iris.tests.skip_data
class TestPcolormeshNoBounds(tests.GraphicsTest, SliceMixin):
"""
Test the iris.plot.pcolormesh routine on a cube with coordinates
that have no bounds.
"""
__metaclass__ = CheckForWarningsMetaclass
def setUp(self):
self.wind = _load_wind_no_bounds()
self.draw_method = iplt.pcolormesh
class Slice1dMixin(object):
"""Mixin class providing tests for each 1-dimensional permutation of axes.
Requires self.draw_method to be the relevant plotting function,
and self.results to be a dictionary containing the desired test results."""
def test_x(self):
cube = self.wind[0, 0, 0, :]
self.draw_method(cube)
self.check_graphic()
def test_y(self):
cube = self.wind[0, 0, :, 0]
self.draw_method(cube)
self.check_graphic()
def test_z(self):
cube = self.wind[0, :, 0, 0]
self.draw_method(cube)
self.check_graphic()
def test_t(self):
cube = _time_series(self.wind[:, 0, 0, 0])
self.draw_method(cube)
self.check_graphic()
def test_t_dates(self):
cube = _date_series(self.wind[:, 0, 0, 0])
self.draw_method(cube)
plt.gcf().autofmt_xdate()
plt.xlabel('Phenomenon time')
self.check_graphic()
@iris.tests.skip_data
class TestPlot(tests.GraphicsTest, Slice1dMixin):
"""Test the iris.plot.plot routine."""
def setUp(self):
self.wind = _load_4d_testcube()
self.draw_method = iplt.plot
@iris.tests.skip_data
class TestQuickplotPlot(tests.GraphicsTest, Slice1dMixin):
"""Test the iris.quickplot.plot routine."""
def setUp(self):
self.wind = _load_4d_testcube()
self.draw_method = qplt.plot
_load_cube_once_cache = {}
def load_cube_once(filename, constraint):
"""Same syntax as load_cube, but will only load a file once,
then cache the answer in a dictionary.
"""
global _load_cube_once_cache
key = (filename, str(constraint))
cube = _load_cube_once_cache.get(key, None)
if cube is None:
cube = iris.load_cube(filename, constraint)
_load_cube_once_cache[key] = cube
return cube
class LambdaStr(object):
"""Provides a callable function which has a sensible __repr__."""
def __init__(self, repr, lambda_fn):
self.repr = repr
self.lambda_fn = lambda_fn
def __call__(self, *args, **kwargs):
return self.lambda_fn(*args, **kwargs)
def __repr__(self):
return self.repr
@iris.tests.skip_data
class TestPlotCoordinatesGiven(tests.GraphicsTest):
def setUp(self):
filename = tests.get_data_path(('PP', 'COLPEX',
'theta_and_orog_subset.pp'))
self.cube = load_cube_once(filename, 'air_potential_temperature')
self.draw_module = iris.plot
self.contourf = LambdaStr('iris.plot.contourf',
lambda cube, *args, **kwargs:
iris.plot.contourf(cube, *args, **kwargs))
self.contour = LambdaStr('iris.plot.contour',
lambda cube, *args, **kwargs:
iris.plot.contour(cube, *args, **kwargs))
self.points = LambdaStr('iris.plot.points',
lambda cube, *args, **kwargs:
iris.plot.points(cube, c=cube.data,
*args, **kwargs))
self.plot = LambdaStr('iris.plot.plot',
lambda cube, *args, **kwargs:
iris.plot.plot(cube, *args, **kwargs))
self.results = {'yx': ([self.contourf, ['grid_latitude',
'grid_longitude']],
[self.contourf, ['grid_longitude',
'grid_latitude']],
[self.contour, ['grid_latitude',
'grid_longitude']],
[self.contour, ['grid_longitude',
'grid_latitude']],
[self.points, ['grid_latitude',
'grid_longitude']],
[self.points, ['grid_longitude',
'grid_latitude']],),
'zx': ([self.contourf, ['model_level_number',
'grid_longitude']],
[self.contourf, ['grid_longitude',
'model_level_number']],
[self.contour, ['model_level_number',
'grid_longitude']],
[self.contour, ['grid_longitude',
'model_level_number']],
[self.points, ['model_level_number',
'grid_longitude']],
[self.points, ['grid_longitude',
'model_level_number']],),
'tx': ([self.contourf, ['time', 'grid_longitude']],
[self.contourf, ['grid_longitude', 'time']],
[self.contour, ['time', 'grid_longitude']],
[self.contour, ['grid_longitude', 'time']],
[self.points, ['time', 'grid_longitude']],
[self.points, ['grid_longitude', 'time']],),
'x': ([self.plot, ['grid_longitude']],),
'y': ([self.plot, ['grid_latitude']],)
}
def draw(self, draw_method, *args, **kwargs):
draw_fn = getattr(self.draw_module, draw_method)
draw_fn(*args, **kwargs)
self.check_graphic()
def run_tests(self, cube, results):
for draw_method, coords in results:
draw_method(cube, coords=coords)
try:
self.check_graphic()
except AssertionError as err:
self.fail('Draw method %r failed with coords: %r. '
'Assertion message: %s' % (draw_method, coords, err))
def run_tests_1d(self, cube, results):
# there is a different calling convention for 1d plots
for draw_method, coords in results:
draw_method(cube.coord(coords[0]), cube)
try:
self.check_graphic()
except AssertionError as err:
msg = 'Draw method {!r} failed with coords: {!r}. ' \
'Assertion message: {!s}'
self.fail(msg.format(draw_method, coords, err))
def test_yx(self):
test_cube = self.cube[0, 0, :, :]
self.run_tests(test_cube, self.results['yx'])
def test_zx(self):
test_cube = self.cube[0, :15, 0, :]
self.run_tests(test_cube, self.results['zx'])
def test_tx(self):
test_cube = self.cube[:, 0, 0, :]
self.run_tests(test_cube, self.results['tx'])
def test_x(self):
test_cube = self.cube[0, 0, 0, :]
self.run_tests_1d(test_cube, self.results['x'])
def test_y(self):
test_cube = self.cube[0, 0, :, 0]
self.run_tests_1d(test_cube, self.results['y'])
def test_badcoords(self):
cube = self.cube[0, 0, :, :]
draw_fn = getattr(self.draw_module, 'contourf')
self.assertRaises(ValueError, draw_fn, cube,
coords=['grid_longitude', 'grid_longitude'])
self.assertRaises(ValueError, draw_fn, cube,
coords=['grid_longitude', 'grid_longitude',
'grid_latitude'])
self.assertRaises(iris.exceptions.CoordinateNotFoundError, draw_fn,
cube, coords=['grid_longitude', 'wibble'])
self.assertRaises(ValueError, draw_fn, cube, coords=[])
self.assertRaises(ValueError, draw_fn, cube,
coords=[cube.coord('grid_longitude'),
cube.coord('grid_longitude')])
self.assertRaises(ValueError, draw_fn, cube,
coords=[cube.coord('grid_longitude'),
cube.coord('grid_longitude'),
cube.coord('grid_longitude')])
def test_non_cube_coordinate(self):
cube = self.cube[0, :, :, 0]
pts = -100 + np.arange(cube.shape[1]) * 13
x = coords.DimCoord(pts, standard_name='model_level_number',
attributes={'positive': 'up'})
self.draw('contourf', cube, coords=['grid_latitude', x])
@iris.tests.skip_data
class TestPlotDimAndAuxCoordsKwarg(tests.GraphicsTest):
def setUp(self):
filename = tests.get_data_path(('NetCDF', 'rotated', 'xy',
'rotPole_landAreaFraction.nc'))
self.cube = iris.load_cube(filename)
def test_default(self):
iplt.contourf(self.cube)
plt.gca().coastlines()
self.check_graphic()
def test_coords(self):
# Pass in dimension coords.
rlat = self.cube.coord('grid_latitude')
rlon = self.cube.coord('grid_longitude')
iplt.contourf(self.cube, coords=[rlon, rlat])
plt.gca().coastlines()
self.check_graphic()
# Pass in auxiliary coords.
lat = self.cube.coord('latitude')
lon = self.cube.coord('longitude')
iplt.contourf(self.cube, coords=[lon, lat])
plt.gca().coastlines()
self.check_graphic()
def test_coord_names(self):
# Pass in names of dimension coords.
iplt.contourf(self.cube, coords=['grid_longitude', 'grid_latitude'])
plt.gca().coastlines()
self.check_graphic()
# Pass in names of auxiliary coords.
iplt.contourf(self.cube, coords=['longitude', 'latitude'])
plt.gca().coastlines()
self.check_graphic()
def test_yx_order(self):
# Do not attempt to draw coastlines as it is not a map.
iplt.contourf(self.cube, coords=['grid_latitude', 'grid_longitude'])
self.check_graphic()
iplt.contourf(self.cube, coords=['latitude', 'longitude'])
self.check_graphic()
class TestSymbols(tests.GraphicsTest):
def test_cloud_cover(self):
iplt.symbols(range(10), [0] * 10, [iris.symbols.CLOUD_COVER[i]
for i in range(10)], 0.375)
self.check_graphic()
class TestPlottingExceptions(tests.IrisTest):
def setUp(self):
self.bounded_cube = tests.stock.lat_lon_cube()
self.bounded_cube.coord("latitude").guess_bounds()
self.bounded_cube.coord("longitude").guess_bounds()
def test_boundmode_multidim(self):
# Test exception translation.
# We can't get contiguous bounded grids from multi-d coords.
cube = self.bounded_cube
cube.remove_coord("latitude")
cube.add_aux_coord(coords.AuxCoord(points=cube.data,
standard_name='latitude',
units='degrees'), [0, 1])
with self.assertRaises(ValueError):
iplt.pcolormesh(cube, coords=['longitude', 'latitude'])
def test_boundmode_4bounds(self):
# Test exception translation.
# We can only get contiguous bounded grids with 2 bounds per point.
cube = self.bounded_cube
lat = coords.AuxCoord.from_coord(cube.coord("latitude"))
lat.bounds = np.array([lat.points, lat.points + 1,
lat.points + 2, lat.points + 3]).transpose()
cube.remove_coord("latitude")
cube.add_aux_coord(lat, 0)
with self.assertRaises(ValueError):
iplt.pcolormesh(cube, coords=['longitude', 'latitude'])
def test_different_coord_systems(self):
cube = self.bounded_cube
lat = cube.coord('latitude')
lon = cube.coord('longitude')
lat.coord_system = iris.coord_systems.GeogCS(7000000)
lon.coord_system = iris.coord_systems.GeogCS(7000001)
with self.assertRaises(ValueError):
iplt.pcolormesh(cube, coords=['longitude', 'latitude'])
@iris.tests.skip_data
class TestPlotOtherCoordSystems(tests.GraphicsTest):
def test_plot_tmerc(self):
filename = tests.get_data_path(('NetCDF', 'transverse_mercator',
'tmean_1910_1910.nc'))
self.cube = iris.load_cube(filename)
iplt.pcolormesh(self.cube[0])
plt.gca().coastlines()
self.check_graphic()
if __name__ == "__main__":
tests.main()
| gpl-3.0 |
google/orchestra | orchestra/google/marketing_platform/operators/display_video_360.py | 1 | 18902 | #
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
import csv
import os
from random import randint
import tempfile
import time
from urllib.parse import urlparse
import requests
from airflow import models
from airflow.contrib.hooks.gcs_hook import GoogleCloudStorageHook
from airflow.contrib.hooks.bigquery_hook import BigQueryHook
from airflow.contrib.hooks.bigquery_hook import BigQueryBaseCursor
from airflow.models import BaseOperator
from orchestra.google.marketing_platform.hooks.display_video_360 import (
GoogleDisplayVideo360Hook
)
from orchestra.google.marketing_platform.utils import erf_utils
from orchestra.google.marketing_platform.utils.schema.sdf import (
SDF_VERSIONED_SCHEMA_TYPES
)
logger = logging.getLogger(__name__)
class GoogleDisplayVideo360CreateReportOperator(BaseOperator):
"""Creates and runs a new Display & Video 360 query.
Attributes:
report: The query body to create the report from. (templated)
Can receive a json string representing the report or reference to a
template file. Template references are recognized by a string ending in
'.json'.
api_version: The DV360 API version.
gcp_conn_id: The connection ID to use when fetching connection info.
delegate_to: The account to impersonate, if any.
XComs:
query_id: The query ID for the report created.
"""
template_fields = ['params', 'report']
template_ext = ['.json']
def __init__(self,
report,
api_version='v1',
gcp_conn_id='google_cloud_default',
delegate_to=None,
*args,
**kwargs):
super(GoogleDisplayVideo360CreateReportOperator, self).__init__(*args, **kwargs)
self.report = report
self.api_version = api_version
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.hook = None
def execute(self, context):
if self.hook is None:
self.hook = GoogleDisplayVideo360Hook(
api_version=self.api_version,
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to)
report_body = json.loads(self.report)
request = self.hook.get_service().queries().createquery(body=report_body)
response = request.execute()
context['task_instance'].xcom_push('query_id', response['queryId'])
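# Hedged usage sketch (assumed DAG wiring; task ids and file names are
# illustrative): `report` may be an inline JSON string or a path ending in
# '.json' (rendered via template_ext), and the created query id is pushed to
# XCom under 'query_id' for downstream tasks:
#
#   create_report = GoogleDisplayVideo360CreateReportOperator(
#       task_id='create_report',
#       report='report_template.json',
#       gcp_conn_id='google_cloud_default')
#   # downstream template:
#   #   "{{ task_instance.xcom_pull('create_report', key='query_id') }}"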
class GoogleDisplayVideo360RunReportOperator(BaseOperator):
"""Runs a stored query to generate a report.
Attributes:
api_version: The DV360 API version.
query_id: The ID of the query to run. (templated)
gcp_conn_id: The connection ID to use when fetching connection info.
delegate_to: The account to impersonate, if any.
"""
template_fields = ['query_id']
def __init__(self,
query_id,
api_version='v1',
gcp_conn_id='google_cloud_default',
delegate_to=None,
*args,
**kwargs):
super(GoogleDisplayVideo360RunReportOperator, self).__init__(*args, **kwargs)
self.api_version = api_version
self.conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.service = None
self.query_id = query_id
def execute(self, context):
if self.service is None:
hook = GoogleDisplayVideo360Hook(
api_version=self.api_version,
gcp_conn_id=self.conn_id,
delegate_to=self.delegate_to
)
self.service = hook.get_service()
request = self.service.queries().runquery(
queryId=self.query_id, body={})
request.execute()
class GoogleDisplayVideo360DownloadReportOperator(BaseOperator):
"""Downloads a Display & Video 360 report into Google Cloud Storage.
Attributes:
report_url: The Google Cloud Storage url where the latest report is stored.
(templated)
destination_bucket: The destination Google cloud storage bucket where the
report should be written to. (templated)
destination_object: The destination name of the object in the destination
Google cloud storage bucket. (templated)
If the destination points to an existing folder, the report will be
written under the specified folder.
gcp_conn_id: The connection ID to use when fetching connection info.
delegate_to: The account to impersonate, if any.
XComs:
destination_bucket: The Google cloud storage bucket the report was written
to.
destination_object: The Google cloud storage URI for the report.
"""
template_fields = ['report_url', 'destination_bucket', 'destination_object']
def __init__(self,
report_url,
destination_bucket,
destination_object=None,
chunk_size=5 * 1024 * 1024,
gcp_conn_id='google_cloud_default',
delegate_to=None,
*args,
**kwargs):
super(GoogleDisplayVideo360DownloadReportOperator, self).__init__(*args, **kwargs)
self.report_url = report_url
self.destination_bucket = destination_bucket
self.destination_object = destination_object
self.chunk_size = chunk_size
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.hook = None
@staticmethod
def _download_report(source_url, destination_file, chunk_size):
response = requests.head(source_url)
content_length = int(response.headers['Content-Length'])
start_byte = 0
while start_byte < content_length:
end_byte = start_byte + chunk_size - 1
if end_byte >= content_length:
end_byte = content_length - 1
headers = {'Range': 'bytes=%s-%s' % (start_byte, end_byte)}
response = requests.get(source_url, stream=True, headers=headers)
chunk = response.raw.read()
destination_file.write(chunk)
start_byte = end_byte + 1
destination_file.close()
@staticmethod
def _get_destination_uri(destination_object, report_url):
report_file_name = urlparse(report_url).path.split('/')[2]
if destination_object is None:
return report_file_name
if destination_object.endswith('/'):
return destination_object + report_file_name
return destination_object
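# Hedged example (illustrative URL of the assumed '<host>/<bucket>/<object>'
# shape): _get_destination_uri() keeps the report's own file name unless an
# explicit object name is given:
#
#   url = 'https://storage.googleapis.com/bucket/report_123.csv'
#   _get_destination_uri(None, url)          # -> 'report_123.csv'
#   _get_destination_uri('reports/', url)    # -> 'reports/report_123.csv'
#   _get_destination_uri('out.csv', url)     # -> 'out.csv'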
def execute(self, context):
if self.hook is None:
self.hook = GoogleCloudStorageHook(
google_cloud_storage_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to)
temp_file = tempfile.NamedTemporaryFile(delete=False)
try:
# TODO(efolgar): Directly stream to storage instead of temp file
self._download_report(self.report_url, temp_file, self.chunk_size)
destination_object_name = self._get_destination_uri(
self.destination_object, self.report_url)
self.hook.upload(
bucket=self.destination_bucket,
object=destination_object_name,
filename=temp_file.name,
multipart=True)
context['task_instance'].xcom_push(
'destination_bucket', self.destination_bucket)
context['task_instance'].xcom_push(
'destination_object', destination_object_name)
finally:
temp_file.close()
os.unlink(temp_file.name)
class GoogleDisplayVideo360DeleteReportOperator(BaseOperator):
"""Deletes Display & Video 360 queries and any associated reports.
Attributes:
api_version: The DV360 API version.
query_id: The DV360 query id to delete. (templated)
query_title: The DV360 query title to delete. (templated)
Any query with a matching title will be deleted.
ignore_if_missing: If True, return success even if the query is missing.
gcp_conn_id: The connection ID to use when fetching connection info.
delegate_to: The account to impersonate, if any.
"""
template_fields = ['query_id', 'query_title']
ui_color = '#ffd1dc'
def __init__(self,
api_version='v1',
query_id=None,
query_title=None,
ignore_if_missing=False,
gcp_conn_id='google_cloud_default',
delegate_to=None,
*args,
**kwargs):
super(GoogleDisplayVideo360DeleteReportOperator, self).__init__(*args, **kwargs)
self.api_version = api_version
self.query_id = query_id
self.query_title = query_title
self.ignore_if_missing = ignore_if_missing
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.hook = None
def execute(self, context):
if self.hook is None:
self.hook = GoogleDisplayVideo360Hook(
gcp_conn_id=self.gcp_conn_id,
api_version=self.api_version,
delegate_to=self.delegate_to)
if self.query_id is not None:
self.hook.deletequery(
self.query_id,
ignore_if_missing=self.ignore_if_missing)
if self.query_title is not None:
self.hook.deletequeries(
self.query_title,
ignore_if_missing=self.ignore_if_missing)
class GoogleDisplayVideo360ERFToBigQueryOperator(BaseOperator):
"""Upload Multiple Entity Read Files to specified big query dataset.
"""
def __init__(self,
gcp_conn_id='google_cloud_default',
report_body=None,
yesterday=False,
entity_type=None,
file_creation_date=None,
cloud_project_id=None,
bq_table=None,
schema=None,
gcs_bucket=None,
erf_bucket=None,
partner_ids=[],
write_disposition='WRITE_TRUNCATE',
*args,
**kwargs):
super(GoogleDisplayVideo360ERFToBigQueryOperator, self).__init__(*args, **kwargs)
self.gcp_conn_id = gcp_conn_id
self.service = None
self.bq_hook = None
self.gcs_hook = None
self.report_body = report_body
self.erf_bucket = erf_bucket
self.yesterday = yesterday
self.cloud_project_id = cloud_project_id
self.bq_table = bq_table
self.gcs_bucket = gcs_bucket
self.schema = schema
self.entity_type = entity_type
self.erf_object = 'entity/%s.0.%s.json' % (file_creation_date, entity_type)
self.partner_ids = partner_ids
self.write_disposition = write_disposition
self.file_creation_date = file_creation_date
def execute(self, context):
if self.gcs_hook is None:
self.gcs_hook = GoogleCloudStorageHook(
google_cloud_storage_conn_id=self.gcp_conn_id)
if self.bq_hook is None:
self.bq_hook = BigQueryHook(bigquery_conn_id=self.gcp_conn_id)
for i, partner_id in enumerate(self.partner_ids):
filename = erf_utils.download_and_transform_erf(self, partner_id)
entity_read_file_ndj = 'gs://%s/%s' % (self.gcs_bucket, filename)
if i > 0:
self.write_disposition = 'WRITE_APPEND'
bq_base_cursor = self.bq_hook.get_conn().cursor()
bq_base_cursor.run_load(
destination_project_dataset_table=self.bq_table,
schema_fields=self.schema,
source_uris=[entity_read_file_ndj],
source_format='NEWLINE_DELIMITED_JSON',
write_disposition=self.write_disposition)
self.gcs_hook.delete(self.gcs_bucket, filename)
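# Illustrative usage sketch (not part of the original module); project, bucket,
# table and schema names below are assumptions:
#
#     erf_to_bq = GoogleDisplayVideo360ERFToBigQueryOperator(
#         task_id='erf_to_bq',
#         cloud_project_id='my-project',
#         bq_table='my_dataset.erf_line_items',
#         schema=[{'name': 'id', 'type': 'STRING', 'mode': 'NULLABLE'}],
#         gcs_bucket='my-staging-bucket',
#         erf_bucket='gdbm-partner-bucket',
#         entity_type='LineItem',
#         file_creation_date='20210101',
#         partner_ids=['123456'])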
class GoogleDisplayVideo360SDFToBigQueryOperator(BaseOperator):
"""Make a request to SDF API and upload the data to BQ."""
DEFAULT_SDF_TABLE_NAMES = {
'LINE_ITEM': 'SDFLineItem',
'AD_GROUP': 'SDFAdGroup',
'AD': 'SDFAd',
'INSERTION_ORDER': 'SDFInsertionOrder',
'CAMPAIGN': 'SDFCampaign'
}
SDF_API_RESPONSE_KEYS = {
'LINE_ITEM': 'lineItems',
'AD_GROUP': 'adGroups',
'AD': 'ads',
'INSERTION_ORDER': 'insertionOrders',
'CAMPAIGN': 'campaigns'
}
def __init__(self,
gcp_conn_id='google_cloud_default',
gcs_bucket=None,
schema=None,
bq_dataset=None,
write_disposition=None,
cloud_project_id=None,
file_types=None,
filter_ids=None,
api_version=None,
filter_type=None,
table_names=DEFAULT_SDF_TABLE_NAMES,
sdf_api_response_keys=SDF_API_RESPONSE_KEYS,
*args,
**kwargs):
super(GoogleDisplayVideo360SDFToBigQueryOperator, self).__init__(*args, **kwargs)
self.gcp_conn_id = gcp_conn_id
self.service = None
self.hook = None
self.bq_hook = None
self.gcs_hook = None
self.gcs_bucket = gcs_bucket
self.schema = schema
self.bq_dataset = bq_dataset
self.write_disposition = write_disposition
self.cloud_project_id = cloud_project_id
self.file_types = file_types
self.filter_ids = filter_ids
self.api_version = api_version
self.filter_type = filter_type
self.table_names = table_names
self.sdf_api_response_keys = sdf_api_response_keys
def execute(self, context):
if self.hook is None:
self.hook = GoogleDisplayVideo360Hook(gcp_conn_id=self.gcp_conn_id)
if self.bq_hook is None:
self.bq_hook = BigQueryHook(bigquery_conn_id=self.gcp_conn_id)
if self.gcs_hook is None:
self.gcs_hook = GoogleCloudStorageHook(google_cloud_storage_conn_id=self.gcp_conn_id)
request_body = {'fileTypes': self.file_types, 'filterType': self.filter_type, 'filterIds': self.filter_ids,
'version': self.api_version}
logger.info('Request body: %s ' % request_body)
request = self.hook.get_service().sdf().download(body=request_body)
response = request.execute()
for file_type in self.file_types:
temp_file = None
try:
logger.info('Uploading SDF to GCS')
temp_file = tempfile.NamedTemporaryFile(delete=False)
response_key = self.sdf_api_response_keys.get(file_type)
temp_file.write(response[response_key].encode('utf-8'))
temp_file.close()
filename = '%d_%s_%s_%s.json' % (time.time() * 1e+9, randint(
1, 1000000), response_key, 'sdf')
self.gcs_hook.upload(self.gcs_bucket, filename, temp_file.name)
logger.info('SDF upload to GCS complete')
finally:
if temp_file:
temp_file.close()
os.unlink(temp_file.name)
sdf_file = 'gs://%s/%s' % (self.gcs_bucket, filename)
bq_table = self.table_names.get(file_type)
bq_table = '%s.%s' % (self.bq_dataset, bq_table)
schema = SDF_VERSIONED_SCHEMA_TYPES.get(self.api_version).get(file_type)
try:
bq_base_cursor = self.bq_hook.get_conn().cursor()
logger.info('Uploading SDF to BigQuery')
bq_base_cursor.run_load(
destination_project_dataset_table=bq_table,
schema_fields=schema,
source_uris=[sdf_file],
source_format='CSV',
skip_leading_rows=1,
write_disposition=self.write_disposition)
finally:
logger.info('Deleting SDF from GCS')
self.gcs_hook.delete(self.gcs_bucket, filename)
class GoogleDisplayVideo360RecordSDFAdvertiserOperator(BaseOperator):
"""
    Get Partner and Advertiser IDs from a report and populate an Airflow Variable.
"""
template_fields = ['report_url', 'variable_name']
def __init__(self,
report_url,
variable_name,
gcp_conn_id='google_cloud_default',
*args,
**kwargs):
super(GoogleDisplayVideo360RecordSDFAdvertiserOperator, self).__init__(*args, **kwargs)
self.gcp_conn_id = gcp_conn_id
self.service = None
self.report_url = report_url
self.variable_name = variable_name
def execute(self, context):
        report_file = None
        try:
            report_file = tempfile.NamedTemporaryFile(delete=False)
file_download = requests.get(self.report_url, stream=True)
for chunk in file_download.iter_content(chunk_size=1024 * 1024):
report_file.write(chunk)
report_file.close()
advertisers = {}
with open(report_file.name, 'r') as f:
csv_reader = csv.DictReader(f)
for line in csv_reader:
advertiser_id = line["Advertiser ID"]
partner_id = line["Partner ID"]
if advertiser_id.strip():
try:
advertisers[partner_id].append(advertiser_id)
message = 'ADDING to key %s new advertiser %s' % (
partner_id, advertiser_id)
logger.info(message)
except KeyError:
advertisers[partner_id] = [advertiser_id]
message = 'CREATING new key %s with advertiser %s' % (
partner_id, advertiser_id)
logger.info(message)
else:
break
models.Variable.set(self.variable_name, json.dumps(advertisers))
finally:
if report_file:
report_file.close()
os.unlink(report_file.name)
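# Illustrative usage sketch (not part of the original module): the Airflow
# Variable written above could be read back elsewhere roughly like this
# (the variable name is an assumption):
#
#     import json
#     from airflow import models
#
#     advertisers = json.loads(models.Variable.get('dv360_sdf_advertisers'))
#     for partner_id, advertiser_ids in advertisers.items():
#         print(partner_id, advertiser_ids)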
| apache-2.0 |
gilneidp/FinalProject | ALL_FILES/pox/misc/pidfile.py | 44 | 2096 | # Copyright 2013 James McCauley
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Component to create PID files for running POX as a service
"""
from pox.core import core
import os
import atexit
_files = set()
_first_init = False
def _del_pidfiles ():
if not _files: return
try:
msg = "Cleaning up %i pidfile" % (len(_files),)
if len(_files) != 1: msg += 's'
log.debug(msg)
except:
pass
for f in list(_files):
shortname = f
if os.path.abspath(os.path.basename(f)) == f:
shortname = os.path.basename(f)
try:
os.remove(f)
except:
msg = "Couldn't delete pidfile '%s'" % (shortname,)
try:
log.exception(msg)
except:
print(msg)
_files.remove(f)
def _handle_DownEvent (event):
_del_pidfiles()
def launch (file, force = False, __INSTANCE__ = None):
global log
log = core.getLogger()
absfile = os.path.abspath(file)
if absfile in _files:
log.warn("pidfile '%s' specified multiple times", file)
return
global _first_init
if not _first_init:
try:
atexit.register(_del_pidfiles)
except:
log.info('atexit not available')
core.addListenerByName("DownEvent", _handle_DownEvent)
_first_init = True
if os.path.exists(absfile) and not force:
log.error("Aborting startup: pidfile '%s' exists "
"(use --force to override)", file)
return False
try:
f = open(absfile, 'w')
f.write("%s\n" % (os.getpid(),))
except:
log.exception("Failed to create pidfile '%s'", file)
return False
f.close()
_files.add(absfile)
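# Usage sketch (an assumption, not part of the original file): the component is
# normally enabled from the POX command line, where launch() keyword arguments
# map to --option flags, e.g.
#
#     ./pox.py pidfile --file=/var/run/pox.pid --force forwarding.l2_learning
#
# which writes the current PID to /var/run/pox.pid and removes it on shutdown.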
| mit |
caot/intellij-community | python/lib/Lib/encodings/cp1140.py | 593 | 13361 | """ Python Character Mapping Codec cp1140 generated from 'python-mappings/CP1140.TXT' with gencodec.py.
"""#"
import codecs
### Codec APIs
class Codec(codecs.Codec):
def encode(self,input,errors='strict'):
return codecs.charmap_encode(input,errors,encoding_table)
def decode(self,input,errors='strict'):
return codecs.charmap_decode(input,errors,decoding_table)
class IncrementalEncoder(codecs.IncrementalEncoder):
def encode(self, input, final=False):
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
class IncrementalDecoder(codecs.IncrementalDecoder):
def decode(self, input, final=False):
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
class StreamWriter(Codec,codecs.StreamWriter):
pass
class StreamReader(Codec,codecs.StreamReader):
pass
### encodings module API
def getregentry():
return codecs.CodecInfo(
name='cp1140',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
)
### Decoding Table
decoding_table = (
u'\x00' # 0x00 -> NULL
u'\x01' # 0x01 -> START OF HEADING
u'\x02' # 0x02 -> START OF TEXT
u'\x03' # 0x03 -> END OF TEXT
u'\x9c' # 0x04 -> CONTROL
u'\t' # 0x05 -> HORIZONTAL TABULATION
u'\x86' # 0x06 -> CONTROL
u'\x7f' # 0x07 -> DELETE
u'\x97' # 0x08 -> CONTROL
u'\x8d' # 0x09 -> CONTROL
u'\x8e' # 0x0A -> CONTROL
u'\x0b' # 0x0B -> VERTICAL TABULATION
u'\x0c' # 0x0C -> FORM FEED
u'\r' # 0x0D -> CARRIAGE RETURN
u'\x0e' # 0x0E -> SHIFT OUT
u'\x0f' # 0x0F -> SHIFT IN
u'\x10' # 0x10 -> DATA LINK ESCAPE
u'\x11' # 0x11 -> DEVICE CONTROL ONE
u'\x12' # 0x12 -> DEVICE CONTROL TWO
u'\x13' # 0x13 -> DEVICE CONTROL THREE
u'\x9d' # 0x14 -> CONTROL
u'\x85' # 0x15 -> CONTROL
u'\x08' # 0x16 -> BACKSPACE
u'\x87' # 0x17 -> CONTROL
u'\x18' # 0x18 -> CANCEL
u'\x19' # 0x19 -> END OF MEDIUM
u'\x92' # 0x1A -> CONTROL
u'\x8f' # 0x1B -> CONTROL
u'\x1c' # 0x1C -> FILE SEPARATOR
u'\x1d' # 0x1D -> GROUP SEPARATOR
u'\x1e' # 0x1E -> RECORD SEPARATOR
u'\x1f' # 0x1F -> UNIT SEPARATOR
u'\x80' # 0x20 -> CONTROL
u'\x81' # 0x21 -> CONTROL
u'\x82' # 0x22 -> CONTROL
u'\x83' # 0x23 -> CONTROL
u'\x84' # 0x24 -> CONTROL
u'\n' # 0x25 -> LINE FEED
u'\x17' # 0x26 -> END OF TRANSMISSION BLOCK
u'\x1b' # 0x27 -> ESCAPE
u'\x88' # 0x28 -> CONTROL
u'\x89' # 0x29 -> CONTROL
u'\x8a' # 0x2A -> CONTROL
u'\x8b' # 0x2B -> CONTROL
u'\x8c' # 0x2C -> CONTROL
u'\x05' # 0x2D -> ENQUIRY
u'\x06' # 0x2E -> ACKNOWLEDGE
u'\x07' # 0x2F -> BELL
u'\x90' # 0x30 -> CONTROL
u'\x91' # 0x31 -> CONTROL
u'\x16' # 0x32 -> SYNCHRONOUS IDLE
u'\x93' # 0x33 -> CONTROL
u'\x94' # 0x34 -> CONTROL
u'\x95' # 0x35 -> CONTROL
u'\x96' # 0x36 -> CONTROL
u'\x04' # 0x37 -> END OF TRANSMISSION
u'\x98' # 0x38 -> CONTROL
u'\x99' # 0x39 -> CONTROL
u'\x9a' # 0x3A -> CONTROL
u'\x9b' # 0x3B -> CONTROL
u'\x14' # 0x3C -> DEVICE CONTROL FOUR
u'\x15' # 0x3D -> NEGATIVE ACKNOWLEDGE
u'\x9e' # 0x3E -> CONTROL
u'\x1a' # 0x3F -> SUBSTITUTE
u' ' # 0x40 -> SPACE
u'\xa0' # 0x41 -> NO-BREAK SPACE
u'\xe2' # 0x42 -> LATIN SMALL LETTER A WITH CIRCUMFLEX
u'\xe4' # 0x43 -> LATIN SMALL LETTER A WITH DIAERESIS
u'\xe0' # 0x44 -> LATIN SMALL LETTER A WITH GRAVE
u'\xe1' # 0x45 -> LATIN SMALL LETTER A WITH ACUTE
u'\xe3' # 0x46 -> LATIN SMALL LETTER A WITH TILDE
u'\xe5' # 0x47 -> LATIN SMALL LETTER A WITH RING ABOVE
u'\xe7' # 0x48 -> LATIN SMALL LETTER C WITH CEDILLA
u'\xf1' # 0x49 -> LATIN SMALL LETTER N WITH TILDE
u'\xa2' # 0x4A -> CENT SIGN
u'.' # 0x4B -> FULL STOP
u'<' # 0x4C -> LESS-THAN SIGN
u'(' # 0x4D -> LEFT PARENTHESIS
u'+' # 0x4E -> PLUS SIGN
u'|' # 0x4F -> VERTICAL LINE
u'&' # 0x50 -> AMPERSAND
u'\xe9' # 0x51 -> LATIN SMALL LETTER E WITH ACUTE
u'\xea' # 0x52 -> LATIN SMALL LETTER E WITH CIRCUMFLEX
u'\xeb' # 0x53 -> LATIN SMALL LETTER E WITH DIAERESIS
u'\xe8' # 0x54 -> LATIN SMALL LETTER E WITH GRAVE
u'\xed' # 0x55 -> LATIN SMALL LETTER I WITH ACUTE
u'\xee' # 0x56 -> LATIN SMALL LETTER I WITH CIRCUMFLEX
u'\xef' # 0x57 -> LATIN SMALL LETTER I WITH DIAERESIS
u'\xec' # 0x58 -> LATIN SMALL LETTER I WITH GRAVE
u'\xdf' # 0x59 -> LATIN SMALL LETTER SHARP S (GERMAN)
u'!' # 0x5A -> EXCLAMATION MARK
u'$' # 0x5B -> DOLLAR SIGN
u'*' # 0x5C -> ASTERISK
u')' # 0x5D -> RIGHT PARENTHESIS
u';' # 0x5E -> SEMICOLON
u'\xac' # 0x5F -> NOT SIGN
u'-' # 0x60 -> HYPHEN-MINUS
u'/' # 0x61 -> SOLIDUS
u'\xc2' # 0x62 -> LATIN CAPITAL LETTER A WITH CIRCUMFLEX
u'\xc4' # 0x63 -> LATIN CAPITAL LETTER A WITH DIAERESIS
u'\xc0' # 0x64 -> LATIN CAPITAL LETTER A WITH GRAVE
u'\xc1' # 0x65 -> LATIN CAPITAL LETTER A WITH ACUTE
u'\xc3' # 0x66 -> LATIN CAPITAL LETTER A WITH TILDE
u'\xc5' # 0x67 -> LATIN CAPITAL LETTER A WITH RING ABOVE
u'\xc7' # 0x68 -> LATIN CAPITAL LETTER C WITH CEDILLA
u'\xd1' # 0x69 -> LATIN CAPITAL LETTER N WITH TILDE
u'\xa6' # 0x6A -> BROKEN BAR
u',' # 0x6B -> COMMA
u'%' # 0x6C -> PERCENT SIGN
u'_' # 0x6D -> LOW LINE
u'>' # 0x6E -> GREATER-THAN SIGN
u'?' # 0x6F -> QUESTION MARK
u'\xf8' # 0x70 -> LATIN SMALL LETTER O WITH STROKE
u'\xc9' # 0x71 -> LATIN CAPITAL LETTER E WITH ACUTE
u'\xca' # 0x72 -> LATIN CAPITAL LETTER E WITH CIRCUMFLEX
u'\xcb' # 0x73 -> LATIN CAPITAL LETTER E WITH DIAERESIS
u'\xc8' # 0x74 -> LATIN CAPITAL LETTER E WITH GRAVE
u'\xcd' # 0x75 -> LATIN CAPITAL LETTER I WITH ACUTE
u'\xce' # 0x76 -> LATIN CAPITAL LETTER I WITH CIRCUMFLEX
u'\xcf' # 0x77 -> LATIN CAPITAL LETTER I WITH DIAERESIS
u'\xcc' # 0x78 -> LATIN CAPITAL LETTER I WITH GRAVE
u'`' # 0x79 -> GRAVE ACCENT
u':' # 0x7A -> COLON
u'#' # 0x7B -> NUMBER SIGN
u'@' # 0x7C -> COMMERCIAL AT
u"'" # 0x7D -> APOSTROPHE
u'=' # 0x7E -> EQUALS SIGN
u'"' # 0x7F -> QUOTATION MARK
u'\xd8' # 0x80 -> LATIN CAPITAL LETTER O WITH STROKE
u'a' # 0x81 -> LATIN SMALL LETTER A
u'b' # 0x82 -> LATIN SMALL LETTER B
u'c' # 0x83 -> LATIN SMALL LETTER C
u'd' # 0x84 -> LATIN SMALL LETTER D
u'e' # 0x85 -> LATIN SMALL LETTER E
u'f' # 0x86 -> LATIN SMALL LETTER F
u'g' # 0x87 -> LATIN SMALL LETTER G
u'h' # 0x88 -> LATIN SMALL LETTER H
u'i' # 0x89 -> LATIN SMALL LETTER I
u'\xab' # 0x8A -> LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
u'\xbb' # 0x8B -> RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
u'\xf0' # 0x8C -> LATIN SMALL LETTER ETH (ICELANDIC)
u'\xfd' # 0x8D -> LATIN SMALL LETTER Y WITH ACUTE
u'\xfe' # 0x8E -> LATIN SMALL LETTER THORN (ICELANDIC)
u'\xb1' # 0x8F -> PLUS-MINUS SIGN
u'\xb0' # 0x90 -> DEGREE SIGN
u'j' # 0x91 -> LATIN SMALL LETTER J
u'k' # 0x92 -> LATIN SMALL LETTER K
u'l' # 0x93 -> LATIN SMALL LETTER L
u'm' # 0x94 -> LATIN SMALL LETTER M
u'n' # 0x95 -> LATIN SMALL LETTER N
u'o' # 0x96 -> LATIN SMALL LETTER O
u'p' # 0x97 -> LATIN SMALL LETTER P
u'q' # 0x98 -> LATIN SMALL LETTER Q
u'r' # 0x99 -> LATIN SMALL LETTER R
u'\xaa' # 0x9A -> FEMININE ORDINAL INDICATOR
u'\xba' # 0x9B -> MASCULINE ORDINAL INDICATOR
u'\xe6' # 0x9C -> LATIN SMALL LIGATURE AE
u'\xb8' # 0x9D -> CEDILLA
u'\xc6' # 0x9E -> LATIN CAPITAL LIGATURE AE
u'\u20ac' # 0x9F -> EURO SIGN
u'\xb5' # 0xA0 -> MICRO SIGN
u'~' # 0xA1 -> TILDE
u's' # 0xA2 -> LATIN SMALL LETTER S
u't' # 0xA3 -> LATIN SMALL LETTER T
u'u' # 0xA4 -> LATIN SMALL LETTER U
u'v' # 0xA5 -> LATIN SMALL LETTER V
u'w' # 0xA6 -> LATIN SMALL LETTER W
u'x' # 0xA7 -> LATIN SMALL LETTER X
u'y' # 0xA8 -> LATIN SMALL LETTER Y
u'z' # 0xA9 -> LATIN SMALL LETTER Z
u'\xa1' # 0xAA -> INVERTED EXCLAMATION MARK
u'\xbf' # 0xAB -> INVERTED QUESTION MARK
u'\xd0' # 0xAC -> LATIN CAPITAL LETTER ETH (ICELANDIC)
u'\xdd' # 0xAD -> LATIN CAPITAL LETTER Y WITH ACUTE
u'\xde' # 0xAE -> LATIN CAPITAL LETTER THORN (ICELANDIC)
u'\xae' # 0xAF -> REGISTERED SIGN
u'^' # 0xB0 -> CIRCUMFLEX ACCENT
u'\xa3' # 0xB1 -> POUND SIGN
u'\xa5' # 0xB2 -> YEN SIGN
u'\xb7' # 0xB3 -> MIDDLE DOT
u'\xa9' # 0xB4 -> COPYRIGHT SIGN
u'\xa7' # 0xB5 -> SECTION SIGN
u'\xb6' # 0xB6 -> PILCROW SIGN
u'\xbc' # 0xB7 -> VULGAR FRACTION ONE QUARTER
u'\xbd' # 0xB8 -> VULGAR FRACTION ONE HALF
u'\xbe' # 0xB9 -> VULGAR FRACTION THREE QUARTERS
u'[' # 0xBA -> LEFT SQUARE BRACKET
u']' # 0xBB -> RIGHT SQUARE BRACKET
u'\xaf' # 0xBC -> MACRON
u'\xa8' # 0xBD -> DIAERESIS
u'\xb4' # 0xBE -> ACUTE ACCENT
u'\xd7' # 0xBF -> MULTIPLICATION SIGN
u'{' # 0xC0 -> LEFT CURLY BRACKET
u'A' # 0xC1 -> LATIN CAPITAL LETTER A
u'B' # 0xC2 -> LATIN CAPITAL LETTER B
u'C' # 0xC3 -> LATIN CAPITAL LETTER C
u'D' # 0xC4 -> LATIN CAPITAL LETTER D
u'E' # 0xC5 -> LATIN CAPITAL LETTER E
u'F' # 0xC6 -> LATIN CAPITAL LETTER F
u'G' # 0xC7 -> LATIN CAPITAL LETTER G
u'H' # 0xC8 -> LATIN CAPITAL LETTER H
u'I' # 0xC9 -> LATIN CAPITAL LETTER I
u'\xad' # 0xCA -> SOFT HYPHEN
u'\xf4' # 0xCB -> LATIN SMALL LETTER O WITH CIRCUMFLEX
u'\xf6' # 0xCC -> LATIN SMALL LETTER O WITH DIAERESIS
u'\xf2' # 0xCD -> LATIN SMALL LETTER O WITH GRAVE
u'\xf3' # 0xCE -> LATIN SMALL LETTER O WITH ACUTE
u'\xf5' # 0xCF -> LATIN SMALL LETTER O WITH TILDE
u'}' # 0xD0 -> RIGHT CURLY BRACKET
u'J' # 0xD1 -> LATIN CAPITAL LETTER J
u'K' # 0xD2 -> LATIN CAPITAL LETTER K
u'L' # 0xD3 -> LATIN CAPITAL LETTER L
u'M' # 0xD4 -> LATIN CAPITAL LETTER M
u'N' # 0xD5 -> LATIN CAPITAL LETTER N
u'O' # 0xD6 -> LATIN CAPITAL LETTER O
u'P' # 0xD7 -> LATIN CAPITAL LETTER P
u'Q' # 0xD8 -> LATIN CAPITAL LETTER Q
u'R' # 0xD9 -> LATIN CAPITAL LETTER R
u'\xb9' # 0xDA -> SUPERSCRIPT ONE
u'\xfb' # 0xDB -> LATIN SMALL LETTER U WITH CIRCUMFLEX
u'\xfc' # 0xDC -> LATIN SMALL LETTER U WITH DIAERESIS
u'\xf9' # 0xDD -> LATIN SMALL LETTER U WITH GRAVE
u'\xfa' # 0xDE -> LATIN SMALL LETTER U WITH ACUTE
u'\xff' # 0xDF -> LATIN SMALL LETTER Y WITH DIAERESIS
u'\\' # 0xE0 -> REVERSE SOLIDUS
u'\xf7' # 0xE1 -> DIVISION SIGN
u'S' # 0xE2 -> LATIN CAPITAL LETTER S
u'T' # 0xE3 -> LATIN CAPITAL LETTER T
u'U' # 0xE4 -> LATIN CAPITAL LETTER U
u'V' # 0xE5 -> LATIN CAPITAL LETTER V
u'W' # 0xE6 -> LATIN CAPITAL LETTER W
u'X' # 0xE7 -> LATIN CAPITAL LETTER X
u'Y' # 0xE8 -> LATIN CAPITAL LETTER Y
u'Z' # 0xE9 -> LATIN CAPITAL LETTER Z
u'\xb2' # 0xEA -> SUPERSCRIPT TWO
u'\xd4' # 0xEB -> LATIN CAPITAL LETTER O WITH CIRCUMFLEX
u'\xd6' # 0xEC -> LATIN CAPITAL LETTER O WITH DIAERESIS
u'\xd2' # 0xED -> LATIN CAPITAL LETTER O WITH GRAVE
u'\xd3' # 0xEE -> LATIN CAPITAL LETTER O WITH ACUTE
u'\xd5' # 0xEF -> LATIN CAPITAL LETTER O WITH TILDE
u'0' # 0xF0 -> DIGIT ZERO
u'1' # 0xF1 -> DIGIT ONE
u'2' # 0xF2 -> DIGIT TWO
u'3' # 0xF3 -> DIGIT THREE
u'4' # 0xF4 -> DIGIT FOUR
u'5' # 0xF5 -> DIGIT FIVE
u'6' # 0xF6 -> DIGIT SIX
u'7' # 0xF7 -> DIGIT SEVEN
u'8' # 0xF8 -> DIGIT EIGHT
u'9' # 0xF9 -> DIGIT NINE
u'\xb3' # 0xFA -> SUPERSCRIPT THREE
u'\xdb' # 0xFB -> LATIN CAPITAL LETTER U WITH CIRCUMFLEX
u'\xdc' # 0xFC -> LATIN CAPITAL LETTER U WITH DIAERESIS
u'\xd9' # 0xFD -> LATIN CAPITAL LETTER U WITH GRAVE
u'\xda' # 0xFE -> LATIN CAPITAL LETTER U WITH ACUTE
u'\x9f' # 0xFF -> CONTROL
)
### Encoding table
encoding_table=codecs.charmap_build(decoding_table)
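# Illustrative sketch (not part of the generated module): once available under
# the encodings package, the codec round-trips text like any other charmap
# codec, e.g.
#
#     data = u'caf\xe9 \u20ac100'.encode('cp1140')   # EBCDIC bytes, euro sign included
#     assert data.decode('cp1140') == u'caf\xe9 \u20ac100'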
| apache-2.0 |
numerigraphe/odoo | addons/sale_mrp/tests/__init__.py | 262 | 1085 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Business Applications
# Copyright (c) 2012-TODAY OpenERP S.A. <http://openerp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from . import test_move_explode
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
datalogics-robb/scons | bin/SConsDoc.py | 2 | 9625 | #!/usr/bin/env python
#
# Module for handling SCons documentation processing.
#
__doc__ = """
This module parses home-brew XML files that document various things
in SCons. Right now, it handles Builders, construction variables,
and Tools, but we expect it to get extended in the future.
In general, you can use any DocBook tag in the input, and this module
just adds processing various home-brew tags to try to make life a
little easier.
Builder example:
<builder name="VARIABLE">
<summary>
This is the summary description of an SCons Tool.
It will get placed in the man page,
and in the appropriate User's Guide appendix.
The name of any builder may be interpolated
anywhere in the document by specifying the
&b-VARIABLE;
element. It need not be on a line by itself.
Unlike normal XML, blank lines are significant in these
descriptions and serve to separate paragraphs.
They'll get replaced in DocBook output with appropriate tags
to indicate a new paragraph.
<example>
print "this is example code, it will be offset and indented"
</example>
</summary>
</builder>
Construction variable example:
<cvar name="VARIABLE">
<summary>
This is the summary description of a construction variable.
It will get placed in the man page,
and in the appropriate User's Guide appendix.
The name of any construction variable may be interpolated
anywhere in the document by specifying the
&t-VARIABLE;
element. It need not be on a line by itself.
Unlike normal XML, blank lines are significant in these
descriptions and serve to separate paragraphs.
They'll get replaced in DocBook output with appropriate tags
to indicate a new paragraph.
<example>
print "this is example code, it will be offset and indented"
</example>
</summary>
</cvar>
Tool example:
<tool name="VARIABLE">
<summary>
This is the summary description of an SCons Tool.
It will get placed in the man page,
and in the appropriate User's Guide appendix.
The name of any tool may be interpolated
anywhere in the document by specifying the
&t-VARIABLE;
element. It need not be on a line by itself.
Unlike normal XML, blank lines are significant in these
descriptions and serve to separate paragraphs.
They'll get replaced in DocBook output with appropriate tags
to indicate a new paragraph.
<example>
print "this is example code, it will be offset and indented"
</example>
</summary>
</tool>
"""
import os.path
import imp
import sys
import xml.sax.handler
class Item:
def __init__(self, name):
self.name = name
self.sort_name = name.lower()
if self.sort_name[0] == '_':
self.sort_name = self.sort_name[1:]
self.summary = []
self.sets = None
self.uses = None
def cmp_name(self, name):
if name[0] == '_':
name = name[1:]
return name.lower()
def __cmp__(self, other):
return cmp(self.sort_name, other.sort_name)
class Builder(Item):
pass
class Tool(Item):
def __init__(self, name):
Item.__init__(self, name)
self.entity = self.name.replace('+', 'X')
class ConstructionVariable(Item):
pass
class Chunk:
def __init__(self, tag, body=None):
self.tag = tag
if not body:
body = []
self.body = body
def __str__(self):
body = ''.join(self.body)
return "<%s>%s</%s>\n" % (self.tag, body, self.tag)
def append(self, data):
self.body.append(data)
class Summary:
def __init__(self):
self.body = []
self.collect = []
def append(self, data):
self.collect.append(data)
def end_para(self):
text = ''.join(self.collect)
paras = text.split('\n\n')
if paras == ['\n']:
return
if paras[0] == '':
self.body.append('\n')
paras = paras[1:]
paras[0] = '\n' + paras[0]
if paras[-1] == '':
paras = paras[:-1]
paras[-1] = paras[-1] + '\n'
last = '\n'
else:
last = None
sep = None
for p in paras:
c = Chunk("para", p)
if sep:
self.body.append(sep)
self.body.append(c)
sep = '\n'
if last:
self.body.append(last)
def begin_chunk(self, chunk):
self.end_para()
self.collect = chunk
def end_chunk(self):
self.body.append(self.collect)
self.collect = []
class SConsDocHandler(xml.sax.handler.ContentHandler,
xml.sax.handler.ErrorHandler):
def __init__(self):
self._start_dispatch = {}
self._end_dispatch = {}
keys = self.__class__.__dict__.keys()
start_tag_method_names = filter(lambda k: k[:6] == 'start_', keys)
end_tag_method_names = filter(lambda k: k[:4] == 'end_', keys)
for method_name in start_tag_method_names:
tag = method_name[6:]
self._start_dispatch[tag] = getattr(self, method_name)
for method_name in end_tag_method_names:
tag = method_name[4:]
self._end_dispatch[tag] = getattr(self, method_name)
self.stack = []
self.collect = []
self.current_object = []
self.builders = {}
self.tools = {}
self.cvars = {}
def startElement(self, name, attrs):
try:
start_element_method = self._start_dispatch[name]
except KeyError:
self.characters('<%s>' % name)
else:
start_element_method(attrs)
def endElement(self, name):
try:
end_element_method = self._end_dispatch[name]
except KeyError:
self.characters('</%s>' % name)
else:
end_element_method()
#
#
def characters(self, chars):
self.collect.append(chars)
def begin_collecting(self, chunk):
self.collect = chunk
def end_collecting(self):
self.collect = []
def begin_chunk(self):
pass
def end_chunk(self):
pass
#
#
#
def begin_xxx(self, obj):
self.stack.append(self.current_object)
self.current_object = obj
def end_xxx(self):
self.current_object = self.stack.pop()
#
#
#
def start_scons_doc(self, attrs):
pass
def end_scons_doc(self):
pass
def start_builder(self, attrs):
name = attrs.get('name')
try:
builder = self.builders[name]
except KeyError:
builder = Builder(name)
self.builders[name] = builder
self.begin_xxx(builder)
def end_builder(self):
self.end_xxx()
def start_tool(self, attrs):
name = attrs.get('name')
try:
tool = self.tools[name]
except KeyError:
tool = Tool(name)
self.tools[name] = tool
self.begin_xxx(tool)
def end_tool(self):
self.end_xxx()
def start_cvar(self, attrs):
name = attrs.get('name')
try:
cvar = self.cvars[name]
except KeyError:
cvar = ConstructionVariable(name)
self.cvars[name] = cvar
self.begin_xxx(cvar)
def end_cvar(self):
self.end_xxx()
def start_summary(self, attrs):
summary = Summary()
self.current_object.summary = summary
self.begin_xxx(summary)
self.begin_collecting(summary)
def end_summary(self):
self.current_object.end_para()
self.end_xxx()
def start_example(self, attrs):
example = Chunk("programlisting")
self.current_object.begin_chunk(example)
def end_example(self):
self.current_object.end_chunk()
def start_uses(self, attrs):
self.begin_collecting([])
def end_uses(self):
self.current_object.uses = ''.join(self.collect).split()
self.current_object.uses.sort()
self.end_collecting()
def start_sets(self, attrs):
self.begin_collecting([])
def end_sets(self):
self.current_object.sets = ''.join(self.collect).split()
self.current_object.sets.sort()
self.end_collecting()
# Stuff for the ErrorHandler portion.
def error(self, exception):
linenum = exception._linenum - self.preamble_lines
sys.stderr.write('%s:%d:%d: %s (error)\n' % (self.filename, linenum, exception._colnum, ''.join(exception.args)))
def fatalError(self, exception):
linenum = exception._linenum - self.preamble_lines
sys.stderr.write('%s:%d:%d: %s (fatalError)\n' % (self.filename, linenum, exception._colnum, ''.join(exception.args)))
def set_file_info(self, filename, preamble_lines):
self.filename = filename
self.preamble_lines = preamble_lines
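# Illustrative sketch (an assumption, not part of the original module): the
# handler above is normally driven through xml.sax, roughly like
#
#     import xml.sax
#     handler = SConsDocHandler()
#     handler.set_file_info('builders.xml', 0)
#     xml.sax.parse(open('builders.xml'), handler, handler)
#     print(sorted(handler.builders.keys()))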
# lifted from Ka-Ping Yee's way cool pydoc module.
def importfile(path):
"""Import a Python source file or compiled file given its path."""
magic = imp.get_magic()
file = open(path, 'r')
if file.read(len(magic)) == magic:
kind = imp.PY_COMPILED
else:
kind = imp.PY_SOURCE
file.close()
filename = os.path.basename(path)
name, ext = os.path.splitext(filename)
file = open(path, 'r')
try:
module = imp.load_module(name, file, path, (ext, 'r', kind))
except ImportError, e:
sys.stderr.write("Could not import %s: %s\n" % (path, e))
return None
file.close()
return module
| mit |
TyberiusPrime/pysam | tests/tabix_test.py | 2 | 43143 | #!/usr/bin/env python
'''unit testing code for pysam.
Execute in the :file:`tests` directory as it requires the Makefile
and data files located there.
'''
import sys
import os
import shutil
import gzip
import pysam
import unittest
import glob
import re
import copy
from TestUtils import checkURL
DATADIR = 'tabix_data'
IS_PYTHON3 = sys.version_info[0] >= 3
def myzip_open(infile, mode="r"):
'''open compressed file and decode.'''
def _convert(f):
for l in f:
yield l.decode("ascii")
if IS_PYTHON3:
if mode == "r":
return _convert(gzip.open(infile, "r"))
else:
            return gzip.open(infile, mode)
def loadAndConvert(filename, encode=True):
'''load data from filename and convert all fields to string.
Filename can be either plain or compressed (ending in .gz).
'''
data = []
if filename.endswith(".gz"):
with gzip.open(filename) as inf:
for line in inf:
line = line.decode("ascii")
if line.startswith("#"):
continue
d = line.strip().split("\t")
data.append(d)
else:
with open(filename) as f:
for line in f:
if line.startswith("#"):
continue
d = line.strip().split("\t")
data.append(d)
return data
def splitToBytes(s):
'''split string and return list of bytes.'''
return [x.encode("ascii") for x in s.split("\t")]
def checkBinaryEqual(filename1, filename2):
'''return true if the two files are binary equal.'''
if os.path.getsize(filename1) != os.path.getsize(filename2):
return False
with open(filename1, "rb") as infile:
d1 = infile.read()
with open(filename2, "rb") as infile:
d2 = infile.read()
found = False
for c1, c2 in zip(d1, d2):
if c1 != c2:
break
else:
found = True
return found
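# Minimal sketch (not part of the test suite) of the access pattern these tests
# exercise, assuming the example data file is present:
#
#     with pysam.TabixFile(os.path.join(DATADIR, 'example.gtf.gz')) as tbx:
#         for row in tbx.fetch('chr1', 1000, 2000, parser=pysam.asGTF()):
#             print(row.contig, row.feature, row.start, row.end)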
class TestIndexing(unittest.TestCase):
filename = os.path.join(DATADIR, "example.gtf.gz")
filename_idx = os.path.join(DATADIR, "example.gtf.gz.tbi")
def setUp(self):
self.tmpfilename = "tmp_%i.gtf.gz" % id(self)
shutil.copyfile(self.filename, self.tmpfilename)
def testIndexPreset(self):
'''test indexing via preset.'''
pysam.tabix_index(self.tmpfilename, preset="gff")
checkBinaryEqual(self.tmpfilename + ".tbi", self.filename_idx)
def tearDown(self):
os.unlink(self.tmpfilename)
os.unlink(self.tmpfilename + ".tbi")
class TestCompression(unittest.TestCase):
filename = os.path.join(DATADIR, "example.gtf.gz")
filename_idx = os.path.join(DATADIR, "example.gtf.gz.tbi")
preset = "gff"
def setUp(self):
self.tmpfilename = "tmp_TestCompression_%i" % id(self)
with gzip.open(self.filename, "rb") as infile, \
open(self.tmpfilename, "wb") as outfile:
outfile.write(infile.read())
def testCompression(self):
'''see also issue 106'''
pysam.tabix_compress(self.tmpfilename, self.tmpfilename + ".gz")
checkBinaryEqual(self.tmpfilename, self.tmpfilename + ".gz")
def testIndexPresetUncompressed(self):
'''test indexing via preset.'''
pysam.tabix_index(self.tmpfilename, preset=self.preset)
# check if uncompressed file has been removed
self.assertEqual(os.path.exists(self.tmpfilename), False)
checkBinaryEqual(self.tmpfilename + ".gz", self.filename)
checkBinaryEqual(self.tmpfilename + ".gz.tbi", self.filename_idx)
def testIndexPresetCompressed(self):
'''test indexing via preset.'''
pysam.tabix_compress(self.tmpfilename, self.tmpfilename + ".gz")
pysam.tabix_index(self.tmpfilename + ".gz", preset=self.preset)
checkBinaryEqual(self.tmpfilename + ".gz", self.filename)
checkBinaryEqual(self.tmpfilename + ".gz.tbi", self.filename_idx)
def tearDown(self):
try:
os.unlink(self.tmpfilename)
os.unlink(self.tmpfilename + ".gz")
os.unlink(self.tmpfilename + ".gz.tbi")
except OSError:
pass
class TestCompressionSam(TestCompression):
filename = os.path.join(DATADIR, "example.sam.gz")
filename_index = os.path.join(DATADIR, "example.sam.gz.tbi")
preset = "sam"
class TestCompressionBed(TestCompression):
filename = os.path.join(DATADIR, "example.bed.gz")
filename_index = os.path.join(DATADIR, "example.bed.gz.tbi")
preset = "bed"
class TestCompressionVCF(TestCompression):
filename = os.path.join(DATADIR, "example.vcf.gz")
filename_index = os.path.join(DATADIR, "example.vcf.gz.tbi")
preset = "vcf"
class IterationTest(unittest.TestCase):
with_comments = False
def setUp(self):
lines = []
with gzip.open(self.filename, "rb") as inf:
for line in inf:
line = line.decode('ascii')
if line.startswith("#"):
if not self.with_comments:
continue
lines.append(line)
# creates index of contig, start, end, adds content without newline.
self.compare = [
(x[0][0], int(x[0][3]), int(x[0][4]), x[1])
for x in [(y.split("\t"), y[:-1]) for y in lines
if not y.startswith("#")]]
self.comments = [x[:-1] for x in lines if x.startswith("#")]
def getSubset(self, contig=None, start=None, end=None):
if contig is None:
# all lines
subset = [x[3] for x in self.compare]
else:
if start is not None and end is None:
# until end of contig
subset = [x[3]
for x in self.compare if x[0] == contig
and x[2] > start]
elif start is None and end is not None:
# from start of contig
subset = [x[3]
for x in self.compare if x[0] == contig
and x[1] <= end]
elif start is None and end is None:
subset = [x[3] for x in self.compare if x[0] == contig]
else:
# all within interval
subset = [x[3] for x in self.compare if x[0] == contig
and min(x[2], end) - max(x[1], start) > 0]
if self.with_comments:
subset.extend(self.comments)
return subset
def checkPairwise(self, result, ref):
'''check pairwise results.
'''
result.sort()
ref.sort()
a = set(result)
b = set(ref)
self.assertEqual(
len(result), len(ref),
"unexpected number of results: "
"result=%i, expected ref=%i, differences are %s: %s"
% (len(result), len(ref),
a.difference(b),
b.difference(a)))
for x, d in enumerate(list(zip(result, ref))):
self.assertEqual(
d[0], d[1],
"unexpected results in pair %i:\n'%s', expected\n'%s'" %
(x, d[0], d[1]))
class TestGZFile(IterationTest):
filename = os.path.join(DATADIR, "example.gtf.gz")
with_comments = True
def setUp(self):
IterationTest.setUp(self)
self.gzfile = pysam.GZIterator(self.filename)
def testAll(self):
result = list(self.gzfile)
ref = self.getSubset()
self.checkPairwise(result, ref)
class TestIterationWithoutComments(IterationTest):
'''test iterating with TabixFile.fetch() when
there are no comments in the file.'''
filename = os.path.join(DATADIR,
"example.gtf.gz")
def setUp(self):
IterationTest.setUp(self)
self.tabix = pysam.TabixFile(self.filename)
def tearDown(self):
self.tabix.close()
def testRegionStrings(self):
"""test if access with various region strings
works"""
self.assertEqual(218, len(list(
self.tabix.fetch("chr1"))))
self.assertEqual(218, len(list(
self.tabix.fetch("chr1", 1000))))
self.assertEqual(218, len(list(
self.tabix.fetch("chr1", end=1000000))))
self.assertEqual(218, len(list(
self.tabix.fetch("chr1", 1000, 1000000))))
def testAll(self):
result = list(self.tabix.fetch())
ref = self.getSubset()
self.checkPairwise(result, ref)
def testPerContig(self):
for contig in ("chr1", "chr2", "chr1", "chr2"):
result = list(self.tabix.fetch(contig))
ref = self.getSubset(contig)
self.checkPairwise(result, ref)
def testPerContigToEnd(self):
end = None
for contig in ("chr1", "chr2", "chr1", "chr2"):
for start in range(0, 200000, 1000):
result = list(self.tabix.fetch(contig, start, end))
ref = self.getSubset(contig, start, end)
self.checkPairwise(result, ref)
def testPerContigFromStart(self):
start = None
for contig in ("chr1", "chr2", "chr1", "chr2"):
for end in range(0, 200000, 1000):
result = list(self.tabix.fetch(contig, start, end))
ref = self.getSubset(contig, start, end)
self.checkPairwise(result, ref)
def testPerContig2(self):
start, end = None, None
for contig in ("chr1", "chr2", "chr1", "chr2"):
result = list(self.tabix.fetch(contig, start, end))
ref = self.getSubset(contig, start, end)
self.checkPairwise(result, ref)
def testPerInterval(self):
start, end = None, None
for contig in ("chr1", "chr2", "chr1", "chr2"):
for start in range(0, 200000, 2000):
for end in range(start, start + 2000, 500):
result = list(self.tabix.fetch(contig, start, end))
ref = self.getSubset(contig, start, end)
self.checkPairwise(result, ref)
def testInvalidIntervals(self):
# invalid intervals (start > end)
self.assertRaises(ValueError, self.tabix.fetch, "chr1", 0, -10)
self.assertRaises(ValueError, self.tabix.fetch, "chr1", 200, 0)
# out of range intervals
self.assertRaises(ValueError, self.tabix.fetch, "chr1", -10, 200)
self.assertRaises(ValueError, self.tabix.fetch, "chr1", -10, -20)
# unknown chromosome
self.assertRaises(ValueError, self.tabix.fetch, "chrUn")
# out of range access
# to be implemented
# self.assertRaises(IndexError, self.tabix.fetch, "chr1", 1000000, 2000000)
# raise no error for empty intervals
self.tabix.fetch("chr1", 100, 100)
def testGetContigs(self):
self.assertEqual(sorted(self.tabix.contigs), ["chr1", "chr2"])
# check that contigs is read-only
self.assertRaises(
AttributeError, setattr, self.tabix, "contigs", ["chr1", "chr2"])
def testHeader(self):
ref = []
with gzip.open(self.filename) as inf:
for x in inf:
x = x.decode("ascii")
if not x.startswith("#"):
break
ref.append(x[:-1].encode('ascii'))
header = list(self.tabix.header)
self.assertEqual(ref, header)
def testReopening(self):
'''test repeated opening of the same file.'''
def func1():
# opens any tabix file
with pysam.TabixFile(self.filename) as inf:
pass
for i in range(1000):
func1()
class TestIterationWithComments(TestIterationWithoutComments):
'''test iterating with TabixFile.fetch() when
there are comments in the file.
Tests will create plenty of warnings on stderr.
'''
filename = os.path.join(DATADIR, "example_comments.gtf.gz")
def setUp(self):
TestIterationWithoutComments.setUp(self)
class TestParser(unittest.TestCase):
filename = os.path.join(DATADIR, "example.gtf.gz")
def setUp(self):
self.tabix = pysam.TabixFile(self.filename)
self.compare = loadAndConvert(self.filename)
def tearDown(self):
self.tabix.close()
def testRead(self):
for x, r in enumerate(self.tabix.fetch(parser=pysam.asTuple())):
c = self.compare[x]
self.assertEqual(c, list(r))
self.assertEqual(len(c), len(r))
# test indexing
for y in range(0, len(r)):
self.assertEqual(c[y], r[y])
# test slicing access
for y in range(0, len(r) - 1):
for cc in range(y + 1, len(r)):
self.assertEqual(c[y:cc],
r[y:cc])
self.assertEqual("\t".join(map(str, c)),
str(r))
def testWrite(self):
for x, r in enumerate(self.tabix.fetch(parser=pysam.asTuple())):
self.assertEqual(self.compare[x], list(r))
c = list(r)
for y in range(len(r)):
r[y] = "test_%05i" % y
c[y] = "test_%05i" % y
self.assertEqual([x for x in c], list(r))
self.assertEqual("\t".join(c), str(r))
# check second assignment
for y in range(len(r)):
r[y] = "test_%05i" % y
self.assertEqual([x for x in c], list(r))
self.assertEqual("\t".join(c), str(r))
def testUnset(self):
for x, r in enumerate(self.tabix.fetch(parser=pysam.asTuple())):
self.assertEqual(self.compare[x], list(r))
c = list(r)
e = list(r)
for y in range(len(r)):
r[y] = None
c[y] = None
e[y] = ""
self.assertEqual(c, list(r))
self.assertEqual("\t".join(e), str(r))
def testIteratorCompressed(self):
'''test iteration from compressed file.'''
with gzip.open(self.filename) as infile:
for x, r in enumerate(pysam.tabix_iterator(
infile, pysam.asTuple())):
self.assertEqual(self.compare[x], list(r))
self.assertEqual(len(self.compare[x]), len(r))
# test indexing
for c in range(0, len(r)):
self.assertEqual(self.compare[x][c], r[c])
# test slicing access
for c in range(0, len(r) - 1):
for cc in range(c + 1, len(r)):
self.assertEqual(self.compare[x][c:cc],
r[c:cc])
def testIteratorUncompressed(self):
'''test iteration from uncompressed file.'''
tmpfilename = 'tmp_testIteratorUncompressed'
with gzip.open(self.filename, "rb") as infile, \
open(tmpfilename, "wb") as outfile:
outfile.write(infile.read())
with open(tmpfilename) as infile:
for x, r in enumerate(pysam.tabix_iterator(
infile, pysam.asTuple())):
self.assertEqual(self.compare[x], list(r))
self.assertEqual(len(self.compare[x]), len(r))
# test indexing
for c in range(0, len(r)):
self.assertEqual(self.compare[x][c], r[c])
# test slicing access
for c in range(0, len(r) - 1):
for cc in range(c + 1, len(r)):
self.assertEqual(self.compare[x][c:cc],
r[c:cc])
os.unlink(tmpfilename)
def testCopy(self):
a = self.tabix.fetch(parser=pysam.asTuple()).next()
b = copy.copy(a)
self.assertEqual(a, b)
a = self.tabix.fetch(parser=pysam.asGTF()).next()
b = copy.copy(a)
self.assertEqual(a, b)
class TestGTF(TestParser):
def testRead(self):
for x, r in enumerate(self.tabix.fetch(parser=pysam.asGTF())):
c = self.compare[x]
self.assertEqual(len(c), len(r))
self.assertEqual(list(c), list(r))
self.assertEqual(c, str(r).split("\t"))
self.assertTrue(r.gene_id.startswith("ENSG"))
if r.feature != 'gene':
self.assertTrue(r.transcript_id.startswith("ENST"))
self.assertEqual(c[0], r.contig)
self.assertEqual("\t".join(map(str, c)),
str(r))
def testSetting(self):
for r in self.tabix.fetch(parser=pysam.asGTF()):
r.contig = r.contig + "_test"
r.source = r.source + "_test"
r.feature = r.feature + "_test"
r.start += 10
r.end += 10
r.score = 20
r.strand = "+"
r.frame = 0
r.attributes = 'gene_id "0001";'
class TestIterators(unittest.TestCase):
filename = os.path.join(DATADIR, "example.gtf.gz")
iterator = pysam.tabix_generic_iterator
parser = pysam.asTuple
is_compressed = False
def setUp(self):
self.tabix = pysam.TabixFile(self.filename)
self.compare = loadAndConvert(self.filename)
self.tmpfilename_uncompressed = 'tmp_TestIterators'
with gzip.open(self.filename, "rb") as infile, \
open(self.tmpfilename_uncompressed, "wb") as outfile:
outfile.write(infile.read())
def tearDown(self):
self.tabix.close()
os.unlink(self.tmpfilename_uncompressed)
def open(self):
if self.is_compressed:
infile = gzip.open(self.filename)
else:
infile = open(self.tmpfilename_uncompressed)
return infile
def testIteration(self):
with self.open() as infile:
for x, r in enumerate(self.iterator(infile, self.parser())):
self.assertEqual(self.compare[x], list(r))
self.assertEqual(len(self.compare[x]), len(r))
# test indexing
for c in range(0, len(r)):
self.assertEqual(self.compare[x][c], r[c])
# test slicing access
for c in range(0, len(r) - 1):
for cc in range(c + 1, len(r)):
self.assertEqual(self.compare[x][c:cc],
r[c:cc])
def testClosedFile(self):
'''test for error when iterating from closed file.'''
infile = self.open()
infile.close()
# iterating from a closed file should raise a value error
self.assertRaises(ValueError, self.iterator, infile, self.parser())
def testClosedFileIteration(self):
'''test for error when iterating from file that has been closed'''
infile = self.open()
i = self.iterator(infile, self.parser())
x = i.next()
infile.close()
# Not implemented
# self.assertRaises(ValueError, i.next)
class TestIteratorsGenericCompressed(TestIterators):
is_compressed = True
class TestIteratorsFileCompressed(TestIterators):
iterator = pysam.tabix_file_iterator
is_compressed = True
class TestIteratorsFileUncompressed(TestIterators):
iterator = pysam.tabix_file_iterator
is_compressed = False
class TestIterationMalformattedGTFFiles(unittest.TestCase):
'''test reading from malformatted gtf files.'''
parser = pysam.asGTF
iterator = pysam.tabix_generic_iterator
parser = pysam.asGTF
def testGTFTooManyFields(self):
with gzip.open(os.path.join(
DATADIR,
"gtf_toomany_fields.gtf.gz")) as infile:
iterator = self.iterator(
infile,
parser=self.parser())
self.assertRaises(ValueError, iterator.next)
def testGTFTooFewFields(self):
with gzip.open(os.path.join(
DATADIR,
"gtf_toofew_fields.gtf.gz")) as infile:
iterator = self.iterator(
infile,
parser=self.parser())
self.assertRaises(ValueError, iterator.next)
class TestBed(unittest.TestCase):
filename = os.path.join(DATADIR, "example.bed.gz")
def setUp(self):
self.tabix = pysam.TabixFile(self.filename)
self.compare = loadAndConvert(self.filename)
def tearDown(self):
self.tabix.close()
def testRead(self):
for x, r in enumerate(self.tabix.fetch(parser=pysam.asBed())):
c = self.compare[x]
self.assertEqual(len(c), len(r))
self.assertEqual(c, str(r).split("\t"))
self.assertEqual(c[0], r.contig)
self.assertEqual(int(c[1]), r.start)
self.assertEqual(int(c[2]), r.end)
self.assertEqual(list(c), list(r))
self.assertEqual("\t".join(map(str, c)),
str(r))
def testWrite(self):
for x, r in enumerate(self.tabix.fetch(parser=pysam.asBed())):
c = self.compare[x]
self.assertEqual(c, str(r).split("\t"))
self.assertEqual(list(c), list(r))
r.contig = "test"
self.assertEqual("test", r.contig)
self.assertEqual("test", r[0])
r.start += 1
self.assertEqual(int(c[1]) + 1, r.start)
self.assertEqual(str(int(c[1]) + 1), r[1])
r.end += 1
self.assertEqual(int(c[2]) + 1, r.end)
self.assertEqual(str(int(c[2]) + 1), r[2])
class TestVCF(unittest.TestCase):
filename = os.path.join(DATADIR, "example.vcf40")
def setUp(self):
self.tmpfilename = "tmp_%s.vcf" % id(self)
shutil.copyfile(self.filename, self.tmpfilename)
pysam.tabix_index(self.tmpfilename, preset="vcf")
def tearDown(self):
os.unlink(self.tmpfilename + ".gz")
if os.path.exists(self.tmpfilename + ".gz.tbi"):
os.unlink(self.tmpfilename + ".gz.tbi")
if IS_PYTHON3:
class TestUnicode(unittest.TestCase):
'''test reading from a file with non-ascii characters.'''
filename = os.path.join(DATADIR, "example_unicode.vcf")
def setUp(self):
self.tmpfilename = "tmp_%s.vcf" % id(self)
shutil.copyfile(self.filename, self.tmpfilename)
pysam.tabix_index(self.tmpfilename, preset="vcf")
def testFromTabix(self):
# use ascii encoding - should raise error
with pysam.TabixFile(
self.tmpfilename + ".gz", encoding="ascii") as t:
results = list(t.fetch(parser=pysam.asVCF()))
self.assertRaises(UnicodeDecodeError, getattr, results[1], "id")
with pysam.TabixFile(
self.tmpfilename + ".gz", encoding="utf-8") as t:
results = list(t.fetch(parser=pysam.asVCF()))
self.assertEqual(getattr(results[1], "id"), u"Rene\xe9")
def testFromVCF(self):
self.vcf = pysam.VCF()
self.assertRaises(
UnicodeDecodeError,
self.vcf.connect, self.tmpfilename + ".gz", "ascii")
self.vcf.connect(self.tmpfilename + ".gz", encoding="utf-8")
v = self.vcf.getsamples()[0]
class TestVCFFromTabix(TestVCF):
columns = ("contig", "pos", "id",
"ref", "alt", "qual",
"filter", "info", "format")
def setUp(self):
TestVCF.setUp(self)
self.tabix = pysam.TabixFile(self.tmpfilename + ".gz")
self.compare = loadAndConvert(self.filename)
def tearDown(self):
self.tabix.close()
def testRead(self):
ncolumns = len(self.columns)
for x, r in enumerate(self.tabix.fetch(parser=pysam.asVCF())):
c = self.compare[x]
for y, field in enumerate(self.columns):
# it is ok to have a missing format column
if y == 8 and y == len(c):
continue
if field == "pos":
self.assertEqual(int(c[y]) - 1, getattr(r, field))
self.assertEqual(int(c[y]) - 1, r.pos)
else:
self.assertEqual(c[y], getattr(r, field),
"mismatch in field %s: %s != %s" %
(field, c[y], getattr(r, field)))
if len(c) == 8:
self.assertEqual(0, len(r))
else:
self.assertEqual(len(c), len(r) + ncolumns)
for y in range(len(c) - ncolumns):
self.assertEqual(c[ncolumns + y], r[y])
self.assertEqual("\t".join(map(str, c)),
str(r))
def testWrite(self):
ncolumns = len(self.columns)
for x, r in enumerate(self.tabix.fetch(parser=pysam.asVCF())):
c = self.compare[x]
# check unmodified string
cmp_string = str(r)
ref_string = "\t".join([x for x in c])
self.assertEqual(ref_string, cmp_string)
# set fields and compare field-wise
for y, field in enumerate(self.columns):
# it is ok to have a missing format column
if y == 8 and y == len(c):
continue
if field == "pos":
rpos = getattr(r, field)
self.assertEqual(int(c[y]) - 1, rpos)
self.assertEqual(int(c[y]) - 1, r.pos)
# increment pos by 1
setattr(r, field, rpos + 1)
self.assertEqual(getattr(r, field), rpos + 1)
c[y] = str(int(c[y]) + 1)
else:
setattr(r, field, "test_%i" % y)
c[y] = "test_%i" % y
self.assertEqual(c[y], getattr(r, field),
"mismatch in field %s: %s != %s" %
(field, c[y], getattr(r, field)))
if len(c) == 8:
self.assertEqual(0, len(r))
else:
self.assertEqual(len(c), len(r) + ncolumns)
for y in range(len(c) - ncolumns):
c[ncolumns + y] = "test_%i" % y
r[y] = "test_%i" % y
self.assertEqual(c[ncolumns + y], r[y])
class TestVCFFromVCF(TestVCF):
columns = ("chrom", "pos", "id",
"ref", "alt", "qual",
"filter", "info", "format")
# tests failing while parsing
fail_on_parsing = (
(5, "Flag fields should not have a value"),
(9, "aouao"),
(13, "aoeu"),
(18, "Error BAD_NUMBER_OF_PARAMETERS"),
(24, "Error HEADING_NOT_SEPARATED_BY_TABS"))
# tests failing on opening
fail_on_opening = ((24, "Error HEADING_NOT_SEPARATED_BY_TABS"),
)
fail_on_samples = []
check_samples = False
coordinate_offset = 1
# value returned for missing values
missing_value = "."
missing_quality = -1
def setUp(self):
TestVCF.setUp(self)
self.vcf = pysam.VCF()
self.compare = loadAndConvert(self.filename, encode=False)
def tearDown(self):
self.vcf.close()
def testConnecting(self):
fn = os.path.basename(self.filename)
for x, msg in self.fail_on_opening:
if "%i.vcf" % x == fn:
self.assertRaises(ValueError,
self.vcf.connect,
self.tmpfilename + ".gz")
else:
self.vcf.connect(self.tmpfilename + ".gz")
def get_iterator(self):
with open(self.filename) as f:
fn = os.path.basename(self.filename)
for x, msg in self.fail_on_opening:
if "%i.vcf" % x == fn:
self.assertRaises(ValueError, self.vcf.parse, f)
return
for vcf_code, msg in self.fail_on_parsing:
if "%i.vcf" % vcf_code == fn:
self.assertRaises((ValueError,
AssertionError),
list, self.vcf.parse(f))
return
# python 2.7
# self.assertRaisesRegexp(
# ValueError, re.compile(msg), self.vcf.parse, f)
return list(self.vcf.parse(f))
def get_field_value(self, record, field):
return record[field]
def sample2value(self, r, v):
return r, v
def alt2value(self, r, v):
if r == ".":
return [], v
else:
return r.split(","), list(v)
def filter2value(self, r, v):
if r == "PASS":
return [], v
elif r == ".":
return [], v
else:
return r.split(";"), v
def testParsing(self):
itr = self.get_iterator()
if itr is None:
return
fn = os.path.basename(self.filename)
for vcf_code, msg in self.fail_on_parsing:
if "%i.vcf" % vcf_code == fn:
self.assertRaises((ValueError,
AssertionError),
list, itr)
return
# python 2.7
# self.assertRaisesRegexp(
# ValueError, re.compile(msg), self.vcf.parse, f)
check_samples = self.check_samples
for vcf_code, msg in self.fail_on_samples:
if "%i.vcf" % vcf_code == fn:
check_samples = False
for x, r in enumerate(itr):
c = self.compare[x]
for y, field in enumerate(self.columns):
# it is ok to have a missing format column
if y == 8 and y == len(c):
continue
val = self.get_field_value(r, field)
if field == "pos":
self.assertEqual(int(c[y]) - self.coordinate_offset,
val)
elif field == "alt" or field == "alts":
cc, vv = self.alt2value(c[y], val)
if cc != vv:
# import pdb; pdb.set_trace()
pass
self.assertEqual(
cc, vv,
"mismatch in field %s: expected %s, got %s" %
(field, cc, vv))
elif field == "filter":
cc, vv = self.filter2value(c[y], val)
self.assertEqual(
cc, vv,
"mismatch in field %s: expected %s, got %s" %
(field, cc, vv))
elif field == "info":
# tests for info field not implemented
pass
elif field == "qual" and c[y] == ".":
self.assertEqual(
self.missing_quality, val,
"mismatch in field %s: expected %s, got %s" %
(field, c[y], val))
elif field == "format":
# format field converted to list
self.assertEqual(
c[y].split(":"), list(val),
"mismatch in field %s: expected %s, got %s" %
(field, c[y], val))
elif type(val) in (int, float):
if c[y] == ".":
self.assertEqual(
None, val,
"mismatch in field %s: expected %s, got %s" %
(field, c[y], val))
else:
self.assertAlmostEqual(
float(c[y]), float(val), 2,
"mismatch in field %s: expected %s, got %s" %
(field, c[y], val))
else:
if c[y] == ".":
ref_val = self.missing_value
else:
ref_val = c[y]
self.assertEqual(
ref_val, val,
"mismatch in field %s: expected %s(%s), got %s(%s)" %
(field, ref_val, type(ref_val), val, type(val)))
# parse samples
if check_samples:
if len(c) == 8:
for x, s in enumerate(r.samples):
self.assertEqual(
[], r.samples[s].values(),
"mismatch in sample {}: "
"expected [], got {}, src={}, line={}".format(
s, r.samples[s].values(),
r.samples[s].items(), r))
else:
for x, s in enumerate(r.samples):
ref, comp = self.sample2value(
c[9 + x],
r.samples[s])
self.compare_samples(ref, comp, s, r)
def compare_samples(self, ref, comp, s, r):
if ref != comp:
# check if GT not at start, not VCF conform and
# not supported by cbcf.pyx
k = r.format.keys()
if "GT" in k and k[0] != "GT":
return
            # perform an element-wise check to work around rounding differences
for a, b in zip(re.split("[:,;]", ref),
re.split("[:,;]", comp)):
is_float = True
try:
a = float(a)
b = float(b)
except ValueError:
is_float = False
if is_float:
self.assertAlmostEqual(
a, b, 2,
"mismatch in sample {}: "
"expected {}, got {}, src={}, line={}"
.format(
s, ref, comp,
r.samples[s].items(), r))
else:
self.assertEqual(
a, b,
"mismatch in sample {}: "
"expected {}, got {}, src={}, line={}"
.format(
s, ref, comp,
r.samples[s].items(), r))
############################################################################
# create test classes for each example VCF file.
# Two classes are created per file -
# 1. Testing pysam/tabix access
# 2. Testing the VCF class
vcf_files = glob.glob(os.path.join(DATADIR, "vcf", "*.vcf"))
for vcf_file in vcf_files:
n = "VCFFromTabixTest_%s" % os.path.basename(vcf_file[:-4])
globals()[n] = type(n, (TestVCFFromTabix,), dict(filename=vcf_file,))
n = "VCFFromVCFTest_%s" % os.path.basename(vcf_file[:-4])
globals()[n] = type(n, (TestVCFFromVCF,), dict(filename=vcf_file,))
class TestVCFFromVariantFile(TestVCFFromVCF):
columns = ("chrom", "pos", "id",
"ref", "alts", "qual",
"filter", "info", "format")
fail_on_parsing = []
fail_on_opening = []
coordinate_offset = 0
check_samples = True
fail_on_samples = [
(9, "PL field not defined. Expected to be scalar, but is array"),
(12, "PL field not defined. Expected to be scalar, but is array"),
(18, "PL field not defined. Expected to be scalar, but is array"),
]
# value returned for missing values
missing_value = None
missing_quality = None
vcf = None
def filter2value(self, r, v):
if r == "PASS":
return ["PASS"], list(v)
elif r == ".":
return [], list(v)
else:
return r.split(";"), list(v)
def alt2value(self, r, v):
if r == ".":
return None, v
else:
return r.split(","), list(v)
def sample2value(self, r, smp):
def convert_field(f):
if f is None:
return "."
elif isinstance(f, tuple):
return ",".join(map(convert_field, f))
else:
return str(f)
v = smp.values()
if 'GT' in smp:
alleles = [str(a) if a is not None else '.' for a in smp.allele_indices]
v[0] = '/|'[smp.phased].join(alleles)
comp = ":".join(map(convert_field, v))
if comp.endswith(":."):
comp = comp[:-2]
return r, comp
def setUp(self):
TestVCF.setUp(self)
self.compare = loadAndConvert(self.filename, encode=False)
def tearDown(self):
if self.vcf:
self.vcf.close()
self.vcf = None
def get_iterator(self):
self.vcf = pysam.VariantFile(self.filename)
return self.vcf.fetch()
def get_field_value(self, record, field):
return getattr(record, field)
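# Minimal sketch (not part of the test suite) of the VariantFile access these
# tests compare against, assuming one of the example VCF files:
#
#     vf = pysam.VariantFile(vcf_files[0])
#     for rec in vf:
#         print(rec.chrom, rec.pos, rec.ref, rec.alts)
#     vf.close()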
for vcf_file in vcf_files:
n = "TestVCFFromVariantFile_%s" % os.path.basename(vcf_file[:-4])
globals()[n] = type(n, (TestVCFFromVariantFile,), dict(filename=vcf_file,))
class TestRemoteFileHTTP(unittest.TestCase):
url = "http://genserv.anat.ox.ac.uk/downloads/pysam/test/example_htslib.gtf.gz"
region = "chr1:1-1000"
local = os.path.join(DATADIR, "example.gtf.gz")
def setUp(self):
if not checkURL(self.url):
self.remote_file = None
return
self.remote_file = pysam.TabixFile(self.url, "r")
self.local_file = pysam.TabixFile(self.local, "r")
def tearDown(self):
if self.remote_file is None:
return
self.remote_file.close()
self.local_file.close()
def testFetchAll(self):
if self.remote_file is None:
return
remote_result = list(self.remote_file.fetch())
local_result = list(self.local_file.fetch())
self.assertEqual(len(remote_result), len(local_result))
for x, y in zip(remote_result, local_result):
self.assertEqual(x, y)
def testHeader(self):
if self.remote_file is None:
return
self.assertEqual(list(self.local_file.header), [])
self.assertRaises(AttributeError,
getattr,
self.remote_file,
"header")
class TestIndexArgument(unittest.TestCase):
filename_src = os.path.join(DATADIR, "example.vcf.gz")
filename_dst = "tmp_example.vcf.gz"
index_src = os.path.join(DATADIR, "example.vcf.gz.tbi")
index_dst = "tmp_index_example.vcf.gz.tbi"
preset = "vcf"
def testFetchAll(self):
shutil.copyfile(self.filename_src, self.filename_dst)
shutil.copyfile(self.index_src, self.index_dst)
with pysam.TabixFile(
self.filename_src, "r", index=self.index_src) as same_basename_file:
same_basename_results = list(same_basename_file.fetch())
with pysam.TabixFile(
self.filename_dst, "r", index=self.index_dst) as diff_index_file:
diff_index_result = list(diff_index_file.fetch())
self.assertEqual(len(same_basename_results), len(diff_index_result))
for x, y in zip(same_basename_results, diff_index_result):
self.assertEqual(x, y)
os.unlink(self.filename_dst)
os.unlink(self.index_dst)
def _TestMultipleIteratorsHelper(filename, multiple_iterators):
'''open file within scope, return iterator.'''
tabix = pysam.TabixFile(filename)
iterator = tabix.fetch(parser=pysam.asGTF(),
multiple_iterators=multiple_iterators)
tabix.close()
return iterator
class TestBackwardsCompatibility(unittest.TestCase):
"""check if error is raised if a tabix file from an
old version is accessed from pysam"""
def check(self, filename, raises=None):
with pysam.TabixFile(filename) as tf:
ref = loadAndConvert(filename)
if raises is None:
self.assertEqual(len(list(tf.fetch())), len(ref))
else:
self.assertRaises(raises, tf.fetch)
def testVCF0v23(self):
self.check(os.path.join(DATADIR, "example_0v23.vcf.gz"),
ValueError)
def testBED0v23(self):
self.check(os.path.join(DATADIR, "example_0v23.bed.gz"),
ValueError)
def testVCF0v26(self):
self.check(os.path.join(DATADIR, "example_0v26.vcf.gz"),
ValueError)
def testBED0v26(self):
self.check(os.path.join(DATADIR, "example_0v26.bed.gz"),
ValueError)
def testVCF(self):
self.check(os.path.join(DATADIR, "example.vcf.gz"))
def testBED(self):
self.check(os.path.join(DATADIR, "example.bed.gz"))
def testEmpty(self):
self.check(os.path.join(DATADIR, "empty.bed.gz"))
class TestMultipleIterators(unittest.TestCase):
filename = os.path.join(DATADIR, "example.gtf.gz")
def testJoinedIterators(self):
# two iterators working on the same file
with pysam.TabixFile(self.filename) as tabix:
a = tabix.fetch(parser=pysam.asGTF()).next()
b = tabix.fetch(parser=pysam.asGTF()).next()
# the first two lines differ only by the feature field
self.assertEqual(a.feature, "UTR")
self.assertEqual(b.feature, "exon")
self.assertEqual(re.sub("UTR", "", str(a)),
re.sub("exon", "", str(b)))
def testDisjointIterators(self):
# two iterators working on the same file
with pysam.TabixFile(self.filename) as tabix:
a = tabix.fetch(parser=pysam.asGTF(), multiple_iterators=True).next()
b = tabix.fetch(parser=pysam.asGTF(), multiple_iterators=True).next()
# both iterators are at top of file
self.assertEqual(str(a), str(b))
def testScope(self):
# technically it does not really test if the scope is correct
i = _TestMultipleIteratorsHelper(self.filename,
multiple_iterators=True)
self.assertTrue(i.next())
i = _TestMultipleIteratorsHelper(self.filename,
multiple_iterators=False)
self.assertRaises(IOError, i.next)
def testDoubleFetch(self):
with pysam.TabixFile(self.filename) as f:
for a, b in zip(f.fetch(multiple_iterators=True),
f.fetch(multiple_iterators=True)):
self.assertEqual(str(a), str(b))
class TestContextManager(unittest.TestCase):
filename = os.path.join(DATADIR, "example.gtf.gz")
def testManager(self):
with pysam.TabixFile(self.filename) as tabixfile:
tabixfile.fetch()
self.assertEqual(tabixfile.closed, True)
if __name__ == "__main__":
unittest.main()
| mit |
harvardinformatics/jobTree | src/jobTreeStats.py | 3 | 32075 | #!/usr/bin/env python
# Copyright (C) 2011 by Benedict Paten ([email protected])
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
""" Reports the state of your given job tree.
"""
import cPickle
import os
from random import choice
import string
import sys
import time
import xml.etree.ElementTree as ET # not cElementTree so as to allow caching
from xml.dom import minidom # For making stuff pretty
from sonLib.bioio import logger
from sonLib.bioio import logFile
from sonLib.bioio import getBasicOptionParser
from sonLib.bioio import parseBasicOptions
from sonLib.bioio import TempFileTree
from jobTree.src.master import getEnvironmentFileName, getJobFileDirName
from jobTree.src.master import getStatsFileName, getConfigFileName
from jobTree.src.master import getStatsCacheFileName
class JTTag(object):
""" Convenience object that stores xml attributes as object attributes.
"""
def __init__(self, tree):
""" Given an ElementTree tag, build a convenience object.
"""
for name in ["total_time", "median_clock", "total_memory",
"median_wait", "total_number", "average_time",
"median_memory", "min_number_per_slave", "average_wait",
"total_clock", "median_time", "min_time", "min_wait",
"max_clock", "max_wait", "total_wait", "min_clock",
"average_memory", "max_number_per_slave", "max_memory",
"average_memory", "max_number_per_slave", "max_memory",
"median_number_per_slave", "average_number_per_slave",
"max_time", "average_clock", "min_memory", "min_clock",
]:
setattr(self, name, self.__get(tree, name))
self.name = tree.tag
def __get(self, tag, name):
if name in tag.attrib:
value = tag.attrib[name]
else:
return float("nan")
try:
a = float(value)
except ValueError:
a = float("nan")
return a
class ColumnWidths(object):
""" Convenience object that stores the width of columns for printing.
Helps make things pretty.
"""
def __init__(self):
self.categories = ["time", "clock", "wait", "memory"]
self.fields_count = ["count", "min", "med", "ave", "max", "total"]
self.fields = ["min", "med", "ave", "max", "total"]
self.data = {}
for category in self.categories:
for field in self.fields_count:
self.setWidth(category, field, 8)
def title(self, category):
""" Return the total printed length of this category item.
"""
return sum(
map(lambda x: self.getWidth(category, x), self.fields))
def getWidth(self, category, field):
category = category.lower()
return self.data["%s_%s" % (category, field)]
def setWidth(self, category, field, width):
category = category.lower()
self.data["%s_%s" % (category, field)] = width
def report(self):
for c in self.categories:
for f in self.fields:
print '%s %s %d' % (c, f, self.getWidth(c, f))
def initializeOptions(parser):
##########################################
# Construct the arguments.
##########################################
parser.add_option("--jobTree", dest="jobTree", default='./jobTree',
help="Directory containing the job tree. Can also be specified as the single argument to the script. Default=%default")
parser.add_option("--outputFile", dest="outputFile", default=None,
help="File in which to write results")
parser.add_option("--raw", action="store_true", default=False,
help="output the raw xml data.")
parser.add_option("--pretty", "--human", action="store_true", default=False,
help=("if not raw, prettify the numbers to be "
"human readable."))
parser.add_option("--categories",
help=("comma separated list from [time, clock, wait, "
"memory]"))
parser.add_option("--sortCategory", default="time",
help=("how to sort Target list. may be from [alpha, "
"time, clock, wait, memory, count]. "
"default=%(default)s"))
parser.add_option("--sortField", default="med",
help=("how to sort Target list. may be from [min, "
"med, ave, max, total]. "
"default=%(default)s"))
parser.add_option("--sortReverse", "--reverseSort", default=False,
action="store_true",
help="reverse sort order.")
parser.add_option("--cache", default=False, action="store_true",
help="stores a cache to speed up data display.")
def checkOptions(options, args, parser):
""" Check options, throw parser.error() if something goes wrong
"""
logger.info("Parsed arguments")
if len(sys.argv) == 1:
parser.print_help()
sys.exit(0)
assert len(args) <= 1 # Only jobtree may be specified as argument
if len(args) == 1: # Allow jobTree directory as arg
options.jobTree = args[0]
logger.info("Checking if we have files for job tree")
if options.jobTree == None:
parser.error("Specify --jobTree")
if not os.path.exists(options.jobTree):
parser.error("--jobTree %s does not exist"
% options.jobTree)
if not os.path.isdir(options.jobTree):
parser.error("--jobTree %s is not a directory"
% options.jobTree)
if not os.path.isfile(getConfigFileName(options.jobTree)):
parser.error("A valid job tree must contain the config file")
if not os.path.isfile(getStatsFileName(options.jobTree)):
parser.error("The job-tree was run without the --stats flag, "
"so no stats were created")
defaultCategories = ["time", "clock", "wait", "memory"]
if options.categories is None:
options.categories = defaultCategories
else:
options.categories = map(lambda x: x.lower(),
options.categories.split(","))
for c in options.categories:
if c not in defaultCategories:
parser.error("Unknown category %s. Must be from %s"
% (c, str(defaultCategories)))
extraSort = ["count", "alpha"]
if options.sortCategory is not None:
if (options.sortCategory not in defaultCategories and
options.sortCategory not in extraSort):
parser.error("Unknown --sortCategory %s. Must be from %s"
% (options.sortCategory,
str(defaultCategories + extraSort)))
sortFields = ["min", "med", "ave", "max", "total"]
if options.sortField is not None:
if (options.sortField not in sortFields):
parser.error("Unknown --sortField %s. Must be from %s"
% (options.sortField, str(sortFields)))
logger.info("Checked arguments")
def prettyXml(elem):
""" Return a pretty-printed XML string for the ElementTree Element.
"""
roughString = ET.tostring(elem, "utf-8")
reparsed = minidom.parseString(roughString)
return reparsed.toprettyxml(indent=" ")
def padStr(s, field=None):
""" Pad the begining of a string with spaces, if necessary.
"""
if field is None:
return s
else:
if len(s) >= field:
return s
else:
return " " * (field - len(s)) + s
def prettyMemory(k, field=None, isBytes=False):
""" Given input k as kilobytes, return a nicely formatted string.
"""
from math import floor
if isBytes:
k /= 1024
if k < 1024:
return padStr("%gK" % k, field)
if k < (1024 * 1024):
return padStr("%.1fM" % (k / 1024.0), field)
if k < (1024 * 1024 * 1024):
return padStr("%.1fG" % (k / 1024.0 / 1024.0), field)
if k < (1024 * 1024 * 1024 * 1024):
return padStr("%.1fT" % (k / 1024.0 / 1024.0 / 1024.0), field)
    if k < (1024 * 1024 * 1024 * 1024 * 1024):
        return padStr("%.1fP" % (k / 1024.0 / 1024.0 / 1024.0 / 1024.0), field)
    # fall through to exabytes for anything larger so the function never returns None
    return padStr("%.1fE" % (k / 1024.0 ** 5), field)
def prettyTime(t, field=None):
""" Given input t as seconds, return a nicely formatted string.
"""
from math import floor
pluralDict = {True: "s", False: ""}
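    # format by magnitude: < 2 minutes -> seconds, < 2 hours -> m/s,
    # < 25 hours -> h/m/s, < 1 week -> d/h/m/s, otherwise weeks/d/h/m/s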
if t < 120:
return padStr("%ds" % t, field)
if t < 120 * 60:
m = floor(t / 60.)
s = t % 60
return padStr("%dm%ds" % (m, s), field)
if t < 25 * 60 * 60:
h = floor(t / 60. / 60.)
m = floor((t - (h * 60. * 60.)) / 60.)
s = t % 60
return padStr("%dh%gm%ds" % (h, m, s), field)
if t < 7 * 24 * 60 * 60:
d = floor(t / 24. / 60. / 60.)
h = floor((t - (d * 24. * 60. * 60.)) / 60. / 60.)
m = floor((t
- (d * 24. * 60. * 60.)
- (h * 60. * 60.))
/ 60.)
s = t % 60
dPlural = pluralDict[d > 1]
return padStr("%dday%s%dh%dm%ds" % (d, dPlural, h, m, s), field)
w = floor(t / 7. / 24. / 60. / 60.)
d = floor((t - (w * 7 * 24 * 60 * 60)) / 24. / 60. / 60.)
h = floor((t
- (w * 7. * 24. * 60. * 60.)
- (d * 24. * 60. * 60.))
/ 60. / 60.)
m = floor((t
- (w * 7. * 24. * 60. * 60.)
- (d * 24. * 60. * 60.)
- (h * 60. * 60.))
/ 60.)
s = t % 60
wPlural = pluralDict[w > 1]
dPlural = pluralDict[d > 1]
return padStr("%dweek%s%dday%s%dh%dm%ds" % (w, wPlural, d,
dPlural, h, m, s), field)
def reportTime(t, options, field=None):
""" Given t seconds, report back the correct format as string.
"""
if options.pretty:
return prettyTime(t, field=field)
else:
if field is not None:
return "%*.2f" % (field, t)
else:
return "%.2f" % t
def reportMemory(k, options, field=None, isBytes=False):
""" Given k kilobytes, report back the correct format as string.
"""
if options.pretty:
return prettyMemory(int(k), field=field, isBytes=isBytes)
else:
if isBytes:
k /= 1024.
if field is not None:
return "%*dK" % (field - 1, k) # -1 for the "K"
else:
return "%dK" % int(k)
def reportNumber(n, options, field=None):
""" Given n an integer, report back the correct format as string.
"""
if field is not None:
return "%*g" % (field, n)
else:
return "%g" % n
def refineData(root, options):
""" walk down from the root and gather up the important bits.
"""
slave = JTTag(root.find("slave"))
target = JTTag(root.find("target"))
targetTypesTree = root.find("target_types")
targetTypes = []
for child in targetTypesTree:
targetTypes.append(JTTag(child))
return root, slave, target, targetTypes
def sprintTag(key, tag, options, columnWidths=None):
""" Generate a pretty-print ready string from a JTTag().
"""
if columnWidths is None:
columnWidths = ColumnWidths()
header = " %7s " % decorateTitle("Count", options)
sub_header = " %7s " % "n"
tag_str = " %s" % reportNumber(tag.total_number, options, field=7)
out_str = ""
if key == "target":
out_str += " %-12s | %7s%7s%7s%7s\n" % ("Slave Jobs", "min",
"med", "ave", "max")
slave_str = "%s| " % (" " * 14)
for t in [tag.min_number_per_slave, tag.median_number_per_slave,
tag.average_number_per_slave, tag.max_number_per_slave]:
slave_str += reportNumber(t, options, field=7)
out_str += slave_str + "\n"
if "time" in options.categories:
header += "| %*s " % (columnWidths.title("time"),
decorateTitle("Time", options))
sub_header += decorateSubHeader("Time", columnWidths, options)
tag_str += " | "
for t, width in [
(tag.min_time, columnWidths.getWidth("time", "min")),
(tag.median_time, columnWidths.getWidth("time", "med")),
(tag.average_time, columnWidths.getWidth("time", "ave")),
(tag.max_time, columnWidths.getWidth("time", "max")),
(tag.total_time, columnWidths.getWidth("time", "total")),
]:
tag_str += reportTime(t, options, field=width)
if "clock" in options.categories:
header += "| %*s " % (columnWidths.title("clock"),
decorateTitle("Clock", options))
sub_header += decorateSubHeader("Clock", columnWidths, options)
tag_str += " | "
for t, width in [
(tag.min_clock, columnWidths.getWidth("clock", "min")),
(tag.median_clock, columnWidths.getWidth("clock", "med")),
(tag.average_clock, columnWidths.getWidth("clock", "ave")),
(tag.max_clock, columnWidths.getWidth("clock", "max")),
(tag.total_clock, columnWidths.getWidth("clock", "total")),
]:
tag_str += reportTime(t, options, field=width)
if "wait" in options.categories:
header += "| %*s " % (columnWidths.title("wait"),
decorateTitle("Wait", options))
sub_header += decorateSubHeader("Wait", columnWidths, options)
tag_str += " | "
for t, width in [
(tag.min_wait, columnWidths.getWidth("wait", "min")),
(tag.median_wait, columnWidths.getWidth("wait", "med")),
(tag.average_wait, columnWidths.getWidth("wait", "ave")),
(tag.max_wait, columnWidths.getWidth("wait", "max")),
(tag.total_wait, columnWidths.getWidth("wait", "total")),
]:
tag_str += reportTime(t, options, field=width)
if "memory" in options.categories:
header += "| %*s " % (columnWidths.title("memory"),
decorateTitle("Memory", options))
sub_header += decorateSubHeader("Memory", columnWidths, options)
tag_str += " | "
for t, width in [
(tag.min_memory, columnWidths.getWidth("memory", "min")),
(tag.median_memory, columnWidths.getWidth("memory", "med")),
(tag.average_memory, columnWidths.getWidth("memory", "ave")),
(tag.max_memory, columnWidths.getWidth("memory", "max")),
(tag.total_memory, columnWidths.getWidth("memory", "total")),
]:
tag_str += reportMemory(t, options, field=width)
out_str += header + "\n"
out_str += sub_header + "\n"
out_str += tag_str + "\n"
return out_str
def decorateTitle(title, options):
""" Add a marker to TITLE if the TITLE is sorted on.
"""
if title.lower() == options.sortCategory:
return "%s*" % title
else:
return title
def decorateSubHeader(title, columnWidths, options):
""" Add a marker to the correct field if the TITLE is sorted on.
"""
title = title.lower()
if title != options.sortCategory:
s = "| %*s%*s%*s%*s%*s " % (
columnWidths.getWidth(title, "min"), "min",
columnWidths.getWidth(title, "med"), "med",
columnWidths.getWidth(title, "ave"), "ave",
columnWidths.getWidth(title, "max"), "max",
columnWidths.getWidth(title, "total"), "total")
return s
else:
s = "| "
for field, width in [("min", columnWidths.getWidth(title, "min")),
("med", columnWidths.getWidth(title, "med")),
("ave", columnWidths.getWidth(title, "ave")),
("max", columnWidths.getWidth(title, "max")),
("total", columnWidths.getWidth(title, "total"))]:
if options.sortField == field:
s += "%*s*" % (width - 1, field)
else:
s += "%*s" % (width, field)
s += " "
return s
def get(tree, name):
""" Return a float value attribute NAME from TREE.
"""
if name in tree.attrib:
value = tree.attrib[name]
else:
return float("nan")
try:
a = float(value)
except ValueError:
a = float("nan")
return a
def sortTargets(targetTypes, options):
""" Return a targetTypes all sorted.
"""
longforms = {"med": "median",
"ave": "average",
"min": "min",
"total": "total",
"max": "max",}
sortField = longforms[options.sortField]
if (options.sortCategory == "time" or
options.sortCategory == "clock" or
options.sortCategory == "wait" or
options.sortCategory == "memory"
):
return sorted(
targetTypes,
key=lambda tag: getattr(tag, "%s_%s"
% (sortField, options.sortCategory)),
reverse=options.sortReverse)
elif options.sortCategory == "alpha":
return sorted(
targetTypes, key=lambda tag: tag.name,
reverse=options.sortReverse)
elif options.sortCategory == "count":
return sorted(targetTypes, key=lambda tag: tag.total_number,
reverse=options.sortReverse)
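    # options.sortCategory is validated in checkOptions(), so one of the
    # branches above always returns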
def reportPrettyData(root, slave, target, target_types, options):
""" print the important bits out.
"""
out_str = "Batch System: %s\n" % root.attrib["batch_system"]
out_str += ("Default CPU: %s Default Memory: %s\n"
"Job Time: %s Max CPUs: %s Max Threads: %s\n" % (
reportNumber(get(root, "default_cpu"), options),
reportMemory(get(root, "default_memory"), options, isBytes=True),
reportTime(get(root, "job_time"), options),
reportNumber(get(root, "max_cpus"), options),
reportNumber(get(root, "max_threads"), options),
))
out_str += ("Total Clock: %s Total Runtime: %s\n" % (
reportTime(get(root, "total_clock"), options),
reportTime(get(root, "total_run_time"), options),
))
target_types = sortTargets(target_types, options)
columnWidths = computeColumnWidths(target_types, slave, target, options)
out_str += "Slave\n"
out_str += sprintTag("slave", slave, options, columnWidths=columnWidths)
out_str += "Target\n"
out_str += sprintTag("target", target, options, columnWidths=columnWidths)
for t in target_types:
out_str += " %s\n" % t.name
out_str += sprintTag(t.name, t, options, columnWidths=columnWidths)
return out_str
def computeColumnWidths(target_types, slave, target, options):
""" Return a ColumnWidths() object with the correct max widths.
"""
cw = ColumnWidths()
for t in target_types:
updateColumnWidths(t, cw, options)
updateColumnWidths(slave, cw, options)
updateColumnWidths(target, cw, options)
return cw
def updateColumnWidths(tag, cw, options):
""" Update the column width attributes for this tag's fields.
"""
longforms = {"med": "median",
"ave": "average",
"min": "min",
"total": "total",
"max": "max",}
for category in ["time", "clock", "wait", "memory"]:
if category in options.categories:
for field in ["min", "med", "ave", "max", "total"]:
t = getattr(tag, "%s_%s" % (longforms[field], category))
if category in ["time", "clock", "wait"]:
s = reportTime(t, options,
field=cw.getWidth(category, field)).strip()
else:
s = reportMemory(t, options,
field=cw.getWidth(category, field)).strip()
if len(s) >= cw.getWidth(category, field):
# this string is larger than max, width must be increased
cw.setWidth(category, field, len(s) + 1)
def buildElement(element, items, itemName):
""" Create an element for output.
"""
def __round(i):
if i < 0:
logger.debug("I got a less than 0 value: %s" % i)
return 0.0
return i
itemTimes = [ __round(float(item.attrib["time"])) for item in items ]
itemTimes.sort()
itemClocks = [ __round(float(item.attrib["clock"])) for item in items ]
itemClocks.sort()
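    # wait time per item is wall-clock time minus CPU (clock) time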
itemWaits = [ __round(__round(float(item.attrib["time"])) -
__round(float(item.attrib["clock"])))
for item in items ]
itemWaits.sort()
itemMemory = [ __round(float(item.attrib["memory"])) for item in items ]
itemMemory.sort()
assert len(itemClocks) == len(itemTimes)
assert len(itemClocks) == len(itemWaits)
if len(itemTimes) == 0:
itemTimes.append(0)
itemClocks.append(0)
itemWaits.append(0)
itemMemory.append(0)
return ET.SubElement(
element, itemName,
{"total_number":str(len(items)),
"total_time":str(sum(itemTimes)),
"median_time":str(itemTimes[len(itemTimes)/2]),
"average_time":str(sum(itemTimes)/len(itemTimes)),
"min_time":str(min(itemTimes)),
"max_time":str(max(itemTimes)),
"total_clock":str(sum(itemClocks)),
"median_clock":str(itemClocks[len(itemClocks)/2]),
"average_clock":str(sum(itemClocks)/len(itemClocks)),
"min_clock":str(min(itemClocks)),
"max_clock":str(max(itemClocks)),
"total_wait":str(sum(itemWaits)),
"median_wait":str(itemWaits[len(itemWaits)/2]),
"average_wait":str(sum(itemWaits)/len(itemWaits)),
"min_wait":str(min(itemWaits)),
"max_wait":str(max(itemWaits)),
"total_memory":str(sum(itemMemory)),
"median_memory":str(itemMemory[len(itemMemory)/2]),
"average_memory":str(sum(itemMemory)/len(itemMemory)),
"min_memory":str(min(itemMemory)),
"max_memory":str(max(itemMemory))
})
def createSummary(element, containingItems, containingItemName, getFn):
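    """ Add min/median/average/max number-per-containingItemName attributes
    to element, computed from the counts returned by getFn.
    """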
itemCounts = [len(getFn(containingItem)) for
containingItem in containingItems]
itemCounts.sort()
if len(itemCounts) == 0:
itemCounts.append(0)
element.attrib["median_number_per_%s" %
containingItemName] = str(itemCounts[len(itemCounts) / 2])
element.attrib["average_number_per_%s" %
containingItemName] = str(float(sum(itemCounts)) /
len(itemCounts))
element.attrib["min_number_per_%s" %
containingItemName] = str(min(itemCounts))
element.attrib["max_number_per_%s" %
containingItemName] = str(max(itemCounts))
def getSettings(options):
""" Collect and return the stats and config data.
"""
config_file = getConfigFileName(options.jobTree)
stats_file = getStatsFileName(options.jobTree)
try:
config = ET.parse(config_file).getroot()
except ET.ParseError:
sys.stderr.write("The config file xml, %s, is empty.\n" % config_file)
raise
try:
stats = ET.parse(stats_file).getroot() # Try parsing the whole file.
except ET.ParseError: # If it doesn't work then we build the file incrementally
sys.stderr.write("The job tree stats file is incomplete or corrupt, "
"we'll try instead to parse what's in the file "
"incrementally until we reach an error.\n")
fH = open(stats_file, 'r') # Open the file for editing
stats = ET.Element("stats")
try:
for event, elem in ET.iterparse(fH):
if elem.tag == 'slave':
stats.append(elem)
except ET.ParseError:
pass # Do nothing at this point
finally:
fH.close()
return config, stats
def processData(config, stats, options):
##########################################
# Collate the stats and report
##########################################
if stats.find("total_time") == None: # Hack to allow unfinished jobtrees.
ET.SubElement(stats, "total_time", { "time":"0.0", "clock":"0.0"})
collatedStatsTag = ET.Element(
"collated_stats",
{"total_run_time":stats.find("total_time").attrib["time"],
"total_clock":stats.find("total_time").attrib["clock"],
"batch_system":config.attrib["batch_system"],
"job_time":config.attrib["job_time"],
"default_memory":config.attrib["default_memory"],
"default_cpu":config.attrib["default_cpu"],
"max_cpus":config.attrib["max_cpus"],
"max_threads":config.attrib["max_threads"] })
# Add slave info
slaves = stats.findall("slave")
buildElement(collatedStatsTag, slaves, "slave")
# Add aggregated target info
targets = []
for slave in slaves:
targets += slave.findall("target")
    def fn4(job):
        # use the passed-in slave element; the loop variable "slave" would be
        # bound to the last slave by the time this callback runs
        return list(job.findall("target"))
createSummary(buildElement(collatedStatsTag, targets, "target"),
slaves, "slave", fn4)
# Get info for each target
targetNames = set()
for target in targets:
targetNames.add(target.attrib["class"])
targetTypesTag = ET.SubElement(collatedStatsTag, "target_types")
for targetName in targetNames:
targetTypes = [ target for target in targets
if target.attrib["class"] == targetName ]
targetTypeTag = buildElement(targetTypesTag, targetTypes, targetName)
return collatedStatsTag
def reportData(xml_tree, options):
# Now dump it all out to file
if options.raw:
out_str = prettyXml(xml_tree)
else:
root, slave, target, target_types = refineData(xml_tree, options)
out_str = reportPrettyData(root, slave, target, target_types, options)
if options.outputFile != None:
fileHandle = open(options.outputFile, "w")
fileHandle.write(out_str)
fileHandle.close()
# Now dump onto the screen
print out_str
def getNullFile():
""" Guaranteed to return a valid path to a file that does not exist.
"""
charSet = string.ascii_lowercase + "0123456789"
prefix = os.getcwd()
nullFile = "null_%s" % "".join(choice(charSet) for x in xrange(6))
while os.path.exists(os.path.join(prefix, nullFile)):
nullFile = "null_%s" % "".join(choice(charSet) for x in xrange(6))
return os.path.join(os.getcwd(), nullFile)
def getPreferredStatsCacheFileName(options):
""" Determine if the jobtree or the os.getcwd() version should be used.
If no good option exists, return a nonexistent file path.
Note you MUST check to see if the return value exists before using.
"""
null_file = getNullFile()
location_jt = getStatsCacheFileName(options.jobTree)
location_local = os.path.abspath(os.path.join(os.getcwd(),
".stats_cache.pickle"))
# start by looking for the current directory cache.
if os.path.exists(location_local):
loc_file = open(location_local, "r")
data, loc = cPickle.load(loc_file)
if getStatsFileName(options.jobTree) != loc:
# the local cache is from looking up a *different* jobTree
location_local = null_file
if os.path.exists(location_jt) and not os.path.exists(location_local):
# use the jobTree directory version
return location_jt
elif not os.path.exists(location_jt) and os.path.exists(location_local):
# use the os.getcwd() version
return location_local
elif os.path.exists(location_jt) and os.path.exists(location_local):
# check file modify times and use the most recent version
mtime_jt = os.path.getmtime(location_jt)
mtime_local = os.path.getmtime(location_local)
if mtime_jt > mtime_local:
return location_jt
else:
return location_local
else:
return null_file
def unpackData(options):
"""unpackData() opens up the pickle of the last run and pulls out
all the relevant data.
"""
cache_file = getPreferredStatsCacheFileName(options)
if not os.path.exists(cache_file):
return None
if os.path.exists(cache_file):
f = open(cache_file, "r")
try:
data, location = cPickle.load(f)
except EOFError:
# bad cache.
return None
finally:
f.close()
if location == getStatsFileName(options.jobTree):
return data
return None
def packData(data, options):
""" packData stores all of the data in the appropriate pickle cache file.
"""
stats_file = getStatsFileName(options.jobTree)
cache_file = getStatsCacheFileName(options.jobTree)
try:
# try to write to the jobTree directory
payload = (data, stats_file)
f = open(cache_file, "wb")
cPickle.dump(payload, f, 2) # 2 is binary format
f.close()
except IOError:
if not options.cache:
return
# try to write to the current working directory only if --cache
cache_file = os.path.abspath(os.path.join(os.getcwd(),
".stats_cache.pickle"))
payload = (data, stats_file)
f = open(cache_file, "wb")
cPickle.dump(payload, f, 2) # 2 is binary format
f.close()
def cacheAvailable(options):
""" Check to see if a cache is available, return it.
"""
if not os.path.exists(getStatsFileName(options.jobTree)):
return None
cache_file = getPreferredStatsCacheFileName(options)
if not os.path.exists(cache_file):
return None
# check the modify times on the files, see if the cache should be recomputed
mtime_stats = os.path.getmtime(getStatsFileName(options.jobTree))
mtime_cache = os.path.getmtime(cache_file)
if mtime_stats > mtime_cache:
# recompute cache
return None
# cache is fresh, return the cache
return unpackData(options)
def main():
""" Reports stats on the job-tree, use with --stats option to jobTree.
"""
parser = getBasicOptionParser(
"usage: %prog [--jobTree] JOB_TREE_DIR [options]", "%prog 0.1")
initializeOptions(parser)
options, args = parseBasicOptions(parser)
checkOptions(options, args, parser)
collatedStatsTag = cacheAvailable(options)
if collatedStatsTag is None:
config, stats = getSettings(options)
collatedStatsTag = processData(config, stats, options)
reportData(collatedStatsTag, options)
packData(collatedStatsTag, options)
def _test():
import doctest
return doctest.testmod()
if __name__ == "__main__":
_test()
main()
| mit |
z-jason/anki | aqt/forms/dconf.py | 1 | 20860 | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'designer/dconf.ui'
#
# Created: Sun Mar 30 10:19:28 2014
# by: PyQt4 UI code generator 4.10.3
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName(_fromUtf8("Dialog"))
Dialog.resize(494, 454)
self.verticalLayout = QtGui.QVBoxLayout(Dialog)
self.verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
self.horizontalLayout_2 = QtGui.QHBoxLayout()
self.horizontalLayout_2.setObjectName(_fromUtf8("horizontalLayout_2"))
self.label_31 = QtGui.QLabel(Dialog)
self.label_31.setObjectName(_fromUtf8("label_31"))
self.horizontalLayout_2.addWidget(self.label_31)
self.dconf = QtGui.QComboBox(Dialog)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(3)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.dconf.sizePolicy().hasHeightForWidth())
self.dconf.setSizePolicy(sizePolicy)
self.dconf.setObjectName(_fromUtf8("dconf"))
self.horizontalLayout_2.addWidget(self.dconf)
self.confOpts = QtGui.QToolButton(Dialog)
self.confOpts.setMaximumSize(QtCore.QSize(16777215, 32))
self.confOpts.setText(_fromUtf8(""))
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(_fromUtf8(":/icons/gears.png")), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.confOpts.setIcon(icon)
self.confOpts.setToolButtonStyle(QtCore.Qt.ToolButtonTextBesideIcon)
self.confOpts.setArrowType(QtCore.Qt.NoArrow)
self.confOpts.setObjectName(_fromUtf8("confOpts"))
self.horizontalLayout_2.addWidget(self.confOpts)
self.verticalLayout.addLayout(self.horizontalLayout_2)
self.count = QtGui.QLabel(Dialog)
self.count.setStyleSheet(_fromUtf8("* { color: red }"))
self.count.setText(_fromUtf8(""))
self.count.setAlignment(QtCore.Qt.AlignCenter)
self.count.setWordWrap(True)
self.count.setObjectName(_fromUtf8("count"))
self.verticalLayout.addWidget(self.count)
self.tabWidget = QtGui.QTabWidget(Dialog)
self.tabWidget.setObjectName(_fromUtf8("tabWidget"))
self.tab = QtGui.QWidget()
self.tab.setObjectName(_fromUtf8("tab"))
self.verticalLayout_2 = QtGui.QVBoxLayout(self.tab)
self.verticalLayout_2.setObjectName(_fromUtf8("verticalLayout_2"))
self.gridLayout = QtGui.QGridLayout()
self.gridLayout.setObjectName(_fromUtf8("gridLayout"))
self.label_27 = QtGui.QLabel(self.tab)
self.label_27.setObjectName(_fromUtf8("label_27"))
self.gridLayout.addWidget(self.label_27, 5, 2, 1, 1)
self.label_24 = QtGui.QLabel(self.tab)
self.label_24.setObjectName(_fromUtf8("label_24"))
self.gridLayout.addWidget(self.label_24, 5, 0, 1, 1)
self.lrnFactor = QtGui.QSpinBox(self.tab)
self.lrnFactor.setMinimum(130)
self.lrnFactor.setMaximum(999)
self.lrnFactor.setObjectName(_fromUtf8("lrnFactor"))
self.gridLayout.addWidget(self.lrnFactor, 5, 1, 1, 1)
self.label_8 = QtGui.QLabel(self.tab)
self.label_8.setObjectName(_fromUtf8("label_8"))
self.gridLayout.addWidget(self.label_8, 1, 0, 1, 1)
self.lrnEasyInt = QtGui.QSpinBox(self.tab)
self.lrnEasyInt.setMinimum(1)
self.lrnEasyInt.setObjectName(_fromUtf8("lrnEasyInt"))
self.gridLayout.addWidget(self.lrnEasyInt, 4, 1, 1, 1)
self.lrnGradInt = QtGui.QSpinBox(self.tab)
self.lrnGradInt.setMinimum(1)
self.lrnGradInt.setObjectName(_fromUtf8("lrnGradInt"))
self.gridLayout.addWidget(self.lrnGradInt, 3, 1, 1, 1)
self.newplim = QtGui.QLabel(self.tab)
self.newplim.setText(_fromUtf8(""))
self.newplim.setObjectName(_fromUtf8("newplim"))
self.gridLayout.addWidget(self.newplim, 2, 2, 1, 1)
self.label_5 = QtGui.QLabel(self.tab)
self.label_5.setObjectName(_fromUtf8("label_5"))
self.gridLayout.addWidget(self.label_5, 4, 0, 1, 1)
self.label_4 = QtGui.QLabel(self.tab)
self.label_4.setObjectName(_fromUtf8("label_4"))
self.gridLayout.addWidget(self.label_4, 3, 0, 1, 1)
self.newPerDay = QtGui.QSpinBox(self.tab)
self.newPerDay.setMaximum(9999)
self.newPerDay.setObjectName(_fromUtf8("newPerDay"))
self.gridLayout.addWidget(self.newPerDay, 2, 1, 1, 1)
self.label_6 = QtGui.QLabel(self.tab)
self.label_6.setObjectName(_fromUtf8("label_6"))
self.gridLayout.addWidget(self.label_6, 2, 0, 1, 1)
self.lrnSteps = QtGui.QLineEdit(self.tab)
self.lrnSteps.setObjectName(_fromUtf8("lrnSteps"))
self.gridLayout.addWidget(self.lrnSteps, 0, 1, 1, 2)
self.label_2 = QtGui.QLabel(self.tab)
self.label_2.setObjectName(_fromUtf8("label_2"))
self.gridLayout.addWidget(self.label_2, 0, 0, 1, 1)
self.newOrder = QtGui.QComboBox(self.tab)
self.newOrder.setObjectName(_fromUtf8("newOrder"))
self.gridLayout.addWidget(self.newOrder, 1, 1, 1, 2)
self.bury = QtGui.QCheckBox(self.tab)
self.bury.setObjectName(_fromUtf8("bury"))
self.gridLayout.addWidget(self.bury, 6, 0, 1, 3)
self.label_9 = QtGui.QLabel(self.tab)
self.label_9.setObjectName(_fromUtf8("label_9"))
self.gridLayout.addWidget(self.label_9, 4, 2, 1, 1)
self.label_7 = QtGui.QLabel(self.tab)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label_7.sizePolicy().hasHeightForWidth())
self.label_7.setSizePolicy(sizePolicy)
self.label_7.setObjectName(_fromUtf8("label_7"))
self.gridLayout.addWidget(self.label_7, 3, 2, 1, 1)
self.verticalLayout_2.addLayout(self.gridLayout)
spacerItem = QtGui.QSpacerItem(20, 40, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.verticalLayout_2.addItem(spacerItem)
self.tabWidget.addTab(self.tab, _fromUtf8(""))
self.tab_3 = QtGui.QWidget()
self.tab_3.setObjectName(_fromUtf8("tab_3"))
self.verticalLayout_4 = QtGui.QVBoxLayout(self.tab_3)
self.verticalLayout_4.setObjectName(_fromUtf8("verticalLayout_4"))
self.gridLayout_3 = QtGui.QGridLayout()
self.gridLayout_3.setObjectName(_fromUtf8("gridLayout_3"))
self.label_20 = QtGui.QLabel(self.tab_3)
self.label_20.setObjectName(_fromUtf8("label_20"))
self.gridLayout_3.addWidget(self.label_20, 1, 0, 1, 1)
self.easyBonus = QtGui.QSpinBox(self.tab_3)
self.easyBonus.setMinimum(100)
self.easyBonus.setMaximum(1000)
self.easyBonus.setSingleStep(5)
self.easyBonus.setObjectName(_fromUtf8("easyBonus"))
self.gridLayout_3.addWidget(self.easyBonus, 1, 1, 1, 1)
self.label_21 = QtGui.QLabel(self.tab_3)
self.label_21.setObjectName(_fromUtf8("label_21"))
self.gridLayout_3.addWidget(self.label_21, 1, 2, 1, 1)
self.label_34 = QtGui.QLabel(self.tab_3)
self.label_34.setObjectName(_fromUtf8("label_34"))
self.gridLayout_3.addWidget(self.label_34, 2, 2, 1, 1)
self.revPerDay = QtGui.QSpinBox(self.tab_3)
self.revPerDay.setMinimum(0)
self.revPerDay.setMaximum(9999)
self.revPerDay.setObjectName(_fromUtf8("revPerDay"))
self.gridLayout_3.addWidget(self.revPerDay, 0, 1, 1, 1)
self.label_33 = QtGui.QLabel(self.tab_3)
self.label_33.setObjectName(_fromUtf8("label_33"))
self.gridLayout_3.addWidget(self.label_33, 2, 0, 1, 1)
self.label_37 = QtGui.QLabel(self.tab_3)
self.label_37.setObjectName(_fromUtf8("label_37"))
self.gridLayout_3.addWidget(self.label_37, 0, 0, 1, 1)
self.label_3 = QtGui.QLabel(self.tab_3)
self.label_3.setObjectName(_fromUtf8("label_3"))
self.gridLayout_3.addWidget(self.label_3, 3, 0, 1, 1)
self.maxIvl = QtGui.QSpinBox(self.tab_3)
self.maxIvl.setMinimum(1)
self.maxIvl.setMaximum(99999)
self.maxIvl.setObjectName(_fromUtf8("maxIvl"))
self.gridLayout_3.addWidget(self.maxIvl, 3, 1, 1, 1)
self.label_23 = QtGui.QLabel(self.tab_3)
self.label_23.setObjectName(_fromUtf8("label_23"))
self.gridLayout_3.addWidget(self.label_23, 3, 2, 1, 1)
self.revplim = QtGui.QLabel(self.tab_3)
self.revplim.setText(_fromUtf8(""))
self.revplim.setObjectName(_fromUtf8("revplim"))
self.gridLayout_3.addWidget(self.revplim, 0, 2, 1, 1)
self.fi1 = QtGui.QDoubleSpinBox(self.tab_3)
self.fi1.setDecimals(0)
self.fi1.setMinimum(0.0)
self.fi1.setMaximum(999.0)
self.fi1.setSingleStep(1.0)
self.fi1.setProperty("value", 100.0)
self.fi1.setObjectName(_fromUtf8("fi1"))
self.gridLayout_3.addWidget(self.fi1, 2, 1, 1, 1)
self.buryRev = QtGui.QCheckBox(self.tab_3)
self.buryRev.setObjectName(_fromUtf8("buryRev"))
self.gridLayout_3.addWidget(self.buryRev, 4, 0, 1, 3)
self.verticalLayout_4.addLayout(self.gridLayout_3)
spacerItem1 = QtGui.QSpacerItem(20, 152, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.verticalLayout_4.addItem(spacerItem1)
self.tabWidget.addTab(self.tab_3, _fromUtf8(""))
self.tab_2 = QtGui.QWidget()
self.tab_2.setObjectName(_fromUtf8("tab_2"))
self.verticalLayout_3 = QtGui.QVBoxLayout(self.tab_2)
self.verticalLayout_3.setObjectName(_fromUtf8("verticalLayout_3"))
self.gridLayout_2 = QtGui.QGridLayout()
self.gridLayout_2.setObjectName(_fromUtf8("gridLayout_2"))
self.label_17 = QtGui.QLabel(self.tab_2)
self.label_17.setObjectName(_fromUtf8("label_17"))
self.gridLayout_2.addWidget(self.label_17, 0, 0, 1, 1)
self.lapSteps = QtGui.QLineEdit(self.tab_2)
self.lapSteps.setObjectName(_fromUtf8("lapSteps"))
self.gridLayout_2.addWidget(self.lapSteps, 0, 1, 1, 2)
self.label = QtGui.QLabel(self.tab_2)
self.label.setObjectName(_fromUtf8("label"))
self.gridLayout_2.addWidget(self.label, 1, 0, 1, 1)
self.label_10 = QtGui.QLabel(self.tab_2)
self.label_10.setObjectName(_fromUtf8("label_10"))
self.gridLayout_2.addWidget(self.label_10, 3, 0, 1, 1)
self.leechThreshold = QtGui.QSpinBox(self.tab_2)
self.leechThreshold.setObjectName(_fromUtf8("leechThreshold"))
self.gridLayout_2.addWidget(self.leechThreshold, 3, 1, 1, 1)
self.label_11 = QtGui.QLabel(self.tab_2)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label_11.sizePolicy().hasHeightForWidth())
self.label_11.setSizePolicy(sizePolicy)
self.label_11.setObjectName(_fromUtf8("label_11"))
self.gridLayout_2.addWidget(self.label_11, 3, 2, 1, 1)
self.label_12 = QtGui.QLabel(self.tab_2)
self.label_12.setObjectName(_fromUtf8("label_12"))
self.gridLayout_2.addWidget(self.label_12, 4, 0, 1, 1)
self.lapMinInt = QtGui.QSpinBox(self.tab_2)
self.lapMinInt.setMinimum(1)
self.lapMinInt.setMaximum(99)
self.lapMinInt.setObjectName(_fromUtf8("lapMinInt"))
self.gridLayout_2.addWidget(self.lapMinInt, 2, 1, 1, 1)
self.label_13 = QtGui.QLabel(self.tab_2)
self.label_13.setObjectName(_fromUtf8("label_13"))
self.gridLayout_2.addWidget(self.label_13, 2, 0, 1, 1)
self.label_14 = QtGui.QLabel(self.tab_2)
self.label_14.setObjectName(_fromUtf8("label_14"))
self.gridLayout_2.addWidget(self.label_14, 2, 2, 1, 1)
self.horizontalLayout = QtGui.QHBoxLayout()
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.leechAction = QtGui.QComboBox(self.tab_2)
self.leechAction.setObjectName(_fromUtf8("leechAction"))
self.leechAction.addItem(_fromUtf8(""))
self.leechAction.addItem(_fromUtf8(""))
self.horizontalLayout.addWidget(self.leechAction)
spacerItem2 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem2)
self.gridLayout_2.addLayout(self.horizontalLayout, 4, 1, 1, 2)
self.label_28 = QtGui.QLabel(self.tab_2)
self.label_28.setObjectName(_fromUtf8("label_28"))
self.gridLayout_2.addWidget(self.label_28, 1, 2, 1, 1)
self.lapMult = QtGui.QSpinBox(self.tab_2)
self.lapMult.setMaximum(100)
self.lapMult.setSingleStep(5)
self.lapMult.setObjectName(_fromUtf8("lapMult"))
self.gridLayout_2.addWidget(self.lapMult, 1, 1, 1, 1)
self.verticalLayout_3.addLayout(self.gridLayout_2)
spacerItem3 = QtGui.QSpacerItem(20, 72, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.verticalLayout_3.addItem(spacerItem3)
self.tabWidget.addTab(self.tab_2, _fromUtf8(""))
self.tab_5 = QtGui.QWidget()
self.tab_5.setObjectName(_fromUtf8("tab_5"))
self.verticalLayout_6 = QtGui.QVBoxLayout(self.tab_5)
self.verticalLayout_6.setObjectName(_fromUtf8("verticalLayout_6"))
self.gridLayout_5 = QtGui.QGridLayout()
self.gridLayout_5.setObjectName(_fromUtf8("gridLayout_5"))
self.label_25 = QtGui.QLabel(self.tab_5)
self.label_25.setObjectName(_fromUtf8("label_25"))
self.gridLayout_5.addWidget(self.label_25, 0, 0, 1, 1)
self.maxTaken = QtGui.QSpinBox(self.tab_5)
self.maxTaken.setMinimum(30)
self.maxTaken.setMaximum(3600)
self.maxTaken.setSingleStep(10)
self.maxTaken.setObjectName(_fromUtf8("maxTaken"))
self.gridLayout_5.addWidget(self.maxTaken, 0, 1, 1, 1)
self.label_26 = QtGui.QLabel(self.tab_5)
self.label_26.setObjectName(_fromUtf8("label_26"))
self.gridLayout_5.addWidget(self.label_26, 0, 2, 1, 1)
self.verticalLayout_6.addLayout(self.gridLayout_5)
self.showTimer = QtGui.QCheckBox(self.tab_5)
self.showTimer.setObjectName(_fromUtf8("showTimer"))
self.verticalLayout_6.addWidget(self.showTimer)
self.autoplaySounds = QtGui.QCheckBox(self.tab_5)
self.autoplaySounds.setObjectName(_fromUtf8("autoplaySounds"))
self.verticalLayout_6.addWidget(self.autoplaySounds)
self.replayQuestion = QtGui.QCheckBox(self.tab_5)
self.replayQuestion.setChecked(False)
self.replayQuestion.setObjectName(_fromUtf8("replayQuestion"))
self.verticalLayout_6.addWidget(self.replayQuestion)
spacerItem4 = QtGui.QSpacerItem(20, 199, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.verticalLayout_6.addItem(spacerItem4)
self.tabWidget.addTab(self.tab_5, _fromUtf8(""))
self.tab_4 = QtGui.QWidget()
self.tab_4.setObjectName(_fromUtf8("tab_4"))
self.verticalLayout_5 = QtGui.QVBoxLayout(self.tab_4)
self.verticalLayout_5.setObjectName(_fromUtf8("verticalLayout_5"))
self.label_22 = QtGui.QLabel(self.tab_4)
self.label_22.setObjectName(_fromUtf8("label_22"))
self.verticalLayout_5.addWidget(self.label_22)
self.desc = QtGui.QTextEdit(self.tab_4)
self.desc.setObjectName(_fromUtf8("desc"))
self.verticalLayout_5.addWidget(self.desc)
self.tabWidget.addTab(self.tab_4, _fromUtf8(""))
self.verticalLayout.addWidget(self.tabWidget)
self.buttonBox = QtGui.QDialogButtonBox(Dialog)
self.buttonBox.setOrientation(QtCore.Qt.Horizontal)
self.buttonBox.setStandardButtons(QtGui.QDialogButtonBox.Help|QtGui.QDialogButtonBox.Ok|QtGui.QDialogButtonBox.RestoreDefaults)
self.buttonBox.setObjectName(_fromUtf8("buttonBox"))
self.verticalLayout.addWidget(self.buttonBox)
self.retranslateUi(Dialog)
self.tabWidget.setCurrentIndex(0)
QtCore.QObject.connect(self.buttonBox, QtCore.SIGNAL(_fromUtf8("accepted()")), Dialog.accept)
QtCore.QObject.connect(self.buttonBox, QtCore.SIGNAL(_fromUtf8("rejected()")), Dialog.reject)
QtCore.QMetaObject.connectSlotsByName(Dialog)
Dialog.setTabOrder(self.dconf, self.confOpts)
Dialog.setTabOrder(self.confOpts, self.tabWidget)
Dialog.setTabOrder(self.tabWidget, self.lrnSteps)
Dialog.setTabOrder(self.lrnSteps, self.newOrder)
Dialog.setTabOrder(self.newOrder, self.newPerDay)
Dialog.setTabOrder(self.newPerDay, self.lrnGradInt)
Dialog.setTabOrder(self.lrnGradInt, self.lrnEasyInt)
Dialog.setTabOrder(self.lrnEasyInt, self.lrnFactor)
Dialog.setTabOrder(self.lrnFactor, self.bury)
Dialog.setTabOrder(self.bury, self.revPerDay)
Dialog.setTabOrder(self.revPerDay, self.easyBonus)
Dialog.setTabOrder(self.easyBonus, self.fi1)
Dialog.setTabOrder(self.fi1, self.maxIvl)
Dialog.setTabOrder(self.maxIvl, self.buryRev)
Dialog.setTabOrder(self.buryRev, self.lapSteps)
Dialog.setTabOrder(self.lapSteps, self.lapMult)
Dialog.setTabOrder(self.lapMult, self.lapMinInt)
Dialog.setTabOrder(self.lapMinInt, self.leechThreshold)
Dialog.setTabOrder(self.leechThreshold, self.leechAction)
Dialog.setTabOrder(self.leechAction, self.maxTaken)
Dialog.setTabOrder(self.maxTaken, self.showTimer)
Dialog.setTabOrder(self.showTimer, self.autoplaySounds)
Dialog.setTabOrder(self.autoplaySounds, self.replayQuestion)
Dialog.setTabOrder(self.replayQuestion, self.buttonBox)
Dialog.setTabOrder(self.buttonBox, self.desc)
def retranslateUi(self, Dialog):
self.label_31.setText(_("Options group:"))
self.label_27.setText(_("%"))
self.label_24.setText(_("Starting ease"))
self.label_8.setText(_("Order"))
self.label_5.setText(_("Easy interval"))
self.label_4.setText(_("Graduating interval"))
self.label_6.setText(_("New cards/day"))
self.label_2.setText(_("Steps (in minutes)"))
self.bury.setText(_("Bury related new cards until the next day"))
self.label_9.setText(_("days"))
self.label_7.setText(_("days"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), _("New Cards"))
self.label_20.setText(_("Easy bonus"))
self.label_21.setText(_("%"))
self.label_34.setText(_("%"))
self.label_33.setText(_("Interval modifier"))
self.label_37.setText(_("Maximum reviews/day"))
self.label_3.setText(_("Maximum interval"))
self.label_23.setText(_("days"))
self.buryRev.setText(_("Bury related reviews until the next day"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_3), _("Reviews"))
self.label_17.setText(_("Steps (in minutes)"))
self.label.setText(_("New interval"))
self.label_10.setText(_("Leech threshold"))
self.label_11.setText(_("lapses"))
self.label_12.setText(_("Leech action"))
self.label_13.setText(_("Minimum interval"))
self.label_14.setText(_("days"))
self.leechAction.setItemText(0, _("Suspend Card"))
self.leechAction.setItemText(1, _("Tag Only"))
self.label_28.setText(_("%"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), _("Lapses"))
self.label_25.setText(_("Ignore answer times longer than"))
self.label_26.setText(_("seconds"))
self.showTimer.setText(_("Show answer timer"))
self.autoplaySounds.setText(_("Automatically play audio"))
self.replayQuestion.setText(_("When answer shown, replay both question and answer audio"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_5), _("General"))
self.label_22.setText(_("Description to show on study screen (current deck only):"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_4), _("Description"))
import icons_rc
| agpl-3.0 |
Stanford-Online/edx-platform | lms/djangoapps/mobile_api/decorators.py | 22 | 2476 | """
Decorators for Mobile APIs.
"""
import functools
from django.http import Http404
from opaque_keys.edx.keys import CourseKey
from rest_framework import status
from rest_framework.response import Response
from lms.djangoapps.courseware.courses import get_course_with_access
from lms.djangoapps.courseware.courseware_access_exception import CoursewareAccessException
from lms.djangoapps.courseware.exceptions import CourseAccessRedirect
from openedx.core.lib.api.view_utils import view_auth_classes
from xmodule.modulestore.django import modulestore
def mobile_course_access(depth=0):
"""
Method decorator for a mobile API endpoint that verifies the user has access to the course in a mobile context.
"""
def _decorator(func):
"""Outer method decorator."""
@functools.wraps(func)
def _wrapper(self, request, *args, **kwargs):
"""
Expects kwargs to contain 'course_id'.
Passes the course descriptor to the given decorated function.
Raises 404 if access to course is disallowed.
"""
course_id = CourseKey.from_string(kwargs.pop('course_id'))
with modulestore().bulk_operations(course_id):
try:
course = get_course_with_access(
request.user,
'load_mobile',
course_id,
depth=depth,
check_if_enrolled=True,
)
except CoursewareAccessException as error:
return Response(data=error.to_json(), status=status.HTTP_404_NOT_FOUND)
except CourseAccessRedirect as error:
# If the redirect contains information about the triggering AccessError,
# return the information contained in the AccessError.
if error.access_error is not None:
return Response(data=error.access_error.to_json(), status=status.HTTP_404_NOT_FOUND)
# Raise a 404 if the user does not have course access
raise Http404
return func(self, request, course=course, *args, **kwargs)
return _wrapper
return _decorator
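# Illustrative use (assumed view signature, not defined in this module): the
# decorated method receives the loaded course in place of the course_id kwarg.
#
#     @mobile_course_access(depth=2)
#     def list(self, request, course, *args, **kwargs):
#         ...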
def mobile_view(is_user=False):
"""
Function and class decorator that abstracts the authentication and permission checks for mobile api views.
"""
return view_auth_classes(is_user)
| agpl-3.0 |
cyanna/edx-platform | cms/djangoapps/contentstore/views/tests/test_videos.py | 17 | 15670 | #-*- coding: utf-8 -*-
"""
Unit tests for video-related REST APIs.
"""
# pylint: disable=attribute-defined-outside-init
import csv
import json
import dateutil.parser
import re
from StringIO import StringIO
from django.conf import settings
from django.test.utils import override_settings
from mock import Mock, patch
from edxval.api import create_profile, create_video, get_video_info
from contentstore.models import VideoUploadConfig
from contentstore.views.videos import KEY_EXPIRATION_IN_SECONDS, VIDEO_ASSET_TYPE, StatusDisplayStrings
from contentstore.tests.utils import CourseTestCase
from contentstore.utils import reverse_course_url
from xmodule.assetstore import AssetMetadata
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.factories import CourseFactory
class VideoUploadTestMixin(object):
"""
Test cases for the video upload feature
"""
def get_url_for_course_key(self, course_key):
"""Return video handler URL for the given course"""
return reverse_course_url(self.VIEW_NAME, course_key)
def setUp(self):
super(VideoUploadTestMixin, self).setUp()
self.url = self.get_url_for_course_key(self.course.id)
self.test_token = "test_token"
self.course.video_upload_pipeline = {
"course_video_upload_token": self.test_token,
}
self.save_course()
self.profiles = ["profile1", "profile2"]
self.previous_uploads = [
{
"edx_video_id": "test1",
"client_video_id": "test1.mp4",
"duration": 42.0,
"status": "upload",
"encoded_videos": [],
},
{
"edx_video_id": "test2",
"client_video_id": "test2.mp4",
"duration": 128.0,
"status": "file_complete",
"encoded_videos": [
{
"profile": "profile1",
"url": "http://example.com/profile1/test2.mp4",
"file_size": 1600,
"bitrate": 100,
},
{
"profile": "profile2",
"url": "http://example.com/profile2/test2.mov",
"file_size": 16000,
"bitrate": 1000,
},
],
},
{
"edx_video_id": "non-ascii",
"client_video_id": u"nón-ascii-näme.mp4",
"duration": 256.0,
"status": "transcode_active",
"encoded_videos": [
{
"profile": "profile1",
"url": u"http://example.com/profile1/nón-ascii-näme.mp4",
"file_size": 3200,
"bitrate": 100,
},
]
},
]
# Ensure every status string is tested
self.previous_uploads += [
{
"edx_video_id": "status_test_{}".format(status),
"client_video_id": "status_test.mp4",
"duration": 3.14,
"status": status,
"encoded_videos": [],
}
for status in (
StatusDisplayStrings._STATUS_MAP.keys() + # pylint:disable=protected-access
["non_existent_status"]
)
]
for profile in self.profiles:
create_profile(profile)
for video in self.previous_uploads:
create_video(video)
modulestore().save_asset_metadata(
AssetMetadata(
self.course.id.make_asset_key(VIDEO_ASSET_TYPE, video["edx_video_id"])
),
self.user.id
)
def _get_previous_upload(self, edx_video_id):
"""Returns the previous upload with the given video id."""
return next(
video
for video in self.previous_uploads
if video["edx_video_id"] == edx_video_id
)
def test_anon_user(self):
self.client.logout()
response = self.client.get(self.url)
self.assertEqual(response.status_code, 302)
def test_put(self):
response = self.client.put(self.url)
self.assertEqual(response.status_code, 405)
def test_invalid_course_key(self):
response = self.client.get(
self.get_url_for_course_key("Non/Existent/Course")
)
self.assertEqual(response.status_code, 404)
def test_non_staff_user(self):
client, __ = self.create_non_staff_authed_user_client()
response = client.get(self.url)
self.assertEqual(response.status_code, 403)
def test_video_pipeline_not_enabled(self):
settings.FEATURES["ENABLE_VIDEO_UPLOAD_PIPELINE"] = False
self.assertEqual(self.client.get(self.url).status_code, 404)
def test_video_pipeline_not_configured(self):
settings.VIDEO_UPLOAD_PIPELINE = None
self.assertEqual(self.client.get(self.url).status_code, 404)
def test_course_not_configured(self):
self.course.video_upload_pipeline = {}
self.save_course()
self.assertEqual(self.client.get(self.url).status_code, 404)
@patch.dict("django.conf.settings.FEATURES", {"ENABLE_VIDEO_UPLOAD_PIPELINE": True})
@override_settings(VIDEO_UPLOAD_PIPELINE={"BUCKET": "test_bucket", "ROOT_PATH": "test_root"})
class VideosHandlerTestCase(VideoUploadTestMixin, CourseTestCase):
"""Test cases for the main video upload endpoint"""
VIEW_NAME = "videos_handler"
def test_get_json(self):
response = self.client.get_json(self.url)
self.assertEqual(response.status_code, 200)
response_videos = json.loads(response.content)["videos"]
self.assertEqual(len(response_videos), len(self.previous_uploads))
for i, response_video in enumerate(response_videos):
# Videos should be returned by creation date descending
original_video = self.previous_uploads[-(i + 1)]
self.assertEqual(
set(response_video.keys()),
set(["edx_video_id", "client_video_id", "created", "duration", "status"])
)
dateutil.parser.parse(response_video["created"])
for field in ["edx_video_id", "client_video_id", "duration"]:
self.assertEqual(response_video[field], original_video[field])
self.assertEqual(
response_video["status"],
StatusDisplayStrings.get(original_video["status"])
)
def test_get_html(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
self.assertRegexpMatches(response["Content-Type"], "^text/html(;.*)?$")
# Crude check for presence of data in returned HTML
for video in self.previous_uploads:
self.assertIn(video["edx_video_id"], response.content)
def test_post_non_json(self):
response = self.client.post(self.url, {"files": []})
self.assertEqual(response.status_code, 400)
def test_post_malformed_json(self):
response = self.client.post(self.url, "{", content_type="application/json")
self.assertEqual(response.status_code, 400)
def test_post_invalid_json(self):
def assert_bad(content):
"""Make request with content and assert that response is 400"""
response = self.client.post(
self.url,
json.dumps(content),
content_type="application/json"
)
self.assertEqual(response.status_code, 400)
# Top level missing files key
assert_bad({})
# Entry missing file_name
assert_bad({"files": [{"content_type": "video/mp4"}]})
# Entry missing content_type
assert_bad({"files": [{"file_name": "test.mp4"}]})
@override_settings(AWS_ACCESS_KEY_ID="test_key_id", AWS_SECRET_ACCESS_KEY="test_secret")
@patch("boto.s3.key.Key")
@patch("boto.s3.connection.S3Connection")
def test_post_success(self, mock_conn, mock_key):
files = [
{
"file_name": "first.mp4",
"content_type": "video/mp4",
},
{
"file_name": "second.webm",
"content_type": "video/webm",
},
{
"file_name": "third.mov",
"content_type": "video/quicktime",
},
{
"file_name": "fourth.mp4",
"content_type": "video/mp4",
},
]
bucket = Mock()
mock_conn.return_value = Mock(get_bucket=Mock(return_value=bucket))
mock_key_instances = [
Mock(
generate_url=Mock(
return_value="http://example.com/url_{}".format(file_info["file_name"])
)
)
for file_info in files
]
# If extra calls are made, return a dummy
mock_key.side_effect = mock_key_instances + [Mock()]
response = self.client.post(
self.url,
json.dumps({"files": files}),
content_type="application/json"
)
self.assertEqual(response.status_code, 200)
response_obj = json.loads(response.content)
mock_conn.assert_called_once_with(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
self.assertEqual(len(response_obj["files"]), len(files))
self.assertEqual(mock_key.call_count, len(files))
for i, file_info in enumerate(files):
# Ensure Key was set up correctly and extract id
key_call_args, __ = mock_key.call_args_list[i]
self.assertEqual(key_call_args[0], bucket)
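            # the upload key is expected to be ROOT_PATH/<uuid4>; capture the
            # UUID portion as the edx video id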
path_match = re.match(
(
settings.VIDEO_UPLOAD_PIPELINE["ROOT_PATH"] +
"/([a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12})$"
),
key_call_args[1]
)
self.assertIsNotNone(path_match)
video_id = path_match.group(1)
mock_key_instance = mock_key_instances[i]
mock_key_instance.set_metadata.assert_any_call(
"course_video_upload_token",
self.test_token
)
mock_key_instance.set_metadata.assert_any_call(
"client_video_id",
file_info["file_name"]
)
mock_key_instance.set_metadata.assert_any_call("course_key", unicode(self.course.id))
mock_key_instance.generate_url.assert_called_once_with(
KEY_EXPIRATION_IN_SECONDS,
"PUT",
headers={"Content-Type": file_info["content_type"]}
)
# Ensure asset store was updated and the created_by field was set
asset_metadata = modulestore().find_asset_metadata(
self.course.id.make_asset_key(VIDEO_ASSET_TYPE, video_id)
)
self.assertIsNotNone(asset_metadata)
self.assertEquals(asset_metadata.created_by, self.user.id)
# Ensure VAL was updated
val_info = get_video_info(video_id)
self.assertEqual(val_info["status"], "upload")
self.assertEqual(val_info["client_video_id"], file_info["file_name"])
self.assertEqual(val_info["status"], "upload")
self.assertEqual(val_info["duration"], 0)
self.assertEqual(val_info["courses"], [unicode(self.course.id)])
# Ensure response is correct
response_file = response_obj["files"][i]
self.assertEqual(response_file["file_name"], file_info["file_name"])
self.assertEqual(response_file["upload_url"], mock_key_instance.generate_url())
@patch.dict("django.conf.settings.FEATURES", {"ENABLE_VIDEO_UPLOAD_PIPELINE": True})
@override_settings(VIDEO_UPLOAD_PIPELINE={"BUCKET": "test_bucket", "ROOT_PATH": "test_root"})
class VideoUrlsCsvTestCase(VideoUploadTestMixin, CourseTestCase):
"""Test cases for the CSV download endpoint for video uploads"""
VIEW_NAME = "video_encodings_download"
def setUp(self):
super(VideoUrlsCsvTestCase, self).setUp()
VideoUploadConfig(profile_whitelist="profile1").save()
def _check_csv_response(self, expected_profiles):
"""
Check that the response is a valid CSV response containing rows
corresponding to previous_uploads and including the expected profiles.
"""
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
self.assertEqual(
response["Content-Disposition"],
"attachment; filename={course}_video_urls.csv".format(course=self.course.id.course)
)
response_reader = StringIO(response.content)
reader = csv.DictReader(response_reader, dialect=csv.excel)
self.assertEqual(
reader.fieldnames,
(
["Name", "Duration", "Date Added", "Video ID", "Status"] +
["{} URL".format(profile) for profile in expected_profiles]
)
)
rows = list(reader)
self.assertEqual(len(rows), len(self.previous_uploads))
for i, row in enumerate(rows):
response_video = {
key.decode("utf-8"): value.decode("utf-8") for key, value in row.items()
}
# Videos should be returned by creation date descending
original_video = self.previous_uploads[-(i + 1)]
self.assertEqual(response_video["Name"], original_video["client_video_id"])
self.assertEqual(response_video["Duration"], str(original_video["duration"]))
dateutil.parser.parse(response_video["Date Added"])
self.assertEqual(response_video["Video ID"], original_video["edx_video_id"])
self.assertEqual(response_video["Status"], StatusDisplayStrings.get(original_video["status"]))
for profile in expected_profiles:
response_profile_url = response_video["{} URL".format(profile)]
original_encoded_for_profile = next(
(
original_encoded
for original_encoded in original_video["encoded_videos"]
if original_encoded["profile"] == profile
),
None
)
if original_encoded_for_profile:
self.assertEqual(response_profile_url, original_encoded_for_profile["url"])
else:
self.assertEqual(response_profile_url, "")
def test_basic(self):
self._check_csv_response(["profile1"])
def test_profile_whitelist(self):
VideoUploadConfig(profile_whitelist="profile1,profile2").save()
self._check_csv_response(["profile1", "profile2"])
def test_non_ascii_course(self):
course = CourseFactory.create(
number=u"nón-äscii",
video_upload_pipeline={
"course_video_upload_token": self.test_token,
}
)
response = self.client.get(self.get_url_for_course_key(course.id))
self.assertEqual(response.status_code, 200)
self.assertEqual(
response["Content-Disposition"],
"attachment; filename=video_urls.csv; filename*=utf-8''n%C3%B3n-%C3%A4scii_video_urls.csv"
)
| agpl-3.0 |
TheParrotsAreComing/PAS | TestingAssets/Files/delete_foster.py | 2 | 3543 | import time
import sys
import _mysql
import random
import string
import re
import os
import urllib.parse
from selenium import webdriver
from selenium.webdriver.support.ui import Select
import selenium.webdriver.chrome.service as service
from shutil import copyfile
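# Test outline: insert a foster record and an attached PDF file row directly
# into the database, copy the fixture PDF into the webroot, then log in through
# the browser and open the foster's detail page to reach the attachment UI.
# The assertions after the early exit further down are unfinished
# ("Not Implemented Yet").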
try:
# Check to see if it was added
db=_mysql.connect('localhost','root','root','paws_db')
rand_fname=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
rand_lname=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
rand_mail=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
db.query("INSERT INTO fosters (first_name,last_name,address,email,created,is_deleted) VALUES(\""+rand_fname+"\",\""+rand_lname+"\",\"55 Gato Way\",\""+rand_mail+"@mail.com\",NOW(),true);");
db.store_result()
db.query("SELECT id,first_name FROM fosters where last_name=\""+rand_lname+"\" AND email=\""+rand_mail+"@mail.com\"")
r=db.store_result()
k=r.fetch_row(1,1)
a_id = k[0].get('id')
service = service.Service('D:\ChromeDriver\chromedriver')
service.start()
capabilities = {'chrome.binary': 'C:\Program Files (x86)\Google\Chrome\Application\chrome'} # Chrome path is different for everyone
driver = webdriver.Remote(service.service_url, capabilities)
driver.set_window_size(sys.argv[1], sys.argv[2]);
curfilePath = os.path.abspath(__file__)
curDir = os.path.abspath(os.path.join(curfilePath,os.pardir)) # this will return current directory in which python file resides.
parentDir = os.path.abspath(os.path.join(curDir,os.pardir))
grandParentDir = os.path.abspath(os.path.join(parentDir,os.pardir))
webroot = os.path.join(grandParentDir,"webroot","files","fosters",a_id)
rand_default=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
rand_new=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
file_path_1 = urllib.parse.urljoin('files/fosters/',a_id+"/"+rand_default)
db.query('INSERT INTO files (entity_type,entity_id,is_photo,file_path,mime_type,file_size,file_ext,created,is_deleted,original_filename) VALUES(4,'+a_id+',0,"'+file_path_1+'","application/pdf",78237,"pdf",NOW(),0,"test_doc_1");')
db.store_result()
db.query('SELECT id FROM files where file_path="'+file_path_1+'"')
r=db.store_result()
k=r.fetch_row(1,1)
file_1_id = k[0].get('id')
if not os.path.exists(webroot):
os.makedirs(webroot)
copyfile(os.getcwd()+"/doc/test_doc_1.pdf", os.path.join(webroot,rand_default+".pdf"))
for root,dir,files in os.walk(webroot):
for f in files:
            os.chmod(os.path.join(root, f), 0o777)  # octal 0o777, not decimal 777
driver.get('http://localhost:8765');
driver.find_element_by_id('email').send_keys('[email protected]')
driver.find_element_by_id('password').send_keys('password')
driver.find_element_by_css_selector('input[type="submit"]').click()
driver.get('http://localhost:8765/fosters/view/'+a_id)
driver.find_element_by_css_selector('a[data-ix="attachment-notification"]').click()
print("pass") #Not Implemented Yet
sys.exit(0)
driver.find_element_by_css_selector('div.picture-file[data-file-id="'+file_2_id+'"]').click()
driver.find_element_by_id("mark-profile-pic-btn").click()
driver.get('http://localhost:8765/fosters/view/'+a_id)
new_img = driver.find_element_by_css_selector('img.cat-profile-pic')
img_src = new_img.get_attribute('src')
if rand_new in img_src:
print("pass")
else:
print("fail")
driver.quit()
except Exception as e:
print(e)
print("fail")
| mit |
lyceel/engine | build/android/pylib/base/test_run_factory.py | 45 | 2028 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from pylib.gtest import gtest_test_instance
from pylib.gtest import local_device_gtest_run
from pylib.instrumentation import instrumentation_test_instance
from pylib.local.device import local_device_environment
from pylib.local.device import local_device_instrumentation_test_run
from pylib.remote.device import remote_device_environment
from pylib.remote.device import remote_device_gtest_run
from pylib.remote.device import remote_device_instrumentation_test_run
from pylib.remote.device import remote_device_uirobot_test_run
from pylib.uirobot import uirobot_test_instance
def CreateTestRun(_args, env, test_instance, error_func):
if isinstance(env, local_device_environment.LocalDeviceEnvironment):
if isinstance(test_instance, gtest_test_instance.GtestTestInstance):
return local_device_gtest_run.LocalDeviceGtestRun(env, test_instance)
if isinstance(test_instance,
instrumentation_test_instance.InstrumentationTestInstance):
return (local_device_instrumentation_test_run
.LocalDeviceInstrumentationTestRun(env, test_instance))
if isinstance(env, remote_device_environment.RemoteDeviceEnvironment):
if isinstance(test_instance, gtest_test_instance.GtestTestInstance):
return remote_device_gtest_run.RemoteDeviceGtestTestRun(
env, test_instance)
if isinstance(test_instance,
instrumentation_test_instance.InstrumentationTestInstance):
return (remote_device_instrumentation_test_run
.RemoteDeviceInstrumentationTestRun(env, test_instance))
if isinstance(test_instance, uirobot_test_instance.UirobotTestInstance):
return remote_device_uirobot_test_run.RemoteDeviceUirobotTestRun(
env, test_instance)
error_func('Unable to create test run for %s tests in %s environment'
% (str(test_instance), str(env)))
| bsd-3-clause |
mrphlip/lrrbot | lrrbot/desertbus_moderator_actions.py | 2 | 4185 | """
This module has basically nothing to do with actual lrrbot functionality...
It's just piggy-backing off it to share its code and steal its event loop.
Because that's easier than making this a separate process.
"""
import asyncio
import datetime
import sqlalchemy
from common import pubsub
from common import utils
from common import time as ctime
from common import gdata
from common.config import config
import logging
import time
import irc.client
log = logging.getLogger("desertbus_moderator_actions")
SPREADSHEET = "1KEEcv-hGEIwkHARpK-X6TBWUT3x8HpgG0i4tk16_Ysw"
WATCHCHANNEL = 'desertbus'
WATCHAS = 'mrphlip' # because lrrbot isn't a mod in the channel
DESERTBUS_START = config["timezone"].localize(datetime.datetime(2020, 11, 13, 10, 0))
class ModeratorActions:
def __init__(self, lrrbot, loop):
self.lrrbot = lrrbot
self.loop = loop
self.last_chat = {}
if config['log_desertbus_moderator_actions']:
self.lrrbot.reactor.add_global_handler("pubmsg", self.record_db_chat, -2)
self.lrrbot.reactor.add_global_handler("all_events", self.drop_db_events, -1)
self.lrrbot.reactor.add_global_handler("welcome", self.on_connect, 2)
self.lrrbot.reactor.scheduler.execute_every(60, self.clear_chat)
users = self.lrrbot.metadata.tables["users"]
with self.lrrbot.engine.begin() as conn:
selfrow = conn.execute(sqlalchemy.select([users.c.id]).where(users.c.name == WATCHAS)).first()
targetrow = conn.execute(sqlalchemy.select([users.c.id]).where(users.c.name == WATCHCHANNEL)).first()
if selfrow is not None and targetrow is not None:
self_channel_id, = selfrow
target_channel_id, = targetrow
topic = "chat_moderator_actions.%s.%s" % (self_channel_id, target_channel_id)
self.lrrbot.pubsub.subscribe([topic], WATCHAS)
pubsub.signals.signal(topic).connect(self.on_message)
@utils.swallow_errors
def on_message(self, sender, message):
log.info("Got message: %r", message['data'])
action = message['data']['moderation_action']
args = message['data']['args']
mod = message['data']['created_by']
if action == 'timeout':
user = args[0]
action = "Timeout: %s" % ctime.nice_duration(int(args[1]))
reason = args[2] if len(args) >= 3 else ''
last = self.last_chat.get(user.lower(), [''])[0]
elif action == 'ban':
user = args[0]
action = "Ban"
reason = args[1] if len(args) >= 2 else ''
last = self.last_chat.get(user.lower(), [''])[0]
elif action == 'unban':
user = args[0]
action = "Unban"
reason = ''
last = ''
elif action == 'untimeout':
user = args[0]
action = "Untimeout"
reason = ''
last = ''
elif action == 'delete':
user = args[0]
action = "Delete message"
reason = ''
last = args[1]
else:
user = ''
reason = repr(args)
last = ''
now = datetime.datetime.now(config["timezone"])
data = [
now.strftime("%Y-%m-%d %H:%M:%S"), # Timestamp
self.nice_time(now - DESERTBUS_START), # Timestamp (hours bussed)
user, # Offender's Username
mod, # Moderator
action, # Enforcement option/length
reason, # What was the cause of the enforcement action?
last, # Last Line
]
log.debug("Add row: %r", data)
asyncio.ensure_future(gdata.add_rows_to_spreadsheet(SPREADSHEET, [data]), loop=self.loop).add_done_callback(utils.check_exception)
def nice_time(self, s):
if isinstance(s, datetime.timedelta):
s = s.days * 86400 + s.seconds
if s < 0:
return "-" + self.nice_time(-s)
return "%d:%02d:%02d" % (s // 3600, (s // 60) % 60, s % 60)
@utils.swallow_errors
def record_db_chat(self, conn, event):
if event.target == "#" + WATCHCHANNEL:
source = irc.client.NickMask(event.source)
self.last_chat[source.nick.lower()] = (event.arguments[0], time.time())
return "NO MORE"
@utils.swallow_errors
def drop_db_events(self, conn, event):
if event.target == "#" + WATCHCHANNEL and event.type != "action":
return "NO MORE"
@utils.swallow_errors
def clear_chat(self):
cutoff = time.time() - 10*60
to_remove = [k for k, v in self.last_chat.items() if v[1] < cutoff]
for i in to_remove:
del self.last_chat[i]
def on_connect(self, conn, event):
conn.join("#" + WATCHCHANNEL)
| apache-2.0 |
Crach1015/plugin.video.superpack | zip/plugin.video.SportsDevil/lib/addonInstaller.py | 25 | 3511 | # -*- coding: utf-8 -*-
import os
import xbmc, xbmcaddon
import common
import urllib
import zipfile
from traceback import print_exc
from dialogs.dialogProgress import DialogProgress
from utils.fileUtils import getFileContent, clearDirectory
from utils.regexUtils import findall
PACKAGE_DIR = "special://home/addons/packages/"
INSTALL_DIR = "special://home/addons/"
DICT = {
'veetle': 'https://github.com/sissbruecker/xbmc-veetle-plugin/archive/master.zip',
'jtv': 'https://divingmules-repo.googlecode.com/files/plugin.video.jtv.archives-0.3.6.zip',
'youtube': 'http://ftp.hosteurope.de/mirror/xbmc.org/addons/frodo/plugin.video.youtube/plugin.video.youtube-4.4.4.zip'
}
def install(key):
entry = DICT[key]
return _install_addon(entry)
def _install_addon(url):
ri = AddonInstaller()
compressed = ri.download(url)
if compressed:
addonId = ri.install(compressed)
if addonId:
xbmc.sleep(100)
xbmc.executebuiltin('UpdateLocalAddons')
xbmc.sleep(100)
try:
_N_ = xbmcaddon.Addon(id=addonId)
common.showNotification(_N_.getAddonInfo("name"), 'Addon installed', 2000, _N_.getAddonInfo("icon"))
return True
except:
pass
return False
def isInstalled(addonId):
try:
_N_ = xbmcaddon.Addon(id=addonId)
return True
except:
return False
class AddonInstaller:
def download(self, url, destination=PACKAGE_DIR):
try:
dlg = DialogProgress()
dlg.create('SportsDevil - Installing external addon')
destination = xbmc.translatePath(destination) + os.path.basename(url)
def _report_hook(count, blocksize, totalsize):
percent = int(float(count * blocksize * 100) / totalsize)
dlg.update(percent, url, destination)
fp, _ = urllib.urlretrieve(url, destination, _report_hook)
return fp
except:
print_exc()
dlg.close()
return ""
def extract(self, fileOrPath, directory):
try:
if not directory.endswith(':') and not os.path.exists(directory):
os.mkdir(directory)
zf = zipfile.ZipFile(fileOrPath)
for _, name in enumerate(zf.namelist()):
if name.endswith('/'):
path = os.path.join(directory, name)
if os.path.exists(path):
clearDirectory(path)
else:
os.makedirs(path, 0777)
else:
outfile = open(os.path.join(directory, name), 'wb')
outfile.write(zf.read(name))
outfile.flush()
outfile.close()
return zf.filelist
except:
print_exc()
return None
def install(self, filename):
destination = xbmc.translatePath(INSTALL_DIR)
files = self.extract(filename, destination)
if files:
addonXml = filter(lambda x: x.filename.endswith('addon.xml'), files)
if addonXml:
path = os.path.join(destination, addonXml[0].filename)
content = getFileContent(path)
addonId = findall(content, '<addon id="([^"]+)"')
if addonId:
return addonId[0]
return None
| gpl-2.0 |
Beeblio/django | tests/custom_managers_regress/models.py | 38 | 1195 | """
Regression tests for custom manager classes.
"""
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
class RestrictedManager(models.Manager):
"""
A manager that filters out non-public instances.
"""
def get_queryset(self):
return super(RestrictedManager, self).get_queryset().filter(is_public=True)
@python_2_unicode_compatible
class RelatedModel(models.Model):
name = models.CharField(max_length=50)
def __str__(self):
return self.name
@python_2_unicode_compatible
class RestrictedModel(models.Model):
name = models.CharField(max_length=50)
is_public = models.BooleanField(default=False)
related = models.ForeignKey(RelatedModel)
objects = RestrictedManager()
plain_manager = models.Manager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class OneToOneRestrictedModel(models.Model):
name = models.CharField(max_length=50)
is_public = models.BooleanField(default=False)
related = models.OneToOneField(RelatedModel)
objects = RestrictedManager()
plain_manager = models.Manager()
def __str__(self):
return self.name
| bsd-3-clause |
MakerDAO/click | tests/test_commands.py | 29 | 7114 | # -*- coding: utf-8 -*-
import re
import click
def test_other_command_invoke(runner):
@click.command()
@click.pass_context
def cli(ctx):
return ctx.invoke(other_cmd, arg=42)
@click.command()
@click.argument('arg', type=click.INT)
def other_cmd(arg):
click.echo(arg)
result = runner.invoke(cli, [])
assert not result.exception
assert result.output == '42\n'
def test_other_command_forward(runner):
cli = click.Group()
@cli.command()
@click.option('--count', default=1)
def test(count):
click.echo('Count: %d' % count)
@cli.command()
@click.option('--count', default=1)
@click.pass_context
def dist(ctx, count):
ctx.forward(test)
ctx.invoke(test, count=42)
result = runner.invoke(cli, ['dist'])
assert not result.exception
assert result.output == 'Count: 1\nCount: 42\n'
def test_auto_shorthelp(runner):
@click.group()
def cli():
pass
@cli.command()
def short():
"""This is a short text."""
@cli.command()
def special_chars():
"""Login and store the token in ~/.netrc."""
@cli.command()
def long():
"""This is a long text that is too long to show as short help
and will be truncated instead."""
result = runner.invoke(cli, ['--help'])
assert re.search(
r'Commands:\n\s+'
r'long\s+This is a long text that is too long to show\.\.\.\n\s+'
r'short\s+This is a short text\.\n\s+'
r'special_chars\s+Login and store the token in ~/.netrc\.\s*',
result.output) is not None
def test_default_maps(runner):
@click.group()
def cli():
pass
@cli.command()
@click.option('--name', default='normal')
def foo(name):
click.echo(name)
result = runner.invoke(cli, ['foo'], default_map={
'foo': {'name': 'changed'}
})
assert not result.exception
assert result.output == 'changed\n'
def test_group_with_args(runner):
@click.group()
@click.argument('obj')
def cli(obj):
click.echo('obj=%s' % obj)
@cli.command()
def move():
click.echo('move')
result = runner.invoke(cli, [])
assert result.exit_code == 0
assert 'Show this message and exit.' in result.output
result = runner.invoke(cli, ['obj1'])
assert result.exit_code == 2
assert 'Error: Missing command.' in result.output
result = runner.invoke(cli, ['obj1', '--help'])
assert result.exit_code == 0
assert 'Show this message and exit.' in result.output
result = runner.invoke(cli, ['obj1', 'move'])
assert result.exit_code == 0
assert result.output == 'obj=obj1\nmove\n'
def test_base_command(runner):
import optparse
@click.group()
def cli():
pass
class OptParseCommand(click.BaseCommand):
def __init__(self, name, parser, callback):
click.BaseCommand.__init__(self, name)
self.parser = parser
self.callback = callback
def parse_args(self, ctx, args):
try:
opts, args = parser.parse_args(args)
except Exception as e:
ctx.fail(str(e))
ctx.args = args
ctx.params = vars(opts)
def get_usage(self, ctx):
return self.parser.get_usage()
def get_help(self, ctx):
return self.parser.format_help()
def invoke(self, ctx):
ctx.invoke(self.callback, ctx.args, **ctx.params)
parser = optparse.OptionParser(usage='Usage: foo test [OPTIONS]')
parser.add_option("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="don't print status messages to stdout")
def test_callback(args, filename, verbose):
click.echo(' '.join(args))
click.echo(filename)
click.echo(verbose)
cli.add_command(OptParseCommand('test', parser, test_callback))
result = runner.invoke(cli, ['test', '-f', 'test.txt', '-q',
'whatever.txt', 'whateverelse.txt'])
assert not result.exception
assert result.output.splitlines() == [
'whatever.txt whateverelse.txt',
'test.txt',
'False',
]
result = runner.invoke(cli, ['test', '--help'])
assert not result.exception
assert result.output.splitlines() == [
'Usage: foo test [OPTIONS]',
'',
'Options:',
' -h, --help show this help message and exit',
' -f FILE, --file=FILE write report to FILE',
' -q, --quiet don\'t print status messages to stdout',
]
def test_object_propagation(runner):
for chain in False, True:
@click.group(chain=chain)
@click.option('--debug/--no-debug', default=False)
@click.pass_context
def cli(ctx, debug):
if ctx.obj is None:
ctx.obj = {}
ctx.obj['DEBUG'] = debug
@cli.command()
@click.pass_context
def sync(ctx):
click.echo('Debug is %s' % (ctx.obj['DEBUG'] and 'on' or 'off'))
result = runner.invoke(cli, ['sync'])
assert result.exception is None
assert result.output == 'Debug is off\n'
def test_other_command_invoke_with_defaults(runner):
@click.command()
@click.pass_context
def cli(ctx):
return ctx.invoke(other_cmd)
@click.command()
@click.option('--foo', type=click.INT, default=42)
@click.pass_context
def other_cmd(ctx, foo):
assert ctx.info_name == 'other_cmd'
click.echo(foo)
result = runner.invoke(cli, [])
assert not result.exception
assert result.output == '42\n'
def test_invoked_subcommand(runner):
@click.group(invoke_without_command=True)
@click.pass_context
def cli(ctx):
if ctx.invoked_subcommand is None:
click.echo('no subcommand, use default')
ctx.invoke(sync)
else:
click.echo('invoke subcommand')
@cli.command()
def sync():
click.echo('in subcommand')
result = runner.invoke(cli, ['sync'])
assert not result.exception
assert result.output == 'invoke subcommand\nin subcommand\n'
result = runner.invoke(cli)
assert not result.exception
assert result.output == 'no subcommand, use default\nin subcommand\n'
def test_unprocessed_options(runner):
@click.command(context_settings=dict(
ignore_unknown_options=True
))
@click.argument('args', nargs=-1, type=click.UNPROCESSED)
@click.option('--verbose', '-v', count=True)
def cli(verbose, args):
click.echo('Verbosity: %s' % verbose)
click.echo('Args: %s' % '|'.join(args))
result = runner.invoke(cli, ['-foo', '-vvvvx', '--muhaha', 'x', 'y', '-x'])
assert not result.exception
assert result.output.splitlines() == [
'Verbosity: 4',
'Args: -foo|-x|--muhaha|x|y|-x',
]
| bsd-3-clause |
maestro-hybrid-cloud/horizon | openstack_dashboard/dashboards/project/network_topology/subnets/tables.py | 33 | 1052 | # Copyright 2015 Cisco Systems.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.dashboards.project.networks.subnets import tables
class DeleteSubnet(tables.DeleteSubnet):
failure_url = 'horizon:project:network_topology:network'
class SubnetsTable(tables.SubnetsTable):
class Meta(object):
name = "subnets"
verbose_name = _("Subnets")
row_actions = (DeleteSubnet, )
table_actions = (DeleteSubnet, )
| apache-2.0 |
invisiblearts/DRCN | drcn_main.py | 1 | 3349 | # The MIT License (MIT)
#
# Copyright (c) 2016 invisiblearts
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from keras.models import Sequential, Model
from keras.layers import Convolution2D, Input, merge
from keras.callbacks import ModelCheckpoint
from keras.utils.io_utils import HDF5Matrix
from keras.optimizers import Adam
from drcn_merge import DRCN_Merge
BATCH_SIZE = 20
input_data = Input(batch_shape=(BATCH_SIZE, 1, 41, 41), name='data')
def func_iterator(x, func, times):
assert isinstance(times, int)
if times == 1:
return func(x)
    return func_iterator(func(x), func, times - 1)  # apply func once, then recurse for the rest
def conv(channels=256, **kwargs):
return Convolution2D(channels, 3, 3, 'he_normal', border_mode='same', activation='relu', **kwargs)
embed_net = Sequential([conv(batch_input_shape=(BATCH_SIZE, 1, 41, 41)), conv()], name='Embedding Net')
infer_net = Sequential([conv(batch_input_shape=(BATCH_SIZE, 256, 41, 41))], name='Inference Net')
recons_net = Sequential([conv(batch_input_shape=(BATCH_SIZE, 256, 41, 41)), conv(1)], name='Reconstruction Net')
features = embed_net(input_data)
recurrence_list = []
reconstruct_list = []
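# Build the recursive inference passes: recurrence_list[i] holds the feature
# map after (i + 1) applications of the shared infer_net, and each depth gets
# its own reconstruction that is merged (element-wise sum by default) with the
# original input as a skip connection. The ten intermediate reconstructions
# are later combined by the DRCN_Merge layer below.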
for i in range(10):
recurrence_list.append(func_iterator(features, infer_net, i+1))
reconstruct_list.append(merge([recons_net(recurrence_list[i]), input_data]))
merged = merge(reconstruct_list, mode='concat', concat_axis=1)
DRCNMerge = DRCN_Merge(10)
out = DRCNMerge(merged)
DRCN_Model = Model(input=input_data, output=out, name='DRCN Final Model')
DRCN_Model.compile(optimizer=Adam(lr=0.00001, beta_1=0.9, beta_2=0.999), loss='mae')
train_data = HDF5Matrix('train_DRCN_data.h5', 'data', 0, 470)
train_label = HDF5Matrix('train_DRCN_label.h5', 'label', 0, 470)
test_data = HDF5Matrix('train_DRCN_data.h5', 'data', 470, 500)
test_label = HDF5Matrix('train_DRCN_label.h5', 'label', 470, 500)
with open('DRCN.yaml', 'w') as fp:
fp.write(DRCN_Model.to_yaml())
hist = DRCN_Model.fit(
train_data, train_label,
batch_size=BATCH_SIZE, nb_epoch=200,
validation_data=[test_data, test_label], shuffle='batch',
callbacks=[ModelCheckpoint('DRCN_weights.{epoch:02d}-{val_loss:.6f}.hdf5',
monitor='val_loss', verbose=0, save_best_only=False, mode='auto')])
DRCN_Model.save_weights('DRCN_weights.h5')
with open('DRCN_history.txt', 'w') as fp:
fp.write(str(hist.history))
| mit |
rismalrv/edx-platform | lms/djangoapps/ccx/views.py | 13 | 20949 | """
Views related to the Custom Courses feature.
"""
import csv
import datetime
import functools
import json
import logging
import pytz
from contextlib import contextmanager
from copy import deepcopy
from cStringIO import StringIO
from django.core.urlresolvers import reverse
from django.http import (
HttpResponse,
HttpResponseForbidden,
)
from django.contrib import messages
from django.core.exceptions import ValidationError
from django.core.validators import validate_email
from django.http import Http404
from django.shortcuts import redirect
from django.utils.translation import ugettext as _
from django.views.decorators.cache import cache_control
from django.views.decorators.csrf import ensure_csrf_cookie
from django.contrib.auth.models import User
from courseware.courses import get_course_by_id
from courseware.field_overrides import disable_overrides
from courseware.grades import iterate_grades_for
from courseware.model_data import FieldDataCache
from courseware.module_render import get_module_for_descriptor
from edxmako.shortcuts import render_to_response
from opaque_keys.edx.keys import CourseKey
from ccx_keys.locator import CCXLocator
from student.roles import CourseCcxCoachRole # pylint: disable=import-error
from student.models import CourseEnrollment
from instructor.offline_gradecalc import student_grades # pylint: disable=import-error
from instructor.views.api import _split_input_list # pylint: disable=import-error
from instructor.views.tools import get_student_from_identifier # pylint: disable=import-error
from instructor.enrollment import (
enroll_email,
unenroll_email,
get_email_params,
)
from .models import CustomCourseForEdX
from .overrides import (
clear_override_for_ccx,
get_override_for_ccx,
override_field_for_ccx,
clear_ccx_field_info_from_ccx_map,
bulk_delete_ccx_override_fields,
)
log = logging.getLogger(__name__)
TODAY = datetime.datetime.today # for patching in tests
def coach_dashboard(view):
"""
    View decorator which enforces that the user has the CCX coach role on the
    given course and translates the course_id from the Django route into a
    course object.
"""
@functools.wraps(view)
def wrapper(request, course_id):
"""
Wraps the view function, performing access check, loading the course,
and modifying the view's call signature.
"""
course_key = CourseKey.from_string(course_id)
ccx = None
if isinstance(course_key, CCXLocator):
ccx_id = course_key.ccx
ccx = CustomCourseForEdX.objects.get(pk=ccx_id)
course_key = ccx.course_id
role = CourseCcxCoachRole(course_key)
if not role.has_user(request.user):
return HttpResponseForbidden(
_('You must be a CCX Coach to access this view.'))
course = get_course_by_id(course_key, depth=None)
# if there is a ccx, we must validate that it is the ccx for this coach
if ccx is not None:
coach_ccx = get_ccx_for_coach(course, request.user)
if coach_ccx is None or coach_ccx.id != ccx.id:
return HttpResponseForbidden(
_('You must be the coach for this ccx to access this view')
)
return view(request, course, ccx)
return wrapper
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def dashboard(request, course, ccx=None):
"""
Display the CCX Coach Dashboard.
"""
# right now, we can only have one ccx per user and course
    # so, if no ccx is passed in, we can safely redirect to that
if ccx is None:
ccx = get_ccx_for_coach(course, request.user)
if ccx:
url = reverse(
'ccx_coach_dashboard',
kwargs={'course_id': CCXLocator.from_course_locator(course.id, ccx.id)}
)
return redirect(url)
context = {
'course': course,
'ccx': ccx,
}
if ccx:
ccx_locator = CCXLocator.from_course_locator(course.id, ccx.id)
schedule = get_ccx_schedule(course, ccx)
grading_policy = get_override_for_ccx(
ccx, course, 'grading_policy', course.grading_policy)
context['schedule'] = json.dumps(schedule, indent=4)
context['save_url'] = reverse(
'save_ccx', kwargs={'course_id': ccx_locator})
context['ccx_members'] = CourseEnrollment.objects.filter(course_id=ccx_locator, is_active=True)
context['gradebook_url'] = reverse(
'ccx_gradebook', kwargs={'course_id': ccx_locator})
context['grades_csv_url'] = reverse(
'ccx_grades_csv', kwargs={'course_id': ccx_locator})
context['grading_policy'] = json.dumps(grading_policy, indent=4)
context['grading_policy_url'] = reverse(
'ccx_set_grading_policy', kwargs={'course_id': ccx_locator})
else:
context['create_ccx_url'] = reverse(
'create_ccx', kwargs={'course_id': course.id})
return render_to_response('ccx/coach_dashboard.html', context)
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def create_ccx(request, course, ccx=None):
"""
Create a new CCX
"""
name = request.POST.get('name')
# prevent CCX objects from being created for deprecated course ids.
if course.id.deprecated:
messages.error(request, _(
"You cannot create a CCX from a course using a deprecated id. "
"Please create a rerun of this course in the studio to allow "
"this action."))
url = reverse('ccx_coach_dashboard', kwargs={'course_id': course.id})
return redirect(url)
ccx = CustomCourseForEdX(
course_id=course.id,
coach=request.user,
display_name=name)
ccx.save()
# Make sure start/due are overridden for entire course
start = TODAY().replace(tzinfo=pytz.UTC)
override_field_for_ccx(ccx, course, 'start', start)
override_field_for_ccx(ccx, course, 'due', None)
# Hide anything that can show up in the schedule
hidden = 'visible_to_staff_only'
for chapter in course.get_children():
override_field_for_ccx(ccx, chapter, hidden, True)
for sequential in chapter.get_children():
override_field_for_ccx(ccx, sequential, hidden, True)
for vertical in sequential.get_children():
override_field_for_ccx(ccx, vertical, hidden, True)
ccx_id = CCXLocator.from_course_locator(course.id, ccx.id) # pylint: disable=no-member
url = reverse('ccx_coach_dashboard', kwargs={'course_id': ccx_id})
return redirect(url)
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def save_ccx(request, course, ccx=None):
"""
Save changes to CCX.
"""
if not ccx:
raise Http404
def override_fields(parent, data, graded, earliest=None, ccx_ids_to_delete=None):
"""
Recursively apply CCX schedule data to CCX by overriding the
`visible_to_staff_only`, `start` and `due` fields for units in the
course.
"""
if ccx_ids_to_delete is None:
ccx_ids_to_delete = []
blocks = {
str(child.location): child
for child in parent.get_children()}
for unit in data:
block = blocks[unit['location']]
override_field_for_ccx(
ccx, block, 'visible_to_staff_only', unit['hidden'])
start = parse_date(unit['start'])
if start:
if not earliest or start < earliest:
earliest = start
override_field_for_ccx(ccx, block, 'start', start)
else:
ccx_ids_to_delete.append(get_override_for_ccx(ccx, block, 'start_id'))
clear_ccx_field_info_from_ccx_map(ccx, block, 'start')
due = parse_date(unit['due'])
if due:
override_field_for_ccx(ccx, block, 'due', due)
else:
ccx_ids_to_delete.append(get_override_for_ccx(ccx, block, 'due_id'))
clear_ccx_field_info_from_ccx_map(ccx, block, 'due')
if not unit['hidden'] and block.graded:
graded[block.format] = graded.get(block.format, 0) + 1
children = unit.get('children', None)
if children:
override_fields(block, children, graded, earliest, ccx_ids_to_delete)
return earliest, ccx_ids_to_delete
graded = {}
earliest, ccx_ids_to_delete = override_fields(course, json.loads(request.body), graded, [])
bulk_delete_ccx_override_fields(ccx, ccx_ids_to_delete)
if earliest:
override_field_for_ccx(ccx, course, 'start', earliest)
# Attempt to automatically adjust grading policy
changed = False
policy = get_override_for_ccx(
ccx, course, 'grading_policy', course.grading_policy
)
policy = deepcopy(policy)
grader = policy['GRADER']
for section in grader:
count = graded.get(section.get('type'), 0)
if count < section['min_count']:
changed = True
section['min_count'] = count
if changed:
override_field_for_ccx(ccx, course, 'grading_policy', policy)
return HttpResponse(
json.dumps({
'schedule': get_ccx_schedule(course, ccx),
'grading_policy': json.dumps(policy, indent=4)}),
content_type='application/json',
)
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def set_grading_policy(request, course, ccx=None):
"""
Set grading policy for the CCX.
"""
if not ccx:
raise Http404
override_field_for_ccx(
ccx, course, 'grading_policy', json.loads(request.POST['policy']))
url = reverse(
'ccx_coach_dashboard',
kwargs={'course_id': CCXLocator.from_course_locator(course.id, ccx.id)}
)
return redirect(url)
def validate_date(year, month, day, hour, minute):
"""
avoid corrupting db if bad dates come in
"""
valid = True
if year < 0:
valid = False
if month < 1 or month > 12:
valid = False
if day < 1 or day > 31:
valid = False
if hour < 0 or hour > 23:
valid = False
if minute < 0 or minute > 59:
valid = False
return valid
def parse_date(datestring):
"""
Generate a UTC datetime.datetime object from a string of the form
'YYYY-MM-DD HH:MM'. If string is empty or `None`, returns `None`.
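    For example, parse_date('2014-02-15 09:30') returns
    datetime.datetime(2014, 2, 15, 9, 30, tzinfo=pytz.UTC).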
"""
if datestring:
date, time = datestring.split(' ')
year, month, day = map(int, date.split('-'))
hour, minute = map(int, time.split(':'))
if validate_date(year, month, day, hour, minute):
return datetime.datetime(
year, month, day, hour, minute, tzinfo=pytz.UTC)
return None
def get_ccx_for_coach(course, coach):
"""
Looks to see if user is coach of a CCX for this course. Returns the CCX or
None.
"""
ccxs = CustomCourseForEdX.objects.filter(
course_id=course.id,
coach=coach
)
# XXX: In the future, it would be nice to support more than one ccx per
# coach per course. This is a place where that might happen.
if ccxs.exists():
return ccxs[0]
return None
def get_ccx_schedule(course, ccx):
"""
Generate a JSON serializable CCX schedule.
"""
def visit(node, depth=1):
"""
Recursive generator function which yields CCX schedule nodes.
We convert dates to string to get them ready for use by the js date
widgets, which use text inputs.
"""
for child in node.get_children():
start = get_override_for_ccx(ccx, child, 'start', None)
if start:
start = str(start)[:-9]
due = get_override_for_ccx(ccx, child, 'due', None)
if due:
due = str(due)[:-9]
hidden = get_override_for_ccx(
ccx, child, 'visible_to_staff_only',
child.visible_to_staff_only)
visited = {
'location': str(child.location),
'display_name': child.display_name,
'category': child.category,
'start': start,
'due': due,
'hidden': hidden,
}
if depth < 3:
children = tuple(visit(child, depth + 1))
if children:
visited['children'] = children
yield visited
else:
yield visited
with disable_overrides():
return tuple(visit(course))
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def ccx_schedule(request, course, ccx=None): # pylint: disable=unused-argument
"""
get json representation of ccx schedule
"""
if not ccx:
raise Http404
schedule = get_ccx_schedule(course, ccx)
json_schedule = json.dumps(schedule, indent=4)
return HttpResponse(json_schedule, mimetype='application/json')
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def ccx_invite(request, course, ccx=None):
"""
Invite users to new ccx
"""
if not ccx:
raise Http404
action = request.POST.get('enrollment-button')
identifiers_raw = request.POST.get('student-ids')
identifiers = _split_input_list(identifiers_raw)
auto_enroll = True if 'auto-enroll' in request.POST else False
email_students = True if 'email-students' in request.POST else False
for identifier in identifiers:
user = None
email = None
try:
user = get_student_from_identifier(identifier)
except User.DoesNotExist:
email = identifier
else:
email = user.email
try:
validate_email(email)
course_key = CCXLocator.from_course_locator(course.id, ccx.id)
email_params = get_email_params(course, auto_enroll, course_key=course_key, display_name=ccx.display_name)
if action == 'Enroll':
enroll_email(
course_key,
email,
auto_enroll=auto_enroll,
email_students=email_students,
email_params=email_params
)
if action == "Unenroll":
unenroll_email(course_key, email, email_students=email_students, email_params=email_params)
except ValidationError:
log.info('Invalid user name or email when trying to invite students: %s', email)
url = reverse(
'ccx_coach_dashboard',
kwargs={'course_id': CCXLocator.from_course_locator(course.id, ccx.id)}
)
return redirect(url)
def validate_student_email(email):
"""
validate student's email id
"""
error_message = None
try:
validate_email(email)
except ValidationError:
log.info(
'Invalid user name or email when trying to enroll student: %s',
email
)
if email:
error_message = _(
'Could not find a user with name or email "{email}" '
).format(email=email)
else:
error_message = _(
'Please enter a valid username or email.'
)
return error_message
@ensure_csrf_cookie
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def ccx_student_management(request, course, ccx=None):
"""Manage the enrollment of individual students in a CCX
"""
if not ccx:
raise Http404
action = request.POST.get('student-action', None)
student_id = request.POST.get('student-id', '')
user = email = None
error_message = ""
course_key = CCXLocator.from_course_locator(course.id, ccx.id)
try:
user = get_student_from_identifier(student_id)
except User.DoesNotExist:
email = student_id
error_message = validate_student_email(email)
if email and not error_message:
error_message = _(
'Could not find a user with name or email "{email}" '
).format(email=email)
else:
email = user.email
error_message = validate_student_email(email)
if error_message is None:
if action == 'add':
# by decree, no emails sent to students added this way
# by decree, any students added this way are auto_enrolled
enroll_email(course_key, email, auto_enroll=True, email_students=False)
elif action == 'revoke':
unenroll_email(course_key, email, email_students=False)
else:
messages.error(request, error_message)
url = reverse('ccx_coach_dashboard', kwargs={'course_id': course_key})
return redirect(url)
@contextmanager
def ccx_course(ccx_locator):
"""Create a context in which the course identified by course_locator exists
"""
course = get_course_by_id(ccx_locator)
yield course
def prep_course_for_grading(course, request):
"""Set up course module for overrides to function properly"""
field_data_cache = FieldDataCache.cache_for_descriptor_descendents(
course.id, request.user, course, depth=2)
course = get_module_for_descriptor(
request.user, request, course, field_data_cache, course.id, course=course
)
course._field_data_cache = {} # pylint: disable=protected-access
course.set_grading_policy(course.grading_policy)
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def ccx_gradebook(request, course, ccx=None):
"""
Show the gradebook for this CCX.
"""
if not ccx:
raise Http404
ccx_key = CCXLocator.from_course_locator(course.id, ccx.id)
with ccx_course(ccx_key) as course:
prep_course_for_grading(course, request)
enrolled_students = User.objects.filter(
courseenrollment__course_id=ccx_key,
courseenrollment__is_active=1
).order_by('username').select_related("profile")
student_info = [
{
'username': student.username,
'id': student.id,
'email': student.email,
'grade_summary': student_grades(student, request, course),
'realname': student.profile.name,
}
for student in enrolled_students
]
return render_to_response('courseware/gradebook.html', {
'students': student_info,
'course': course,
'course_id': course.id,
'staff_access': request.user.is_staff,
'ordered_grades': sorted(
course.grade_cutoffs.items(), key=lambda i: i[1], reverse=True),
})
@cache_control(no_cache=True, no_store=True, must_revalidate=True)
@coach_dashboard
def ccx_grades_csv(request, course, ccx=None):
"""
Download grades as CSV.
"""
if not ccx:
raise Http404
ccx_key = CCXLocator.from_course_locator(course.id, ccx.id)
with ccx_course(ccx_key) as course:
prep_course_for_grading(course, request)
enrolled_students = User.objects.filter(
courseenrollment__course_id=ccx_key,
courseenrollment__is_active=1
).order_by('username').select_related("profile")
grades = iterate_grades_for(course, enrolled_students)
header = None
rows = []
for student, gradeset, __ in grades:
if gradeset:
# We were able to successfully grade this student for this
# course.
if not header:
# Encode the header row in utf-8 encoding in case there are
# unicode characters
header = [section['label'].encode('utf-8')
for section in gradeset[u'section_breakdown']]
rows.append(["id", "email", "username", "grade"] + header)
percents = {
section['label']: section.get('percent', 0.0)
for section in gradeset[u'section_breakdown']
if 'label' in section
}
row_percents = [percents.get(label, 0.0) for label in header]
rows.append([student.id, student.email, student.username,
gradeset['percent']] + row_percents)
buf = StringIO()
writer = csv.writer(buf)
for row in rows:
writer.writerow(row)
return HttpResponse(buf.getvalue(), content_type='text/plain')
| agpl-3.0 |
Salat-Cx65/python-for-android | python-build/python-libs/xmpppy/xmpp/__init__.py | 212 | 1795 | # $Id: __init__.py,v 1.9 2005/03/07 09:34:51 snakeru Exp $
"""
All features of the xmpppy library are contained within separate modules.
At present there are modules:
simplexml - XML handling routines
protocol - jabber-objects (I.e. JID and different stanzas and sub-stanzas) handling routines.
debug - Jacob Lundquist's debugging module. Very handy if you like colored debug.
auth - Non-SASL and SASL stuff. You will need it to auth as a client or transport.
transports - low level connection handling. TCP and TLS currently. HTTP support planned.
roster - simple roster for use in clients.
dispatcher - decision-making logic. Handles all hooks. It is the first to take control of fresh stanzas.
features - assorted functionality that wasn't worth separating into its own module
browser - DISCO server framework. Allows to build dynamic disco tree.
filetransfer - Currently contains only IBB stuff. Can be used for bot-to-bot transfers.
Most of the classes defined in these modules are subclasses of the PlugIn
class, so they share a single set of methods that let you assemble a
full-featured XMPP client. For every instance of a PlugIn class the 'owner' is
the class into which the plugin was plugged. While plugging in, such an
instance usually binds some of the owner's methods to its own for easy access.
All session-specific info is stored either in the PlugIn instance or in the
owner's instance. This is considered unwieldy, and there are plans to port the
'Session' class from the xmppd.py project for storing all session-related
info. However, if you do not access instance variables directly and use only
methods to access all values, you should not have any problems.
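A minimal usage sketch (for illustration only; the JID and password are
placeholders, and exact call signatures may vary slightly between versions):
  import xmpp
  jid = xmpp.protocol.JID('[email protected]')
  cl = xmpp.Client(jid.getDomain(), debug=[])
  cl.connect()
  cl.auth(jid.getNode(), 'secret')
  cl.send(xmpp.protocol.Message('[email protected]', 'Hello from xmpppy'))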
"""
import simplexml,protocol,debug,auth,transports,roster,dispatcher,features,browser,filetransfer,commands
from client import *
from protocol import *
| apache-2.0 |
tcheehow/MissionPlanner | Lib/site-packages/numpy/distutils/from_template.py | 51 | 7890 | #!"C:\Users\hog\Documents\Visual Studio 2010\Projects\ArdupilotMega\ArdupilotMega\bin\Debug\ipy.exe"
"""
process_file(filename)
takes templated file .xxx.src and produces .xxx file where .xxx
is .pyf .f90 or .f using the following template rules:
'<..>' denotes a template.
All function and subroutine blocks in a source file with names that
contain '<..>' will be replicated according to the rules in '<..>'.
The number of comma-separated words in '<..>' will determine the number of
replicates.
'<..>' may have two different forms, named and short. For example,
named:
<p=d,s,z,c> where anywhere inside a block '<p>' will be replaced with
'd', 's', 'z', and 'c' for each replicate of the block.
<_c> is already defined: <_c=s,d,c,z>
<_t> is already defined: <_t=real,double precision,complex,double complex>
short:
<s,d,c,z>, a short form of the named, useful when no <p> appears inside
a block.
In general, '<..>' contains a comma separated list of arbitrary
expressions. If one of these expressions must contain a comma, '<' or '>',
then prepend that character with a backslash.
If an expression matches '\\<index>' then it will be replaced
by the <index>-th expression.
Note that all '<..>' forms in a block must have the same number of
comma-separated entries.
Predefined named template rules:
<prefix=s,d,c,z>
<ftype=real,double precision,complex,double complex>
<ftypereal=real,double precision,\\0,\\1>
<ctype=float,double,complex_float,complex_double>
<ctypereal=float,double,\\0,\\1>
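For example (an illustrative snippet, not taken from a real source file), a
templated block such as
      subroutine <prefix>scal(n, a, x)
      <ftype> a, x(n)
      end subroutine <prefix>scal
would be replicated four times, with the first copy reading
      subroutine sscal(n, a, x)
      real a, x(n)
      end subroutine sscal
and the remaining copies using d/double precision, c/complex and
z/double complex respectively.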
"""
__all__ = ['process_str','process_file']
import os
import sys
import re
routine_start_re = re.compile(r'(\n|\A)(( (\$|\*))|)\s*(subroutine|function)\b',re.I)
routine_end_re = re.compile(r'\n\s*end\s*(subroutine|function)\b.*(\n|\Z)',re.I)
function_start_re = re.compile(r'\n (\$|\*)\s*function\b',re.I)
def parse_structure(astr):
""" Return a list of tuples for each function or subroutine each
tuple is the start and end of a subroutine or function to be
expanded.
"""
spanlist = []
ind = 0
while 1:
m = routine_start_re.search(astr,ind)
if m is None:
break
start = m.start()
if function_start_re.match(astr,start,m.end()):
while 1:
i = astr.rfind('\n',ind,start)
if i==-1:
break
start = i
if astr[i:i+7]!='\n $':
break
start += 1
m = routine_end_re.search(astr,m.end())
ind = end = m and m.end()-1 or len(astr)
spanlist.append((start,end))
return spanlist
template_re = re.compile(r"<\s*(\w[\w\d]*)\s*>")
named_re = re.compile(r"<\s*(\w[\w\d]*)\s*=\s*(.*?)\s*>")
list_re = re.compile(r"<\s*((.*?))\s*>")
def find_repl_patterns(astr):
reps = named_re.findall(astr)
names = {}
for rep in reps:
name = rep[0].strip() or unique_key(names)
repl = rep[1].replace('\,','@comma@')
thelist = conv(repl)
names[name] = thelist
return names
item_re = re.compile(r"\A\\(?P<index>\d+)\Z")
def conv(astr):
b = astr.split(',')
l = [x.strip() for x in b]
for i in range(len(l)):
m = item_re.match(l[i])
if m:
j = int(m.group('index'))
l[i] = l[j]
return ','.join(l)
def unique_key(adict):
""" Obtain a unique key given a dictionary."""
allkeys = adict.keys()
done = False
n = 1
while not done:
newkey = '__l%s' % (n)
if newkey in allkeys:
n += 1
else:
done = True
return newkey
template_name_re = re.compile(r'\A\s*(\w[\w\d]*)\s*\Z')
def expand_sub(substr,names):
substr = substr.replace('\>','@rightarrow@')
substr = substr.replace('\<','@leftarrow@')
lnames = find_repl_patterns(substr)
substr = named_re.sub(r"<\1>",substr) # get rid of definition templates
def listrepl(mobj):
thelist = conv(mobj.group(1).replace('\,','@comma@'))
if template_name_re.match(thelist):
return "<%s>" % (thelist)
name = None
for key in lnames.keys(): # see if list is already in dictionary
if lnames[key] == thelist:
name = key
if name is None: # this list is not in the dictionary yet
name = unique_key(lnames)
lnames[name] = thelist
return "<%s>" % name
substr = list_re.sub(listrepl, substr) # convert all lists to named templates
# newnames are constructed as needed
numsubs = None
base_rule = None
rules = {}
for r in template_re.findall(substr):
if r not in rules:
thelist = lnames.get(r,names.get(r,None))
if thelist is None:
raise ValueError('No replicates found for <%s>' % (r))
if r not in names and not thelist.startswith('_'):
names[r] = thelist
rule = [i.replace('@comma@',',') for i in thelist.split(',')]
num = len(rule)
if numsubs is None:
numsubs = num
rules[r] = rule
base_rule = r
elif num == numsubs:
rules[r] = rule
else:
print("Mismatch in number of replacements (base <%s=%s>)"\
" for <%s=%s>. Ignoring." % (base_rule,
','.join(rules[base_rule]),
r,thelist))
if not rules:
return substr
def namerepl(mobj):
name = mobj.group(1)
return rules.get(name,(k+1)*[name])[k]
newstr = ''
for k in range(numsubs):
newstr += template_re.sub(namerepl, substr) + '\n\n'
newstr = newstr.replace('@rightarrow@','>')
newstr = newstr.replace('@leftarrow@','<')
return newstr
def process_str(allstr):
newstr = allstr
writestr = '' #_head # using _head will break free-format files
struct = parse_structure(newstr)
oldend = 0
names = {}
names.update(_special_names)
for sub in struct:
writestr += newstr[oldend:sub[0]]
names.update(find_repl_patterns(newstr[oldend:sub[0]]))
writestr += expand_sub(newstr[sub[0]:sub[1]],names)
oldend = sub[1]
writestr += newstr[oldend:]
return writestr
include_src_re = re.compile(r"(\n|\A)\s*include\s*['\"](?P<name>[\w\d./\\]+[.]src)['\"]",re.I)
def resolve_includes(source):
d = os.path.dirname(source)
fid = open(source)
lines = []
for line in fid.readlines():
m = include_src_re.match(line)
if m:
fn = m.group('name')
if not os.path.isabs(fn):
fn = os.path.join(d,fn)
if os.path.isfile(fn):
print ('Including file',fn)
lines.extend(resolve_includes(fn))
else:
lines.append(line)
else:
lines.append(line)
fid.close()
return lines
def process_file(source):
lines = resolve_includes(source)
return process_str(''.join(lines))
_special_names = find_repl_patterns('''
<_c=s,d,c,z>
<_t=real,double precision,complex,double complex>
<prefix=s,d,c,z>
<ftype=real,double precision,complex,double complex>
<ctype=float,double,complex_float,complex_double>
<ftypereal=real,double precision,\\0,\\1>
<ctypereal=float,double,\\0,\\1>
''')
if __name__ == "__main__":
try:
file = sys.argv[1]
except IndexError:
fid = sys.stdin
outfile = sys.stdout
else:
fid = open(file,'r')
(base, ext) = os.path.splitext(file)
newname = base
outfile = open(newname,'w')
allstr = fid.read()
writestr = process_str(allstr)
outfile.write(writestr)
| gpl-3.0 |
SlideAtlas/SlideAtlas-Server | testing/unit/test_models.py | 1 | 3917 | import os
import sys
import logging
from bson import ObjectId
logging.basicConfig(level=logging.INFO)
slideatlaspath = os.path.abspath(os.path.join(os.path.dirname(__file__), "../.."))
sys.path.append(slideatlaspath)
from slideatlas import models
from slideatlas.models import Image
from slideatlas.models import ImageStore, View, Session
import base64
def test_image_access():
obj = ImageStore.objects(dbname="demo")[0]
assert(obj != None)
print obj._cls, obj.label
with obj:
img = Image.objects()[2]
assert(img!=None)
logger.info("Found image labelled %s"%(img.label))
def test_view_access():
obj = ImageStore.objects(dbname="demo")[0]
assert(obj != None)
print obj._cls, obj.label
with obj:
aview = View.objects(image=ObjectId("4e6ec90183ff8d11c8000001"))[0]
assert(aview != None)
logger.info("Found view : %s"%(str(aview.__dict__)))
def test_sess_access():
obj = ImageStore.objects(dbname="ptiffayodhya")[0]
assert(obj != None)
print obj._cls, obj.label
with obj:
asess = Session.objects.first()
assert(asess != None)
logger.info("Found sess : %s"%(str(asess.__dict__)))
def test_collection_access():
""" Snippet to test collection access """
all_collections_query = models.Collection.objects\
.no_dereference()
can_admin_collections = all_collections_query.can_access(models.Operation.admin)
for col in all_collections_query:
print col.label
def test_and_fix__macro_thumbs():
# params
viewcol = View._get_collection()
which = "macro"
force = False
made = 0
errored = 0
skipped = 0
total = 0
for viewobj in viewcol.find():
total = total + 1
logger.info("Total: %d" % total)
try:
# Make thumbnail
if "thumbs" not in viewobj:
viewobj["thumbs"] = {}
if force or which not in viewobj["thumbs"]:
# Refresh the thumbnail
if which not in ["macro"]:
# Only know how to make macro image
                    # Todo: add support for label thumbnails as well
raise Exception("%s thumbnail creation not supported" % which)
# Make the macro thumb
# Get the image store and image id and off load the request
istore = models.ImageStore.objects.get(id=viewobj["ViewerRecords"][0]["Database"])
# All image stores support macro thumb
with istore:
thumbimgdata = istore.make_thumb(
models.Image.objects.get(id=viewobj["ViewerRecords"][0]["Image"]))
viewcol.update({"_id": viewobj["_id"]},
{"$set" : { "thumbs." + which: base64.b64encode(thumbimgdata)}})
made = made + 1
logger.info("Made: %d" % made)
else:
skipped = skipped + 1
logger.info("Skipped: %d" % skipped)
except Exception as e:
errored = errored + 1
logger.info("Errored: %d, %s" % (errored, e.message))
logger.info("Made: %d" % made)
logger.info("Skipped: %d" % skipped)
logger.info("Errored: %d" % errored)
if __name__ == "__main__":
"""
    Run a few tests.
    This module will eventually be imported from the tiff server.
"""
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# This is required so that model gets registered
from slideatlas import create_app
app = create_app()
# test_ptiff_tile_store()
# create_ptiff_store()
# test_getlist()
# test_items_mongoengine()
# test_modify_store()
# test_image_access()
# test_view_access()
# test_sess_access()
# test_collection_access()
with app.app_context():
test_and_fix__macro_thumbs()
| apache-2.0 |
sudheesh001/pontoon | pontoon/sync/tasks.py | 1 | 7172 | import logging
from django.conf import settings
from django.db import connection, transaction
from django.utils import timezone
from pontoon.administration.vcs import CommitToRepositoryException
from pontoon.base.models import ChangedEntityLocale, Project, Repository
from pontoon.base.tasks import PontoonTask
from pontoon.sync.changeset import ChangeSet
from pontoon.sync.core import (
commit_changes,
pull_changes,
sync_project as perform_sync_project,
serial_task,
update_project_stats,
update_translations,
)
from pontoon.sync.models import ProjectSyncLog, RepositorySyncLog, SyncLog
from pontoon.sync.vcs_models import VCSProject
log = logging.getLogger(__name__)
def get_or_fail(ModelClass, message=None, **kwargs):
try:
return ModelClass.objects.get(**kwargs)
except ModelClass.DoesNotExist:
if message is not None:
log.error(message)
raise
@serial_task(settings.SYNC_TASK_TIMEOUT, base=PontoonTask, lock_key='project={0}')
def sync_project(self, project_pk, sync_log_pk, no_pull=False, no_commit=False, force=False):
    """Fetch the project with the given PK and perform sync on it."""
    db_project = get_or_fail(Project, pk=project_pk,
        message='Could not sync project with pk={0}, not found.'.format(project_pk))
    sync_log = get_or_fail(SyncLog, pk=sync_log_pk,
        message=('Could not sync project {0}, log with pk={1} not found.'
                 .format(db_project.slug, sync_log_pk)))

    log.info('Syncing project {0}.'.format(db_project.slug))

    # Mark "now" at the start of sync to avoid messing with
    # translations submitted during sync.
    now = timezone.now()

    project_sync_log = ProjectSyncLog.objects.create(
        sync_log=sync_log,
        project=db_project,
        start_time=now
    )

    if not no_pull:
        repos_changed = pull_changes(db_project)
    else:
        repos_changed = True  # Assume changed.

    # If the repos haven't changed since the last sync and there are
    # no Pontoon-side changes for this project, quit early.
    if not force and not repos_changed and not db_project.needs_sync:
        log.info('Skipping project {0}, no changes detected.'.format(db_project.slug))
        project_sync_log.skipped = True
        project_sync_log.skipped_end_time = timezone.now()
        project_sync_log.save(update_fields=('skipped', 'skipped_end_time'))
        return

    perform_sync_project(db_project, now)
    for repo in db_project.repositories.all():
        sync_project_repo.delay(
            project_pk,
            repo.pk,
            project_sync_log.pk,
            now,
            no_pull=no_pull,
            no_commit=no_commit
        )

    log.info('Synced resources for project {0}.'.format(db_project.slug))
@serial_task(settings.SYNC_TASK_TIMEOUT, base=PontoonTask, lock_key='project={0},repo={1}')
def sync_project_repo(self, project_pk, repo_pk, project_sync_log_pk, now,
                      no_pull=False, no_commit=False):
    db_project = get_or_fail(Project, pk=project_pk,
        message='Could not sync project with pk={0}, not found.'.format(project_pk))
    repo = get_or_fail(Repository, pk=repo_pk,
        message='Could not sync repo with pk={0}, not found.'.format(project_pk))
    project_sync_log = get_or_fail(ProjectSyncLog, pk=project_sync_log_pk,
        message=('Could not sync project {0}, log with pk={1} not found.'
                 .format(db_project.slug, project_sync_log_pk)))

    repo_sync_log = RepositorySyncLog.objects.create(
        project_sync_log=project_sync_log,
        repository=repo,
        start_time=timezone.now()
    )

    # Pull VCS changes in case we're on a different worker than the one
    # sync started on.
    if not no_pull:
        pull_changes(db_project)

    if len(repo.locales) < 1:
        log.warning('Could not sync repo `{0}`, no locales found within.'
                    .format(repo.url))
        repo_sync_log.end_time = timezone.now()
        repo_sync_log.save(update_fields=['end_time'])
        return

    vcs_project = VCSProject(db_project, locales=repo.locales)
    for locale in repo.locales:
        try:
            with transaction.atomic():
                changeset = ChangeSet(db_project, vcs_project, now)
                update_translations(db_project, vcs_project, locale, changeset)
                changeset.execute()
                update_project_stats(db_project, vcs_project, changeset, locale)

                # Clear out the "has_changed" markers now that we've finished
                # syncing.
                (ChangedEntityLocale.objects
                    .filter(entity__resource__project=db_project,
                            locale=locale,
                            when__lte=now)
                    .delete())
                db_project.has_changed = False
                db_project.save(update_fields=['has_changed'])

                # Clean up any duplicate approvals at the end of sync right
                # before we commit the transaction to avoid race conditions.
                with connection.cursor() as cursor:
                    cursor.execute("""
                        UPDATE base_translation AS b
                        SET approved = FALSE, approved_date = NULL
                        WHERE
                          id IN
                            (SELECT trans.id FROM base_translation AS trans
                             LEFT JOIN base_entity AS ent ON ent.id = trans.entity_id
                             LEFT JOIN base_resource AS res ON res.id = ent.resource_id
                             WHERE locale_id = %(locale_id)s
                               AND res.project_id = %(project_id)s)
                          AND approved_date !=
                            (SELECT max(approved_date)
                             FROM base_translation
                             WHERE entity_id = b.entity_id
                               AND locale_id = b.locale_id
                               AND (plural_form = b.plural_form OR plural_form IS NULL));
                    """, {
                        'locale_id': locale.id,
                        'project_id': db_project.id
                    })

                # Perform the commit last so that, if it succeeds, there is
                # nothing after it to fail.
                if not no_commit and locale in changeset.locales_to_commit:
                    commit_changes(db_project, vcs_project, changeset, locale)
        except CommitToRepositoryException as err:
            # Transaction aborted, log and move on to the next locale.
            log.warning(
                'Failed to sync locale {locale} for project {project} due to '
                'commit error: {error}'.format(
                    locale=locale.code,
                    project=db_project.slug,
                    error=err,
                )
            )

    repo_sync_log.end_time = timezone.now()
    repo_sync_log.save()

    log.info('Synced translations for project {0} in locales {1}.'.format(
        db_project.slug, ','.join(locale.code for locale in repo.locales)
    ))
| bsd-3-clause |
mfherbst/spack | var/spack/repos/builtin/packages/perl-mozilla-ca/package.py | 5 | 1577 | ##############################################################################
# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, [email protected], All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/spack/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class PerlMozillaCa(PerlPackage):
"""Mozilla's CA cert bundle in PEM format"""
homepage = "http://search.cpan.org/~abh/Mozilla-CA-20160104/lib/Mozilla/CA.pm"
url = "http://search.cpan.org/CPAN/authors/id/A/AB/ABH/Mozilla-CA-20160104.tar.gz"
version('20160104', '1b91edb15953a8188f011ab5ff433300')
| lgpl-2.1 |
yb-kim/gemV | src/arch/x86/isa/insts/general_purpose/rotate_and_shift/__init__.py | 91 | 2283 | # Copyright (c) 2007 The Hewlett-Packard Development Company
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Gabe Black
categories = ["rotate",
"shift"]
microcode = ""
for category in categories:
exec "import %s as cat" % category
microcode += cat.microcode
| bsd-3-clause |
LingxiaoJIA/gem5 | tests/configs/tsunami-simple-atomic.py | 64 | 2352 | # Copyright (c) 2012 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Andreas Sandberg
from m5.objects import *
from alpha_generic import *
root = LinuxAlphaFSSystemUniprocessor(mem_mode='atomic',
                                      mem_class=SimpleMemory,
                                      cpu_class=AtomicSimpleCPU).create_root()
| bsd-3-clause |
SIFTeam/enigma2 | lib/python/Components/Sensors.py | 27 | 2023 | from Components.FanControl import fancontrol
class Sensors:
    # (type, name, unit, directory)
    TYPE_TEMPERATURE = 0
    # (type, name, unit, fanid)
    TYPE_FAN_RPM = 1

    def __init__(self):
        # (type, name, unit, sensor_specific_dict/list)
        self.sensors_list = []
        self.addSensors()

    def getSensorsCount(self, type = None):
        if type is None:
            return len(self.sensors_list)
        count = 0
        for sensor in self.sensors_list:
            if sensor[0] == type:
                count += 1
        return count

    # returns a list of sensorids of type "type"
    def getSensorsList(self, type = None):
        if type is None:
            return range(len(self.sensors_list))
        list = []
        for sensorid in range(len(self.sensors_list)):
            if self.sensors_list[sensorid][0] == type:
                list.append(sensorid)
        return list

    def getSensorType(self, sensorid):
        return self.sensors_list[sensorid][0]

    def getSensorName(self, sensorid):
        return self.sensors_list[sensorid][1]

    def getSensorValue(self, sensorid):
        value = -1
        sensor = self.sensors_list[sensorid]
        if sensor[0] == self.TYPE_TEMPERATURE:
            f = open("%s/value" % sensor[3], "r")
            value = int(f.readline().strip())
            f.close()
        elif sensor[0] == self.TYPE_FAN_RPM:
            value = fancontrol.getFanSpeed(sensor[3])
        return value

    def getSensorUnit(self, sensorid):
        return self.sensors_list[sensorid][2]

    def addSensors(self):
        import os
        if os.path.exists("/proc/stb/sensors"):
            for dirname in os.listdir("/proc/stb/sensors"):
                if dirname.find("temp", 0, 4) == 0:
                    f = open("/proc/stb/sensors/%s/name" % dirname, "r")
                    name = f.readline().strip()
                    f.close()
                    f = open("/proc/stb/sensors/%s/unit" % dirname, "r")
                    unit = f.readline().strip()
                    f.close()
                    self.sensors_list.append((self.TYPE_TEMPERATURE, name, unit, "/proc/stb/sensors/%s" % dirname))
        for fanid in range(fancontrol.getFanCount()):
            if fancontrol.hasRPMSensor(fanid):
                self.sensors_list.append((self.TYPE_FAN_RPM, _("Fan %d") % (fanid + 1), "rpm", fanid))
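
# Illustrative usage (not part of the original enigma2 source), assuming the
# module-level `sensors` singleton created below:
#
#   for sid in sensors.getSensorsList(Sensors.TYPE_TEMPERATURE):
#       print "%s: %s %s" % (sensors.getSensorName(sid),
#                            sensors.getSensorValue(sid),
#                            sensors.getSensorUnit(sid))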
sensors = Sensors() | gpl-2.0 |
lucalianas/ProMort | promort/reviews_manager/migrations/0012_auto_20170522_1045.py | 2 | 2680 | # -*- coding: utf-8 -*-
# Copyright (c) 2019, CRS4
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
# the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from __future__ import unicode_literals
from django.db import migrations
from uuid import uuid4
def _get_slide_index(slide_label):
    return slide_label.split('-')[-1]
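
# Illustrative note (not part of the original ProMort source): the slide index is
# assumed to be the last dash-separated token of the slide label, e.g.
# _get_slide_index('case_7-3') would return '3'.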
def update_rois_annotations(apps, schema_editor):
    ROIsAnnotation = apps.get_model('reviews_manager', 'ROIsAnnotation')
    for annotation in ROIsAnnotation.objects.all():
        annotation_label = uuid4().hex
        annotation.label = annotation_label
        annotation.save()
        for step in annotation.steps.all():
            slide_index = _get_slide_index(step.slide.id)
            step.label = '%s-%s' % (annotation_label, slide_index)
            step.save()


def update_clinical_annotations(apps, schema_editor):
    ClinicalAnnotation = apps.get_model('reviews_manager', 'ClinicalAnnotation')
    for annotation in ClinicalAnnotation.objects.all():
        if annotation.reviewer == annotation.rois_review.reviewer:
            annotation_label = annotation.rois_review.label
        else:
            annotation_label = uuid4().hex
        annotation.label = annotation_label
        annotation.save()
        for step in annotation.steps.all():
            slide_index = _get_slide_index(step.slide.id)
            step.label = '%s-%s' % (annotation_label, slide_index)
            step.save()


class Migration(migrations.Migration):

    dependencies = [
        ('reviews_manager', '0011_auto_20170522_1045'),
    ]

    operations = [
        migrations.RunPython(update_rois_annotations),
        migrations.RunPython(update_clinical_annotations),
    ]
| mit |
Easy-as-Bit/p2pool | p2pool/work.py | 52 | 23955 | from __future__ import division
import base64
import random
import re
import sys
import time
from twisted.internet import defer
from twisted.python import log
import bitcoin.getwork as bitcoin_getwork, bitcoin.data as bitcoin_data
from bitcoin import helper, script, worker_interface
from util import forest, jsonrpc, variable, deferral, math, pack
import p2pool, p2pool.data as p2pool_data
class WorkerBridge(worker_interface.WorkerBridge):
COINBASE_NONCE_LENGTH = 8
def __init__(self, node, my_pubkey_hash, donation_percentage, merged_urls, worker_fee):
worker_interface.WorkerBridge.__init__(self)
self.recent_shares_ts_work = []
self.node = node
self.my_pubkey_hash = my_pubkey_hash
self.donation_percentage = donation_percentage
self.worker_fee = worker_fee
self.net = self.node.net.PARENT
self.running = True
self.pseudoshare_received = variable.Event()
self.share_received = variable.Event()
self.local_rate_monitor = math.RateMonitor(10*60)
self.local_addr_rate_monitor = math.RateMonitor(10*60)
self.removed_unstales_var = variable.Variable((0, 0, 0))
self.removed_doa_unstales_var = variable.Variable(0)
self.my_share_hashes = set()
self.my_doa_share_hashes = set()
self.tracker_view = forest.TrackerView(self.node.tracker, forest.get_attributedelta_type(dict(forest.AttributeDelta.attrs,
my_count=lambda share: 1 if share.hash in self.my_share_hashes else 0,
my_doa_count=lambda share: 1 if share.hash in self.my_doa_share_hashes else 0,
my_orphan_announce_count=lambda share: 1 if share.hash in self.my_share_hashes and share.share_data['stale_info'] == 'orphan' else 0,
my_dead_announce_count=lambda share: 1 if share.hash in self.my_share_hashes and share.share_data['stale_info'] == 'doa' else 0,
)))
@self.node.tracker.verified.removed.watch
def _(share):
if share.hash in self.my_share_hashes and self.node.tracker.is_child_of(share.hash, self.node.best_share_var.value):
assert share.share_data['stale_info'] in [None, 'orphan', 'doa'] # we made these shares in this instance
self.removed_unstales_var.set((
self.removed_unstales_var.value[0] + 1,
self.removed_unstales_var.value[1] + (1 if share.share_data['stale_info'] == 'orphan' else 0),
self.removed_unstales_var.value[2] + (1 if share.share_data['stale_info'] == 'doa' else 0),
))
if share.hash in self.my_doa_share_hashes and self.node.tracker.is_child_of(share.hash, self.node.best_share_var.value):
self.removed_doa_unstales_var.set(self.removed_doa_unstales_var.value + 1)
# MERGED WORK
self.merged_work = variable.Variable({})
@defer.inlineCallbacks
def set_merged_work(merged_url, merged_userpass):
merged_proxy = jsonrpc.HTTPProxy(merged_url, dict(Authorization='Basic ' + base64.b64encode(merged_userpass)))
while self.running:
auxblock = yield deferral.retry('Error while calling merged getauxblock on %s:' % (merged_url,), 30)(merged_proxy.rpc_getauxblock)()
self.merged_work.set(math.merge_dicts(self.merged_work.value, {auxblock['chainid']: dict(
hash=int(auxblock['hash'], 16),
target='p2pool' if auxblock['target'] == 'p2pool' else pack.IntType(256).unpack(auxblock['target'].decode('hex')),
merged_proxy=merged_proxy,
)}))
yield deferral.sleep(1)
for merged_url, merged_userpass in merged_urls:
set_merged_work(merged_url, merged_userpass)
@self.merged_work.changed.watch
def _(new_merged_work):
print 'Got new merged mining work!'
# COMBINE WORK
self.current_work = variable.Variable(None)
def compute_work():
t = self.node.bitcoind_work.value
bb = self.node.best_block_header.value
if bb is not None and bb['previous_block'] == t['previous_block'] and self.node.net.PARENT.POW_FUNC(bitcoin_data.block_header_type.pack(bb)) <= t['bits'].target:
print 'Skipping from block %x to block %x!' % (bb['previous_block'],
bitcoin_data.hash256(bitcoin_data.block_header_type.pack(bb)))
t = dict(
version=bb['version'],
previous_block=bitcoin_data.hash256(bitcoin_data.block_header_type.pack(bb)),
bits=bb['bits'], # not always true
coinbaseflags='',
height=t['height'] + 1,
time=bb['timestamp'] + 600, # better way?
transactions=[],
transaction_fees=[],
merkle_link=bitcoin_data.calculate_merkle_link([None], 0),
subsidy=self.node.net.PARENT.SUBSIDY_FUNC(self.node.bitcoind_work.value['height']),
last_update=self.node.bitcoind_work.value['last_update'],
)
self.current_work.set(t)
self.node.bitcoind_work.changed.watch(lambda _: compute_work())
self.node.best_block_header.changed.watch(lambda _: compute_work())
compute_work()
self.new_work_event = variable.Event()
@self.current_work.transitioned.watch
def _(before, after):
# trigger LP if version/previous_block/bits changed or transactions changed from nothing
if any(before[x] != after[x] for x in ['version', 'previous_block', 'bits']) or (not before['transactions'] and after['transactions']):
self.new_work_event.happened()
self.merged_work.changed.watch(lambda _: self.new_work_event.happened())
self.node.best_share_var.changed.watch(lambda _: self.new_work_event.happened())
def stop(self):
self.running = False
def get_stale_counts(self):
'''Returns (orphans, doas), total, (orphans_recorded_in_chain, doas_recorded_in_chain)'''
my_shares = len(self.my_share_hashes)
my_doa_shares = len(self.my_doa_share_hashes)
delta = self.tracker_view.get_delta_to_last(self.node.best_share_var.value)
my_shares_in_chain = delta.my_count + self.removed_unstales_var.value[0]
my_doa_shares_in_chain = delta.my_doa_count + self.removed_doa_unstales_var.value
orphans_recorded_in_chain = delta.my_orphan_announce_count + self.removed_unstales_var.value[1]
doas_recorded_in_chain = delta.my_dead_announce_count + self.removed_unstales_var.value[2]
my_shares_not_in_chain = my_shares - my_shares_in_chain
my_doa_shares_not_in_chain = my_doa_shares - my_doa_shares_in_chain
return (my_shares_not_in_chain - my_doa_shares_not_in_chain, my_doa_shares_not_in_chain), my_shares, (orphans_recorded_in_chain, doas_recorded_in_chain)
def get_user_details(self, username):
contents = re.split('([+/])', username)
assert len(contents) % 2 == 1
user, contents2 = contents[0], contents[1:]
desired_pseudoshare_target = None
desired_share_target = None
for symbol, parameter in zip(contents2[::2], contents2[1::2]):
if symbol == '+':
try:
desired_pseudoshare_target = bitcoin_data.difficulty_to_target(float(parameter))
except:
if p2pool.DEBUG:
log.err()
elif symbol == '/':
try:
desired_share_target = bitcoin_data.difficulty_to_target(float(parameter))
except:
if p2pool.DEBUG:
log.err()
if random.uniform(0, 100) < self.worker_fee:
pubkey_hash = self.my_pubkey_hash
else:
try:
pubkey_hash = bitcoin_data.address_to_pubkey_hash(user, self.node.net.PARENT)
except: # XXX blah
pubkey_hash = self.my_pubkey_hash
return user, pubkey_hash, desired_share_target, desired_pseudoshare_target
def preprocess_request(self, user):
if (self.node.p2p_node is None or len(self.node.p2p_node.peers) == 0) and self.node.net.PERSIST:
raise jsonrpc.Error_for_code(-12345)(u'p2pool is not connected to any peers')
if time.time() > self.current_work.value['last_update'] + 60:
raise jsonrpc.Error_for_code(-12345)(u'lost contact with bitcoind')
user, pubkey_hash, desired_share_target, desired_pseudoshare_target = self.get_user_details(user)
return pubkey_hash, desired_share_target, desired_pseudoshare_target
def _estimate_local_hash_rate(self):
if len(self.recent_shares_ts_work) == 50:
hash_rate = sum(work for ts, work in self.recent_shares_ts_work[1:])//(self.recent_shares_ts_work[-1][0] - self.recent_shares_ts_work[0][0])
if hash_rate > 0:
return hash_rate
return None
def get_local_rates(self):
miner_hash_rates = {}
miner_dead_hash_rates = {}
datums, dt = self.local_rate_monitor.get_datums_in_last()
for datum in datums:
miner_hash_rates[datum['user']] = miner_hash_rates.get(datum['user'], 0) + datum['work']/dt
if datum['dead']:
miner_dead_hash_rates[datum['user']] = miner_dead_hash_rates.get(datum['user'], 0) + datum['work']/dt
return miner_hash_rates, miner_dead_hash_rates
def get_local_addr_rates(self):
addr_hash_rates = {}
datums, dt = self.local_addr_rate_monitor.get_datums_in_last()
for datum in datums:
addr_hash_rates[datum['pubkey_hash']] = addr_hash_rates.get(datum['pubkey_hash'], 0) + datum['work']/dt
return addr_hash_rates
def get_work(self, pubkey_hash, desired_share_target, desired_pseudoshare_target):
if self.node.best_share_var.value is None and self.node.net.PERSIST:
raise jsonrpc.Error_for_code(-12345)(u'p2pool is downloading shares')
if self.merged_work.value:
tree, size = bitcoin_data.make_auxpow_tree(self.merged_work.value)
mm_hashes = [self.merged_work.value.get(tree.get(i), dict(hash=0))['hash'] for i in xrange(size)]
mm_data = '\xfa\xbemm' + bitcoin_data.aux_pow_coinbase_type.pack(dict(
merkle_root=bitcoin_data.merkle_hash(mm_hashes),
size=size,
nonce=0,
))
mm_later = [(aux_work, mm_hashes.index(aux_work['hash']), mm_hashes) for chain_id, aux_work in self.merged_work.value.iteritems()]
else:
mm_data = ''
mm_later = []
tx_hashes = [bitcoin_data.hash256(bitcoin_data.tx_type.pack(tx)) for tx in self.current_work.value['transactions']]
tx_map = dict(zip(tx_hashes, self.current_work.value['transactions']))
previous_share = self.node.tracker.items[self.node.best_share_var.value] if self.node.best_share_var.value is not None else None
if previous_share is None:
share_type = p2pool_data.Share
else:
previous_share_type = type(previous_share)
if previous_share_type.SUCCESSOR is None or self.node.tracker.get_height(previous_share.hash) < self.node.net.CHAIN_LENGTH:
share_type = previous_share_type
else:
successor_type = previous_share_type.SUCCESSOR
counts = p2pool_data.get_desired_version_counts(self.node.tracker,
self.node.tracker.get_nth_parent_hash(previous_share.hash, self.node.net.CHAIN_LENGTH*9//10), self.node.net.CHAIN_LENGTH//10)
upgraded = counts.get(successor_type.VERSION, 0)/sum(counts.itervalues())
if upgraded > .65:
print 'Switchover imminent. Upgraded: %.3f%% Threshold: %.3f%%' % (upgraded*100, 95)
print
# Share -> NewShare only valid if 95% of hashes in [net.CHAIN_LENGTH*9//10, net.CHAIN_LENGTH] for new version
if counts.get(successor_type.VERSION, 0) > sum(counts.itervalues())*95//100:
share_type = successor_type
else:
share_type = previous_share_type
if desired_share_target is None:
desired_share_target = 2**256-1
local_hash_rate = self._estimate_local_hash_rate()
if local_hash_rate is not None:
desired_share_target = min(desired_share_target,
bitcoin_data.average_attempts_to_target(local_hash_rate * self.node.net.SHARE_PERIOD / 0.0167)) # limit to 1.67% of pool shares by modulating share difficulty
local_addr_rates = self.get_local_addr_rates()
lookbehind = 3600//self.node.net.SHARE_PERIOD
block_subsidy = self.node.bitcoind_work.value['subsidy']
if previous_share is not None and self.node.tracker.get_height(previous_share.hash) > lookbehind:
expected_payout_per_block = local_addr_rates.get(pubkey_hash, 0)/p2pool_data.get_pool_attempts_per_second(self.node.tracker, self.node.best_share_var.value, lookbehind) \
* block_subsidy*(1-self.donation_percentage/100) # XXX doesn't use global stale rate to compute pool hash
if expected_payout_per_block < self.node.net.PARENT.DUST_THRESHOLD:
desired_share_target = min(desired_share_target,
bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy)
)
if True:
share_info, gentx, other_transaction_hashes, get_share = share_type.generate_transaction(
tracker=self.node.tracker,
share_data=dict(
previous_share_hash=self.node.best_share_var.value,
coinbase=(script.create_push_script([
self.current_work.value['height'],
] + ([mm_data] if mm_data else []) + [
]) + self.current_work.value['coinbaseflags'])[:100],
nonce=random.randrange(2**32),
pubkey_hash=pubkey_hash,
subsidy=self.current_work.value['subsidy'],
donation=math.perfect_round(65535*self.donation_percentage/100),
stale_info=(lambda (orphans, doas), total, (orphans_recorded_in_chain, doas_recorded_in_chain):
'orphan' if orphans > orphans_recorded_in_chain else
'doa' if doas > doas_recorded_in_chain else
None
)(*self.get_stale_counts()),
desired_version=(share_type.SUCCESSOR if share_type.SUCCESSOR is not None else share_type).VOTING_VERSION,
),
block_target=self.current_work.value['bits'].target,
desired_timestamp=int(time.time() + 0.5),
desired_target=desired_share_target,
ref_merkle_link=dict(branch=[], index=0),
desired_other_transaction_hashes_and_fees=zip(tx_hashes, self.current_work.value['transaction_fees']),
net=self.node.net,
known_txs=tx_map,
base_subsidy=self.node.net.PARENT.SUBSIDY_FUNC(self.current_work.value['height']),
)
packed_gentx = bitcoin_data.tx_type.pack(gentx)
other_transactions = [tx_map[tx_hash] for tx_hash in other_transaction_hashes]
mm_later = [(dict(aux_work, target=aux_work['target'] if aux_work['target'] != 'p2pool' else share_info['bits'].target), index, hashes) for aux_work, index, hashes in mm_later]
if desired_pseudoshare_target is None:
target = 2**256-1
local_hash_rate = self._estimate_local_hash_rate()
if local_hash_rate is not None:
target = min(target,
bitcoin_data.average_attempts_to_target(local_hash_rate * 1)) # limit to 1 share response every second by modulating pseudoshare difficulty
else:
target = desired_pseudoshare_target
target = max(target, share_info['bits'].target)
for aux_work, index, hashes in mm_later:
target = max(target, aux_work['target'])
target = math.clip(target, self.node.net.PARENT.SANE_TARGET_RANGE)
getwork_time = time.time()
lp_count = self.new_work_event.times
merkle_link = bitcoin_data.calculate_merkle_link([None] + other_transaction_hashes, 0)
print 'New work for worker! Difficulty: %.06f Share difficulty: %.06f Total block value: %.6f %s including %i transactions' % (
bitcoin_data.target_to_difficulty(target),
bitcoin_data.target_to_difficulty(share_info['bits'].target),
self.current_work.value['subsidy']*1e-8, self.node.net.PARENT.SYMBOL,
len(self.current_work.value['transactions']),
)
ba = dict(
version=min(self.current_work.value['version'], 2),
previous_block=self.current_work.value['previous_block'],
merkle_link=merkle_link,
coinb1=packed_gentx[:-self.COINBASE_NONCE_LENGTH-4],
coinb2=packed_gentx[-4:],
timestamp=self.current_work.value['time'],
bits=self.current_work.value['bits'],
share_target=target,
)
received_header_hashes = set()
def got_response(header, user, coinbase_nonce):
assert len(coinbase_nonce) == self.COINBASE_NONCE_LENGTH
new_packed_gentx = packed_gentx[:-self.COINBASE_NONCE_LENGTH-4] + coinbase_nonce + packed_gentx[-4:] if coinbase_nonce != '\0'*self.COINBASE_NONCE_LENGTH else packed_gentx
new_gentx = bitcoin_data.tx_type.unpack(new_packed_gentx) if coinbase_nonce != '\0'*self.COINBASE_NONCE_LENGTH else gentx
header_hash = bitcoin_data.hash256(bitcoin_data.block_header_type.pack(header))
pow_hash = self.node.net.PARENT.POW_FUNC(bitcoin_data.block_header_type.pack(header))
try:
if pow_hash <= header['bits'].target or p2pool.DEBUG:
helper.submit_block(dict(header=header, txs=[new_gentx] + other_transactions), False, self.node.factory, self.node.bitcoind, self.node.bitcoind_work, self.node.net)
if pow_hash <= header['bits'].target:
print
print 'GOT BLOCK FROM MINER! Passing to bitcoind! %s%064x' % (self.node.net.PARENT.BLOCK_EXPLORER_URL_PREFIX, header_hash)
print
except:
log.err(None, 'Error while processing potential block:')
user, _, _, _ = self.get_user_details(user)
assert header['previous_block'] == ba['previous_block']
assert header['merkle_root'] == bitcoin_data.check_merkle_link(bitcoin_data.hash256(new_packed_gentx), merkle_link)
assert header['bits'] == ba['bits']
on_time = self.new_work_event.times == lp_count
for aux_work, index, hashes in mm_later:
try:
if pow_hash <= aux_work['target'] or p2pool.DEBUG:
df = deferral.retry('Error submitting merged block: (will retry)', 10, 10)(aux_work['merged_proxy'].rpc_getauxblock)(
pack.IntType(256, 'big').pack(aux_work['hash']).encode('hex'),
bitcoin_data.aux_pow_type.pack(dict(
merkle_tx=dict(
tx=new_gentx,
block_hash=header_hash,
merkle_link=merkle_link,
),
merkle_link=bitcoin_data.calculate_merkle_link(hashes, index),
parent_block_header=header,
)).encode('hex'),
)
@df.addCallback
def _(result, aux_work=aux_work):
if result != (pow_hash <= aux_work['target']):
print >>sys.stderr, 'Merged block submittal result: %s Expected: %s' % (result, pow_hash <= aux_work['target'])
else:
print 'Merged block submittal result: %s' % (result,)
@df.addErrback
def _(err):
log.err(err, 'Error submitting merged block:')
except:
log.err(None, 'Error while processing merged mining POW:')
if pow_hash <= share_info['bits'].target and header_hash not in received_header_hashes:
last_txout_nonce = pack.IntType(8*self.COINBASE_NONCE_LENGTH).unpack(coinbase_nonce)
share = get_share(header, last_txout_nonce)
print 'GOT SHARE! %s %s prev %s age %.2fs%s' % (
user,
p2pool_data.format_hash(share.hash),
p2pool_data.format_hash(share.previous_hash),
time.time() - getwork_time,
' DEAD ON ARRIVAL' if not on_time else '',
)
self.my_share_hashes.add(share.hash)
if not on_time:
self.my_doa_share_hashes.add(share.hash)
self.node.tracker.add(share)
self.node.set_best_share()
try:
if (pow_hash <= header['bits'].target or p2pool.DEBUG) and self.node.p2p_node is not None:
self.node.p2p_node.broadcast_share(share.hash)
except:
log.err(None, 'Error forwarding block solution:')
self.share_received.happened(bitcoin_data.target_to_average_attempts(share.target), not on_time, share.hash)
if pow_hash > target:
print 'Worker %s submitted share with hash > target:' % (user,)
print ' Hash: %56x' % (pow_hash,)
print ' Target: %56x' % (target,)
elif header_hash in received_header_hashes:
print >>sys.stderr, 'Worker %s submitted share more than once!' % (user,)
else:
received_header_hashes.add(header_hash)
self.pseudoshare_received.happened(bitcoin_data.target_to_average_attempts(target), not on_time, user)
self.recent_shares_ts_work.append((time.time(), bitcoin_data.target_to_average_attempts(target)))
while len(self.recent_shares_ts_work) > 50:
self.recent_shares_ts_work.pop(0)
self.local_rate_monitor.add_datum(dict(work=bitcoin_data.target_to_average_attempts(target), dead=not on_time, user=user, share_target=share_info['bits'].target))
self.local_addr_rate_monitor.add_datum(dict(work=bitcoin_data.target_to_average_attempts(target), pubkey_hash=pubkey_hash))
return on_time
return ba, got_response
| gpl-3.0 |
yg257/Pangea | lib/boto-2.34.0/tests/integration/gs/test_storage_uri.py | 135 | 6558 | # -*- coding: utf-8 -*-
# Copyright (c) 2013, Google, Inc.
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""Integration tests for StorageUri interface."""
import binascii
import re
import StringIO
from boto import storage_uri
from boto.exception import BotoClientError
from boto.gs.acl import SupportedPermissions as perms
from tests.integration.gs.testcase import GSTestCase
class GSStorageUriTest(GSTestCase):
    def testHasVersion(self):
        uri = storage_uri("gs://bucket/obj")
        self.assertFalse(uri.has_version())
        uri.version_id = "versionid"
        self.assertTrue(uri.has_version())

        uri = storage_uri("gs://bucket/obj")
        # Generation triggers versioning.
        uri.generation = 12345
        self.assertTrue(uri.has_version())
        uri.generation = None
        self.assertFalse(uri.has_version())

        # Zero-generation counts as a version.
        uri = storage_uri("gs://bucket/obj")
        uri.generation = 0
        self.assertTrue(uri.has_version())

    def testCloneReplaceKey(self):
        b = self._MakeBucket()
        k = b.new_key("obj")
        k.set_contents_from_string("stringdata")

        orig_uri = storage_uri("gs://%s/" % b.name)

        uri = orig_uri.clone_replace_key(k)
        self.assertTrue(uri.has_version())
        self.assertRegexpMatches(str(uri.generation), r"[0-9]+")

    def testSetAclXml(self):
        """Ensures that calls to the set_xml_acl functions succeed."""
        b = self._MakeBucket()
        k = b.new_key("obj")
        k.set_contents_from_string("stringdata")
        bucket_uri = storage_uri("gs://%s/" % b.name)

        # Get a valid ACL for an object.
        bucket_uri.object_name = "obj"
        bucket_acl = bucket_uri.get_acl()
        bucket_uri.object_name = None

        # Add a permission to the ACL.
        all_users_read_permission = ("<Entry><Scope type='AllUsers'/>"
                                     "<Permission>READ</Permission></Entry>")
        acl_string = re.sub(r"</Entries>",
                            all_users_read_permission + "</Entries>",
                            bucket_acl.to_xml())

        # Test-generated owner IDs are not currently valid for buckets
        acl_no_owner_string = re.sub(r"<Owner>.*</Owner>", "", acl_string)

        # Set ACL on an object.
        bucket_uri.set_xml_acl(acl_string, "obj")
        # Set ACL on a bucket.
        bucket_uri.set_xml_acl(acl_no_owner_string)
        # Set the default ACL for a bucket.
        bucket_uri.set_def_xml_acl(acl_no_owner_string)

        # Verify all the ACLs were successfully applied.
        new_obj_acl_string = k.get_acl().to_xml()
        new_bucket_acl_string = bucket_uri.get_acl().to_xml()
        new_bucket_def_acl_string = bucket_uri.get_def_acl().to_xml()
        self.assertRegexpMatches(new_obj_acl_string, r"AllUsers")
        self.assertRegexpMatches(new_bucket_acl_string, r"AllUsers")
        self.assertRegexpMatches(new_bucket_def_acl_string, r"AllUsers")

    def testPropertiesUpdated(self):
        b = self._MakeBucket()
        bucket_uri = storage_uri("gs://%s" % b.name)
        key_uri = bucket_uri.clone_replace_name("obj")
        key_uri.set_contents_from_string("data1")

        self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+")
        k = b.get_key("obj")
        self.assertEqual(k.generation, key_uri.generation)
        self.assertEquals(k.get_contents_as_string(), "data1")

        key_uri.set_contents_from_stream(StringIO.StringIO("data2"))
        self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+")
        self.assertGreater(key_uri.generation, k.generation)
        k = b.get_key("obj")
        self.assertEqual(k.generation, key_uri.generation)
        self.assertEquals(k.get_contents_as_string(), "data2")

        key_uri.set_contents_from_file(StringIO.StringIO("data3"))
        self.assertRegexpMatches(str(key_uri.generation), r"[0-9]+")
        self.assertGreater(key_uri.generation, k.generation)
        k = b.get_key("obj")
        self.assertEqual(k.generation, key_uri.generation)
        self.assertEquals(k.get_contents_as_string(), "data3")

    def testCompose(self):
        data1 = 'hello '
        data2 = 'world!'
        expected_crc = 1238062967

        b = self._MakeBucket()
        bucket_uri = storage_uri("gs://%s" % b.name)
        key_uri1 = bucket_uri.clone_replace_name("component1")
        key_uri1.set_contents_from_string(data1)
        key_uri2 = bucket_uri.clone_replace_name("component2")
        key_uri2.set_contents_from_string(data2)

        # Simple compose.
        key_uri_composite = bucket_uri.clone_replace_name("composite")
        components = [key_uri1, key_uri2]
        key_uri_composite.compose(components, content_type='text/plain')
        self.assertEquals(key_uri_composite.get_contents_as_string(),
                          data1 + data2)
        composite_key = key_uri_composite.get_key()
        cloud_crc32c = binascii.hexlify(
            composite_key.cloud_hashes['crc32c'])
        self.assertEquals(cloud_crc32c, hex(expected_crc)[2:])
        self.assertEquals(composite_key.content_type, 'text/plain')

        # Compose disallowed between buckets.
        key_uri1.bucket_name += '2'
        try:
            key_uri_composite.compose(components)
            self.fail('Composing between buckets didn\'t fail as expected.')
        except BotoClientError as err:
            self.assertEquals(
                err.reason, 'GCS does not support inter-bucket composing')
| apache-2.0 |
craigds/mapnik2 | scons/scons-local-1.2.0/SCons/compat/_scons_UserString.py | 12 | 3505 | #
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "src/engine/SCons/compat/_scons_UserString.py 3842 2008/12/20 22:59:52 scons"
__doc__ = """
A user-defined wrapper around string objects
This class is "borrowed" from the Python 2.2 UserString and modified
slightly for use with SCons. It is *NOT* guaranteed to be fully compliant
with the standard UserString class from all later versions of Python.
In particular, it does not necessarily contain all of the methods found
in later versions.
"""
import types
StringType = types.StringType
if hasattr(types, 'UnicodeType'):
    UnicodeType = types.UnicodeType
    def is_String(obj):
        return type(obj) in (StringType, UnicodeType)
else:
    def is_String(obj):
        return type(obj) is StringType

class UserString:
    def __init__(self, seq):
        if is_String(seq):
            self.data = seq
        elif isinstance(seq, UserString):
            self.data = seq.data[:]
        else:
            self.data = str(seq)
    def __str__(self): return str(self.data)
    def __repr__(self): return repr(self.data)
    def __int__(self): return int(self.data)
    def __long__(self): return long(self.data)
    def __float__(self): return float(self.data)
    def __complex__(self): return complex(self.data)
    def __hash__(self): return hash(self.data)

    def __cmp__(self, string):
        if isinstance(string, UserString):
            return cmp(self.data, string.data)
        else:
            return cmp(self.data, string)
    def __contains__(self, char):
        return char in self.data

    def __len__(self): return len(self.data)
    def __getitem__(self, index): return self.__class__(self.data[index])
    def __getslice__(self, start, end):
        start = max(start, 0); end = max(end, 0)
        return self.__class__(self.data[start:end])

    def __add__(self, other):
        if isinstance(other, UserString):
            return self.__class__(self.data + other.data)
        elif is_String(other):
            return self.__class__(self.data + other)
        else:
            return self.__class__(self.data + str(other))
    def __radd__(self, other):
        if is_String(other):
            return self.__class__(other + self.data)
        else:
            return self.__class__(str(other) + self.data)
    def __mul__(self, n):
        return self.__class__(self.data*n)
    __rmul__ = __mul__
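
# Illustrative usage (not part of the original SCons source): UserString wraps a
# plain string and supports the common string operations defined above, e.g.:
#
#   s = UserString('scons')
#   assert len(s) == 5
#   assert str(s + '-1.2.0') == 'scons-1.2.0'
#   assert str(s * 2) == 'sconsscons'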
| lgpl-2.1 |
Licshee/shadowsocks | shadowsocks/daemon.py | 694 | 5602 | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright 2014-2015 clowwindy
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import, division, print_function, \
    with_statement
import os
import sys
import logging
import signal
import time
from shadowsocks import common, shell
# this module is ported from ShadowVPN daemon.c
def daemon_exec(config):
    if 'daemon' in config:
        if os.name != 'posix':
            raise Exception('daemon mode is only supported on Unix')
        command = config['daemon']
        if not command:
            command = 'start'
        pid_file = config['pid-file']
        log_file = config['log-file']
        if command == 'start':
            daemon_start(pid_file, log_file)
        elif command == 'stop':
            daemon_stop(pid_file)
            # always exit after daemon_stop
            sys.exit(0)
        elif command == 'restart':
            daemon_stop(pid_file)
            daemon_start(pid_file, log_file)
        else:
            raise Exception('unsupported daemon command %s' % command)
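
# Illustrative usage (not part of the original shadowsocks source): daemon_exec
# expects a config dict such as
#
#   daemon_exec({'daemon': 'start',
#                'pid-file': '/var/run/shadowsocks.pid',
#                'log-file': '/var/log/shadowsocks.log'})
#
# 'start' daemonizes and returns in the child process; 'stop' exits the process.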
def write_pid_file(pid_file, pid):
    import fcntl
    import stat

    try:
        fd = os.open(pid_file, os.O_RDWR | os.O_CREAT,
                     stat.S_IRUSR | stat.S_IWUSR)
    except OSError as e:
        shell.print_exception(e)
        return -1
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    assert flags != -1
    flags |= fcntl.FD_CLOEXEC
    r = fcntl.fcntl(fd, fcntl.F_SETFD, flags)
    assert r != -1
    # There is no platform independent way to implement fcntl(fd, F_SETLK, &fl)
    # via fcntl.fcntl. So use lockf instead
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 0, 0, os.SEEK_SET)
    except IOError:
        r = os.read(fd, 32)
        if r:
            logging.error('already started at pid %s' % common.to_str(r))
        else:
            logging.error('already started')
        os.close(fd)
        return -1
    os.ftruncate(fd, 0)
    os.write(fd, common.to_bytes(str(pid)))
    return 0
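
# Illustrative note (not part of the original shadowsocks source): write_pid_file
# returns 0 on success and -1 if another instance already holds the lock, e.g.:
#
#   if write_pid_file('/var/run/shadowsocks.pid', os.getpid()) != 0:
#       sys.exit(1)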
def freopen(f, mode, stream):
    oldf = open(f, mode)
    oldfd = oldf.fileno()
    newfd = stream.fileno()
    os.close(newfd)
    os.dup2(oldfd, newfd)


def daemon_start(pid_file, log_file):

    def handle_exit(signum, _):
        if signum == signal.SIGTERM:
            sys.exit(0)
        sys.exit(1)

    signal.signal(signal.SIGINT, handle_exit)
    signal.signal(signal.SIGTERM, handle_exit)

    # fork only once because we are sure parent will exit
    pid = os.fork()
    assert pid != -1

    if pid > 0:
        # parent waits for its child
        time.sleep(5)
        sys.exit(0)

    # child signals its parent to exit
    ppid = os.getppid()
    pid = os.getpid()
    if write_pid_file(pid_file, pid) != 0:
        os.kill(ppid, signal.SIGINT)
        sys.exit(1)

    os.setsid()
    signal.signal(signal.SIGHUP, signal.SIG_IGN)

    print('started')
    os.kill(ppid, signal.SIGTERM)

    sys.stdin.close()
    try:
        freopen(log_file, 'a', sys.stdout)
        freopen(log_file, 'a', sys.stderr)
    except IOError as e:
        shell.print_exception(e)
        sys.exit(1)


def daemon_stop(pid_file):
    import errno
    try:
        with open(pid_file) as f:
            buf = f.read()
            pid = common.to_str(buf)
            if not buf:
                logging.error('not running')
    except IOError as e:
        shell.print_exception(e)
        if e.errno == errno.ENOENT:
            # always exit 0 if we are sure daemon is not running
            logging.error('not running')
            return
        sys.exit(1)

    pid = int(pid)
    if pid > 0:
        try:
            os.kill(pid, signal.SIGTERM)
        except OSError as e:
            if e.errno == errno.ESRCH:
                logging.error('not running')
                # always exit 0 if we are sure daemon is not running
                return
            shell.print_exception(e)
            sys.exit(1)
    else:
        logging.error('pid is not positive: %d', pid)

    # sleep for maximum 10s
    for i in range(0, 200):
        try:
            # query for the pid
            os.kill(pid, 0)
        except OSError as e:
            if e.errno == errno.ESRCH:
                break
        time.sleep(0.05)
    else:
        logging.error('timed out when stopping pid %d', pid)
        sys.exit(1)
    print('stopped')
    os.unlink(pid_file)


def set_user(username):
    if username is None:
        return

    import pwd
    import grp

    try:
        pwrec = pwd.getpwnam(username)
    except KeyError:
        logging.error('user not found: %s' % username)
        raise
    user = pwrec[0]
    uid = pwrec[2]
    gid = pwrec[3]

    cur_uid = os.getuid()
    if uid == cur_uid:
        return
    if cur_uid != 0:
        logging.error('can not set user as nonroot user')
        # will raise later

    # inspired by supervisor
    if hasattr(os, 'setgroups'):
        groups = [grprec[2] for grprec in grp.getgrall() if user in grprec[3]]
        groups.insert(0, gid)
        os.setgroups(groups)
    os.setgid(gid)
    os.setuid(uid)
| apache-2.0 |
defance/edx-platform | lms/djangoapps/courseware/user_state_client.py | 27 | 14669 | """
An implementation of :class:`XBlockUserStateClient`, which stores XBlock Scope.user_state
data in a Django ORM model.
"""
import itertools
from operator import attrgetter
from time import time
try:
    import simplejson as json
except ImportError:
    import json
import dogstats_wrapper as dog_stats_api
from django.contrib.auth.models import User
from xblock.fields import Scope, ScopeBase
from courseware.models import StudentModule, StudentModuleHistory
from edx_user_state_client.interface import XBlockUserStateClient, XBlockUserState
class DjangoXBlockUserStateClient(XBlockUserStateClient):
"""
An interface that uses the Django ORM StudentModule as a backend.
A note on the format of state storage:
The state for an xblock is stored as a serialized JSON dictionary. The model
field that it is stored in can also take on a value of ``None``. To preserve
existing analytic uses, we will preserve the following semantics:
A state of ``None`` means that the user hasn't ever looked at the xblock.
A state of ``"{}"`` means that the XBlock has at some point stored state for
the current user, but that that state has been deleted.
Otherwise, the dictionary contains all data stored for the user.
None of these conditions should violate the semantics imposed by
XBlockUserStateClient (for instance, once all fields have been deleted from
an XBlock for a user, the state will be listed as ``None`` by :meth:`get_history`,
even though the actual stored state in the database will be ``"{}"``).
"""
# Use this sample rate for DataDog events.
API_DATADOG_SAMPLE_RATE = 0.1
class ServiceUnavailable(XBlockUserStateClient.ServiceUnavailable):
"""
This error is raised if the service backing this client is currently unavailable.
"""
pass
class PermissionDenied(XBlockUserStateClient.PermissionDenied):
"""
This error is raised if the caller is not allowed to access the requested data.
"""
pass
class DoesNotExist(XBlockUserStateClient.DoesNotExist):
"""
This error is raised if the caller has requested data that does not exist.
"""
pass
def __init__(self, user=None):
"""
Arguments:
user (:class:`~User`): An already-loaded django user. If this user matches the username
supplied to `set_many`, then that will reduce the number of queries made to store
the user state.
"""
self.user = user
def _get_student_modules(self, username, block_keys):
"""
Retrieve the :class:`~StudentModule`s for the supplied ``username`` and ``block_keys``.
Arguments:
username (str): The name of the user to load `StudentModule`s for.
block_keys (list of :class:`~UsageKey`): The set of XBlocks to load data for.
"""
course_key_func = attrgetter('course_key')
by_course = itertools.groupby(
sorted(block_keys, key=course_key_func),
course_key_func,
)
for course_key, usage_keys in by_course:
query = StudentModule.objects.chunked_filter(
'module_state_key__in',
usage_keys,
student__username=username,
course_id=course_key,
)
for student_module in query:
usage_key = student_module.module_state_key.map_into_course(student_module.course_id)
yield (student_module, usage_key)
def _ddog_increment(self, evt_time, evt_name):
"""
DataDog increment method.
"""
dog_stats_api.increment(
'DjangoXBlockUserStateClient.{}'.format(evt_name),
timestamp=evt_time,
sample_rate=self.API_DATADOG_SAMPLE_RATE,
)
def _ddog_histogram(self, evt_time, evt_name, value):
"""
DataDog histogram method.
"""
dog_stats_api.histogram(
'DjangoXBlockUserStateClient.{}'.format(evt_name),
value,
timestamp=evt_time,
sample_rate=self.API_DATADOG_SAMPLE_RATE,
)
def get_many(self, username, block_keys, scope=Scope.user_state, fields=None):
"""
Retrieve the stored XBlock state for the specified XBlock usages.
Arguments:
username: The name of the user whose state should be retrieved
block_keys ([UsageKey]): A list of UsageKeys identifying which xblock states to load.
scope (Scope): The scope to load data from
fields: A list of field values to retrieve. If None, retrieve all stored fields.
Yields:
XBlockUserState tuples for each specified UsageKey in block_keys.
field_state is a dict mapping field names to values.
"""
if scope != Scope.user_state:
raise ValueError("Only Scope.user_state is supported, not {}".format(scope))
block_count = state_length = 0
evt_time = time()
self._ddog_histogram(evt_time, 'get_many.blks_requested', len(block_keys))
modules = self._get_student_modules(username, block_keys)
for module, usage_key in modules:
if module.state is None:
self._ddog_increment(evt_time, 'get_many.empty_state')
continue
state = json.loads(module.state)
state_length += len(module.state)
self._ddog_histogram(evt_time, 'get_many.block_size', len(module.state))
# If the state is the empty dict, then it has been deleted, and so
# conformant UserStateClients should treat it as if it doesn't exist.
if state == {}:
continue
if fields is not None:
state = {
field: state[field]
for field in fields
if field in state
}
block_count += 1
yield XBlockUserState(username, usage_key, state, module.modified, scope)
# The rest of this method exists only to submit DataDog events.
# Remove it once we're no longer interested in the data.
finish_time = time()
self._ddog_histogram(evt_time, 'get_many.blks_out', block_count)
self._ddog_histogram(evt_time, 'get_many.response_time', (finish_time - evt_time) * 1000)
def set_many(self, username, block_keys_to_state, scope=Scope.user_state):
"""
Set fields for a particular XBlock.
Arguments:
username: The name of the user whose state should be retrieved
block_keys_to_state (dict): A dict mapping UsageKeys to state dicts.
Each state dict maps field names to values. These state dicts
are overlaid over the stored state. To delete fields, use
:meth:`delete` or :meth:`delete_many`.
scope (Scope): The scope to load data from
"""
if scope != Scope.user_state:
raise ValueError("Only Scope.user_state is supported")
# We do a find_or_create for every block (rather than re-using field objects
# that were queried in get_many) so that if the score has
# been changed by some other piece of the code, we don't overwrite
# that score.
if self.user is not None and self.user.username == username:
user = self.user
else:
user = User.objects.get(username=username)
evt_time = time()
for usage_key, state in block_keys_to_state.items():
student_module, created = StudentModule.objects.get_or_create(
student=user,
course_id=usage_key.course_key,
module_state_key=usage_key,
defaults={
'state': json.dumps(state),
'module_type': usage_key.block_type,
},
)
num_fields_before = num_fields_after = num_new_fields_set = len(state)
num_fields_updated = 0
if not created:
if student_module.state is None:
current_state = {}
else:
current_state = json.loads(student_module.state)
num_fields_before = len(current_state)
current_state.update(state)
num_fields_after = len(current_state)
student_module.state = json.dumps(current_state)
# We just read this object, so we know that we can do an update
student_module.save(force_update=True)
# The rest of this method exists only to submit DataDog events.
# Remove it once we're no longer interested in the data.
#
# Record whether a state row has been created or updated.
if created:
self._ddog_increment(evt_time, 'set_many.state_created')
else:
self._ddog_increment(evt_time, 'set_many.state_updated')
# Event to record number of fields sent in to set/set_many.
self._ddog_histogram(evt_time, 'set_many.fields_in', len(state))
# Event to record number of new fields set in set/set_many.
num_new_fields_set = num_fields_after - num_fields_before
self._ddog_histogram(evt_time, 'set_many.fields_set', num_new_fields_set)
# Event to record number of existing fields updated in set/set_many.
num_fields_updated = max(0, len(state) - num_new_fields_set)
self._ddog_histogram(evt_time, 'set_many.fields_updated', num_fields_updated)
# Events for the entire set_many call.
finish_time = time()
self._ddog_histogram(evt_time, 'set_many.blks_updated', len(block_keys_to_state))
self._ddog_histogram(evt_time, 'set_many.response_time', (finish_time - evt_time) * 1000)
def delete_many(self, username, block_keys, scope=Scope.user_state, fields=None):
"""
Delete the stored XBlock state for a many xblock usages.
Arguments:
username: The name of the user whose state should be deleted
block_keys (list): The UsageKey identifying which xblock state to delete.
scope (Scope): The scope to delete data from
fields: A list of fields to delete. If None, delete all stored fields.
"""
if scope != Scope.user_state:
raise ValueError("Only Scope.user_state is supported")
evt_time = time()
if fields is None:
self._ddog_increment(evt_time, 'delete_many.empty_state')
else:
self._ddog_histogram(evt_time, 'delete_many.field_count', len(fields))
self._ddog_histogram(evt_time, 'delete_many.block_count', len(block_keys))
student_modules = self._get_student_modules(username, block_keys)
for student_module, _ in student_modules:
if fields is None:
student_module.state = "{}"
else:
current_state = json.loads(student_module.state)
for field in fields:
if field in current_state:
del current_state[field]
student_module.state = json.dumps(current_state)
# We just read this object, so we know that we can do an update
student_module.save(force_update=True)
# Event for the entire delete_many call.
finish_time = time()
self._ddog_histogram(evt_time, 'delete_many.response_time', (finish_time - evt_time) * 1000)
def get_history(self, username, block_key, scope=Scope.user_state):
"""
Retrieve history of state changes for a given block for a given
student. We don't guarantee that history for many blocks will be fast.
If the specified block doesn't exist, raise :class:`~DoesNotExist`.
Arguments:
username: The name of the user whose history should be retrieved.
block_key: The key identifying which xblock history to retrieve.
scope (Scope): The scope to load data from.
Yields:
XBlockUserState entries for each modification to the specified XBlock, from latest
to earliest.
"""
if scope != Scope.user_state:
raise ValueError("Only Scope.user_state is supported")
student_modules = list(
student_module
for student_module, usage_id
in self._get_student_modules(username, [block_key])
)
if len(student_modules) == 0:
raise self.DoesNotExist()
history_entries = StudentModuleHistory.objects.prefetch_related('student_module').filter(
student_module__in=student_modules
).order_by('-id')
# If no history records exist, raise an error
if not history_entries:
raise self.DoesNotExist()
for history_entry in history_entries:
state = history_entry.state
# If the state is serialized json, then load it
if state is not None:
state = json.loads(state)
# If the state is empty, then for the purposes of `get_history`, it has been
# deleted, and so we list that entry as `None`.
if state == {}:
state = None
block_key = history_entry.student_module.module_state_key
block_key = block_key.map_into_course(
history_entry.student_module.course_id
)
yield XBlockUserState(username, block_key, state, history_entry.created, scope)
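        # Illustrative usage sketch (hypothetical `storage` instance and UsageKey);
        # each yielded XBlockUserState carries the state dict for that revision,
        # or None where the state had been deleted at that point in the history:
        #   for entry in storage.get_history('some_user', block_key):
        #       ...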
def iter_all_for_block(self, block_key, scope=Scope.user_state, batch_size=None):
"""
You get no ordering guarantees. Fetching will happen in batch_size
increments. If you're using this method, you should be running in an
async task.
"""
if scope != Scope.user_state:
raise ValueError("Only Scope.user_state is supported")
raise NotImplementedError()
def iter_all_for_course(self, course_key, block_type=None, scope=Scope.user_state, batch_size=None):
"""
You get no ordering guarantees. Fetching will happen in batch_size
increments. If you're using this method, you should be running in an
async task.
"""
if scope != Scope.user_state:
raise ValueError("Only Scope.user_state is supported")
raise NotImplementedError()
| agpl-3.0 |
tdickers/mitmproxy | pathod/utils.py | 4 | 1080 | import os
import sys
import netlib.utils
class MemBool(object):
"""
Truth-checking with a memory, for use in chained if statements.
"""
def __init__(self):
self.v = None
def __call__(self, v):
self.v = v
return bool(v)
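# Illustrative usage sketch (hypothetical parse_a/parse_b helpers), showing the
# chained-if pattern MemBool is meant for:
#   m = MemBool()
#   if m(parse_a(token)):
#       handle_a(m.v)
#   elif m(parse_b(token)):
#       handle_b(m.v)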
data = netlib.utils.Data(__name__)
def daemonize(stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'): # pragma: no cover
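    # Standard UNIX double-fork: the first fork lets the parent return to the
    # shell, setsid() detaches from the controlling terminal, and the second
    # fork ensures the daemon can never reacquire one.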
try:
pid = os.fork()
if pid > 0:
sys.exit(0)
except OSError as e:
sys.stderr.write("fork #1 failed: (%d) %s\n" % (e.errno, e.strerror))
sys.exit(1)
os.chdir("/")
os.umask(0)
os.setsid()
try:
pid = os.fork()
if pid > 0:
sys.exit(0)
except OSError as e:
sys.stderr.write("fork #2 failed: (%d) %s\n" % (e.errno, e.strerror))
sys.exit(1)
si = open(stdin, 'rb')
so = open(stdout, 'a+b')
se = open(stderr, 'a+b', 0)
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
| mit |
Nirlendu/Dummy-Search-Engine | tornado-3.2/tornado/test/import_test.py | 42 | 1477 | from __future__ import absolute_import, division, print_function, with_statement
from tornado.test.util import unittest
class ImportTest(unittest.TestCase):
def test_import_everything(self):
# Some of our modules are not otherwise tested. Import them
# all (unless they have external dependencies) here to at
# least ensure that there are no syntax errors.
import tornado.auth
import tornado.autoreload
import tornado.concurrent
# import tornado.curl_httpclient # depends on pycurl
import tornado.escape
import tornado.gen
import tornado.httpclient
import tornado.httpserver
import tornado.httputil
import tornado.ioloop
import tornado.iostream
import tornado.locale
import tornado.log
import tornado.netutil
import tornado.options
import tornado.process
import tornado.simple_httpclient
import tornado.stack_context
import tornado.tcpserver
import tornado.template
import tornado.testing
import tornado.util
import tornado.web
import tornado.websocket
import tornado.wsgi
# for modules with dependencies, if those dependencies can be loaded,
# load them too.
def test_import_pycurl(self):
try:
import pycurl
except ImportError:
pass
else:
import tornado.curl_httpclient
| mit |
zhukaixy/kbengine | kbe/res/scripts/common/Lib/test/test_dbm_ndbm.py | 91 | 1622 | from test import support
support.import_module("dbm.ndbm") #skip if not supported
import unittest
import os
import random
import dbm.ndbm
from dbm.ndbm import error
class DbmTestCase(unittest.TestCase):
def setUp(self):
self.filename = support.TESTFN
self.d = dbm.ndbm.open(self.filename, 'c')
self.d.close()
def tearDown(self):
for suffix in ['', '.pag', '.dir', '.db']:
support.unlink(self.filename + suffix)
def test_keys(self):
self.d = dbm.ndbm.open(self.filename, 'c')
self.assertTrue(self.d.keys() == [])
self.d['a'] = 'b'
self.d[b'bytes'] = b'data'
self.d['12345678910'] = '019237410982340912840198242'
self.d.keys()
self.assertIn('a', self.d)
self.assertIn(b'a', self.d)
self.assertEqual(self.d[b'bytes'], b'data')
self.d.close()
def test_modes(self):
for mode in ['r', 'rw', 'w', 'n']:
try:
self.d = dbm.ndbm.open(self.filename, mode)
self.d.close()
except error:
self.fail()
def test_context_manager(self):
with dbm.ndbm.open(self.filename, 'c') as db:
db["ndbm context manager"] = "context manager"
with dbm.ndbm.open(self.filename, 'r') as db:
self.assertEqual(list(db.keys()), [b"ndbm context manager"])
with self.assertRaises(dbm.ndbm.error) as cm:
db.keys()
self.assertEqual(str(cm.exception),
"DBM object has already been closed")
if __name__ == '__main__':
unittest.main()
| lgpl-3.0 |
matijapretnar/projekt-tomo | web/problems/models.py | 2 | 8264 | from copy import deepcopy
import json
from django.conf import settings
from django.core.urlresolvers import reverse
from django.db import models
from django.template.defaultfilters import slugify
from django.template.loader import render_to_string
from rest_framework.authtoken.models import Token
from simple_history.models import HistoricalRecords
from utils import is_json_string_list, truncate
from utils.models import OrderWithRespectToMixin
from taggit.managers import TaggableManager
from django.core import signing
class Problem(OrderWithRespectToMixin, models.Model):
title = models.CharField(max_length=70)
description = models.TextField(blank=True)
problem_set = models.ForeignKey('courses.ProblemSet', related_name='problems')
history = HistoricalRecords()
tags = TaggableManager(blank=True)
language = models.CharField(max_length=8, choices=(
('python', 'Python 3'),
('octave', 'Octave/Matlab'),
('r', 'R')), default='python')
EXTENSIONS = {'python': 'py', 'octave': 'm', 'r': 'r'}
MIMETYPES = {'python': 'text/x-python',
'octave': 'text/x-octave',
'r': 'text/x-R'}
class Meta:
order_with_respect_to = 'problem_set'
def __str__(self):
return self.title
@property
def guarded_description(self):
return 'Navodila so napisana na listu' if self.problem_set.solution_visibility == self.problem_set.PROBLEM_HIDDEN else self.description
def get_absolute_url(self):
return '{}#{}'.format(self.problem_set.get_absolute_url(), self.anchor())
def anchor(self):
return 'problem-{}'.format(self.pk)
def user_attempts(self, user):
return user.attempts.filter(part__problem=self)
def user_solutions(self, user):
return {attempt.part.id: attempt.solution for attempt in self.user_attempts(user)}
@property
def slug(self):
return slugify(self.title).replace("-", "_")
def attempt_file(self, user):
authentication_token = Token.objects.get(user=user)
solutions = self.user_solutions(user)
parts = [(part, solutions.get(part.id, part.template), part.attempt_token(user)) for part in self.parts.all()]
url = settings.SUBMISSION_URL + reverse('attempts-submit')
problem_slug = slugify(self.title).replace("-", "_")
extension = self.EXTENSIONS[self.language]
filename = "{0}.{1}".format(problem_slug, extension)
contents = render_to_string("{0}/attempt.{1}".format(self.language, extension), {
"problem": self,
"parts": parts,
"submission_url": url,
"authentication_token": authentication_token
})
return filename, contents
def marking_file(self, user):
attempts = {attempt.part.id: attempt for attempt in self.user_attempts(user)}
parts = [(part, attempts.get(part.id)) for part in self.parts.all()]
username = user.get_full_name() or user.username
problem_slug = slugify(username).replace("-", "_")
extension = self.EXTENSIONS[self.language]
filename = "{0}.{1}".format(problem_slug, extension)
contents = render_to_string("{0}/marking.{1}".format(self.language, extension), {
"problem": self,
"parts": parts,
"user": user,
})
return filename, contents
def bare_file(self, user):
attempts = {attempt.part.id: attempt for attempt in self.user_attempts(user)}
parts = [(part, attempts.get(part.id)) for part in self.parts.all()]
username = user.get_full_name() or user.username
problem_slug = slugify(username).replace("-", "_")
extension = self.EXTENSIONS[self.language]
filename = "{0}.{1}".format(problem_slug, extension)
contents = render_to_string("{0}/bare.{1}".format(self.language, extension), {
"problem": self,
"parts": parts,
"user": user,
})
return filename, contents
def edit_file(self, user):
authentication_token = Token.objects.get(user=user)
url = settings.SUBMISSION_URL + reverse('problems-submit')
problem_slug = slugify(self.title).replace("-", "_")
filename = "{0}_edit.{1}".format(problem_slug, self.EXTENSIONS[self.language])
contents = render_to_string("{0}/edit.{1}".format(self.language, self.EXTENSIONS[self.language]), {
"problem": self,
"submission_url": url,
"authentication_token": authentication_token
})
return filename, contents
def attempts_by_user(self, active_only=True):
attempts = {}
for part in self.parts.all().prefetch_related('attempts', 'attempts__user'):
for attempt in part.attempts.all():
if attempt.user in attempts:
attempts[attempt.user][part] = attempt
else:
attempts[attempt.user] = {part: attempt}
for student in self.problem_set.course.students.all():
if student not in attempts:
attempts[student] = {}
observed_students = self.problem_set.course.observed_students()
if active_only:
observed_students = observed_students.filter(attempts__part__problem=self).distinct()
observed_students = list(observed_students)
for user in observed_students:
user.valid = user.invalid = user.empty = 0
user.these_attempts = [attempts[user].get(part) for part in self.parts.all()]
for attempt in user.these_attempts:
if attempt is None:
user.empty += 1
elif attempt.valid:
user.valid += 1
else:
user.invalid += 1
return observed_students
def attempts_by_user_all(self):
return self.attempts_by_user(active_only=False)
def copy_to(self, problem_set):
new_problem = deepcopy(self)
new_problem.pk = None
new_problem.problem_set = problem_set
new_problem.save()
for part in self.parts.all():
part.copy_to(new_problem)
return new_problem
def content_type(self):
return self.MIMETYPES[self.language]
class Part(OrderWithRespectToMixin, models.Model):
problem = models.ForeignKey(Problem, related_name='parts')
description = models.TextField(blank=True)
template = models.TextField(blank=True)
solution = models.TextField(blank=True)
validation = models.TextField(blank=True)
secret = models.TextField(default="[]", validators=[is_json_string_list])
history = HistoricalRecords()
class Meta:
order_with_respect_to = 'problem'
def __str__(self):
return '@{0:06d} ({1})'.format(self.pk, truncate(self.description))
@property
def guarded_description(self):
return 'Navodila so napisana na listu' if self.problem.problem_set.solution_visibility == self.problem.problem_set.PROBLEM_HIDDEN else self.description
def get_absolute_url(self):
        return '{}#{}'.format(self.problem.problem_set.get_absolute_url(), self.anchor())
def anchor(self):
return 'part-{}'.format(self.pk)
def check_secret(self, secret):
'''
Checks whether a submitted secret corresponds to the official one.
The function accepts a secret (list of strings) and returns the pair:
True, None -- if secret matches the official one
False, None -- if secret has an incorrect length
False, i -- if secret first differs from the official one at index i
'''
official_secret = json.loads(self.secret)
if len(official_secret) != len(secret):
return False, None
for i in range(len(secret)):
if secret[i] != official_secret[i]:
return False, i
return True, None
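        # Illustrative sketch (hypothetical values): with an official secret of
        # ["a", "b"], check_secret(["a", "b"]) returns (True, None),
        # check_secret(["a"]) returns (False, None), and
        # check_secret(["a", "x"]) returns (False, 1).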
def copy_to(self, problem):
new_part = deepcopy(self)
new_part.pk = None
new_part.problem = problem
new_part.save()
return new_part
def attempt_token(self, user):
return signing.dumps({
'part': self.pk,
'user': user.pk,
})
| agpl-3.0 |
nathanpc/leafIRC | tests/googletest/xcode/Scripts/versiongenerate.py | 3088 | 4536 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A script to prepare version informtion for use the gtest Info.plist file.
This script extracts the version information from the configure.ac file and
uses it to generate a header file containing the same information. The
#defines in this header file will be included during the generation of
the Info.plist of the framework, giving the correct value to the version
shown in the Finder.
This script makes the following assumptions (these are faults of the script,
not problems with the Autoconf):
1. The AC_INIT macro will be contained within the first 1024 characters
of configure.ac
2. The version string will be 3 integers separated by periods and will be
   surrounded by square brackets, "[" and "]" (e.g. [1.0.1]). The first
segment represents the major version, the second represents the minor
version and the third represents the fix version.
3. No ")" character exists between the opening "(" and closing ")" of
AC_INIT, including in comments and character strings.
"""
import sys
import re
# Read the command line argument (the output directory for Version.h)
if (len(sys.argv) < 3):
print "Usage: versiongenerate.py input_dir output_dir"
sys.exit(1)
else:
input_dir = sys.argv[1]
output_dir = sys.argv[2]
# Read the first 1024 characters of the configure.ac file
config_file = open("%s/configure.ac" % input_dir, 'r')
buffer_size = 1024
opening_string = config_file.read(buffer_size)
config_file.close()
# Extract the version string from the AC_INIT macro
# The following init_expression means:
# Extract three integers separated by periods and surrounded by square
# brackets (e.g. "[1.0.1]") between "AC_INIT(" and ")". Do not be greedy
# (*? is the non-greedy flag) since that would pull in everything between
# the first "(" and the last ")" in the file.
version_expression = re.compile(r"AC_INIT\(.*?\[(\d+)\.(\d+)\.(\d+)\].*?\)",
re.DOTALL)
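# For example (illustrative only), a configure.ac opening with
# AC_INIT([Google C++ Testing Framework], [1.7.0], ...) would yield the
# captured groups "1", "7" and "0" below.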
version_values = version_expression.search(opening_string)
major_version = version_values.group(1)
minor_version = version_values.group(2)
fix_version = version_values.group(3)
# Write the version information to a header file to be included in the
# Info.plist file.
file_data = """//
// DO NOT MODIFY THIS FILE (but you can delete it)
//
// This file is autogenerated by the versiongenerate.py script. This script
// is executed in a "Run Script" build phase when creating gtest.framework. This
// header file is not used during compilation of C-source. Rather, it simply
// defines some version strings for substitution in the Info.plist. Because of
// this, we are not restricted to C-syntax nor are we using include guards.
//
#define GTEST_VERSIONINFO_SHORT %s.%s
#define GTEST_VERSIONINFO_LONG %s.%s.%s
""" % (major_version, minor_version, major_version, minor_version, fix_version)
version_file = open("%s/Version.h" % output_dir, 'w')
version_file.write(file_data)
version_file.close()
| mit |
davidzchen/tensorflow | tensorflow/python/summary/writer/event_file_writer_v2.py | 19 | 5699 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Writes events to disk in a logdir."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import summary_ops_v2
from tensorflow.python.platform import gfile
class EventFileWriterV2(object):
"""Writes `Event` protocol buffers to an event file via the graph.
The `EventFileWriterV2` class is backed by the summary file writer in the v2
summary API (currently in tf.contrib.summary), so it uses a shared summary
writer resource and graph ops to write events.
As with the original EventFileWriter, this class will asynchronously write
Event protocol buffers to the backing file. The Event file is encoded using
the tfrecord format, which is similar to RecordIO.
"""
def __init__(self, session, logdir, max_queue=10, flush_secs=120,
filename_suffix=''):
"""Creates an `EventFileWriterV2` and an event file to write to.
On construction, this calls `tf.contrib.summary.create_file_writer` within
the graph from `session.graph` to look up a shared summary writer resource
for `logdir` if one exists, and create one if not. Creating the summary
writer resource in turn creates a new event file in `logdir` to be filled
with `Event` protocol buffers passed to `add_event`. Graph ops to control
this writer resource are added to `session.graph` during this init call;
stateful methods on this class will call `session.run()` on these ops.
Note that because the underlying resource is shared, it is possible that
other parts of the code using the same session may interact independently
with the resource, e.g. by flushing or even closing it. It is the caller's
responsibility to avoid any undesirable sharing in this regard.
The remaining arguments to the constructor (`flush_secs`, `max_queue`, and
`filename_suffix`) control the construction of the shared writer resource
if one is created. If an existing resource is reused, these arguments have
no effect. See `tf.contrib.summary.create_file_writer` for details.
Args:
session: A `tf.compat.v1.Session`. Session that will hold shared writer
resource. The writer ops will be added to session.graph during this
init call.
logdir: A string. Directory where event file will be written.
max_queue: Integer. Size of the queue for pending events and summaries.
flush_secs: Number. How often, in seconds, to flush the
pending events and summaries to disk.
filename_suffix: A string. Every event file's name is suffixed with
`filename_suffix`.
"""
self._session = session
self._logdir = logdir
self._closed = False
if not gfile.IsDirectory(self._logdir):
gfile.MakeDirs(self._logdir)
with self._session.graph.as_default():
with ops.name_scope('filewriter'):
file_writer = summary_ops_v2.create_file_writer(
logdir=self._logdir,
max_queue=max_queue,
flush_millis=flush_secs * 1000,
filename_suffix=filename_suffix)
with summary_ops_v2.always_record_summaries(), file_writer.as_default():
self._event_placeholder = array_ops.placeholder_with_default(
constant_op.constant('unused', dtypes.string),
shape=[])
self._add_event_op = summary_ops_v2.import_event(
self._event_placeholder)
self._init_op = file_writer.init()
self._flush_op = file_writer.flush()
self._close_op = file_writer.close()
self._session.run(self._init_op)
def get_logdir(self):
"""Returns the directory where event file will be written."""
return self._logdir
def reopen(self):
"""Reopens the EventFileWriter.
Can be called after `close()` to add more events in the same directory.
The events will go into a new events file.
Does nothing if the EventFileWriter was not closed.
"""
if self._closed:
self._closed = False
self._session.run(self._init_op)
def add_event(self, event):
"""Adds an event to the event file.
Args:
event: An `Event` protocol buffer.
"""
if not self._closed:
event_pb = event.SerializeToString()
self._session.run(
self._add_event_op, feed_dict={self._event_placeholder: event_pb})
def flush(self):
"""Flushes the event file to disk.
Call this method to make sure that all pending events have been written to
disk.
"""
self._session.run(self._flush_op)
def close(self):
"""Flushes the event file to disk and close the file.
Call this method when you do not need the summary writer anymore.
"""
if not self._closed:
self.flush()
self._session.run(self._close_op)
self._closed = True
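# Illustrative usage sketch (assumes a tf.compat.v1.Session `sess`, a log
# directory path, and an `Event` protocol buffer `event`; not part of the
# original file):
#   writer = EventFileWriterV2(sess, '/tmp/logdir')
#   writer.add_event(event)
#   writer.flush()
#   writer.close()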
| apache-2.0 |
neerajvashistha/pa-dude | lib/python2.7/site-packages/nltk/tag/util.py | 3 | 2281 | # Natural Language Toolkit: Tagger Utilities
#
# Copyright (C) 2001-2015 NLTK Project
# Author: Edward Loper <[email protected]>
# Steven Bird <[email protected]>
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
def str2tuple(s, sep='/'):
"""
Given the string representation of a tagged token, return the
corresponding tuple representation. The rightmost occurrence of
*sep* in *s* will be used to divide *s* into a word string and
a tag string. If *sep* does not occur in *s*, return (s, None).
>>> from nltk.tag.util import str2tuple
>>> str2tuple('fly/NN')
('fly', 'NN')
:type s: str
:param s: The string representation of a tagged token.
:type sep: str
:param sep: The separator string used to separate word strings
from tags.
"""
loc = s.rfind(sep)
if loc >= 0:
return (s[:loc], s[loc+len(sep):].upper())
else:
return (s, None)
def tuple2str(tagged_token, sep='/'):
"""
Given the tuple representation of a tagged token, return the
corresponding string representation. This representation is
formed by concatenating the token's word string, followed by the
separator, followed by the token's tag. (If the tag is None,
then just return the bare word string.)
>>> from nltk.tag.util import tuple2str
>>> tagged_token = ('fly', 'NN')
>>> tuple2str(tagged_token)
'fly/NN'
:type tagged_token: tuple(str, str)
:param tagged_token: The tuple representation of a tagged token.
:type sep: str
:param sep: The separator string used to separate word strings
from tags.
"""
word, tag = tagged_token
if tag is None:
return word
else:
assert sep not in tag, 'tag may not contain sep!'
return '%s%s%s' % (word, sep, tag)
def untag(tagged_sentence):
"""
Given a tagged sentence, return an untagged version of that
sentence. I.e., return a list containing the first element
of each tuple in *tagged_sentence*.
>>> from nltk.tag.util import untag
>>> untag([('John', 'NNP'), ('saw', 'VBD'), ('Mary', 'NNP')])
['John', 'saw', 'Mary']
"""
return [w for (w, t) in tagged_sentence]
| mit |
kmp3325/linguine-python | linguine/ops/remove_caps.py | 4 | 1485 | #!/usr/bin/env python
"""
Removes all non-proper-noun capitals from a given text.
Removes capital letters from text, even for Bill Clinton.
Accepts as input a non-tokenized string.
There are multiple types of cap-removal to do.
greedy: removes all caps. GOAL -> goal, Mr. -> mr., Cook -> cook
preserve_nnp: removes capitalization that isn't a proper noun.
"""
from textblob import TextBlob
class RemoveCapsGreedy:
def run(self, data):
results = []
for corpus in data:
corpus.contents = corpus.contents.lower()
results.append(corpus)
return results
class RemoveCapsPreserveNNP:
def run(self, data):
results = []
for corpus in data:
blob = TextBlob(corpus.contents)
tags = blob.tags
words = list()
wordCount = 0
tokenCount = 0
while(tokenCount < len(blob.tokens)):
if blob.tokens[tokenCount][0].isalpha():
if tags[wordCount][1] != 'NNP':
words.append(blob.words[wordCount].lower())
else:
words.append(blob.words[wordCount])
wordCount += 1
else:
words[len(words)-1] = ''.join(
[words[len(words)-1],blob.tokens[tokenCount]])
tokenCount += 1
corpus.contents = (' '.join(words))
results.append(corpus)
return results
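# Illustrative sketch (hypothetical corpus objects exposing a `contents` string,
# matching the interface used above):
#   RemoveCapsGreedy().run(corpora)       # lowercases every word
#   RemoveCapsPreserveNNP().run(corpora)  # lowercases all but POS-tagged proper nouns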
| mit |
leiferikb/bitpop | build/third_party/twisted_10_2/twisted/mail/pb.py | 57 | 3847 | # Copyright (c) 2001-2004 Twisted Matrix Laboratories.
# See LICENSE for details.
from twisted.spread import pb
from twisted.spread import banana
import os
import types
class Maildir(pb.Referenceable):
def __init__(self, directory, rootDirectory):
self.virtualDirectory = directory
self.rootDirectory = rootDirectory
self.directory = os.path.join(rootDirectory, directory)
def getFolderMessage(self, folder, name):
if '/' in name:
raise IOError("can only open files in '%s' directory'" % folder)
fp = open(os.path.join(self.directory, 'new', name))
try:
return fp.read()
finally:
fp.close()
def deleteFolderMessage(self, folder, name):
if '/' in name:
raise IOError("can only delete files in '%s' directory'" % folder)
os.rename(os.path.join(self.directory, folder, name),
os.path.join(self.rootDirectory, '.Trash', folder, name))
def deleteNewMessage(self, name):
return self.deleteFolderMessage('new', name)
remote_deleteNewMessage = deleteNewMessage
def deleteCurMessage(self, name):
return self.deleteFolderMessage('cur', name)
remote_deleteCurMessage = deleteCurMessage
def getNewMessages(self):
return os.listdir(os.path.join(self.directory, 'new'))
remote_getNewMessages = getNewMessages
def getCurMessages(self):
return os.listdir(os.path.join(self.directory, 'cur'))
remote_getCurMessages = getCurMessages
def getNewMessage(self, name):
return self.getFolderMessage('new', name)
remote_getNewMessage = getNewMessage
def getCurMessage(self, name):
return self.getFolderMessage('cur', name)
remote_getCurMessage = getCurMessage
def getSubFolder(self, name):
if name[0] == '.':
raise IOError("subfolder name cannot begin with a '.'")
name = name.replace('/', ':')
        if self.virtualDirectory == '.':
name = '.'+name
else:
name = self.virtualDirectory+':'+name
if not self._isSubFolder(name):
raise IOError("not a subfolder")
return Maildir(name, self.rootDirectory)
remote_getSubFolder = getSubFolder
def _isSubFolder(self, name):
return (not os.path.isdir(os.path.join(self.rootDirectory, name)) or
not os.path.isfile(os.path.join(self.rootDirectory, name,
'maildirfolder')))
class MaildirCollection(pb.Referenceable):
def __init__(self, root):
self.root = root
def getSubFolders(self):
return os.listdir(self.getRoot())
remote_getSubFolders = getSubFolders
def getSubFolder(self, name):
if '/' in name or name[0] == '.':
raise IOError("invalid name")
return Maildir('.', os.path.join(self.getRoot(), name))
remote_getSubFolder = getSubFolder
class MaildirBroker(pb.Broker):
def proto_getCollection(self, requestID, name, domain, password):
        collection = self.getCollection(name, domain, password)
if collection is None:
self.sendError(requestID, "permission denied")
else:
self.sendAnswer(requestID, collection)
def getCollection(self, name, domain, password):
if not self.domains.has_key(domain):
return
domain = self.domains[domain]
if (domain.dbm.has_key(name) and
domain.dbm[name] == password):
return MaildirCollection(domain.userDirectory(name))
class MaildirClient(pb.Broker):
def getCollection(self, name, domain, password, callback, errback):
requestID = self.newRequestID()
self.waitingForAnswers[requestID] = callback, errback
self.sendCall("getCollection", requestID, name, domain, password)
| gpl-3.0 |
exelearning/iteexe | nevow/stan.py | 14 | 16657 | # Copyright (c) 2004 Divmod.
# See LICENSE for details.
"""An s-expression-like syntax for expressing xml in pure python.
Stan tags allow you to build XML documents using Python. Stan tags
have special attributes that enable the developer to insert hooks in
the document for locating data and custom rendering.
Stan is a DOM, or Document Object Model, implemented using
basic Python types and functions called "flatteners". A flattener is
a function that knows how to turn an object of a specific type
into something that is closer to an HTML string. Stan differs
from the W3C DOM by not being as cumbersome and heavyweight. Since
the object model is built using simple python types
such as lists, strings, and dictionaries, the API is simpler and
constructing a DOM less cumbersome.
Stan also makes it convenient to build trees of XML in pure python
code. See nevow.stan.Tag for details, and nevow.tags for tag
prototypes for all of the XHTML element types.
"""
from __future__ import generators
import warnings
import sys
from nevow import inevow
class Proto(str):
"""Proto is a string subclass. Instances of Proto, which are constructed
with a string, will construct Tag instances in response to __call__
and __getitem__, delegating responsibility to the tag.
"""
__slots__ = []
def __call__(self, **kw):
return Tag(self)(**kw)
def __getitem__(self, children):
return Tag(self)[children]
def fillSlots(self, slotName, slotValue):
return Tag(self).fillSlots(slotName, slotValue)
def clone(self, deep=True):
return self
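# Illustrative sketch: Proto('div')(id='x')['hello'] delegates to Tag, producing
# a Tag('div') whose attributes are {'id': 'x'} and whose children are ['hello'].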
class xml(object):
"""XML content marker.
xml contains content that is already correct XML and should not be escaped
to make it XML-safe. xml can contain unicode content and will be encoded to
utf-8 when flattened.
"""
__slots__ = ['content']
def __init__(self, content):
self.content = content
def __repr__(self):
return '<xml %r>' % self.content
class raw(str):
"""Raw content marker.
Raw content is never altered in any way. It is a sequence of bytes that will
be passed through unchanged to the XML output.
You probably don't want this - look at xml first.
"""
__slots__ = []
def cdata(data):
"""CDATA section. data must be a string
"""
return xml('<![CDATA[%s]]>' % data)
class directive(object):
"""Marker for a directive in a template
"""
__slots__ = ['name']
def __init__(self, name):
self.name = name
def __repr__(self):
return "directive('%s')" % self.name
class slot(object):
"""Marker for slot insertion in a template
"""
__slots__ = ['name', 'children', 'default']
def __init__(self, name, default=None):
self.name = name
self.children = []
self.default = default
def __repr__(self):
return "slot('%s')" % self.name
def __getitem__(self, children):
"""Allow slots to have children. These children will not show up in the
output, but they will be searched for patterns.
"""
if not isinstance(children, (list, tuple)):
children = [children]
self.children.extend(children)
return self
def __iter__(self):
"""Prevent an infinite loop if someone tries to do
for x in slot('foo'):
"""
raise NotImplementedError, "Stan slot instances are not iterable."
class Tag(object):
"""Tag instances represent XML tags with a tag name, attributes,
and children. Tag instances can be constructed using the Prototype
tags in the 'tags' module, or may be constructed directly with a tag
name. Tags have two special methods, __call__ and __getitem__,
which make representing trees of XML natural using pure python
syntax. See the docstrings for these methods for more details.
"""
__implements__ = inevow.IQ,
specials = ['data', 'render', 'remember', 'pattern', 'key', 'macro']
slotData = None
def __init__(self, tag, attributes=None, children=None, specials=None):
self.tagName = tag
if attributes is None:
self.attributes = {}
else:
self.attributes = attributes
if children is None:
self.children = []
else:
self.children = children
if specials is None:
self._specials = {}
else:
self._specials = specials
def fillSlots(self, slotName, slotValue):
"""Remember the stan 'slotValue' with the name 'slotName' at this position
in the DOM. During the rendering of children of this node, slots with
the name 'slotName' will render themselves as 'slotValue'.
"""
if self.slotData is None:
self.slotData = {}
self.slotData[slotName] = slotValue
return self
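        # Illustrative sketch using names defined in this module:
        #   Tag('div')[slot('greeting')].fillSlots('greeting', 'Hello World!')
        # makes any slot named 'greeting' rendered below this tag produce
        # 'Hello World!'.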
def patternGenerator(self, pattern, default=None):
"""Returns a psudeo-Tag which will generate clones of matching
pattern tags forever, looping around to the beginning when running
out of unique matches.
If no matches are found, and default is None, raise an exception,
otherwise, generate clones of default forever.
You can use the normal stan syntax on the return value.
Useful to find repeating pattern elements. Example rendering function:
>>> def simpleSequence(context, data):
... pattern = context.patternCloner('item')
... return [pattern(data=element) for element in data]
"""
patterner = _locatePatterns(self, pattern, default)
return PatternTag(patterner)
def allPatterns(self, pattern):
"""Return a list of all matching pattern tags, cloned.
Useful if you just want to insert them in the output in one
place.
E.g. the sequence renderer's header and footer are found with this.
"""
return [tag.clone(deep=False, clearPattern=True) for tag in
specialMatches(self, 'pattern', pattern)]
def onePattern(self, pattern):
"""Return a single matching pattern, cloned.
If there is more than one matching pattern or no matching patterns,
raise an exception.
Useful in the case where you want to locate one and only one
sub-tag and do something with it.
"""
return _locateOne(pattern,
lambda pattern: specialMatches(
self, 'pattern', pattern),
'pattern').clone(deep=False, clearPattern=True)
def __call__(self, **kw):
"""Change attributes of this tag. This is implemented using
__call__ because it then allows the natural syntax::
table(width="100%", height="50%", border="1")
Attributes may be 'invisible' tag instances (so that
C{a(href=invisible(data="foo", render=myhrefrenderer))} works),
strings, functions, or any other object which has a registered
flattener.
If the attribute is a python keyword, such as 'class', you can
add an underscore to the name, like 'class_'.
A few magic attributes have values other than these, as they
are not serialized for output but rather have special purposes
of their own:
- data: The value is saved on the context stack and passed to
render functions.
- render: A function to call that may modify the tag in any
way desired.
- remember: Remember the value on the context stack with
context.remember(value) for later lookup with
context.locate()
- pattern: Value should be a key that can later be used to
locate this tag with context.patternGenerator() or
context.allPatterns()
- key: A string used to give the node a unique label. This
is automatically namespaced, so in C{span(key="foo")[span(key="bar")]}
the inner span actually has a key of 'foo.bar'. The key is
          intended for use as e.g. an html 'id' attribute, but is not
          automatically output.
- macro - A function which will be called once in the lifetime
of the template, when the template is loaded. The return
result from this function will replace this Tag in the template.
"""
if not kw:
return self
for name in self.specials:
if kw.has_key(name):
setattr(self, name, kw[name])
del kw[name]
for k, v in kw.iteritems():
if k[-1] == '_':
k = k[:-1]
elif k[0] == '_':
k = k[1:]
self.attributes[k] = v
return self
def __getitem__(self, children):
"""Add children to this tag. Multiple children may be added by
passing a tuple or a list. Children may be other tag instances,
strings, functions, or any other object which has a registered
flatten.
This is implemented using __getitem__ because it then allows
the natural syntax::
html[
head[
title["Hello World!"]
],
body[
"This is a page",
h3["How are you!"],
div(style="color: blue")["I hope you are fine."]
]
]
"""
if not isinstance(children, (list, tuple)):
children = [children]
self.children.extend(children)
return self
def __iter__(self):
"""Prevent an infinite loop if someone tries to do
for x in stantaginstance:
"""
raise NotImplementedError, "Stan tag instances are not iterable."
def _clearSpecials(self):
"""Clears all the specials in this tag. For use by flatstan.
"""
self._specials = {}
# FIXME: make this function actually be used.
def precompilable(self):
"""Is this tag precompilable?
Tags are precompilable if they will not be modified by a user
render function.
Currently, the following attributes prevent the tag from being
precompiled:
- render (because the function can modify its own tag)
- pattern (because it is locatable and thus modifiable by an
enclosing renderer)
"""
return self.render is Unset and self.pattern is Unset
def _clone(self, obj, deep):
if hasattr(obj, 'clone'):
return obj.clone(deep)
elif isinstance(obj, (list, tuple)):
return [self._clone(x, deep)
for x in obj]
else:
return obj
def clone(self, deep=True, clearPattern=False):
"""Return a clone of this tag. If deep is True, clone all of this
tag's children. Otherwise, just shallow copy the children list
without copying the children themselves.
"""
if deep:
newchildren = [self._clone(x, True) for x in self.children]
else:
newchildren = self.children[:]
newattrs = self.attributes.copy()
for key in newattrs:
newattrs[key]=self._clone(newattrs[key], True)
newslotdata = None
if self.slotData:
newslotdata = self.slotData.copy()
for key in newslotdata:
newslotdata[key] = self._clone(newslotdata[key], True)
newtag = Tag(
self.tagName,
attributes=newattrs,
children=newchildren,
specials=self._specials.copy()
)
newtag.slotData = newslotdata
if clearPattern:
newtag.pattern = None
return newtag
def clear(self):
"""Clear any existing children from this tag.
"""
self._specials = {}
self.children = []
return self
def __repr__(self):
rstr = ''
if self.attributes:
rstr += ', attributes=%r' % self.attributes
if self._specials:
rstr += ', specials=%r' % self._specials
if self.children:
rstr += ', children=%r' % self.children
return "Tag(%r%s)" % (self.tagName, rstr)
def freeze(self):
"""Freeze this tag so that making future calls to __call__ or __getitem__ on the
return value will result in clones of this tag.
"""
def forever():
while True:
yield self.clone()
return PatternTag(forever())
class UnsetClass:
def __nonzero__(self):
return False
def __repr__(self):
return "Unset"
Unset=UnsetClass()
def makeAccessors(special):
def getSpecial(self):
return self._specials.get(special, Unset)
def setSpecial(self, data):
self._specials[special] = data
return getSpecial, setSpecial
for name in Tag.specials:
setattr(Tag, name, property(*makeAccessors(name)))
del name
### Pattern machinery
class NodeNotFound(KeyError):
def __str__(self):
return "The %s named %r wasn't found in the template." % tuple(self.args[:2])
class TooManyNodes(Exception):
def __str__(self):
return "More than one %r with the name %r was found." % tuple(self.args[:2])
class PatternTag(object):
'''A pseudotag created by Tag.patternGenerator() which loops
through a sequence of matching patterns.'''
def __init__(self, patterner):
self.pat = patterner.next()
self.patterner = patterner
def next(self):
if self.pat:
p, self.pat = self.pat, None
return p
return self.patterner.next()
def makeForwarder(name):
return lambda self, *args, **kw: getattr(self.next(), name)(*args, **kw)
for forward in ['__call__', '__getitem__', 'fillSlots']:
setattr(PatternTag, forward, makeForwarder(forward))
def _locatePatterns(tag, pattern, default, loop=True):
gen = specialMatches(tag, 'pattern', pattern)
produced = []
for x in gen:
produced.append(x)
cloned = x.clone(deep=False, clearPattern=True)
yield cloned
gen=None
if produced:
if not loop:
return
while True:
for x in produced:
cloned = x.clone(deep=False, clearPattern=True)
yield cloned
if default is None:
raise NodeNotFound, ("pattern", pattern)
if hasattr(default, 'clone'):
while True: yield default.clone(deep=False)
else:
while True: yield default
Tag._locatePatterns = staticmethod(_locatePatterns)
def _locateOne(name, locator, descr):
found = False
for node in locator(name):
if found:
raise TooManyNodes(descr, name)
found = node
if not found:
raise NodeNotFound(descr, name)
return found
def specials(tag, special):
"""Generate tags with special attributes regardless of attribute value.
"""
for childOrContext in getattr(tag, 'children', []):
child = getattr(childOrContext, 'tag', childOrContext)
if getattr(child, special, Unset) is not Unset:
yield child
else:
for match in specials(child, special):
yield match
def specialMatches(tag, special, pattern):
"""Generate special attribute matches starting with the given tag;
if a tag has special, do not look any deeper below that tag, whether
it matches pattern or not. Returns an iterable.
"""
for childOrContext in getattr(tag, 'children', []):
child = getattr(childOrContext, 'tag', childOrContext)
data = getattr(child, special, Unset)
if data == pattern:
yield child
elif data is Unset:
for match in specialMatches(child, special, pattern):
yield match
## End pattern machinery
class CommentProto(Proto):
__slots__ = []
def __call__(self, **kw):
return Comment(self)(**kw)
def __getitem__(self, children):
return Comment(self)[children]
class Comment(Tag):
def __call__(self, **kw):
raise NotImplementedError('comments are not callable')
invisible = Proto('')
class Entity(object):
def __init__(self, name, num, description):
self.name = name
self.num = num
self.description = description
def __repr__(self):
return "Entity(%r, %r, %r)" % (self.name, self.num, self.description)
| gpl-2.0 |
keto/askbot-devel | askbot/migrations/0058_transplant_answer_count_field_2.py | 17 | 26835 | # encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Deleting field 'Question.answer_count'
db.delete_column(u'question', 'answer_count')
def backwards(self, orm):
# Adding field 'Question.answer_count'
db.add_column(u'question', 'answer_count', self.gf('django.db.models.fields.PositiveIntegerField')(default=0), keep_default=False)
models = {
'askbot.activity': {
'Meta': {'object_name': 'Activity', 'db_table': "u'activity'"},
'active_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'activity_type': ('django.db.models.fields.SmallIntegerField', [], {}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_auditted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['askbot.Question']", 'null': 'True'}),
'receiving_users': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'received_activity'", 'symmetrical': 'False', 'to': "orm['auth.User']"}),
'recipients': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'incoming_activity'", 'symmetrical': 'False', 'through': "orm['askbot.ActivityAuditStatus']", 'to': "orm['auth.User']"}),
'summary': ('django.db.models.fields.TextField', [], {'default': "''"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'askbot.activityauditstatus': {
'Meta': {'unique_together': "(('user', 'activity'),)", 'object_name': 'ActivityAuditStatus'},
'activity': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['askbot.Activity']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'status': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'askbot.anonymousanswer': {
'Meta': {'object_name': 'AnonymousAnswer'},
'added_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ip_addr': ('django.db.models.fields.IPAddressField', [], {'max_length': '15'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'anonymous_answers'", 'to': "orm['askbot.Question']"}),
'session_key': ('django.db.models.fields.CharField', [], {'max_length': '40'}),
'summary': ('django.db.models.fields.CharField', [], {'max_length': '180'}),
'text': ('django.db.models.fields.TextField', [], {}),
'wiki': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
'askbot.anonymousquestion': {
'Meta': {'object_name': 'AnonymousQuestion'},
'added_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'author': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ip_addr': ('django.db.models.fields.IPAddressField', [], {'max_length': '15'}),
'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'session_key': ('django.db.models.fields.CharField', [], {'max_length': '40'}),
'summary': ('django.db.models.fields.CharField', [], {'max_length': '180'}),
'tagnames': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'text': ('django.db.models.fields.TextField', [], {}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '300'}),
'wiki': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
'askbot.answer': {
'Meta': {'object_name': 'Answer', 'db_table': "u'answer'"},
'accepted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'accepted_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'added_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'author': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'answers'", 'to': "orm['auth.User']"}),
'comment_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'deleted_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deleted_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'deleted_answers'", 'null': 'True', 'to': "orm['auth.User']"}),
'html': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_edited_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'last_edited_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'last_edited_answers'", 'null': 'True', 'to': "orm['auth.User']"}),
'locked': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'locked_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'locked_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'locked_answers'", 'null': 'True', 'to': "orm['auth.User']"}),
'offensive_flag_count': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'answers'", 'to': "orm['askbot.Question']"}),
'score': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'text': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'vote_down_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'vote_up_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'wiki': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'wikified_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'})
},
'askbot.award': {
'Meta': {'object_name': 'Award', 'db_table': "u'award'"},
'awarded_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'badge': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'award_badge'", 'to': "orm['askbot.BadgeData']"}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'notified': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'award_user'", 'to': "orm['auth.User']"})
},
'askbot.badgedata': {
'Meta': {'ordering': "('slug',)", 'object_name': 'BadgeData'},
'awarded_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'awarded_to': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'badges'", 'symmetrical': 'False', 'through': "orm['askbot.Award']", 'to': "orm['auth.User']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50', 'db_index': 'True'})
},
'askbot.comment': {
'Meta': {'ordering': "('-added_at',)", 'object_name': 'Comment', 'db_table': "u'comment'"},
'added_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'comment': ('django.db.models.fields.CharField', [], {'max_length': '2048'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'html': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '2048'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'offensive_flag_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'score': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'comments'", 'to': "orm['auth.User']"})
},
'askbot.emailfeedsetting': {
'Meta': {'object_name': 'EmailFeedSetting'},
'added_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'feed_type': ('django.db.models.fields.CharField', [], {'max_length': '16'}),
'frequency': ('django.db.models.fields.CharField', [], {'default': "'n'", 'max_length': '8'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'reported_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'subscriber': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'notification_subscriptions'", 'to': "orm['auth.User']"})
},
'askbot.favoritequestion': {
'Meta': {'object_name': 'FavoriteQuestion', 'db_table': "u'favorite_question'"},
'added_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['askbot.Question']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'user_favorite_questions'", 'to': "orm['auth.User']"})
},
'askbot.markedtag': {
'Meta': {'object_name': 'MarkedTag'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'reason': ('django.db.models.fields.CharField', [], {'max_length': '16'}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'user_selections'", 'to': "orm['askbot.Tag']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'tag_selections'", 'to': "orm['auth.User']"})
},
'askbot.postrevision': {
'Meta': {'ordering': "('-revision',)", 'unique_together': "(('answer', 'revision'), ('question', 'revision'))", 'object_name': 'PostRevision'},
'answer': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'revisions'", 'null': 'True', 'to': "orm['askbot.Answer']"}),
'author': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'postrevisions'", 'to': "orm['auth.User']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'revisions'", 'null': 'True', 'to': "orm['askbot.Question']"}),
'revised_at': ('django.db.models.fields.DateTimeField', [], {}),
'revision': ('django.db.models.fields.PositiveIntegerField', [], {}),
'revision_type': ('django.db.models.fields.SmallIntegerField', [], {}),
'summary': ('django.db.models.fields.CharField', [], {'max_length': '300', 'blank': 'True'}),
'tagnames': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '125', 'blank': 'True'}),
'text': ('django.db.models.fields.TextField', [], {}),
'title': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '300', 'blank': 'True'})
},
'askbot.question': {
'Meta': {'object_name': 'Question', 'db_table': "u'question'"},
'added_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'answer_accepted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'author': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'questions'", 'to': "orm['auth.User']"}),
'close_reason': ('django.db.models.fields.SmallIntegerField', [], {'null': 'True', 'blank': 'True'}),
'closed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'closed_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'closed_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'closed_questions'", 'null': 'True', 'to': "orm['auth.User']"}),
'comment_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'deleted_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deleted_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'deleted_questions'", 'null': 'True', 'to': "orm['auth.User']"}),
'favorited_by': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'favorite_questions'", 'symmetrical': 'False', 'through': "orm['askbot.FavoriteQuestion']", 'to': "orm['auth.User']"}),
'followed_by': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'followed_questions'", 'symmetrical': 'False', 'to': "orm['auth.User']"}),
'html': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_activity_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_activity_by': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'last_active_in_questions'", 'to': "orm['auth.User']"}),
'last_edited_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'last_edited_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'last_edited_questions'", 'null': 'True', 'to': "orm['auth.User']"}),
'locked': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'locked_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'locked_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'locked_questions'", 'null': 'True', 'to': "orm['auth.User']"}),
'offensive_flag_count': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'score': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'summary': ('django.db.models.fields.CharField', [], {'max_length': '180'}),
'tagnames': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'tags': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'questions'", 'symmetrical': 'False', 'to': "orm['askbot.Tag']"}),
'text': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'thread': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'questions'", 'unique': 'True', 'to': "orm['askbot.Thread']"}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '300'}),
'view_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'vote_down_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'vote_up_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'wiki': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'wikified_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'})
},
'askbot.questionview': {
'Meta': {'object_name': 'QuestionView'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'viewed'", 'to': "orm['askbot.Question']"}),
'when': ('django.db.models.fields.DateTimeField', [], {}),
'who': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'question_views'", 'to': "orm['auth.User']"})
},
'askbot.repute': {
'Meta': {'object_name': 'Repute', 'db_table': "u'repute'"},
'comment': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'negative': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'positive': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['askbot.Question']", 'null': 'True', 'blank': 'True'}),
'reputation': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'reputation_type': ('django.db.models.fields.SmallIntegerField', [], {}),
'reputed_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'askbot.tag': {
'Meta': {'ordering': "('-used_count', 'name')", 'object_name': 'Tag', 'db_table': "u'tag'"},
'created_by': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'created_tags'", 'to': "orm['auth.User']"}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'deleted_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'deleted_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'deleted_tags'", 'null': 'True', 'to': "orm['auth.User']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'used_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'})
},
'askbot.thread': {
'Meta': {'object_name': 'Thread'},
'answer_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'favourite_count': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'askbot.vote': {
'Meta': {'unique_together': "(('content_type', 'object_id', 'user'),)", 'object_name': 'Vote', 'db_table': "u'vote'"},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'votes'", 'to': "orm['auth.User']"}),
'vote': ('django.db.models.fields.SmallIntegerField', [], {}),
'voted_at': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'})
},
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'about': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'avatar_type': ('django.db.models.fields.CharField', [], {'default': "'n'", 'max_length': '1'}),
'bronze': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'consecutive_days_visit_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'country': ('django_countries.fields.CountryField', [], {'max_length': '2', 'blank': 'True'}),
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'date_of_birth': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'display_tag_filter_strategy': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'email_isvalid': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'email_key': ('django.db.models.fields.CharField', [], {'max_length': '32', 'null': 'True'}),
'email_tag_filter_strategy': ('django.db.models.fields.SmallIntegerField', [], {'default': '1'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'gold': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'gravatar': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ignored_tags': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'interesting_tags': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'location': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'new_response_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'questions_per_page': ('django.db.models.fields.SmallIntegerField', [], {'default': '10'}),
'real_name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'reputation': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'}),
'seen_response_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'show_country': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'silver': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'status': ('django.db.models.fields.CharField', [], {'default': "'w'", 'max_length': '2'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'}),
'website': ('django.db.models.fields.URLField', [], {'max_length': '200', 'blank': 'True'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
}
}
complete_apps = ['askbot']
| gpl-3.0 |
grlee77/nipype | nipype/interfaces/mrtrix/tests/test_auto_ConstrainedSphericalDeconvolution.py | 9 | 1849 | # AUTO-GENERATED by tools/checkspecs.py - DO NOT EDIT
from nipype.testing import assert_equal
from nipype.interfaces.mrtrix.tensors import ConstrainedSphericalDeconvolution
def test_ConstrainedSphericalDeconvolution_inputs():
input_map = dict(args=dict(argstr='%s',
),
debug=dict(argstr='-debug',
),
directions_file=dict(argstr='-directions %s',
position=-2,
),
encoding_file=dict(argstr='-grad %s',
position=1,
),
environ=dict(nohash=True,
usedefault=True,
),
filter_file=dict(argstr='-filter %s',
position=-2,
),
ignore_exception=dict(nohash=True,
usedefault=True,
),
in_file=dict(argstr='%s',
mandatory=True,
position=-3,
),
iterations=dict(argstr='-niter %s',
),
lambda_value=dict(argstr='-lambda %s',
),
mask_image=dict(argstr='-mask %s',
position=2,
),
maximum_harmonic_order=dict(argstr='-lmax %s',
),
normalise=dict(argstr='-normalise',
position=3,
),
out_filename=dict(argstr='%s',
genfile=True,
position=-1,
),
response_file=dict(argstr='%s',
mandatory=True,
position=-2,
),
terminal_output=dict(nohash=True,
),
threshold_value=dict(argstr='-threshold %s',
),
)
inputs = ConstrainedSphericalDeconvolution.input_spec()
for key, metadata in input_map.items():
for metakey, value in metadata.items():
yield assert_equal, getattr(inputs.traits()[key], metakey), value
def test_ConstrainedSphericalDeconvolution_outputs():
output_map = dict(spherical_harmonics_image=dict(),
)
outputs = ConstrainedSphericalDeconvolution.output_spec()
for key, metadata in output_map.items():
for metakey, value in metadata.items():
yield assert_equal, getattr(outputs.traits()[key], metakey), value
| bsd-3-clause |
flabby/rocksdb | build_tools/amalgamate.py | 45 | 4700 | #!/usr/bin/python
# amalgamate.py creates an amalgamation from a unity build.
# It can be run with either Python 2 or 3.
# An amalgamation consists of a header that includes the contents of all public
# headers and a source file that includes the contents of all source files and
# private headers.
#
# This script works by starting with the unity build file and recursively expanding
# #include directives. If the #include is found in a public include directory,
# that header is expanded into the amalgamation header.
#
# A particular header is only expanded once, so this script will
# break if there are multiple inclusions of the same header that are expected to
# expand differently. Similarly, this type of code causes issues:
#
# #ifdef FOO
# #include "bar.h"
# // code here
# #else
# #include "bar.h" // oops, doesn't get expanded
# // different code here
# #endif
#
# The solution is to move the include out of the #ifdef.
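#
# A typical invocation might look like the following (illustrative only; the
# actual file names and include directories depend on the build setup):
#
#   python amalgamate.py -I . -i include -o amalgamation.cc -H amalgamation.h unity.cc
#
# where "unity.cc" is the unity build file, "-I"/"-i" name the private and
# public include directories, and "-o"/"-H" name the generated amalgamation
# source and header.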
from __future__ import print_function
import argparse
from os import path
import re
import sys
include_re = re.compile('^[ \t]*#include[ \t]+"(.*)"[ \t]*$')
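# For example (illustrative): a line such as '#include "util/coding.h"' matches
# and captures 'util/coding.h'; system includes in angle brackets are ignored.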
included = set()
excluded = set()
def find_header(name, abs_path, include_paths):
samedir = path.join(path.dirname(abs_path), name)
if path.exists(samedir):
return samedir
for include_path in include_paths:
include_path = path.join(include_path, name)
if path.exists(include_path):
return include_path
return None
def expand_include(include_path, f, abs_path, source_out, header_out, include_paths, public_include_paths):
if include_path in included:
return False
included.add(include_path)
with open(include_path) as f:
print('#line 1 "{}"'.format(include_path), file=source_out)
process_file(f, include_path, source_out, header_out, include_paths, public_include_paths)
return True
def process_file(f, abs_path, source_out, header_out, include_paths, public_include_paths):
for (line, text) in enumerate(f):
m = include_re.match(text)
if m:
filename = m.groups()[0]
# first check private headers
include_path = find_header(filename, abs_path, include_paths)
if include_path:
if include_path in excluded:
source_out.write(text)
expanded = False
else:
expanded = expand_include(include_path, f, abs_path, source_out, header_out, include_paths, public_include_paths)
else:
# now try public headers
include_path = find_header(filename, abs_path, public_include_paths)
if include_path:
# found public header
expanded = False
if include_path in excluded:
source_out.write(text)
else:
expand_include(include_path, f, abs_path, header_out, None, public_include_paths, [])
else:
sys.exit("unable to find {}, included in {} on line {}".format(filename, abs_path, line))
if expanded:
print('#line {} "{}"'.format(line+1, abs_path), file=source_out)
elif text != "#pragma once\n":
source_out.write(text)
def main():
parser = argparse.ArgumentParser(description="Transform a unity build into an amalgamation")
parser.add_argument("source", help="source file")
parser.add_argument("-I", action="append", dest="include_paths", help="include paths for private headers")
parser.add_argument("-i", action="append", dest="public_include_paths", help="include paths for public headers")
parser.add_argument("-x", action="append", dest="excluded", help="excluded header files")
parser.add_argument("-o", dest="source_out", help="output C++ file", required=True)
parser.add_argument("-H", dest="header_out", help="output C++ header file", required=True)
args = parser.parse_args()
include_paths = list(map(path.abspath, args.include_paths or []))
public_include_paths = list(map(path.abspath, args.public_include_paths or []))
excluded.update(map(path.abspath, args.excluded or []))
filename = args.source
abs_path = path.abspath(filename)
with open(filename) as f, open(args.source_out, 'w') as source_out, open(args.header_out, 'w') as header_out:
print('#line 1 "{}"'.format(filename), file=source_out)
print('#include "{}"'.format(header_out.name), file=source_out)
process_file(f, abs_path, source_out, header_out, include_paths, public_include_paths)
if __name__ == "__main__":
main()
| bsd-3-clause |
xiangel/hue | desktop/core/ext-py/Django-1.6.10/django/db/backends/oracle/base.py | 17 | 40746 | """
Oracle database backend for Django.
Requires cx_Oracle: http://cx-oracle.sourceforge.net/
"""
from __future__ import unicode_literals
import decimal
import re
import sys
import warnings
def _setup_environment(environ):
import platform
# Cygwin requires some special voodoo to set the environment variables
# properly so that Oracle will see them.
if platform.system().upper().startswith('CYGWIN'):
try:
import ctypes
except ImportError as e:
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured("Error loading ctypes: %s; "
"the Oracle backend requires ctypes to "
"operate correctly under Cygwin." % e)
kernel32 = ctypes.CDLL('kernel32')
for name, value in environ:
kernel32.SetEnvironmentVariableA(name, value)
else:
import os
os.environ.update(environ)
_setup_environment([
# Oracle takes client-side character set encoding from the environment.
('NLS_LANG', '.UTF8'),
# This prevents unicode from getting mangled by getting encoded into the
# potentially non-unicode database character set.
('ORA_NCHAR_LITERAL_REPLACE', 'TRUE'),
])
try:
import cx_Oracle as Database
except ImportError as e:
from django.core.exceptions import ImproperlyConfigured
raise ImproperlyConfigured("Error loading cx_Oracle module: %s" % e)
try:
import pytz
except ImportError:
pytz = None
from django.db import utils
from django.db.backends import *
from django.db.backends.oracle.client import DatabaseClient
from django.db.backends.oracle.creation import DatabaseCreation
from django.db.backends.oracle.introspection import DatabaseIntrospection
from django.utils.encoding import force_bytes, force_text
DatabaseError = Database.DatabaseError
IntegrityError = Database.IntegrityError
# Check whether cx_Oracle was compiled with the WITH_UNICODE option if cx_Oracle is pre-5.1. This will
# also be True for cx_Oracle 5.1 and in Python 3.0. See #19606
if int(Database.version.split('.', 1)[0]) >= 5 and \
(int(Database.version.split('.', 2)[1]) >= 1 or
not hasattr(Database, 'UNICODE')):
convert_unicode = force_text
else:
convert_unicode = force_bytes
class Oracle_datetime(datetime.datetime):
"""
A datetime object, with an additional class attribute
to tell cx_Oracle to save the microseconds too.
"""
input_size = Database.TIMESTAMP
@classmethod
def from_datetime(cls, dt):
return Oracle_datetime(dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second, dt.microsecond)
class DatabaseFeatures(BaseDatabaseFeatures):
empty_fetchmany_value = ()
needs_datetime_string_cast = False
interprets_empty_strings_as_nulls = True
uses_savepoints = True
has_select_for_update = True
has_select_for_update_nowait = True
can_return_id_from_insert = True
allow_sliced_subqueries = False
supports_subqueries_in_group_by = False
supports_transactions = True
supports_timezones = False
has_zoneinfo_database = pytz is not None
supports_bitwise_or = False
can_defer_constraint_checks = True
ignores_nulls_in_unique_constraints = False
has_bulk_insert = True
supports_tablespaces = True
supports_sequence_reset = False
atomic_transactions = False
class DatabaseOperations(BaseDatabaseOperations):
compiler_module = "django.db.backends.oracle.compiler"
def autoinc_sql(self, table, column):
# To simulate auto-incrementing primary keys in Oracle, we have to
# create a sequence and a trigger.
sq_name = self._get_sequence_name(table)
tr_name = self._get_trigger_name(table)
tbl_name = self.quote_name(table)
col_name = self.quote_name(column)
sequence_sql = """
DECLARE
i INTEGER;
BEGIN
SELECT COUNT(*) INTO i FROM USER_CATALOG
WHERE TABLE_NAME = '%(sq_name)s' AND TABLE_TYPE = 'SEQUENCE';
IF i = 0 THEN
EXECUTE IMMEDIATE 'CREATE SEQUENCE "%(sq_name)s"';
END IF;
END;
/""" % locals()
trigger_sql = """
CREATE OR REPLACE TRIGGER "%(tr_name)s"
BEFORE INSERT ON %(tbl_name)s
FOR EACH ROW
WHEN (new.%(col_name)s IS NULL)
BEGIN
SELECT "%(sq_name)s".nextval
INTO :new.%(col_name)s FROM dual;
END;
/""" % locals()
return sequence_sql, trigger_sql
def cache_key_culling_sql(self):
return """
SELECT cache_key
FROM (SELECT cache_key, rank() OVER (ORDER BY cache_key) AS rank FROM %s)
WHERE rank = %%s + 1
"""
def date_extract_sql(self, lookup_type, field_name):
if lookup_type == 'week_day':
# TO_CHAR(field, 'D') returns an integer from 1-7, where 1=Sunday.
return "TO_CHAR(%s, 'D')" % field_name
else:
# http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions050.htm
return "EXTRACT(%s FROM %s)" % (lookup_type.upper(), field_name)
def date_interval_sql(self, sql, connector, timedelta):
"""
Implements the interval functionality for expressions.
Format for Oracle:
(datefield + INTERVAL '3 00:03:20.000000' DAY(1) TO SECOND(6))
"""
minutes, seconds = divmod(timedelta.seconds, 60)
hours, minutes = divmod(minutes, 60)
days = str(timedelta.days)
day_precision = len(days)
fmt = "(%s %s INTERVAL '%s %02d:%02d:%02d.%06d' DAY(%d) TO SECOND(6))"
return fmt % (sql, connector, days, hours, minutes, seconds,
timedelta.microseconds, day_precision)
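# A hypothetical example: for sql='"last_seen"', connector='+' and
# timedelta(days=3, seconds=200) this returns
# ("last_seen" + INTERVAL '3 00:03:20.000000' DAY(1) TO SECOND(6))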
def date_trunc_sql(self, lookup_type, field_name):
# http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions230.htm#i1002084
if lookup_type in ('year', 'month'):
return "TRUNC(%s, '%s')" % (field_name, lookup_type.upper())
else:
return "TRUNC(%s)" % field_name
# Oracle crashes with "ORA-03113: end-of-file on communication channel"
# if the time zone name is passed in parameter. Use interpolation instead.
# https://groups.google.com/forum/#!msg/django-developers/zwQju7hbG78/9l934yelwfsJ
# This regexp matches all time zone names from the zoneinfo database.
_tzname_re = re.compile(r'^[\w/:+-]+$')
def _convert_field_to_tz(self, field_name, tzname):
if not self._tzname_re.match(tzname):
raise ValueError("Invalid time zone name: %s" % tzname)
# Convert from UTC to local time, returning TIMESTAMP WITH TIME ZONE.
result = "(FROM_TZ(%s, '0:00') AT TIME ZONE '%s')" % (field_name, tzname)
# Extracting from a TIMESTAMP WITH TIME ZONE ignore the time zone.
# Convert to a DATETIME, which is called DATE by Oracle. There's no
# built-in function to do that; the easiest is to go through a string.
result = "TO_CHAR(%s, 'YYYY-MM-DD HH24:MI:SS')" % result
result = "TO_DATE(%s, 'YYYY-MM-DD HH24:MI:SS')" % result
# Re-convert to a TIMESTAMP because EXTRACT only handles the date part
# on DATE values, even though they actually store the time part.
return "CAST(%s AS TIMESTAMP)" % result
def datetime_extract_sql(self, lookup_type, field_name, tzname):
if settings.USE_TZ:
field_name = self._convert_field_to_tz(field_name, tzname)
if lookup_type == 'week_day':
# TO_CHAR(field, 'D') returns an integer from 1-7, where 1=Sunday.
sql = "TO_CHAR(%s, 'D')" % field_name
else:
# http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions050.htm
sql = "EXTRACT(%s FROM %s)" % (lookup_type.upper(), field_name)
return sql, []
def datetime_trunc_sql(self, lookup_type, field_name, tzname):
if settings.USE_TZ:
field_name = self._convert_field_to_tz(field_name, tzname)
# http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions230.htm#i1002084
if lookup_type in ('year', 'month'):
sql = "TRUNC(%s, '%s')" % (field_name, lookup_type.upper())
elif lookup_type == 'day':
sql = "TRUNC(%s)" % field_name
elif lookup_type == 'hour':
sql = "TRUNC(%s, 'HH24')" % field_name
elif lookup_type == 'minute':
sql = "TRUNC(%s, 'MI')" % field_name
else:
sql = field_name # Cast to DATE removes sub-second precision.
return sql, []
def convert_values(self, value, field):
if isinstance(value, Database.LOB):
value = value.read()
if field and field.get_internal_type() == 'TextField':
value = force_text(value)
# Oracle stores empty strings as null. We need to undo this in
# order to adhere to the Django convention of using the empty
# string instead of null, but only if the field accepts the
# empty string.
if value is None and field and field.empty_strings_allowed:
value = ''
# Convert 1 or 0 to True or False
elif value in (1, 0) and field and field.get_internal_type() in ('BooleanField', 'NullBooleanField'):
value = bool(value)
# Force floats to the correct type
elif value is not None and field and field.get_internal_type() == 'FloatField':
value = float(value)
# Convert floats to decimals
elif value is not None and field and field.get_internal_type() == 'DecimalField':
value = util.typecast_decimal(field.format_number(value))
# cx_Oracle always returns datetime.datetime objects for
# DATE and TIMESTAMP columns, but Django wants to see a
# python datetime.date, .time, or .datetime. We use the type
# of the Field to determine which to cast to, but it's not
# always available.
# As a workaround, we cast to date if all the time-related
# values are 0, or to time if the date is 1/1/1900.
# This could be cleaned a bit by adding a method to the Field
# classes to normalize values from the database (the to_python
# method is used for validation and isn't what we want here).
elif isinstance(value, Database.Timestamp):
if field and field.get_internal_type() == 'DateTimeField':
pass
elif field and field.get_internal_type() == 'DateField':
value = value.date()
elif field and field.get_internal_type() == 'TimeField' or (value.year == 1900 and value.month == value.day == 1):
value = value.time()
elif value.hour == value.minute == value.second == value.microsecond == 0:
value = value.date()
return value
def deferrable_sql(self):
return " DEFERRABLE INITIALLY DEFERRED"
def drop_sequence_sql(self, table):
return "DROP SEQUENCE %s;" % self.quote_name(self._get_sequence_name(table))
def fetch_returned_insert_id(self, cursor):
return int(cursor._insert_id_var.getvalue())
def field_cast_sql(self, db_type, internal_type):
if db_type and db_type.endswith('LOB'):
return "DBMS_LOB.SUBSTR(%s, 4000)"
else:
return "%s"
def last_executed_query(self, cursor, sql, params):
# http://cx-oracle.sourceforge.net/html/cursor.html#Cursor.statement
# The DB API definition does not define this attribute.
statement = cursor.statement
if statement and six.PY2 and not isinstance(statement, unicode):
statement = statement.decode('utf-8')
# Unlike Psycopg's `query` and MySQLdb's `_last_executed`, CxOracle's
# `statement` doesn't contain the query parameters. refs #20010.
return super(DatabaseOperations, self).last_executed_query(cursor, statement, params)
def last_insert_id(self, cursor, table_name, pk_name):
sq_name = self._get_sequence_name(table_name)
cursor.execute('SELECT "%s".currval FROM dual' % sq_name)
return cursor.fetchone()[0]
def lookup_cast(self, lookup_type):
if lookup_type in ('iexact', 'icontains', 'istartswith', 'iendswith'):
return "UPPER(%s)"
return "%s"
def max_in_list_size(self):
return 1000
def max_name_length(self):
return 30
def prep_for_iexact_query(self, x):
return x
def process_clob(self, value):
if value is None:
return ''
return force_text(value.read())
def quote_name(self, name):
# SQL92 requires delimited (quoted) names to be case-sensitive. When
# not quoted, Oracle has case-insensitive behavior for identifiers, but
# always defaults to uppercase.
# We simplify things by making Oracle identifiers always uppercase.
if not name.startswith('"') and not name.endswith('"'):
name = '"%s"' % util.truncate_name(name.upper(),
self.max_name_length())
# Oracle puts the query text into a (query % args) construct, so % signs
# in names need to be escaped. The '%%' will be collapsed back to '%' at
# that stage so we aren't really making the name longer here.
name = name.replace('%','%%')
return name.upper()
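# For example (illustrative): quote_name('django_content_type') returns
# '"DJANGO_CONTENT_TYPE"', while a name that is already quoted is only
# uppercased, not wrapped in another pair of quotes.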
def random_function_sql(self):
return "DBMS_RANDOM.RANDOM"
def regex_lookup_9(self, lookup_type):
raise NotImplementedError("Regexes are not supported in Oracle before version 10g.")
def regex_lookup_10(self, lookup_type):
if lookup_type == 'regex':
match_option = "'c'"
else:
match_option = "'i'"
return 'REGEXP_LIKE(%%s, %%s, %s)' % match_option
def regex_lookup(self, lookup_type):
# If regex_lookup is called before it's been initialized, then create
# a cursor to initialize it and recur.
self.connection.cursor()
return self.connection.ops.regex_lookup(lookup_type)
def return_insert_id(self):
return "RETURNING %s INTO %%s", (InsertIdVar(),)
def savepoint_create_sql(self, sid):
return convert_unicode("SAVEPOINT " + self.quote_name(sid))
def savepoint_rollback_sql(self, sid):
return convert_unicode("ROLLBACK TO SAVEPOINT " + self.quote_name(sid))
def sql_flush(self, style, tables, sequences, allow_cascade=False):
# Return a list of 'TRUNCATE x;', 'TRUNCATE y;',
# 'TRUNCATE z;'... style SQL statements
if tables:
# Oracle does support TRUNCATE, but it seems to get us into
# FK referential trouble, whereas DELETE FROM table works.
sql = ['%s %s %s;' % (
style.SQL_KEYWORD('DELETE'),
style.SQL_KEYWORD('FROM'),
style.SQL_FIELD(self.quote_name(table))
) for table in tables]
# Since we've just deleted all the rows, running our sequence
# ALTER code will reset the sequence to 0.
sql.extend(self.sequence_reset_by_name_sql(style, sequences))
return sql
else:
return []
def sequence_reset_by_name_sql(self, style, sequences):
sql = []
for sequence_info in sequences:
sequence_name = self._get_sequence_name(sequence_info['table'])
table_name = self.quote_name(sequence_info['table'])
column_name = self.quote_name(sequence_info['column'] or 'id')
query = _get_sequence_reset_sql() % {'sequence': sequence_name,
'table': table_name,
'column': column_name}
sql.append(query)
return sql
def sequence_reset_sql(self, style, model_list):
from django.db import models
output = []
query = _get_sequence_reset_sql()
for model in model_list:
for f in model._meta.local_fields:
if isinstance(f, models.AutoField):
table_name = self.quote_name(model._meta.db_table)
sequence_name = self._get_sequence_name(model._meta.db_table)
column_name = self.quote_name(f.column)
output.append(query % {'sequence': sequence_name,
'table': table_name,
'column': column_name})
# Only one AutoField is allowed per model, so don't
# continue to loop
break
for f in model._meta.many_to_many:
if not f.rel.through:
table_name = self.quote_name(f.m2m_db_table())
sequence_name = self._get_sequence_name(f.m2m_db_table())
column_name = self.quote_name('id')
output.append(query % {'sequence': sequence_name,
'table': table_name,
'column': column_name})
return output
def start_transaction_sql(self):
return ''
def tablespace_sql(self, tablespace, inline=False):
if inline:
return "USING INDEX TABLESPACE %s" % self.quote_name(tablespace)
else:
return "TABLESPACE %s" % self.quote_name(tablespace)
def value_to_db_date(self, value):
"""
Transform a date value to an object compatible with what is expected
by the backend driver for date columns.
The default implementation transforms the date to text, but that is not
necessary for Oracle.
"""
return value
def value_to_db_datetime(self, value):
"""
Transform a datetime value to an object compatible with what is expected
by the backend driver for datetime columns.
If a naive datetime is passed, it is assumed to be in UTC. Normally Django's
models.DateTimeField makes sure that, if USE_TZ is True, the passed datetime
is timezone aware.
"""
if value is None:
return None
# cx_Oracle doesn't support tz-aware datetimes
if timezone.is_aware(value):
if settings.USE_TZ:
value = value.astimezone(timezone.utc).replace(tzinfo=None)
else:
raise ValueError("Oracle backend does not support timezone-aware datetimes when USE_TZ is False.")
return Oracle_datetime.from_datetime(value)
def value_to_db_time(self, value):
if value is None:
return None
if isinstance(value, six.string_types):
return datetime.datetime.strptime(value, '%H:%M:%S')
# Oracle doesn't support tz-aware times
if timezone.is_aware(value):
raise ValueError("Oracle backend does not support timezone-aware times.")
return Oracle_datetime(1900, 1, 1, value.hour, value.minute,
value.second, value.microsecond)
def year_lookup_bounds_for_date_field(self, value):
# Create bounds as real date values
first = datetime.date(value, 1, 1)
last = datetime.date(value, 12, 31)
return [first, last]
def year_lookup_bounds_for_datetime_field(self, value):
# cx_Oracle doesn't support tz-aware datetimes
bounds = super(DatabaseOperations, self).year_lookup_bounds_for_datetime_field(value)
if settings.USE_TZ:
bounds = [b.astimezone(timezone.utc) for b in bounds]
return [Oracle_datetime.from_datetime(b) for b in bounds]
def combine_expression(self, connector, sub_expressions):
"Oracle requires special cases for %% and & operators in query expressions"
if connector == '%%':
return 'MOD(%s)' % ','.join(sub_expressions)
elif connector == '&':
return 'BITAND(%s)' % ','.join(sub_expressions)
elif connector == '|':
raise NotImplementedError("Bit-wise or is not supported in Oracle.")
return super(DatabaseOperations, self).combine_expression(connector, sub_expressions)
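# For example (illustrative): combine_expression('%%', ['"A"', '"B"']) yields
# 'MOD("A","B")' and combine_expression('&', ['"A"', '"B"']) yields 'BITAND("A","B")'.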
def _get_sequence_name(self, table):
name_length = self.max_name_length() - 3
return '%s_SQ' % util.truncate_name(table, name_length).upper()
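# For example (illustrative): a table named 'vote' maps to the sequence name
# 'VOTE_SQ'; longer table names are truncated to fit Oracle's 30-character limit.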
def _get_trigger_name(self, table):
name_length = self.max_name_length() - 3
return '%s_TR' % util.truncate_name(table, name_length).upper()
def bulk_insert_sql(self, fields, num_values):
items_sql = "SELECT %s FROM DUAL" % ", ".join(["%s"] * len(fields))
return " UNION ALL ".join([items_sql] * num_values)
class _UninitializedOperatorsDescriptor(object):
def __get__(self, instance, owner):
# If connection.operators is looked up before a connection has been
# created, transparently initialize connection.operators to avert an
# AttributeError.
if instance is None:
raise AttributeError("operators not available as class attribute")
# Creating a cursor will initialize the operators.
instance.cursor().close()
return instance.__dict__['operators']
class DatabaseWrapper(BaseDatabaseWrapper):
vendor = 'oracle'
operators = _UninitializedOperatorsDescriptor()
_standard_operators = {
'exact': '= %s',
'iexact': '= UPPER(%s)',
'contains': "LIKE TRANSLATE(%s USING NCHAR_CS) ESCAPE TRANSLATE('\\' USING NCHAR_CS)",
'icontains': "LIKE UPPER(TRANSLATE(%s USING NCHAR_CS)) ESCAPE TRANSLATE('\\' USING NCHAR_CS)",
'gt': '> %s',
'gte': '>= %s',
'lt': '< %s',
'lte': '<= %s',
'startswith': "LIKE TRANSLATE(%s USING NCHAR_CS) ESCAPE TRANSLATE('\\' USING NCHAR_CS)",
'endswith': "LIKE TRANSLATE(%s USING NCHAR_CS) ESCAPE TRANSLATE('\\' USING NCHAR_CS)",
'istartswith': "LIKE UPPER(TRANSLATE(%s USING NCHAR_CS)) ESCAPE TRANSLATE('\\' USING NCHAR_CS)",
'iendswith': "LIKE UPPER(TRANSLATE(%s USING NCHAR_CS)) ESCAPE TRANSLATE('\\' USING NCHAR_CS)",
}
_likec_operators = _standard_operators.copy()
_likec_operators.update({
'contains': "LIKEC %s ESCAPE '\\'",
'icontains': "LIKEC UPPER(%s) ESCAPE '\\'",
'startswith': "LIKEC %s ESCAPE '\\'",
'endswith': "LIKEC %s ESCAPE '\\'",
'istartswith': "LIKEC UPPER(%s) ESCAPE '\\'",
'iendswith': "LIKEC UPPER(%s) ESCAPE '\\'",
})
Database = Database
def __init__(self, *args, **kwargs):
super(DatabaseWrapper, self).__init__(*args, **kwargs)
self.features = DatabaseFeatures(self)
use_returning_into = self.settings_dict["OPTIONS"].get('use_returning_into', True)
self.features.can_return_id_from_insert = use_returning_into
self.ops = DatabaseOperations(self)
self.client = DatabaseClient(self)
self.creation = DatabaseCreation(self)
self.introspection = DatabaseIntrospection(self)
self.validation = BaseDatabaseValidation(self)
def _connect_string(self):
settings_dict = self.settings_dict
if not settings_dict['HOST'].strip():
settings_dict['HOST'] = 'localhost'
if settings_dict['PORT'].strip():
dsn = Database.makedsn(settings_dict['HOST'],
int(settings_dict['PORT']),
settings_dict['NAME'])
else:
dsn = settings_dict['NAME']
return "%s/%s@%s" % (settings_dict['USER'],
settings_dict['PASSWORD'], dsn)
def get_connection_params(self):
conn_params = self.settings_dict['OPTIONS'].copy()
if 'use_returning_into' in conn_params:
del conn_params['use_returning_into']
return conn_params
def get_new_connection(self, conn_params):
conn_string = convert_unicode(self._connect_string())
return Database.connect(conn_string, **conn_params)
def init_connection_state(self):
cursor = self.create_cursor()
# Set the territory first. The territory overrides NLS_DATE_FORMAT
# and NLS_TIMESTAMP_FORMAT to the territory default. When all of
# these are set in a single statement, it isn't clear what is supposed
# to happen.
cursor.execute("ALTER SESSION SET NLS_TERRITORY = 'AMERICA'")
# Set oracle date to ansi date format. This only needs to execute
# once when we create a new connection. We also set the Territory
# to 'AMERICA' which forces Sunday to evaluate to a '1' in
# TO_CHAR().
cursor.execute(
"ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS'"
" NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF'"
+ (" TIME_ZONE = 'UTC'" if settings.USE_TZ else ''))
cursor.close()
if 'operators' not in self.__dict__:
# Ticket #14149: Check whether our LIKE implementation will
# work for this connection or we need to fall back on LIKEC.
# This check is performed only once per DatabaseWrapper
# instance per thread, since subsequent connections will use
# the same settings.
cursor = self.create_cursor()
try:
cursor.execute("SELECT 1 FROM DUAL WHERE DUMMY %s"
% self._standard_operators['contains'],
['X'])
except DatabaseError:
self.operators = self._likec_operators
else:
self.operators = self._standard_operators
cursor.close()
# There's no way for the DatabaseOperations class to know the
# currently active Oracle version, so we do some setups here.
# TODO: Multi-db support will need a better solution (a way to
# communicate the current version).
if self.oracle_version is not None and self.oracle_version <= 9:
self.ops.regex_lookup = self.ops.regex_lookup_9
else:
self.ops.regex_lookup = self.ops.regex_lookup_10
try:
self.connection.stmtcachesize = 20
except:
# Django docs specify cx_Oracle version 4.3.1 or higher, but
# stmtcachesize is available only in 4.3.2 and up.
pass
def create_cursor(self):
return FormatStylePlaceholderCursor(self.connection)
def _commit(self):
if self.connection is not None:
try:
return self.connection.commit()
except Database.DatabaseError as e:
# cx_Oracle 5.0.4 raises a cx_Oracle.DatabaseError exception
# with the following attributes and values:
# code = 2091
# message = 'ORA-02091: transaction rolled back
# 'ORA-02291: integrity constraint (TEST_DJANGOTEST.SYS
# _C00102056) violated - parent key not found'
# We convert that particular case to our IntegrityError exception
x = e.args[0]
if hasattr(x, 'code') and hasattr(x, 'message') \
and x.code == 2091 and 'ORA-02291' in x.message:
six.reraise(utils.IntegrityError, utils.IntegrityError(*tuple(e.args)), sys.exc_info()[2])
raise
# Oracle doesn't support savepoint commits. Ignore them.
def _savepoint_commit(self, sid):
pass
def _set_autocommit(self, autocommit):
with self.wrap_database_errors:
self.connection.autocommit = autocommit
def check_constraints(self, table_names=None):
"""
To check constraints, we set constraints to immediate. Then, when we're done,
we must ensure they are returned to deferred.
"""
self.cursor().execute('SET CONSTRAINTS ALL IMMEDIATE')
self.cursor().execute('SET CONSTRAINTS ALL DEFERRED')
def is_usable(self):
try:
if hasattr(self.connection, 'ping'): # Oracle 10g R2 and higher
self.connection.ping()
else:
# Use a cx_Oracle cursor directly, bypassing Django's utilities.
self.connection.cursor().execute("SELECT 1 FROM DUAL")
except Database.Error:
return False
else:
return True
@cached_property
def oracle_version(self):
with self.temporary_connection():
version = self.connection.version
try:
return int(version.split('.')[0])
except ValueError:
return None
class OracleParam(object):
"""
Wrapper object for formatting parameters for Oracle. If the string
representation of the value is large enough (greater than 4000 characters)
the input size needs to be set as CLOB. Alternatively, if the parameter
has an `input_size` attribute, then the value of the `input_size` attribute
will be used instead. Otherwise, no input size will be set for the
parameter when executing the query.
"""
def __init__(self, param, cursor, strings_only=False):
# With raw SQL queries, datetimes can reach this function
# without being converted by DateTimeField.get_db_prep_value.
if settings.USE_TZ and (isinstance(param, datetime.datetime) and
not isinstance(param, Oracle_datetime)):
if timezone.is_naive(param):
warnings.warn("Oracle received a naive datetime (%s)"
" while time zone support is active." % param,
RuntimeWarning)
default_timezone = timezone.get_default_timezone()
param = timezone.make_aware(param, default_timezone)
param = Oracle_datetime.from_datetime(param.astimezone(timezone.utc))
# Oracle doesn't recognize True and False correctly in Python 3.
# The conversion done below works both in 2 and 3.
if param is True:
param = "1"
elif param is False:
param = "0"
if hasattr(param, 'bind_parameter'):
self.force_bytes = param.bind_parameter(cursor)
elif isinstance(param, six.memoryview):
self.force_bytes = param
else:
self.force_bytes = convert_unicode(param, cursor.charset,
strings_only)
if hasattr(param, 'input_size'):
# If parameter has `input_size` attribute, use that.
self.input_size = param.input_size
elif isinstance(param, six.string_types) and len(param) > 4000:
# Mark any string param greater than 4000 characters as a CLOB.
self.input_size = Database.CLOB
else:
self.input_size = None
class VariableWrapper(object):
"""
An adapter class for cursor variables that prevents the wrapped object
from being converted into a string when used to instantiate an OracleParam.
This can be used generally for any other object that should be passed into
Cursor.execute as-is.
"""
def __init__(self, var):
self.var = var
def bind_parameter(self, cursor):
return self.var
def __getattr__(self, key):
return getattr(self.var, key)
def __setattr__(self, key, value):
if key == 'var':
self.__dict__[key] = value
else:
setattr(self.var, key, value)
class InsertIdVar(object):
"""
A late-binding cursor variable that can be passed to Cursor.execute
as a parameter, in order to receive the id of the row created by an
insert statement.
"""
def bind_parameter(self, cursor):
param = cursor.cursor.var(Database.NUMBER)
cursor._insert_id_var = param
return param
class FormatStylePlaceholderCursor(object):
"""
Django uses "format" (e.g. '%s') style placeholders, but Oracle uses ":var"
style. This fixes it -- but note that if you want to use a literal "%s" in
a query, you'll need to use "%%s".
We also do automatic conversion between Unicode on the Python side and
UTF-8 -- for talking to Oracle -- in here.
"""
charset = 'utf-8'
def __init__(self, connection):
self.cursor = connection.cursor()
# Necessary to retrieve decimal values without rounding error.
self.cursor.numbersAsStrings = True
# Default arraysize of 1 is highly sub-optimal.
self.cursor.arraysize = 100
def _format_params(self, params):
try:
return dict((k,OracleParam(v, self, True)) for k,v in params.items())
except AttributeError:
return tuple([OracleParam(p, self, True) for p in params])
def _guess_input_sizes(self, params_list):
# Try dict handling; if that fails, treat as sequence
if hasattr(params_list[0], 'keys'):
sizes = {}
for params in params_list:
for k, value in params.items():
if value.input_size:
sizes[k] = value.input_size
self.setinputsizes(**sizes)
else:
# It's not a list of dicts; it's a list of sequences
sizes = [None] * len(params_list[0])
for params in params_list:
for i, value in enumerate(params):
if value.input_size:
sizes[i] = value.input_size
self.setinputsizes(*sizes)
def _param_generator(self, params):
# Try dict handling; if that fails, treat as sequence
if hasattr(params, 'items'):
return dict((k, v.force_bytes) for k,v in params.items())
else:
return [p.force_bytes for p in params]
def _fix_for_params(self, query, params):
# cx_Oracle wants no trailing ';' for SQL statements. For PL/SQL, it
# does want a trailing ';' but not a trailing '/'. However, these
# characters must be included in the original query in case the query
# is being passed to SQL*Plus.
if query.endswith(';') or query.endswith('/'):
query = query[:-1]
if params is None:
params = []
query = convert_unicode(query, self.charset)
elif hasattr(params, 'keys'):
# Handle params as dict
args = dict((k, ":%s"%k) for k in params.keys())
query = convert_unicode(query % args, self.charset)
else:
# Handle params as sequence
args = [(':arg%d' % i) for i in range(len(params))]
query = convert_unicode(query % tuple(args), self.charset)
return query, self._format_params(params)
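# For example (illustrative): "SELECT * FROM t WHERE col = %s" with a single
# positional parameter is rewritten to "SELECT * FROM t WHERE col = :arg0"
# before being handed to cx_Oracle.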
def execute(self, query, params=None):
query, params = self._fix_for_params(query, params)
self._guess_input_sizes([params])
try:
return self.cursor.execute(query, self._param_generator(params))
except Database.DatabaseError as e:
# cx_Oracle <= 4.4.0 wrongly raises a DatabaseError for ORA-01400.
if hasattr(e.args[0], 'code') and e.args[0].code == 1400 and not isinstance(e, IntegrityError):
six.reraise(utils.IntegrityError, utils.IntegrityError(*tuple(e.args)), sys.exc_info()[2])
raise
def executemany(self, query, params=None):
if not params:
# No params given, nothing to do
return None
# uniform treatment for sequences and iterables
params_iter = iter(params)
query, firstparams = self._fix_for_params(query, next(params_iter))
# we build a list of formatted params; as we're going to traverse it
# more than once, we can't make it lazy by using a generator
formatted = [firstparams]+[self._format_params(p) for p in params_iter]
self._guess_input_sizes(formatted)
try:
return self.cursor.executemany(query,
[self._param_generator(p) for p in formatted])
except Database.DatabaseError as e:
# cx_Oracle <= 4.4.0 wrongly raises a DatabaseError for ORA-01400.
if hasattr(e.args[0], 'code') and e.args[0].code == 1400 and not isinstance(e, IntegrityError):
six.reraise(utils.IntegrityError, utils.IntegrityError(*tuple(e.args)), sys.exc_info()[2])
raise
def fetchone(self):
row = self.cursor.fetchone()
if row is None:
return row
return _rowfactory(row, self.cursor)
def fetchmany(self, size=None):
if size is None:
size = self.arraysize
return tuple([_rowfactory(r, self.cursor)
for r in self.cursor.fetchmany(size)])
def fetchall(self):
return tuple([_rowfactory(r, self.cursor)
for r in self.cursor.fetchall()])
def var(self, *args):
return VariableWrapper(self.cursor.var(*args))
def arrayvar(self, *args):
return VariableWrapper(self.cursor.arrayvar(*args))
def __getattr__(self, attr):
if attr in self.__dict__:
return self.__dict__[attr]
else:
return getattr(self.cursor, attr)
def __iter__(self):
return CursorIterator(self.cursor)
class CursorIterator(six.Iterator):
"""Cursor iterator wrapper that invokes our custom row factory."""
def __init__(self, cursor):
self.cursor = cursor
self.iter = iter(cursor)
def __iter__(self):
return self
def __next__(self):
return _rowfactory(next(self.iter), self.cursor)
def _rowfactory(row, cursor):
# Cast numeric values as the appropriate Python type based upon the
# cursor description, and convert strings to unicode.
casted = []
for value, desc in zip(row, cursor.description):
if value is not None and desc[1] is Database.NUMBER:
precision, scale = desc[4:6]
if scale == -127:
if precision == 0:
# NUMBER column: decimal-precision floating point
# This will normally be an integer from a sequence,
# but it could be a decimal value.
if '.' in value:
value = decimal.Decimal(value)
else:
value = int(value)
else:
# FLOAT column: binary-precision floating point.
# This comes from FloatField columns.
value = float(value)
elif precision > 0:
# NUMBER(p,s) column: decimal-precision fixed point.
# This comes from IntField and DecimalField columns.
if scale == 0:
value = int(value)
else:
value = decimal.Decimal(value)
elif '.' in value:
# No type information. This normally comes from a
# mathematical expression in the SELECT list. Guess int
# or Decimal based on whether it has a decimal point.
value = decimal.Decimal(value)
else:
value = int(value)
# datetimes are returned as TIMESTAMP, except the results
# of "dates" queries, which are returned as DATETIME.
elif desc[1] in (Database.TIMESTAMP, Database.DATETIME):
# Confirm that dt is naive before overwriting its tzinfo.
if settings.USE_TZ and value is not None and timezone.is_naive(value):
value = value.replace(tzinfo=timezone.utc)
elif desc[1] in (Database.STRING, Database.FIXED_CHAR,
Database.LONG_STRING):
value = to_unicode(value)
casted.append(value)
return tuple(casted)
def to_unicode(s):
"""
Convert strings to Unicode objects (and return all other data types
unchanged).
"""
if isinstance(s, six.string_types):
return force_text(s)
return s
def _get_sequence_reset_sql():
# TODO: colorize this SQL code with style.SQL_KEYWORD(), etc.
return """
DECLARE
table_value integer;
seq_value integer;
BEGIN
SELECT NVL(MAX(%(column)s), 0) INTO table_value FROM %(table)s;
SELECT NVL(last_number - cache_size, 0) INTO seq_value FROM user_sequences
WHERE sequence_name = '%(sequence)s';
WHILE table_value > seq_value LOOP
SELECT "%(sequence)s".nextval INTO seq_value FROM dual;
END LOOP;
END;
/"""
| apache-2.0 |
olapaola/olapaola-android-scripting | python/src/Lib/bsddb/test/test_basics.py | 31 | 32840 | """
Basic TestCases for BTree and hash DBs, with and without a DBEnv, with
various DB flags, etc.
"""
import os
import errno
import string
from pprint import pprint
import unittest
import time
from test_all import db, test_support, verbose, get_new_environment_path, \
get_new_database_path
DASH = '-'
#----------------------------------------------------------------------
class VersionTestCase(unittest.TestCase):
def test00_version(self):
info = db.version()
if verbose:
print '\n', '-=' * 20
print 'bsddb.db.version(): %s' % (info, )
print db.DB_VERSION_STRING
print '-=' * 20
self.assertEqual(info, (db.DB_VERSION_MAJOR, db.DB_VERSION_MINOR,
db.DB_VERSION_PATCH))
#----------------------------------------------------------------------
class BasicTestCase(unittest.TestCase):
dbtype = db.DB_UNKNOWN # must be set in derived class
dbopenflags = 0
dbsetflags = 0
dbmode = 0660
dbname = None
useEnv = 0
envflags = 0
envsetflags = 0
_numKeys = 1002 # PRIVATE. NOTE: must be an even value
def setUp(self):
if self.useEnv:
self.homeDir=get_new_environment_path()
try:
self.env = db.DBEnv()
self.env.set_lg_max(1024*1024)
self.env.set_tx_max(30)
self.env.set_tx_timestamp(int(time.time()))
self.env.set_flags(self.envsetflags, 1)
self.env.open(self.homeDir, self.envflags | db.DB_CREATE)
self.filename = "test"
# Yes, a bare except is intended, since we're re-raising the exc.
except:
test_support.rmtree(self.homeDir)
raise
else:
self.env = None
self.filename = get_new_database_path()
# create and open the DB
self.d = db.DB(self.env)
self.d.set_flags(self.dbsetflags)
if self.dbname:
self.d.open(self.filename, self.dbname, self.dbtype,
self.dbopenflags|db.DB_CREATE, self.dbmode)
else:
self.d.open(self.filename, # try out keyword args
mode = self.dbmode,
dbtype = self.dbtype,
flags = self.dbopenflags|db.DB_CREATE)
self.populateDB()
def tearDown(self):
self.d.close()
if self.env is not None:
self.env.close()
test_support.rmtree(self.homeDir)
else:
os.remove(self.filename)
def populateDB(self, _txn=None):
d = self.d
for x in range(self._numKeys//2):
key = '%04d' % (self._numKeys - x) # insert keys in reverse order
data = self.makeData(key)
d.put(key, data, _txn)
d.put('empty value', '', _txn)
for x in range(self._numKeys//2-1):
key = '%04d' % x # and now some in forward order
data = self.makeData(key)
d.put(key, data, _txn)
if _txn:
_txn.commit()
num = len(d)
if verbose:
print "created %d records" % num
def makeData(self, key):
return DASH.join([key] * 5)
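# e.g. makeData('0321') == '0321-0321-0321-0321-0321'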
#----------------------------------------
def test01_GetsAndPuts(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test01_GetsAndPuts..." % self.__class__.__name__
for key in ['0001', '0100', '0400', '0700', '0999']:
data = d.get(key)
if verbose:
print data
self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321')
# By default non-existent keys return None...
self.assertEqual(d.get('abcd'), None)
# ...but they raise exceptions in other situations. Call
# set_get_returns_none() to change it.
try:
d.delete('abcd')
except db.DBNotFoundError, val:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_NOTFOUND)
else :
self.assertEqual(val.args[0], db.DB_NOTFOUND)
if verbose: print val
else:
self.fail("expected exception")
d.put('abcd', 'a new record')
self.assertEqual(d.get('abcd'), 'a new record')
d.put('abcd', 'same key')
if self.dbsetflags & db.DB_DUP:
self.assertEqual(d.get('abcd'), 'a new record')
else:
self.assertEqual(d.get('abcd'), 'same key')
try:
d.put('abcd', 'this should fail', flags=db.DB_NOOVERWRITE)
except db.DBKeyExistError, val:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_KEYEXIST)
else :
self.assertEqual(val.args[0], db.DB_KEYEXIST)
if verbose: print val
else:
self.fail("expected exception")
if self.dbsetflags & db.DB_DUP:
self.assertEqual(d.get('abcd'), 'a new record')
else:
self.assertEqual(d.get('abcd'), 'same key')
d.sync()
d.close()
del d
self.d = db.DB(self.env)
if self.dbname:
self.d.open(self.filename, self.dbname)
else:
self.d.open(self.filename)
d = self.d
self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321')
if self.dbsetflags & db.DB_DUP:
self.assertEqual(d.get('abcd'), 'a new record')
else:
self.assertEqual(d.get('abcd'), 'same key')
rec = d.get_both('0555', '0555-0555-0555-0555-0555')
if verbose:
print rec
self.assertEqual(d.get_both('0555', 'bad data'), None)
# test default value
data = d.get('bad key', 'bad data')
self.assertEqual(data, 'bad data')
# any object can pass through
data = d.get('bad key', self)
self.assertEqual(data, self)
s = d.stat()
self.assertEqual(type(s), type({}))
if verbose:
print 'd.stat() returned this dictionary:'
pprint(s)
#----------------------------------------
def test02_DictionaryMethods(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test02_DictionaryMethods..." % \
self.__class__.__name__
for key in ['0002', '0101', '0401', '0701', '0998']:
data = d[key]
self.assertEqual(data, self.makeData(key))
if verbose:
print data
self.assertEqual(len(d), self._numKeys)
keys = d.keys()
self.assertEqual(len(keys), self._numKeys)
self.assertEqual(type(keys), type([]))
d['new record'] = 'a new record'
self.assertEqual(len(d), self._numKeys+1)
keys = d.keys()
self.assertEqual(len(keys), self._numKeys+1)
d['new record'] = 'a replacement record'
self.assertEqual(len(d), self._numKeys+1)
keys = d.keys()
self.assertEqual(len(keys), self._numKeys+1)
if verbose:
print "the first 10 keys are:"
pprint(keys[:10])
self.assertEqual(d['new record'], 'a replacement record')
# We also check the positional parameter
self.assertEqual(d.has_key('0001', None), 1)
# We also check the keyword parameter
self.assertEqual(d.has_key('spam', txn=None), 0)
items = d.items()
self.assertEqual(len(items), self._numKeys+1)
self.assertEqual(type(items), type([]))
self.assertEqual(type(items[0]), type(()))
self.assertEqual(len(items[0]), 2)
if verbose:
print "the first 10 items are:"
pprint(items[:10])
values = d.values()
self.assertEqual(len(values), self._numKeys+1)
self.assertEqual(type(values), type([]))
if verbose:
print "the first 10 values are:"
pprint(values[:10])
#----------------------------------------
def test03_SimpleCursorStuff(self, get_raises_error=0, set_raises_error=0):
if verbose:
print '\n', '-=' * 30
print "Running %s.test03_SimpleCursorStuff (get_error %s, set_error %s)..." % \
(self.__class__.__name__, get_raises_error, set_raises_error)
if self.env and self.dbopenflags & db.DB_AUTO_COMMIT:
txn = self.env.txn_begin()
else:
txn = None
c = self.d.cursor(txn=txn)
rec = c.first()
count = 0
while rec is not None:
count = count + 1
if verbose and count % 100 == 0:
print rec
try:
rec = c.next()
except db.DBNotFoundError, val:
if get_raises_error:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_NOTFOUND)
else :
self.assertEqual(val.args[0], db.DB_NOTFOUND)
if verbose: print val
rec = None
else:
self.fail("unexpected DBNotFoundError")
self.assertEqual(c.get_current_size(), len(c.current()[1]),
"%s != len(%r)" % (c.get_current_size(), c.current()[1]))
self.assertEqual(count, self._numKeys)
rec = c.last()
count = 0
while rec is not None:
count = count + 1
if verbose and count % 100 == 0:
print rec
try:
rec = c.prev()
except db.DBNotFoundError, val:
if get_raises_error:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_NOTFOUND)
else :
self.assertEqual(val.args[0], db.DB_NOTFOUND)
if verbose: print val
rec = None
else:
self.fail("unexpected DBNotFoundError")
self.assertEqual(count, self._numKeys)
rec = c.set('0505')
rec2 = c.current()
self.assertEqual(rec, rec2)
self.assertEqual(rec[0], '0505')
self.assertEqual(rec[1], self.makeData('0505'))
self.assertEqual(c.get_current_size(), len(rec[1]))
# make sure we get empty values properly
rec = c.set('empty value')
self.assertEqual(rec[1], '')
self.assertEqual(c.get_current_size(), 0)
try:
n = c.set('bad key')
except db.DBNotFoundError, val:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_NOTFOUND)
else :
self.assertEqual(val.args[0], db.DB_NOTFOUND)
if verbose: print val
else:
if set_raises_error:
self.fail("expected exception")
if n != None:
self.fail("expected None: %r" % (n,))
rec = c.get_both('0404', self.makeData('0404'))
self.assertEqual(rec, ('0404', self.makeData('0404')))
try:
n = c.get_both('0404', 'bad data')
except db.DBNotFoundError, val:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_NOTFOUND)
else :
self.assertEqual(val.args[0], db.DB_NOTFOUND)
if verbose: print val
else:
if get_raises_error:
self.fail("expected exception")
if n != None:
self.fail("expected None: %r" % (n,))
if self.d.get_type() == db.DB_BTREE:
rec = c.set_range('011')
if verbose:
print "searched for '011', found: ", rec
rec = c.set_range('011',dlen=0,doff=0)
if verbose:
print "searched (partial) for '011', found: ", rec
if rec[1] != '': self.fail('expected empty data portion')
ev = c.set_range('empty value')
if verbose:
print "search for 'empty value' returned", ev
if ev[1] != '': self.fail('empty value lookup failed')
c.set('0499')
c.delete()
try:
rec = c.current()
except db.DBKeyEmptyError, val:
if get_raises_error:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], db.DB_KEYEMPTY)
else :
self.assertEqual(val.args[0], db.DB_KEYEMPTY)
if verbose: print val
else:
self.fail("unexpected DBKeyEmptyError")
else:
if get_raises_error:
self.fail('DBKeyEmptyError exception expected')
c.next()
c2 = c.dup(db.DB_POSITION)
self.assertEqual(c.current(), c2.current())
c2.put('', 'a new value', db.DB_CURRENT)
self.assertEqual(c.current(), c2.current())
self.assertEqual(c.current()[1], 'a new value')
c2.put('', 'er', db.DB_CURRENT, dlen=0, doff=5)
self.assertEqual(c2.current()[1], 'a newer value')
c.close()
c2.close()
if txn:
txn.commit()
# time to abuse the closed cursors and hope we don't crash
methods_to_test = {
'current': (),
'delete': (),
'dup': (db.DB_POSITION,),
'first': (),
'get': (0,),
'next': (),
'prev': (),
'last': (),
'put':('', 'spam', db.DB_CURRENT),
'set': ("0505",),
}
for method, args in methods_to_test.items():
try:
if verbose:
print "attempting to use a closed cursor's %s method" % \
method
# a bug may cause a NULL pointer dereference...
apply(getattr(c, method), args)
except db.DBError, val:
import sys
if sys.version_info[0] < 3 :
self.assertEqual(val[0], 0)
else :
self.assertEqual(val.args[0], 0)
if verbose: print val
else:
self.fail("no exception raised when using a buggy cursor's"
"%s method" % method)
#
# free cursor referencing a closed database, it should not barf:
#
oldcursor = self.d.cursor(txn=txn)
self.d.close()
# this would originally cause a segfault when the cursor for a
# closed database was cleaned up. it should not anymore.
# SF pybsddb bug id 667343
del oldcursor
def test03b_SimpleCursorWithoutGetReturnsNone0(self):
# same test but raise exceptions instead of returning None
if verbose:
print '\n', '-=' * 30
print "Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \
self.__class__.__name__
old = self.d.set_get_returns_none(0)
self.assertEqual(old, 2)
self.test03_SimpleCursorStuff(get_raises_error=1, set_raises_error=1)
def test03b_SimpleCursorWithGetReturnsNone1(self):
# same test but raise exceptions instead of returning None
if verbose:
print '\n', '-=' * 30
print "Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \
self.__class__.__name__
old = self.d.set_get_returns_none(1)
self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=1)
def test03c_SimpleCursorGetReturnsNone2(self):
# same test but raise exceptions instead of returning None
if verbose:
print '\n', '-=' * 30
print "Running %s.test03c_SimpleCursorStuffWithoutSetReturnsNone..." % \
self.__class__.__name__
old = self.d.set_get_returns_none(1)
self.assertEqual(old, 2)
old = self.d.set_get_returns_none(2)
self.assertEqual(old, 1)
self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=0)
#----------------------------------------
def test04_PartialGetAndPut(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test04_PartialGetAndPut..." % \
self.__class__.__name__
key = "partialTest"
data = "1" * 1000 + "2" * 1000
d.put(key, data)
self.assertEqual(d.get(key), data)
self.assertEqual(d.get(key, dlen=20, doff=990),
("1" * 10) + ("2" * 10))
d.put("partialtest2", ("1" * 30000) + "robin" )
self.assertEqual(d.get("partialtest2", dlen=5, doff=30000), "robin")
# There seems to be a bug in DB here... Commented out the test for
# now.
##self.assertEqual(d.get("partialtest2", dlen=5, doff=30010), "")
if self.dbsetflags != db.DB_DUP:
# Partial put with duplicate records requires a cursor
d.put(key, "0000", dlen=2000, doff=0)
self.assertEqual(d.get(key), "0000")
d.put(key, "1111", dlen=1, doff=2)
self.assertEqual(d.get(key), "0011110")
#----------------------------------------
def test05_GetSize(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test05_GetSize..." % self.__class__.__name__
for i in range(1, 50000, 500):
key = "size%s" % i
#print "before ", i,
d.put(key, "1" * i)
#print "after",
self.assertEqual(d.get_size(key), i)
#print "done"
#----------------------------------------
def test06_Truncate(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test99_Truncate..." % self.__class__.__name__
d.put("abcde", "ABCDE");
num = d.truncate()
self.assert_(num >= 1, "truncate returned <= 0 on non-empty database")
num = d.truncate()
self.assertEqual(num, 0,
"truncate on empty DB returned nonzero (%r)" % (num,))
#----------------------------------------
def test07_verify(self):
# Verify bug solved in 4.7.3pre8
self.d.close()
d = db.DB(self.env)
d.verify(self.filename)
#----------------------------------------
#----------------------------------------------------------------------
class BasicBTreeTestCase(BasicTestCase):
dbtype = db.DB_BTREE
class BasicHashTestCase(BasicTestCase):
dbtype = db.DB_HASH
class BasicBTreeWithThreadFlagTestCase(BasicTestCase):
dbtype = db.DB_BTREE
dbopenflags = db.DB_THREAD
class BasicHashWithThreadFlagTestCase(BasicTestCase):
dbtype = db.DB_HASH
dbopenflags = db.DB_THREAD
class BasicWithEnvTestCase(BasicTestCase):
dbopenflags = db.DB_THREAD
useEnv = 1
envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
#----------------------------------------
def test08_EnvRemoveAndRename(self):
if not self.env:
return
if verbose:
print '\n', '-=' * 30
print "Running %s.test08_EnvRemoveAndRename..." % self.__class__.__name__
# can't rename or remove an open DB
self.d.close()
newname = self.filename + '.renamed'
self.env.dbrename(self.filename, None, newname)
self.env.dbremove(newname)
# dbremove and dbrename are in 4.1 and later
if db.version() < (4,1):
del test08_EnvRemoveAndRename
#----------------------------------------
class BasicBTreeWithEnvTestCase(BasicWithEnvTestCase):
dbtype = db.DB_BTREE
class BasicHashWithEnvTestCase(BasicWithEnvTestCase):
dbtype = db.DB_HASH
#----------------------------------------------------------------------
class BasicTransactionTestCase(BasicTestCase):
import sys
if sys.version_info[:3] < (2, 4, 0):
def assertTrue(self, expr, msg=None):
self.failUnless(expr,msg=msg)
dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT
useEnv = 1
envflags = (db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
db.DB_INIT_TXN)
envsetflags = db.DB_AUTO_COMMIT
def tearDown(self):
self.txn.commit()
BasicTestCase.tearDown(self)
def populateDB(self):
txn = self.env.txn_begin()
BasicTestCase.populateDB(self, _txn=txn)
self.txn = self.env.txn_begin()
def test06_Transactions(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test06_Transactions..." % self.__class__.__name__
self.assertEqual(d.get('new rec', txn=self.txn), None)
d.put('new rec', 'this is a new record', self.txn)
self.assertEqual(d.get('new rec', txn=self.txn),
'this is a new record')
self.txn.abort()
self.assertEqual(d.get('new rec'), None)
self.txn = self.env.txn_begin()
self.assertEqual(d.get('new rec', txn=self.txn), None)
d.put('new rec', 'this is a new record', self.txn)
self.assertEqual(d.get('new rec', txn=self.txn),
'this is a new record')
self.txn.commit()
self.assertEqual(d.get('new rec'), 'this is a new record')
self.txn = self.env.txn_begin()
c = d.cursor(self.txn)
rec = c.first()
count = 0
while rec is not None:
count = count + 1
if verbose and count % 100 == 0:
print rec
rec = c.next()
self.assertEqual(count, self._numKeys+1)
c.close() # Cursors *MUST* be closed before commit!
self.txn.commit()
# flush pending updates
try:
self.env.txn_checkpoint (0, 0, 0)
except db.DBIncompleteError:
pass
statDict = self.env.log_stat(0);
self.assert_(statDict.has_key('magic'))
self.assert_(statDict.has_key('version'))
self.assert_(statDict.has_key('cur_file'))
self.assert_(statDict.has_key('region_nowait'))
# must have at least one log file present:
logs = self.env.log_archive(db.DB_ARCH_ABS | db.DB_ARCH_LOG)
self.assertNotEqual(logs, None)
for log in logs:
if verbose:
print 'log file: ' + log
if db.version() >= (4,2):
logs = self.env.log_archive(db.DB_ARCH_REMOVE)
self.assertTrue(not logs)
self.txn = self.env.txn_begin()
#----------------------------------------
def test08_TxnTruncate(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test08_TxnTruncate..." % self.__class__.__name__
d.put("abcde", "ABCDE");
txn = self.env.txn_begin()
num = d.truncate(txn)
self.assert_(num >= 1, "truncate returned <= 0 on non-empty database")
num = d.truncate(txn)
self.assertEqual(num, 0,
"truncate on empty DB returned nonzero (%r)" % (num,))
txn.commit()
#----------------------------------------
def test09_TxnLateUse(self):
txn = self.env.txn_begin()
txn.abort()
try:
txn.abort()
except db.DBError, e:
pass
else:
raise RuntimeError, "DBTxn.abort() called after DB_TXN no longer valid w/o an exception"
txn = self.env.txn_begin()
txn.commit()
try:
txn.commit()
except db.DBError, e:
pass
else:
raise RuntimeError, "DBTxn.commit() called after DB_TXN no longer valid w/o an exception"
class BTreeTransactionTestCase(BasicTransactionTestCase):
dbtype = db.DB_BTREE
class HashTransactionTestCase(BasicTransactionTestCase):
dbtype = db.DB_HASH
#----------------------------------------------------------------------
class BTreeRecnoTestCase(BasicTestCase):
dbtype = db.DB_BTREE
dbsetflags = db.DB_RECNUM
def test08_RecnoInBTree(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test08_RecnoInBTree..." % self.__class__.__name__
rec = d.get(200)
self.assertEqual(type(rec), type(()))
self.assertEqual(len(rec), 2)
if verbose:
print "Record #200 is ", rec
c = d.cursor()
c.set('0200')
num = c.get_recno()
self.assertEqual(type(num), type(1))
if verbose:
print "recno of d['0200'] is ", num
rec = c.current()
self.assertEqual(c.set_recno(num), rec)
c.close()
class BTreeRecnoWithThreadFlagTestCase(BTreeRecnoTestCase):
dbopenflags = db.DB_THREAD
#----------------------------------------------------------------------
class BasicDUPTestCase(BasicTestCase):
dbsetflags = db.DB_DUP
def test09_DuplicateKeys(self):
d = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test09_DuplicateKeys..." % \
self.__class__.__name__
d.put("dup0", "before")
for x in "The quick brown fox jumped over the lazy dog.".split():
d.put("dup1", x)
d.put("dup2", "after")
data = d.get("dup1")
self.assertEqual(data, "The")
if verbose:
print data
c = d.cursor()
rec = c.set("dup1")
self.assertEqual(rec, ('dup1', 'The'))
next_reg = c.next()
self.assertEqual(next_reg, ('dup1', 'quick'))
rec = c.set("dup1")
count = c.count()
self.assertEqual(count, 9)
next_dup = c.next_dup()
self.assertEqual(next_dup, ('dup1', 'quick'))
rec = c.set('dup1')
while rec is not None:
if verbose:
print rec
rec = c.next_dup()
c.set('dup1')
rec = c.next_nodup()
self.assertNotEqual(rec[0], 'dup1')
if verbose:
print rec
c.close()
class BTreeDUPTestCase(BasicDUPTestCase):
dbtype = db.DB_BTREE
class HashDUPTestCase(BasicDUPTestCase):
dbtype = db.DB_HASH
class BTreeDUPWithThreadTestCase(BasicDUPTestCase):
dbtype = db.DB_BTREE
dbopenflags = db.DB_THREAD
class HashDUPWithThreadTestCase(BasicDUPTestCase):
dbtype = db.DB_HASH
dbopenflags = db.DB_THREAD
#----------------------------------------------------------------------
class BasicMultiDBTestCase(BasicTestCase):
dbname = 'first'
def otherType(self):
if self.dbtype == db.DB_BTREE:
return db.DB_HASH
else:
return db.DB_BTREE
def test10_MultiDB(self):
d1 = self.d
if verbose:
print '\n', '-=' * 30
print "Running %s.test10_MultiDB..." % self.__class__.__name__
d2 = db.DB(self.env)
d2.open(self.filename, "second", self.dbtype,
self.dbopenflags|db.DB_CREATE)
d3 = db.DB(self.env)
d3.open(self.filename, "third", self.otherType(),
self.dbopenflags|db.DB_CREATE)
for x in "The quick brown fox jumped over the lazy dog".split():
d2.put(x, self.makeData(x))
for x in string.letters:
d3.put(x, x*70)
d1.sync()
d2.sync()
d3.sync()
d1.close()
d2.close()
d3.close()
self.d = d1 = d2 = d3 = None
self.d = d1 = db.DB(self.env)
d1.open(self.filename, self.dbname, flags = self.dbopenflags)
d2 = db.DB(self.env)
d2.open(self.filename, "second", flags = self.dbopenflags)
d3 = db.DB(self.env)
d3.open(self.filename, "third", flags = self.dbopenflags)
c1 = d1.cursor()
c2 = d2.cursor()
c3 = d3.cursor()
count = 0
rec = c1.first()
while rec is not None:
count = count + 1
if verbose and (count % 50) == 0:
print rec
rec = c1.next()
self.assertEqual(count, self._numKeys)
count = 0
rec = c2.first()
while rec is not None:
count = count + 1
if verbose:
print rec
rec = c2.next()
self.assertEqual(count, 9)
count = 0
rec = c3.first()
while rec is not None:
count = count + 1
if verbose:
print rec
rec = c3.next()
self.assertEqual(count, len(string.letters))
c1.close()
c2.close()
c3.close()
d2.close()
d3.close()
# Strange things happen if you try to use Multiple DBs per file without a
# DBEnv with MPOOL and LOCKing...
class BTreeMultiDBTestCase(BasicMultiDBTestCase):
dbtype = db.DB_BTREE
dbopenflags = db.DB_THREAD
useEnv = 1
envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
class HashMultiDBTestCase(BasicMultiDBTestCase):
dbtype = db.DB_HASH
dbopenflags = db.DB_THREAD
useEnv = 1
envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
class PrivateObject(unittest.TestCase) :
import sys
if sys.version_info[:3] < (2, 4, 0):
def assertTrue(self, expr, msg=None):
self.failUnless(expr,msg=msg)
def tearDown(self) :
del self.obj
def test01_DefaultIsNone(self) :
self.assertEqual(self.obj.get_private(), None)
def test02_assignment(self) :
a = "example of private object"
self.obj.set_private(a)
b = self.obj.get_private()
self.assertTrue(a is b) # Object identity
def test03_leak_assignment(self) :
import sys
a = "example of private object"
refcount = sys.getrefcount(a)
self.obj.set_private(a)
self.assertEqual(refcount+1, sys.getrefcount(a))
self.obj.set_private(None)
self.assertEqual(refcount, sys.getrefcount(a))
def test04_leak_GC(self) :
import sys
a = "example of private object"
refcount = sys.getrefcount(a)
self.obj.set_private(a)
self.obj = None
self.assertEqual(refcount, sys.getrefcount(a))
class DBEnvPrivateObject(PrivateObject) :
def setUp(self) :
self.obj = db.DBEnv()
class DBPrivateObject(PrivateObject) :
def setUp(self) :
self.obj = db.DB()
class CrashAndBurn(unittest.TestCase) :
import sys
if sys.version_info[:3] < (2, 4, 0):
def assertTrue(self, expr, msg=None):
self.failUnless(expr,msg=msg)
#def test01_OpenCrash(self) :
# # See http://bugs.python.org/issue3307
# self.assertRaises(db.DBInvalidArgError, db.DB, None, 65535)
def test02_DBEnv_dealloc(self):
# http://bugs.python.org/issue3885
import gc
self.assertRaises(db.DBInvalidArgError, db.DBEnv, ~db.DB_RPCCLIENT)
gc.collect()
#----------------------------------------------------------------------
#----------------------------------------------------------------------
def test_suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(VersionTestCase))
suite.addTest(unittest.makeSuite(BasicBTreeTestCase))
suite.addTest(unittest.makeSuite(BasicHashTestCase))
suite.addTest(unittest.makeSuite(BasicBTreeWithThreadFlagTestCase))
suite.addTest(unittest.makeSuite(BasicHashWithThreadFlagTestCase))
suite.addTest(unittest.makeSuite(BasicBTreeWithEnvTestCase))
suite.addTest(unittest.makeSuite(BasicHashWithEnvTestCase))
suite.addTest(unittest.makeSuite(BTreeTransactionTestCase))
suite.addTest(unittest.makeSuite(HashTransactionTestCase))
suite.addTest(unittest.makeSuite(BTreeRecnoTestCase))
suite.addTest(unittest.makeSuite(BTreeRecnoWithThreadFlagTestCase))
suite.addTest(unittest.makeSuite(BTreeDUPTestCase))
suite.addTest(unittest.makeSuite(HashDUPTestCase))
suite.addTest(unittest.makeSuite(BTreeDUPWithThreadTestCase))
suite.addTest(unittest.makeSuite(HashDUPWithThreadTestCase))
suite.addTest(unittest.makeSuite(BTreeMultiDBTestCase))
suite.addTest(unittest.makeSuite(HashMultiDBTestCase))
suite.addTest(unittest.makeSuite(DBEnvPrivateObject))
suite.addTest(unittest.makeSuite(DBPrivateObject))
suite.addTest(unittest.makeSuite(CrashAndBurn))
return suite
if __name__ == '__main__':
unittest.main(defaultTest='test_suite')
| apache-2.0 |
snakeleon/YouCompleteMe-x86 | third_party/ycmd/cpp/ycm/tests/gmock/gtest/test/gtest_color_test.py | 3259 | 4911 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Verifies that Google Test correctly determines whether to use colors."""
__author__ = '[email protected] (Zhanyong Wan)'
import os
import gtest_test_utils
IS_WINDOWS = os.name == 'nt'
COLOR_ENV_VAR = 'GTEST_COLOR'
COLOR_FLAG = 'gtest_color'
COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_color_test_')
def SetEnvVar(env_var, value):
"""Sets the env variable to 'value'; unsets it when 'value' is None."""
if value is not None:
os.environ[env_var] = value
elif env_var in os.environ:
del os.environ[env_var]
def UsesColor(term, color_env_var, color_flag):
"""Runs gtest_color_test_ and returns its exit code."""
SetEnvVar('TERM', term)
SetEnvVar(COLOR_ENV_VAR, color_env_var)
if color_flag is None:
args = []
else:
args = ['--%s=%s' % (COLOR_FLAG, color_flag)]
p = gtest_test_utils.Subprocess([COMMAND] + args)
return not p.exited or p.exit_code
class GTestColorTest(gtest_test_utils.TestCase):
def testNoEnvVarNoFlag(self):
"""Tests the case when there's neither GTEST_COLOR nor --gtest_color."""
if not IS_WINDOWS:
self.assert_(not UsesColor('dumb', None, None))
self.assert_(not UsesColor('emacs', None, None))
self.assert_(not UsesColor('xterm-mono', None, None))
self.assert_(not UsesColor('unknown', None, None))
self.assert_(not UsesColor(None, None, None))
self.assert_(UsesColor('linux', None, None))
self.assert_(UsesColor('cygwin', None, None))
self.assert_(UsesColor('xterm', None, None))
self.assert_(UsesColor('xterm-color', None, None))
self.assert_(UsesColor('xterm-256color', None, None))
def testFlagOnly(self):
"""Tests the case when there's --gtest_color but not GTEST_COLOR."""
self.assert_(not UsesColor('dumb', None, 'no'))
self.assert_(not UsesColor('xterm-color', None, 'no'))
if not IS_WINDOWS:
self.assert_(not UsesColor('emacs', None, 'auto'))
self.assert_(UsesColor('xterm', None, 'auto'))
self.assert_(UsesColor('dumb', None, 'yes'))
self.assert_(UsesColor('xterm', None, 'yes'))
def testEnvVarOnly(self):
"""Tests the case when there's GTEST_COLOR but not --gtest_color."""
self.assert_(not UsesColor('dumb', 'no', None))
self.assert_(not UsesColor('xterm-color', 'no', None))
if not IS_WINDOWS:
self.assert_(not UsesColor('dumb', 'auto', None))
self.assert_(UsesColor('xterm-color', 'auto', None))
self.assert_(UsesColor('dumb', 'yes', None))
self.assert_(UsesColor('xterm-color', 'yes', None))
def testEnvVarAndFlag(self):
"""Tests the case when there are both GTEST_COLOR and --gtest_color."""
self.assert_(not UsesColor('xterm-color', 'no', 'no'))
self.assert_(UsesColor('dumb', 'no', 'yes'))
self.assert_(UsesColor('xterm-color', 'no', 'auto'))
def testAliasesOfYesAndNo(self):
"""Tests using aliases in specifying --gtest_color."""
self.assert_(UsesColor('dumb', None, 'true'))
self.assert_(UsesColor('dumb', None, 'YES'))
self.assert_(UsesColor('dumb', None, 'T'))
self.assert_(UsesColor('dumb', None, '1'))
self.assert_(not UsesColor('xterm', None, 'f'))
self.assert_(not UsesColor('xterm', None, 'false'))
self.assert_(not UsesColor('xterm', None, '0'))
self.assert_(not UsesColor('xterm', None, 'unknown'))
if __name__ == '__main__':
gtest_test_utils.Main()
| gpl-3.0 |
Gaojiquan/android_kernel_zte_digger | tools/perf/scripts/python/netdev-times.py | 11271 | 15048 | # Display the flow of packets and the time spent in each processing step.
# It helps us investigate networking and network device behavior.
#
# options
# tx: show only tx chart
# rx: show only rx chart
# dev=: show only thing related to specified device
# debug: work with debug mode. It shows buffer status.
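#
# Usage sketch added for illustration (hedged): assuming the usual perf-script
# workflow and the record/report wrappers that ship with the perf tree for
# this script, a typical session might look like
#
# perf script record netdev-times
# perf script report netdev-times # show both tx and rx charts
# perf script report netdev-times tx dev=eth0 # tx chart for eth0 only
#
# The option words above ("tx", "rx", "dev=...", "debug") are exactly the
# sys.argv tokens parsed by trace_begin() below.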
import os
import sys
sys.path.append(os.environ['PERF_EXEC_PATH'] + \
'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
from perf_trace_context import *
from Core import *
from Util import *
all_event_list = []; # all tracepoint events related to this script are collected here
irq_dic = {}; # key is cpu and value is a list which stacks irqs
# which raise NET_RX softirq
net_rx_dic = {}; # key is cpu and value include time of NET_RX softirq-entry
# and a list which stacks receive
receive_hunk_list = []; # a list which include a sequence of receive events
rx_skb_list = []; # received packet list for matching
# skb_copy_datagram_iovec
buffer_budget = 65536; # the budget of rx_skb_list, tx_queue_list and
# tx_xmit_list
of_count_rx_skb_list = 0; # overflow count
tx_queue_list = []; # list of packets which pass through dev_queue_xmit
of_count_tx_queue_list = 0; # overflow count
tx_xmit_list = []; # list of packets which pass through dev_hard_start_xmit
of_count_tx_xmit_list = 0; # overflow count
tx_free_list = []; # list of packets which is freed
# options
show_tx = 0;
show_rx = 0;
dev = 0; # store a name of device specified by option "dev="
debug = 0;
# indices of event_info tuple
EINFO_IDX_NAME= 0
EINFO_IDX_CONTEXT=1
EINFO_IDX_CPU= 2
EINFO_IDX_TIME= 3
EINFO_IDX_PID= 4
EINFO_IDX_COMM= 5
# Calculate a time interval(msec) from src(nsec) to dst(nsec)
def diff_msec(src, dst):
return (dst - src) / 1000000.0
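# Worked example (added note): with src = 1000000 nsec and dst = 3500000 nsec,
# diff_msec(src, dst) = (3500000 - 1000000) / 1000000.0 = 2.5 msec.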
# Display the processing timeline of a transmitted packet
def print_transmit(hunk):
if dev != 0 and hunk['dev'].find(dev) < 0:
return
print "%7s %5d %6d.%06dsec %12.3fmsec %12.3fmsec" % \
(hunk['dev'], hunk['len'],
nsecs_secs(hunk['queue_t']),
nsecs_nsecs(hunk['queue_t'])/1000,
diff_msec(hunk['queue_t'], hunk['xmit_t']),
diff_msec(hunk['xmit_t'], hunk['free_t']))
# Format for displaying rx packet processing
PF_IRQ_ENTRY= " irq_entry(+%.3fmsec irq=%d:%s)"
PF_SOFT_ENTRY=" softirq_entry(+%.3fmsec)"
PF_NAPI_POLL= " napi_poll_exit(+%.3fmsec %s)"
PF_JOINT= " |"
PF_WJOINT= " | |"
PF_NET_RECV= " |---netif_receive_skb(+%.3fmsec skb=%x len=%d)"
PF_NET_RX= " |---netif_rx(+%.3fmsec skb=%x)"
PF_CPY_DGRAM= " | skb_copy_datagram_iovec(+%.3fmsec %d:%s)"
PF_KFREE_SKB= " | kfree_skb(+%.3fmsec location=%x)"
PF_CONS_SKB= " | consume_skb(+%.3fmsec)"
# Display the processing of received packets and the interrupts associated with
# a NET_RX softirq
def print_receive(hunk):
show_hunk = 0
irq_list = hunk['irq_list']
cpu = irq_list[0]['cpu']
base_t = irq_list[0]['irq_ent_t']
# check if this hunk should be showed
if dev != 0:
for i in range(len(irq_list)):
if irq_list[i]['name'].find(dev) >= 0:
show_hunk = 1
break
else:
show_hunk = 1
if show_hunk == 0:
return
print "%d.%06dsec cpu=%d" % \
(nsecs_secs(base_t), nsecs_nsecs(base_t)/1000, cpu)
for i in range(len(irq_list)):
print PF_IRQ_ENTRY % \
(diff_msec(base_t, irq_list[i]['irq_ent_t']),
irq_list[i]['irq'], irq_list[i]['name'])
print PF_JOINT
irq_event_list = irq_list[i]['event_list']
for j in range(len(irq_event_list)):
irq_event = irq_event_list[j]
if irq_event['event'] == 'netif_rx':
print PF_NET_RX % \
(diff_msec(base_t, irq_event['time']),
irq_event['skbaddr'])
print PF_JOINT
print PF_SOFT_ENTRY % \
diff_msec(base_t, hunk['sirq_ent_t'])
print PF_JOINT
event_list = hunk['event_list']
for i in range(len(event_list)):
event = event_list[i]
if event['event_name'] == 'napi_poll':
print PF_NAPI_POLL % \
(diff_msec(base_t, event['event_t']), event['dev'])
if i == len(event_list) - 1:
print ""
else:
print PF_JOINT
else:
print PF_NET_RECV % \
(diff_msec(base_t, event['event_t']), event['skbaddr'],
event['len'])
if 'comm' in event.keys():
print PF_WJOINT
print PF_CPY_DGRAM % \
(diff_msec(base_t, event['comm_t']),
event['pid'], event['comm'])
elif 'handle' in event.keys():
print PF_WJOINT
if event['handle'] == "kfree_skb":
print PF_KFREE_SKB % \
(diff_msec(base_t,
event['comm_t']),
event['location'])
elif event['handle'] == "consume_skb":
print PF_CONS_SKB % \
diff_msec(base_t,
event['comm_t'])
print PF_JOINT
def trace_begin():
global show_tx
global show_rx
global dev
global debug
for i in range(len(sys.argv)):
if i == 0:
continue
arg = sys.argv[i]
if arg == 'tx':
show_tx = 1
elif arg =='rx':
show_rx = 1
elif arg.find('dev=',0, 4) >= 0:
dev = arg[4:]
elif arg == 'debug':
debug = 1
if show_tx == 0 and show_rx == 0:
show_tx = 1
show_rx = 1
def trace_end():
# order all events in time
all_event_list.sort(lambda a,b :cmp(a[EINFO_IDX_TIME],
b[EINFO_IDX_TIME]))
# process all events
for i in range(len(all_event_list)):
event_info = all_event_list[i]
name = event_info[EINFO_IDX_NAME]
if name == 'irq__softirq_exit':
handle_irq_softirq_exit(event_info)
elif name == 'irq__softirq_entry':
handle_irq_softirq_entry(event_info)
elif name == 'irq__softirq_raise':
handle_irq_softirq_raise(event_info)
elif name == 'irq__irq_handler_entry':
handle_irq_handler_entry(event_info)
elif name == 'irq__irq_handler_exit':
handle_irq_handler_exit(event_info)
elif name == 'napi__napi_poll':
handle_napi_poll(event_info)
elif name == 'net__netif_receive_skb':
handle_netif_receive_skb(event_info)
elif name == 'net__netif_rx':
handle_netif_rx(event_info)
elif name == 'skb__skb_copy_datagram_iovec':
handle_skb_copy_datagram_iovec(event_info)
elif name == 'net__net_dev_queue':
handle_net_dev_queue(event_info)
elif name == 'net__net_dev_xmit':
handle_net_dev_xmit(event_info)
elif name == 'skb__kfree_skb':
handle_kfree_skb(event_info)
elif name == 'skb__consume_skb':
handle_consume_skb(event_info)
# display receive hunks
if show_rx:
for i in range(len(receive_hunk_list)):
print_receive(receive_hunk_list[i])
# display transmit hunks
if show_tx:
print " dev len Qdisc " \
" netdevice free"
for i in range(len(tx_free_list)):
print_transmit(tx_free_list[i])
if debug:
print "debug buffer status"
print "----------------------------"
print "xmit Qdisc:remain:%d overflow:%d" % \
(len(tx_queue_list), of_count_tx_queue_list)
print "xmit netdevice:remain:%d overflow:%d" % \
(len(tx_xmit_list), of_count_tx_xmit_list)
print "receive:remain:%d overflow:%d" % \
(len(rx_skb_list), of_count_rx_skb_list)
# called from perf when it finds a corresponding event
def irq__softirq_entry(name, context, cpu, sec, nsec, pid, comm, vec):
if symbol_str("irq__softirq_entry", "vec", vec) != "NET_RX":
return
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, vec)
all_event_list.append(event_info)
def irq__softirq_exit(name, context, cpu, sec, nsec, pid, comm, vec):
if symbol_str("irq__softirq_entry", "vec", vec) != "NET_RX":
return
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, vec)
all_event_list.append(event_info)
def irq__softirq_raise(name, context, cpu, sec, nsec, pid, comm, vec):
if symbol_str("irq__softirq_entry", "vec", vec) != "NET_RX":
return
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, vec)
all_event_list.append(event_info)
def irq__irq_handler_entry(name, context, cpu, sec, nsec, pid, comm,
irq, irq_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
irq, irq_name)
all_event_list.append(event_info)
def irq__irq_handler_exit(name, context, cpu, sec, nsec, pid, comm, irq, ret):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, irq, ret)
all_event_list.append(event_info)
def napi__napi_poll(name, context, cpu, sec, nsec, pid, comm, napi, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
napi, dev_name)
all_event_list.append(event_info)
def net__netif_receive_skb(name, context, cpu, sec, nsec, pid, comm, skbaddr,
skblen, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, dev_name)
all_event_list.append(event_info)
def net__netif_rx(name, context, cpu, sec, nsec, pid, comm, skbaddr,
skblen, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, dev_name)
all_event_list.append(event_info)
def net__net_dev_queue(name, context, cpu, sec, nsec, pid, comm,
skbaddr, skblen, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, dev_name)
all_event_list.append(event_info)
def net__net_dev_xmit(name, context, cpu, sec, nsec, pid, comm,
skbaddr, skblen, rc, dev_name):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen, rc ,dev_name)
all_event_list.append(event_info)
def skb__kfree_skb(name, context, cpu, sec, nsec, pid, comm,
skbaddr, protocol, location):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, protocol, location)
all_event_list.append(event_info)
def skb__consume_skb(name, context, cpu, sec, nsec, pid, comm, skbaddr):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr)
all_event_list.append(event_info)
def skb__skb_copy_datagram_iovec(name, context, cpu, sec, nsec, pid, comm,
skbaddr, skblen):
event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
skbaddr, skblen)
all_event_list.append(event_info)
def handle_irq_handler_entry(event_info):
(name, context, cpu, time, pid, comm, irq, irq_name) = event_info
if cpu not in irq_dic.keys():
irq_dic[cpu] = []
irq_record = {'irq':irq, 'name':irq_name, 'cpu':cpu, 'irq_ent_t':time}
irq_dic[cpu].append(irq_record)
def handle_irq_handler_exit(event_info):
(name, context, cpu, time, pid, comm, irq, ret) = event_info
if cpu not in irq_dic.keys():
return
irq_record = irq_dic[cpu].pop()
if irq != irq_record['irq']:
return
irq_record.update({'irq_ext_t':time})
# if an irq doesn't include NET_RX softirq, drop.
if 'event_list' in irq_record.keys():
irq_dic[cpu].append(irq_record)
def handle_irq_softirq_raise(event_info):
(name, context, cpu, time, pid, comm, vec) = event_info
if cpu not in irq_dic.keys() \
or len(irq_dic[cpu]) == 0:
return
irq_record = irq_dic[cpu].pop()
if 'event_list' in irq_record.keys():
irq_event_list = irq_record['event_list']
else:
irq_event_list = []
irq_event_list.append({'time':time, 'event':'sirq_raise'})
irq_record.update({'event_list':irq_event_list})
irq_dic[cpu].append(irq_record)
def handle_irq_softirq_entry(event_info):
(name, context, cpu, time, pid, comm, vec) = event_info
net_rx_dic[cpu] = {'sirq_ent_t':time, 'event_list':[]}
def handle_irq_softirq_exit(event_info):
(name, context, cpu, time, pid, comm, vec) = event_info
irq_list = []
event_list = 0
if cpu in irq_dic.keys():
irq_list = irq_dic[cpu]
del irq_dic[cpu]
if cpu in net_rx_dic.keys():
sirq_ent_t = net_rx_dic[cpu]['sirq_ent_t']
event_list = net_rx_dic[cpu]['event_list']
del net_rx_dic[cpu]
if irq_list == [] or event_list == 0:
return
rec_data = {'sirq_ent_t':sirq_ent_t, 'sirq_ext_t':time,
'irq_list':irq_list, 'event_list':event_list}
# merge information related to a NET_RX softirq
receive_hunk_list.append(rec_data)
def handle_napi_poll(event_info):
(name, context, cpu, time, pid, comm, napi, dev_name) = event_info
if cpu in net_rx_dic.keys():
event_list = net_rx_dic[cpu]['event_list']
rec_data = {'event_name':'napi_poll',
'dev':dev_name, 'event_t':time}
event_list.append(rec_data)
def handle_netif_rx(event_info):
(name, context, cpu, time, pid, comm,
skbaddr, skblen, dev_name) = event_info
if cpu not in irq_dic.keys() \
or len(irq_dic[cpu]) == 0:
return
irq_record = irq_dic[cpu].pop()
if 'event_list' in irq_record.keys():
irq_event_list = irq_record['event_list']
else:
irq_event_list = []
irq_event_list.append({'time':time, 'event':'netif_rx',
'skbaddr':skbaddr, 'skblen':skblen, 'dev_name':dev_name})
irq_record.update({'event_list':irq_event_list})
irq_dic[cpu].append(irq_record)
def handle_netif_receive_skb(event_info):
global of_count_rx_skb_list
(name, context, cpu, time, pid, comm,
skbaddr, skblen, dev_name) = event_info
if cpu in net_rx_dic.keys():
rec_data = {'event_name':'netif_receive_skb',
'event_t':time, 'skbaddr':skbaddr, 'len':skblen}
event_list = net_rx_dic[cpu]['event_list']
event_list.append(rec_data)
rx_skb_list.insert(0, rec_data)
if len(rx_skb_list) > buffer_budget:
rx_skb_list.pop()
of_count_rx_skb_list += 1
def handle_net_dev_queue(event_info):
global of_count_tx_queue_list
(name, context, cpu, time, pid, comm,
skbaddr, skblen, dev_name) = event_info
skb = {'dev':dev_name, 'skbaddr':skbaddr, 'len':skblen, 'queue_t':time}
tx_queue_list.insert(0, skb)
if len(tx_queue_list) > buffer_budget:
tx_queue_list.pop()
of_count_tx_queue_list += 1
def handle_net_dev_xmit(event_info):
global of_count_tx_xmit_list
(name, context, cpu, time, pid, comm,
skbaddr, skblen, rc, dev_name) = event_info
if rc == 0: # NETDEV_TX_OK
for i in range(len(tx_queue_list)):
skb = tx_queue_list[i]
if skb['skbaddr'] == skbaddr:
skb['xmit_t'] = time
tx_xmit_list.insert(0, skb)
del tx_queue_list[i]
if len(tx_xmit_list) > buffer_budget:
tx_xmit_list.pop()
of_count_tx_xmit_list += 1
return
def handle_kfree_skb(event_info):
(name, context, cpu, time, pid, comm,
skbaddr, protocol, location) = event_info
for i in range(len(tx_queue_list)):
skb = tx_queue_list[i]
if skb['skbaddr'] == skbaddr:
del tx_queue_list[i]
return
for i in range(len(tx_xmit_list)):
skb = tx_xmit_list[i]
if skb['skbaddr'] == skbaddr:
skb['free_t'] = time
tx_free_list.append(skb)
del tx_xmit_list[i]
return
for i in range(len(rx_skb_list)):
rec_data = rx_skb_list[i]
if rec_data['skbaddr'] == skbaddr:
rec_data.update({'handle':"kfree_skb",
'comm':comm, 'pid':pid, 'comm_t':time})
del rx_skb_list[i]
return
def handle_consume_skb(event_info):
(name, context, cpu, time, pid, comm, skbaddr) = event_info
for i in range(len(tx_xmit_list)):
skb = tx_xmit_list[i]
if skb['skbaddr'] == skbaddr:
skb['free_t'] = time
tx_free_list.append(skb)
del tx_xmit_list[i]
return
def handle_skb_copy_datagram_iovec(event_info):
(name, context, cpu, time, pid, comm, skbaddr, skblen) = event_info
for i in range(len(rx_skb_list)):
rec_data = rx_skb_list[i]
if skbaddr == rec_data['skbaddr']:
rec_data.update({'handle':"skb_copy_datagram_iovec",
'comm':comm, 'pid':pid, 'comm_t':time})
del rx_skb_list[i]
return
| gpl-2.0 |
underlost/GamerNews | gamernews/apps/threadedcomments/management/commands/migrate_threaded_comments.py | 5 | 3452 | from django.core.management.base import NoArgsCommand
from django.contrib.sites.models import Site
from django.db import transaction, connection
from django.conf import settings
from threadedcomments.models import ThreadedComment
USER_SQL = """
SELECT
content_type_id,
object_id,
parent_id,
user_id,
date_submitted,
date_modified,
date_approved,
comment,
markup,
is_public,
is_approved,
ip_address
FROM threadedcomments_threadedcomment
"""
FREE_SQL = """
SELECT
content_type_id,
object_id,
parent_id,
name,
website,
email,
date_submitted,
date_modified,
date_approved,
comment,
markup,
is_public,
is_approved,
ip_address
FROM threadedcomments_freethreadedcomment
"""
PATH_SEPARATOR = getattr(settings, 'COMMENT_PATH_SEPARATOR', '/')
PATH_DIGITS = getattr(settings, 'COMMENT_PATH_DIGITS', 10)
class Command(NoArgsCommand):
help = "Migrates django-threadedcomments <= 0.5 to the new model structure"
def handle(self, *args, **options):
transaction.commit_unless_managed()
transaction.enter_transaction_management()
transaction.managed(True)
site = Site.objects.all()[0]
cursor = connection.cursor()
cursor.execute(FREE_SQL)
for row in cursor:
(content_type_id, object_id, parent_id, name, website, email,
date_submitted, date_modified, date_approved, comment, markup,
is_public, is_approved, ip_address) = row
tc = ThreadedComment(
content_type_id=content_type_id,
object_pk=object_id,
user_name=name,
user_email=email,
user_url=website,
comment=comment,
submit_date=date_submitted,
ip_address=ip_address,
is_public=is_public,
is_removed=not is_approved,
parent_id=parent_id,
site=site,
)
tc.save(skip_tree_path=True)
cursor = connection.cursor()
cursor.execute(USER_SQL)
for row in cursor:
(content_type_id, object_id, parent_id, user_id, date_submitted,
date_modified, date_approved, comment, markup, is_public,
is_approved, ip_address) = row
tc = ThreadedComment(
content_type_id=content_type_id,
object_pk=object_id,
user_id=user_id,
comment=comment,
submit_date=date_submitted,
ip_address=ip_address,
is_public=is_public,
is_removed=not is_approved,
parent_id=parent_id,
site=site,
)
tc.save(skip_tree_path=True)
for comment in ThreadedComment.objects.all():
path = [str(comment.id).zfill(PATH_DIGITS)]
current = comment
while current.parent:
current = current.parent
path.append(str(current.id).zfill(PATH_DIGITS))
comment.tree_path = PATH_SEPARATOR.join(reversed(path))
comment.save(skip_tree_path=True)
if comment.parent:
ThreadedComment.objects.filter(pk=comment.parent.pk).update(
last_child=comment)
transaction.commit()
transaction.leave_transaction_management()
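# Added editorial note (not part of the original migration command): with the
# defaults PATH_DIGITS = 10 and PATH_SEPARATOR = '/', a reply with id 42 whose
# parent comment has id 7 ends up with tree_path '0000000007/0000000042' --
# each ancestor id is zero-padded via str(id).zfill(PATH_DIGITS) and the
# segments are joined root-first, so ordering comments by tree_path yields a
# depth-first threaded ordering.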
| mit |
cloudera/hue | desktop/core/ext-py/Paste-2.0.1/paste/config.py | 78 | 4312 | # (c) 2006 Ian Bicking, Philip Jenvey and contributors
# Written for Paste (http://pythonpaste.org)
# Licensed under the MIT license: http://www.opensource.org/licenses/mit-license.php
"""Paste Configuration Middleware and Objects"""
from paste.registry import RegistryManager, StackedObjectProxy
__all__ = ['DispatchingConfig', 'CONFIG', 'ConfigMiddleware']
class DispatchingConfig(StackedObjectProxy):
"""
This is a configuration object that can be used globally,
imported, have references held onto. The configuration may differ
by thread (or may not).
Specific configurations are registered (and deregistered) either
for the process or for threads.
"""
# @@: What should happen when someone tries to add this
# configuration to itself? Probably the conf should become
# resolved, and get rid of this delegation wrapper
def __init__(self, name='DispatchingConfig'):
super(DispatchingConfig, self).__init__(name=name)
self.__dict__['_process_configs'] = []
def push_thread_config(self, conf):
"""
Make ``conf`` the active configuration for this thread.
Thread-local configuration always overrides process-wide
configuration.
This should be used like::
conf = make_conf()
dispatching_config.push_thread_config(conf)
try:
... do stuff ...
finally:
dispatching_config.pop_thread_config(conf)
"""
self._push_object(conf)
def pop_thread_config(self, conf=None):
"""
Remove a thread-local configuration. If ``conf`` is given,
it is checked against the popped configuration and an error
is emitted if they don't match.
"""
self._pop_object(conf)
def push_process_config(self, conf):
"""
Like push_thread_config, but applies the configuration to
the entire process.
"""
self._process_configs.append(conf)
def pop_process_config(self, conf=None):
self._pop_from(self._process_configs, conf)
def _pop_from(self, lst, conf):
popped = lst.pop()
if conf is not None and popped is not conf:
raise AssertionError(
"The config popped (%s) is not the same as the config "
"expected (%s)"
% (popped, conf))
def _current_obj(self):
try:
return super(DispatchingConfig, self)._current_obj()
except TypeError:
if self._process_configs:
return self._process_configs[-1]
raise AttributeError(
"No configuration has been registered for this process "
"or thread")
current = current_conf = _current_obj
CONFIG = DispatchingConfig()
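# Hedged usage sketch (added for illustration; `use_the_config` is a
# hypothetical caller, everything else is defined in this module):
#
#     conf = {'app_name': 'example'}
#     CONFIG.push_process_config(conf)
#     try:
#         use_the_config(CONFIG)   # CONFIG resolves to `conf` until popped
#     finally:
#         CONFIG.pop_process_config(conf)
#
# A configuration pushed with push_thread_config() overrides the process-wide
# one, but only for the thread that pushed it.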
no_config = object()
class ConfigMiddleware(RegistryManager):
"""
A WSGI middleware that adds a ``paste.config`` key (by default)
to the request environment, as well as registering the
configuration temporarily (for the length of the request) with
``paste.config.CONFIG`` (or any other ``DispatchingConfig``
object).
"""
def __init__(self, application, config, dispatching_config=CONFIG,
environ_key='paste.config'):
"""
This delegates all requests to `application`, adding a *copy*
of the configuration `config`.
"""
def register_config(environ, start_response):
popped_config = environ.get(environ_key, no_config)
current_config = environ[environ_key] = config.copy()
environ['paste.registry'].register(dispatching_config,
current_config)
try:
app_iter = application(environ, start_response)
finally:
if popped_config is no_config:
environ.pop(environ_key, None)
else:
environ[environ_key] = popped_config
return app_iter
super(self.__class__, self).__init__(register_config)
def make_config_filter(app, global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
return ConfigMiddleware(app, conf)
make_config_middleware = ConfigMiddleware.__doc__
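# Hedged wiring sketch (added; `wsgi_app` stands in for any WSGI callable):
#
#     app = ConfigMiddleware(wsgi_app, {'debug': False})
#
# Every request handled by `app` then sees its own *copy* of the dict under
# environ['paste.config'], and the same copy is registered with CONFIG for
# the duration of the request, as the class docstring above describes.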
| apache-2.0 |
cedriclaunay/gaffer | apps/license/license-1.py | 7 | 3146 | ##########################################################################
#
# Copyright (c) 2011-2012, John Haddon. All rights reserved.
# Copyright (c) 2011, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import sys
import os
import IECore
import Gaffer
class license( Gaffer.Application ) :
def __init__( self ) :
Gaffer.Application.__init__( self )
self.parameters().addParameter(
IECore.BoolParameter(
name = "withDependencies",
description = "Display the copyright and licensing information for the dependencies.",
defaultValue = True
)
)
def _run( self, args ) :
sys.stderr.write( Gaffer.About.name() + " " + Gaffer.About.versionString() + "\n" )
sys.stderr.write( Gaffer.About.copyright() + "\n" )
sys.stderr.write( Gaffer.About.url() + "\n" )
if args["withDependencies"].value :
sys.stderr.write( "\n" + Gaffer.About.dependenciesPreamble() + "\n" )
for d in Gaffer.About.dependencies() :
sys.stderr.write( "\n" + d["name"] + "\n" )
sys.stderr.write( "-" * len( d["name"] ) + "\n\n" )
if "credit" in d :
sys.stderr.write( d["credit"] + "\n" )
if "url" in d :
sys.stderr.write( "Project URL : " + d["url"] + "\n" )
if "license" in d :
sys.stderr.write( "License : %s\n" % os.path.expandvars( d["license"] ) )
if "source" in d :
sys.stderr.write( "Source : %s\n" % os.path.expandvars( d["source"] ) )
return 0
IECore.registerRunTimeTyped( license )
| bsd-3-clause |
ReganBell/QReview | networkx/utils/tests/test_heaps.py | 64 | 3979 | from nose.tools import *
import networkx as nx
from networkx.utils import *
class X(object):
def __eq__(self, other):
raise self is other
def __ne__(self, other):
raise self is not other
def __lt__(self, other):
raise TypeError('cannot compare')
def __le__(self, other):
raise TypeError('cannot compare')
def __ge__(self, other):
raise TypeError('cannot compare')
def __gt__(self, other):
raise TypeError('cannot compare')
def __hash__(self):
return hash(id(self))
x = X()
data = [# min should not invent an element.
('min', nx.NetworkXError),
# Popping an empty heap should fail.
('pop', nx.NetworkXError),
# Getting nonexisting elements should return None.
('get', 0, None),
('get', x, None),
('get', None, None),
# Inserting a new key should succeed.
('insert', x, 1, True),
('get', x, 1),
('min', (x, 1)),
# min should not pop the top element.
('min', (x, 1)),
# Inserting a new key of different type should succeed.
('insert', 1, -2.0, True),
# int and float values should interop.
('min', (1, -2.0)),
# pop removes minimum-valued element.
('insert', 3, -10 ** 100, True),
('insert', 4, 5, True),
('pop', (3, -10 ** 100)),
('pop', (1, -2.0)),
# Decrease-insert should succeed.
('insert', 4, -50, True),
('insert', 4, -60, False, True),
# Decrease-insert should not create duplicate keys.
('pop', (4, -60)),
('pop', (x, 1)),
# Popping all elements should empty the heap.
('min', nx.NetworkXError),
('pop', nx.NetworkXError),
# Non-value-changing insert should fail.
('insert', x, 0, True),
('insert', x, 0, False, False),
('min', (x, 0)),
('insert', x, 0, True, False),
('min', (x, 0)),
# Failed insert should not create duplicate keys.
('pop', (x, 0)),
('pop', nx.NetworkXError),
# Increase-insert should succeed when allowed.
('insert', None, 0, True),
('insert', 2, -1, True),
('min', (2, -1)),
('insert', 2, 1, True, False),
('min', (None, 0)),
# Increase-insert should fail when disallowed.
('insert', None, 2, False, False),
('min', (None, 0)),
# Failed increase-insert should not create duplicate keys.
('pop', (None, 0)),
('pop', (2, 1)),
('min', nx.NetworkXError),
('pop', nx.NetworkXError)]
def _test_heap_class(cls, *args, **kwargs):
heap = cls(*args, **kwargs)
# Basic behavioral test
for op in data:
if op[-1] is not nx.NetworkXError:
assert_equal(op[-1], getattr(heap, op[0])(*op[1:-1]))
else:
assert_raises(op[-1], getattr(heap, op[0]), *op[1:-1])
# Coverage test.
for i in range(99, -1, -1):
assert_true(heap.insert(i, i))
for i in range(50):
assert_equal(heap.pop(), (i, i))
for i in range(100):
assert_equal(heap.insert(i, i), i < 50)
for i in range(100):
assert_false(heap.insert(i, i + 1))
for i in range(50):
assert_equal(heap.pop(), (i, i))
for i in range(100):
assert_equal(heap.insert(i, i + 1), i < 50)
for i in range(49):
assert_equal(heap.pop(), (i, i + 1))
assert_equal(sorted([heap.pop(), heap.pop()]), [(49, 50), (50, 50)])
for i in range(51, 100):
assert_false(heap.insert(i, i + 1, True))
for i in range(51, 70):
assert_equal(heap.pop(), (i, i + 1))
for i in range(100):
assert_true(heap.insert(i, i))
for i in range(100):
assert_equal(heap.pop(), (i, i))
assert_raises(nx.NetworkXError, heap.pop)
def test_PairingHeap():
_test_heap_class(PairingHeap)
def test_BinaryHeap():
_test_heap_class(BinaryHeap)
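# Hedged illustration added by the editor: a minimal usage sketch of
# BinaryHeap (imported above via networkx.utils). The expected results mirror
# what the data table at the top of this file asserts: insert() reports True
# for a new key or a successful decrease, min() peeks without removing, and
# increases are rejected unless explicitly allowed.
def _binary_heap_usage_sketch():
    heap = BinaryHeap()
    assert heap.insert('a', 5)            # new key
    assert heap.insert('b', 1)            # new key
    assert heap.min() == ('b', 1)         # peek only
    assert heap.pop() == ('b', 1)
    assert not heap.insert('a', 9)        # increase disallowed by default
    assert heap.get('a') == 5             # value unchanged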
| bsd-3-clause |
ogajduse/spacewalk | backend/server/action/kickstart_guest.py | 10 | 4459 | #
# Copyright (c) 2008--2016 Red Hat, Inc.
#
# This software is licensed to you under the GNU General Public License,
# version 2 (GPLv2). There is NO WARRANTY for this software, express or
# implied, including the implied warranties of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. You should have received a copy of GPLv2
# along with this software; if not, see
# http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt.
#
# Red Hat trademarks are not licensed under GPLv2. No permission is
# granted to use or replicate Red Hat trademarks that are incorporated
# in this software or its documentation.
#
import sys
from spacewalk.common.usix import raise_with_tb
from spacewalk.common.rhnLog import log_debug
from spacewalk.server import rhnSQL
from spacewalk.server.rhnLib import InvalidAction, ShadowAction
from spacewalk.server.action.utils import SubscribedChannel, \
ChannelPackage, \
PackageInstallScheduler, \
NoActionInfo, \
PackageNotFound
from spacewalk.server.rhnChannel import subscribe_to_tools_channel
__rhnexport__ = ['initiate', 'schedule_virt_guest_pkg_install', 'add_tools_channel']
_query_initiate_guest = rhnSQL.Statement("""
select ksd.label as profile_name, akg.kickstart_host, kvt.label as virt_type,
akg.mem_kb, akg.vcpus, akg.disk_path, akg.virt_bridge, akg.cobbler_system_name,
akg.disk_gb, akg.append_string,
akg.guest_name, akg.ks_session_id from rhnActionKickstartGuest akg,
rhnKSData ksd, rhnKickstartSession ksess,
rhnKickstartDefaults ksdef, rhnKickstartVirtualizationType kvt
where akg.action_id = :action_id
and ksess.kickstart_id = ksd.id
and ksess.id = akg.ks_session_id
and ksdef.kickstart_id = ksd.id
and ksdef.virtualization_type = kvt.id
""")
def schedule_virt_guest_pkg_install(server_id, action_id, dry_run=0):
"""
ShadowAction that schedules a package installation action for the
rhn-virtualization-guest package.
"""
log_debug(3)
virt_host_package_name = "rhn-virtualization-guest"
tools_channel = SubscribedChannel(server_id, "rhn-tools")
found_tools_channel = tools_channel.is_subscribed_to_channel()
if not found_tools_channel:
raise InvalidAction("System not subscribed to the RHN Tools channel.")
rhn_v12n_package = ChannelPackage(server_id, virt_host_package_name)
if not rhn_v12n_package.exists():
raise InvalidAction("Could not find the rhn-virtualization-guest package.")
try:
install_scheduler = PackageInstallScheduler(server_id, action_id, rhn_v12n_package)
if (not dry_run):
install_scheduler.schedule_package_install()
else:
log_debug(4, "dry run requested")
except NoActionInfo:
nai = sys.exc_info()[1]
raise_with_tb(InvalidAction(str(nai)), sys.exc_info()[2])
except PackageNotFound:
pnf = sys.exc_info()[1]
raise_with_tb(InvalidAction(str(pnf)), sys.exc_info()[2])
except Exception:
e = sys.exc_info()[1]
raise_with_tb(InvalidAction(str(e)), sys.exc_info()[2])
log_debug(3, "Completed scheduling install of rhn-virtualization-guest!")
raise ShadowAction("Scheduled installation of RHN Virtualization Guest packages.")
def initiate(server_id, action_id, dry_run=0):
log_debug(3)
h = rhnSQL.prepare(_query_initiate_guest)
h.execute(action_id=action_id)
row = h.fetchone_dict()
if not row:
raise InvalidAction("Kickstart action without an associated kickstart")
kickstart_host = row['kickstart_host']
virt_type = row['virt_type']
name = row['guest_name']
boot_image = "spacewalk-koan"
append_string = row['append_string']
vcpus = row['vcpus']
disk_gb = row['disk_gb']
mem_kb = row['mem_kb']
ks_session_id = row['ks_session_id']
virt_bridge = row['virt_bridge']
disk_path = row['disk_path']
cobbler_system_name = row['cobbler_system_name']
if not boot_image:
raise InvalidAction("Boot image missing")
return (kickstart_host, cobbler_system_name, virt_type, ks_session_id, name,
mem_kb, vcpus, disk_gb, virt_bridge, disk_path, append_string)
def add_tools_channel(server_id, action_id, dry_run=0):
log_debug(3)
if (not dry_run):
subscribe_to_tools_channel(server_id)
else:
log_debug(4, "dry run requested")
raise ShadowAction("Subscribed guest to tools channel.")
| gpl-2.0 |